# Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems

**Authors:** Zheng-Fan Liu; Chen-Xiao Cai; Wen-Yong Duan
**Journal:** Mathematical Problems in Engineering (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101473

---

## Abstract

This paper is concerned with the problems of exponential stability and H∞ model reduction for a class of switched discrete-time systems with time-varying state delay, in which some subsystems may be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs; moreover, the resulting error system is guaranteed to be exponentially stable with a prescribed H∞ error performance. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of the obtained results.

---

## Body

## 1. Introduction

Switched systems are a special class of hybrid control systems comprising a collection of subsystems, described by differential or difference equations, together with a switching law that specifies the switching rule among the subsystems. Owing to their theoretical interest as well as their practical applications, the analysis and synthesis of switched systems have recently received considerable attention [1–4]. Furthermore, the time-delay phenomenon is frequently encountered in a variety of industrial and engineering systems [5–7], for instance, chemical processes, long-distance transmission lines, and communication networks. Moreover, time-delay is a predominant source of poor performance and instability.
In the last two decades, there has been increasing interest in the stability analysis of such systems; see, for example, [8, 9] and the references therein. For switched delay systems, owing to the interplay between switching and time-delays, the behavior is usually much more complicated than that of switched systems or delay systems alone [10, 11]. The average dwell time (ADT) technique [12] and the multiple Lyapunov function approach [13] are two powerful and effective tools for studying the stability of switched systems under controlled switching. By applying the ADT scheme, the disturbance attenuation properties of time-controlled switched systems were investigated in [14]. In [15], the exponential stability and L2-gain of switched delay systems were studied using the ADT approach. Furthermore, based on the ADT method, [16–18] considered the stability of switched systems in which stable and unstable subsystems coexist. Using the ADT scheme, a switching design for exponential stability was proposed in [19] for a class of switched discrete-time systems with constant time-delay. By combining the multiple Lyapunov function approach with the ADT technique, [20] studied state feedback stabilization of a class of discrete-time switched singular systems with time-varying state delay under asynchronous switching; however, many free-weighting matrices were introduced, which made the stability result complicated. In [11], stabilization and robust H∞ control via ADT-based switching were considered for discrete switched systems with time-delay. However, the procedures given in [11] apply neither to asynchronous switching nor to switched delay systems in which stable and unstable subsystems coexist. This motivates the present study. On another research front, it is well known that mathematical modeling of physical systems often results in complex high-order models.
High order, however, causes great difficulties in the analysis and synthesis of such systems. In practical applications it is therefore desirable to replace a high-order model with a reduced-order one, lowering the computational burden of a given criterion without incurring much loss of performance or information. The purpose of model reduction is to obtain a lower-order system that approximates a high-order system according to a certain criterion. Recently, much attention has been focused on the model reduction problem [21–25], and many important results have been reported involving various efficient approaches, such as the balanced truncation method [24], the optimal Hankel norm reduction method [25], the cone complementarity linearization method [26], and the sequential linear programming matrix method [27]. In terms of LMIs with inverse constraints or other nonconvex conditions, model reduction in the discrete-time context has been investigated in [28, 29]; however, the resulting numerical problems are difficult to solve. In [30], existence conditions for H∞ model reduction of discrete-time uncertain switched systems were derived in terms of strict LMIs using the switched Lyapunov function method, but time delays were not taken into account. In [31], a novel idea of approximating the original time-delay system by a reduced time-delay model was proposed recently; unstable subsystems, however, were not considered. Motivated by the preceding discussion, the main contributions of this paper are highlighted as follows. The problem of exponential stability and H∞ model reduction for a class of switched linear discrete-time systems with time-varying delay is investigated. To lessen the computational complexity and to reduce conservatism, a new discrete LKF is constructed and the delay interval is divided into two unequal subintervals by the delay decomposition method.
The switching law is designed via the ADT scheme, so that the overall switched system can remain stable even if one or more subsystems are unstable. For high-order systems, sufficient conditions for the existence of the desired reduced-order model are derived in terms of strict LMIs, which can be easily solved with the MATLAB LMI Control Toolbox. Finally, numerical examples are given to show the effectiveness of the proposed methods.

The remainder of this paper is structured as follows. In Section 2, the problem formulation and some preliminaries are introduced. In Section 3, the main results on the exponential stability of switched discrete-time systems with time-varying delay are presented. In Section 4, the main results on H∞ model reduction for high-order systems are presented. Numerical examples are given in Section 5. The last section concludes the work.

Notations. We use standard notation throughout the paper. λmin(M) (λmax(M)) stands for the minimal (maximal) eigenvalue of M. MT is the transpose of the matrix M. The relation M>N (M<N) means that the matrix M-N is positive (negative) definite. ∥x∥ denotes the Euclidean norm of the vector x∈Rn. Rn represents the n-dimensional real Euclidean space. Rn×m is the set of all real n×m matrices. diag{⋯} stands for a block-diagonal matrix. In symmetric block matrices or long matrix expressions, an asterisk “*” represents a term induced by symmetry. I denotes the identity matrix.

## 2. Problem Description and Preliminaries

Consider a class of switched linear discrete-time systems with time-varying state delay of the form (1) x(k+1)=Aix(k)+Adix(k-d(k))+Biu(k), y(k)=Cix(k)+Cdix(k-d(k))+Diu(k), x(θ)=ϕ(θ), θ=-h,-h+1,…,0, where x(k)∈Rn denotes the system state, y(k)∈Rm is the measured output, and u(k)∈Rp is the disturbance input, which belongs to l2[0,∞). ϕ(θ)∈Rn is a vector-valued initial function.
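Before turning to the formal definitions, the dynamics of a system of this form are easy to exercise numerically. The following minimal sketch simulates a scalar instance with two subsystems, a periodic switching rule, and a time-varying delay; all numerical values (the mode coefficients, the switching period, the input) are illustrative choices, not taken from the paper.

```python
import math

# Scalar (n = 1) instance of x(k+1) = A_i x(k) + A_di x(k-d(k)) + B_i u(k).
# Illustrative data: two stable modes, delay bound h = 2, switching every 5 steps.
A  = {1: 0.5,  2: -0.4}    # A_i
Ad = {1: 0.1,  2: 0.2}     # A_di
B  = {1: 0.05, 2: 0.05}    # B_i

h = 2
x = [1.0] * (h + 1)        # phi(theta) = 1 for theta = -h, ..., 0

for k in range(30):
    i = 1 if (k // 5) % 2 == 0 else 2    # periodic switching signal sigma(k)
    d = k % (h + 1)                      # time-varying delay d(k) in [0, h]
    u = math.exp(-0.3 * k)               # an l2 disturbance input
    x.append(A[i] * x[-1] + Ad[i] * x[-1 - d] + B[i] * u)

print(abs(x[-1]) < 0.01)   # with both modes stable, the state decays
```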
The switching signal σ (written as σ(k) for simplicity): [0,∞)→N-={1,2,…,T} is a piecewise constant function of time; σ=i means that the ith subsystem is activated, and T is the number of subsystems. The system matrices Ai, Adi, Bi, Ci, Cdi, and Di are known real matrices with appropriate dimensions. For a given finite positive integer h>0, d(k) is the time-varying delay and satisfies (2) 0≤d(k)≤h, ∀k∈N+. To facilitate the theoretical development, we introduce the following definitions and lemmas.

Definition 1 (see [19]). The system (1) with disturbance input u(k)=0 is said to be exponentially stable if there exist a switching function σ(·) and a positive number c such that every solution x(k,ϕ) of the system satisfies (3) ∥x(k)∥≤cλ^{k-k0}∥ϕ∥s, ∀k≥k0, for any initial condition (k0,ϕ)∈R+×Cn, where c>0 is the decay coefficient, 0<λ≤1 is the decay rate, and ∥ϕ∥s=sup{∥ϕ(l)∥, l=k0-h,k0-h+1,…,k0}.

Definition 2 (see [11]). Consider the system (1) with the following conditions. (1) With u(k)=0, the system (1) is exponentially stable with convergence rate λ>0. (2) The H∞ performance ∥y(k)∥2<γ∥u(k)∥2 is guaranteed for all nonzero u(k)∈l2[0,∞) and a prescribed γ>0 under the zero initial condition. Under these conditions, the system (1) is said to be exponentially stabilizable with H∞ performance γ and convergence rate λ. Here γ characterizes the disturbance attenuation performance: the smaller γ is, the better the performance.

Definition 3 (see [12]). For a switching signal σ(k) and any T2>T1≥0, let Nσ(T1,T2) denote the number of switchings of σ(k) over (T1,T2). If for given N0≥1 and Ta>0 we have Nσ(T1,T2)≤N0+(T2-T1)/Ta, then Ta and N0 are called the ADT and the chatter bound, respectively.

Lemma 4 (see [9]). For any matrix R=RT>0, integers a≤b, and vector function ξ(k):{-b,-b+1,…,-a}→Rn, (4) (a-b)∑_{s=k-b}^{k-a-1} zT(s)Rz(s) ≤ ξT(k)[-R R; * -R]ξ(k), where (5) z(k)=x(k+1)-x(k), ξT(k)=[xT(k-a) xT(k-b)].

Lemma 5 (Schur complement [32]). Let M, P, Q be given matrices such that Q>0.
Then (6) [P M; * -Q] < 0 ⟺ P + MQ^{-1}MT < 0.

The aim of this paper is to find a class of time-based switching signals for the discrete-time switched time-delay system (1), whose subsystems are not necessarily stable, that guarantee exponential stability of the system. For a high-order system, we are interested in constructing a reduced-order switched system to approximate it.

## 3. Stability Analysis

With the preliminaries given in the previous section, we are ready to study the exponential stability and H∞ performance of the switched system (1). To obtain the exponential stability of the switched system (1), we construct the following discrete LKF: (7) Vi(k)=Vi1(k)+Vi2(k)+Vi3(k), ∀i∈N-. Here (8)

Vi1(k) = xT(k)Pix(k),
Vi2(k) = ∑_{s=k-ϑ}^{k-1} (1+αi)^{k-1-s} xT(s)Qi1x(s) + ∑_{s=k-h}^{k-ϑ-1} (1+αi)^{k-1-s} xT(s)Qi2x(s) + ∑_{s=k-d(k)}^{k-1} (1+αi)^{k-1-s} xT(s)Qi3x(s),
Vi3(k) = ∑_{θ=-ϑ}^{-1} ∑_{s=k+θ}^{k-1} (1+αi)^{k-1-s} zT(s)Ri1z(s) + ∑_{θ=-h}^{-ϑ-1} ∑_{s=k+θ}^{k-1} (1+αi)^{k-1-s} zT(s)Ri2z(s) + ∑_{θ=-d(k)}^{-1} ∑_{s=k+θ}^{k-1} (1+αi)^{k-1-s} zT(s)Ri3z(s),

where Pi, Qim, Rim (i∈N-, m=1,2,3) are symmetric positive definite matrices with appropriate dimensions, z(k)=x(k+1)-x(k), and the integer ϑ∈(0,h) and the scalars αi are given constants.

Remark 6. In order to derive less conservative criteria than the existing ones, the delay interval [0,h] is divided into two unequal subintervals [0,ϑ] and [ϑ,h], where ϑ∈(0,h) is a tuning parameter. The information about x(k-ϑ) can then be taken into account, which plays a vital role in deriving less conservative results. Thus, for any k∈Z+, we have d(k)∈[0,ϑ] or d(k)∈[ϑ,h]. Firstly, we provide a decay estimate of the LKF (7) along the state trajectory of the switched system (1) without disturbance input (i.e., u(k)=0).

Lemma 7.
Given constants-1<αi≤0, h>0 and ϑ∈(0,h), if there exist some symmetric positive definite matrices Pi,Qim,Rim(i∈N-, m=1,2,3) such that the following LMIs hold: (9)[Ψi11Ψi12Ψi22]<0,(10)[Φi11Φi12Φi22]<0, where (11)Ψi11=[ψ11iψ12i00ψ22iψ23i0*ψ33iψ34i**ψ44i],Φi11=[ψ11i0ϕ13i0ϕ22iϕ23iϕ24i*ϕ33i0**ψ44i],Ψi12=[AiTPi(Ai-I)TW1iTAdiTPiAdiTW1iT0000],Φi12=[AiTPi(Ai-I)TW2iTAdiTPiAdiTW2iT0000],Ψi22=diag{-Pi-W1i},Φi22=diag{-Pi-W2i},ψ11i=-(1+αi)Pi-(1+αi)ϑϑ(Ri1+Ri3)+Qi1+Qi3,ψ12i=(1+αi)ϑϑ(Ri1+Ri3),ψ22i=-(1+αi)ϑQi3-(1+αi)ϑϑ(2Ri1+Ri3),ψ23i=(1+αi)ϑϑRi1,ψ33i=(1+αi)ϑ(Qi2-Qi1)-(1+αi)hh-ϑRi2-(1+αi)ϑϑRi1,ψ34i=(1+αi)hh-ϑRi2,ψ44i=-(1+αi)hQi2-(1+αi)hh-ϑRi2,ϕ13i=(1+αi)ϑϑ(Ri1+Ri3),ϕ22i=-(1+αi)hQi3-(1+αi)hh-ϑ(2Ri2+Ri3),ϕ23i=(1+αi)hh-ϑ(Ri2+Ri3),ϕ24i=(1+αi)hh-ϑRi2,ϕ33=-(1+αi)hh-ϑ(Ri2+Ri3)-(1+αi)ϑϑ(Ri1+Ri3)-(1+αi)ϑ(Qi1-Qi2),W1i=(h-ϑ)Ri2+ϑRi1+ϑRi3,W2i=(h-ϑ)Ri2+ϑRi1+hRi3. Then, by means of LKF (7), along the trajectory of the systems (1) without disturbance input, one has (12)ΔVi(k)=Vi(k+1)-Vi(k)≤αiVi(k).Proof. Let us choose the system LKF (7). Define (13)Vi(k+1)-(1+αi)Vi(k)=∑m=13‍Δ~Vim(k), where (14)Δ~Vim(k)=Vim(k+1)-(1+αi)Vim(k). Therefore, the following equality holds along the solution of (1): (15)Δ~Vi1(k)=xT(k+1)Pix(k+1)-(1+αi)xT(k)Pix(k),(16)Δ~Vi2(k)=xT(k)(Qi1+Qi3)x(k)-(1+αi)hxT(k-h)Qi2x(k-h)-(1+αi)d(k)xT(k-d(k))Qi3x(k-d(k))-(1+αi)ϑxT(k-ϑ)(Qi1-Qi2)x(k-ϑ),(17)Δ~Vi3(k)=zT(k)((h-ϑ)Ri2+ϑRi1+d(k)Ri3)z(k)-∑s=k-ϑk-1‍(1+αi)k-szT(s)Ri1z(s)-∑s=k-hk-ϑ-1‍(1+αi)k-szT(s)Ri2z(s)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)Ri3z(s). For any k∈Z+, we have d(k)∈[0,ϑ] or d(k)∈[ϑ,h].(1) Ifd(k)∈[0,ϑ], it gets (18)-∑s=k-ϑk-1‍(1+αi)k-szT(s)Ri1z(s)=-∑s=k-ϑk-1-d(k)‍(1+αi)k-szT(s)Ri1z(s)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)Ri1z(s).So (17) could be (19)Δ~Vi3(k)≤zT(k)((h-ϑ)Ri2+ϑRi1+ϑRi3)z(k)-∑s=k-hk-ϑ-1‍(1+αi)k-szT(s)Ri2z(s)-∑s=k-τ(k)k-1‍(1+αi)k-szT(s)(Ri1+Ri3)z(s)-∑s=k-ϑk-1-τ(k)‍(1+αi)k-szT(s)Ri1z(s). 
From Lemma 4, we have (20)-∑s=k-hk-ϑ-1‍(1+αi)k-szT(s)Ri2z(s)≤(1+αi)hh-ϑξ1T(t)[-Ri2Ri2-Ri2]ξ1(t),(21)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)(Ri1+Ri3)z(s)≤(1+αi)τ(k)τ(k)ξ2T(k)[-Ri1-Ri3Ri1+Ri3-Ri1-Ri3]ξ2(k)≤(1+αi)ϑϑξ2T(k)[-Ri1-Ri3Ri1+Ri3-Ri1-Ri3]ξ2(k),(22)-∑s=k-ϑk-1-d(k)‍(1+αi)k-szT(s)Ri1z(s)≤(1+αi)ϑϑ-τ(k)ξ3T(k)[-Ri1Ri1-Ri1]ξ3(k)≤(1+αi)ϑϑξ3T(k)[-Ri1Ri1-Ri1]ξ3(k), where (23)ξ1T(k)=[xT(k-ϑ)xT(k-h)],ξ2T(k)=[xT(k)xT(k-d(k))],ξ3T(k)=[xT(k-d(k))xT(k-ϑ)]. Combining (13)–(22), it yields (24)Vi(k+1)-(1+αi)Vi(k)≤ξT(k)Ψi11ξ(k)+xT(k+1)Pix(k+1)+zT(k)W1iz(k), where (25)ξT(k)=[xT(k)xT(k-d(k))xT(k-ϑ)xT(k-h)]. Multiplying (9) both from left and right by diag{0000Pi-1Wi-T}, by Schur Complement, further, considering (24), one can infer that (12) holds.(2) Ifd(k)∈[ϑ,h], it gets (26)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)Ri3z(s)=-∑s=k-ϑk-1‍(1+αi)k-szT(s)Ri3z(s)-∑s=k-d(k)k-ϑ-1‍(1+αi)k-szT(s)Ri3z(s). One obtains (27)Δ~Vi3(k)≤zT(k)((h-ϑ)Ri2+ϑRi1+hRi3)z(k)-∑s=k-ϑk-1‍(1+αi)k-szT(s)(Ri1+Ri3)z(s)-∑s=k-d(k)k-ϑ-1‍(1+αi)k-szT(s)(Ri2+Ri3)z(s)-∑s=k-hk-d(k)-1‍(1+αi)k-szT(s)Ri2z(s). Similarly, it is easy to get that (28)Vi(k+1)-(1+αi)Vi(k)≤ξT(k)Φi11ξ(k)+xT(k+1)Pix(k+1)+zT(k)W2iz(k). If (10) holds, by Schur Complement, then we have (12). This completes the proof.Remark 8. Our LKF does not include free-weighing matrices as in previous investigations, and this may lead to reduce the computational complexity and get less conservation results.Remark 9. In order to get less conservative results, the delay interval[0,h] can be divided into much more subintervals. However, when the number of dipartite numbers increases, the matrix formulation becomes more complex and the time-consuming grows bigger.Now we have the following theorem.Theorem 10. If there exist some constants-1<αi<0 and positive definite symmetric matrices Pi,Qim,Rim(i∈N-, m=1,2,3) and μ≥1 such that (9), (10), and the following inequalities hold: (29)Pi≤μPj,Qim≤μQjm,Rim≤μRjm,∀i,j∈N-. 
Then, the switched system (1) with u(k)=0 and ADT satisfies τa>-lnμ/lnα which is exponentially stable.Proof. By Lemma7, we have (30)ΔVi(k)=Vi(k+1)-Vi(k)≤αiVi(k),∀i∈N-. Therefore, (31)Vi(k0+n)≤(αi+1)nVi(k0). There exists μi≥1(i∈N-), such that (32)Vi(k)≤μiVj(k),∀i,j∈N-. We let τ1,…,τNσ(k0,k0+k) denote the switching times of σ in (k0,k0+k), and let Nσ(k0,k+k0) be the switching number of σ in (k0,k+k0), by (31) and (32), one obtains (33)Vσ(k+k0)(k+k0)≤μσ(τ1)⋯μσ(τNσ(k0,k0+k))(ασ(k+k0)+1)m1⋯(ασ(k0)+1)mNσ(k0,k+k0)Vσ(k0)(k0), where m1+⋯+mNσ(k0,k+k0)=k. By-1<αi<0, for all i∈N-, we know that there exists α≜maxi∈N-{αi+1}∈(0,1). Let μ=maxi∈N-{μi}; from (33), one obtains (34)Vi(k+k0)≤αkμNσVj(k0)=αk+Nσ(lnμ/lnα)Vj(k0). By Definition 2, for any k0<k, it follows that (35)Vi(k)≤αk+Nσ(lnμ/lnα)Vj(k0)≤αk(1+(lnμ/Talnα))Vj(k0). By the system LKF (7), there always exist two positive constants c1,c2 such that (36)c1∥x(k)∥2≤Vi(k),Vi(k0)≤c2∥x(k0)∥s2, where (37)c1=mini∈N-{λmin(Pi)},c2=maxi∈N-{λmax(Pi)+∑m=13‍(λmax(Qim)+λmax(Rim))}. Therefore, (38)∥x(k)∥2≤c2c1αk(1+(lnμ/Talnα))∥x(k0)∥s2. If the average dwell time τa satisfies τa>-lnμ/lnα for μ≥1, then the switched system (1) with u(k)=0 is exponentially stable with λ=α1/2=maxi∈N-{(αi+1)1/2}∈(0,1) stability degree.Remark 11. The caseα=0 implies the asymptotic stability.The following theorem provides exponential stability analysis withH∞ performance of the system (1).Theorem 12. For given constantsγ>0, λ>0 and -1<αi<0, if there exist positive definite symmetric matrices Pi,Qim,Rim(i∈N-,m=1,2,3) and μ≥1 such that (29) and the following LMIs hold: (39)[Ψi110Ψi13-γ2IΨi23*Ψi33]<0,(40)[Φi110Φi13-γ2IΦi23*Φi33]<0, where (41)Ψi13=[AiTPi(Ai-I)TW1iTCiTAdiTPiAdiTW1iTCdiT000000],Φi13=[AiTPi(Ai-I)TW2iTCiTAdiTPiAdiTW2iTCdiT000000],Ψi33=diag{-Pi-W1i-I},Φi33=diag{-Pi-W2i-I},Ψi23=[BiTPiBiTW1iTDiT],Φi23=[BiTPiBiTW2iTDiT]. 
Then, the system (1) with average dwell time satisfies τa>-lnμ/lnα  which is globally exponentially stable with convergence rate λ and H∞ performance γ.Proof. Choose the LKF (7); the result is carried out by using the techniques employed for proving Lemma 7 and Theorem 10. If d(k)∈[0,ϑ], by (24), we have (42)Vi(k+1)-αVi(k)+yT(k)y(k)-γ2uT(k)u(k)≤xT(k+1)Pix(k+1)+zT(k)Wiz(k)+yT(k)y(k)+ζ1T(k)[Ψi1100-γ2]ζ1(k), where (43)ζ1T(k)=[ξT(k)uT(k)]. If d(k)∈[ϑ,h], by (28), we have (44)Vi(k+1)-αVi(k)+yT(k)y(k)-γ2uT(k)u(k)≤xT(k+1)Pix(k+1)+zT(k)Wiz(k)+yT(k)y(k)+ζ1T(k)[Φi1100-γ2]ζ1(k). Combining (39) and (40), by Schur Complement, one can obtain (45)Vi(k+1)-αVi(k)+yT(k)y(k)-γ2uT(k)u(k)≤0. Let (46)J(k)=yT(k)y(k)-γ2uT(k)u(k); we have (47)Vi(k+1)≤αVi(k)-J(k). By Definition 2 and Theorem 10, it is sufficient to show that ∑k=0∞J(k)<0 for any nonzero u(k). Combining (35) and (47), it can be shown that (48)Vσ(k)≤αkμNσ(0,k)Vσ(0)-∑s=0k-1‍αk-s-1μNσ(s,k)J(s). Under the zero initial condition, we have (49)V(0)=0,V(∞)≥0. Combining (48), we have (50)∑s=0k-1‍αk-s-1μNσ(s,k)J(s)=∑s=0k-1‍α-1elnα+lnμ/τaJ(s)≤0. Now, we consider (51)∑k=1∞‍∑s=0k-1‍α-1elnα+lnμ/τaJ(s). Exchanging the double-sum region, by τa>-lnμ/lnα and α∈(0,1), one can easily get (52)∑k=1∞‍∑s=0k-1‍α-1elnα+lnμ/τaJ(s)=∑s=0∞‍J(s)∑k=s+1∞‍α-1elnα+lnμ/τa=elnα+lnμ/τaα-11-elnα+lnμ/τa∑s=1∞‍J(s)≤0, which means that ∑s=1∞J(s)≤0. Then, by Definition 2, the system (1) with average dwell time satisfies τa>-lnμ/lnα  which is globally exponentially stable with convergence rate λ and H∞ performance γ. This completes the proof.If there exist some unstable subsystems in the switched system (1) with u(k)=0, in this case, we need to estimate the growth rate of the system LKF in (7) along the state trajectory of switched system (1). And the corresponding αj>0(j∈N-). By using the techniques employed for proving Lemma 7, one can easily obtain the following Lemma.Lemma 13. 
Given constantsαj>0, h>0 and ϑ∈(0,h), if there exist some symmetric positive definite matrices Pj,Qjm,Rjm(j∈N-, m=1,2,3) such that the following LMIs hold: (53)[Ψ-j11Ψj12Ψj22]<0,[Φ-j11Φj12Φj22]<0, where (54)Ψ-j11=[ψ-11jψ-12j00ψ-22jψ-23j0*ψ-33jψ-34j**ψ-44j],Φ-j11=[ψ-11j0ϕ-13j0ϕ-22jϕ-23jϕ-24j*ϕ-33j0**ψ-44j],ψ-11j=-(1+αj)Pj+Qj1+Qj3-1ϑ(Rj1+Rj3),ψ-12j=1ϑ(Rj1+Rj3),ψ-22j=-Qj3-1ϑ(2Rj1+Rj3),ψ-23j=1ϑRj1,ψ-34j=(1+αj)ϑh-ϑRj2,ψ-33j=(1+αj)ϑ(Qj2-Qj1)-(1+αj)ϑh-ϑRj2-1ϑRj1,ψ-44j=-(1+αj)hQj2-(1+αj)ϑh-ϑRi2,ϕ-11j=ψ-11j,ϕ-13j=(1+αj)ϑ(Rj1+Rj3),ϕ-22j=-(1+αj)ϑQj3-(1+αj)ϑh-ϑ(2Rj2+Rj3),ϕ-23j=(1+αj)ϑh-ϑ(Rj2+Rj3),ϕ-24j=(1+αj)ϑh-ϑRj2,ϕ-33j=-(1+αj)ϑh-ϑ(Rj2+Rj3)-(1+αj)ϑ(Rj1+Rj3)-(1+αj)ϑ(Qj1-Qj2). Then, by means of LKF (7), along the trajectory of the systems (1) without disturbance input, one has (55)ΔVj(k)=Vj(k+1)-Vj(k)≤αjVj(k).Remark 14. The proof of Lemma13 is similar to that of Lemma 7 and is thus omitted here. Based on Lemmas 7 and 13, one can easily design the stabilizing switching law to guarantee the system (1) with u(k)=0 to be exponentially stable, although some subsystems are unstable. Without loss of generality, we can assume thatN-u={j1,j2,…,js} is the set of all unstable subsystems and N-s={is+1,is+2,…,ip} is the set of all stable subsystems. For simplicity, the LKF (7) is defined as Vi(αi,k)≜Vi(k). Choose the LKF Vi(αi,k)  (-1<αi<0, i∈N-s) for the stable subsystem and choose the LKF Vj(αj,k)  (αj>0, j∈N-u) for the unstable subsystem. Then, we have the following conclusion.Theorem 15. If there exist some constants-1<αi<0, αj>0  (j≠i, i∈N-s, j∈N-u) and positive definite symmetric matrices Pi,Qim,Rim,Pj,Qjm,Rjm  (m=1,2,3) and μ≥1 such that Lemmas 7 and 13 and the following LMIs hold: (56)Pl≤μPs,Qlm≤μQsm,Rlm≤μRsm,∀l,s∈N-. Then, the switched system (1) with u(k)=0 and the average dwell time satisfies τa>lnμ/-κ, Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0)  which is exponentially stable.Proof. 
Consider the following LKF candidate:(57)Vσ(k)(k)={Vi(αi,k),σ(k)=i∈N-s,Vj(αj,k),σ(k)=j∈N-u. By Lemmas 7 and 13, we have (58)Vσ(k+1)(k+1)≤(ασ(k+1)+1)Vσ(k+1)(k). Let Tk0,n+k0α be the total activity time in which all subsystems satisfied 0>αi>-1 on the interval (k0,n+k0) and Tk0,n+k0β≜n-Tk0,n+k0α the total activity time in which all subsystems satisfied αj>0 on the interval (k0,n+k0). By using the techniques employed for proving Theorem 10, combining (56) and (58), we derive that (59)Vσ(n+k0)(n+k0)≤μNσ(n+k0)αTk0,n+k0αβTk0,n+k0βVσ(k0)(k0)=eTk0,n+k0αlnα+Tk0,n+k0βlnβ+Nσ(k0,n+k0)lnμVσ(k0)(k0), where (60)α≜maxi∈N-s{αi+1}∈(0,1),β≜maxj∈N-u{αj+1}>1. By Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0), one obtains (61)Tk0,n+k0αlnα+Tk0,n+k0βlnβ≤κn. So we have (62)Vσ(n+k0)(n+k0)≤eκn+Nσ(k0,n+k0)lnμVσ(k0)(k0). By Definition 2, for any n+k0>k0, it follows that (63)Vσ(n+k0)(n+k0)≤eκn+Nσ(k0,n+k0)lnμVσ(k0)(k0)≤en(κ+(lnμ/τa))Vσ(k0)(k0). By τa>lnμ/-κ, we have limk→∞Vσ(k)=0. Moreover, the overall system is exponentially stable. This completes the proof.Remark 16. From the proof of Theorem15, one can see that the obtained exponential stability for the switched system (1) with u(k)=0 is exponential stable with e-1/2 stability degree. In order to get a free decay rate, we can replace the condition τa>lnμ/-κ, Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ),κ∈(lnα,0) by τa>logϵμ/-κ, Tk0,n+k0α/Tk0,n+k0β≥(logϵβ-κ)/(-logϵα+κ), κ∈(logϵα,0), ϵ>1; then the switched system (1) with u(k)=0 is exponentially stable with ϵ-1/2 stability degree.Theorem 17. For given constantsγ>0, -1<αi<0, αj>0  (j≠i, i∈N-s, j∈N-u), if there exist positive definite symmetric matrices Pi,Qim,Rim,Pj,Qjm,Rjm  (m=1,2,3) and μ≥1 such that (56), (39), (40), and the following LMIs hold: (64)[Ψ-j110Ψj13-γ2IΨj23*Ψj33]0,[Φ-j110Φj13-γ2IΦj23*Φj33]<0, and Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0), and the average dwell time satisfies τa>lnμ/-κ; then the switched system (1) is exponentially stable and with H∞ performance γ.Remark 18. 
The proof of Theorem 17 is similar to those of Theorems 12 and 15 and is thus omitted here.

## 4. H∞ Model Reduction

In this section, we approximate system (1) by a reduced-order switched system described by (65) x^(k+1)=Arix^(k)+Ardix^(k-d(k))+Briu(k), y^(k)=Crix^(k)+Crdix^(k-d(k))+Driu(k), where x^(k)∈Rq is the state vector of the reduced-order system with q<n and y^(k)∈Rm is the output of the reduced-order system. Ari, Ardi, Cri, Crdi, Bri, and Dri are matrices with compatible dimensions to be determined. The system (65) is assumed to be switched synchronously by the switching signal σ(k) of system (1). Augmenting the model of system (1) to include the states of (65), we obtain the error system (66) x~(k+1)=A~ix~(k)+A~dix~(k-d(k))+B~iu(k), e~(k)=C~ix~(k)+C~dix~(k-d(k))+D~iu(k). Here (67) A~i=[Ai 0; 0 Ari], A~di=[Adi 0; 0 Ardi], B~i=[Bi; Bri], x~(k)=[x(k); x^(k)], C~i=[Ci -Cri], C~di=[Cdi -Crdi], D~i=Di-Dri, e~(k)=y(k)-y^(k).

The following theorem gives a sufficient condition for the existence of an admissible H∞ reduced-order model (65) for system (1).

Theorem 19. Given constants 0<α<1, γ>0, μ≥1, h>0, and ϑ (0<ϑ<h), if there exist some symmetric positive definite matrices P~i, Q~im, R~im (m=1,2,3) and matrices Xi, Yi, Li, Hi, Fi (i∈N-) such that the following LMIs hold: (68) [Πi1 Πi2; * Πi3]<0, (69) [Π-i1 Πi2; * Π-i3]<0, (70) P~i≤μP~j, Q~im≤μQ~jm, R~im≤μR~jm, ∀i,j∈N-, then system (66) under average dwell time satisfying τa>-lnμ/lnα is exponentially stable with an H∞ norm bound γ.
Here(71)Πi1=[φ11iφ12i000φ22iφ23i00*φ33iφ34i0**φ44i0***φ55i],Π-i1=[φ11i0φ-13i00φ-22iφ-23iφ-24i0*φ-33i00**φ44i0***φ55i],Πi2=[φi16Tφi17Tφi18Tφi26Tφi27Tφi28T000000φi56Tφi57Tφi58T],Πi3=diag{P~i-2U~iW~i-2U~i-I},Π-i3=diag{P~i-2U~iW^-2U~i-I},φ11i=Q~i1+Q~i3-αP~-αϑϑ(R~i1+R~i3),φ12i=αϑϑ(R~i1+R~i3),φ22i=-αϑQ~i3-αϑϑ(2R~i1+R~i3),φ23i=αϑϑR~i1,φ34i=αhh-ϑR~i2,φ33i=αϑ(Q~i2-Q~i1)-αhh-ϑR~i2-αϑϑR~i1,φ44i=-αhQ~i2-αhh-ϑR~i2,φ55i=-γ2I,φ-13i=φ12i,φ-22i=-αhQ~i3-αhh-ϑ(2R~i2+R~i3),φ-23i=αhh-ϑ(R~i2+R~i3),φ-24i=φ34i,φ-33=αϑ(Q~i2-Q~i1)-αϑϑ(R~i1+R~i3)-αhh-ϑ(R~i2+R~i3),W~i=(h-ϑ)R~i2+ϑR~i1+ϑR~i3,W^i=(h-ϑ)R~i2+ϑR~i1+hR~i3,φi16T=[AiTXiTAiTETYi0LiT],φi17T=[AiTXiT-XiTAiTETYi-ETY0LiT-YiT],φi18T=[CiT-CriT],φi26T=φi27T=[AidTXiTAidTETYi0HiT],φi28T=[CdiT-CrdiT],φi56=φi57=[XiBiFi+YiTEBi],φi58=Di-Drdi. Furthermore, if a feasible solution to the above LMIs (68), (69), and (70) exists, then the system matrices of an admissible H∞ reduced-order model in the form of (65) are given by (72)Ari=Yi-1Li,Ardi=Yi-1Hi,Bri=Yi-1Fi.Proof. Consider the following LKF for the switched system (66): (73)Vi(k)=Vi1(k)+Vi2(k)+Vi3(k). Here (74)Vi1(k)=x~T(k)P~ix~(k),Vi2(k)∑s=k-ϑk-1‍αk-1-sx~T(s)Q~i1x~(s)+∑s=k-hk-ϑ-1‍αk-1-sx~T(s)Q~i2x~(s)+∑s=k-d(k)k-1‍αk-1-sx~T(s)Q~i3x~(s),Vi3(k)∑θ=-ϑ-1‍∑s=k+θk-1‍αk-1-sz~T(s)R~i1z~(s)+∑θ=-h-ϑ-1‍∑s=k+θk-1‍αk-1-sz~T(s)R~i2z~(s)+∑θ=-d(k)-1‍∑s=k+θk-1‍αk-1-sz~T(s)R~i3z~(s), where z~(k)=x~(k+1)-x~(k) and P~i,Q~im,R~im(i∈N-, m=1,2,3) are symmetric positive definite matrices with appropriate dimensions; integer ϑ and α are given constants. By using the techniques employed for proving Lemma7, one can easily obtain the result. Calculate the difference of Vi(k) in (73) along the state trajectory of system (66).(1) Ifd(k)∈[0,ϑ], it gets (75)Vi(k+1)-αVi(k)+e~T(k)e~(k)-γ2uT(k)u(k)≤ξ~T(k)Πi1ξ~(k)+x~T(k+1)P~ix~(k+1)+zT(k)W~iz(k)+e~T(k)e~(k),where(76)ξ~T(k)=[x~T(k)x~T(k-d(k))x~T(k-ϑ)x~T(k-h)uT(k)].For any appropriately dimensioned matrices P~i>0 and nonsingular matrices U~i, we have (77)(P~i-U~i)TP~i-1(P~i-U~i)≥0. 
Thus (78)-U~iTP~i-1U~i≤P~i-2U~i. If (68) holds, we have (79)[Πi1Πi2Θi3]<0, where (80)Θi3=diag{-U~iTP~i-1U~iU~iTW~i-1U~i-I}. Let (81)U~i=[Xi0YiTEYi],E=[I0],YiAri=Li,YiArdi=Hi,YiBri=Fi. Multiplying (79) both from left and right by diag{00000U~i-TU~i-T-I}, by Schur Complement, further, considering (75), one can infer (82)Vi(k+1)-αVi(k)+e~T(k)e~(k)-γ2uT(k)u(k)≤0. Similarly, for the case of d(k)∈[ϑ,h], the fact that (69) holds means that (82) is true. Set (83)Γ(k)=e~T(k)e~(k)-γ2uT(k)u(k), we have (84)Vi(k+1)≤αVi(k)-Γ(k). Let Nσ(k0,k) be the number of switching times in (k0,k). From (84) and (70), we can obtain (85)Vi(k+k0)≤αkμNσ(k0,k)Vi(k0)-∑s=k0k-1‍αk-s-1μNσ(s,k)Γ(s)≤αk+Nσ(k0,k)(lnμ/lnα)Vj(k0)-∑s=k0k-1‍αk-s-1+Nσ(s,k)(lnμ/lnα)μNσ(s,k)Γ(s).Assume the zero disturbances input u(k)=0 to the state equation of system (66). By Definition 2, for any k0<k, it follows that (86)Vi(k)≤αk+Nσ(lnμ/lnα)Vj(k0)≤αk(1+(lnμ/τalnα))Vj(k0). From τa>-lnμ/lnα, one obtains limk→∞Vi(k)=0. There exist cn>0,  n=1,2, such that (87)c1∥x~(k)∥2≤Vi(k),Vi(k0)≤c2∥x~(k0)∥s2. Here (88)∥x~(k)∥s=maxθ=-h,…,0∥x~(k+θ)∥,c1=λmin(Pi),c2=λmax(Pi)+∑k=13‍(λmax(Qik)+λmax(Rik)). Therefore (89)∥x~(k)∥2≤c2c1αk(1+(lnμ/τalnα))∥x~(k0)∥s2. If the average dwell time τa satisfies τa>-lnμ/lnα, then the switched system (66) is exponentially stable with λ=α1/2 stability degree. For any nonzero u(k)∈l2[0,∞), under zero initial condition, combining (68), (69), (70), (85), and (89), one can easily obtain (90)J=∑k=0∞‍[e~T(k)e~(k)-γ2uT(k)u(k)]≤0. Therefore ∥e~(k)∥2≤γ∥u(k)∥2. This completes the proof.Remark 20. Recently, authors in [30, 31] have studied the problem of model reduction for discrete-time switched systems. In those papers, time delays are not taken into account. However, in most of the cases in engineering problems, there always exist unknown time-varying delays; moreover, the case of stable and unstable subsystems co exists. 
Motivated by this, in this paper, we discussed the problem of H∞ model reduction for switched linear discrete-time systems with time-varying delays via delay decomposition approach [10–12]. Accordingly, numerical results are given for time-varying delay cases. If there exist some unstable subsystems in the switched system (1), we have the following conclusion.Theorem 21. Given constants0<α<1, β>1, γ>0, μ≥1, h>0, and ϑ  (0<ϑ<h), if there exist some symmetric positive definite matrices P~i,Q~im,R~im(m=1,2,3) and matrices Xi,Yi,Li,Hi,Fi(i∈N-) such that (68), (69), (70), and the following LMIs hold: (91)[Π~i1Πi2Πi3]<0,[Π^i1Πi2Π-i3]<0. And Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0); then system (66) with the average dwell time τa satisfies τa>-lnμ/lnα  which is exponentially stable with an H∞ norm bound γ. Furthermore, if a feasible solution to the above LMIs (68), (69), (70), and (91) exists, then the system matrices of an admissible H∞ reduced-order model in the form of (65) are given by (72). Here,(92)Π~i1=[φ~11iφ~12i000φ~22iφ~23i00*φ~33iφ~34i0**φ~44i0***φ~55i],Π^i1=[φ~11i0φ^13i00φ^22iφ^23iφ^24i0*φ^33i00**φ~44i0***φ~55i],φ~11i=Q~i1+Q~i3-βP~-1ϑ(R~i1+R~i3),φ~12i=1ϑ(R~i1+R~i3),φ~22i=-Q~i3-1ϑ(2R~i1+R~i3),φ~23i=1ϑR~i1,φ~33i=βϑ(Q~i2-Q~i1)-βϑh-ϑR~i2-1ϑR~i1,φ~34i=βϑh-ϑR~i2,φ~44i=-βhQ~i2-βϑh-ϑR~i2,φ~55i=-γ2I,φ^13i=βϑ(R~i1+R~i3),φ^22i=-βϑQ~i3-βϑh-ϑ(2R~i2+R~i3),φ^23i=βϑh-ϑ(R~i2+R~i3),φ^24i=βϑh-ϑ(R~i2),φ^33i=βϑ(Q~i2-Q~i1)-βϑ(R~i1+R~i3)-βϑh-ϑ(R~i2+R~i3).Remark 22. The proof of Theorem21 is carried out by using the techniques employed in the previous section and is thus omitted here. ## 5. Examples In this section, we consider some numerical examples to illustrate the benefits of our results.Example 1 (see [20]). Consider the discrete-time switched system (1) with u(k)=0 and the following parameters: (93)A1=[00.3-0.20.1],Ad1=[00.100.2],A2=[00.3-0.2-0.1],Ad2=[00.100]. For this system, we choose μ=1.1 and λ=0.931. 
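For these choices, the ADT bound of Theorem 10 can be evaluated directly. The sketch below (illustrative, in Python rather than the paper's MATLAB setting) uses the relation α = λ², which follows from the decay estimate λ = α^{1/2} in the proof of Theorem 10:

```python
import math

def min_adt(mu, alpha):
    """Minimal average dwell time from Theorem 10: tau_a > -ln(mu)/ln(alpha)."""
    assert mu >= 1 and 0 < alpha < 1
    return -math.log(mu) / math.log(alpha)

# mu = 1.1 and decay rate lambda = 0.931, so alpha = lambda**2:
tau_a = min_adt(1.1, 0.931 ** 2)
print(round(tau_a, 3))  # 0.667
```

Since the resulting bound is below one sampling step, it places essentially no restriction on the switching signal in this example.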
Applying Theorem 10, by solving the LMIs (9) and (10) and (29), we can obtain the allowable delay upper bound h=20. It is reported, with decay rate λ=0.931, that the upper bound h can be obtained as 14 in [19] and 16 in [20]. Therefore, the result in this brief can indeed provide larger delay bounds than the results in [19, 20]. This supports the effectiveness of the proposed idea in Theorem 10 in reducing the conservatism of stability criteria.Example 2. Consider the discrete-time switched system (1) with u(k)=0 and parameters as follows: (94)A1=[00.3-0.20.1],Ad1=[00.100.2],A2=[00.3-0.2-0.1],Ad2=[1.30.100.9]. It is easy to check that the A2+Ad2 is unstable. In this case, we need to find a class of switching signals to guarantee the overall switched system to be exponentially stable. Set d(k)=[|3sin(kπ/6)|] and α=0.5329, according to Theorem 15 and by solving the LMIs (9), (10), (53), and (29), set ϑ=1; we have μ=2.4 and β=2.01. Choosing γ′=-0.18, we have Tk0,n+k0α/Tk0,n+k0β≥(lnβ-γ′)/(-lnα+γ′)=1.953 and τa>lnμ/-γ′=4.9. The simulation result of the switched system is shown in Figure 1, where the initial condition ϕ(θ)=[1.1-0.8]T and the switching law is shown in Figure 2. It can be seen from Figure 1 that the designed switching signals are effective although one subsystem is unstable. However, the results in [20] cannot find any feasible solution to guarantee the exponential stability of system (1).Figure 1 The state response.Figure 2 Switching law.Example 3 (see [31]). Consider the system (1) with parameters as follows:(95)A1=[0.130.22-0.130.080.05-0.030.190.06-0.07-0.05-0.04-0.12-0.170.210.030.28],A2=[0.110.22-0.130.080.05-0.030.150.06-0.07-0.03-0.04-0.12-0.170.210.030.2],Ad1=Ad2=[0.020.010000.0200000.020.010000.02],B1=[0.19-0.180.16-0.08]T,B2=[0.23-0.130.16-0.04]T,C1=C2=[1.20.50.030.28],Cd1=Cd2=[0.020.050.010.09],D1=D2=0.1. 
When the decay rate α is fixed, the maximum admissible delay h and the minimum performance index γ can be computed by solving the LMIs (68)–(70) in Theorem 19; the results obtained by different methods are listed in Table 1. Here, we choose μ=1.001. Assume the decay rate α=0.9; we can compute the maximum allowed delay h=42 and the minimum performance index γ=1.67, and from the ADT bound τa>-lnμ/lnα we have τa>0.0095. When h=2 and α=0.9, the minimum performance index is γ=0.53. On the other hand, assume the maximum allowed delay h=2 and performance index γ=2; we can compute the minimum decay rate α=0.59 and τa>0.0019.

Table 1: Comparison of parameters via different methods.

| Method | α | γ | h | τa |
| --- | --- | --- | --- | --- |
| [31] | 0.9 | 2 | 2 | >1.7305 |
| Theorem 19 | 0.9 | 1.67 | 42 | >0.0095 |
| Theorem 19 | 0.9 | 0.53 | 2 | >0.0095 |
| Theorem 19 | 0.59 | 2 | 2 | >0.0019 |
| Theorem 19 | 0.6 | 1.8 | 2 | >0.002 |

Let α=0.9. Here, we are interested in designing a q-order (q<4) system (65) and choosing switching signals with ADT τa=2 such that the model error system (66) is exponentially stable with H∞ norm bound γ=2; the reduced-order models are obtained by solving the corresponding LMIs (68)–(70) in Theorem 19.
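The dwell-time column of Table 1 for the Theorem 19 rows follows directly from the ADT bound τa > -lnμ/lnα with μ = 1.001; a quick check of those entries:

```python
import math

def adt_bound(mu, alpha):
    # Minimum average dwell time from Theorem 19: tau_a > -ln(mu)/ln(alpha), 0 < alpha < 1.
    return -math.log(mu) / math.log(alpha)

mu = 1.001
print(round(adt_bound(mu, 0.9), 4))    # 0.0095
print(round(adt_bound(mu, 0.59), 4))   # 0.0019
print(round(adt_bound(mu, 0.6), 4))    # 0.002
```

With μ this close to 1, the bound is tiny for any fixed decay rate, which is why the Theorem 19 rows of Table 1 report much smaller τa values than [31].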
For comparison with [31], we set the delay d(k)=2; the following reduced-order models are obtained.

Third-Order Model. (96) Ar1=[0.2753 0.0282 -0.0033; 0.0097 0.2507 -0.0033; -0.0045 -0.0124 0.2569], Ar2=[0.2799 0.0259 -0.0058; 0.0074 0.2581 -0.0025; -0.0051 -0.01 0.2611], Ard1=[-0.005 0.0069 -0.0023; 0.0037 -0.0046 0.003; -0.0011 0.0002 -0.0006], Ard2=[-0.001 0.0066 -0.0025; 0.0039 -0.0044 0.0033; -0.0018 0.001 -0.0024], Br1=[-0.171 0.1795 -0.111]T, Br2=[-0.191 0.148 -0.1285]T, Cr1=[-0.3016 -0.1328 -0.0149], Cr2=[-0.2987 -0.1265 -0.0173], Crd1=[-0.0314 -0.0047 -0.0182], Crd2=[-0.0361 -0.0011 -0.0199], Dr1=-0.1754, Dr2=-0.2396.

Second-Order Model. (97) Ar1=[0.2419 0.0355; 0.015 0.2141], Ard1=[-0.0028 0.0088; 0.0052 -0.007], Br1=[-0.1528 0.1617]T, Cr1=[-0.3109 -0.1453], Ar2=[0.2382 0.0324; 0.0147 0.2183], Ard2=[-0.0023 0.0076; 0.0046 -0.006], Br2=[-0.1667 0.1203]T, Cr2=[-0.3076 -0.1362], Crd1=[-0.0488 0.0034], Dr1=-0.2422, Crd2=[-0.05 0.0057], Dr2=-0.3605.

First-Order Model. (98) Ar1=0.2528, Ard1=-0.0057, Br1=-0.1498, Cr1=-0.2769, Crd1=0.0301, Dr1=-0.1792, Ar2=0.2606, Ard2=-0.005, Br2=-0.1787, Cr2=-0.2851, Crd2=-0.04, Dr2=-0.2624.

To illustrate the model reduction performance of the obtained reduced-order models, let the initial condition be zero and the exogenous input be u(k)=1.8exp(-0.4k). The output errors between the original system and the three reduced-order models obtained in this paper (blue lines) and in the literature [31] (red lines) are displayed in Figures 3, 4, and 5; the switching signal is shown in Figure 6.
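The error comparison in Figures 3–5 can be reproduced in outline with a short simulation. The sketch below runs the original fourth-order system of Example 3 against the first-order model (98) with d(k)=2, zero initial conditions, and u(k)=1.8exp(-0.4k); the periodic dwell-time-2 switching signal is an illustrative assumption, since the exact signal of Figure 6 is not tabulated here:

```python
import numpy as np

# Original 4th-order subsystems from (95).
A = {1: np.array([[0.13, 0.22, -0.13, 0.08],
                  [0.05, -0.03, 0.19, 0.06],
                  [-0.07, -0.05, -0.04, -0.12],
                  [-0.17, 0.21, 0.03, 0.28]]),
     2: np.array([[0.11, 0.22, -0.13, 0.08],
                  [0.05, -0.03, 0.15, 0.06],
                  [-0.07, -0.03, -0.04, -0.12],
                  [-0.17, 0.21, 0.03, 0.20]])}
Ad = np.array([[0.02, 0.01, 0.0, 0.0],
               [0.0, 0.02, 0.0, 0.0],
               [0.0, 0.0, 0.02, 0.01],
               [0.0, 0.0, 0.0, 0.02]])            # Ad1 = Ad2
B = {1: np.array([0.19, -0.18, 0.16, -0.08]),
     2: np.array([0.23, -0.13, 0.16, -0.04])}
C = np.array([1.2, 0.5, 0.03, 0.28])              # C1 = C2
Cd = np.array([0.02, 0.05, 0.01, 0.09])           # Cd1 = Cd2
D = 0.1                                           # D1 = D2

# First-order reduced model (98).
Ar = {1: 0.2528, 2: 0.2606};  Ard = {1: -0.0057, 2: -0.005}
Br = {1: -0.1498, 2: -0.1787}; Cr = {1: -0.2769, 2: -0.2851}
Crd = {1: 0.0301, 2: -0.04};   Dr = {1: -0.1792, 2: -0.2624}

N, d = 50, 2                                      # horizon and constant delay d(k) = 2
u = lambda k: 1.8 * np.exp(-0.4 * k)              # exogenous input
sigma = lambda k: 1 if (k // 2) % 2 == 0 else 2   # dwell-time-2 switching (illustrative)

x = np.zeros((N + d + 1, 4))                      # x[k + d] holds x(k); zero initial condition
xr = np.zeros(N + d + 1)
err = []
for k in range(N):
    i = sigma(k)
    y = C @ x[k + d] + Cd @ x[k] + D * u(k)                   # original output
    yr = Cr[i] * xr[k + d] + Crd[i] * xr[k] + Dr[i] * u(k)    # reduced-model output
    err.append(y - yr)
    x[k + d + 1] = A[i] @ x[k + d] + Ad @ x[k] + B[i] * u(k)
    xr[k + d + 1] = Ar[i] * xr[k + d] + Ard[i] * xr[k] + Br[i] * u(k)

print(err[0])  # (D1 - Dr1) * u(0) = 0.2792 * 1.8 ≈ 0.5026
```

Because the subsystems and the reduced model are all Schur stable and u(k) decays, the output error tends to zero, consistent with Figure 5; under zero initial conditions the error at k=0 is exactly (D1-Dr1)u(0).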
It can be seen from Figures 3–5 that the output errors between the original system and the reduced-order models obtained in this paper are smaller than those in [31].

Figure 3: Output errors between the original system and the 3rd-order model.

Figure 4: Output errors between the original system and the 2nd-order model.

Figure 5: Output errors between the original system and the 1st-order model.

Figure 6: Switching law.

## 6. Conclusions

The problems of exponential stability with H∞ performance and of H∞ model reduction for a class of switched linear discrete-time systems with time-varying delay have been investigated in this paper. The switching law is designed by the ADT technique, so that the overall switched system is exponentially stable even if one or more subsystems are unstable. Sufficient conditions for the existence of the desired reduced-order model are derived and formulated in terms of strict LMIs. By solving the LMIs, the reduced-order model can be obtained, together with a guaranteed H∞ gain for the error system between the original and reduced-order models. Finally, numerical examples are provided to illustrate the effectiveness and reduced conservatism of the proposed method. A potential extension of this method to the nonlinear case deserves further research.

---

*Source: 101473-2014-01-02.xml*
--- ## Abstract This paper is concerned with the problem of exponential stability andH∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For the high-order systems, sufficient conditions for the existence of reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are also given to demonstrate the effectiveness and reduced conservatism of the obtained results. --- ## Body ## 1. Introduction Switched systems belong to a special class of hybrid control systems, which comprises a collection of subsystems described by dynamics differential or difference equations, together with a switching law that specifies the switching rule among the subsystems. Due to the theoretical development as well as practical applications, analysis and synthesis of switched systems have recently gained considerable attention [1–4].Furthermore, the time-delay phenomenon is frequently encountered in a variety of industrial and engineering systems [5–7], for instance, chemical process, long distance transmission line, communication networks, and so forth. Moreover, time-delay is a predominant source of the poor performance and instability. In the last two decades, there has been increasing interest in the stability analysis for the systems; see, for example, [8, 9] and the references cited there in. 
For switched delay systems, due to the impact of time-delays, the behavior of switched delay systems is usually much more complicated than that of switched systems or delay systems [10, 11].The average dwell time (ADT) technique [12] and multiple Lyapunov function approach [13] are two powerful and effective tools for studying the problems of stability for switched systems under controlled switching. By applying ADT scheme, the disturbance attenuation properties of time-controlled switched systems are investigated in [14]. In [15], the exponential stability and L2-gain of switched delay systems are studied by using ADT approach. Furthermore, based on ADT method, in [16–18] the stability of switched systems with stable and unstable subsystems co existing was considered. Using the ADT scheme, switching design for exponential stability was proposed in [19] for a class of switched discrete-time constant time-delay systems. By using the multiple Lyapunov function approach and ADT technique, the literature [20] studied the problem of state feedback stabilization of a class of discrete-time switched singular systems with time-varying state delay under asynchronous switching. However, many free weighing matrices were introduced, which made the stability result complicated. In [11], the problem of stabilization and robust H∞ control via ADT method switching for discrete switched system with time-delay was considered. However, the procedures given in [11] could not be applied to the case of asynchronous switching or the case of switching delay systems with stable and unstable subsystems co existing. This motivates the present study.On another research front line, it is well known that mathematical modeling of physical systems often results in complex high-order models. However, this causes the great difficulties in analysis and synthesis of the systems. 
Therefore, in practical applications it is desirable to replace high-order models with reduced-order ones for reducing the computational complexities in some given criteria without incurring much loss of performance or information. The purpose of model reduction is to obtain a lower-order system which approximates a high-order system according to certain criterion. Recently, much attention has been focused on the model reduction problem [21–25]. Many important results have been reported, which involve various efficient approaches, such as the balanced truncation method [24], the optimal Hanker norm reduction method [25], the cone complementarily linearization method [26], and sequential linear programming matrix method [27]. In terms of LMIs with inverse constraints or other non convex conditions, the model reduction of the discrete-time context has been investigated in [28, 29]. However, it is difficult to obtain the numerical solutions. In [30], the existence conditions for H∞ model reduction for discrete-time uncertain switched systems are derived in terms of strict LMIs by using switched Lyapunov function method. However, time delays are not taken into account. In the literature [31], a novel idea to approximate the original time-delay system by a reduced time-delay model has been proposed recently. However, the unstable subsystems are not taken into account.Motivated by the preceding discussion, the main contributions of this paper are highlighted as follows. The problem of exponential stability andH∞ model reduction for a class of switched linear discrete-time systems with time-varying delay have been investigated. To lessen the computation complexity and to reduce the conservatism, new discrete LKF are constructed and the delay interval is divided into two unequal subintervals by the delay decomposition method. The switching law is given by ADT scheme, such that even if one or more subsystem is unstable the overall switched system still can be stable. 
For the high-order systems, sufficient conditions for the existence of the desired reduced-order model are derived in terms of strict LMIs, which can be easily solved by using MATLAB LMI control toolbox. Finally, numerical examples are given to show the effectiveness of the proposed methods.The remainder of this paper is structured as follows. In Section2, the problem formulation and some preliminaries are introduced. In Section 3, the main results are presented on the exponential stability of switched discrete-time systems with time-varying delay. In Section 4, the main results on the H∞ model reduction for the high-order systems are presented. Numerical examples are given in Section 5. The last section concludes the work.Notations.We use standard notations throughout the paper. λmin(M)(λmax(M)) stands for the minimal (maximum) eigenvalue of M. MT is the transpose of the matrix M. The relation M>N  (M<N) means that the matrix M-N is positive (negative) definite. ∥x∥ denotes the Euclidian-norm of the vector x∈Rn. Rn represents the n-dimensional real Euclidean space. Rn×m is the set of all real n×m matrices. diag{⋯} stands for a block-diagonal matrix. In symmetric block matrices or long matrix expressions, we use an asterisk “*” to represent a term that is induced by symmetry. I denotes the identity matrix. ## 2. Problem Description and Preliminaries Consider a class of switched linear discrete-time systems with time-varying state delay of the form(1)x(k+1)=Aix(k)+Adix(k-d(k))+Biu(k),y(k)=Cix(k)+Cdix(k-d(k))+Diu(k),x(θ)=ϕ(θ),θ=-h,-h+1,…,1, where x(k)∈Rn denotes the system state, y(k)∈Rm is the measured output, u(k)∈Rp is the disturbance input vector which belongs to l2[0,∞). ϕ(θ)∈Rn is a vector-valued initial function. The switching signal σ (denoting σ(k) for simplicity) :[0,∞)→N-={1,2,…,T} is a piecewise constant function and depends on time. σ=i means that the ith subsystem is activated. T is the number of subsystems. 
The system matrices Ai,Adi,Bi,Ci,Cdi, and Di are a set of known real matrices with appropriate dimensions.For a given finite positive integerh>0, d(k) is time-varying delay and satisfies the following condition (2)0≤d(k)≤h,∀k∈N+. To facilitate theoretical development, we introduce the following definitions and lemmas.Definition 1 (see [19]). The system (1) with disturbance input u(k)=0 is said to be exponentially stable if there exist a switching function σ(·) and positive number c such that every solution x(k,ϕ) of the system satisfies (3)∥x(k)∥≤cλk-k0∥ϕ∥s,∀k≥k0, for any initial conditions (k0,ϕ)∈R+×Cn. c>0 is the decay coefficient, 0<λ≤1 is the decay rate, and ∥ϕ∥s=sup{∥ϕ(l)∥,l=k0-h,k0-h+1,…,k0}.Definition 2 (see [11]). Consider the system (1) with the following conditions.(1) Withu(k)=0, the system (1) is exponentially stable with convergence rate λ>0.(2) TheH∞ performance ∥y(k)∥2<γ∥u(k)∥2 is guaranteed for all nonzero u(k)∈L2[0,∞) and a prescribed κ>0 under the zero condition.In the above conditions, the system (1) is exponentially stabilizable with H∞ performance γ and convergence rate λ.Hereγ characterizes the disturbance attenuation performance. The smaller the γ is, the better the performance is.Definition 3 (see [12]). For a switching signalσ(k) and any T2>k>T1≥0, let Nσ(T1,T2) denote the number of switching of σ(k) over (T1,T2). If for any given N0≥1 and Ta>0, we have Nσ(T1,T2)≤N0+(T2-T1)/Ta, then Ta and N0 are called the ADT and the chatter bound, respectively.Lemma 4 (see [9]). For any matrixR=RT>0, integers a≤b, vector function ξ(k):{-b,-b+1,…,-a}→Rn, then (4)(a-b)∑s=k-bk-a-1‍zT(s)Rz(s)≤ξT(k)[-RR-R]ξ(k). Here (5)z(k)=x(k+1)-x(k),ξT(k)=[xT(k-a)xT(k-b)].Lemma 5 (Schur complement [32]). LetM,P,Q be given matrices such that Q>0. 
Then (6)[PM*-Q]<0⟺P+MQ-1MT<0.The aim of this paper is to find a class of time-based switching signals for the discrete-time switched time-delay systems (1), whose subsystem is not necessarily stable, to guarantee the system to be exponentially stable. For a high-order system, we are interested in constructing a reduced-order switched system to approximate the system. ## 3. Stability Analysis With the preliminaries given in the previous section we are ready to state the exponential stability andH∞ performance of switched systems (1). To obtain the exponential stability of switched systems (1), we construct following discrete LKF: (7)Vi(k)=Vi1(k)+Vi2(k)+Vi3(k),∀i∈N-. Here (8)Vi1(k)=xT(k)Pix(k),Vi2(k)∑s=k-ϑk-1‍(1+αi)k-1-sxT(s)Qi1x(s)+∑s=k-hk-ϑ-1‍(1+αi)k-1-sxT(s)Qi2x(s)+∑s=k-d(k)k-1‍(1+αi)k-1-sxT(s)Qi3x(s),Vi3(k)∑θ=-ϑ-1‍∑s=k+θk-1‍(1+αi)k-1-szT(s)Ri1z(s)+∑θ=-h-ϑ-1‍∑s=k+θk-1‍(1+αi)k-1-szT(s)Ri2z(s)+∑θ=-d(k)-1‍∑s=k+θk-1‍(1+αi)k-1-szT(s)Ri3z(s), where Pi,Qim,Rim(i∈N-, m=1,2,3) are symmetric positive definite matrices with appropriate dimensions, z(k)=x(k+1)-x(k), and integer ϑ∈(0,h) and αi are given constants.Remark 6. In order to derive less conservative criteria than the existing ones, the delay interval[0,h] is divided into two unequal subintervals: [0,ϑ] and [ϑ,h], where ϑ∈(0,h) is a tuning parameter. The information about x(t-ϑ) can be taken into account. This plays a vital role in deriving less conservative results. Thus, for any k∈Z+, we have d(k)∈[0,ϑ] or d(k)∈[ϑ,h]. Firstly, we will provide a decay estimation of the system LKF in (7) along the state trajectory of switched system (1) without disturbance input (i.e., u(k)=0).Lemma 7. 
Given constants-1<αi≤0, h>0 and ϑ∈(0,h), if there exist some symmetric positive definite matrices Pi,Qim,Rim(i∈N-, m=1,2,3) such that the following LMIs hold: (9)[Ψi11Ψi12Ψi22]<0,(10)[Φi11Φi12Φi22]<0, where (11)Ψi11=[ψ11iψ12i00ψ22iψ23i0*ψ33iψ34i**ψ44i],Φi11=[ψ11i0ϕ13i0ϕ22iϕ23iϕ24i*ϕ33i0**ψ44i],Ψi12=[AiTPi(Ai-I)TW1iTAdiTPiAdiTW1iT0000],Φi12=[AiTPi(Ai-I)TW2iTAdiTPiAdiTW2iT0000],Ψi22=diag{-Pi-W1i},Φi22=diag{-Pi-W2i},ψ11i=-(1+αi)Pi-(1+αi)ϑϑ(Ri1+Ri3)+Qi1+Qi3,ψ12i=(1+αi)ϑϑ(Ri1+Ri3),ψ22i=-(1+αi)ϑQi3-(1+αi)ϑϑ(2Ri1+Ri3),ψ23i=(1+αi)ϑϑRi1,ψ33i=(1+αi)ϑ(Qi2-Qi1)-(1+αi)hh-ϑRi2-(1+αi)ϑϑRi1,ψ34i=(1+αi)hh-ϑRi2,ψ44i=-(1+αi)hQi2-(1+αi)hh-ϑRi2,ϕ13i=(1+αi)ϑϑ(Ri1+Ri3),ϕ22i=-(1+αi)hQi3-(1+αi)hh-ϑ(2Ri2+Ri3),ϕ23i=(1+αi)hh-ϑ(Ri2+Ri3),ϕ24i=(1+αi)hh-ϑRi2,ϕ33=-(1+αi)hh-ϑ(Ri2+Ri3)-(1+αi)ϑϑ(Ri1+Ri3)-(1+αi)ϑ(Qi1-Qi2),W1i=(h-ϑ)Ri2+ϑRi1+ϑRi3,W2i=(h-ϑ)Ri2+ϑRi1+hRi3. Then, by means of LKF (7), along the trajectory of the systems (1) without disturbance input, one has (12)ΔVi(k)=Vi(k+1)-Vi(k)≤αiVi(k).Proof. Let us choose the system LKF (7). Define (13)Vi(k+1)-(1+αi)Vi(k)=∑m=13‍Δ~Vim(k), where (14)Δ~Vim(k)=Vim(k+1)-(1+αi)Vim(k). Therefore, the following equality holds along the solution of (1): (15)Δ~Vi1(k)=xT(k+1)Pix(k+1)-(1+αi)xT(k)Pix(k),(16)Δ~Vi2(k)=xT(k)(Qi1+Qi3)x(k)-(1+αi)hxT(k-h)Qi2x(k-h)-(1+αi)d(k)xT(k-d(k))Qi3x(k-d(k))-(1+αi)ϑxT(k-ϑ)(Qi1-Qi2)x(k-ϑ),(17)Δ~Vi3(k)=zT(k)((h-ϑ)Ri2+ϑRi1+d(k)Ri3)z(k)-∑s=k-ϑk-1‍(1+αi)k-szT(s)Ri1z(s)-∑s=k-hk-ϑ-1‍(1+αi)k-szT(s)Ri2z(s)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)Ri3z(s). For any k∈Z+, we have d(k)∈[0,ϑ] or d(k)∈[ϑ,h].(1) Ifd(k)∈[0,ϑ], it gets (18)-∑s=k-ϑk-1‍(1+αi)k-szT(s)Ri1z(s)=-∑s=k-ϑk-1-d(k)‍(1+αi)k-szT(s)Ri1z(s)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)Ri1z(s).So (17) could be (19)Δ~Vi3(k)≤zT(k)((h-ϑ)Ri2+ϑRi1+ϑRi3)z(k)-∑s=k-hk-ϑ-1‍(1+αi)k-szT(s)Ri2z(s)-∑s=k-τ(k)k-1‍(1+αi)k-szT(s)(Ri1+Ri3)z(s)-∑s=k-ϑk-1-τ(k)‍(1+αi)k-szT(s)Ri1z(s). 
From Lemma 4, we have (20)-∑s=k-hk-ϑ-1‍(1+αi)k-szT(s)Ri2z(s)≤(1+αi)hh-ϑξ1T(t)[-Ri2Ri2-Ri2]ξ1(t),(21)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)(Ri1+Ri3)z(s)≤(1+αi)τ(k)τ(k)ξ2T(k)[-Ri1-Ri3Ri1+Ri3-Ri1-Ri3]ξ2(k)≤(1+αi)ϑϑξ2T(k)[-Ri1-Ri3Ri1+Ri3-Ri1-Ri3]ξ2(k),(22)-∑s=k-ϑk-1-d(k)‍(1+αi)k-szT(s)Ri1z(s)≤(1+αi)ϑϑ-τ(k)ξ3T(k)[-Ri1Ri1-Ri1]ξ3(k)≤(1+αi)ϑϑξ3T(k)[-Ri1Ri1-Ri1]ξ3(k), where (23)ξ1T(k)=[xT(k-ϑ)xT(k-h)],ξ2T(k)=[xT(k)xT(k-d(k))],ξ3T(k)=[xT(k-d(k))xT(k-ϑ)]. Combining (13)–(22), it yields (24)Vi(k+1)-(1+αi)Vi(k)≤ξT(k)Ψi11ξ(k)+xT(k+1)Pix(k+1)+zT(k)W1iz(k), where (25)ξT(k)=[xT(k)xT(k-d(k))xT(k-ϑ)xT(k-h)]. Multiplying (9) both from left and right by diag{0000Pi-1Wi-T}, by Schur Complement, further, considering (24), one can infer that (12) holds.(2) Ifd(k)∈[ϑ,h], it gets (26)-∑s=k-d(k)k-1‍(1+αi)k-szT(s)Ri3z(s)=-∑s=k-ϑk-1‍(1+αi)k-szT(s)Ri3z(s)-∑s=k-d(k)k-ϑ-1‍(1+αi)k-szT(s)Ri3z(s). One obtains (27)Δ~Vi3(k)≤zT(k)((h-ϑ)Ri2+ϑRi1+hRi3)z(k)-∑s=k-ϑk-1‍(1+αi)k-szT(s)(Ri1+Ri3)z(s)-∑s=k-d(k)k-ϑ-1‍(1+αi)k-szT(s)(Ri2+Ri3)z(s)-∑s=k-hk-d(k)-1‍(1+αi)k-szT(s)Ri2z(s). Similarly, it is easy to get that (28)Vi(k+1)-(1+αi)Vi(k)≤ξT(k)Φi11ξ(k)+xT(k+1)Pix(k+1)+zT(k)W2iz(k). If (10) holds, by Schur Complement, then we have (12). This completes the proof.Remark 8. Our LKF does not include free-weighing matrices as in previous investigations, and this may lead to reduce the computational complexity and get less conservation results.Remark 9. In order to get less conservative results, the delay interval[0,h] can be divided into much more subintervals. However, when the number of dipartite numbers increases, the matrix formulation becomes more complex and the time-consuming grows bigger.Now we have the following theorem.Theorem 10. If there exist some constants-1<αi<0 and positive definite symmetric matrices Pi,Qim,Rim(i∈N-, m=1,2,3) and μ≥1 such that (9), (10), and the following inequalities hold: (29)Pi≤μPj,Qim≤μQjm,Rim≤μRjm,∀i,j∈N-. 
Then, the switched system (1) with u(k)=0 and ADT satisfies τa>-lnμ/lnα which is exponentially stable.Proof. By Lemma7, we have (30)ΔVi(k)=Vi(k+1)-Vi(k)≤αiVi(k),∀i∈N-. Therefore, (31)Vi(k0+n)≤(αi+1)nVi(k0). There exists μi≥1(i∈N-), such that (32)Vi(k)≤μiVj(k),∀i,j∈N-. We let τ1,…,τNσ(k0,k0+k) denote the switching times of σ in (k0,k0+k), and let Nσ(k0,k+k0) be the switching number of σ in (k0,k+k0), by (31) and (32), one obtains (33)Vσ(k+k0)(k+k0)≤μσ(τ1)⋯μσ(τNσ(k0,k0+k))(ασ(k+k0)+1)m1⋯(ασ(k0)+1)mNσ(k0,k+k0)Vσ(k0)(k0), where m1+⋯+mNσ(k0,k+k0)=k. By-1<αi<0, for all i∈N-, we know that there exists α≜maxi∈N-{αi+1}∈(0,1). Let μ=maxi∈N-{μi}; from (33), one obtains (34)Vi(k+k0)≤αkμNσVj(k0)=αk+Nσ(lnμ/lnα)Vj(k0). By Definition 2, for any k0<k, it follows that (35)Vi(k)≤αk+Nσ(lnμ/lnα)Vj(k0)≤αk(1+(lnμ/Talnα))Vj(k0). By the system LKF (7), there always exist two positive constants c1,c2 such that (36)c1∥x(k)∥2≤Vi(k),Vi(k0)≤c2∥x(k0)∥s2, where (37)c1=mini∈N-{λmin(Pi)},c2=maxi∈N-{λmax(Pi)+∑m=13‍(λmax(Qim)+λmax(Rim))}. Therefore, (38)∥x(k)∥2≤c2c1αk(1+(lnμ/Talnα))∥x(k0)∥s2. If the average dwell time τa satisfies τa>-lnμ/lnα for μ≥1, then the switched system (1) with u(k)=0 is exponentially stable with λ=α1/2=maxi∈N-{(αi+1)1/2}∈(0,1) stability degree.Remark 11. The caseα=0 implies the asymptotic stability.The following theorem provides exponential stability analysis withH∞ performance of the system (1).Theorem 12. For given constantsγ>0, λ>0 and -1<αi<0, if there exist positive definite symmetric matrices Pi,Qim,Rim(i∈N-,m=1,2,3) and μ≥1 such that (29) and the following LMIs hold: (39)[Ψi110Ψi13-γ2IΨi23*Ψi33]<0,(40)[Φi110Φi13-γ2IΦi23*Φi33]<0, where (41)Ψi13=[AiTPi(Ai-I)TW1iTCiTAdiTPiAdiTW1iTCdiT000000],Φi13=[AiTPi(Ai-I)TW2iTCiTAdiTPiAdiTW2iTCdiT000000],Ψi33=diag{-Pi-W1i-I},Φi33=diag{-Pi-W2i-I},Ψi23=[BiTPiBiTW1iTDiT],Φi23=[BiTPiBiTW2iTDiT]. 
Then, the system (1) with average dwell time satisfies τa>-lnμ/lnα  which is globally exponentially stable with convergence rate λ and H∞ performance γ.Proof. Choose the LKF (7); the result is carried out by using the techniques employed for proving Lemma 7 and Theorem 10. If d(k)∈[0,ϑ], by (24), we have (42)Vi(k+1)-αVi(k)+yT(k)y(k)-γ2uT(k)u(k)≤xT(k+1)Pix(k+1)+zT(k)Wiz(k)+yT(k)y(k)+ζ1T(k)[Ψi1100-γ2]ζ1(k), where (43)ζ1T(k)=[ξT(k)uT(k)]. If d(k)∈[ϑ,h], by (28), we have (44)Vi(k+1)-αVi(k)+yT(k)y(k)-γ2uT(k)u(k)≤xT(k+1)Pix(k+1)+zT(k)Wiz(k)+yT(k)y(k)+ζ1T(k)[Φi1100-γ2]ζ1(k). Combining (39) and (40), by Schur Complement, one can obtain (45)Vi(k+1)-αVi(k)+yT(k)y(k)-γ2uT(k)u(k)≤0. Let (46)J(k)=yT(k)y(k)-γ2uT(k)u(k); we have (47)Vi(k+1)≤αVi(k)-J(k). By Definition 2 and Theorem 10, it is sufficient to show that ∑k=0∞J(k)<0 for any nonzero u(k). Combining (35) and (47), it can be shown that (48)Vσ(k)≤αkμNσ(0,k)Vσ(0)-∑s=0k-1‍αk-s-1μNσ(s,k)J(s). Under the zero initial condition, we have (49)V(0)=0,V(∞)≥0. Combining (48), we have (50)∑s=0k-1‍αk-s-1μNσ(s,k)J(s)=∑s=0k-1‍α-1elnα+lnμ/τaJ(s)≤0. Now, we consider (51)∑k=1∞‍∑s=0k-1‍α-1elnα+lnμ/τaJ(s). Exchanging the double-sum region, by τa>-lnμ/lnα and α∈(0,1), one can easily get (52)∑k=1∞‍∑s=0k-1‍α-1elnα+lnμ/τaJ(s)=∑s=0∞‍J(s)∑k=s+1∞‍α-1elnα+lnμ/τa=elnα+lnμ/τaα-11-elnα+lnμ/τa∑s=1∞‍J(s)≤0, which means that ∑s=1∞J(s)≤0. Then, by Definition 2, the system (1) with average dwell time satisfies τa>-lnμ/lnα  which is globally exponentially stable with convergence rate λ and H∞ performance γ. This completes the proof.If there exist some unstable subsystems in the switched system (1) with u(k)=0, in this case, we need to estimate the growth rate of the system LKF in (7) along the state trajectory of switched system (1). And the corresponding αj>0(j∈N-). By using the techniques employed for proving Lemma 7, one can easily obtain the following Lemma.Lemma 13. 
Given constantsαj>0, h>0 and ϑ∈(0,h), if there exist some symmetric positive definite matrices Pj,Qjm,Rjm(j∈N-, m=1,2,3) such that the following LMIs hold: (53)[Ψ-j11Ψj12Ψj22]<0,[Φ-j11Φj12Φj22]<0, where (54)Ψ-j11=[ψ-11jψ-12j00ψ-22jψ-23j0*ψ-33jψ-34j**ψ-44j],Φ-j11=[ψ-11j0ϕ-13j0ϕ-22jϕ-23jϕ-24j*ϕ-33j0**ψ-44j],ψ-11j=-(1+αj)Pj+Qj1+Qj3-1ϑ(Rj1+Rj3),ψ-12j=1ϑ(Rj1+Rj3),ψ-22j=-Qj3-1ϑ(2Rj1+Rj3),ψ-23j=1ϑRj1,ψ-34j=(1+αj)ϑh-ϑRj2,ψ-33j=(1+αj)ϑ(Qj2-Qj1)-(1+αj)ϑh-ϑRj2-1ϑRj1,ψ-44j=-(1+αj)hQj2-(1+αj)ϑh-ϑRi2,ϕ-11j=ψ-11j,ϕ-13j=(1+αj)ϑ(Rj1+Rj3),ϕ-22j=-(1+αj)ϑQj3-(1+αj)ϑh-ϑ(2Rj2+Rj3),ϕ-23j=(1+αj)ϑh-ϑ(Rj2+Rj3),ϕ-24j=(1+αj)ϑh-ϑRj2,ϕ-33j=-(1+αj)ϑh-ϑ(Rj2+Rj3)-(1+αj)ϑ(Rj1+Rj3)-(1+αj)ϑ(Qj1-Qj2). Then, by means of LKF (7), along the trajectory of the systems (1) without disturbance input, one has (55)ΔVj(k)=Vj(k+1)-Vj(k)≤αjVj(k).Remark 14. The proof of Lemma13 is similar to that of Lemma 7 and is thus omitted here. Based on Lemmas 7 and 13, one can easily design the stabilizing switching law to guarantee the system (1) with u(k)=0 to be exponentially stable, although some subsystems are unstable. Without loss of generality, we can assume thatN-u={j1,j2,…,js} is the set of all unstable subsystems and N-s={is+1,is+2,…,ip} is the set of all stable subsystems. For simplicity, the LKF (7) is defined as Vi(αi,k)≜Vi(k). Choose the LKF Vi(αi,k)  (-1<αi<0, i∈N-s) for the stable subsystem and choose the LKF Vj(αj,k)  (αj>0, j∈N-u) for the unstable subsystem. Then, we have the following conclusion.Theorem 15. If there exist some constants-1<αi<0, αj>0  (j≠i, i∈N-s, j∈N-u) and positive definite symmetric matrices Pi,Qim,Rim,Pj,Qjm,Rjm  (m=1,2,3) and μ≥1 such that Lemmas 7 and 13 and the following LMIs hold: (56)Pl≤μPs,Qlm≤μQsm,Rlm≤μRsm,∀l,s∈N-. Then, the switched system (1) with u(k)=0 and the average dwell time satisfies τa>lnμ/-κ, Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0)  which is exponentially stable.Proof. 
Consider the following LKF candidate:(57)Vσ(k)(k)={Vi(αi,k),σ(k)=i∈N-s,Vj(αj,k),σ(k)=j∈N-u. By Lemmas 7 and 13, we have (58)Vσ(k+1)(k+1)≤(ασ(k+1)+1)Vσ(k+1)(k). Let Tk0,n+k0α be the total activity time in which all subsystems satisfied 0>αi>-1 on the interval (k0,n+k0) and Tk0,n+k0β≜n-Tk0,n+k0α the total activity time in which all subsystems satisfied αj>0 on the interval (k0,n+k0). By using the techniques employed for proving Theorem 10, combining (56) and (58), we derive that (59)Vσ(n+k0)(n+k0)≤μNσ(n+k0)αTk0,n+k0αβTk0,n+k0βVσ(k0)(k0)=eTk0,n+k0αlnα+Tk0,n+k0βlnβ+Nσ(k0,n+k0)lnμVσ(k0)(k0), where (60)α≜maxi∈N-s{αi+1}∈(0,1),β≜maxj∈N-u{αj+1}>1. By Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0), one obtains (61)Tk0,n+k0αlnα+Tk0,n+k0βlnβ≤κn. So we have (62)Vσ(n+k0)(n+k0)≤eκn+Nσ(k0,n+k0)lnμVσ(k0)(k0). By Definition 2, for any n+k0>k0, it follows that (63)Vσ(n+k0)(n+k0)≤eκn+Nσ(k0,n+k0)lnμVσ(k0)(k0)≤en(κ+(lnμ/τa))Vσ(k0)(k0). By τa>lnμ/-κ, we have limk→∞Vσ(k)=0. Moreover, the overall system is exponentially stable. This completes the proof.Remark 16. From the proof of Theorem15, one can see that the obtained exponential stability for the switched system (1) with u(k)=0 is exponential stable with e-1/2 stability degree. In order to get a free decay rate, we can replace the condition τa>lnμ/-κ, Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ),κ∈(lnα,0) by τa>logϵμ/-κ, Tk0,n+k0α/Tk0,n+k0β≥(logϵβ-κ)/(-logϵα+κ), κ∈(logϵα,0), ϵ>1; then the switched system (1) with u(k)=0 is exponentially stable with ϵ-1/2 stability degree.Theorem 17. For given constantsγ>0, -1<αi<0, αj>0  (j≠i, i∈N-s, j∈N-u), if there exist positive definite symmetric matrices Pi,Qim,Rim,Pj,Qjm,Rjm  (m=1,2,3) and μ≥1 such that (56), (39), (40), and the following LMIs hold: (64)[Ψ-j110Ψj13-γ2IΨj23*Ψj33]0,[Φ-j110Φj13-γ2IΦj23*Φj33]<0, and Tk0,n+k0α/Tk0,n+k0β≥(lnβ-κ)/(-lnα+κ), κ∈(lnα,0), and the average dwell time satisfies τa>lnμ/-κ; then the switched system (1) is exponentially stable and with H∞ performance γ.Remark 18. 
The proof of Theorem17 is similar to that of Theorems 12 and 15 and is thus omitted here. ## 4.H∞ Model Reduction In this section, we will approximate system (1) by a reduced-order switched system described by (65)x^(k+1)=Arix^(k)+Ardix^(k-d(k))+Briu(k),y^(k)=Crix^(k)+Crdix^(k-d(k))+Driu(k), where x^(k)∈Rq is the state vector of the reduced-order system with q<n and y^(k)∈Rm is the output of reduced-order system. Ari,Ardi,Cri,Crdi,Bri, and Dri are the matrices with compatible dimensions to be determined. The system (65) is assumed to be switched synchronously by switching signal σ(k) in system (1).Augmenting the model of system (1) to include the states of (65), we can obtain the error system as follows: (66)x~(k+1)=A~ix~(k)+A~dix~(k-d(k))+B~iu(k),e~(k)=C~ix~(k)+C~dix~(k-d(k))+D~iu(k). Here (67)A~i=[Ai00Ari],A~di=[Adi00Ardi],B~i=[BiBri],x~(k)=[x(k)x^(k)],C~i=[Ci-Cri],C~di=[Cdi-Crdi],D~i=Di-Drdi,e~(k)=y(k)-y^(k).The following theorem gives a sufficient condition for the existence of an admissibleH∞ reduced-order model (65) for system (1).Theorem 19. Given constants0<α<1, γ>0, μ≥1, h>0, and ϑ  (0<ϑ<h), if there exist some symmetric positive definite matrices P~i,Q~im,R~im(m=1,2,3) and matrices Xi,Yi,Li,Hi,Fi(i∈N-) such that the following LMIs hold (68)[Πi1Πi2Πi3]<0,(69)[Π-i1Πi2Π-i3]<0,(70)P~i≤μP~j,Q~im≤μQ~jm,R~im≤μR~jm,∀i,j∈N-. Then system (66) with the average dwell time τa satisfies τa>-lnμ/lnα  which is exponentially stable with an H∞ norm bound γ. 
Here(71)Πi1=[φ11iφ12i000φ22iφ23i00*φ33iφ34i0**φ44i0***φ55i],Π-i1=[φ11i0φ-13i00φ-22iφ-23iφ-24i0*φ-33i00**φ44i0***φ55i],Πi2=[φi16Tφi17Tφi18Tφi26Tφi27Tφi28T000000φi56Tφi57Tφi58T],Πi3=diag{P~i-2U~iW~i-2U~i-I},Π-i3=diag{P~i-2U~iW^-2U~i-I},φ11i=Q~i1+Q~i3-αP~-αϑϑ(R~i1+R~i3),φ12i=αϑϑ(R~i1+R~i3),φ22i=-αϑQ~i3-αϑϑ(2R~i1+R~i3),φ23i=αϑϑR~i1,φ34i=αhh-ϑR~i2,φ33i=αϑ(Q~i2-Q~i1)-αhh-ϑR~i2-αϑϑR~i1,φ44i=-αhQ~i2-αhh-ϑR~i2,φ55i=-γ2I,φ-13i=φ12i,φ-22i=-αhQ~i3-αhh-ϑ(2R~i2+R~i3),φ-23i=αhh-ϑ(R~i2+R~i3),φ-24i=φ34i,φ-33=αϑ(Q~i2-Q~i1)-αϑϑ(R~i1+R~i3)-αhh-ϑ(R~i2+R~i3),W~i=(h-ϑ)R~i2+ϑR~i1+ϑR~i3,W^i=(h-ϑ)R~i2+ϑR~i1+hR~i3,φi16T=[AiTXiTAiTETYi0LiT],φi17T=[AiTXiT-XiTAiTETYi-ETY0LiT-YiT],φi18T=[CiT-CriT],φi26T=φi27T=[AidTXiTAidTETYi0HiT],φi28T=[CdiT-CrdiT],φi56=φi57=[XiBiFi+YiTEBi],φi58=Di-Drdi. Furthermore, if a feasible solution to the above LMIs (68), (69), and (70) exists, then the system matrices of an admissible H∞ reduced-order model in the form of (65) are given by (72)Ari=Yi-1Li,Ardi=Yi-1Hi,Bri=Yi-1Fi.Proof. Consider the following LKF for the switched system (66): (73)Vi(k)=Vi1(k)+Vi2(k)+Vi3(k). Here (74)Vi1(k)=x~T(k)P~ix~(k),Vi2(k)∑s=k-ϑk-1‍αk-1-sx~T(s)Q~i1x~(s)+∑s=k-hk-ϑ-1‍αk-1-sx~T(s)Q~i2x~(s)+∑s=k-d(k)k-1‍αk-1-sx~T(s)Q~i3x~(s),Vi3(k)∑θ=-ϑ-1‍∑s=k+θk-1‍αk-1-sz~T(s)R~i1z~(s)+∑θ=-h-ϑ-1‍∑s=k+θk-1‍αk-1-sz~T(s)R~i2z~(s)+∑θ=-d(k)-1‍∑s=k+θk-1‍αk-1-sz~T(s)R~i3z~(s), where z~(k)=x~(k+1)-x~(k) and P~i,Q~im,R~im(i∈N-, m=1,2,3) are symmetric positive definite matrices with appropriate dimensions; integer ϑ and α are given constants. By using the techniques employed for proving Lemma7, one can easily obtain the result. Calculate the difference of Vi(k) in (73) along the state trajectory of system (66).(1) Ifd(k)∈[0,ϑ], it gets (75)Vi(k+1)-αVi(k)+e~T(k)e~(k)-γ2uT(k)u(k)≤ξ~T(k)Πi1ξ~(k)+x~T(k+1)P~ix~(k+1)+zT(k)W~iz(k)+e~T(k)e~(k),where(76)ξ~T(k)=[x~T(k)x~T(k-d(k))x~T(k-ϑ)x~T(k-h)uT(k)].For any appropriately dimensioned matrices P~i>0 and nonsingular matrices U~i, we have (77)(P~i-U~i)TP~i-1(P~i-U~i)≥0. 
Thus (78)-U~iTP~i-1U~i≤P~i-2U~i. If (68) holds, we have (79)[Πi1Πi2Θi3]<0, where (80)Θi3=diag{-U~iTP~i-1U~iU~iTW~i-1U~i-I}. Let (81)U~i=[Xi0YiTEYi],E=[I0],YiAri=Li,YiArdi=Hi,YiBri=Fi. Multiplying (79) both from left and right by diag{00000U~i-TU~i-T-I}, by Schur Complement, further, considering (75), one can infer (82)Vi(k+1)-αVi(k)+e~T(k)e~(k)-γ2uT(k)u(k)≤0. Similarly, for the case of d(k)∈[ϑ,h], the fact that (69) holds means that (82) is true. Set (83)Γ(k)=e~T(k)e~(k)-γ2uT(k)u(k), we have (84)Vi(k+1)≤αVi(k)-Γ(k). Let Nσ(k0,k) be the number of switching times in (k0,k). From (84) and (70), we can obtain (85)Vi(k+k0)≤αkμNσ(k0,k)Vi(k0)-∑s=k0k-1‍αk-s-1μNσ(s,k)Γ(s)≤αk+Nσ(k0,k)(lnμ/lnα)Vj(k0)-∑s=k0k-1‍αk-s-1+Nσ(s,k)(lnμ/lnα)μNσ(s,k)Γ(s).Assume the zero disturbances input u(k)=0 to the state equation of system (66). By Definition 2, for any k0<k, it follows that (86)Vi(k)≤αk+Nσ(lnμ/lnα)Vj(k0)≤αk(1+(lnμ/τalnα))Vj(k0). From τa>-lnμ/lnα, one obtains limk→∞Vi(k)=0. There exist cn>0,  n=1,2, such that (87)c1∥x~(k)∥2≤Vi(k),Vi(k0)≤c2∥x~(k0)∥s2. Here (88)∥x~(k)∥s=maxθ=-h,…,0∥x~(k+θ)∥,c1=λmin(Pi),c2=λmax(Pi)+∑k=13‍(λmax(Qik)+λmax(Rik)). Therefore (89)∥x~(k)∥2≤c2c1αk(1+(lnμ/τalnα))∥x~(k0)∥s2. If the average dwell time τa satisfies τa>-lnμ/lnα, then the switched system (66) is exponentially stable with λ=α1/2 stability degree. For any nonzero u(k)∈l2[0,∞), under zero initial condition, combining (68), (69), (70), (85), and (89), one can easily obtain (90)J=∑k=0∞‍[e~T(k)e~(k)-γ2uT(k)u(k)]≤0. Therefore ∥e~(k)∥2≤γ∥u(k)∥2. This completes the proof.Remark 20. Recently, authors in [30, 31] have studied the problem of model reduction for discrete-time switched systems. In those papers, time delays are not taken into account. However, in most of the cases in engineering problems, there always exist unknown time-varying delays; moreover, the case of stable and unstable subsystems co exists. 
Motivated by this, this paper has discussed the problem of H∞ model reduction for switched linear discrete-time systems with time-varying delays via the delay decomposition approach [10–12]. Accordingly, numerical results are given for time-varying delay cases. If some subsystems of the switched system (1) are unstable, we have the following conclusion.

Theorem 21. Given constants $0<\alpha<1$, $\beta>1$, $\gamma>0$, $\mu\ge 1$, $h>0$, and $\vartheta$ ($0<\vartheta<h$), suppose there exist symmetric positive definite matrices $\tilde P_i,\tilde Q_{im},\tilde R_{im}$ ($m=1,2,3$) and matrices $X_i,Y_i,L_i,H_i,F_i$ ($i\in\bar N$) such that (68), (69), (70), and the following LMIs hold:

$$\begin{bmatrix}\tilde\Pi_{i1}&\Pi_{i2}\\ *&\Pi_{i3}\end{bmatrix}<0,\qquad
\begin{bmatrix}\hat\Pi_{i1}&\Pi_{i2}\\ *&\bar\Pi_{i3}\end{bmatrix}<0,\tag{91}$$

and $T^{\alpha}_{(k_0,n+k_0)}/T^{\beta}_{(k_0,n+k_0)}\ge(\ln\beta-\kappa)/(-\ln\alpha+\kappa)$ for some $\kappa\in(\ln\alpha,0)$. Then system (66), under any switching signal whose average dwell time satisfies $\tau_a>-\ln\mu/\ln\alpha$, is exponentially stable with an H∞ norm bound $\gamma$. Furthermore, if a feasible solution to the above LMIs (68), (69), (70), and (91) exists, then the system matrices of an admissible H∞ reduced-order model in the form of (65) are given by (72). Here (92):

$$\tilde\Pi_{i1}=\begin{bmatrix}\tilde\varphi_{11}^{i}&\tilde\varphi_{12}^{i}&0&0&0\\ *&\tilde\varphi_{22}^{i}&\tilde\varphi_{23}^{i}&0&0\\ *&*&\tilde\varphi_{33}^{i}&\tilde\varphi_{34}^{i}&0\\ *&*&*&\tilde\varphi_{44}^{i}&0\\ *&*&*&*&\tilde\varphi_{55}^{i}\end{bmatrix},\qquad
\hat\Pi_{i1}=\begin{bmatrix}\tilde\varphi_{11}^{i}&0&\hat\varphi_{13}^{i}&0&0\\ *&\hat\varphi_{22}^{i}&\hat\varphi_{23}^{i}&\hat\varphi_{24}^{i}&0\\ *&*&\hat\varphi_{33}^{i}&0&0\\ *&*&*&\tilde\varphi_{44}^{i}&0\\ *&*&*&*&\tilde\varphi_{55}^{i}\end{bmatrix},$$

with

$$\begin{aligned}
\tilde\varphi_{11}^{i}&=\tilde Q_{i1}+\tilde Q_{i3}-\beta\tilde P_i-\frac{1}{\vartheta}(\tilde R_{i1}+\tilde R_{i3}),\qquad
\tilde\varphi_{12}^{i}=\frac{1}{\vartheta}(\tilde R_{i1}+\tilde R_{i3}),\\
\tilde\varphi_{22}^{i}&=-\tilde Q_{i3}-\frac{1}{\vartheta}(2\tilde R_{i1}+\tilde R_{i3}),\qquad
\tilde\varphi_{23}^{i}=\frac{1}{\vartheta}\tilde R_{i1},\qquad
\tilde\varphi_{34}^{i}=\frac{\beta^{\vartheta}}{h-\vartheta}\tilde R_{i2},\\
\tilde\varphi_{33}^{i}&=\beta^{\vartheta}(\tilde Q_{i2}-\tilde Q_{i1})-\frac{\beta^{\vartheta}}{h-\vartheta}\tilde R_{i2}-\frac{1}{\vartheta}\tilde R_{i1},\qquad
\tilde\varphi_{44}^{i}=-\beta^{h}\tilde Q_{i2}-\frac{\beta^{\vartheta}}{h-\vartheta}\tilde R_{i2},\qquad
\tilde\varphi_{55}^{i}=-\gamma^{2}I,\\
\hat\varphi_{13}^{i}&=\beta^{\vartheta}(\tilde R_{i1}+\tilde R_{i3}),\qquad
\hat\varphi_{22}^{i}=-\beta^{\vartheta}\tilde Q_{i3}-\frac{\beta^{\vartheta}}{h-\vartheta}(2\tilde R_{i2}+\tilde R_{i3}),\\
\hat\varphi_{23}^{i}&=\frac{\beta^{\vartheta}}{h-\vartheta}(\tilde R_{i2}+\tilde R_{i3}),\qquad
\hat\varphi_{24}^{i}=\frac{\beta^{\vartheta}}{h-\vartheta}\tilde R_{i2},\\
\hat\varphi_{33}^{i}&=\beta^{\vartheta}(\tilde Q_{i2}-\tilde Q_{i1})-\beta^{\vartheta}(\tilde R_{i1}+\tilde R_{i3})-\frac{\beta^{\vartheta}}{h-\vartheta}(\tilde R_{i2}+\tilde R_{i3}).
\end{aligned}$$

Remark 22. The proof of Theorem 21 is carried out using the techniques employed in the previous section and is thus omitted here.

## 5. Examples

In this section, we consider some numerical examples to illustrate the benefits of our results.

Example 1 (see [20]). Consider the discrete-time switched system (1) with $u(k)=0$ and the following parameters (93):

$$A_1=\begin{bmatrix}0&0.3\\-0.2&0.1\end{bmatrix},\quad
A_{d1}=\begin{bmatrix}0&0.1\\0&0.2\end{bmatrix},\quad
A_2=\begin{bmatrix}0&0.3\\-0.2&-0.1\end{bmatrix},\quad
A_{d2}=\begin{bmatrix}0&0.1\\0&0\end{bmatrix}.$$

For this system, we choose $\mu=1.1$ and $\lambda=0.931$.
Applying Theorem 10 and solving the LMIs (9), (10), and (29), we obtain the allowable delay upper bound $h=20$. With the same decay rate $\lambda=0.931$, the upper bound was reported as $h=14$ in [19] and $h=16$ in [20]. Therefore, the result in this brief provides larger delay bounds than those in [19, 20], which supports the effectiveness of Theorem 10 in reducing the conservatism of stability criteria.

Example 2. Consider the discrete-time switched system (1) with $u(k)=0$ and the following parameters (94):

$$A_1=\begin{bmatrix}0&0.3\\-0.2&0.1\end{bmatrix},\quad
A_{d1}=\begin{bmatrix}0&0.1\\0&0.2\end{bmatrix},\quad
A_2=\begin{bmatrix}0&0.3\\-0.2&-0.1\end{bmatrix},\quad
A_{d2}=\begin{bmatrix}1.3&0.1\\0&0.9\end{bmatrix}.$$

It is easy to check that $A_2+A_{d2}$ is unstable (its spectral radius exceeds one). In this case, we need to find a class of switching signals that guarantees exponential stability of the overall switched system. Set $d(k)=[|3\sin(k\pi/6)|]$ and $\alpha=0.5329$. According to Theorem 15, solving the LMIs (9), (10), (53), and (29) with $\vartheta=1$ yields $\mu=2.4$ and $\beta=2.01$. Choosing $\gamma'=-0.18$, we have $T^{\alpha}_{(k_0,n+k_0)}/T^{\beta}_{(k_0,n+k_0)}\ge(\ln\beta-\gamma')/(-\ln\alpha+\gamma')=1.953$ and $\tau_a>\ln\mu/(-\gamma')=4.9$. The simulation result of the switched system is shown in Figure 1, with initial condition $\phi(\theta)=[1.1\;\;-0.8]^{T}$; the switching law is shown in Figure 2. It can be seen from Figure 1 that the designed switching signals are effective although one subsystem is unstable. In contrast, the results in [20] cannot find any feasible solution guaranteeing the exponential stability of system (1).

Figure 1: The state response. Figure 2: Switching law.

Example 3 (see [31]). Consider system (1) with the following parameters (95):

$$A_1=\begin{bmatrix}0.13&0.22&-0.13&0.08\\0.05&-0.03&0.19&0.06\\-0.07&-0.05&-0.04&-0.12\\-0.17&0.21&0.03&0.28\end{bmatrix},\qquad
A_2=\begin{bmatrix}0.11&0.22&-0.13&0.08\\0.05&-0.03&0.15&0.06\\-0.07&-0.03&-0.04&-0.12\\-0.17&0.21&0.03&0.2\end{bmatrix},$$

$$A_{d1}=A_{d2}=\begin{bmatrix}0.02&0.01&0&0\\0&0.02&0&0\\0&0&0.02&0.01\\0&0&0&0.02\end{bmatrix},\qquad
B_1=\begin{bmatrix}0.19&-0.18&0.16&-0.08\end{bmatrix}^{T},\qquad
B_2=\begin{bmatrix}0.23&-0.13&0.16&-0.04\end{bmatrix}^{T},$$

$$C_1=C_2=\begin{bmatrix}1.2&0.5&0.03&0.28\end{bmatrix},\qquad
C_{d1}=C_{d2}=\begin{bmatrix}0.02&0.05&0.01&0.09\end{bmatrix},\qquad
D_1=D_2=0.1.$$
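For Example 2 above, the claimed instability of the second subsystem is easy to verify numerically: the spectral radius of a real 2×2 matrix follows in closed form from its trace and determinant. A minimal sketch (the helper name is ours, not from the paper):

```python
import math

def spectral_radius_2x2(a11, a12, a21, a22):
    """Spectral radius of a real 2x2 matrix, from the characteristic
    polynomial lambda^2 - tr*lambda + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:  # two real eigenvalues
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    # complex conjugate pair: common modulus is sqrt(det) (det > 0 here)
    return math.sqrt(det)

# Example 2: entries of A1 + Ad1 and A2 + Ad2 from (94)
rho1 = spectral_radius_2x2(0.0, 0.4, -0.2, 0.3)   # A1 + Ad1
rho2 = spectral_radius_2x2(1.3, 0.4, -0.2, 0.8)   # A2 + Ad2

print(rho1 < 1.0)  # True: first subsystem is Schur stable
print(rho2 > 1.0)  # True: second subsystem is unstable
```

A discrete-time subsystem is Schur stable exactly when this radius is below one, which is why a stabilizing switching signal is needed in Example 2.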
When the decay rate $\alpha$ is fixed, the maximum admissible time-delay $h$ and the minimum performance index $\gamma$ can be computed by solving the LMIs (68)–(70) of Theorem 19; the results obtained with different methods are listed in Table 1. Here, we choose $\mu=1.001$. For decay rate $\alpha=0.9$, we can compute the maximum allowed delay $h=42$ and the minimum performance index $\gamma=1.67$; from the ADT condition $\tau_a>-\ln\mu/\ln\alpha$, we have $\tau_a>0.0095$. When $h=2$ and $\alpha=0.9$, the minimum performance index is $\gamma=0.53$. On the other hand, for maximum allowed delay $h=2$ and performance index $\gamma=2$, the minimum decay rate is $\alpha=0.59$ with $\tau_a>0.0019$.

Table 1: Comparison of parameters via different methods.

| Method | α | γ | h | τa |
| --- | --- | --- | --- | --- |
| [31] | 0.9 | 2 | 2 | >1.7305 |
| Theorem 19 | 0.9 | 1.67 | 42 | >0.0095 |
| Theorem 19 | 0.9 | 0.53 | 2 | >0.0095 |
| Theorem 19 | 0.59 | 2 | 2 | >0.0019 |
| Theorem 19 | 0.6 | 1.8 | 2 | >0.002 |

Let $\alpha=0.9$. We are interested in designing a $q$-order ($q<4$) system (65) and choosing switching signals with ADT $\tau_a=2$ such that the model error system (66) is exponentially stable with H∞ norm bound $\gamma=2$. This is achieved by solving the corresponding LMIs (68)–(70) of Theorem 19.
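The ADT bounds quoted above are plain arithmetic and can be checked directly (the helper name is ours):

```python
import math

def min_avg_dwell_time(mu, alpha):
    """Lower bound tau_a > -ln(mu)/ln(alpha) used throughout the paper
    (valid for mu >= 1 and 0 < alpha < 1)."""
    return -math.log(mu) / math.log(alpha)

# Values quoted for Example 3 (mu = 1.001)
print(round(min_avg_dwell_time(1.001, 0.9), 4))   # 0.0095
print(round(min_avg_dwell_time(1.001, 0.59), 4))  # 0.0019
```

Both rounded values match the entries in Table 1.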
For comparison with [31], we set the delay $d(k)=2$; the following reduced-order models are obtained.

Third-Order Model (96):

$$A_{r1}=\begin{bmatrix}0.2753&0.0282&-0.0033\\0.0097&0.2507&-0.0033\\-0.0045&-0.0124&0.2569\end{bmatrix},\qquad
A_{r2}=\begin{bmatrix}0.2799&0.0259&-0.0058\\0.0074&0.2581&-0.0025\\-0.0051&-0.01&0.2611\end{bmatrix},$$

$$A_{rd1}=\begin{bmatrix}-0.005&0.0069&-0.0023\\0.0037&-0.0046&0.003\\-0.0011&0.0002&-0.0006\end{bmatrix},\qquad
A_{rd2}=\begin{bmatrix}-0.001&0.0066&-0.0025\\0.0039&-0.0044&0.0033\\-0.0018&0.001&-0.0024\end{bmatrix},$$

$$B_{r1}=\begin{bmatrix}-0.171&0.1795&-0.111\end{bmatrix}^{T},\qquad
B_{r2}=\begin{bmatrix}-0.191&0.148&-0.1285\end{bmatrix}^{T},$$

$$C_{r1}=\begin{bmatrix}-0.3016&-0.1328&-0.0149\end{bmatrix},\qquad
C_{r2}=\begin{bmatrix}-0.2987&-0.1265&-0.0173\end{bmatrix},$$

$$C_{rd1}=\begin{bmatrix}-0.0314&-0.0047&-0.0182\end{bmatrix},\qquad
C_{rd2}=\begin{bmatrix}-0.0361&-0.0011&-0.0199\end{bmatrix},\qquad
D_{r1}=-0.1754,\qquad D_{r2}=-0.2396.$$

Second-Order Model (97):

$$A_{r1}=\begin{bmatrix}0.2419&0.0355\\0.015&0.2141\end{bmatrix},\qquad
A_{rd1}=\begin{bmatrix}-0.0028&0.0088\\0.0052&-0.007\end{bmatrix},\qquad
B_{r1}=\begin{bmatrix}-0.1528\\0.1617\end{bmatrix},\qquad
C_{r1}=\begin{bmatrix}-0.3109&-0.1453\end{bmatrix},$$

$$A_{r2}=\begin{bmatrix}0.2382&0.0324\\0.0147&0.2183\end{bmatrix},\qquad
A_{rd2}=\begin{bmatrix}-0.0023&0.0076\\0.0046&-0.006\end{bmatrix},\qquad
B_{r2}=\begin{bmatrix}-0.1667\\0.1203\end{bmatrix},\qquad
C_{r2}=\begin{bmatrix}-0.3076&-0.1362\end{bmatrix},$$

$$C_{rd1}=\begin{bmatrix}-0.0488&0.0034\end{bmatrix},\qquad D_{r1}=-0.2422,\qquad
C_{rd2}=\begin{bmatrix}-0.05&0.0057\end{bmatrix},\qquad D_{r2}=-0.3605.$$

First-Order Model (98):

$$A_{r1}=0.2528,\quad A_{rd1}=-0.0057,\quad B_{r1}=-0.1498,\quad C_{r1}=-0.2769,\quad C_{rd1}=0.0301,\quad D_{r1}=-0.1792,$$
$$A_{r2}=0.2606,\quad A_{rd2}=-0.005,\quad B_{r2}=-0.1787,\quad C_{r2}=-0.2851,\quad C_{rd2}=-0.04,\quad D_{r2}=-0.2624.$$

To illustrate the model reduction performance of the obtained reduced-order models, let the initial condition be zero and the exogenous input be $u(k)=1.8\exp(-0.4k)$. The output errors between the original system and the three reduced-order models obtained in this paper (blue line) and in [31] (red line) are displayed in Figures 3, 4, and 5; the switching signal is shown in Figure 6.
It can be seen from Figures 3–5 that the output errors between the original system and the reduced-order models obtained in this paper are smaller than those of [31].

Figure 3: Output errors between the original system and the 3rd-order model. Figure 4: Output errors between the original system and the 2nd-order model. Figure 5: Output errors between the original system and the 1st-order model. Figure 6: Switching law.

## 6. Conclusions

The problems of exponential stability with H∞ performance and of H∞ model reduction for a class of switched linear discrete-time systems with time-varying delay have been investigated in this paper. The switching law is designed via the ADT technique, so that the overall switched system can be exponentially stable even if one or more subsystems are unstable. Sufficient conditions for the existence of the desired reduced-order model are derived and formulated in terms of strict LMIs. By solving the LMIs, the reduced-order model can be obtained, which also provides an H∞ gain bound for the error system between the original system and the reduced-order model. Finally, numerical examples are provided to illustrate the effectiveness and reduced conservatism of the proposed method. A potential extension of this method to the nonlinear case deserves further research.
# Dual-Task Does Not Increase Slip and Fall Risk in Healthy Young and Older Adults during Walking **Authors:** Rahul Soangra; Thurmon E. Lockhart **Journal:** Applied Bionics and Biomechanics (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1014784 --- ## Abstract Dual-task tests can identify gait characteristics peculiar to fallers and nonfallers. Understanding the relationship between gait performance and dual-task related cognitive-motor interference is important for fall prevention. Dual-task adapted changes in gait instability/variability can adversely affect fall risks. Although implicated, it is unclear if healthy participants’ fall risks are modified by dual-task walking conditions. Seven healthy young and seven healthy older adults were randomly assigned to normal walking and dual-task walking sessions with a slip perturbation. In the dual-task session, the participants walked and simultaneously counted backwards from a randomly provided number. The results indicate that the gait changes in dual-task walking have no destabilizing effect on gait and slip responses in healthy individuals. We also found that, during dual-tasking, healthy individuals adopted cautious gait mode (CGM) strategy that is characterized by reduced walking speed, shorter step length, increased step width, and reduced heel contact velocity and is likely to be an adaptation to minimize attentional demand and decrease slip and fall risk during limited available attentional resources. Exploring interactions between gait variability and cognitive functions while walking may lead to designing appropriate fall interventions among healthy and patient population with fall risk. --- ## Body ## 1. Introduction Slip-induced fall accidents account for 87% of hip fractures and are associated with considerable medical cost and human suffering in older adults [1, 2]. 
Often, such fractures result in immobility [3] and admission to a skilled nursing facility, and they sometimes lead to death within one year [4]. Walking is a complex task involving higher-level cognitive processing such as estimation, planning, and real-time adjustment; in particular, executive function is involved. During walking, numerous sensory inputs (visual, proprioceptive, and vestibular) and competing objectives (e.g., upright posture versus locomotion) are seamlessly integrated across hierarchical systems, with subtle real-time decisions and adjustments made using cognitive capabilities [5].In essence, gait performance is affected by the simultaneous performance of dual tasks [6–11]. The dual-task paradigm is commonly used to assess multitasking capabilities. It is presumed that multitasking is influenced by age-related changes in attentional capacity [12] and by a reduced ability to share processing resources between two concurrent tasks [13]. O’Shea and colleagues [14] suggested that the detrimental effect of a physical task performed in the presence of a competing attentional task supports a “capacity sharing model” of dual-tasking. According to the capacity sharing model, performing two attention-demanding tasks degrades the performance of one or both tasks when the attentional capacity limit is exceeded [14]. Previous research has demonstrated that dual-task interference results in slower gait speed [13, 15, 16], reduced cadence [16, 17], shorter stride length [15, 16], increased stride duration [16], and longer double-support time [13, 18]. Notably, dual-task performance can help identify gait characteristics that distinguish fallers from nonfallers [19].The influence of attention on gait stability has been studied in numerous patient populations, and results consistently show decreased gait velocity and increased gait variability under dual-task conditions [11, 20–23].
Persons with a history of falls show more pronounced gait changes during dual-task performance than nonfallers [9, 24–26]. Nevertheless, the association between dual-tasking and falls continues to be debated, and there is limited knowledge about how dual-tasking influences slip characteristics. Some studies reported worsening of gait performance and concluded that dual-tasking is predictive of falls [6, 27, 28], while others failed to establish any relationship [29, 30]. One study reported that dual-task related gait changes did not provide any information beyond performance under single-task conditions [31]. These discrepancies may be due to several confounds such as age [7, 9], comorbidities [10, 27], and the kind of attention-demanding task used [11, 32]. Some findings corroborate previous investigations: a poorer ability to perform a basic mobility task while carrying a cup of water [6] and the cessation of walking when engaged in conversation [27] are both associated with a fourfold risk of falling. Lundin-Olsson et al. reported that a dual motor task can differentiate fall-prone frail elderly from healthy older adults [27]. Fall risk is independent of gait speed but is modulated instead by gait variability [33]. Previous studies demonstrate that gait speed and gait unsteadiness may be dissociated [33–35]. Healthy older adults walk with the same small amount of variability as healthy young adults, even though they walk more slowly [36]. However, Springer et al. [37] reported that gait variability increased in older fallers but not in young adults or older nonfallers.
They reported that healthy young adults and nonfallers maintain stable gait during dual-task walking and that there is no evidence of detrimental effects of dual-task activities on gait variability associated with aging. In essence, the dual-task paradigm is considered more sensitive for identifying fall risk since it widens the gap between fallers and nonfallers [37–39].To stabilize themselves, healthy people have been found to decrease their gait speed [37]. Similarly, elderly nonfallers were found to decrease their swing times and gait speed [37]. This dual-task related decline in walking speed is interpreted as an implicit strategy to avoid loss of balance [8]; the reduction of gait speed represents a coping mechanism for handling the attention-demanding challenge of the dual-task activity.The finding of decreased gait velocity during dual-task walking is undisputed and consistent across most studies [20, 21, 23, 37, 40]. Although a dual-task related decline in walking speed is not specific to an increased risk of falling, an increase in stride time variability is closely associated with the occurrence of falls [33, 41]. An association exists between low stride time variability and efficient executive function in healthy older adults [5], and between high stride time variability and impaired executive function in demented older adults [23]. Low stride time variability in healthy older adults is associated with minor involvement of attention in the control of the rhythmic stepping mechanism [11].Previous research has shown that dual-task related gait changes consist of increases in the number of stops, lateral deviations, steps, and walking time [6, 7, 11] and increases in stride width, stride length, and stride time variabilities [7, 42]. Intrasubject variability of kinematic variables is an index of movement consistency, or stability, of gait performance.
A negative correlation has been reported between step width variability and balance performance in elderly women [43], as well as increased step length variability in hospitalized fallers compared with nonfallers [44]. Gabell and Nayak [45] found no effect of age on variability in step length or step width during walking. Maruyama and Nagasaki [46] reported that temporal variability of stance phase duration within the gait cycle was a decreasing function of speed. Increasing walking speed produced a linear increase in step width variability, in contrast to step length variability, in healthy adults [47]. Gabell and Nayak [45] suggested that step length variability is determined predominantly by the gait patterning mechanism, whereas step width variability is largely determined by the balance control mechanism. Similarly, Heitmann et al. found a negative correlation between balance performance and step width variability, but not between balance performance and step length variability. ### 1.1. Objective Performance of a secondary task, that is, a dual task, affects certain aspects of gait, but the relationship between gait variability, dual-tasking, and slip and fall risk is not well understood. This study was conducted to better understand the motor control of gait and the relationship between an individual’s motor variability and fall risk under dual-task walking conditions. Exploring dual-task related gait changes is of particular interest in understanding variability because a strong relationship exists between dual-task related gait changes and the risk of falling in older adults [6, 28, 29].The primary objective of this study was to investigate the relationship between dual-tasking and slip-induced fall risk. To our knowledge, no previous study has examined the effects of dual-tasking on slip and fall risk. This study involves two groups (young and old individuals) with known differences in slip and fall risk [48].
It was hypothesized that dual-tasking while walking would affect gait characteristics and may increase the slip initiation characteristics in the elderly individuals and will negatively influence slip-induced risk. It was also hypothesized that friction demand and trip risk measured using toe clearance will be significantly different for normal walking and dual-task walking. ## 2. Methods ### 2.1. Participants The sample size was estimated using power analysis on the results of the published study by focusing on sample sizes that are large enough to determine differences between the velocities during normal walking. Palombaro et al.
have determined that the minimal clinically important difference (MCID) for habitual gait speed is 0.10 m/s, with a measurement standard deviation of 0.10 m/s [49]. Therefore, means and standard deviations of velocity were used to compute the required sample size (using JMP, version 7, SAS Institute Inc., Cary, NC, 1989–2007). The required sample size for detecting significant differences in velocity, given α=0.05, power = 0.80, and a small effect size (Cohen’s d) of 0.2 [50], was determined as n=7 per group. Seven young and seven old participants were recruited for this study. The younger participants were college students on the Virginia Tech campus, and the older adults were retired people in the Blacksburg area. The recruited participants were in generally good health, with no recent cardiovascular, respiratory, neurological, or musculoskeletal abnormalities. Only one of the elderly participants (O02) suffered from chronic obstructive pulmonary disease (COPD). All participants were recruited based on the criteria of complete ambulation without the use of any assistive devices, the ability to rise from a chair without assistance, and freedom from orthopedic injury. This study was approved by the Institutional Review Board (IRB) of Virginia Tech. All participants provided written consent prior to the beginning of data collection. Demographic information for the participants is provided in Table 1.

Table 1: Background characteristics of study participants.

| | Old, mean | Old, SD | Young, mean | Young, SD |
| --- | --- | --- | --- | --- |
| Age [years] | 71.14 | 6.51 | 22.64 | 2.56 |
| Height [cm] | 174.57 | 10.24 | 170.37 | 9.33 |
| Weight [kg] | 78.55 | 18.25 | 69.65 | 15.52 |
| BMI | 25.52 | 4.27 | 23.78 | 4.00 |

### 2.2. Instrumentation The experiments were conducted on a 15-meter linear walking track embedded with two force plates (BERTEC #K80102, Type 45550-08, Bertec Corporation, OH 43212, USA, and AMTI BP400600 SN6780, Advanced Mechanical Technology Inc., Watertown, MA 02472, USA).
A six-camera ProReflex system (Qualisys, Gothenburg, Sweden) was used to collect three-dimensional kinematics of posture and gait. Kinematic data were sampled and recorded at 120 Hz. Ground reaction forces of participants walking over the test surfaces were measured using the two force plates and sampled at 1200 Hz. A sixteen-channel surface electromyography (s-EMG) DTS Telemyo system (Noraxon, 15770 N Greenway-Hayden Loop, #100, Scottsdale, AZ, USA) was used to record the temporal activation of two ankle muscles (gastrocnemius and tibialis anterior) in both lower extremities during walking. ### 2.3. Experimental Protocol All participants were first familiarized with the laboratory equipment and given a verbal explanation of the experimental procedure. Participants were asked to wear laboratory clothes and shoes fitted to their sizes. Height and weight were recorded under the ID number assigned to each participant. Surface electromyogram (s-EMG) electrodes were placed over the gastrocnemius and tibialis anterior muscles, located by asking participants to plantarflex and dorsiflex the ankle. Twenty-six reflective markers were attached to bony landmarks of the body: the head, both ears, both acromioclavicular joints, acromions, humeral condyles, ulnar styloids, knuckles, both right and left anterior superior iliac spines (ASIS), greater trochanters, the medial and lateral condyles of both limbs, the malleoli (medial and lateral), and the heels and toes of both feet (shown in Figure 1). The marker configuration was similar to that defined by Lockhart et al. 2003 and was used to derive the whole-body center-of-mass biomechanical model [51]. Kinetic data were acquired using the two forceplates, positioned such that two consecutive steps would fall on them.
The slippery surface (on top of the second forceplate) was covered with a 1:1 water and jelly mixture to reduce the coefficient of friction (COF) of the floor surface (dynamic COF ≈ 0.12). Participants were kept unaware of the position of this surface, as the embedded forceplates were covered with a vinyl texture similar to that of the walkway. This is a well-standardized protocol used in several previous slip and fall studies [48, 51]. The experiment was divided into two sessions: a normal session and a dual-task session (Figure 2). The sessions were separated by 4 days, and each participant was randomly assigned to either the normal or the dual-task session first.Figure 1 The placement of reflective markers, inertial sensors, and s-EMG.Figure 2 Participants were assigned to the normal or dual-task session randomly and the listed tests were conducted. #### 2.3.1. Normal Walking and Slip After attachment of the s-EMG electrodes and markers, participants were instructed to walk on the walkway for 15–20 minutes at their self-selected pace. Gait data were acquired using motion capture, IMUs, forceplates, and the EMG system. The starting point of each walk was adjusted such that the nonslipping (nondominant) foot landed on the first forceplate and the dominant foot landed on the second. Participants were told that they “may or may not slip” and that they should look forward while walking. Additionally, participants were unaware of the placement of the slippery surface. Once five walking trials with a complete foot fall on the forceplate were obtained, the slippery surface was introduced on the forceplate where the dominant foot was expected to strike. #### 2.3.2. Dual-Task Walking and Slip This study used a clear and standardized cognitive task, serial subtraction [52, 53]. This session was similar to the normal walking session described above, except that the participants counted backwards while walking.
The investigator announced a random number before each walking trial, and the participant had to repeatedly subtract three from it until reaching the other end of the walkway. The investigator corrected the participant if an error was made in counting backwards. ### 2.4. Data Processing Kinematic and kinetic data from the normal and dual-task walking trials were low-pass filtered using a Butterworth filter with a 6 Hz cut-off frequency. The EMG data were digitally bandpass filtered at 20–500 Hz, then rectified and low-pass filtered (Butterworth, 6 Hz cut-off) to create a linear envelope. Heel contact (HC) and toe-off (TO) events were identified from the ground reaction forces with a threshold of 11 N (see Abbreviations). The analysis was performed over the stance phase (HC to TO) of the nonslipping foot. #### 2.4.1. Gait Variables The gait variables assessed for the walking conditions were as follows: (i) Step length (SL): the distance travelled in one step, computed as the anterior-posterior distance between the ipsilateral heel marker and the contralateral heel marker at one step. (ii) Step width (SW): the mediolateral distance travelled in one step, computed as the mediolateral distance between the feet during a step. (iii) Double-support time (DST): the time when both feet are on the ground; in one stride there are two double-support intervals, each starting at the initial contact of one foot and ending at the toe-off of the other foot. (iv) Heel contact velocity (HCV): the instantaneous horizontal velocity of the heel at the moment of heel contact, where heel contact is defined as the instant at which the vertical force on the forceplate exceeds 11 N. After processing the heel marker data, HCV was extracted from the horizontal heel positions 1/120 s before and after heel contact:

$$\mathrm{HCV}=\frac{X_{i+1}-X_{i-1}}{2\Delta t},\tag{1}$$

where $i$ is the frame index at the moment of heel contact.
The variables $X_{i+1}$ and $X_{i-1}$ represent the horizontal heel positions at the frames occurring 1/120 s after and before the instant of heel contact, respectively, and $\Delta t=1/120$ s. #### 2.4.2. Slip Propensity Measures (i) Required coefficient of friction (RCOF): the minimum coefficient of friction required between the shoe sole and the floor to prevent slipping. If the floor surface and shoe tribology can meet the RCOF, walking proceeds without slipping, whereas if the RCOF exceeds the available friction between shoe and floor, a slip occurs. The RCOF is defined as the ratio of the forward horizontal ground reaction force to the vertical ground reaction force, $F_X/F_Z$. (ii) Transverse coefficient of friction (TCOF): the ratio of the lateral ground reaction force to the vertical ground reaction force, $F_Y/F_Z$. Trip propensity measure: (i) toe clearance: the critical event during midswing when the foot clears the ground surface with minimum height. #### 2.4.3. Slip-Severity Parameters (i) Initial slip distance (SDI): the initial slip begins after heel contact when the first nonrearward positive acceleration of the foot is identified (Figure 3). SDI is the distance travelled by the heel from this point of nonrearward positive acceleration (minimum velocity) to the time of the first peak in heel acceleration [51]:

$$\mathrm{SDI}=\sqrt{(X_2-X_1)^2+(Y_2-Y_1)^2}.\tag{2}$$

(ii) Slip distance II (SDII): begins at the end point of SDI (midslip) and stops at the first maximum in horizontal heel velocity after the start of SDII. SDI and SDII are used as indices for comparing the severity of slips (Figure 3). (iii) Peak sliding heel velocity (PSHV): the maximum forward speed of the heel during slipping.
This parameter is calculated as the time derivative of the heel marker position during the slip.Figure 3 (a) Horizontal heel velocity of the slipping foot. (b) Horizontal heel acceleration of the slipping foot.(iv) Time2SDI: the time to reach midslip from slip start, that is, the time to cover SDI. (v) TimeSDI2SDII: the time taken from midslip to slip stop. (vi) TimeSDtotal: the time taken from slip start to slip stop. ### 2.5. Plantar Flexion Muscle Cocontraction EMG activity was peak-normalized within each subject using the ensemble average method over the complete gait cycle [54]. The cocontraction index (CCI) was then calculated as [55]

$$\mathrm{CCI}=\frac{\mathrm{LowerEMG}_i}{\mathrm{HigherEMG}_i}\times\left(\mathrm{LowerEMG}_i+\mathrm{HigherEMG}_i\right),\tag{3}$$

where $\mathrm{LowerEMG}_i$ refers to the less active muscle at time $i$ and $\mathrm{HigherEMG}_i$ refers to the more active muscle at time $i$.The ratio of the EMG activity of the tibialis anterior to the gastrocnemius was considered in this study (Figure 4); the ratio is multiplied by the sum of the activity of the two muscles. Cocontraction was defined as the event when bursts of muscle activity of the agonist and antagonist muscles overlapped for at least 5 ms [56]. Slip-severity parameters were associated with cocontraction indexes of the slipping foot (SF; the right foot in all trials) and of the contralateral limb (nonslipping foot, NSF; the left foot in all slip trials): (vii) SFMeanCCI: the mean CCI in the slipping foot from slip start to slip stop. (viii) SFPeakCCI: the peak CCI value from slip start to slip stop. (ix) SFTime2PeakCCIfromNSFHC: the time to generate peak ankle cocontraction from the heel contact of the unperturbed foot. (x) NSFMeanCCI: the mean CCI in the nonslipping foot from slip start to slip stop.
(xi) NSFStanceTime: the single-stance duration of the nonslipping foot immediately before the perturbation event.Figure 4 (a) Ankle cocontraction values in the slipping foot. (b) Ankle cocontraction values in the nonslipping foot. ### 2.6. Mini-Mental State Examination (MMSE) The MMSE examines multiple areas of cognition. The highest possible score is 30; a score of less than 24 denotes cognitive impairment. Scores of 18 to 23 reflect mild cognitive impairment, scores of 10 to 17 suggest moderate cognitive impairment, and scores below 10 denote severe cognitive impairment [57]. ### 2.7. Statistical Design There were two independent variables: age group (young versus old) and condition (normal versus dual task). A mixed-factor multivariate analysis of variance (MANOVA) was conducted, with age group as a between-subjects factor and condition (dual task/normal) as a within-subject factor. Using the Wilks’ Lambda test, the MANOVA determined which factors had significant effects on the multiple dependent variables as a whole (i.e., gait parameters, muscle cocontraction, and slip parameters). Following the MANOVA, univariate mixed-factor ANOVAs were conducted separately for each dependent variable.All statistical analyses were conducted using JMP (Pro 10.0.2, SAS Institute Inc.) with a significance level of α = 0.05 for all statistical tests. All dependent variables were evaluated for normality (Shapiro-Wilk W test) and by residual analysis; the results did not indicate any violation of normality assumptions.
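The scalar measures defined in Sections 2.4 and 2.5 reduce to a few lines of arithmetic. A minimal sketch follows; the function names and the sample values are ours for illustration, not study data:

```python
# Minimal arithmetic behind the measures in Sections 2.4-2.5.

def heel_contact_velocity(x_prev, x_next, dt=1.0 / 120.0):
    """Central difference of equation (1) around the heel-contact frame."""
    return (x_next - x_prev) / (2.0 * dt)

def rcof(fx, fz):
    """Required coefficient of friction: forward shear over normal force."""
    return fx / fz

def cocontraction_index(emg_a, emg_b):
    """CCI of equation (3): (lower / higher) * (lower + higher)."""
    lo, hi = min(emg_a, emg_b), max(emg_a, emg_b)
    return (lo / hi) * (lo + hi)

# Hypothetical sample values
print(heel_contact_velocity(0.100, 0.104))  # heel speed in m/s, here 0.24
print(rcof(120.0, 600.0))                   # 0.2
print(cocontraction_index(0.3, 0.6))        # 0.45
```

Note that the CCI is bounded above by twice the higher activation (reached when the two muscles are equally active), which is why overlapping agonist/antagonist bursts drive the index up.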
The standard deviation of measurement is 0.10 m/s [49]. Therefore, the means and standard deviations of velocity in this study were used to compute the required sample size (using JMP, version 7, SAS Institute Inc., Cary, NC, 1989–2007). The required sample size for detecting significant differences in velocity, given α = 0.05, power = 0.80, and a small effect size (Cohen's d) of 0.2 [50], was determined to be n = 7 per group. Seven young and seven old participants were recruited for this study. The younger participants were college students on the Virginia Tech campus, and the older adults were retirees in the Blacksburg area. The recruited participants were in generally good health, with no recent cardiovascular, respiratory, neurological, or musculoskeletal abnormalities; the one exception was an elderly participant (O02) with chronic obstructive pulmonary disease (COPD). All participants were recruited on the criteria of complete ambulation without the use of any assistive devices, the ability to rise from a chair without assistance, and freedom from orthopedic injury. This study was approved by the Institutional Review Board (IRB) of Virginia Tech. All participants provided written consent prior to the beginning of data collection. Demographic information for the participants is provided in Table 1.

Table 1. Background characteristics of study participants.

| | Old (mean) | Old (SD) | Young (mean) | Young (SD) |
|---|---|---|---|---|
| Age [years] | 71.14 | 6.51 | 22.64 | 2.56 |
| Height [cm] | 174.57 | 10.24 | 170.37 | 9.33 |
| Weight [kg] | 78.55 | 18.25 | 69.65 | 15.52 |
| BMI | 25.52 | 4.27 | 23.78 | 4.00 |

## 2.2. Instrumentation

The experiments were conducted on a 15-meter linear walking track embedded with two force plates (BERTEC #K80102, Type 45550-08, Bertec Corporation, OH 43212, USA, and AMTI BP400600 SN6780, Advanced Mechanical Technology Inc., Watertown, MA 02472, USA).
A six-camera ProReflex system (Qualisys, Gothenburg, Sweden) was used to collect three-dimensional kinematics of posture and gait. Kinematic data were sampled and recorded at 120 Hz. Ground reaction forces of participants walking over the test surfaces were measured using the two force plates and sampled at 1200 Hz. A sixteen-channel surface electromyography (s-EMG) DTS Telemyo system (Noraxon, 15770 N Greenway-Hayden Loop, #100, Scottsdale, AZ, USA) was used to record the temporal activation of two ankle muscles (gastrocnemius and tibialis anterior) in both lower extremities during walking.

## 2.3. Experimental Protocol

All participants were first familiarized with the laboratory equipment and were provided a verbal explanation of the experimental procedure. Participants were requested to wear laboratory clothes and shoes fitting their sizes. Height and weight were recorded under the ID number assigned to each participant. Surface electromyogram (s-EMG) electrodes were placed over the gastrocnemius and tibialis anterior muscles, located by asking participants to plantarflex and dorsiflex the ankle. Twenty-six reflective markers were attached to bony landmarks of the body: the head, both ears, both acromioclavicular joints, acromions, humeral condyles, ulnar styloids, knuckles, right and left anterior superior iliac spines (ASIS), greater trochanters, medial and lateral condyles of both limbs, malleoli (medial and lateral), and the heels and toes of both feet (shown in Figure 1). The marker configuration was similar to that defined by Lockhart et al. (2003) and was used to derive the whole-body center-of-mass biomechanical model [51]. Kinetic data were acquired using the two forceplates, positioned such that two consecutive steps would fall on them.
The slippery surface (on top of the second forceplate) was covered with a 1:1 water and jelly mixture to reduce the coefficient of friction (COF) of the floor surface (dynamic COF ≈ 0.12). Participants were kept unaware of the position of this surface because the embedded forceplates were covered with the same vinyl texture as the walkway. This is a well-standardized protocol used in several previous slip-and-fall studies [48, 51]. The experiment was divided into two sessions: a normal session and a dual-task session (Figure 2). The sessions were separated by 4 days, and each participant was randomly assigned to either the normal or the dual task as his/her first session.

Figure 1: Placement of the reflective markers, inertial sensors, and s-EMG.

Figure 2: Participants were assigned to the normal or dual-task session randomly, and the listed tests were conducted.

### 2.3.1. Normal Walking and Slip

After attaching the s-EMG electrodes and markers, participants were instructed to walk on the walkway for 15–20 minutes at their self-selected pace. Participants' gait data were acquired using the motion capture, IMU, forceplate, and EMG systems. The starting point during the walk was adjusted such that the nonslipping (nondominant) foot landed on the first forceplate and the dominant foot landed on the second. Participants were told in the session that they "may or may not slip" and that they should look forward while walking. Additionally, participants were unaware of the placement of the slippery surface. Once five walking trials with a complete foot fall on the forceplate were obtained, the slippery surface was introduced above the forceplate where the dominant foot was expected to strike.

### 2.3.2. Dual-Task Walking and Slip

This study used a clear and standardized cognitive task, serial subtraction [52, 53]. This session was similar to the normal walking session described above, except that participants counted backwards while walking.
The investigator stated a random number before the walking trial, and the participant had to subtract three from it repeatedly until he/she reached the other end of the walkway. The investigator corrected the participant if an error was made in counting backwards.

## 2.4. Data Processing

The kinematic and kinetic data from the normal and dual-task walking trials were filtered using a low-pass Butterworth filter with a cut-off frequency of 6 Hz. The EMG data were digitally bandpass filtered at 20–500 Hz, then rectified and low-pass filtered using a Butterworth filter with a 6 Hz cut-off frequency to create a linear envelope.
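As a minimal sketch (illustrative code, not the authors' implementation), this filtering pipeline can be written with zero-phase Butterworth filters, assuming the sampling rates from Section 2.2 (120 Hz kinematics, 1200 Hz EMG):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, cutoff_hz, fs, order=4):
    # Zero-phase low-pass Butterworth filter (filtfilt runs forward and backward).
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, x)

def emg_envelope(raw_emg, fs=1200.0):
    # Band-pass 20-500 Hz, full-wave rectify, then 6 Hz low-pass -> linear envelope.
    b, a = butter(4, [20 / (fs / 2), 500 / (fs / 2)], btype="band")
    band = filtfilt(b, a, raw_emg)
    return lowpass(np.abs(band), 6.0, fs)

# Kinematic marker trajectories (120 Hz) use the same 6 Hz low-pass, e.g.:
# heel_x_filtered = lowpass(heel_x, 6.0, fs=120.0)
```

`filtfilt` is used here so that the filtering introduces no phase lag, which matters when event timings (heel contact, slip start) are read off the filtered signals.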
Heel contact (HC) and toe-off (TO) events were identified from the ground reaction forces with a threshold set at 11 N (see Abbreviations). The analysis was performed over the stance phase (HC to TO) of the nonslipping foot.

### 2.4.1. Gait Variables

In this study, the gait variables assessed for the walking conditions were as follows. (i) Step length (SL): the distance travelled by the participant in one step, computed as the anterior-posterior distance between the ipsilateral and contralateral heel markers at one step. (ii) Step width (SW): the mediolateral distance travelled in one step, computed as the mediolateral distance between the feet during a step. (iii) Double-support time (DST): double support is the time when both feet are on the ground; in one stride there are two double-support intervals, each starting with the initial contact of one foot and ending at the toe-off of the other foot. (iv) Heel contact velocity (HCV): the instantaneous horizontal velocity of the heel at the moment of heel contact, where heel contact is defined as the instant at which the vertical force on the forceplate exceeds 11 N. After processing the heel marker data, HCV was extracted from the horizontal heel positions 1/120 s before and after heel contact:

(1) HCV = (X_{i+1} − X_{i−1}) / (2Δt),

where i is the frame index at the moment of heel contact, X_{i+1} and X_{i−1} are the horizontal heel positions at the frames 1/120 s after and before heel contact, respectively, and Δt = 1/120 s.

### 2.4.2. Slip Propensity Measures

(i) Required coefficient of friction (RCOF): the RCOF is the minimum coefficient of friction required between the shoe sole and the floor to prevent slipping. Thus, if the shoe-floor interface can supply the RCOF, walking proceeds without slipping, whereas if the RCOF exceeds the available friction between the shoe and the floor surface, a slip occurs.
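The central-difference estimate of HCV in equation (1) can be sketched as follows (an illustrative fragment, assuming a 120 Hz horizontal heel-position array and a known heel-contact frame index):

```python
def heel_contact_velocity(heel_x, hc_frame, fs=120.0):
    """Central-difference horizontal heel velocity at heel contact, per eq. (1).

    heel_x   : sequence of horizontal heel positions [mm], sampled at fs
    hc_frame : frame index i of heel contact (vertical force > 11 N)
    """
    dt = 1.0 / fs  # 1/120 s between frames
    return (heel_x[hc_frame + 1] - heel_x[hc_frame - 1]) / (2.0 * dt)

# Example: positions advancing 10 mm per frame at 120 Hz give roughly 1200 mm/s.
positions = [0.0, 10.0, 20.0, 30.0]
print(heel_contact_velocity(positions, hc_frame=1))
```

The central difference straddles the contact frame symmetrically, which avoids biasing the velocity estimate toward either the pre- or post-contact motion.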
The RCOF is defined as the ratio of the forward horizontal ground reaction force to the vertical ground reaction force, F_X/F_Z. (ii) Transverse coefficient of friction (TCOF): the TCOF was defined as the ratio of the lateral ground reaction force component to the vertical ground reaction force, F_Y/F_Z.

Trip propensity measure: (i) Toe clearance: toe clearance is a critical event during midswing, when the foot clears the ground with its minimum height above the surface.

### 2.4.3. Slip-Severity Parameters

(i) Initial slip distance (SDI): the initial slip distance begins after heel contact, when the first nonrearward positive acceleration of the foot is identified (Figure 3). The SDI is the distance travelled by the heel from this point of nonrearward positive acceleration (minimum velocity) to the time of the first peak in heel acceleration [51]:

(2) SDI = √((X₂ − X₁)² + (Y₂ − Y₁)²),

where (X₁, Y₁) and (X₂, Y₂) are the horizontal-plane heel positions at the start and end of the initial slip, respectively. (ii) Slip distance II (SDII): slip distance II begins where SDI ends. The slip stop for SDII is the point at which the first maximum in horizontal heel velocity occurs after the start of SDII. SDI and SDII are used as indices for comparing the severity of slips (Figure 3). (iii) Peak sliding heel velocity (PSHV): the maximum forward speed of the heel during slipping, calculated as the time derivative of the heel marker position during the slip.

Figure 3: (a) Horizontal heel velocity of the slipping foot. (b) Horizontal heel acceleration of the slipping foot.

(iv) Time2SDI: the time to reach midslip from slip start, that is, the time to cover SDI. (v) TimeSDI2SDII: the time taken from midslip to slip stop. (vi) TimeSDTotal: the time taken from slip start to slip stop.

## 2.5. Plantar Flexion Muscle Cocontraction

EMG activity was peak normalized within each subject using the ensemble average method over the complete gait cycle [54].
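A minimal sketch of this ensemble-average peak normalization, together with the cocontraction index it feeds (equation (3)), might look like the following; the function names and the 101-point cycle resampling are illustrative assumptions, not the authors' code:

```python
import numpy as np

def peak_normalize(envelope, cycles):
    """Peak-normalize an EMG linear envelope within a subject.

    cycles : list of (start, stop) frame indices, one per gait cycle; each
             cycle is resampled to a common length, the cycles are ensemble
             averaged, and the peak of that average is the normalization
             factor (ensemble average method [54]).
    """
    n = 101  # resample each cycle to 101 points (0-100% of the gait cycle)
    resampled = [np.interp(np.linspace(0, 1, n),
                           np.linspace(0, 1, stop - start),
                           envelope[start:stop]) for start, stop in cycles]
    peak = np.mean(resampled, axis=0).max()
    return envelope / peak

def cci(ta, gas):
    """Cocontraction index per eq. (3): (lower/higher) * (lower + higher)."""
    lower = np.minimum(ta, gas)
    higher = np.maximum(ta, gas)
    ratio = np.divide(lower, higher,
                      out=np.zeros_like(lower, dtype=float),
                      where=higher > 0)  # avoid division by zero when both are silent
    return ratio * (lower + higher)
```

Note that the CCI peaks at the sum of the two activities when the muscles are equally active (ratio = 1) and falls toward zero when one muscle dominates.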
Then, the cocontraction index (CCI) was calculated by the following equation [55]:

(3) CCI = (LowerEMGᵢ / HigherEMGᵢ) × (LowerEMGᵢ + HigherEMGᵢ),

where LowerEMGᵢ refers to the less active muscle at time i and HigherEMGᵢ refers to the more active muscle at time i. The ratio of the EMG activity of the tibialis anterior to the gastrocnemius was considered for this study (Figure 4); the ratio is multiplied by the sum of the activity of the two muscles. Cocontraction was defined as the event when bursts of muscle activity of the agonist and antagonist muscles overlapped for at least 5 ms [56]. The slip-severity parameters associated with the cocontraction indexes of the slipping foot (SF; the right foot in all trials) and the contralateral limb (nonslipping foot, NSF; the left foot in all slip trials) were as follows: (vii) SFMeanCCIvalue: the mean CCI in the slipping foot from slip start to slip stop. (viii) SFPeakCCIvalue: the peak CCI value from slip start to slip stop. (ix) SFTime2PeakCCIfromNSFHC: the time to generate peak ankle cocontraction, measured from the heel contact of the unperturbed foot. (x) NSFMeanCCIvalue: the mean CCI in the nonslipping foot from slip start to slip stop. (xi) NSFStanceTime: the single-stance duration of the nonslipping foot immediately before the perturbation event.

Figure 4: (a) Ankle cocontraction values in the slipping foot. (b) Ankle cocontraction values in the nonslipping foot.

## 2.6. Mini-Mental State Examination (MMSE)

The MMSE examines multiple areas of cognition. The highest possible score is 30; a score of less than 24 denotes cognitive impairment. Mild cognitive impairment is reflected in scores of 18 to 23, moderate cognitive impairment is suggested by scores of 10 to 17, and severe cognitive impairment is denoted by scores of less than 10 [57].

## 2.7. Statistical Design

There were two independent variables: age group (young versus old) and condition (normal versus dual task).
A mixed-factor multivariate analysis of variance (MANOVA) was conducted with age group as a between-subjects factor and condition (dual-task versus normal) as a within-subject factor. Using the Wilks' Lambda test, the MANOVA determined which factors had significant effects on the multiple dependent variables as a whole (i.e., gait parameters, muscle cocontraction, and slip parameters). Following the MANOVA, univariate ANOVAs (mixed-factor design) were conducted separately for each dependent variable. All statistical analyses were conducted using JMP (Pro 10.0.2, SAS Institute Inc.) with a significance level of α = 0.05 for all tests. All dependent variables were evaluated for normality (using the Shapiro-Wilk W test) and by residual analysis. The results did not indicate any violation of the normality assumptions.

## 3. Results

### 3.1. Gait Changes due to Dual-Task Performance

The results indicated that both age groups (young/old) were affected by the dual task: step length decreased significantly (df = 1, p = 0.0046), while double-support time (DST) (df = 1, p = 0.0048) and mean single-stance time (SST) (df = 1, p = 0.013) increased in both young and elderly subjects. RCOF and TCOF values decreased slightly under dual-tasking in both younger and older individuals, but the effects were not statistically significant. Older adults also showed higher linear variability in some gait variables, as measured by the standard deviation and coefficient of variation of step width, HCV, DST, SST, and gait cycle time under dual-tasking (Tables 2(a) and 2(b)).

Table 2: (a) Dual-task changes in gait parameters. (b) General gait parameters for the younger and older populations in normal and dual-task walking.
(a)

| Parameter | Dual-task walk, mean | SD | Normal walk, mean | SD | p value |
|---|---|---|---|---|---|
| Step length [mm]* | 703.79 | 39.25 | 750.89 | 47.87 | 0.004 |
| Step width [mm] | 119.10 | 27.67 | 115.92 | 25.45 | 0.69 |
| HCV [mm/s] | 1191.20 | 765.06 | 1029.41 | 193.01 | 0.55 |
| GCT [s] | 1.12 | 0.06 | 1.08 | 0.06 | 0.075 |
| DST [s]* | 0.26 | 0.03 | 0.24 | 0.03 | 0.004 |
| Gait speed [m/s] | 1.11 | 0.16 | 1.17 | 0.14 | 0.06 |
| SST [s]* | 0.70 | 0.04 | 0.66 | 0.05 | 0.013 |
| Step time [s] | 0.55 | 0.04 | 0.53 | 0.04 | 0.067 |
| Swing time [s] | 0.43 | 0.03 | 0.42 | 0.02 | 0.54 |
| Toe clearance [mm] | 16.40 | 8.29 | 16.29 | 7.62 | 0.96 |
| RCOF | 0.19 | 0.03 | 0.20 | 0.03 | 0.08 |
| TCOF | 0.07 | 0.02 | 0.07 | 0.01 | 0.90 |

(b)

| Parameter | Old DTW mean | SD | CV | Old NW mean | SD | CV | Young DTW mean | SD | CV | Young NW mean | SD | CV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Step length [mm] | 702.92 | 48.75 | 6.94 | 739.10 | 57.89 | 7.83 | 704.26 | 35.36 | 5.02 | 757.24 | 42.76 | 5.65 |
| Step width [mm] | 117.89 | 21.29 | 18.06 | 113.16 | 20.25 | 17.89 | 119.74 | 31.37 | 26.20 | 117.41 | 28.53 | 24.30 |
| HCV [mm/s] | 993.76 | 516.51 | 51.98 | 1048.44 | 195.34 | 18.63 | 1297.52 | 870.85 | 67.12 | 1019.16 | 198.94 | 19.52 |
| GCT [s] | 1.11 | 0.07 | 6.47 | 1.08 | 0.07 | 6.17 | 1.13 | 0.06 | 5.12 | 1.08 | 0.06 | 5.88 |
| DST [s] | 0.27 | 0.03 | 10.60 | 0.22 | 0.02 | 9.48 | 0.26 | 0.03 | 11.84 | 0.24 | 0.03 | 11.99 |
| SST [s] | 0.68 | 0.05 | 7.11 | 0.65 | 0.04 | 5.72 | 0.71 | 0.04 | 5.85 | 0.66 | 0.05 | 7.62 |
| Step time [s] | 0.55 | 0.04 | 8.09 | 0.53 | 0.04 | 7.80 | 0.56 | 0.03 | 6.22 | 0.53 | 0.03 | 6.59 |
| Swing time [s] | 0.42 | 0.02 | 5.71 | 0.43 | 0.03 | 5.90 | 0.44 | 0.03 | 6.60 | 0.42 | 0.02 | 5.54 |
| Toe clearance [mm] | 18.66 | 12.43 | 66.62 | 19.58 | 5.94 | 30.31 | 15.18 | 5.19 | 34.18 | 14.52 | 8.03 | 55.32 |
| RCOF | 0.17 | 0.02 | 12.92 | 0.19 | 0.02 | 9.05 | 0.19 | 0.03 | 14.25 | 0.20 | 0.03 | 15.24 |
| TCOF | 0.07 | 0.01 | 17.59 | 0.07 | 0.01 | 17.57 | 0.07 | 0.02 | 23.70 | 0.08 | 0.01 | 19.43 |
| Gait speed [m/s] | 1.08 | 0.19 | 17.58 | 1.17 | 0.16 | 13.68 | 1.15 | 0.14 | 12.17 | 1.18 | 0.12 | 10.17 |

*p < 0.05.

### 3.2. Effects of Dual-Tasking-Induced Changes in Slip Characteristics

The variable SFTime2PeakCCIFromNSFHC showed that the elderly participants generated peak ankle muscle cocontractions in half of the time taken by the younger adults (p < 0.001) (Table 3(a)).
Along the same line, the NSFMeanCCI results show that the mean coactivity in the nonslipping foot from slip start to slip stop was significantly higher in the older adults.

Table 3: (a) Normal slip parameters in young and older individuals. (b) Slip parameters for normal and dual-task conditions. (c) Slip parameters in young and older individuals under normal and dual-task walking conditions.

(a)

| Parameter | Old mean | SD | CV | Young mean | SD | CV |
|---|---|---|---|---|---|---|
| Time2SDI [s] | 0.046 | 0.014 | 31.492 | 0.051 | 0.008 | 16.180 |
| TimeSDI2SDII [s] | 0.069 | 0.013 | 18.182 | 0.081 | 0.032 | 39.817 |
| TimeSDTotal [s] | 0.115 | 0.024 | 20.889 | 0.132 | 0.034 | 25.740 |
| SDI [mm] | 38.867 | 23.605 | 60.733 | 33.063 | 13.556 | 41.001 |
| SDII [mm] | 116.722 | 43.147 | 36.966 | 154.175 | 88.275 | 57.256 |
| SDTotal [mm] | 154.713 | 64.122 | 41.446 | 186.863 | 94.645 | 50.650 |
| SFMeanCCIvalue | 0.081 | 0.043 | 53.626 | 0.093 | 0.111 | 119.722 |
| SFPeakCCIvalue | 0.938 | 0.809 | 86.261 | 0.828 | 0.396 | 47.874 |
| SFTime2PeakCCIFromNSFHC* [s] | 0.383 | 0.412 | 107.398 | 0.752 | 0.081 | 10.722 |
| NSFMeanCCIvalue* | 0.052 | 0.030 | 57.476 | 0.022 | 0.013 | 60.464 |
| PSHV [mm/s] | 1033.688 | 404.843 | 39.165 | 1062.313 | 321.351 | 30.250 |
| NSFStanceTime [s] | 0.646 | 0.060 | 9.215 | 0.649 | 0.056 | 8.612 |

(b)

| Parameter | Dual-task slip, mean | SD | Normal walk slip, mean | SD | p value |
|---|---|---|---|---|---|
| Time2SDI [s]* | 0.07 | 0.00 | 0.05 | 0.01 | 0.047 |
| TimeSDI2SDII [s] | 0.08 | 0.01 | 0.08 | 0.03 | 1 |
| TimeSDTotal [s] | 0.14 | 0.01 | 0.13 | 0.03 | 0.46 |
| SDI [mm] | 17.65 | 3.89 | 35.00 | 16.65 | 0.19 |
| SDII [mm] | 87.09 | 20.68 | 141.69 | 76.20 | 0.43 |
| SDTotal [mm] | 104.20 | 16.38 | 176.15 | 84.10 | 0.34 |
| SFMeanCCIvalue | 0.06 | 0.05 | 0.09 | 0.09 | 0.69 |
| SFPeakCCIvalue | 0.36 | 0.01 | 0.86 | 0.53 | 0.24 |
| SFTime2PeakCCIFromNSFHC [s] | 0.87 | 0.07 | 0.63 | 0.29 | 0.13 |
| NSFMeanCCIvalue | 0.03 | 0.03 | 0.03 | 0.02 | 0.90 |
| PSHV [mm/s] | 699.23 | 138.99 | 1052.77 | 332.59 | 0.22 |
| NSFStanceTime [s]* | 0.75 | 0.02 | 0.65 | 0.05 | 0.03 |

(c)

| Parameter | Old DTS, mean | Old NS, mean | Young DTS, mean | Young NS, mean |
|---|---|---|---|---|
| Time2SDI [s]* | 0.067 | 0.046 | 0.067 | 0.051 |
| TimeSDI2SDII [s] | 0.083 | 0.069 | 0.067 | 0.081 |
| TimeSDTotal [s] | 0.150 | 0.115 | 0.133 | 0.132 |
| SDI [mm] | 14.903 | 38.867 | 20.406 | 33.063 |
| SDII [mm] | 101.71 | 116.722 | 72.463 | 154.175 |
| SDTotal [mm] | 115.78 | 154.713 | 92.622 | 186.863 |
| SFMeanCCIvalue | 0.093 | 0.081 | 0.021 | 0.093 |
| SFPeakCCIvalue | 0.368 | 0.938 | 0.355 | 0.828 |
| SFTime2PeakCCIFromNSFHC [s] | 0.917 | 0.383 | 0.817 | 0.752 |
| NSFMeanCCIvalue | 0.011 | 0.052 | 0.059 | 0.022 |
| PSHV [mm/s] | 797.51 | 1033.688 | 600.95 | 1062.313 |
| NSFStanceTime [s]* | 0.767 | 0.646 | 0.742 | 0.649 |

*p < 0.05.

Time2SDI increased significantly in the dual-task walking trials (p = 0.04), although there were no significant differences in SDI (Table 3(b)). An interaction effect between the two independent variables, age group and slip condition (normal versus dual task), was seen for the NSF mean CCI value (p = 0.02). The dual task also increased the nonslipping-foot stance time (p = 0.03) in both young and elderly participants compared with the normal-walk slip condition (Table 3(c)). The MMSE score ranged from 28 to 30 for the older participants, whereas all younger participants scored 30.

## 4. Discussion

This study examined the effects of a dual task on older adults and established a relationship between dual-task adaptations in gait and the associated slip and fall risk. The major finding was that the dual-task paradigm influenced slip-initiation characteristics by modulating gait toward a "safer," more "cautious" pattern. Dual-task-related gait changes are associated with intrinsic (health-related) risk factors for falls. As no frail individuals took part in this study, we found that healthy young and older individuals adapted to dual-task scenarios by shifting to a more "cautious" gait, evidenced by a decrease in step length and heel contact velocity and an increase in step width and single- and double-support time. The results suggest that the attentional capacity limit of healthy young and old adults is perhaps exceeded during dual-task walking but that this did not result in instability or increased fall risk.
Collectively, the study findings argue in favor of a critical gait behavior: preferred-speed walking in healthy human beings requires less allocation of attentional resources for safe transitioning. These findings support previous investigations. (1) In seminal work by Lajoie and coworkers, reaction times measured while participants were in the single-support phase were significantly longer than those in the double-support phase, suggesting that attentional demands increase with the balance requirements of the task [58]; thus, attentional demands vary within a gait cycle. Dual-task walking resulted in longer double-stance times, so it can be inferred that healthy young and old adults adapt their gait to reduce attentional demand. (2) Previous research has also reported that dual-task interference results in slower gait speeds [13, 15, 16], reduced cadence [16, 17], shorter stride length [15, 16], increased stride duration [16], and longer double-support time [13, 18]. The cautious gait pattern adopted by healthy adults during dual-tasking, characterized by reduced speed, shorter step length, and increased step width, is likely a consequence of adaptations that minimize perturbations to the body and reduce the risk of falls [33] when the attentional resources available for walking are reduced. We also found that heel contact velocity and the required coefficient of friction decreased slightly, though not significantly, during dual-tasking. This indicates that several mechanisms contribute to reducing fall risk and shifting body movements to a cautious gait mode when fewer attentional resources are available for gait. From an information-processing viewpoint, walking thus carries appreciable attentional demands and is not an automated task requiring no cognitive processing [58].
Overall, the analysis suggests that gait in healthy adults was affected by concurrent cognitive tasks, and the evidence is sufficiently robust to support the notion of cautious gait. Even in healthy individuals, age-related changes have been reported in the cognitive and motor systems; thus, aging may be associated with higher cognitive-motor interference [59, 60]. We believe that the observed dual-task changes are compensatory mechanisms that stabilize locomotion and keep it safe when less attention is available. Intuitively, the cautious gait adaptations observed in healthy younger and older adults under the dual-task paradigm raise an intriguing question: why do humans compromise this margin of safety when walking at their preferred speed without a secondary task? The findings of this study indicate that dual-task-related changes in gait do not predispose healthy young and older adults to falls. Healthy people walk at a preferred speed, step length, and cadence selected to optimize the stability of their gait pattern [27, 61]; this has been addressed in several studies in the context of spatial variability [47, 62, 63] and temporal variability [64]. Shorter steps and longer double-support times have been reported to be associated with smaller sensorimotor and frontoparietal brain regions, whereas cognitive processing speed is linked to individual differences in gait [65]. Accordingly, Sekiya et al. [47] suggested an optimal method of walking based on criteria of energy efficiency, temporal and spatial variability, and attention (Figure 5).

Figure 5: Interrelationship of movement variability with attention and energy expenditure while performing a task.

During dual-task walking, fewer attentional resources are allocated to gait; thus, energy expenditure and the variability of kinematic parameters are compromised.
The present study found that older adults compensated for the diminished attentional investment through altered variability in selected gait parameters, including the standard deviation and coefficient of variation of step width, HCV, DST, SST, and gait cycle time. Future studies of energy expenditure under a dual-task criterion would therefore further strengthen the understanding of the relationship between energy expenditure, variability, and attention. This study also evaluated the effects of the dual-task paradigm on slip severity and on linear measures of variability changes in two healthy age groups. The results suggest that single stance duration increased in dual-task walking trials, which may reflect a congruent adaptation in both young and elderly individuals to maintain “stable gait.” During the stance phase of a gait cycle, proprioceptive input from extensor muscles and mechanoreceptors in the sole of the foot provides loading information [66] to the central nervous system. Thus, the increased stance duration increases foot-loading information through afferent sensory and proprioceptive mechanoreceptors, such as Golgi tendon organs, muscle spindles, and joint receptors, and may facilitate motor control of the lower extremity during walking [67]. Additionally, the dual-task condition significantly shortened step length. This may reflect modulation of the self-selected pace so that participants could continue counting rhythmically, a need for longer (single and double stance) and more frequent (shorter steps) proprioceptive input under dual-task conditions [68], or a change in the motor control schema toward compensatory strategies that increase stability while the secondary task is given primary importance and walking remains innately automatic to some extent. Furthermore, dual-task trials did not significantly affect heel contact velocity (HCV) but slightly decreased RCOF and TCOF in walking trials, although these effects were not
statistically significant. Considering that HCV is a kinematic gait parameter that can drastically alter friction demand (by changing the required coefficient of friction) [51] and influence the likelihood of slip-induced falls [69–71], dual-task conditions may ultimately decrease slip-induced fall risk. Likewise, since dual-task events had no deleterious effect on toe clearance, it can be inferred that trip risk was not increased either. Through the parameter SF Time2PeakCCIFromLHC, the results suggest that elderly people generated peak ankle muscle cocontraction in half the time taken by younger adults; that is, they may be quicker to introduce ankle muscle coactivity in the slipping foot. Such coactivity during slipping limits the ankle joint’s degrees of freedom, reducing the motor control adaptability required to recover balance after a slip. This, in turn, affirms that the older age group participating in this study was nonfrail; it should be noted that the recruited population was healthy with intact cognitive (executive) function, with Mini-Mental State Examination scores above 28 for all. The NSF Mean CCI value showed that mean coactivity in the nonslipping foot from slip start to slip stop was significantly higher in older adults. This phenomenon likewise lowers the degrees of freedom in the nonslipping foot, and the reduction in degrees of freedom in both feet may influence slip severity among older adults. Although greater SDI was observed in the older population, in accordance with previous studies [48, 51], the difference was not significant for the current population.
It was found that Time2SDI was significantly increased in the dual-task paradigm; thus, even when elderly individuals had a higher SDI, they traversed it with lower heel velocities, indicating that slower movement of the heel from slip start to midslip is an effect of dual-tasking. This could also be partially explained by the higher transitional acceleration of the center of mass in these individuals. Interaction effects were seen for the NSF Mean CCI value across the two independent variables, age group and slipping condition (normal versus dual task). This is interesting because older people showed lower nonslipping-foot ankle coactivation (i.e., reduced stiffness) during dual-task slips than during normal slips, whereas younger individuals showed higher coactivity in the ankle muscles of the nonslipping foot during dual-task activity. This may reflect age-related differences in the attentional resources engaged by the dual task; perhaps this dual task (counting backwards by threes) was not challenging enough to engage substantial attentional resources in younger participants. In sum, this study investigated the effects of attentional interference (induced by a dual task) on gait variability and associated fall risk, particularly to answer the following: (a) what is the effect of a dual task on spatiotemporal gait parameters? (b) does a dual task deteriorate gait or modify it into an unsafe pattern that predisposes to falls? The findings suggest that everyday walking tasks with increased attentional demands will certainly reduce the resources available for secondary tasks. However, the slow speed, wider step width, and longer double-support time adopted by participants [27] may serve to produce a safer and more stable gait [72, 73] and an energy-efficient speed of progression [72, 74], or perhaps to maintain a certain amount of variability in gait kinematics.
A cautious gait is typically marked by moderate slowing, reduced stride length, and mild widening of the base of support as characterized by step width [75]. It is also possible that the kinematic adaptations adopted serve to reduce the cognitive demand necessary to control the continuous disequilibrium inherent to walking [58, 76].

## 5. Limitations

The strength of the conclusions of this study must be tempered by its limitations. Although all participants performed a similar kind of dual task while walking, an individual’s mathematical background or day-to-day use of arithmetic was a confounding variable; therefore, the dual task may not have required equivalent attentional demand from every subject. The toe clearance values found in this study are higher than usual because the reflective toe marker was positioned over laboratory shoes; they are therefore offset and cannot be compared with toe clearance values in the existing literature.

## 6. Conclusion

Overall, the current research has contributed knowledge about slip risk in healthy young and older adults and the effects of a dual-task paradigm on slip initiation characteristics and slip severity. The results suggest that a dual task elicits a “cautious gait mode” (CGM), an innate adaptive response to counter reduced attention while walking. Attentional resources are appropriated by the concurrent cognitive task (e.g., counting backwards); the healthy human response is therefore to adopt a cautious gait mode, including a shorter step length and longer stance duration, acquiring more proprioceptive information from the ground while using fewer attentional resources. The CGM response is innate in healthy human beings, but frail elderly persons, who require considerable attention to perform relatively perfunctory gait and postural movements, may find it challenging to maintain stability.
---
*Source: 1014784-2017-01-31.xml*

---

# Dual-Task Does Not Increase Slip and Fall Risk in Healthy Young and Older Adults during Walking

**Authors:** Rahul Soangra; Thurmon E. Lockhart
**Journal:** Applied Bionics and Biomechanics (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1014784
--- ## Abstract Dual-task tests can identify gait characteristics peculiar to fallers and nonfallers. Understanding the relationship between gait performance and dual-task related cognitive-motor interference is important for fall prevention. Dual-task adapted changes in gait instability/variability can adversely affect fall risks. Although implicated, it is unclear if healthy participants’ fall risks are modified by dual-task walking conditions. Seven healthy young and seven healthy older adults were randomly assigned to normal walking and dual-task walking sessions with a slip perturbation. In the dual-task session, the participants walked and simultaneously counted backwards from a randomly provided number. The results indicate that the gait changes in dual-task walking have no destabilizing effect on gait and slip responses in healthy individuals. We also found that, during dual-tasking, healthy individuals adopted cautious gait mode (CGM) strategy that is characterized by reduced walking speed, shorter step length, increased step width, and reduced heel contact velocity and is likely to be an adaptation to minimize attentional demand and decrease slip and fall risk during limited available attentional resources. Exploring interactions between gait variability and cognitive functions while walking may lead to designing appropriate fall interventions among healthy and patient population with fall risk. --- ## Body ## 1. Introduction Slip-induced fall accidents account for 87% of hip fractures and are associated with considerable medical cost and human suffering in older adults [1, 2]. Often, such fractures result in immobility [3] and admissions to skilled nursing facility and sometimes lead to death within one year [4]. Walking is a somewhat complex task associated with higher level cognitive processing such as estimation, planning, and real-time adjustments; specifically, executive function is involved. 
During walking, numerous sensory inputs (visual, proprioceptive, and vestibular), conscious inputs, and competing objectives (e.g., upright posture versus locomotion) are seamlessly integrated across hierarchical systems, with subtle real-time decisions and adjustments made using cognitive capabilities [5]. In essence, gait performance is affected by the simultaneous performance of dual tasks [6–11]. The dual-task paradigm is commonly used to assess multitasking capabilities. It is presumed that multitasking is influenced by age-related changes in attentional capacity [12] and by a reduced ability to share processing resources between two concurrent tasks [13]. O’Shea and colleagues [14] suggested that the detrimental effect of a physical task performed in the presence of a competing attentional task supports a “capacity sharing model” of dual-tasking. According to this model, performing two attention-demanding tasks degrades the performance of one or both tasks when the attentional capacity limit is exceeded [14]. Previous research has demonstrated that dual-task interference results in slower gait speeds [13, 15, 16], reduced cadence [16, 17], shorter stride length [15, 16], increased stride duration [16], and longer double-support time [13, 18]. Dual-task performance can thus potentially identify gait characteristics peculiar to fallers and nonfallers [19]. The influence of attention on gait stability has been studied in numerous patient populations, and results consistently show decreased gait velocity and increased gait variability in dual-task conditions [11, 20–23]. Notably, persons with a history of falls show more significant gait changes while performing a dual task than nonfallers [9, 24–26]. Nevertheless, the association between dual tasks and falls continues to be debated, and there is limited knowledge about how dual-tasking influences slip characteristics.
Some studies concluded that worsening of gait performance under dual-tasking is predictive of falls [6, 27, 28], while others failed to establish any relationship [29, 30]. One study reported that dual-task related gait changes did not provide any additional information beyond performance under single-task conditions [31]. These discrepancies may be due to several confounds, such as age [7, 9], comorbidities [10, 27], and the kind of attention-demanding task used [11, 32]. Some findings corroborate previous investigations: poorer ability to perform a basic mobility task while carrying a cup of water [6] and the cessation of walking when engaged in conversation [27] are both associated with a fourfold risk of falling. Lundin-Olsson et al. reported that a dual motor task can differentiate fall-prone frail elderly from healthy older adults [27]. Fall risk is independent of gait speed but is modulated instead by gait variability [33], and previous studies demonstrate that gait speed and gait unsteadiness may be dissociated [33–35]. Healthy older adults walk with the same small amount of variability as healthy young adults, even though they walk more slowly [36]. However, Springer et al. [37] reported that gait variability increased in older fallers but not in young adults or older nonfallers; they concluded that healthy young adults and nonfallers maintain a stable gait during dual-task walking and that there is no evidence of detrimental effects of dual-task activities on gait variability associated with aging.
In essence, the dual-task paradigm is considered more sensitive for identifying fall risk since it widens the gap between fallers and nonfallers [37–39]. To stabilize themselves, healthy people have been found to decrease their gait speed [37]; accordingly, elderly nonfallers were also found to decrease their swing times and gait speed [37]. This dual-task related decline in walking speed is interpreted as an implicit strategy to avoid loss of balance [8]; the reduction of gait speed across groups represents a coping mechanism for handling the attention-demanding challenge of the dual-task activity. The decrease in gait velocity during dual-task walking is undebated and consistent across most studies [20, 21, 23, 37, 40]. Although a dual-task related decline in walking speed is not specific to increased risk of falling, an increase in stride time variability is closely associated with the occurrence of falls [33, 41]. There is an association between low stride time variability and efficient executive function in healthy older adults [5], and between high stride time variability and impaired executive function in demented older adults [23]. Low stride time variability in healthy older adults is associated with minor involvement of attention in the control of the rhythmic stepping mechanism [11]. Previous research has shown that dual-task related gait changes consist of increases in the number of stops, lateral deviations, steps, and walking time [6, 7, 11] and increases in stride width, stride length, and stride time variability [7, 42]. Intrasubject variability of kinematic variables is an index of movement consistency, or the stability of gait performance. A negative correlation has been found between variability in step width and the balance performance of elderly women [43], along with increased variability in step length for hospitalized fallers compared with nonfallers [44]. Gabell and Nayak [45], however, could not find any effect of age on variability in step length and step width while walking.
Maruyama and Nagasaki [46] reported that temporal variability in stance phase durations within the gait cycle was a decreasing function of speed. Increasing the walking speed produced a linear increase in step width variability, in contrast to step length variability, in healthy adults [47]. Gabell and Nayak [45] suggested that variability in step length is determined predominantly by the gait patterning mechanism, whereas step width variability is largely determined by the balance control mechanism. Similarly, Heitmann et al. found a negative correlation between balance performance and variability in step width, but not between balance performance and step length.

### 1.1. Objective

Performance of a secondary task, that is, a dual task, affects certain aspects of gait, but the relationship between gait variability, dual-tasking, and slip and fall risk is not well understood. This study was conducted to better understand the motor control of gait and the relationship between an individual’s motor variability and fall risk under dual-task walking conditions. Exploring dual-task related gait changes is of particular interest in understanding variability because a strong relationship exists between dual-task related gait changes and the risk of falling in older adults [6, 28, 29]. The primary objective of this study was to investigate the relationship between dual-tasking and slip-induced fall risk. To our knowledge, no previous study has examined the effects of dual-tasking on slip and fall risk. This study involves two groups (young and old individuals) with known differences in slip and fall risk [48]. It was hypothesized that dual-tasking while walking would affect gait characteristics, might increase slip initiation characteristics in elderly individuals, and would negatively influence slip-induced risk. It was also hypothesized that friction demand and trip risk, measured using toe clearance, would differ significantly between normal walking and dual-task walking.
## 2. Methods

### 2.1. Participants

The sample size was estimated using a power analysis based on the results of a published study, focusing on sample sizes large enough to detect differences in velocity during normal walking. Palombaro et al. determined that the minimal clinically important difference (MCID) for habitual gait speed is 0.10 m/s, with a standard deviation of measurement of 0.10 m/s [49]. Means and standard deviations of velocity were therefore used to compute the required sample size (using JMP, version 7, SAS Institute Inc., Cary, NC, 1989–2007).
The required sample size for detecting significant differences in velocity, given α = 0.05, power = 0.80, and a small effect size (Cohen’s d) of 0.2 [50], was determined as n = 7 per group. Seven young and seven old participants were recruited for this study. The younger participants were college students from the Virginia Tech campus, and the older adults were retired people in the Blacksburg area. The recruited participants were in generally good health, with no recent cardiovascular, respiratory, neurological, or musculoskeletal abnormalities; only one of the elderly participants (O02) suffered from chronic obstructive pulmonary disease (COPD). All participants were fully ambulatory without the use of any assistive devices, were able to rise from a chair without assistance, and were free of orthopedic injury. This study was approved by the Institutional Review Board (IRB) of Virginia Tech, and all participants provided written consent prior to the beginning of data collection. Demographic information for the participants is provided in Table 1.

Table 1: Background characteristics of study participants.

| | Old, Mean | Old, SD | Young, Mean | Young, SD |
|---|---|---|---|---|
| Age [years] | 71.14 | 6.51 | 22.64 | 2.56 |
| Height [cm] | 174.57 | 10.24 | 170.37 | 9.33 |
| Weight [kg] | 78.55 | 18.25 | 69.65 | 15.52 |
| BMI | 25.52 | 4.27 | 23.78 | 4.00 |

### 2.2. Instrumentation

The experiments were conducted on a 15-meter linear walking track embedded with two force plates (BERTEC #K80102, Type 45550-08, Bertec Corporation, OH 43212, USA, and AMTI BP400600 SN6780, Advanced Mechanical Technology Inc., Watertown, MA 02472, USA). A six-camera ProReflex system (Qualisys, Gothenburg, Sweden) was used to collect three-dimensional kinematics of posture and gait. Kinematic data were sampled and recorded at 120 Hz. Ground reaction forces of participants walking over the test surfaces were measured using the two force plates and sampled at 1200 Hz.
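As context for the sample-size calculation in Section 2.1, the general form of such a computation can be sketched with a textbook normal-approximation formula for a two-group comparison of means. This is an illustration only, not a reproduction of the study's JMP computation (the resulting n depends heavily on which test, effect size, and corrections the software assumes), and the function name `two_sample_n` is ours.

```python
import math
from scipy.stats import norm

def two_sample_n(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for detecting a mean difference `delta` between two
    groups with common standard deviation `sd` (normal approximation)."""
    d = delta / sd                   # standardized effect size (Cohen's d)
    z_a = norm.ppf(1 - alpha / 2)    # two-sided critical value
    z_b = norm.ppf(power)            # quantile for the desired power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

# MCID of 0.10 m/s with a measurement SD of 0.10 m/s gives d = 1.0
n = two_sample_n(0.10, 0.10)
```

Under this simplified formula the d = 1.0 case yields 16 per group; smaller assumed effect sizes increase the required n sharply, so the exact figure reported by any given software depends on its assumptions.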
A sixteen-channel surface electromyography (s-EMG) DTS Telemyo system (Noraxon, 15770 N Greenway-Hayden Loop, #100, Scottsdale, AZ, USA) was used to record the temporal activation of two ankle muscles (gastrocnemius and tibialis anterior) in both lower extremities during walking.

### 2.3. Experimental Protocol

All participants were first familiarized with the laboratory equipment and given a verbal explanation of the experimental procedure. Participants were asked to wear laboratory clothes and shoes fitted to their sizes. Height and weight were recorded under the ID numbers assigned to each subject. Surface electromyogram (s-EMG) electrodes were affixed after asking participants to plantarflex and dorsiflex the ankle to locate the gastrocnemius and tibialis anterior muscles. Twenty-six reflective markers were attached to bony landmarks of the body: the head, both ears, both acromioclavicular joints, acromions, humeral condyles, ulnar styloids, knuckles, right and left anterior superior iliac spines (ASIS), greater trochanters, medial and lateral condyles of both limbs, malleoli (medial and lateral), and the heel and toes of both feet (shown in Figure 1). The marker configuration was similar to that defined by Lockhart et al. (2003) and was used to derive the whole-body center-of-mass biomechanical model [51]. Kinetic data were acquired using the two forceplates, positioned such that two consecutive steps would fall on them. The slippery surface (on top of the second forceplate) was covered with a 1 : 1 water and jelly mixture to reduce the coefficient of friction (COF) of the floor surface (dynamic COF ~ 0.12). Participants were kept unaware of the position of this surface, as the embedded forceplates were covered with the same vinyl texture as the walkway. This is a well-standardized protocol used in several previous slip and fall studies [48, 51]. The experiment was divided into two sessions: a normal session and a dual-task session (Figure 2).
The sessions were separated by 4 days, and each participant was randomly assigned either the normal or the dual-task session first.

Figure 1: Placement of the reflective markers, inertial sensors, and s-EMG.

Figure 2: Participants were assigned to the normal or dual-task session randomly, and the listed tests were conducted.

#### 2.3.1. Normal Walking and Slip

After attaching the s-EMG electrodes and markers, participants were instructed to walk on the walkway for 15–20 minutes at their self-selected pace. Gait data were acquired using motion capture, IMUs, forceplates, and the EMG system. The starting point of each walk was adjusted so that the nonslipping (nondominant) foot landed on the first forceplate and the dominant foot landed on the second. Participants were told that they “may or may not slip” and were asked to look forward while walking; they remained unaware of the placement of the slippery surface. Once five walking trials with a complete footfall on the forceplate were obtained, the slippery surface was introduced above the forceplate where the dominant foot was expected to strike.

#### 2.3.2. Dual-Task Walking and Slip

This study used a clear and standardized cognitive task, serial subtraction [52, 53]. This session was similar to the normal walking session described above, except that participants counted backwards while walking. The investigator announced a random number before the walking trial, and the participant had to repeatedly subtract three from it until reaching the other end of the walkway. The investigator corrected the participant if an error was made in counting backwards.

### 2.4. Data Processing

Normal and dual-task walking trials provided kinematic and kinetic data that were filtered using a low-pass Butterworth filter with a cut-off frequency of 6 Hz. The EMG data were digitally bandpass filtered at 20–500 Hz.
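The two filtering stages just described can be sketched with SciPy. The filter order (4th) and the EMG sampling rate (1500 Hz) are assumptions, since the paper states only the cut-off frequencies and the 120 Hz / 1200 Hz kinematic and kinetic rates; `filtfilt` is used so that filtering is zero-phase.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_KIN = 120.0    # motion-capture sampling rate (Hz), from Section 2.2
FS_EMG = 1500.0   # EMG sampling rate (Hz); assumed, not stated in the paper

# 6 Hz low-pass Butterworth for kinematic/kinetic data (order assumed: 4th)
b_lp, a_lp = butter(4, 6.0, btype="low", fs=FS_KIN)

# 20-500 Hz band-pass Butterworth for raw EMG
b_bp, a_bp = butter(4, [20.0, 500.0], btype="band", fs=FS_EMG)

def smooth_kinematics(signal):
    """Zero-phase 6 Hz low-pass, as applied to marker and force data."""
    return filtfilt(b_lp, a_lp, signal)

def bandpass_emg(raw_emg):
    """Zero-phase 20-500 Hz band-pass, as applied to raw EMG."""
    return filtfilt(b_bp, a_bp, raw_emg)
```

Rectification and the 6 Hz linear-envelope smoothing of the EMG, described next in the text, would follow the band-pass stage.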
The EMG signals were then rectified and low-pass filtered using a Butterworth filter with a 6 Hz cut-off frequency to create a linear envelope. Heel contact (HC) and toe-off (TO) events were identified from the ground reaction forces with a threshold of 11 N (see Abbreviations). The analysis was performed over the stance phase (HC to TO) of the nonslipping foot.

#### 2.4.1. Gait Variables

In this study, the gait variables assessed for the walking conditions were as follows:

(i) Step length (SL): the distance travelled by the participant in one step, computed as the anterior-posterior distance between the ipsilateral and contralateral heel markers at one step.

(ii) Step width (SW): the mediolateral distance travelled in one step, computed as the mediolateral distance between the feet during a step.

(iii) Double-support time (DST): double support is the time when both feet are on the ground; in one stride there are two double-support intervals. Each starts with the initial contact of one foot and lasts until the toe-off of the other foot.

(iv) Heel contact velocity (HCV): the instantaneous horizontal velocity of the heel at the moment of heel contact, where heel contact is defined as the instant at which the vertical force on the forceplate exceeds 11 N. After processing the heel marker data, HCV was extracted from the horizontal heel positions 1/120 s before and after heel contact:

$$\text{HCV} = \frac{X_{i+1} - X_{i-1}}{2\,\Delta t}, \tag{1}$$

where $i$ is the frame index at the moment of heel contact, $X_{i+1}$ and $X_{i-1}$ are the horizontal heel positions at the frames 1/120 s after and before the instant of heel contact, respectively, and $\Delta t = 1/120$ s.

#### 2.4.2. Slip Propensity Measures

(i) Required coefficient of friction (RCOF): the RCOF is the minimum coefficient of friction required between the shoe sole and floor interface to prevent slipping.
Thus, if the floor surface and shoe tribology can meet the RCOF, walking is possible, whereas if the RCOF is greater than the available friction between the shoe and floor surface, a slip occurs. The RCOF is defined as the ratio of the forward horizontal ground reaction force to the vertical ground reaction force, $F_X/F_Z$.

(ii) Transverse coefficient of friction (TCOF): the ratio of the lateral ground reaction force to the vertical ground reaction force, $F_Y/F_Z$.

Trip propensity measure: (i) toe clearance: the critical event during midswing when foot clearance reaches its minimum height above the ground surface.

#### 2.4.3. Slip-Severity Parameters

(i) Initial slip distance (SDI): the initial slip begins after heel contact, when the first non-rearward positive acceleration of the foot is identified (Figure 3). SDI is the distance travelled by the heel from this point of non-rearward positive acceleration (minimum velocity) to the time of the first peak in heel acceleration [51]:

$$\text{SDI} = \sqrt{(X_2 - X_1)^2 + (Y_2 - Y_1)^2}. \tag{2}$$

(ii) Slip distance II (SDII): SDII begins at the end point of SDI and stops at the first maximum in horizontal heel velocity after the start of SDII. SDI and SDII are used as indices for comparing the severity of slips (Figure 3).

(iii) Peak sliding heel velocity (PSHV): the maximum forward speed of the heel during slipping, calculated as the time derivative of the heel marker position during the slip.

Figure 3: (a) Horizontal heel velocity of the slipping foot. (b) Horizontal heel acceleration of the slipping foot.

(iv) Time2SDI: the time to reach midslip from slip start, that is, the time to cover SDI.

(v) TimeSDI2SDII: the time taken from midslip to slip stop.

(vi) Time SD total: the time taken from slip start to slip stop.

### 2.5. Plantar Flexion Muscle Cocontraction

EMG activity was peak-normalized within each subject using the ensemble average method over the complete gait cycle [54]. The cocontraction index (CCI) was then calculated by the following equation [55]:

$$\text{CCI} = \frac{\text{LowerEMG}_i}{\text{HigherEMG}_i} \times \left(\text{LowerEMG}_i + \text{HigherEMG}_i\right), \tag{3}$$

where $\text{LowerEMG}_i$ refers to the less active muscle at time $i$ and $\text{HigherEMG}_i$ refers to the more active muscle at time $i$. The ratio of the EMG activity of tibialis anterior to gastrocnemius was considered for this study (Figure 4); the ratio is multiplied by the sum of activity in the two muscles. Cocontraction was defined as the event when bursts of muscle activity of the agonist and antagonist muscles overlapped for at least 5 ms [56]. The slip-severity parameters associated with cocontraction indices for the slipping foot (SF; the right foot in all trials) and the contralateral, nonslipping foot (NSF; the left foot in all slip trials) are:

(vii) SF Mean CCI value: the mean CCI in the slipping foot from slip start to slip stop.

(viii) SF Peak CCI value: the peak CCI value from slip start to slip stop.

(ix) SF Time2PeakCCIfromNSFHC: the time to generate peak ankle cocontraction, measured from the heel contact of the unperturbed foot.

(x) NSF Mean CCI value: the mean CCI in the nonslipping foot from slip start to slip stop.

(xi) NSF Stance Time: the single stance duration of the nonslipping foot immediately before the perturbation event.

Figure 4: (a) Ankle cocontraction values in the slipping foot. (b) Ankle cocontraction values in the nonslipping foot.

### 2.6. Mini-Mental State Examination (MMSE)

The MMSE examines multiple areas of cognition. The highest possible score is 30; a score of less than 24 denotes cognitive impairment.
Mild cognitive impairment is reflected in scores of 18 to 23, moderate cognitive impairment by scores of 10 to 17, and severe cognitive impairment by scores of less than 10 [57].

### 2.7. Statistical Design

There were two independent variables: age group (young versus old) and condition (normal versus dual task). A mixed-factor multivariate analysis of variance (MANOVA) was conducted, with age group as a between-subjects factor and condition as a within-subject factor. Using the Wilks’ lambda test, the MANOVA determined which factors had significant effects on the multiple dependent variables as a whole (i.e., gait parameters, muscle cocontraction, and slip parameters). Following the MANOVA, univariate ANOVAs (mixed-factor design) were conducted separately for each dependent variable. All statistical analyses were conducted using JMP (Pro 10.0.2, SAS Institute Inc.) with a significance level of α = 0.05 for all tests. All dependent variables were evaluated for normality (using the Shapiro-Wilk W test) and by residual analysis; the results did not indicate any violation of normality assumptions.
Seven young and seven older participants were recruited for this study. The younger group consisted of college students on the Virginia Tech campus, and the older adults were retirees in the Blacksburg area. The recruited participants were in generally good health, with no recent cardiovascular, respiratory, neurological, or musculoskeletal abnormalities; the one exception was an older participant (O02) with chronic obstructive pulmonary disease (COPD). All participants were recruited on the criteria of complete ambulation without the use of any assistive devices, the ability to rise from a chair without assistance, and freedom from orthopedic injury. This study was approved by the Institutional Review Board (IRB) of Virginia Tech, and all participants provided written consent prior to the beginning of data collection. Demographic information for the participants is provided in Table 1.

Table 1: Background characteristics of study participants.

| | Old (Mean) | Old (SD) | Young (Mean) | Young (SD) |
|---|---|---|---|---|
| Age [years] | 71.14 | 6.51 | 22.64 | 2.56 |
| Height [cm] | 174.57 | 10.24 | 170.37 | 9.33 |
| Weight [kg] | 78.55 | 18.25 | 69.65 | 15.52 |
| BMI | 25.52 | 4.27 | 23.78 | 4.00 |

## 2.2. Instrumentation

The experiments were conducted on a 15-meter linear walking track embedded with two force plates (BERTEC #K80102, Type 45550-08, Bertec Corporation, OH 43212, USA, and AMTI BP400600 SN6780, Advanced Mechanical Technology, Inc., Watertown, MA 02472, USA). A six-camera ProReflex system (Qualisys, Gothenburg, Sweden) was used to collect three-dimensional kinematics of posture and gait. Kinematic data were sampled and recorded at 120 Hz. Ground reaction forces of participants walking over the test surfaces were measured by the two force plates and sampled at 1200 Hz.
A sixteen-channel surface electromyography (s-EMG) DTS Telemyo system (Noraxon, 15770 N Greenway-Hayden Loop, #100, Scottsdale, AZ, USA) was used to record the temporal activation of two ankle muscles (gastrocnemius and tibialis anterior) in both lower extremities during walking.

## 2.3. Experimental Protocol

All participants were first familiarized with the laboratory equipment and given a verbal explanation of the experimental procedure. Participants wore laboratory clothes and shoes fitted to their sizes. Height and weight were recorded under the ID number assigned to each subject. Surface electromyogram (s-EMG) electrode sites for the gastrocnemius and tibialis anterior were located by asking participants to plantarflex and dorsiflex the ankle. Twenty-six reflective markers were attached to bony landmarks of the body: the head, both ears, both acromioclavicular joints, acromions, humeral condyles, ulnar styloids, knuckles, right and left anterior superior iliac spines (ASIS), greater trochanters, medial and lateral condyles of both limbs, medial and lateral malleoli, and the heels and toes of both feet (shown in Figure 1). The marker configuration was similar to that defined by Lockhart et al. (2003) and was used to derive the whole-body center-of-mass biomechanical model [51]. Kinetic data were acquired using the two forceplates, positioned such that two consecutive steps would fall on them. The slippery surface (on top of the second forceplate) was covered with a 1:1 water and jelly mixture to reduce the coefficient of friction (COF) of the floor surface (dynamic COF ~ 0.12). Participants were kept unaware of the position of this surface, as the embedded forceplates were covered with the same vinyl texture as the walkway. This well-standardized protocol has been used in several previous slip and fall studies [48, 51]. The experiment was divided into two sessions: a normal session and a dual-task session (Figure 2).
The sessions were separated by 4 days, and each participant was randomly assigned to either the normal or the dual-task condition as his/her first session.

Figure 1: Placement of the reflective markers, inertial sensors, and s-EMG electrodes.

Figure 2: Participants were assigned to the normal or dual-task session randomly, and the listed tests were conducted.

### 2.3.1. Normal Walking and Slip

After attaching the s-EMG electrodes and markers, participants were instructed to walk on the walkway for 15–20 minutes at their self-selected pace. Gait data were acquired using the motion capture, IMU, forceplate, and EMG systems. The starting point was adjusted such that the nonslipping (nondominant) foot landed on the first forceplate and the dominant foot landed on the second. Participants were told that they "may or may not slip" and that they should look forward while walking; they remained unaware of the placement of the slippery surface. Once five walking trials with a complete foot fall on the forceplate were obtained, the slippery surface was introduced over the forceplate where the dominant foot was expected to strike.

### 2.3.2. Dual-Task Walking and Slip

This study used a clear and standardized cognitive task, serial subtraction [52, 53]. This session was identical to the normal walking session described above, except that participants counted backwards while walking. The investigator stated a random number before each walking trial, and participants had to subtract three from it repeatedly until they reached the other end of the walkway. The investigator corrected participants if an error was made in counting backwards.
## 2.4. Data Processing

Normal and dual-task walking trials provided kinematic and kinetic data, which were low-pass filtered with a Butterworth filter at a cut-off frequency of 6 Hz. The EMG data were digitally band-pass filtered at 20–500 Hz, then rectified and low-pass filtered with a 6 Hz Butterworth filter to create a linear envelope. Heel contact (HC) and toe-off (TO) events were identified from the ground reaction forces with a threshold of 11 N (see Abbreviations). The analysis was performed over the stance phase (HC to TO) of the nonslipping foot.

### 2.4.1.
Gait Variables

In this study, the gait variables assessed for the walking conditions were as follows: (i) Step length (SL): the distance travelled in one step, computed as the anterior-posterior distance between the ipsilateral and contralateral heel markers at one step. (ii) Step width (SW): the mediolateral distance travelled in one step, computed as the mediolateral distance between the feet during a step. (iii) Double-support time (DST): the time when both feet are on the ground; in one stride there are two double-support intervals, each starting with the initial contact of one foot and ending with the toe-off of the other. (iv) Heel contact velocity (HCV): the instantaneous horizontal velocity of the heel at the moment of heel contact, where heel contact is defined as the instant at which the vertical force on the forceplate exceeds 11 N. After processing the heel marker data, HCV was extracted from the horizontal heel positions 1/120 s before and after heel contact:

(1) HCV = (X_{i+1} − X_{i−1}) / (2Δt),

where i is the frame index at the moment of heel contact, X_{i+1} and X_{i−1} are the horizontal heel positions 1/120 s after and before heel contact, respectively, and Δt = 1/120 s.

### 2.4.2. Slip Propensity Measures

(i) Required coefficient of friction (RCOF): the minimum coefficient of friction required at the shoe-floor interface to prevent slipping. If the floor surface and shoe tribology can meet the RCOF, walking is possible; if the RCOF exceeds the available friction between the shoe and floor, a slip occurs. The RCOF is defined as the ratio of the forward horizontal ground reaction force to the vertical ground reaction force, F_X/F_Z.
(ii) Transverse coefficient of friction (TCOF): the ratio of the lateral ground reaction force component to the vertical ground reaction force, F_Y/F_Z.

Trip propensity measure: (i) Toe clearance: a critical event during midswing, when the foot clearance reaches its minimum height above the ground surface.

### 2.4.3. Slip-Severity Parameters

(i) Initial slip distance (SDI): initial slip distance begins after heel contact, when the first non-rearward positive acceleration of the foot is identified (Figure 3). SDI is the distance travelled by the heel from this point of non-rearward positive acceleration (minimum velocity) to the time of the first peak in heel acceleration [51]:

(2) SDI = √((X2 − X1)² + (Y2 − Y1)²),

where (X1, Y1) and (X2, Y2) are the horizontal heel positions at the start and end of SDI, respectively. (ii) Slip distance II (SDII): begins at the point where SDI ends (midslip); slip stop for SDII is the point at which the first maximum in horizontal heel velocity occurs after the start of SDII. SDI and SDII are used as indices for comparing the severity of slips (Figure 3). (iii) Peak sliding heel velocity (PSHV): the maximum forward speed of the heel during slipping, calculated as the time derivative of the heel marker position during the slip.

Figure 3: (a) Horizontal heel velocity of the slipping foot. (b) Horizontal heel acceleration of the slipping foot.

(iv) Time2SDI: the time to reach midslip from slip start, i.e., the time to cover SDI. (v) TimeSDI2SDII: the time taken from midslip to slip stop. (vi) TimeSDTotal: the time taken from slip start to slip stop.
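A minimal sketch of the slip metrics defined above, i.e., equation (1), equation (2), and the RCOF ratio. Function and variable names are illustrative, not taken from the study; positions are assumed to be in mm with kinematics sampled at 120 Hz as described in Section 2.2.

```python
# Illustrative implementations of HCV (eq. (1)), SDI (eq. (2)), and RCOF.
import math

FS = 120.0       # kinematic sampling rate [Hz]
DT = 1.0 / FS    # inter-frame interval [s]

def heel_contact_velocity(x, i):
    """Central difference of horizontal heel position around heel contact,
    equation (1): (x[i+1] - x[i-1]) / (2 * dt)."""
    return (x[i + 1] - x[i - 1]) / (2 * DT)

def slip_distance(p_start, p_stop):
    """Planar heel displacement between slip start and slip stop,
    equation (2): sqrt((X2 - X1)^2 + (Y2 - Y1)^2)."""
    (x1, y1), (x2, y2) = p_start, p_stop
    return math.hypot(x2 - x1, y2 - y1)

def rcof(fx, fz):
    """Required coefficient of friction: forward shear over vertical load."""
    return fx / fz
```

For example, heel positions of 0, 1, and 2 mm on the frames around heel contact give an HCV of 120 mm/s, and a heel displacement of (3, 4) mm gives an SDI of 5 mm.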
## 2.5. Plantar Flexion Muscle Cocontraction

EMG activity was peak-normalized within each subject using the ensemble average method over the complete gait cycle [54]. The cocontraction index (CCI) was then calculated as [55]:

(3) CCI = (LowerEMG_i / HigherEMG_i) × (LowerEMG_i + HigherEMG_i),

where LowerEMG_i refers to the less active muscle and HigherEMG_i to the more active muscle at time i. The ratio of the EMG activity of the tibialis anterior to the gastrocnemius was considered for this study (Figure 4); the ratio is multiplied by the sum of the activity in the two muscles. Cocontraction was defined as the event in which bursts of muscle activity of the agonist and antagonist muscles overlapped for at least 5 ms [56].
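Equation (3) can be sketched as follows. The inputs are assumed to be rectified, enveloped, peak-normalized EMG samples from two antagonist muscles at matching time points; all names are illustrative.

```python
# Sample-by-sample cocontraction index per equation (3):
# CCI_i = (lower_i / higher_i) * (lower_i + higher_i).
def cocontraction_index(emg_a, emg_b):
    """Compute CCI for two antagonist EMG envelopes of equal length."""
    cci = []
    for a, b in zip(emg_a, emg_b):
        lower, higher = min(a, b), max(a, b)
        ratio = lower / higher if higher > 0 else 0.0  # guard silent samples
        cci.append(ratio * (lower + higher))
    return cci
```

A window mean of this series over the samples between slip start and slip stop would correspond to a mean-CCI parameter such as those defined below; for instance, envelopes of 0.5 and 1.0 at one sample yield a CCI of 0.75.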
The following slip-severity parameters were associated with the cocontraction indexes of the slipping foot (SF; the right foot in all trials) and the contralateral, nonslipping foot (NSF; the left foot in all slip trials): (vii) SFMeanCCIvalue: the mean CCI in the slipping foot from slip start to slip stop. (viii) SFPeakCCIvalue: the peak CCI value from slip start to slip stop. (ix) SFTime2PeakCCIFromNSFHC: the time to generate peak ankle cocontraction, measured from heel contact of the unperturbed foot. (x) NSFMeanCCIvalue: the mean CCI in the nonslipping foot from slip start to slip stop. (xi) NSFStanceTime: the single-stance duration of the nonslipping foot immediately before the perturbation event.

Figure 4: (a) Ankle cocontraction values in the slipping foot. (b) Ankle cocontraction values in the nonslipping foot.

## 2.6. Mini-Mental State Examination (MMSE)

The MMSE examines multiple areas of cognition. The highest possible score is 30; a score of less than 24 denotes cognitive impairment. Mild cognitive impairment is reflected in scores of 18 to 23, moderate cognitive impairment is suggested by scores of 10 to 17, and severe cognitive impairment is denoted by scores of less than 10 [57].

## 2.7. Statistical Design

There were two independent variables: age group (young versus old) and condition (normal versus dual task). A mixed-factor multivariate analysis of variance (MANOVA) was conducted, with age group as a between-subjects factor and condition as a within-subject factor. Using the Wilks' Lambda test, the MANOVA determined which factors had significant effects on the multiple dependent variables as a whole (i.e., gait parameters, muscle cocontraction, and slip parameters).
Following the MANOVA, subsequent univariate ANOVAs (mixed-factor design) were conducted separately for each dependent variable. All statistical analyses were conducted using JMP (Pro 10.0.2, SAS Institute Inc.) with a significance level of α = 0.05 for all tests. All dependent variables were evaluated for normality (using the Shapiro-Wilk W test) and by residual analysis; the results did not indicate any violation of normality assumptions.

## 3. Results

### 3.1. Gait Changes due to Dual-Task Performance

The results indicated that both age groups (young and old) were affected by the dual task: step length decreased significantly (df = 1, p = 0.0046), while double-support time (DST) (df = 1, p = 0.0048) and mean single-stance time (SST) (df = 1, p = 0.013) increased in both young and elderly subjects. RCOF and TCOF values decreased slightly under dual-tasking in both younger and older individuals, but these effects were not statistically significant. Older adults also showed higher linear variability in several gait variables, as measured by the standard deviation and coefficient of variation of step width, HCV, DST, SST, and gait cycle time under dual-tasking (Tables 2(a) and 2(b)).

Table 2: (a) Dual-task changes in gait parameters. (b) General gait parameters for the younger and older populations during normal and dual-task walking.
(a)

| Parameter | Dual-task walk Mean | Dual-task walk SD | Normal walk Mean | Normal walk SD | p value |
|---|---|---|---|---|---|
| Step length [mm]* | 703.79 | 39.25 | 750.89 | 47.87 | 0.004 |
| Step width [mm] | 119.10 | 27.67 | 115.92 | 25.45 | 0.69 |
| HCV [mm/s] | 1191.20 | 765.06 | 1029.41 | 193.01 | 0.55 |
| GCT [s] | 1.12 | 0.06 | 1.08 | 0.06 | 0.075 |
| DST [s]* | 0.26 | 0.03 | 0.24 | 0.03 | 0.004 |
| Gait speed [m/s] | 1.11 | 0.16 | 1.17 | 0.14 | 0.06 |
| SST [s]* | 0.70 | 0.04 | 0.66 | 0.05 | 0.013 |
| Step time [s] | 0.55 | 0.04 | 0.53 | 0.04 | 0.067 |
| Swing time [s] | 0.43 | 0.03 | 0.42 | 0.02 | 0.54 |
| Toe clearance [mm] | 16.40 | 8.29 | 16.29 | 7.62 | 0.96 |
| RCOF | 0.19 | 0.03 | 0.20 | 0.03 | 0.08 |
| TCOF | 0.07 | 0.02 | 0.07 | 0.01 | 0.90 |

*p < 0.05.

(b)

| Parameter | Old DTW Mean | Old DTW SD | Old DTW CV | Old NW Mean | Old NW SD | Old NW CV | Young DTW Mean | Young DTW SD | Young DTW CV | Young NW Mean | Young NW SD | Young NW CV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Step length [mm] | 702.92 | 48.75 | 6.94 | 739.10 | 57.89 | 7.83 | 704.26 | 35.36 | 5.02 | 757.24 | 42.76 | 5.65 |
| Step width [mm] | 117.89 | 21.29 | 18.06 | 113.16 | 20.25 | 17.89 | 119.74 | 31.37 | 26.20 | 117.41 | 28.53 | 24.30 |
| HCV [mm/s] | 993.76 | 516.51 | 51.98 | 1048.44 | 195.34 | 18.63 | 1297.52 | 870.85 | 67.12 | 1019.16 | 198.94 | 19.52 |
| GCT [s] | 1.11 | 0.07 | 6.47 | 1.08 | 0.07 | 6.17 | 1.13 | 0.06 | 5.12 | 1.08 | 0.06 | 5.88 |
| DST [s] | 0.27 | 0.03 | 10.60 | 0.22 | 0.02 | 9.48 | 0.26 | 0.03 | 11.84 | 0.24 | 0.03 | 11.99 |
| SST [s] | 0.68 | 0.05 | 7.11 | 0.65 | 0.04 | 5.72 | 0.71 | 0.04 | 5.85 | 0.66 | 0.05 | 7.62 |
| Step time [s] | 0.55 | 0.04 | 8.09 | 0.53 | 0.04 | 7.80 | 0.56 | 0.03 | 6.22 | 0.53 | 0.03 | 6.59 |
| Swing time [s] | 0.42 | 0.02 | 5.71 | 0.43 | 0.03 | 5.90 | 0.44 | 0.03 | 6.60 | 0.42 | 0.02 | 5.54 |
| Toe clearance [mm] | 18.66 | 12.43 | 66.62 | 19.58 | 5.94 | 30.31 | 15.18 | 5.19 | 34.18 | 14.52 | 8.03 | 55.32 |
| RCOF | 0.17 | 0.02 | 12.92 | 0.19 | 0.02 | 9.05 | 0.19 | 0.03 | 14.25 | 0.20 | 0.03 | 15.24 |
| TCOF | 0.07 | 0.01 | 17.59 | 0.07 | 0.01 | 17.57 | 0.07 | 0.02 | 23.70 | 0.08 | 0.01 | 19.43 |
| Gait speed [m/s] | 1.08 | 0.19 | 17.58 | 1.17 | 0.16 | 13.68 | 1.15 | 0.14 | 12.17 | 1.18 | 0.12 | 10.17 |

### 3.2. Effects of Dual-Tasking-Induced Changes in Slip Characteristics

The variable SFTime2PeakCCIFromNSFHC showed that the elderly participants generated peak ankle muscle cocontractions in half the time taken by younger adults (p < 0.001) (Table 3(a)).
Along the same lines, the NSFMeanCCIvalue results showed that the mean coactivity in the nonslipping foot from slip start to slip stop was significantly higher in older adults.

Table 3: (a) Normal slip parameters in young and older individuals. (b) Slip parameters for normal and dual-task conditions. (c) Slip parameters in young and older individuals under normal and dual-task walking conditions.

(a)

| Parameter | Old Mean | Old SD | Old CV | Young Mean | Young SD | Young CV |
|---|---|---|---|---|---|---|
| Time2SDI [s] | 0.046 | 0.014 | 31.492 | 0.051 | 0.008 | 16.180 |
| TimeSDI2SDII [s] | 0.069 | 0.013 | 18.182 | 0.081 | 0.032 | 39.817 |
| TimeSDTotal [s] | 0.115 | 0.024 | 20.889 | 0.132 | 0.034 | 25.740 |
| SDI [mm] | 38.867 | 23.605 | 60.733 | 33.063 | 13.556 | 41.001 |
| SDII [mm] | 116.722 | 43.147 | 36.966 | 154.175 | 88.275 | 57.256 |
| SDTotal [mm] | 154.713 | 64.122 | 41.446 | 186.863 | 94.645 | 50.650 |
| SFMeanCCIvalue | 0.081 | 0.043 | 53.626 | 0.093 | 0.111 | 119.722 |
| SFPeakCCIvalue | 0.938 | 0.809 | 86.261 | 0.828 | 0.396 | 47.874 |
| SFTime2PeakCCIFromNSFHC* [s] | 0.383 | 0.412 | 107.398 | 0.752 | 0.081 | 10.722 |
| NSFMeanCCIvalue* | 0.052 | 0.030 | 57.476 | 0.022 | 0.013 | 60.464 |
| PSHV [mm/s] | 1033.688 | 404.843 | 39.165 | 1062.313 | 321.351 | 30.250 |
| NSFStanceTime [s] | 0.646 | 0.060 | 9.215 | 0.649 | 0.056 | 8.612 |

(b)

| Parameter | Dual-task slip Mean | Dual-task slip SD | Normal walk slip Mean | Normal walk slip SD | p value |
|---|---|---|---|---|---|
| Time2SDI [s]* | 0.07 | 0.00 | 0.05 | 0.01 | 0.047 |
| TimeSDI2SDII [s] | 0.08 | 0.01 | 0.08 | 0.03 | 1 |
| TimeSDTotal [s] | 0.14 | 0.01 | 0.13 | 0.03 | 0.46 |
| SDI [mm] | 17.65 | 3.89 | 35.00 | 16.65 | 0.19 |
| SDII [mm] | 87.09 | 20.68 | 141.69 | 76.20 | 0.43 |
| SDTotal [mm] | 104.20 | 16.38 | 176.15 | 84.10 | 0.34 |
| SFMeanCCIvalue | 0.06 | 0.05 | 0.09 | 0.09 | 0.69 |
| SFPeakCCIvalue | 0.36 | 0.01 | 0.86 | 0.53 | 0.24 |
| SFTime2PeakCCIFromNSFHC [s] | 0.87 | 0.07 | 0.63 | 0.29 | 0.13 |
| NSFMeanCCIvalue | 0.03 | 0.03 | 0.03 | 0.02 | 0.90 |
| PSHV [mm/s] | 699.23 | 138.99 | 1052.77 | 332.59 | 0.22 |
| NSFStanceTime [s]* | 0.75 | 0.02 | 0.65 | 0.05 | 0.03 |

(c)

| Parameter | Old DTS Mean | Old NS Mean | Young DTS Mean | Young NS Mean |
|---|---|---|---|---|
| Time2SDI [s]* | 0.067 | 0.046 | 0.067 | 0.051 |
| TimeSDI2SDII [s] | 0.083 | 0.069 | 0.067 | 0.081 |
| TimeSDTotal [s] | 0.150 | 0.115 | 0.133 | 0.132 |
| SDI [mm] | 14.903 | 38.867 | 20.406 | 33.063 |
| SDII [mm] | 101.71 | 116.722 | 72.463 | 154.175 |
| SDTotal [mm] | 115.78 | 154.713 | 92.622 | 186.863 |
| SFMeanCCIvalue | 0.093 | 0.081 | 0.021 | 0.093 |
| SFPeakCCIvalue | 0.368 | 0.938 | 0.355 | 0.828 |
| SFTime2PeakCCIFromNSFHC [s] | 0.917 | 0.383 | 0.817 | 0.752 |
| NSFMeanCCIvalue | 0.011 | 0.052 | 0.059 | 0.022 |
| PSHV [mm/s] | 797.51 | 1033.688 | 600.95 | 1062.313 |
| NSFStanceTime [s]* | 0.767 | 0.646 | 0.742 | 0.649 |

*p < 0.05.

Time2SDI was significantly longer in dual-task walking trials (p = 0.04), although there were no significant differences in SDI (Table 3(b)). An interaction effect between the two independent variables, age group and slip condition (normal versus dual task), was seen for NSFMeanCCIvalue (p = 0.02). The dual task also increased the nonslipping-foot stance time (p = 0.03) in both young and elderly participants compared with the normal walk slip condition (Table 3(c)). The MMSE score ranged from 28 to 30 for all older participants, whereas all younger participants scored 30.
## 4. Discussion

This study examined the effects of a dual task on younger and older adults and related dual-task adaptations in gait to the associated slip and fall risk. The major finding was that the dual-task paradigm influenced slip-initiation characteristics by shifting gait toward a "safer" or more "cautious" pattern. Dual-task-related gait changes are associated with intrinsic (health-related) risk factors for falls. As no frail individuals took part in this study, we found that healthy young and older individuals adapted to the dual-task scenario by shifting to a more "cautious" gait, evidenced by a decrease in step length and heel contact velocity and an increase in step width and single- and double-support time.

The results suggest that the attentional capacity limit of healthy young and older adults is perhaps exceeded during dual-task walking, but this did not result in instability or increased fall risk.
Collectively, the study findings argue in favor of a critical gait behavior: preferred-speed walking in healthy human beings requires less allocation of attentional resources for safe transitioning. These findings support previous investigations. (1) In seminal work by Lajoie and coworkers, reaction times during the single-support phase were significantly longer than during the double-support phase, suggesting that attentional demands increase with the balance requirements of the task [58]; thus, attentional demands vary within a gait cycle. Dual-task walking resulted in longer double-stance times, so it can be inferred that healthy young and older adults adapt their gait to reduce attentional demand. (2) Previous research has also reported that dual-task interference results in slower gait speeds [13, 15, 16], reduced cadence [16, 17], shorter stride length [15, 16], increased stride duration [16], and longer double-support time [13, 18]. The cautious gait pattern adopted by healthy adults during dual-tasking, characterized by reduced speed, shorter step length, and increased step width, is likely a consequence of adaptations that minimize perturbations to the body and reduce the risk of falls [33] when the attentional resources available for walking are reduced. We also found that heel contact velocity and the required coefficient of friction decreased slightly, though not significantly, during dual-tasking. This indicates that several mechanisms contribute to reducing fall risk and shifting body movements toward a cautious gait mode when fewer attentional resources are available for gait.

From an information-processing viewpoint, because walking carries appreciable attentional demands, it cannot be considered a fully automated task requiring no cognitive processing [58].
Overall, this study suggests that gait in healthy adults is affected by concurrent cognitive tasks, and the evidence is sufficiently robust to support the notion of cautious gait. Even in healthy individuals, age-related changes have been reported in the cognitive and motor systems; thus, aging may be associated with greater cognitive-motor interference [59, 60]. We believe the observed dual-task changes are compensatory mechanisms that stabilize locomotion and keep it safe when less attention is available.

Intuitively, the cautious gait adaptations observed in healthy younger and older adults walking under a dual-task paradigm raise an intriguing question: why do humans compromise their safety when walking at their preferred speed without dual-tasking? The findings of this study indicate that dual-task-related changes in gait do not predispose healthy young and older adults to falls. Healthy people walk at a preferred speed, step length, and cadence selected to optimize the stability of their gait pattern [27, 61]; this has been addressed in several studies in the context of spatial variability [47, 62, 63] and temporal variability [64]. It has been reported that shorter steps and longer double-support times are associated with smaller sensorimotor and frontoparietal regions, whereas cognitive processing speed is linked to individual differences in gait [65]. Accordingly, Sekiya et al. [47] suggested an optimal method of walking defined by criteria of energy efficiency, temporal and spatial variability, and attention (Figure 5).

Figure 5: Interrelationship of movement variability with attention and energy expenditure while performing a task.

During dual-task walking, fewer attentional resources are allocated to gait; thus, energy expenditure and the variability of kinematic parameters are compromised.
The present study determined that older adults compensated for the diminished attentional investment through altered variability in selected gait parameters, including the standard deviation and coefficient of variation of step width, HCV, DST, SST, and gait cycle time. Future studies of energy expenditure under a dual-task criterion would therefore further strengthen the understanding of the relationship between energy expenditure, variability, and attention. This study also evaluated the effects of the dual-task paradigm on slip severity and on linear measures of variability changes in two healthy age groups. The results suggest that single-stance duration increases in dual-task walking trials, which may elicit a congruent adaptation in both young and elderly individuals to maintain “stable gait.” During the stance phase of a gait cycle, proprioceptive input from extensor muscles and mechanoreceptors in the sole of the foot provides loading information [66] to the central nervous system. Thus, the increased stance duration increases foot-loading information through afferent sensory and proprioceptive mechanoreceptors, such as Golgi tendon organs, muscle spindles, and joint receptors, and may facilitate motor control of the lower extremity during walking [67]. Additionally, the dual-task condition shortened step length significantly. This may reflect modulation of the self-selected pace in order to keep counting rhythmically, a need for longer (longer single and double stance) and more frequent (shorter steps) proprioceptive input under dual-task conditions [68], or perhaps changes in the motor control schema, with alternative compensatory strategies adopted to increase stability while the concurrent task is given primary importance and walking remains innately automatic to some extent. Furthermore, dual-task trials did not significantly affect heel contact velocity (HCV) but slightly decreased RCOF and TCOF in walking trials, although these effects were not 
statistically significant. Considering that HCV is a kinematic gait parameter that can drastically alter the friction demand (by changing the required coefficient of friction) [51] and influence the likelihood of slip-induced falls [69–71], dual-task conditions may ultimately decrease slip-induced fall risk. Likewise, since dual-task events had no deleterious effect on toe clearance, it can be inferred that participants exhibited reduced trip risk as well. Through the parameter SF Time2PeakCCIFromLHC, the results suggest that elderly people generated peak ankle muscle cocontractions in half the time taken by younger adults; that is, they may be quicker to introduce ankle muscle coactivity in the slipping foot. Further, coactivity during slipping limits the ankle joint’s degrees of freedom, thus reducing the motor control adaptability required to recover balance after a slip. This is consistent with the nonfrail health status of the older age group participating in this study. It should be noted that the subject population recruited for the study was healthy, with intact cognitive function (or executive function) and a mean Mini-Mental State Examination score above 28 for all participants. The NSF Mean CCI value showed that the mean coactivity in the nonslipping foot from slip start to slip stop was significantly higher in older adults. This phenomenon also lowers the degrees of freedom in the nonslipping foot. The reduction in degrees of freedom in both the right and left feet may influence slip severity among older adults. Although greater SDI was reported in the older population, in accordance with previous studies [48, 51], the values were not significantly different for the current population. 
It was found that Time2SDI in the dual-task paradigm was significantly increased; thus, if elderly individuals have a higher SDI, they require lower heel velocities to cover the SDI, showing that slower movement of the heel from slip start to midslip is an effect of dual-tasking. This could also be partially explained by the higher transitional acceleration of the center of mass in these individuals. Interaction effects were seen in the NSF Mean CCI value for the two independent variables, age group and slipping condition (normal versus dual task), which is interesting because older people showed lower nonslipping ankle coactivation (reduced stiffness) during dual-tasking than during normal slipping. On the contrary, younger individuals showed higher coactivity in the ankle muscles of the nonslipping foot during dual-activity. This might be influenced by age-related involvement of attentional resources in the dual task; perhaps this dual task (counting backwards by subtracting 3) was not challenging enough to engage substantial attentional resources in younger participants. In sum, this study investigated the effects of attentional interference (induced by a dual task) on gait variability and associated fall risk, particularly to answer the following: (a) what is the effect of a dual task on spatiotemporal gait parameters? (b) does a dual task deteriorate gait or modify it into an unsafe pattern that predisposes to falls? The findings suggest that everyday walking tasks with increased attentional demands reduce the resources available for other, secondary tasks. But the slow speed, wider step width, and longer double-support time adopted by participants [27] may serve to produce a safer and more stable gait [72, 73] and an energy-efficient speed of progression [72, 74], or perhaps to maintain a certain amount of variability in gait kinematics. 
A cautious gait is typically marked by moderate slowing, reduced stride length, and mild widening of the base of support, characterized by step width [75]. It is also possible that the kinematic adaptations adopted serve to reduce the cognitive demand necessary to control the continuous disequilibrium inherent to walking [58, 76]. ## 5. Limitations The strength of the conclusions of this study must be tempered by its limitations. Although all the participants performed a similar kind of dual task while walking, each participant's mathematical background and day-to-day use of arithmetic operations was a confounding variable; the dual task involved in this study may therefore not have required equivalent attentional demand from every subject. The toe clearance values found in this study are higher than usual because the reflective toe marker was positioned over laboratory shoes; owing to this offset, they cannot be compared with toe clearance values in the existing literature. ## 6. Conclusion Overall, the current research has contributed knowledge about slip risk in healthy young and older adults and the effects that a dual-task paradigm has on slip initiation characteristics and slip severity. The study results suggest that a dual task elicits a “cautious gait mode” (CGM), an innate adaptive response to counter reduced attention while walking. Attentional resources are appropriated by the relevant cognitive task (e.g., counting backwards); the healthy human response is therefore to adopt a cautious gait mode, including a shorter step length and longer stance duration, acquiring more proprioceptive information from the ground (and thus using fewer attentional resources). The CGM response is innate in healthy human beings, but frail elderly persons, who require considerable attention to perform relatively perfunctory gait and postural movements, may find it challenging to maintain stability. 
--- *Source: 1014784-2017-01-31.xml*
2017
# Control of Vector-Borne Human Parasitic Diseases **Authors:** Fernando A. Genta; Hector M. Diaz-Albiter; Patrícia Salgueiro; Bruno Gomes **Journal:** BioMed Research International (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1014805 --- ## Body --- *Source: 1014805-2016-12-20.xml*
2016
# Comparative Evaluation of a New Depth of Anesthesia Index in ConView® System and the Bispectral Index during Total Intravenous Anesthesia: A Multicenter Clinical Trial **Authors:** Yang Fu; Tao Xu; Keliang Xie; Wei Wei; Ping Gao; Huang Nie; Xiaoming Deng; Guolin Wang; Ming Tian; Min Yan; Hailong Dong; Yun Yue **Journal:** BioMed Research International (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1014825 --- ## Abstract The performance of a new monitor of the depth of anesthesia (DOA), the Depth of Anesthesia Index (Ai), based on sample entropy (SampEn), 95% spectral edge frequency (95%SEF), and burst suppression ratio (BSR), was evaluated against the Bispectral Index (BIS) during total intravenous anesthesia (TIVA). 144 patients in six medical centers were enrolled. General anesthesia was induced with stepwise-increased target-controlled infusion (TCI) of propofol until loss of consciousness (LOC). During surgery propofol was titrated according to BIS. Both Ai and BIS were recorded. Primary outcomes: the limits of agreement between Ai and BIS were -17.68 and 16.49, which were, respectively, -30.0% and 28.0% of the mean value of BIS. Secondary outcomes: the prediction probability (Pk) of BIS and Ai was 0.943 and 0.935 (p=0.102) during LOC and 0.928 and 0.918 (p=0.037) during recovery of consciousness (ROC). The values of BIS and Ai were 68.19 and 66.44 at 50%LOC, and 76.65 and 78.60 at 50%ROC. The decrease or increase of Ai was significantly greater than that of BIS when consciousness changed (during LOC: -9.13±10.20 versus -5.83±9.63, p<0.001; during ROC: 10.88±11.51 versus 5.32±7.53, p<0.001). The conclusion is that Ai has characteristics similar to BIS as a DOA monitor and demonstrates the advantage of SampEn for indicating conscious level. This trial is registered at the Chinese Clinical Trial Registry with ChiCTR-IOR-16009471. --- ## Body ## 1. Introduction
The accurate and noninvasive assessment of DOA is important for anesthesiologists, and several kinds of monitoring devices use the electroencephalogram (EEG) signal to provide such information. The EEG reflects cerebral electrical activity over time, and during anesthesia its changes are nonlinear. Entropy, a concept borrowed from thermodynamics, is therefore used to quantify the DOA, as in response entropy (RE) and state entropy (SE), which are based on spectral entropy. Because the fast Fourier transform (a linear method) is used at the beginning of the spectral entropy calculation, some valuable information may be missed [1–3]. Recently, SampEn has been used to estimate the complexity and predictability of EEG signals. The conscious EEG tends to be irregular, meaning that it cannot be predicted from the preceding signal, so SampEn takes a large value; the unconscious EEG tends to be regular, meaning that it can be predicted from the preceding signal, so SampEn takes a small value [4]. DOA indexes based on SampEn have shown better performance than RE, SE, and BIS in predicting consciousness level [3, 4]. A previous study suggested that frequency-domain analysis of the EEG, such as 95%SEF, is suitable for discriminating different anesthesia levels, while time-domain analysis, such as BSR, can quantify the extent of deep anesthesia [5]. Based on these three EEG parameters (SampEn, 95%SEF, and BSR), a new DOA index in the ConView® system, Ai, was designed by Pearlcare Medical Technology Company Limited (Zhejiang, China) and is calculated with an algorithm based on a decision tree and least squares [5]. The values of SampEn, 95%SEF, and BSR are treated as inputs, and four anesthetic levels assessed by experts are treated as outputs. With both inputs and outputs, the decision tree is trained and refined [5]. 
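To illustrate the sample-entropy idea described above, the following is a minimal Python sketch of SampEn(m, r); the choices m=2 and r = 0.2 × SD are common textbook defaults, not parameters taken from the ConView® implementation, which is proprietary:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D signal: -ln(A/B), where B counts pairs of
    length-m templates matching within tolerance r (Chebyshev distance,
    self-matches excluded) and A counts the same for length m+1.
    Regular signals give small values; irregular signals give large ones."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()  # assumed default, not the device's setting
    n = len(x)

    def count_matches(mm):
        # all overlapping templates of length mm
        templ = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templ)):
            # Chebyshev distance from template i to every later template
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b)

# An irregular (random) signal should score higher than a regular (periodic) one,
# mirroring the conscious-vs-unconscious EEG contrast described in the text.
rng = np.random.default_rng(0)
noisy = rng.standard_normal(300)
regular = np.sin(np.linspace(0, 30 * np.pi, 300))
print(sample_entropy(noisy) > sample_entropy(regular))  # expected: True
```

This brute-force version is O(N²) and intended only to make the definition concrete; production EEG monitors use optimized streaming implementations.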
In each anesthetic level, the relationship between the Ai values estimated by experts and the values of SampEn, 95%SEF, and BSR is almost linear and is fitted by least squares. Ai ranges from an isoelectric EEG (0) through a deep hypnotic state (up to 40), general anesthesia (40-60), and light/moderate sedation (60-80) to awake (80-99), which matches the BIS scale. BIS is the most widely used DOA-monitoring system and is approved for monitoring hypnosis by the Food and Drug Administration (FDA). It can be a useful guide for the titration of propofol [6, 7], and it is the only index that has been studied in large randomized controlled trials, which identified an approximately 80% reduction in the incidence of recall after anesthesia [7]. However, it cannot predict the exact moment consciousness returns [8]. Given the strength of SampEn in predicting consciousness level, Ai might perform better than BIS in monitoring DOA. In this study we evaluated the performance of Ai in predicting anesthetic state, compared with BIS, during TIVA in six medical centers in China. ## 2. Materials and Methods This study was approved by the Ethics Committee of Beijing Chaoyang Hospital affiliated to Capital Medical University (No. 2016-ke-100) and registered at the Chinese Clinical Trial Registry (No. ChiCTR-IOR-16009471). This multicenter comparative evaluation was carried out from November 2016 to February 2017. 
The side of the forehead on which the EEG electrode strips for Ai or BIS were positioned was randomized. After informed consent, 144 patients (ASA physical status I-II, BMI 18.5-24.9 kg/m2) aged 18-65 years and receiving elective surgery under general anesthesia with an estimated surgical duration of one to three hours were enrolled consecutively across the six medical centers: Tianjin Medical University General Hospital, Beijing Friendship Hospital affiliated to Capital Medical University, Beijing Chaoyang Hospital affiliated to Capital Medical University, the Second Affiliated Hospital of Zhejiang University Medical College, Changhai Hospital affiliated to Second Military Medical University, and Xijing Hospital affiliated to the Fourth Military Medical University. None of these patients had a medical history of psychiatric or neurological disorders; impaired cardiac, pulmonary, hepatic, or renal function; sleep apnea hypopnea syndrome; sedative or analgesic drug therapy or abuse; or a contraindication for or allergy to any sedative or analgesic drug. EEG electrode strips for recording BIS (BIS XP, system revision 3.31, smoothing rate 15 s, Aspect Medical Systems) and Ai (ConView® system, software 2.4.1, Pearlcare Medical Technology Company Limited) were positioned on the forehead after cleaning with an alcohol swab; the side was randomized by random numbers from the statistical software. Electrocardiogram, noninvasive blood pressure (NIBP), pulse oximetry, and end-tidal CO2 were also monitored. One large vein of the forearm was cannulated with an 18G indwelling needle for drug administration. Oxygen was given by mask. Without premedication, a slow induction was started with an i.v. push of 0.01-0.02 mg/kg midazolam. Propofol (10 mg/ml) was then administered i.v. using TCI (Marsh model). Infusion was started at a target plasma concentration of 0.5 μg/ml, and the target concentration was increased by 0.5 μg/ml every minute until LOC [9]. 
LOC was defined as no response to verbal commands during induction and was tested every thirty seconds. After LOC, remifentanil was infused at 0.2 μg/kg/min. Five minutes later, 0.6 mg/kg rocuronium was given, and tracheal intubation was performed one minute after that. During surgery, the target plasma concentration of propofol was adjusted to maintain the BIS value between 40 and 60, and the infusion rate of remifentanil was titrated to keep NIBP within ±20% of the baseline NIBP. Rocuronium was added p.r.n. (pro re nata) until thirty minutes before the estimated end of surgery, when 0.1-0.2 μg/kg sufentanil was given as the initial postoperative analgesia. When the surgery finished, the propofol and remifentanil infusions were stopped at the same time. ROC was defined as opening the eyes following commands and was tested every minute during emergence. The values of BIS and Ai were recorded before induction; every minute while the target concentration of propofol was increased until LOC and during the first five minutes of remifentanil infusion; at the time of intubation; and one minute and three minutes after intubation. During the first surgical hour, the values of BIS and Ai were recorded every five minutes and whenever the infusion rate of propofol or remifentanil was changed based on BIS or NIBP. During emergence, the values of BIS and Ai were recorded every minute until ROC and one to three minutes after ROC. The target plasma concentration of propofol was recorded at LOC, at the end of surgery, and at ROC. During data collection, the anesthesiologist estimated the patient’s state and recorded the BIS and Ai values at the same time. After data collection, each enrolled patient was assigned a specific number, and no other patient identity information was involved in the data analysis. The primary outcome was the Bland-Altman agreement test between Ai and BIS. 
The secondary outcomes were the Pk of BIS and Ai during LOC or ROC and the values of BIS and Ai at 50%LOC, 95%LOC, 5%ROC, and 50%ROC. The sample size for the Bland-Altman agreement test is suggested to be more than one hundred [10]. It was estimated from a previous, unpublished study (n=124) in which the performance of Ai was evaluated against Narcotrend, together with the block randomization of the left or right side of the forehead for the Ai electrode strip across the six medical centers. The smallest block in this randomization is four, so the sample size in each medical center is 24 and the total sample size is 144. The Bland-Altman agreement test compares two measurements using bias and precision statistics. The bias is the difference between the two comparative measurements; from the standard deviation of all the individual differences, the 95% confidence limits are estimated and referred to as the limits of agreement, which are used to judge the precision and acceptability of one measurement against another [10, 11]. A new measurement should be accepted only if the limits of agreement are no more than 30% [12]. Pk was used to evaluate how accurately Ai and BIS distinguish the conscious and unconscious states [13]. A value of Pk=1.0 means that the index always predicts the conscious state correctly, and a value of Pk=0.5 means that the index predicts the conscious state no better than chance. Pk and its standard error were calculated with the jackknife method using a custom spreadsheet, PKMACRO, in Microsoft Excel 2016 [13]. Pk of LOC was based on all the data during induction, and Pk of ROC was calculated using all the data during emergence. Pk was compared with 0.5 using Student’s t-test. The difference between the Pk values of BIS and Ai was studied with a paired t-test using another spreadsheet, PKDMACRO [13]. 
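The two statistics described above can be sketched in Python as a minimal illustration. This is not the PKMACRO jackknife implementation: for a binary conscious/unconscious state, Pk reduces to the pairwise concordance probability (equivalently, the area under the ROC curve), and no standard error is estimated here; the toy index values are invented for demonstration:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def pk_binary(index_values, conscious):
    """Prediction probability Pk for a binary state: the fraction of
    (conscious, unconscious) observation pairs in which the index is
    higher for the conscious one, counting ties as half."""
    x = np.asarray(index_values, float)
    y = np.asarray(conscious, bool)
    awake, asleep = x[y], x[~y]
    wins = ties = 0
    for v in awake:
        wins += np.sum(v > asleep)
        ties += np.sum(v == asleep)
    return (wins + 0.5 * ties) / (len(awake) * len(asleep))

# toy usage with made-up index readings (hypothetical values, not study data)
ai  = [60, 57, 53, 72, 78, 45]
bis = [62, 58, 55, 70, 75, 48]
bias, lower, upper = bland_altman(ai, bis)
pk = pk_binary([85, 80, 90, 45, 50, 55], [1, 1, 1, 0, 0, 0])
print(round(bias, 2), round(pk, 2))  # a perfectly ranking index gives Pk = 1.0
```

Acceptability is then judged by expressing the limits of agreement as a percentage of the mean of the reference measurement, as the paper does with its ±30% criterion.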
The p values in the Pk analysis were calculated with the TDIST function in Microsoft Excel 2016. The relationships between the conscious state and the BIS or Ai values were also modeled using logistic regression. The BIS and Ai values for 50% or 95% LOC were calculated from the estimated regression equation based on all data during induction, and likewise the BIS and Ai values for 5% or 50% ROC were based on all data during emergence. During LOC and ROC, the changes of the Ai or BIS mean values were studied with the Wilcoxon test. Data are presented as mean ± SD unless otherwise stated. ## 3. Results Twenty-four patients in each medical center (144 in total) completed this protocol safely. 41.7% of the patients were male and 58.3% were female. The age was 44.8 ± 11.8 years and the BMI was 22.8 ± 2.2 kg/m2. The EEG electrode strips for Ai were positioned on the left side of the forehead in 52.1% of patients and on the right side in 47.9%. On average, the total surgical time was 97.3 ± 35 min. The emergence time, from the end of anesthetic drug infusion to ROC, was 11.8 ± 7.8 min. The target plasma concentrations of propofol at LOC, at the end of surgery, and at ROC, and the emergence times for each medical center, are shown in Table 1.

Table 1: The target plasma concentrations of propofol and emergence time for each medical center.

| Medical center | Propofol at LOC (μg/ml) | Propofol at end of surgery (μg/ml) | Propofol at ROC (μg/ml) | Emergence time (min) |
|---|---|---|---|---|
| 1 | 1.8 ± 0.4 | 2.3 ± 0.5 | 1.1 ± 0.3 | 8 ± 3 |
| 2 | 3.8 ± 0.5 | 2.0 ± 0.4 | 0.7 ± 0.2 | 10 ± 3 |
| 3 | 3.0 ± 0.6 | 2.5 ± 0.3 | 1.1 ± 0.2 | 10 ± 5 |
| 4 | 3.1 ± 0.5 | 3.0 ± 0.5 | 0.8 ± 0.2 | 23 ± 10 |
| 5 | 2.5 ± 0.7 | 2.3 ± 0.5 | 1.1 ± 0.4 | 12 ± 7 |
| 6 | 2.8 ± 0.7 | 2.6 ± 0.5 | 1.4 ± 0.3 | 7 ± 2 |

Data are presented as mean ± SD. 
Medical centers: 1, Tianjin Medical University General Hospital; 2, Beijing Friendship Hospital affiliated to Capital Medical University; 3, Beijing Chaoyang Hospital affiliated to Capital Medical University; 4, the Second Affiliated Hospital of Zhejiang University Medical College; 5, Changhai Hospital affiliated to Second Military Medical University; 6, Xijing Hospital affiliated to the Fourth Military Medical University. LOC: loss of consciousness. ROC: recovery of consciousness. Emergence time: from the end of anesthetic drug infusion to ROC.

The agreement between Ai and BIS is shown in the Bland-Altman plot in Figure 1. The mean value of BIS was 58.93 ± 17.00 and the mean value of Ai was 58.36 ± 17.50. The bias between Ai and BIS was -0.59 ± 8.72. The limits of agreement were -17.68 and 16.49, which were, respectively, -30.0% and 28.0% of the mean value of BIS. The percentage error (±2SD/mean) was ±29.6%. The relation of BIS and Ai is shown in Figure 2.

Figure 1: Bland-Altman plot. The bias (mean difference) between Ai and BIS was -0.59. The upper limit (mean difference + 1.96SD) was 16.49, and the lower limit (mean difference - 1.96SD) was -17.68 (n=6391).

Figure 2: Ai and BIS plot. The value of BIS ranged from 18 to 99. The value of Ai ranged from 15 to 99. The correlation coefficient between BIS and Ai was 0.873.

Pk values of BIS and Ai are shown in Table 2. All of the Pk values were greater than 0.5. During ROC, the Pk of BIS was greater than that of Ai (p=0.037). During LOC, there was no significant difference between the Pk of Ai and BIS (p=0.102).

Table 2: Pk values of BIS and Ai during LOC or ROC.

| Pk | Ai | BIS |
|---|---|---|
| LOC | 0.935 ± 0.005 | 0.943 ± 0.005 |
| ROC | 0.918 ± 0.007# | 0.928 ± 0.006# |

Data are presented as mean ± SE. # Difference between BIS and Ai during ROC (p<0.05). 
All of the Pk values were greater than 0.5 (p<0.01). The BIS and Ai values for 50% and 95% LOC and for 5% and 50% ROC were calculated from the estimated logistic regression equation and are shown in Table 3.

Table 3: The values of BIS and Ai at 50%, 95% LOC and 5%, 50% ROC.

|  | Ai | BIS |
|---|---|---|
| 50% LOC | 66.44 | 68.19 |
| 95% LOC | 48.25 | 52.31 |
| 5% ROC | 55.72 | 63.13 |
| 50% ROC | 78.60 | 76.65 |

LOC: loss of consciousness. ROC: recovery of consciousness.

The values of Ai and BIS during LOC and ROC are shown in Tables 4 and 5. Ai changed far more markedly than BIS from LOC to one minute after LOC (-9.13±10.20 versus -5.83±9.63, p<0.001) and from ROC to one minute after ROC (10.88±11.51 versus 5.32±7.53, p<0.001). The values of Ai and BIS from LOC to three minutes after intubation are shown in Table 6. During the deepening of anesthesia after LOC, Ai barely changed, in marked contrast to BIS.

Table 4: The values of Ai and BIS during LOC.

|  | Ai | BIS | Ai - BIS |
|---|---|---|---|
| 1 min before LOC | 62.85 ± 12.68 | 64.01 ± 10.82 | -1.49 ± 9.05 |
| LOC | 60.76 ± 12.37 | 62.18 ± 10.69 | -1.42 ± 9.19 |
| 1 min after LOC | 51.63 ± 11.48 | 56.35 ± 8.98 | -4.27 ± 9.25 |

Data are presented as mean ± SD. Ai changed far more markedly than BIS from LOC to one minute after LOC (-9.13±10.20 versus -5.83±9.63, p<0.001).

Table 5: The values of Ai and BIS during ROC.

|  | Ai | BIS | Ai - BIS |
|---|---|---|---|
| 1 min before ROC | 69.65 ± 15.25 | 72.01 ± 9.76 | -2.35 ± 9.21 |
| ROC | 73.90 ± 13.67 | 75.66 ± 7.99 | -1.75 ± 10.19 |
| 1 min after ROC | 84.78 ± 9.33 | 80.98 ± 5.52 | 3.81 ± 8.31 |

Data are presented as mean ± SD. Ai changed far more markedly than BIS from ROC to one minute after ROC (10.88±11.51 versus 5.32±7.53, p<0.001).

Table 6: The values of Ai and BIS from one minute after remifentanil infusion to one minute after intubation.

|  | Ai | BIS | Ai - BIS |
|---|---|---|---|
| R1 | 51.63 ± 11.48 | 56.35 ± 8.98 | -4.72 ± 9.25 |
| R5 | 51.38 ± 7.50 | 52.95 ± 8.60 | -1.57 ± 6.25 |
| T0 | 50.35 ± 8.26 | 48.92 ± 9.98 | 1.37 ± 8.73 |
| T1 | 49.19 ± 8.21 | 46.71 ± 10.06 | 2.49 ± 8.48 |

Data are presented as mean ± SD. R1: one minute after remifentanil infusion. R5: five minutes after remifentanil infusion. 
T0: the time of intubation. T1: one minute after intubation. During the deepening of anesthesia from R1 to T1, Ai barely changed, in marked contrast to BIS (-2.15±12.25 versus -9.58±11.67, p<0.001). ## 4. Discussion The variation of the target plasma concentrations of propofol for LOC among medical centers (Table 1) was not noticed until the statistical results revealed it. In this protocol, we tried to define LOC as concisely and practicably as possible. Before starting this study, we checked and discussed every detail of the protocol with the anesthesiologists from the different medical centers and performed one case together according to the protocol. While carrying out the study, we kept in communication with each other through a WeChat group. According to the statistical results, the standard deviations within these medical centers are similar, but the mean target plasma concentrations vary considerably, so the differences lie among the medical centers rather than within each center. The lowest concentration is 1.8 μg/ml and the highest is 3.8 μg/ml. This difference of 2 μg/ml corresponds to four additional concentration increments, lasting four minutes and requiring eight additional consciousness checks. Therefore, this large difference arises not only from how LOC may have been checked differently, but also from the different dosages of midazolam (from 0.01 mg/kg to 0.02 mg/kg), the different kinds of TCI pumps using the Marsh model, and so on; there may also be more important contributing factors that we did not identify. As for the data, even with such an obvious difference among medical centers, the trends of the BIS and Ai values are similar during induction, surgery, and emergence. In other words, the quality of anesthesia was maintained well. 
The difference in anesthesia among medical centers was therefore finally regarded as an additional challenge for the agreement test between BIS and Ai, which was not our original intention. The performance of Ai during whole surgeries with TIVA was evaluated in this multicenter study. The protocol included three components: the slow induction, the first hour of surgery, and the normal emergence. During induction, hypnotics, narcotics, and muscle relaxants were administered one by one, and LOC was mainly the result of the accumulating effect of the hypnotics. In contrast, ROC during emergence resulted from the weakening effect of the combination of these components of anesthesia. During surgery, nociceptive stimulation and noise sources such as the electrosurgical knife might interfere with the EEG monitoring. The differences in anesthesia among the six medical centers (Table 1) might also affect the results of EEG monitoring. All of the situations above were used to evaluate the performance of Ai and to compare the performances of Ai and BIS, so the agreement test between Ai and BIS included all the data within this protocol. Limits of agreement between a new and a reference technique of up to ±30% are accepted [12], which is the criterion used here. In this study, the limits of agreement between Ai and BIS are from -30% to 28%, which means that Ai behaves similarly to the BIS index. Pk is a tool for measuring the performance of anesthetic depth indicators. In this study, the Pk values of BIS and Ai were 0.943 and 0.935 during LOC and 0.928 and 0.918 during ROC, which means both BIS and Ai were good indicators of consciousness level. The values of BIS and Ai were 68.19 and 66.44 at 50%LOC, and 76.65 and 78.60 at 50%ROC: similar thresholds for distinguishing consciousness states. Because it takes time to compute BIS or Ai values from the properties of the EEG signal, there is some delay before BIS or Ai reflects the information in the EEG [14]. 
Therefore, the change of BIS or Ai from the moment of LOC to one minute after LOC was taken as the response of BIS or Ai to LOC, and likewise for the change of BIS or Ai during ROC. In this study we found that the change of Ai values during LOC or ROC was greater than that of BIS values (Tables 4 and 5). According to the Ai algorithm, SampEn is the main component indicating the change of conscious state [5]. A similar finding suggests that SampEn is more sensitive to changes of conscious state: Shalbaf et al. designed a DOA index based on SampEn alone, which showed a greater change than SE or RE during LOC and performed better at estimating the effects of sevoflurane [4]. This may mean that Ai inherits the advantage of SampEn. During the slow induction, both the infusion of remifentanil and the administration of rocuronium deepened the anesthesia and prepared the patient for intubation, during which Ai barely changed, in marked contrast to BIS (Table 6). Narcotics in ordinary doses have no noticeable influence on the EEG, but they reduce the change of the EEG during nociceptive stimulation [15], which suggests that remifentanil did not cause the difference. Muscle relaxants have no direct action on the EEG, but they can suppress the activity of the frontal electromyogram, which might interfere with the EEG measurement [15]. However, after induction Ai had already declined markedly, so muscle relaxants may no longer have had a pronounced effect. There are some limitations in this study. During the first surgical hour, we intended to estimate the resistance of Ai to the noise that might interfere with EEG monitoring. However, the duration of interference was short, and such a comparison needs to be processed in real time; the interval between data points in this study was five minutes, which was too long to estimate the resistance of Ai to noise. 
Besides, if the interval between data points had been shorter during induction or emergence, the changes of Ai and BIS at LOC or ROC could have been presented in more detail. Furthermore, with cerebral maturation, the awake resting EEG changes with the age of the child [15], and for a given BIS level, the target concentration of propofol infusion actually decreases as age increases [16]. So the results of this study, based on patients aged 44.8 ± 11.8 years, cannot be extended to children. Compared with propofol, sevoflurane has quite a different EEG profile. During induction with sevoflurane, the EEG shows a biphasic change: an increase in fast rhythms followed by a decrease in fast rhythms with a simultaneous increase in delta activity [17], producing a paradoxical increase in BIS during incremental sevoflurane inhalation. Similarly, the results based on propofol infusion cannot be extrapolated to inhalational anesthetics either. ## 5. Conclusions In this study, the performance of Ai was compared with BIS in six medical centers. Ai was found to have characteristics similar to BIS and demonstrated the advantage of SampEn for indicating consciousness levels. --- *Source: 1014825-2019-03-04.xml*
# Comparative Evaluation of a New Depth of Anesthesia Index in ConView® System and the Bispectral Index during Total Intravenous Anesthesia: A Multicenter Clinical Trial

**Authors:** Yang Fu; Tao Xu; Keliang Xie; Wei Wei; Ping Gao; Huang Nie; Xiaoming Deng; Guolin Wang; Ming Tian; Min Yan; Hailong Dong; Yun Yue

**Journal:** BioMed Research International (2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1014825
---

## Abstract

The performance of a new monitor of the depth of anesthesia (DOA), the Depth of Anesthesia Index (Ai), based on sample entropy (SampEn), 95% spectral edge frequency (95%SEF), and burst suppression ratio (BSR), was evaluated against the Bispectral Index (BIS) during total intravenous anesthesia (TIVA). 144 patients in six medical centers were enrolled. General anesthesia was induced with stepwise-increased target-controlled infusion (TCI) of propofol until loss of consciousness (LOC). During surgery propofol was titrated according to BIS. Both Ai and BIS were recorded. Primary outcomes: the limits of agreement between Ai and BIS were -17.68 and 16.49, which were, respectively, -30.0% and 28.0% of the mean value of BIS. Secondary outcomes: the prediction probability (Pk) of BIS and Ai was 0.943 and 0.935 (p=0.102) during LOC and 0.928 and 0.918 (p=0.037) during recovery of consciousness (ROC). The values of BIS and Ai were 68.19 and 66.44 at 50%LOC, and 76.65 and 78.60 at 50%ROC. The decrease or increase of Ai was significantly greater than that of BIS when consciousness changed (during LOC: -9.13±10.20 versus -5.83±9.63, p<0.001; during ROC: 10.88±11.51 versus 5.32±7.53, p<0.001). The conclusion is that Ai has characteristics similar to those of BIS as a DOA monitor and reveals the advantage of SampEn for indicating consciousness levels. This trial is registered at the Chinese Clinical Trial Registry with ChiCTR-IOR-16009471.

---

## Body

## 1. Introduction

The accurate and noninvasive assessment of DOA is important for anesthesiologists, and several kinds of monitoring devices use the electroencephalogram (EEG) signal to provide such information. The EEG reflects cerebral electrical activity over time, and its changes during anesthesia are nonlinear. Entropy, a concept from thermodynamics, has therefore been adapted to quantify DOA, for example in the response entropy (RE) and state entropy (SE) measures based on spectral entropy.
Because the fast Fourier transform (a linear method) is used at the beginning of the spectral entropy calculation, some valuable information may be missed [1–3]. Recently, SampEn has been used to estimate the complexity and predictability of EEG signals. The conscious EEG tends to be irregular, which means it cannot be predicted from the preceding signal, so SampEn takes a large value. The unconscious EEG tends to be regular, which means it can be predicted from the preceding signal, so SampEn takes a small value [4]. Indexes of DOA based on SampEn have shown better performance than RE, SE, and BIS in predicting consciousness levels [3, 4].

It was suggested in a previous study that frequency domain analysis of the EEG, such as 95%SEF, is suitable for discriminating different anesthesia levels, while time domain analysis, such as BSR, can quantify the extent of deep anesthesia [5]. Based on these three EEG parameters (SampEn, 95%SEF, and BSR), a new index of DOA in the ConView® system was designed by Pearlcare Medical Technology Company Limited (Zhejiang, China). This index, Ai, is calculated with an algorithm based on a decision tree and least squares [5]. The values of SampEn, 95%SEF, and BSR are treated as inputs, and four anesthetic levels assessed by experts are treated as outputs; with both inputs and outputs, the decision tree is trained and modified [5]. Within each anesthetic level, the relationship between the Ai values estimated by experts and the values of SampEn, 95%SEF, and BSR is almost linear and is fitted by least squares. Ai ranges from an isoelectric EEG (0) through a deep hypnotic state (up to 40), general anesthesia (40-60), and light/moderate sedation (60-80) to the awake state (80-99), closely mirroring the BIS scale.

BIS is the most widely used DOA-monitoring system and is approved for monitoring hypnosis by the Food and Drug Administration (FDA). It can be a useful monitoring guide for the titration of propofol [6, 7].
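To make the SampEn idea above concrete, here is a minimal sketch of the standard sample entropy computation. The parameters (embedding dimension m = 2, tolerance r = 0.2 times the signal SD) are common defaults, not the ConView® settings, and this is not the Ai algorithm itself:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn of a 1-D signal: -log(A/B), where B counts template matches
    of length m and A counts matches of length m + 1 (self-matches excluded,
    Chebyshev distance, tolerance r = r_factor * std)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(mm):
        # All overlapping templates of length mm.
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            # Distance to every later template, so each pair is counted once.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A regular (predictable) signal scores lower than white noise,
# mirroring the unconscious-vs-conscious EEG contrast described above.
t = np.linspace(0, 8 * np.pi, 500)
regular = np.sin(t)
rng = np.random.default_rng(0)
noisy = rng.standard_normal(500)
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```

The ordering of the two outputs illustrates the text's point: predictable signals yield small SampEn, irregular signals a large one.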
It is also the only such system studied in large randomized controlled trials, which identified an approximately 80% reduction in the incidence of recall after anesthesia [7]. But it will not predict the exact moment consciousness returns [8]. Given the improvement of SampEn in predicting consciousness levels, Ai might perform better than BIS in monitoring DOA. In this study we evaluated the performance of Ai in predicting the anesthetic state, compared with BIS, during TIVA in six medical centers in China.

## 2. Materials and Methods

This study was approved by the Ethics Committee of Beijing Chaoyang Hospital affiliated to Capital Medical University (No. 2016-ke-100) and registered at the Chinese Clinical Trial Registry (No. ChiCTR-IOR-16009471). This multicenter comparative evaluation was carried out from November 2016 to February 2017. The side of the forehead on which the EEG electrode strips for Ai or BIS were positioned was randomized.

After informed consent, 144 patients (ASA physical status I-II, BMI 18.5-24.9 kg/m2) aged 18-65 years and receiving elective surgery under general anesthesia with an estimated surgical duration of one to three hours were enrolled consecutively across the six medical centers: Tianjin Medical University General Hospital, Beijing Friendship Hospital affiliated to Capital Medical University, Beijing Chaoyang Hospital affiliated to Capital Medical University, the Second Affiliated Hospital of Zhejiang University Medical College, Changhai Hospital Affiliated to Second Military Medical University, and Xijing Hospital affiliated to The Fourth Military Medical University.
None of these patients had a history of psychiatric or neurological disorders; impaired cardiac, pulmonary, hepatic, or renal function; sleep apnea hypopnea syndrome; sedative or analgesic drug therapy or abuse; or contraindication for or allergy to any sedative or analgesic drugs.

EEG electrode strips for recording BIS (BIS XP, system revision 3.31, smoothing rate 15 s, Aspect Medical Systems) and Ai (ConView® system, software 2.4.1, Pearlcare Medical Technology Company Limited) were positioned on the forehead, which was cleaned with an alcohol swab; the side was randomized by random numbers from the statistical software. Electrocardiogram, noninvasive blood pressure (NIBP), pulse oximetry, and end-tidal CO2 were also monitored. One large vein of the forearm was cannulated with an 18G indwelling needle to administer drugs.

Oxygen was given by mask. Without premedication, a slow induction was started with an i.v. push of 0.01-0.02 mg/kg midazolam. Propofol (10 mg/ml) was administered i.v. using TCI (Marsh model). Infusion was started at a target plasma concentration of 0.5 μg/ml, with further 0.5 μg/ml increases of the target concentration every minute until LOC [9]. LOC was defined as no response to verbal commands during induction and was tested every thirty seconds. After LOC, remifentanil was applied at 0.2 μg/kg/min. Five minutes later, 0.6 mg/kg rocuronium was given, and intubation of the trachea was performed one minute later. During surgery, the target plasma concentration of propofol was adjusted to maintain the BIS value between 40 and 60, and the infusion rate of remifentanil was titrated to keep NIBP within ±20% of the baseline NIBP. Rocuronium was added p.r.n. (pro re nata) until thirty minutes before the estimated end of surgery, when 0.1-0.2 μg/kg sufentanil was given as the initial postoperative analgesia. After the surgery was finished, the propofol and remifentanil infusions were stopped at the same time.
ROC was defined as opening the eyes following commands and was tested every minute during emergence.

The values of BIS and Ai were recorded before induction; every minute while the target concentration of propofol increased until LOC and during the first five minutes of remifentanil infusion; at the time of intubation; and one minute and three minutes after intubation. During the first surgical hour, the values of BIS and Ai were recorded every five minutes and whenever the infusion rates of propofol or remifentanil were changed based on BIS or NIBP. During emergence, the values of BIS and Ai were recorded every minute until ROC and one to three minutes after ROC. The target plasma concentration of propofol was recorded at LOC, the end of surgery, and ROC. During data collection, the anesthesiologist estimated the patient's states and recorded the BIS and Ai values at the same time. After data collection, each enrolled patient was assigned a specific number, and no other patient identity information was involved during data analysis.

The primary outcome was the Bland-Altman agreement test between Ai and BIS. Secondary outcomes were the Pk of BIS and Ai during LOC or ROC and the values of BIS and Ai at 50%LOC, 95%LOC, 5%ROC, and 50%ROC. The sample size for the Bland-Altman agreement test is suggested to be more than one hundred [10]. It was estimated from a previous (unpublished) study (n=124) in which the performance of Ai was evaluated against Narcotrend, together with the block randomization, across the six medical centers, of the side of the forehead used for the Ai EEG electrode strip. The smallest block in this randomization is four, so the sample size in each medical center is 24 and the total sample size is 144.

The Bland-Altman agreement test compares two measurements by bias and precision statistics. The bias is the mean difference between the two comparative measurements.
Together with the standard deviation of all the individual differences, the 95% confidence limits are estimated and referred to as the limits of agreement, which are used to judge the precision and acceptability of one measurement against another [10, 11]. Acceptance of a new measurement should rely on limits of agreement of no more than 30% [12]. Pk was used to evaluate how accurately Ai and BIS distinguish the conscious and unconscious states [13]. A value of Pk=1.0 means that the index always predicts the conscious state correctly, and a value of Pk=0.5 means that the index predicts the conscious state no better than a 50/50 chance. Pk and its standard error were calculated with the jack-knife method using the custom spreadsheet PKMACRO in Microsoft Excel 2016 [13]. Pk of LOC was based on all the data during induction, and Pk of ROC was calculated using all the data during emergence. Pk was compared with 0.5 using Student's t-test. The difference between the two Pk values of BIS and Ai was studied with a paired t-test using another spreadsheet, PKDMACRO [13]. The p values in studying Pk were calculated with the TDIST function in Microsoft Excel 2016. The relationships between the conscious state and the BIS or Ai values were also defined using logistic regression. The BIS and Ai values for 50% or 95% LOC were calculated from the estimated regression equation based on all data during induction, and the BIS and Ai values for 5% or 50% ROC were based on all data during emergence. During LOC and ROC, the changes of the Ai or BIS mean values were studied with the Wilcoxon test. Data are presented as mean ± SD if not otherwise stated.

## 3. Results

Twenty-four patients in each medical center (144 in total) accomplished this protocol safely. 41.7% were male and 58.3% were female. The age was 44.8 ± 11.8 years and the BMI was 22.8 ± 2.2 kg/m2.
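The Bland-Altman computation just described (bias plus 1.96 SD limits of agreement, judged against the ±30% criterion) can be sketched as follows. The paired readings here are synthetic, for illustration only, not the trial data:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between paired measurements:
    limits = bias +/- 1.96 * SD of the pairwise differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired readings (synthetic, not the trial data).
rng = np.random.default_rng(1)
bis = rng.uniform(30, 90, 200)
ai = bis + rng.normal(-0.5, 8.0, 200)   # Ai tracking BIS with noise
bias, lower, upper = bland_altman_limits(ai, bis)
# Acceptability criterion from the paper: limits within +/-30% of mean BIS.
acceptable = max(abs(lower), abs(upper)) <= 0.30 * bis.mean()
print(round(bias, 2), round(lower, 2), round(upper, 2), acceptable)
```

With n = 6391 paired values, as in this study, the same three numbers (bias, lower limit, upper limit) fully describe the agreement plot in Figure 1.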
The left side of the forehead was used for the Ai EEG electrode strips in 52.1% of patients and the right side in 47.9%. On average, the total surgical time was 97.3 ± 35 min. The emergence time, from the end of anesthetic drug infusion to ROC, was 11.8 ± 7.8 min. The target plasma concentrations of propofol at LOC, the end of surgery, and ROC, and the emergence time for each medical center are shown in Table 1.

Table 1: The target plasma concentrations of propofol and emergence time for each medical center.

| Medical center | Propofol at LOC (μg/ml) | Propofol at end of surgery (μg/ml) | Propofol at ROC (μg/ml) | Emergence time (min) |
| --- | --- | --- | --- | --- |
| 1 | 1.8 ± 0.4 | 2.3 ± 0.5 | 1.1 ± 0.3 | 8 ± 3 |
| 2 | 3.8 ± 0.5 | 2.0 ± 0.4 | 0.7 ± 0.2 | 10 ± 3 |
| 3 | 3.0 ± 0.6 | 2.5 ± 0.3 | 1.1 ± 0.2 | 10 ± 5 |
| 4 | 3.1 ± 0.5 | 3.0 ± 0.5 | 0.8 ± 0.2 | 23 ± 10 |
| 5 | 2.5 ± 0.7 | 2.3 ± 0.5 | 1.1 ± 0.4 | 12 ± 7 |
| 6 | 2.8 ± 0.7 | 2.6 ± 0.5 | 1.4 ± 0.3 | 7 ± 2 |

Data are presented as mean ± SD. Medical center: 1, Tianjin Medical University General Hospital; 2, Beijing Friendship Hospital affiliated to Capital Medical University; 3, Beijing Chaoyang Hospital affiliated to Capital Medical University; 4, the Second Affiliated Hospital of Zhejiang University Medical College; 5, Changhai Hospital Affiliated to Second Military Medical University; 6, Xijing Hospital affiliated to the Fourth Military Medical University. LOC: loss of consciousness. ROC: recovery of consciousness. Emergence time: from the end of anesthetic drug infusion to ROC.

The agreement test between Ai and BIS is shown in the Bland-Altman plot in Figure 1. The mean value of BIS was 58.93 ± 17.00 and the mean value of Ai was 58.36 ± 17.50. The bias between Ai and BIS was -0.59 ± 8.72. The limits of agreement were -17.68 and 16.49, which were, respectively, -30.0% and 28.0% of the mean value of BIS. The percentage error (±2SD/mean) was ±29.6%. The relation of BIS and Ai is shown in Figure 2.

Figure 1: Bland-Altman plot. The bias (mean difference) between Ai and BIS was -0.59.
The upper limit (mean difference + 1.96SD) was 16.49 and the lower limit (mean difference - 1.96SD) was -17.68 (n=6391).

Figure 2: Ai and BIS plot. The value of BIS ranged from 18 to 99 and the value of Ai from 15 to 99. The correlation coefficient between BIS and Ai was 0.873.

Pk values of BIS and Ai are shown in Table 2. All of the Pk values were greater than 0.5. During ROC, the Pk of BIS was greater than that of Ai (p=0.037). During LOC, there was no significant difference between the Pk of Ai and BIS (p=0.102).

Table 2: Pk values of BIS and Ai during LOC or ROC.

| Pk | Ai | BIS |
| --- | --- | --- |
| LOC | 0.935 ± 0.005 | 0.943 ± 0.005 |
| ROC | 0.918 ± 0.007# | 0.928 ± 0.006# |

Data are presented as mean ± SE. #Difference between BIS and Ai during ROC (p<0.05). All of the Pk values were greater than 0.5 (p<0.01).

The BIS and Ai values for 50%, 95% LOC and 5%, 50% ROC were calculated from the estimated logistic regression equation and are shown in Table 3.

Table 3: The values of BIS and Ai at 50%, 95% LOC and 5%, 50% ROC.

| | Ai | BIS |
| --- | --- | --- |
| 50% LOC | 66.44 | 68.19 |
| 95% LOC | 48.25 | 52.31 |
| 5% ROC | 55.72 | 63.13 |
| 50% ROC | 78.60 | 76.65 |

LOC: loss of consciousness. ROC: recovery of consciousness.

The values of Ai and BIS during LOC and ROC are shown in Tables 4 and 5. Ai changed far more markedly than BIS from LOC to one minute after LOC (-9.13±10.20 versus -5.83±9.63, p<0.001) and from ROC to one minute after ROC (10.88±11.51 versus 5.32±7.53, p<0.001). The values of Ai and BIS from LOC to three minutes after intubation are shown in Table 6. During the process of deepening anesthesia after LOC, Ai barely changed, which was quite different from BIS.

Table 4: The values of Ai and BIS during LOC.

| | Ai | BIS | Ai-BIS |
| --- | --- | --- | --- |
| 1 min before LOC | 62.85 ± 12.68 | 64.01 ± 10.82 | -1.49 ± 9.05 |
| LOC | 60.76 ± 12.37 | 62.18 ± 10.69 | -1.42 ± 9.19 |
| 1 min after LOC | 51.63 ± 11.48 | 56.35 ± 8.98 | -4.27 ± 9.25 |

Data are presented as mean ± SD. Ai changed far more markedly than BIS from LOC to one minute after LOC (-9.13±10.20 vs -5.83±9.63, p<0.001).

Table 5: The values of Ai and BIS during ROC.

| | Ai | BIS | Ai-BIS |
| --- | --- | --- | --- |
| 1 min before ROC | 69.65 ± 15.25 | 72.01 ± 9.76 | -2.35 ± 9.21 |
| ROC | 73.90 ± 13.67 | 75.66 ± 7.99 | -1.75 ± 10.19 |
| 1 min after ROC | 84.78 ± 9.33 | 80.98 ± 5.52 | 3.81 ± 8.31 |

Data are presented as mean ± SD. Ai changed far more markedly than BIS from ROC to one minute after ROC (10.88±11.51 vs 5.32±7.53, p<0.001).

Table 6: The values of Ai and BIS from one minute after remifentanil infusion to one minute after intubation.

| | Ai | BIS | Ai-BIS |
| --- | --- | --- | --- |
| R1 | 51.63 ± 11.48 | 56.35 ± 8.98 | -4.72 ± 9.25 |
| R5 | 51.38 ± 7.50 | 52.95 ± 8.60 | -1.57 ± 6.25 |
| T0 | 50.35 ± 8.26 | 48.92 ± 9.98 | 1.37 ± 8.73 |
| T1 | 49.19 ± 8.21 | 46.71 ± 10.06 | 2.49 ± 8.48 |

Data are presented as mean ± SD. R1: one minute after remifentanil infusion. R5: five minutes after remifentanil infusion. T0: the time of intubation. T1: one minute after intubation. During the process of deepening anesthesia from R1 to T1, Ai barely changed, which was quite different from BIS (-2.15±12.25 vs -9.58±11.67, p<0.001).

## 4. Discussion

The variation among medical centers of the target plasma concentrations of propofol at LOC (Table 1) was not noticed until the statistical results revealed it. In this protocol, we tried to define LOC as concisely and practicably as possible. Before starting this study, we checked and discussed every detail of the protocol with the anesthesiologists from the different medical centers and performed one case together according to this protocol. While carrying out this study, we kept in communication with each other in a WeChat group.

According to the statistical results, the standard deviations among these medical centers are similar, but the mean target plasma concentrations vary considerably. So the differences should lie among medical centers and not within each medical center. The lowest concentration is 1.8 μg/ml and the highest is 3.8 μg/ml. A difference of 2 μg/ml corresponds to four concentration increments, lasting four minutes and requiring eight consciousness checks.
Therefore, this large difference comes not only from how we might have checked LOC differently, but also from the different dosages of midazolam (from 0.01 mg/kg to 0.02 mg/kg), the different kinds of TCI pumps with the Marsh model, and so on. Perhaps there is something more important that we did not find out or that we missed.

Regarding the data, even with such an obvious difference among medical centers, the trends of the BIS and Ai values are similar during induction, surgery, and emergence. In other words, the quality of anesthesia was maintained well. So the difference in anesthesia among medical centers was finally considered a new challenge for the agreement test between BIS and Ai, which was not our original intention.

The performance of Ai during whole surgeries with TIVA was evaluated in this multicenter study. The protocol included three components: the slow induction, the first hour of surgery, and the normal emergence. During induction, hypnotics, narcotics, and muscle relaxants were administered one by one, and LOC was mainly the result of the accumulating effect of hypnotics. In contrast, ROC during emergence resulted from the weakening effect of the combination of these components of anesthesia. During surgery, nociceptive stimulations and some kinds of noise, such as the electrosurgical knife, might interfere with the EEG monitoring. The differences in anesthesia among the six medical centers (Table 1) might also affect the results of EEG monitoring. All the different situations above were used to evaluate the performance of Ai and to compare the performances of Ai and BIS. So the agreement test between Ai and BIS included all the data within this protocol. Limits of agreement between a new and a reference technique of up to ±30% are accepted [12], which is the criterion. In this study, the limits of agreement between Ai and BIS are from -30% to 28%, which means that Ai has characteristics similar to those of the BIS index.
Pk is a tool to measure the performance of anesthetic depth indicators. In this study, the Pk values of BIS and Ai were 0.943 and 0.935 during LOC and 0.928 and 0.918 during ROC, which means both BIS and Ai were good indicators of consciousness levels. The values of BIS and Ai were 68.19 and 66.44 at 50%LOC, and 76.65 and 78.60 at 50%ROC, which are similar numbers for distinguishing consciousness states.

Because it takes time to compute BIS or Ai values from the properties of the EEG signal, there is some delay before BIS or Ai reflects the EEG [14]. Therefore, the change of BIS or Ai from the moment of LOC to one minute after LOC was considered the response of BIS or Ai to LOC, and likewise for the change of BIS or Ai during ROC. In this study we found that the change of the Ai values during LOC or ROC was greater than that of the BIS values (Tables 4 and 5). According to the algorithm of Ai, SampEn is the main component indicating the change of conscious state [5]. There is a similar finding which suggests that SampEn is more sensitive to the change of conscious state: Shalbaf et al. designed an index of DOA based on SampEn only, which showed a greater change than SE or RE during LOC and had better performance in estimating the effects of sevoflurane [4]. This might mean that Ai inherits the advantage of SampEn.

During the slow induction, both the infusion of remifentanil and the administration of rocuronium deepened the anesthesia and prepared the patient for intubation, during which Ai barely changed and was quite different from BIS (Table 6). Narcotics in their ordinary doses have no noticeable influence on the EEG, but they reduce the change of the EEG during nociceptive stimulations [15], which suggests that remifentanil does not cause the difference. Muscle relaxants have no direct action on the EEG, but they can suppress the activity of the frontal electromyogram, which might interfere with the EEG measurement [15].
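For a binary conscious/unconscious state, the Pk statistic discussed above reduces to the probability that the index correctly orders a randomly chosen conscious/unconscious pair, with ties counting one half (equivalent in this case to the ROC area under the curve). A minimal sketch of that point estimate, not the jack-knife PKMACRO implementation:

```python
def prediction_probability(values, states):
    """Pk point estimate for a binary state (1 = conscious, 0 = unconscious):
    fraction of conscious/unconscious pairs the index orders correctly,
    counting ties as one half. For two states this equals the ROC AUC."""
    awake = [v for v, s in zip(values, states) if s == 1]
    asleep = [v for v, s in zip(values, states) if s == 0]
    concordant = ties = 0
    for a in awake:
        for u in asleep:
            if a > u:        # higher index value for the conscious state
                concordant += 1
            elif a == u:
                ties += 1
    return (concordant + 0.5 * ties) / (len(awake) * len(asleep))

# A perfectly separating index gives Pk = 1.0; a constant index gives 0.5.
print(prediction_probability([90, 85, 40, 35], [1, 1, 0, 0]))  # 1.0
print(prediction_probability([50, 50, 50, 50], [1, 1, 0, 0]))  # 0.5
```

These two extremes match the interpretation given in the Methods: Pk = 1.0 means the index always orders the states correctly, and Pk = 0.5 means chance performance.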
However, after anesthesia induction Ai had already declined markedly, so muscle relaxants may no longer have a pronounced effect.

There are some limitations in this study. During the first surgical hour, we intended to estimate the resistance of Ai to the noise that might interfere with the EEG monitoring. However, the interfering episodes were short, so the comparison would need to be processed in real time; the interval between data points in this study was five minutes, which was too long to estimate the resistance of Ai to the noise. Besides, if the interval between data points had been shorter during induction or emergence, the change of Ai and BIS at LOC or ROC might have been presented in more detail.

Furthermore, as the brain matures, the awake resting EEG changes with the child's age [15], and for a given BIS level, the target concentration of propofol infusion actually decreases as age increases [16]. So the results of this study, based on patients aged 44.8 ± 11.8 years, cannot be extended to children.

Compared with propofol, sevoflurane has quite a different EEG profile. During induction with sevoflurane, the EEG shows a biphasic change: an increase in fast rhythms followed by a decrease in fast rhythms with a simultaneous increase in delta activity [17], producing a paradoxical increase in BIS during incremental sevoflurane inhalation. So, similarly, the results based on propofol infusion cannot be extrapolated to inhalational anesthetics either.

## 5. Conclusions

In this study, the performance of Ai was compared with BIS in six medical centers. Ai was found to have characteristics similar to those of BIS and revealed the advantage of SampEn for indicating consciousness levels.

---

*Source: 1014825-2019-03-04.xml*
# Structural Damage Identification of Pipe Based on GA and SCE-UA Algorithm

**Authors:** Yaojin Bao; He Xia; Zhenming Bao; Shuiping Ke; Yahui Li

**Journal:** Mathematical Problems in Engineering (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101483

---

## Abstract

The structure of an offshore platform is very large and prone to cracking caused by a variety of environmental factors, including wind, waves, and ice, and it is also threatened by unexpected events such as earthquakes, typhoons, tsunamis, and ship collisions. Thus, pipes, as a main part of a jacket offshore platform, often develop cracks. However, a crack is difficult to detect because its location is unknown. In this paper, the genetic algorithm (GA) and the SCE-UA algorithm are each used to detect cracks. In the experiment, five damage cases of the pipe in the platform model are intelligently identified by GA and SCE-UA. The network inputs are the differences between the strain mode shapes. The results of the two algorithms for structural damage diagnosis show that both have high identification accuracy and good adaptability; furthermore, the error of the SCE-UA algorithm is smaller. The results also suggest that structural damage of pipes can be identified by intelligent algorithms.

---

## Body

## 1. Introduction

Nondestructive testing for fault diagnosis can detect whether a structure has flaws without disassembling or destroying it, because the properties of the material change with the flaws. General nondestructive testing methods, such as the impact-echo, infrared, and ultrasonic methods, are all local damage detection technologies that first need to examine the damaged points of the structure. Although these technologies do not need many instruments and their results are accurate, they require much time and high cost to detect structural damage.
In addition, they can hardly detect and evaluate invisible components of large-scale complicated structures completely and accurately. Heuristic algorithms are often a first choice for solving complicated problems [1–4]. Thus, this paper attempts to identify structural damage in pipes with intelligent algorithms.

One of the most important problems in damage identification is determining the damage signature. The strain damage index is more sensitive than the displacement damage index; therefore, strain modes are more widely applied. Hillary and Ewins [5] proposed the concept of strain modes and applied the strain transfer function to the identification of exciting forces. Staker [6] utilized the strain transfer function to estimate fatigue lifetime. Bernasconi and Ewins [7] and Yam et al. [8] derived and discussed the theory of strain modes using the displacement modal differential operation method. Tsang [9] used an element formulation to verify the correlation theory of strain modes by numerical simulation and experiment. Hong et al. [10] used the property of the wavelet transform that it can estimate the Lipschitz exponent, combining the wavelet transform with the Lipschitz exponent to estimate damage. Ren et al. [11] applied the wavelet transform and the Lipschitz exponent to locate damage and identify its degree under a moving load. Kirkegaard and Rytter [12] took advantage of the frequency change before and after damage, applying BP neural networks to locate and identify the damage of a steel beam. Mitsuru et al. [13] took the relative displacement and relative speed between structure layers as inputs of the network and the recovery ability between layers as output, using data from a seven-story steel structure before and after repair to verify the effectiveness of the proposed method. Ruotolo and Surace [14] put BP neural networks to use in diagnosing the damage of slab constructions. Masri et al.
[15] proved that the BP neural network is a powerful tool for solving typical structural dynamic system identification problems. Chiang and Lai [16] conducted related research on structural damage identification using GA: first, the approximate location of a structural damage is estimated by using the modal residual force; then, the possibility of nonunique recognition results is reduced by using GA; finally, the possible damage location is determined by using an optimization method. Chou and Ghaboussi [17] regarded a damage problem as an optimization problem and adopted GA to solve it; the cross-sectional area and the change of the structural modulus of elasticity could be estimated from static displacement measurements at the degrees of freedom. Different from other algorithms, GA finds the optimum value of the objective function by searching many points. It is found that GA can identify structural damage even with little information. Yi and Liu [18] employed a weighted combination of vibration mode errors and frequency errors in numerical simulations of damage in a fixed beam, a five-span continuous beam, and a three-span ten-story frame. They considered the global search ability of GA useful for identifying structural damage. Zhu and Xiao [19] defined the summed relative error between the stress test and the stress analysis at all points of the structure as the objective function and imposed inequality constraint conditions and a penalty function; they identified the damage of the Zhaobaoshan bridge.

The effectiveness of structural damage identification based on GA depends on the stability of the objective function and the algorithm. Considering that GA can perform an optimization search based on multiple parameters, an objective function based on multiple sources of information can enhance the accuracy of damage identification to some degree. This paper adopts an intelligent algorithm based on strain modes to identify the damage of a nonpenetrative crack in a platform pipe.
Taking five damage cases of the platform as an example, this paper used GA and the SCE-UA algorithm, which has a better parallel effect, to obtain the damage degree outputs of the platform model. At the same time, the two methods were evaluated using the same data.

## 2. Improved GA for Pipe Structure Damage Identification

### 2.1. Coarse-Grained Strategies

GA has disadvantages in practical applications, such as premature convergence and low search efficiency, so the eventual result can be a local optimum rather than the global optimum. For this reason, this paper draws upon previous work and introduces a genetic algorithm designed to prevent inbreeding. The coarse-grained genetic algorithm is also known as the MIMD or distributed genetic algorithm. Individuals are exchanged between the islands, which is called migration. The division of the population and the migration strategy are the pivotal issues of the coarse-grained genetic algorithm.

If the termination conditions are the same, the serial genetic algorithm and the coarse-grained genetic algorithm have different iteration times and numbers of iterations. That is because the genetic operators of the serial genetic algorithm operate in a local environment. At present, the coarse-grained genetic algorithm faces some difficulties, the most important of which is how to determine the right migration policies.

### 2.2. Parallel Strategy

Step 1 (Migration Operation). To improve path diversity, new paths need to be explored so that better solutions can be obtained. Thus, a migration operation is introduced, meaning that some excellent individuals are migrated to other subgroups in search of more optimal paths [20]. When external individuals immigrate into a stable new local environment, some individuals of the original subgroup will be stimulated by the environment and make a leap of progress, which is very similar to nature.

Step 2 (Convergence Judgment).
If the operation reaches the maximum number of iterations or satisfies the convergence condition, then exit; otherwise, return to Step 1. The specific flow chart is shown in Figure 1.

Figure 1: Flow chart of the coarse-grained parallel genetic algorithm.

### 2.3. GA Parameters Determination for Damage Identification

(1) Design Variable. The damage degree of cell αi (i denotes the cell number) is treated as a design variable. If αi=0, the cell has no damage. The number of design variables should equal the number of possibly damaged cells.

(2) Fitness Function. Because this is a minimization problem, GA performs the selection operation according to the individual fitness value: the larger the fitness value, the greater the probability of being selected into the next generation. Therefore, a fitness function should be used to transform the individual objective function values so that, after the transformation, the fitness value is larger when the objective function value is smaller. The fitness function is as follows:

$$\text{fitness}=\frac{1}{1+\sum_{i=1}^{N}\sum_{j=1}^{M}\left|u_{ij}^{m}-u_{ij}^{a}\right|} \tag{1}$$

(3) Genetic Manipulation. This example uses real coding to describe the individuals, and the value of each gene of a chromosome is viewed as the damage degree of the corresponding cell. Assume that there is a line of a certain length; each parent corresponds to a part of the line in proportion to its fitness value. The algorithm moves along the line in steps of the same size; each step determines the selected parent from its landing position, and the first step is a uniform random value smaller than the step size. Then, the mutation and crossover operations are fulfilled by creating a new binary vector: if a digit of the vector is 1, the gene comes from the first parent, and if it is 0, the gene comes from the second parent; these genes are merged to form a new individual.

(4) Parallel Strategies. First, the individual similarity of the current population should be calculated.
If the individual similarity of the current population is less than the minimum threshold, the differences between individuals are small and the population is close to the global optimum target value; to increase the diversity of the population, the current population then accepts an individual from the other population with maximum similarity to the current one. On the other hand, when the individual similarity is greater than the maximum threshold, the differences between individuals are large, meaning that the population is far from the global optimum; the population then accepts an optimal individual from another population to hasten convergence. This migration strategy also saves the optimal individual of each population directly into the next generation.

## 2.1. Coarse-Grained Strategies

In practical applications, GA has disadvantages such as premature convergence and low search efficiency, so the eventual result may be a local rather than a global optimal solution. For this reason, this paper follows earlier work and introduces the coarse-grained genetic algorithm to prevent breeding between close relatives. The coarse-grained genetic algorithm is also known as the MIMD or distributed genetic algorithm. Individuals are exchanged between the islands, which is called migration. The division of the population and the migration strategy are the pivotal issues of the coarse-grained genetic algorithm.

Under the same termination conditions, the serial genetic algorithm and the coarse-grained genetic algorithm differ in iteration time and iteration count, because the genetic operators of the serial genetic algorithm act in a single local environment. At present, the most important difficulty of the coarse-grained genetic algorithm is determining the right migration policy.

## 2.2. Parallel Strategy

Step 1 (Migration Operation). To improve path diversity, new paths need to be explored so that better solutions can be obtained.
Thus, the migration operation is introduced, which means that some excellent individuals are migrated to other subgroups in search of more optimal paths [20]. When external individuals immigrate into a stable new local environment, some individuals of the original subgroup are stimulated by the environment and make a leap of progress, which is very similar to what happens in nature.

Step 2 (Convergence Judgment). If the operation reaches the maximum number of iterations or satisfies the convergence condition, the algorithm terminates; otherwise, it returns to Step 1. The flow chart is shown in Figure 1.
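The similarity-driven migration policy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the spread metric, the thresholds `d_min`/`d_max`, and the scalar real-coded individuals (larger value = fitter) are all assumptions made for the sketch.

```python
def population_spread(pop):
    """The paper's 'individual similarity' behaves like a spread measure:
    small when individuals are alike, large when they differ."""
    mean = sum(pop) / len(pop)
    return sum(abs(x - mean) for x in pop) / len(pop)

def migrate(islands, d_min=0.5, d_max=5.0):
    """One migration round between island populations; thresholds are
    illustrative. Returns new island populations without mutating inputs."""
    new = [list(pop) for pop in islands]
    for k, pop in enumerate(islands):
        others = [p for j, p in enumerate(islands) if j != k]
        d = population_spread(pop)
        if d < d_min:
            # Differences are small and the population is near a candidate
            # optimum: accept an individual from the island whose spread is
            # most similar to this one, to increase diversity.
            donor = min(others, key=lambda p: abs(population_spread(p) - d))
            new[k][0] = donor[0]
        elif d > d_max:
            # Differences are large and the population is far from the global
            # optimum: accept another island's best individual to hasten
            # convergence (and keep elites across generations).
            donor = max(others, key=max)
            new[k][new[k].index(min(new[k]))] = max(donor)
    return new
```

A converged island (spread 0) pulls in a migrant for diversity, while a widely scattered island adopts a neighbor's best individual.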
## 3. SCE-UA Algorithm for Damage Identification of Pipe Structure

### 3.1. Principle and Description of SCE-UA Algorithm

The SCE-UA algorithm is a global optimization algorithm that integrates the advantages of the random search method, the simplex method, clustering analysis, biological evolution, and so on. It can effectively deal with objective functions that have rough response surfaces and insensitive regions [21]. Moreover, the algorithm is not trapped by local minimum points [22]. It combines deterministic complex search techniques with the biological principle of competitive evolution in nature, and its key component is the competitive complex evolution (CCE) algorithm.
In the CCE algorithm, each vertex of a complex is a potential parent and can be involved in producing the next generation. The subcomplexes are built by random selection, which makes the search of the feasible region more thorough.

### 3.2. Parallel Strategy

The SCE-UA algorithm has high intrinsic parallelism. Its process conforms to the Master-Slave pattern, so it can be parallelized easily without changing its structure. In the Master-Slave pattern, the Master process performs the sample-space generation, initialization, complex sorting, and other global shuffling operations, while the Slave processes perform the evolutionary operations. The parallel SCE-UA scheme based on the Master-Slave pattern is shown in Figure 2.

Figure 2 Parallel SCE-UA algorithm based on Master-Slave strategies.

First, the Master program performs the initialization, inputs the model data, and randomly generates s sample points in the sample space. Then, the samples are sorted according to their objective function values. Finally, the complexes obtained from the partition are delivered to the Slave processes. If the number of processors n > p (the number of complexes), one processor is assigned to each Slave; otherwise, a processor is assigned more than one Slave. Each Slave process then divides its complex into subcomplexes that generate the next generation. When the evolutionary operation is completed, the Slave processes transfer the results back to the Master process, which performs the mixing step. These steps are repeated until the results converge.
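The Master's sort-and-partition step can be sketched as below. The "deal every p-th ranked point into a complex" rule follows the standard SCE-UA partition scheme; the `evolve` step is a heavily simplified stand-in for CCE (a single simplex-style move on scalar points), and all names are illustrative.

```python
def partition_into_complexes(points, objective, p):
    """Master step: sort the s sample points by objective value (ascending)
    and deal them out so point k of the ranked list joins complex k mod p;
    each complex then spans both good and poor regions of the sample."""
    ranked = sorted(points, key=objective)
    return [ranked[k::p] for k in range(p)]

def evolve(cx, objective):
    """Stand-in for a Slave's CCE evolution: pull the worst vertex of the
    complex toward the best one (a crude simplex-style reflection)."""
    cx = sorted(cx, key=objective)
    cx[-1] = (cx[-1] + cx[0]) / 2.0
    return cx

def sce_round(points, objective, p):
    """One shuffled-complex round: partition, evolve each complex (the part
    the Slaves would run in parallel), then re-mix for the next round."""
    complexes = partition_into_complexes(points, objective, p)
    return [x for cx in complexes for x in evolve(cx, objective)]
```

In a real Master-Slave deployment the list comprehension in `sce_round` would be distributed across worker processes; here it runs serially for clarity.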
## 4. Applying GA and SCE-UA Algorithm for Damage Identification of Pipe Structure

### 4.1. Input Parameters of the Algorithms

GA and the SCE-UA algorithm require an objective function when they are applied to damage identification of pipe structures.
At present, structural modal parameters (mode shapes and frequencies) are widely adopted, but the strain mode can achieve higher accuracy [8], so this paper chooses the strain mode as the input parameter. The displacement mode calculated by the finite element method can be transformed into the strain mode with high accuracy. The objective function of GA and the SCE-UA algorithm is the difference in strain modes between the intact and damaged structures.

### 4.2. Damage Degree Identification and Evaluation of Pipe Structure

Generally, damage reduces structural stiffness. Assuming there are R variables, structural damage is described by

$$\alpha_i \in (0,1), \quad i = 1, \ldots, R, \tag{2}$$

where αi is the damage degree of cell i, so the stiffness matrix of cell i is given by (3):

$$K_{ei}^{d} = \alpha_i K_{ei}^{0}, \tag{3}$$

where $K_{ei}^{0}$ stands for the stiffness matrix in the undamaged condition and $K_{ei}^{d}$ stands for the stiffness matrix in the damaged condition. The global stiffness matrix in the damaged condition is then

$$K^{d}(\alpha_1, \alpha_2, \ldots, \alpha_R) = \sum_{i=1}^{R} K_{ei}^{d}. \tag{4}$$

This paper calculated the strain modes in N different damage cases, collected M displacement data in each working condition, and obtained the corresponding strain modes. Here $u_{ij}^{a}$ denotes strain mode j under damage case i, and $u_{ij}^{m}$ denotes the corresponding strain mode in the undamaged condition. By adjusting the damage variables αi (i = 1, …, R), $u_{ij}^{a}$ is made to approach $u_{ij}^{m}$:

$$\min \sum_{i=1}^{N}\sum_{j=1}^{M} \left|u_{ij}^{m} - u_{ij}^{a}\right|, \quad 0 < \alpha_i < 1. \tag{5}$$

Thus, the problem is transformed into minimizing the objective function (5). As a result, GA and the SCE-UA algorithm, with their powerful search abilities, are applied to structural damage identification.
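The objective (5), the fitness transform (1), and the stiffness assembly (3)-(4) can be sketched as follows. The nested-list layout of the strain-mode data and of the element stiffness matrices is an assumption made purely for illustration.

```python
def objective(u_m, u_a):
    """Eq. (5): summed absolute strain-mode differences over the N damage
    cases (rows) and M measurement points (columns)."""
    return sum(abs(m - a)
               for row_m, row_a in zip(u_m, u_a)
               for m, a in zip(row_m, row_a))

def fitness(u_m, u_a):
    """Eq. (1): a smaller objective value gives a larger fitness (max 1.0)."""
    return 1.0 / (1.0 + objective(u_m, u_a))

def damaged_global_stiffness(alphas, element_K):
    """Eqs. (3)-(4): K^d = sum_i alpha_i * K_ei^0, with each element
    stiffness matrix stored as a nested list of equal size."""
    n = len(element_K[0])
    Kd = [[0.0] * n for _ in range(n)]
    for a, Ke in zip(alphas, element_K):
        for r in range(n):
            for c in range(n):
                Kd[r][c] += a * Ke[r][c]
    return Kd
```

A candidate damage vector is scored by computing the analytical strain modes from `damaged_global_stiffness` and passing them through `fitness`; identical intact and damaged modes give the maximum fitness of 1.0.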
## 5. Case Studies

There is a jacket offshore platform, shown in Figure 3. The platform is made of Q235 steel pipe. Seven points, denoted A, B, C, D, E, F, and G, are selected for damage identification; their locations can be seen in Figure 3. This paper considers five damage locations under four different damage cases.
Damage cases are listed in Table 1.

(1) Location 1: damage at Point A on the platform jacket leg.
(2) Location 2: damage at Point B on horizontal pipe BC.
(3) Location 3: damage at Point D on inclined pipe DG.
(4) Location 4: damage at Point C on horizontal pipe AE.
(5) Location 5: damage at Point G of the second-layer jacket FG.

Table 1 The list of damage cases.

| Label of damage case | Damage degree (%) |
|---|---|
| 1 | 10 |
| 2 | 20 |
| 3 | 30 |
| 4 | 40 |

Figure 3 The structure of the jacket offshore platform.

### 5.1. Damage Identification of Horizontal Pipe Structure

GA and the SCE-UA algorithm are used to identify the structural damage of spud leg FG, horizontal pipe AE, and inclined pipe BC. Strain-mode data were input to GA and the SCE-UA algorithm; the population size is 50 and the number of iterations is 1000. The damage degrees identified by the two methods are shown in Tables 2 and 3.

Table 2 Damage degree of Point A at the platform jacket leg (genetic algorithm). Damaged cell: 305; cells 1–304 and 306–21600 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.098 |
| 2 | 0.200 | 0.187 |
| 3 | 0.300 | 0.291 |
| 4 | 0.400 | 0.383 |

Table 3 Damage degree of Point A at the platform jacket leg (SCE-UA algorithm). Damaged cell: 305; cells 1–304 and 306–21600 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.098 |
| 2 | 0.200 | 0.189 |
| 3 | 0.300 | 0.293 |
| 4 | 0.400 | 0.391 |

### 5.2. Analysis of Results

Tables 3–11 indicate that both GA and the SCE-UA algorithm give accurate identification results. Although some identified values differ from the design values, the errors are very small. Comparing the two algorithms in Table 12 shows that the SCE-UA results are more exact: the SCE-UA algorithm achieves better identification accuracy than GA, which proves that SCE-UA is a powerful approach for identifying single-crack damage.

Table 4 Damage degree of Point B at horizontal pipe BC (genetic algorithm). Damaged cell: 8755; cells 1–8754 and 8756–21600 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.089 |
| 2 | 0.200 | 0.191 |
| 3 | 0.300 | 0.271 |
| 4 | 0.400 | 0.387 |

Table 5 Damage degree of Point B at horizontal pipe BC (SCE-UA algorithm). Damaged cell: 8755; cells 1–8754 and 8756–21600 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.091 |
| 2 | 0.200 | 0.193 |
| 3 | 0.300 | 0.284 |
| 4 | 0.400 | 0.393 |

Table 6 Damage degree of Point D at inclined pipe DG (genetic algorithm). Damaged cell: 16; cells 1–15 and 17–3300 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.092 |
| 2 | 0.200 | 0.189 |
| 3 | 0.300 | 0.281 |
| 4 | 0.400 | 0.387 |

Table 7 Damage degree of Point D at inclined pipe DG (SCE-UA algorithm). Damaged cell: 16; cells 1–15 and 17–3300 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.097 |
| 2 | 0.200 | 0.192 |
| 3 | 0.300 | 0.286 |
| 4 | 0.400 | 0.392 |

Table 8 Damage degree of Point C at horizontal pipe AE (genetic algorithm). Damaged cell: 26696; cells 1–26695 and 26697–56400 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.086 |
| 2 | 0.200 | 0.191 |
| 3 | 0.300 | 0.286 |
| 4 | 0.400 | 0.391 |

Table 9 Damage degree of Point C at horizontal pipe AE (SCE-UA algorithm). Damaged cell: 26696; cells 1–26695 and 26697–56400 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.093 |
| 2 | 0.200 | 0.192 |
| 3 | 0.300 | 0.286 |
| 4 | 0.400 | 0.394 |

Table 10 Damage degree of Point G of the second-layer jacket FG (genetic algorithm). Damaged cell: 22005; cells 1–22004 and 22006–56400 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.083 |
| 2 | 0.200 | 0.197 |
| 3 | 0.300 | 0.274 |
| 4 | 0.400 | 0.388 |

Table 11 Damage degree of Point G of the second-layer jacket FG (SCE-UA algorithm). Damaged cell: 22005; cells 1–22004 and 22006–56400 have design and identified damage degrees of 0.000.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.082 |
| 2 | 0.200 | 0.199 |
| 3 | 0.300 | 0.281 |
| 4 | 0.400 | 0.394 |

Table 12 Damage determination analysis of the two methods.

| Type | Damage degree (%) | Accuracy of GA | Accuracy of SCE-UA |
|---|---|---|---|
| Single crack | 10 | 100% | 100% |
| Single crack | 20 | 99.8% | 99.9% |
| Single crack | 30 | 100% | 100% |
| Single crack | 40 | 99.9% | 100% |

### 5.3. Comparison of Algorithms

The parameter settings used in this paper are shown in Table 13, and the evolution results of the two algorithms are compared in Figure 4.

Table 13 Parameters in GA and SCE-UA.

| Algorithm | Population size | Evolutional generations | Computation time |
|---|---|---|---|
| GA | 50 | 1000 | 293.7 |
| SCE-UA | 50 | 1000 | 231.2 |

Figure 4 Fitness of each calculation.

Under the same population size and number of generations, the computation time of the SCE-UA algorithm is smaller, and SCE-UA shows better convergence and higher robustness [23]. The fitness of SCE-UA did not change after the 800th generation. The fitness of GA increased rapidly before the 500th generation but changed only slightly afterwards: GA had a good convergence rate only in the initial stage of evolution, and when close to the optimal solution it was unable to search that region for the optimum quickly.
## 6. Conclusions

The pipe structure of an offshore platform is very large and suffers from a variety of environmental factors, so the jacket offshore platform often develops cracks. Identifying the structural damage of the pipes is an important task, but it is difficult to detect a crack because its location is unknown. The damage identification problem is therefore a large-scale, complicated problem.
A heuristic algorithm is often the first choice for solving this kind of complicated problem. This paper first reviewed the basic theory of GA and the SCE-UA algorithm and described the flow of the two algorithms. The process of applying GA and SCE-UA to identify the structural damage degree of a platform was then introduced. The strain-mode difference was selected as the input parameter owing to its accuracy, and five damage locations on a platform were taken as an example for structural damage identification. The results show that GA and the SCE-UA algorithm achieve high recognition accuracy and good adaptability. The errors of the SCE-UA algorithm are smaller, its computation time is shorter, and its convergence is better; SCE-UA is a powerful tool for identifying structural damage of platform pipes. In addition, the results suggest that intelligent algorithms are a feasible approach to structural damage identification.

---

*Source: 101483-2013-12-30.xml*
# Structural Damage Identification of Pipe Based on GA and SCE-UA Algorithm

**Authors:** Yaojin Bao; He Xia; Zhenming Bao; Shuiping Ke; Yahui Li
**Journal:** Mathematical Problems in Engineering (2013)
**Category:** Engineering & Technology
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101483
---

## Abstract

The structure of an offshore platform is very large and is prone to cracks caused by a variety of environmental factors, including wind, waves, and ice, and is threatened by unexpected events such as earthquakes, typhoons, tsunamis, and ship collisions. Thus, pipes, as a main part of the jacket offshore platform, often develop cracks. However, it is difficult to detect a crack because its location is unknown. In this paper, the genetic algorithm (GA) and the SCE-UA algorithm are used to detect cracks. In the experiment, five damage cases of the pipe in the platform model are intelligently identified by GA and SCE-UA. The inputs are the differences between the strain mode shapes. The damage-diagnosis results of the two algorithms show that both have high identification accuracy and good adaptability; furthermore, the error of the SCE-UA algorithm is smaller. The results also suggest that structural damage of pipes can be identified by intelligent algorithms.

---

## Body

## 1. Introduction

Nondestructive fault-diagnosis testing can detect whether a structure has flaws without disassembling or destroying it, because the material properties change where flaws are present. General nondestructive tests, such as the impact-echo, infrared, and ultrasonic methods, are all local damage-detection technologies that must first locate the damaged points of the structure. Although these technologies do not need many instruments and give accurate results, they require much time and high cost to detect structural damage. In addition, for some large-scale complicated structures, it is hard to detect and evaluate invisible components completely and accurately. A heuristic algorithm is often the first choice for solving such complicated problems [1–4].
This paper therefore attempts to identify structural damage in pipes with intelligent algorithms. One of the most important problems in damage identification is choosing the damage signature. The strain-based damage index is more sensitive than the displacement-based index, so strain modal analysis is more widely applicable. Hillary and Ewins [5] proposed the concept of the strain mode and applied the strain transfer function to the identification of exciting forces. Staker [6] used the strain transfer function to estimate fatigue lifetime. Bernasconi and Ewins [7] and Yam et al. [8] derived and discussed strain modal theory by differentiating the displacement modes. Tsang [9] verified the strain modal theory in element form by numerical simulation and experiment. Hong et al. [10] exploited the ability of the wavelet transform to estimate the Lipschitz exponent and combined the two to assess damage. Ren et al. [11] applied the wavelet transform and the Lipschitz exponent to locate damage and identify its degree under a moving load. Kirkegaard and Rytter [12] used frequency changes before and after damage together with BP neural networks to locate and quantify damage in a steel beam. Mitsuru et al. [13] took the relative displacement and relative velocity between structure layers as network inputs and the interlayer restoring force as output, validating the method with data from a seven-story steel structure before and after repair. Ruotolo and Surace [14] applied BP neural networks to diagnose damage in slab constructions. Masri et al. [15] showed that the BP neural network is a powerful tool for typical structural dynamic system identification problems.
Chiang and Lai [16] studied structural damage identification using GA: first, the approximate location of the damage is estimated from the modal residual force; then, the possibility of nonunique identification results is reduced by GA; finally, the probable damage location is determined by optimization. Chou and Ghaboussi [17] treated the damage problem as an optimization problem and used GA to solve it; cross-sectional areas and changes in the elastic modulus were estimated from static displacements measured at free degrees of freedom. Unlike other algorithms, GA locates the optimum of the objective function by searching many points at once, and it can identify structural damage even from little information. Yi and Liu [18] used a weighted combination of mode-shape errors and frequency errors in numerical simulations of damage to a fixed beam, a five-span continuous beam, and a three-span ten-story frame, concluding that the global search ability of GA is useful for structural damage identification. Zhu and Xiao [19] defined the objective function as the summed relative error between measured and analyzed stresses at all points of the structure, imposed inequality constraints with a penalty function, and identified damage in the Zhaobaoshan bridge. The effectiveness of GA-based structural damage identification depends on the stability of the objective function and of the algorithm. Since GA performs an optimization search over many parameters, an objective function based on multiple sources of information can enhance identification accuracy to some degree. This paper adopts intelligent algorithms based on strain modes to identify nonpenetrating cracks in platform pipes.
Taking five damage locations on the platform as an example, this paper uses GA and the SCE-UA algorithm, which parallelizes well, to identify the damage degree of a single crack in the platform model; the two methods are then evaluated on the same data.

## 2. Improved GA for Pipe Structure Damage Identification

### 2.1. Coarse-Grained Strategies

In practical applications, GA suffers from premature convergence and low search efficiency, so its final result is often a local rather than the global optimum. For this reason, this paper follows earlier work and adopts a coarse-grained genetic algorithm that prevents breeding between close relatives. The coarse-grained genetic algorithm, also known as the island-model or distributed genetic algorithm, exchanges individuals between islands; the exchange is called migration. The division of the population and the migration strategy are the pivotal issues of the coarse-grained genetic algorithm. Under the same termination conditions, the serial and coarse-grained genetic algorithms need different iteration times and iteration counts, because the genetic operators of the serial algorithm act in a single local environment. The main open difficulty of the coarse-grained genetic algorithm is determining the right migration policy.

### 2.2. Parallel Strategy

Step 1 (Migration Operation). To improve path diversity, new paths need to be explored so that better solutions can be obtained. Migration is therefore introduced: some excellent individuals are moved to other subpopulations in search of better paths [20]. When external individuals immigrate into a stable local environment, some individuals of the original subpopulation are stimulated by the new environment and make a leap of progress, much as in nature.

Step 2 (Convergence Judgment).
If the operation reaches the maximum number of iterations or satisfies the convergence condition, the procedure exits; otherwise it returns to Step 1. The flow chart is shown in Figure 1.

Figure 1: Flow chart of the coarse-grained parallel genetic algorithm.

### 2.3. GA Parameters Determination for Damage Identification

(1) Design Variables. The damage degree αi of cell i (i is the cell number) is treated as the design variable; αi = 0 means the cell has no damage. The number of design variables equals the number of possible damage cells.

(2) Fitness Function. GA selects individuals according to their fitness: the larger the fitness, the greater the probability of entering the next generation. Since damage identification is a minimization problem, the objective values must be transformed so that a smaller objective gives a larger fitness:

(1)  fitness = 1 / (1 + ∑_{i=1}^{N} ∑_{j=1}^{M} |u_{ij}^{m} − u_{ij}^{a}|).

(3) Genetic Manipulation. Individuals are real-coded, and each gene of a chromosome represents the damage degree of the corresponding cell. Selection follows a stochastic-universal-sampling scheme: each parent occupies a segment, proportional to its fitness, of a line of fixed length; the algorithm moves along the line in steps of equal size, the first step being a uniform random value smaller than the step size, and at each landing position selects the parent underneath. Crossover then creates a new binary vector: where a digit is 1 the gene is taken from the first parent, and where it is 0 from the second, and the genes are merged to form a new individual; mutation is applied as well.

(4) Parallel Strategies. First, the individual similarity of the current population is calculated.
If the individual similarity measure of the current population is below the minimum threshold, the differences between individuals are small and the population has nearly converged on a single target value; to increase diversity, the current population then accepts from another population the individual with the maximum similarity measure relative to itself. Conversely, when the similarity measure exceeds the maximum threshold, the differences between individuals are large and the population is far from the global optimum, so it accepts an optimal individual from another population to hasten convergence. This migration strategy also copies the best individual of each population directly into the next generation.

## 3. SCE-UA Algorithm for Damage Identification of Pipe Structure

### 3.1. Principle and Description of SCE-UA Algorithm

The SCE-UA (shuffled complex evolution) algorithm is a global optimization method that combines the advantages of random search, the simplex method, clustering analysis, and biological evolution. It deals effectively with objective functions that have rough response surfaces and insensitive regions [21], and it is not trapped by local minima [22]. It combines deterministic complex-search techniques with the competitive principle of natural evolution; its key component is the CCE (competitive complex evolution) step, in which every vertex of a complex is a potential parent that can take part in generating the next generation. Because the subcomplexes are selected at random, the search of the feasible region is more thorough.

### 3.2. Parallel Strategy

The SCE-UA algorithm has high intrinsic parallelism. Its process fits the master-slave pattern, so it can be parallelized without changing its structure: the master process performs the global operations, including sampling of the parameter space, initialization, sorting, and shuffling of the complexes, while the slave processes carry out the evolution. The master-slave SCE-UA scheme is shown in Figure 2.

Figure 2: Parallel SCE-UA algorithm based on master-slave strategies.

First, the master program initializes, reads the model data, and randomly generates samples in the sample space. The samples are then sorted by objective value, and the resulting complexes are delivered to the slave processes. If the number of processors exceeds the number of complexes (n > p), one processor is assigned per slave; otherwise a processor handles more than one slave. Each slave process divides its complex into subcomplexes that generate the next generation. When the evolution step is complete, the slave processes return their results to the master, which performs the shuffling; the cycle repeats until the results converge.

## 4. Applying GA and SCE-UA Algorithm for Damage Identification of Pipe Structure

### 4.1. Input Parameters of the Algorithms

When GA and the SCE-UA algorithm are applied to damage identification of the pipe structure, an objective function must be constructed.
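As an illustrative sketch of the fitness transform in eq. (1) of Section 2.3 (function and variable names, and the sample values, are hypothetical, not from the paper): the total absolute strain-mode mismatch is mapped so that a smaller objective gives a larger fitness in (0, 1].

```python
# Fitness transform of eq. (1):
#   fitness = 1 / (1 + sum_i sum_j |u_ij^m - u_ij^a|),
# where u_measured[i][j] / u_analytical[i][j] hold strain mode j under
# damage case i. Names and sample values are illustrative.

def fitness(u_measured, u_analytical):
    mismatch = sum(
        abs(m - a)
        for row_m, row_a in zip(u_measured, u_analytical)
        for m, a in zip(row_m, row_a)
    )
    return 1.0 / (1.0 + mismatch)

# A perfect match gives fitness 1; any mismatch lowers it toward 0.
print(fitness([[1.0, 2.0]], [[1.0, 2.0]]))  # -> 1.0
print(fitness([[1.0, 2.0]], [[1.5, 2.0]]))  # 1/(1 + 0.5)
```

This transform preserves the ranking of candidates (smaller total mismatch always means larger fitness), which is all that selection needs.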
Structural modal parameters (mode shapes and frequencies) are the most widely used inputs at present, but strain modes achieve higher accuracy [8], so this paper chooses the strain mode as the input parameter. Displacement modes calculated by the finite element method can be converted into strain modes with good accuracy. The objective function of GA and the SCE-UA algorithm is the difference between the strain modes of the intact and damaged structures.

### 4.2. Damage Degree Identification and Evaluation of Pipe Structure

Damage generally reduces structural stiffness. Assuming R design variables, the structural damage is described by

(2)  αi ∈ [0, 1),  i = 1, …, R,

where αi is the damage degree of cell i (αi = 0 means the cell is intact, consistent with Section 2.3). The stiffness matrix of cell i then follows from (3):

(3)  K_ei^d = (1 − αi) K_ei^0,

where K_ei^0 is the element stiffness matrix in the undamaged condition and K_ei^d the one in the damaged condition. The global stiffness matrix in the damaged condition is

(4)  K^d(α1, α2, …, αR) = ∑_{i=1}^{R} K_ei^d.

The strain modes were calculated for N different damage cases; in each case, M displacement readings were collected and the corresponding strain modes obtained. Here u_ij^m denotes the measured strain mode j under damage case i, and u_ij^a the corresponding analytical strain mode computed from the model with trial damage variables. By adjusting the damage variables αi (i = 1, …, R), u_ij^a is driven toward u_ij^m:

(5)  min ∑_{i=1}^{N} ∑_{j=1}^{M} |u_ij^m − u_ij^a|,  0 ≤ αi < 1.

The identification problem is thus converted into the optimization of objective function (5), and GA and the SCE-UA algorithm, with their powerful search ability, are applied to solve it.

## 5. Case Studies

The jacket offshore platform considered is shown in Figure 3. The platform is made of Q235 steel pipe. Seven points, labeled A, B, C, D, E, F, and G, are selected for damage identification; their locations are shown in Figure 3. This paper considers five of these locations under four different damage cases.
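The damage parameterization of eqs. (2)–(4) can be sketched minimally as follows, with scalars standing in for the element stiffness matrices K_ei and the damaged stiffness written as (1 − αi)·K_ei^0 so that αi = 0 leaves a cell intact, consistent with Section 2.3. All names and values are illustrative.

```python
# Eq. (3): K_ei^d = (1 - alpha_i) * K_ei^0, alpha_i in [0, 1), 0 = intact.
# Eq. (4): K^d = sum over cells of K_ei^d.
# Scalars stand in for the element stiffness matrices.

def damaged_global_stiffness(K0_elements, alphas):
    assert len(K0_elements) == len(alphas)
    return sum((1.0 - a) * k for k, a in zip(K0_elements, alphas))

K0 = [100.0, 100.0, 100.0]   # intact element stiffnesses
alphas = [0.0, 0.2, 0.0]     # 20% damage in the second cell only

print(damaged_global_stiffness(K0, alphas))  # -> 280.0
```

In the actual identification problem, the optimizer adjusts the alphas so that the strain modes predicted from K^d match the measured ones, per objective (5).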
Damage cases are shown in Table 1.

(1) Location 1: Point A on the platform jacket leg.
(2) Location 2: Point B on horizontal pipe BC.
(3) Location 3: Point D on inclined pipe DG.
(4) Location 4: Point C on horizontal pipe AE.
(5) Location 5: Point G on the second-layer jacket FG.

Table 1: The list of damage cases.

| Damage case | Damage degree (%) |
|---|---|
| 1 | 10 |
| 2 | 20 |
| 3 | 30 |
| 4 | 40 |

Figure 3: The structure of the jacket offshore platform.

### 5.1. Damage Identification of Horizontal Pipe Structure

GA and the SCE-UA algorithm are used to identify the structural damage of spud leg FG, horizontal pipe AE, and inclined pipe BC. The strain modal data were input to both algorithms with a population size of 50 and 1000 iterations. The identified damage degrees are listed in Tables 2 and 3. In each of Tables 2–11, only the single damaged cell is shown; all remaining cells have design and identified values of 0.000.

Table 2: Damage degree at Point A on the platform jacket leg (GA); damaged cell 305 of 21600.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.098 |
| 2 | 0.200 | 0.187 |
| 3 | 0.300 | 0.291 |
| 4 | 0.400 | 0.383 |

Table 3: Damage degree at Point A on the platform jacket leg (SCE-UA); damaged cell 305 of 21600.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.098 |
| 2 | 0.200 | 0.189 |
| 3 | 0.300 | 0.293 |
| 4 | 0.400 | 0.391 |

### 5.2. Analysis of Results

Tables 2–11 indicate that both GA and the SCE-UA algorithm give accurate identification results; although some identified values differ from the design values, the errors are very small. Comparing the two algorithms in Table 12 shows that the SCE-UA results are more exact: SCE-UA achieves better identification accuracy than GA, which shows that SCE-UA is a powerful approach for identifying a single crack.

Table 4: Damage degree at Point B on horizontal pipe BC (GA); damaged cell 8755 of 21600.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.089 |
| 2 | 0.200 | 0.191 |
| 3 | 0.300 | 0.271 |
| 4 | 0.400 | 0.387 |

Table 5: Damage degree at Point B on horizontal pipe BC (SCE-UA); damaged cell 8755 of 21600.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.091 |
| 2 | 0.200 | 0.193 |
| 3 | 0.300 | 0.284 |
| 4 | 0.400 | 0.393 |

Table 6: Damage degree at Point D on inclined pipe DG (GA); damaged cell 16 of 3300.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.092 |
| 2 | 0.200 | 0.189 |
| 3 | 0.300 | 0.281 |
| 4 | 0.400 | 0.387 |

Table 7: Damage degree at Point D on inclined pipe DG (SCE-UA); damaged cell 16 of 3300.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.097 |
| 2 | 0.200 | 0.192 |
| 3 | 0.300 | 0.286 |
| 4 | 0.400 | 0.392 |

Table 8: Damage degree at Point C on horizontal pipe AE (GA); damaged cell 26696 of 56400.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.086 |
| 2 | 0.200 | 0.191 |
| 3 | 0.300 | 0.286 |
| 4 | 0.400 | 0.391 |

Table 9: Damage degree at Point C on horizontal pipe AE (SCE-UA); damaged cell 26696 of 56400.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.093 |
| 2 | 0.200 | 0.192 |
| 3 | 0.300 | 0.286 |
| 4 | 0.400 | 0.394 |

Table 10: Damage degree at Point G of the second-layer jacket FG (GA); damaged cell 22005 of 56400.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.083 |
| 2 | 0.200 | 0.197 |
| 3 | 0.300 | 0.274 |
| 4 | 0.400 | 0.388 |

Table 11: Damage degree at Point G of the second-layer jacket FG (SCE-UA); damaged cell 22005 of 56400.

| Case | Design value | Identified value |
|---|---|---|
| 1 | 0.100 | 0.082 |
| 2 | 0.200 | 0.199 |
| 3 | 0.300 | 0.281 |
| 4 | 0.400 | 0.394 |

Table 12: Damage determination accuracy of the two methods (single crack).

| Damage degree (%) | Accuracy of GA | Accuracy of SCE-UA |
|---|---|---|
| 10 | 100% | 100% |
| 20 | 99.8% | 99.9% |
| 30 | 100% | 100% |
| 40 | 99.9% | 100% |
Comparison of Algorithms The parameters of this paper are shown as in Table13, and we compared the evolution results of two algorithms as in Figure 4.Table 13 Parameters in GA and SCE-UA. Algorithm Population size Evolutional generation Computation time GA 50 1000 293.7 SCE-UA 50 1000 231.2Figure 4 Fitness of each calculation.We can find that operation time of SCE-UA algorithm is less under the same population size and evolutional generation. Although two algorithms almost did not change afterwards, SCE-UA has better convergence and higher robustness [23]. The fitness of SCE-UA did not change after 800th. The fitness of GA increased rapidly before 500th generation, but it changed smoothly after 500th generation. GA had good convergence rate only in the initial stage of evolution, but when the distance is close to the optimal solution, GA was unable to search the optimal solution of this area quickly. ## 5.1. Damage Identification of Horizontal Pipe Structure GA and SCE-UA algorithm are used to identify the structural damage of spud leg FG, horizontal pipe AE, and inclined pipe BC. Strain modal data were input to GA and SCE-UA algorithm, the number of groups is 50, and the iterations are 1000. Specific damage degree of two methods is shown as in Tables2 and 3.Table 2 The damage degree table of Point A at platform jacket leg in genetic algorithm. Cell number 1–304 305 306–21600 Case1 Design value of damage degree 0.000 0.100 0.000 Identification value of damage degree 0.000 0.098 0.000 Case2 Design value of damage degree 0.000 0.200 0.000 Identification value of damage degree 0.000 0.187 0.000 Case3 Design value of damage degree 0.000 0.300 0.000 Identification value of damage degree 0.000 0.291 0.000 Case4 Design value of damage degree 0.000 0.400 0.000 Identification value of damage degree 0.000 0.383 0.000Table 3 The damage degree table of Point A at platform jacket leg in SCE-UA algorithm. 
Cell number 1–304 305 306–21600 Case1 Design value of damage degree 0.000 0.100 0.000 Identification value of damage degree 0.000 0.098 0.000 Case2 Design value of damage degree 0.000 0.200 0.000 Identification value of damage degree 0.000 0.189 0.000 Case3 Design value of damage degree 0.000 0.300 0.000 Identification value of damage degree 0.000 0.293 0.000 Case4 Design value of damage degree 0.000 0.400 0.000 Identification value of damage degree 0.000 0.391 0.000 ## 5.2. Analysis of Results It is indicated that both GA and SCE-UA algorithm have accurate identification results from Tables3, 4, 5, 6, 7, 8, 9, 10, and 11. Although some identification values are not the same as the design values, the errors are very small. By comparing the two algorithms in Table 12, it is found that the results of SCE-UA are more exact. SCE-UA algorithm obtained better identification accuracy than GA did, which proves that SCE-UA is a powerful approach for damage identification of signal crack.Table 4 The damage degree table of Point B at horizontal pipe BC in genetic algorithm. Cell number 1–8754 8755 8756–21600 Case1 Design value of damage degree 0.000 0.100 0.000 Identification value of damage degree 0.000 0.089 0.000 Case2 Design value of damage degree 0.000 0.200 0.000 Identification value of damage degree 0.000 0.191 0.000 Case3 Design value of damage degree 0.000 0.300 0.000 Identification value of damage degree 0.000 0.271 0.000 Case4 Design value of damage degree 0.000 0.400 0.000 Identification value of damage degree 0.000 0.387 0.000Table 5 The damage degree table of Point B at horizontal pipe BC in SCE-UA algorithm. 
Cell number 1–8754 8755 8756–21600 Case1 Design value of damage degree 0.000 0.100 0.000 Identification value of damage degree 0.000 0.091 0.000 Case2 Design value of damage degree 0.000 0.200 0.000 Identification value of damage degree 0.000 0.193 0.000 Case3 Design value of damage degree 0.000 0.300 0.000 Identification value of damage degree 0.000 0.284 0.000 Case4 Design value of damage degree 0.000 0.400 0.000 Identification value of damage degree 0.000 0.393 0.000Table 6 The degree damage table of Point D at inclined pipe DG in genetic algorithm. Cell number 1–15 16 17–3300 Case1 Design value of damage degree 0.000 0.100 0.000 Identification value of damage degree 0.000 0.092 0.000 Case2 Design value of damage degree 0.000 0.200 0.000 Identification value of damage degree 0.000 0.189 0.000 Case3 Design value of damage degree 0.000 0.300 0.000 Identification value of damage degree 0.000 0.281 0.000 Case4 Design value of damage degree 0.000 0.400 0.000 Identification value of damage degree 0.000 0.387 0.000Table 7 The damage degree table of Point D at inclined pipe DG in SCE-UA algorithm. Cell number 1–15 16 17–3300 Case1 Design value of damage degree 0.000 0.100 0.000 Identification value of damage degree 0.000 0.097 0.000 Case2 Design value of damage degree 0.000 0.200 0.000 Identification value of damage degree 0.000 0.192 0.000 Case3 Design value of damage degree 0.000 0.300 0.000 Identification value of damage degree 0.000 0.286 0.000 Case4 Design value of damage degree 0.000 0.400 0.000 Identification value of damage degree 0.000 0.392 0.000Table 8 The damage degree table of Point C at horizontal pipe AE in genetic algorithm. 
| Case | Value | Cells 1–26695 | Cell 26696 | Cells 26697–56400 |
| --- | --- | --- | --- | --- |
| 1 | Design damage degree | 0.000 | 0.100 | 0.000 |
| 1 | Identified damage degree | 0.000 | 0.086 | 0.000 |
| 2 | Design damage degree | 0.000 | 0.200 | 0.000 |
| 2 | Identified damage degree | 0.000 | 0.191 | 0.000 |
| 3 | Design damage degree | 0.000 | 0.300 | 0.000 |
| 3 | Identified damage degree | 0.000 | 0.286 | 0.000 |
| 4 | Design damage degree | 0.000 | 0.400 | 0.000 |
| 4 | Identified damage degree | 0.000 | 0.391 | 0.000 |

Table 9 The damage degree table of Point C at horizontal pipe AE in SCE-UA algorithm.

| Case | Value | Cells 1–26695 | Cell 26696 | Cells 26697–56400 |
| --- | --- | --- | --- | --- |
| 1 | Design damage degree | 0.000 | 0.100 | 0.000 |
| 1 | Identified damage degree | 0.000 | 0.093 | 0.000 |
| 2 | Design damage degree | 0.000 | 0.200 | 0.000 |
| 2 | Identified damage degree | 0.000 | 0.192 | 0.000 |
| 3 | Design damage degree | 0.000 | 0.300 | 0.000 |
| 3 | Identified damage degree | 0.000 | 0.286 | 0.000 |
| 4 | Design damage degree | 0.000 | 0.400 | 0.000 |
| 4 | Identified damage degree | 0.000 | 0.394 | 0.000 |

Table 10 The damage degree table of Point G of the second layer jacket FG in genetic algorithm.

| Case | Value | Cells 1–22004 | Cell 22005 | Cells 22006–56400 |
| --- | --- | --- | --- | --- |
| 1 | Design damage degree | 0.000 | 0.100 | 0.000 |
| 1 | Identified damage degree | 0.000 | 0.083 | 0.000 |
| 2 | Design damage degree | 0.000 | 0.200 | 0.000 |
| 2 | Identified damage degree | 0.000 | 0.197 | 0.000 |
| 3 | Design damage degree | 0.000 | 0.300 | 0.000 |
| 3 | Identified damage degree | 0.000 | 0.274 | 0.000 |
| 4 | Design damage degree | 0.000 | 0.400 | 0.000 |
| 4 | Identified damage degree | 0.000 | 0.388 | 0.000 |

Table 11 The damage degree table of Point G of the second layer jacket FG in SCE-UA algorithm.
| Case | Value | Cells 1–22004 | Cell 22005 | Cells 22006–56400 |
| --- | --- | --- | --- | --- |
| 1 | Design damage degree | 0.000 | 0.100 | 0.000 |
| 1 | Identified damage degree | 0.000 | 0.082 | 0.000 |
| 2 | Design damage degree | 0.000 | 0.200 | 0.000 |
| 2 | Identified damage degree | 0.000 | 0.199 | 0.000 |
| 3 | Design damage degree | 0.000 | 0.300 | 0.000 |
| 3 | Identified damage degree | 0.000 | 0.281 | 0.000 |
| 4 | Design damage degree | 0.000 | 0.400 | 0.000 |
| 4 | Identified damage degree | 0.000 | 0.394 | 0.000 |

Table 12 Damage determination analysis of the two methods.

| Type | Damage degree (%) | Accuracy of GA | Accuracy of SCE-UA |
| --- | --- | --- | --- |
| Single crack | 10 | 100% | 100% |
| Single crack | 20 | 99.8% | 99.9% |
| Single crack | 30 | 100% | 100% |
| Single crack | 40 | 99.9% | 100% |

## 5.3. Comparison of Algorithms

The parameters used in this paper are shown in Table 13, and the evolution results of the two algorithms are compared in Figure 4.

Table 13 Parameters in GA and SCE-UA.

| Algorithm | Population size | Evolutionary generations | Computation time |
| --- | --- | --- | --- |
| GA | 50 | 1000 | 293.7 |
| SCE-UA | 50 | 1000 | 231.2 |

Figure 4 Fitness of each calculation.

For the same population size and number of generations, the SCE-UA algorithm requires less computation time, and it shows better convergence and higher robustness [23]. The fitness of SCE-UA did not change after the 800th generation. The fitness of GA increased rapidly before the 500th generation but changed only slightly afterwards: GA converged quickly only in the initial stage of evolution, and once it was close to the optimal solution it was unable to search that region efficiently.

## 6. Conclusions

The pipe structure of an offshore platform is very large and is exposed to a variety of environmental factors, so cracks often develop in jacket offshore platforms. Identifying structural damage in the pipes is therefore an important task. However, a crack is difficult to detect because its location is unknown, which makes damage identification a large-scale, complicated problem.
Heuristic algorithms are often a first choice for this kind of complicated problem. This paper first reviewed the basic theory of the GA and SCE-UA algorithms and described their workflows, and then introduced the process of applying GA and SCE-UA to identify the structural damage degree of a platform. We selected the strain modal difference as the input parameter owing to its accuracy and took five damage cases of a platform as an example for structural damage identification. The results showed that both GA and the SCE-UA algorithm achieve high recognition accuracy and good adaptability. The errors of the SCE-UA algorithm are smaller, its computation time is shorter, and its convergence is better; SCE-UA is therefore a powerful tool for identifying structural damage in platform pipes. In addition, the results suggest that intelligent algorithms are a feasible approach to structural damage identification.

---

*Source: 101483-2013-12-30.xml*
2013
# Study of the Residual Strength of an RC Shear Wall with Fractal Crack Taking into Account Interlocking Interface Phenomena

**Authors:** O. Panagouli; K. Iordanidou
**Journal:** Mathematical Problems in Engineering (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101484

---

## Abstract

In the present paper, the postcracking strength of an RC shear wall element which follows the construction practices applied in Greece during the 70s is examined by taking into account the complex geometry of the crack of the wall and the mixed friction-plastification mechanisms that develop in the vicinity of the crack. Due to the significance of the crack geometry, a multiresolution analysis based on fractal geometry is performed, taking into account the size of the aggregates of concrete. The materials (steel and concrete) are assumed to have elastic-plastic behaviour. For concrete, both cracking and crushing are taken into account in an accurate manner. On the interfaces of the crack, unilateral contact and friction conditions are assumed to hold. For every structure corresponding to each resolution of the interface, a classical Euclidean problem is solved. The obtained results lead to interesting conclusions concerning the influence of the simulation of the geometry of the fractal crack on the mechanical interlock between the two faces of the crack, a factor which seems to be very important to the postcracking strength of the lightly reinforced shear wall studied here.

---

## Body

## 1. Introduction

Many RC structures face a number of challenges, such as earthquakes and hurricanes, which may threaten their safety and serviceability. Therefore, modern structures built in seismic prone areas are designed to have significant bending and shear strength and ductility.
However, existing structures designed according to earlier versions of the seismic codes and constructed using low strength materials usually have inadequate shear strength. As a result, shear cracks appear in the shear wall elements of these structures, reducing their overall capacity.

Generally, cracks are of large interest in RC structures since their properties reflect not only the condition of concrete as a material but also the condition of the entire system at the structural level. Crack width is commonly used as a convenient indicator of damage to RC elements, but it should be noted that the distribution and the geometry of the cracks are also important in measuring the extent of damage present in the structure [1, 2] and in calculating its residual strength.

It is well known that the geometry of the interfaces of a crack is of fundamental importance to the study of friction, wear, and also strength evaluation. The recent research of fractured surfaces of various materials provides a deeper insight into the geometry of cracks. The corresponding research on metals [3] and on concrete and rocks [4–8] showed that the fractured surfaces hold fractal properties in a well-defined scale range. In the case of concrete, the effects of aggregate sizes and the quality of concrete on the fractality of fractured surfaces were also investigated in [9–12], respectively. Therefore, an accurate description of the geometry of crack interfaces is of great importance for the simulation of contact. It is important to mention here that the actual contact between two real interfaces is realized only over a discrete number of small areas. Consequently, the real contact area is only a fraction of the apparent area [13, 14], and the parameters of the actual contact regions are strongly influenced by the roughness of the contacting surfaces.

The multiscale nature of the surface roughness suggests the use of fractal geometry.
The fractal approach adopted here for the simulation of the geometry of the cracks formed in a shear wall uses computer generated self-affine curves for the modelling of the interface roughness, which is strongly dependent on the values of the structural parameters of these curves. The computer generated fractal interfaces, which are based on a given discrete set of interface data, are characterized by a precise value of the resolution δ of the interface. This fact permits the study of the interface roughness on iteratively generated rough profiles, making this approach suitable for engineering problems, since it permits a satisfactory study of the whole problem with reliable numerical calculations.

The aim of this paper is to study how the resolution of a fractal crack ℱ affects the strength of a reinforced concrete shear wall element. On the interface between the two cracked surfaces, unilateral contact and friction conditions are assumed to hold. The applied approach takes into account the nonlinear behaviour of the materials, including the limited strength of concrete under tension. The shear wall is submitted to shear loading. As a result of the applied approach, the contribution of the friction between the cracked surfaces is taken into account, as well as the additional strength coming from the mechanical interlock between the two faces of the crack. For every structure resulting from each resolution of the interface, a classical Euclidean problem is solved by using a variational formulation [15]. It must be mentioned here that the finest resolution of the interface is related to the size of the aggregates.

## 2. Fractal Representation of Rough Surfaces

The fractal nature of material damage has been a matter of very intense research during the last three decades. The fractal nature of fractured surfaces in metals was shown more than 30 years ago by Mandelbrot et al. [3].
More specifically, it was shown that the fractured surfaces in metals develop a fractal structure over more than three orders of magnitude. In quasibrittle materials, observations have shown that fractured surfaces display self-affine scale properties in a certain range of scales which is in most cases very large and which greatly depends on the material microstructure. This is true for a large variety of quasibrittle materials such as rock, concrete, and ceramics [4–8, 16–18].

Fractal sets are characterized by noninteger dimensions [19]. The dimension of a fractal set in the plane can vary from 0 to 2. Accordingly, by increasing the resolution of a fractal set, its length tends to 0 if its dimension is smaller than 1 (totally disconnected set) or tends to infinity if it is larger than 1. In these cases, the length is a nominal, useless quantity since it changes as the resolution increases. Conversely, the fractal dimension of a fractal set is a parameter of great importance because of its scale-independent character.

Many methods which are based on experimental or numerical calculations, such as the Richardson method [19], have been developed for the estimation of the fractal dimension of a curve. According to this method, dividers, which are set to a prescribed opening δ, are used. Moving with these dividers along the curve so that each new step starts where the previous step leaves off, one obtains the number of steps N(δ). The curve is said to be of fractal nature if, by repeating this procedure for different values of δ, the relation

N(δ) ~ δ^(−D)  (1)

is obtained in some interval δ* < δ < Δ*. The power D denotes the fractal dimension of the profile, which is in the range 1 < D < 2.
The relation between the fractal dimension D of this profile and the dimension of the corresponding surface is D_s = D + 1 [16].

In relation (1) there is an upper and a lower bound in the scaling range and, consequently, a transition from the fractal regime at the microscopic level to the Euclidean regime at the largest scales. The upper bound is represented by the macroscopic size of the set, while the lower one is related to the size of the smallest measurable particles, that is, the aggregates in the case of concrete. Mandelbrot [20] first pointed out the transition from a fractal regime characterized by noninteger dimensions to the homogeneous one characterized by classical topological dimensions, a fact which points out the main difference between mathematical and physical fractals.

The idea of self-affinity is very popular in studying surface roughness because experimental studies have shown that, under repeated magnifications, the profiles of real surfaces are usually statistically self-affine to themselves [3, 21]. The self-affine fractals were used in a number of papers as a tool for the description of rough surfaces [22–28]. Typically, such a profile can be measured by taking height data y_i with respect to an arbitrary datum at N equidistant discrete points x_i. In the sequence, fractal interpolation functions ℱ(x_i) = y_i, i = 0, 1, …, N are used for the passage from this discrete set of data {(x_i, y_i), i = 0, 1, 2, …, N} to a continuous model. According to the theory of Barnsley [29], the sequence of functions

ℱ_{n+1}(x) = (Tℱ_n)(x) = c_i l_i^{−1}(x) + d_i ℱ_n(l_i^{−1}(x)) + g_i

converges to a fractal curve ℱ as n → ∞. The transformation l_i maps [x_0, x_N] to [x_{i−1}, x_i], and it is defined by the relation l_i(x) = a_i x + b_i. The calculation of the parameters a_i, b_i, c_i, and g_i is based on the given set of data and the free parameters d_i.

Fractal interpolation functions give profiles which look quite attractive from the viewpoint of a graphic roughness simulation.
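A minimal sketch of this iterative construction follows, written with Barnsley's affine maps w_i(x, y) = (a_i x + b_i, c_i x + d_i y + g_i); the coefficient formulas enforce that each map sends the endpoints of the data onto consecutive interpolation points, and the sample data at the bottom are illustrative placeholders, not from the paper:

```python
# Sketch of the fractal interpolation scheme just described: build one affine
# map per data interval and obtain the n-th prefractal by applying every map
# to the current point set.
def ifs_maps(pts, d):
    """Affine maps w_i(x, y) = (a_i x + b_i, c_i x + d_i y + g_i) for data pts."""
    (x0, y0), (xN, yN) = pts[0], pts[-1]
    span = xN - x0
    maps = []
    for i in range(1, len(pts)):
        (xp, yp), (xi, yi) = pts[i - 1], pts[i]
        a = (xi - xp) / span                      # l_i(x) = a_i x + b_i
        b = (xN * xp - x0 * xi) / span
        c = (yi - yp) / span - d[i - 1] * (yN - y0) / span
        g = (xN * yp - x0 * yi) / span - d[i - 1] * (xN * y0 - x0 * yN) / span
        maps.append((a, b, c, g, d[i - 1]))
    return maps

def prefractal(pts, d, n):
    """Point set of the n-th iterate (the prefractal of generation n)."""
    cur = list(pts)
    for _ in range(n):
        cur = sorted(
            (a * x + b, c * x + di * y + g)
            for (a, b, c, g, di) in ifs_maps(pts, d)
            for (x, y) in cur
        )
    return cur

sample = [(0.0, 0.0), (0.5, 0.8), (1.0, 0.2)]   # illustrative interpolation data
profile = prefractal(sample, [0.4, 0.4], 5)      # 5th-generation rough profile
```

Each map contracts the whole profile into one interpolation interval, so the point count grows by the number of intervals at every generation, exactly as the resolution of the prefractal shrinks.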
In higher approximations, these profiles appear rougher, as shown in the next section where the first to fifth approximations of a fractal interpolation function are presented. Moreover, the roughness of the profile is strongly affected by the free parameters d_i of the interpolation functions. As these parameters take larger values, the resulting profiles appear rougher with sharper peaks.

Another model of self-affine profiles, which can be used for roughness description, is the multilevel hierarchical profile. This profile has a hierarchical structure and is constructed by using a certain iterative scheme presented in [24]. As in the case of the fractal interpolation functions, the surfaces produced by this scheme are characterized by a precise value of the resolution δ of the fractal curve. More specifically, in both cases, the iterative construction of the profiles permits us to analyse “prefractals” of arbitrary generation and, therefore, of arbitrary resolution δ_n.

It must be mentioned here that an important advantage of the fractal interpolation functions presented in [29] and of the multilevel hierarchical approach presented in [24] is that their fractal dimension can be obtained analytically and depends strongly on their construction parameters. Thus, in the case of the fractal interpolation functions which are used in this paper for the simulation of the geometry of the crack, the fractal dimension D is given by the relation

∑_{i=1}^{N} |d_i| a_i^{D−1} = 1.  (2)

## 3. Description of the Considered Problem

In Figure 1, an RC shear wall element which follows the construction practices applied in Greece during the 70s is presented. More specifically, the wall is reinforced by a double steel mesh consisting of horizontal and vertical rebars having a diameter of 8 mm and a spacing of 200 mm. The quality of the steel mesh is assumed to be S220 (typical for buildings of that age). At the two ends of the wall, the amount of reinforcement is higher.
Four 20 mm rebars of higher quality (S400) are used without specific provisions to increase the confinement. The thickness of the wall is 200 mm and the quality of concrete is assumed to be C16, typical for this kind of construction. The wall is fixed on the lower horizontal boundary.

Figure 1 The considered shear wall.

The considered shear wall is divided into two parts by a crack which has been formed due to shear failure of concrete. It is important to mention here that, in low strength concretes, as in the case examined here, the fractured surfaces are rougher compared to the fractured surfaces developed in high strength concretes [12], because in the first case the cracks develop in the contact zone between the aggregates and the cement paste, whereas in the second case the failure of the aggregates ensures a less rough interface. For the description of the roughness of the crack, the notion of fractals is used. More specifically, the crack is described by a fractal interpolation function which interpolates the set of data {(−1.0, 2.95), (0.4, 2.0), (1.8, 1.0), (3.2, 0.5)}. The free parameters of the function are taken to have the values d_1 = d_2 = d_3 = 0.50 in order for the interface to be rough (the resulting fractal dimension of the interface is equal to 1.369).

The computer generated interfaces ℱ_n, n = 1, 2, … are “prefractal” images of the fractal set characterized by a precise value of the resolution δ_n, which is related to the nth iteration of the fractal interpolation function and represents the characteristic linear size of the interface. As shown in Figure 2, where five iterations of a fractal interface are given, the linear size of the interface changes rapidly when higher iterations are taken into account. In Table 1, the characteristics of each resolution are presented.
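As a quick numerical check, relation (2) can be solved for the interpolation data and vertical scalings given above; this is a sketch using simple bisection, with the interval ratios a_i computed from the x-coordinates of the data (all three intervals have length 1.4, so a_i = 1/3):

```python
# Sketch: solve relation (2), sum |d_i| a_i^(D-1) = 1, for the crack data
# above: x-coordinates -1.0, 0.4, 1.8, 3.2 and d_1 = d_2 = d_3 = 0.50.
xs = [-1.0, 0.4, 1.8, 3.2]
d = [0.50, 0.50, 0.50]
a = [(xs[i + 1] - xs[i]) / (xs[-1] - xs[0]) for i in range(len(d))]

def residual(D):
    return sum(abs(di) * ai ** (D - 1) for di, ai in zip(d, a)) - 1.0

# Bisection on 1 < D < 2: the residual decreases in D because every a_i < 1.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(0.5 * (lo + hi), 3))  # → 1.369, matching the dimension quoted above
```

With equal intervals the root is also available in closed form, D = 1 + ln(2/3)/ln(1/3) ≈ 1.369, so the bisection simply confirms the value reported for the interface.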
More specifically, in the second column the resolution of the interface δ_n is given, whereas in the third column the total crack length L_n is presented.

Table 1 Characteristics of the considered structures.

| Iteration (n) | Resolution δ_n (m) | Interface length L_n (m) |
| --- | --- | --- |
| 1st | 1.404 | 4.888 |
| 2nd | 0.468 | 4.946 |
| 3rd | 0.156 | 5.080 |
| 4th | 0.052 | 5.373 |
| 5th | 0.017 | 5.935 |

Figure 2 The first five resolutions of the fractal crack.

The objective here is to estimate the capacity of the shear wall under an action similar to the one that created the crack. For this reason, a horizontal displacement of 20 mm is applied on the upper side of the wall (see Figure 1). Moreover, a vertical distributed loading q_N is applied on the upper horizontal boundary, creating a compressive axial loading. The resultant of this loading is denoted by N. For N, six different values will be considered, from 0 to 2500 kN in steps of 500 kN.

For the modelling of the above problem it is assumed that the opposite sides of the fracture are perfectly matching surfaces at a distance of 0.1 mm, and the finite element method is used. In order to avoid a much more complicated three-dimensional analysis, two-dimensional finite elements were employed; however, special consideration was given to the incorporation of the nonlinearities that govern the response of the wall. More specifically, the mass of concrete was modelled through quadrilateral and triangular plane stress elements. The finite element discretization density is similar for all the considered problems [30], so that the discretization density does not affect the comparison between the results of the various analyses that were performed. The modulus of elasticity for the elements representing the mass of concrete was taken to be equal to E = 21 GPa and Poisson's ratio equal to ν = 0.16. The material was assumed to follow the nonlinear law depicted in Figure 3(a). Under compression, the material behaves elastoplastically until a total strain of 0.004.
After this strain value, crushing develops in concrete, reducing its strength to zero. A more complicated behaviour is considered under tension. More specifically, after the exhaustion of the tension strength of concrete, a softening branch follows, having a slope k_s = 10 GPa. Progressively, the tension strength of concrete is also zeroed. The above uniaxial nonlinear law is complemented by an appropriate yield criterion (Tresca) which takes into account the two-dimensional stress fields that develop in the considered problem. For the simulation of cracking, a smeared crack algorithm is used, in which the cracks are evenly distributed over the area of each finite element [31].

Figure 3 The adopted material laws: (a) C16 concrete, (b) S220 steel, and (c) S400 steel.

The steel rebars were modelled through two-dimensional beam elements, which were connected to the same grid of nodes as the plane stress elements simulating the concrete. At each position, the properties that were given to the steel rebars take into account the reinforcement that exists in the whole depth of the wall. For example, the horizontal and vertical elements that simulate the steel mesh are assigned an area of 100.48 mm² that corresponds to the cross-sectional area of two 8 mm steel rebars. For simplicity, the edge reinforcements (4ϕ20) were simulated by a single row of beam elements that have an area of 1256 mm² (i.e., 4 × 314 mm²). For the steel rebars, a modulus of elasticity E = 210 GPa was assumed. Moreover, the nonlinear laws of Figures 3(b) and 3(c), which exhibit a hardening branch after the yield stress of the material is attained, were considered for the S220 and S400 steel qualities, respectively.

Figure 4 depicts the finite element discretizations for the structures that correspond to the third, fourth, and fifth iterations of the fractal crack. The grey lines in the finite element meshes correspond to the positions of the steel rebars.
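The uniaxial concrete law described above can be sketched as a simple stress-strain function. E = 21 GPa, the crushing strain of 0.004, and the softening slope k_s = 10 GPa are the paper's values; the compressive and tensile strengths below are illustrative assumptions (the paper only names the C16 grade):

```python
# Sketch of the uniaxial C16 concrete law: elastic branch, plastic plateau in
# compression until crushing at a total strain of 0.004, and linear softening
# (slope KS) after the tensile strength is exhausted.
E = 21e9          # Pa, modulus of elasticity (from the paper)
KS = 10e9         # Pa, tension softening slope (from the paper)
FC = 16e6         # Pa, assumed compressive strength (C16, illustrative)
FT = 1.6e6        # Pa, assumed tensile strength (illustrative)
EPS_CRUSH = 0.004  # crushing strain (from the paper)

def sigma(eps):
    """Uniaxial stress for a given total strain (negative = compression)."""
    if eps < 0:  # compression branch
        if eps <= -EPS_CRUSH:
            return 0.0                      # crushed: strength reduced to zero
        return max(-FC, E * eps)            # elastic, then plastic plateau
    eps_t = FT / E                          # strain at the tensile strength
    if eps <= eps_t:
        return E * eps                      # elastic tension
    return max(0.0, FT - KS * (eps - eps_t))  # softening branch, floored at zero
```

In the actual analyses this unidirectional law is combined with the Tresca criterion and a smeared crack algorithm, so the scalar function above is only the one-dimensional backbone of the model.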
Special attention was given in the modelling so that the steel rebars retain their initial horizontal and vertical positions, that is, no eccentricity exists between the corresponding rows of beam finite elements due to the formation of the crack.

Figure 4 FE discretizations for the third, fourth, and fifth resolutions of the fractal interface.

In this paper, only the finite element models corresponding to the 3rd, 4th, and 5th resolutions of the fractal crack were considered, because the 1st and 2nd resolutions do not have a meaning from the engineering point of view. On the other hand, the 5th resolution gives a good lower bound of δ, because δ_5 is related to the size of the aggregates of concrete.

At the interfaces, unilateral contact and friction conditions were assumed to hold. Coulomb's friction model was followed, with a friction coefficient equal to 0.6. At each scale, a classical Euclidean problem is solved by using a variational formulation [15].

For every value of the vertical loading N, a solution is obtained in terms of shear forces and horizontal displacements at the interface, for different values of the resolution of the cracked wall and for the case of the uncracked wall. The aim of this work is to study the behaviour of the shear wall, that is, the behaviour of concrete and the forces in the rods, as the vertical loading and the resolution of the interface change.

Two cases are considered:

(i) in the first case, the wall is uncracked;

(ii) in the second case, where a fractal crack ℱ has been developed in the wall, different resolutions are taken into account in order to examine how the resolution of a fractal interface ℱ affects the strength of the RC shear wall element.

The solution of the above problems is obtained through the application of the Newton-Raphson iterative method. Due to the highly nonlinear nature of the problem, a very fine load incrementation was used.
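The solution strategy just described can be sketched generically: the load is applied in many small increments, and equilibrium is iterated to convergence at each level with a warm start from the previous increment. The one-dimensional "internal force" below is a toy stand-in for the FE model of the wall, purely illustrative:

```python
# Generic sketch of incremental Newton-Raphson with fine load incrementation.
def newton_solve(residual, tangent, u, tol=1e-10, max_iter=50):
    """Newton iteration on a scalar residual, starting from u."""
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        u -= r / tangent(u)
    return u

def internal_force(u):
    return 100.0 * u + 5.0 * u ** 3        # toy hardening response (illustrative)

def tangent_stiffness(u):
    return 100.0 + 15.0 * u ** 2           # derivative of the toy response

def incremental_solve(total_load, n_steps):
    """Apply total_load in n_steps increments, warm-starting each step."""
    u, path = 0.0, []
    for k in range(1, n_steps + 1):
        p = total_load * k / n_steps       # current load level
        u = newton_solve(lambda x: internal_force(x) - p, tangent_stiffness, u)
        path.append(u)
    return path

history = incremental_solve(1000.0, 20)    # equilibrium path over 20 increments
```

Fine incrementation matters here for the same reason as in the wall analyses: with strongly nonlinear (and softening) behaviour, large steps can push the Newton iteration outside its basin of convergence.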
The maximum value of the horizontal displacement (20 mm) was applied in 2000 loading steps, while the total vertical loading was applied in the 1st load step and was assumed to be constant in the subsequent steps.

## 4. Numerical Results

Figure 5 presents the applied horizontal load versus the corresponding displacement (P–δ curves) for the different values of the vertical loading N. Starting from the case of the uncracked wall, it must be mentioned that the value of the vertical loading plays a significant role. As the value of the vertical loading increases, the capacity of the wall to undertake horizontal loading increases as well. However, for higher load values (for N = 2000 kN and N = 2500 kN), strength degradations due to the exhaustion of the shear strength of concrete are noticed. In the sequel, the resistance of the wall increases again as a result of the transfer of the loading from concrete to the horizontal steel rebars.

Figure 5 Load-displacement (P–δ) curves for the cases of the uncracked and cracked walls.

In the cases of the cracked walls, the beneficial effect of the normal compressive loading is once more verified. This result holds for the 3rd, the 4th, and the 5th resolutions of the fractal crack, but for small displacement values only. For larger displacement values, the three variants of the cracked wall behave differently. The 4th and the 5th resolutions appear to have a stable behaviour without strength degradations. However, it is noticed that, in the case of the 3rd resolution and for heavy axial loading, significant strength degradation takes place.

The above results can be more easily understood if we compare in the same diagram the curves obtained for the four different structures studied here (uncracked, 3rd resolution, 4th resolution, and 5th resolution) for specific load levels. In Figure 6, the P–δ curves for three cases of axial loading (N = 0, N = 1500 kN, and N = 2500 kN) are presented.
We observe that, for low values of the compressive axial loading, there is actually no difference between the uncracked and the cracked walls. In all cases, the horizontal loading is easily transferred and no signs of strength degradation are noticed, because the wall works mainly in bending and the shear forces are well below the shear strength of the wall. For moderate axial loading values (i.e., for N = 1500 kN), it is noticed that the uncracked wall appears to have greater strength than the cracked variants examined here. It is also noticed that the 5th resolution of the fractal crack leads to greater ultimate strength, a result which can be attributed to the fact that the 5th resolution seems to lead to a greater degree of interlocking between the two interfaces of the crack.

Figure 6 Comparison of the behaviour of the four variants of the examined wall for specific values of the compressive axial loading.

However, the most interesting case is the one where heavy axial loading (N = 2500 kN) is applied to the wall. First, it can be noticed that the behaviour of the 4th and the 5th resolutions of the crack leads to results that are quite close to those of the uncracked wall. There exist some differences between the uncracked and the cracked walls for horizontal displacements in the range of 2 mm–6 mm, where the uncracked wall exhibits greater resistance. However, when the horizontal displacement reaches the value of 6 mm, the uncracked wall exhibits strength degradation, and after this displacement value the results of the 4th and the 5th resolutions of the fractal crack are again very close to those of the initially uncracked wall.

Significantly different is the behaviour of the 3rd resolution of the fractal crack. Although in the first loading steps the results are close to those of the 4th and 5th resolutions, after a displacement value of 2.5 mm significant strength degradation appears, having the form of successive vertical branches.
Moreover, the ultimate strength of this wall is significantly lower than that of the other variants.

It is interesting to try to explain this significantly different behaviour that appears between the walls corresponding to the 3rd and the higher resolutions of the fractal crack. For this reason, all the parameters affecting the behaviour of the wall will be comparatively studied in the sequel.

Figure 7 depicts the cracking strains of concrete for specific values of the axial loading. All the depicted results correspond to the end of the analysis; that is, they have been obtained for an applied horizontal displacement of 20 mm. First of all, it can be noticed that, for low values of axial loading, the cracking patterns that have developed in all the studied walls are rather similar. The larger cracking strain values (yellow and grey colours) originate from the bending deformation of the wall. For moderate axial loading, the cracking patterns are quite different. The uncracked wall presents a bending type cracking pattern. The cracked walls (3rd and 4th resolutions) seem to behave differently. Both of them exhibit significant cracking in the vicinity of the crack (grey colours). Moreover, shear type cracking patterns develop at the upper parts of the walls.

Figure 7 Cracking strains for various values of the vertical loading, for the cases of the uncracked wall (left column) and the cracked walls (3rd resolution: middle column and 4th resolution: right column).

The above results alone cannot explain the significantly different responses that the two cracked variants of the wall exhibit. For this reason, the plastic strains of concrete obtained for the same applied horizontal displacement are examined. Figure 8 depicts the plastic concrete strains for the three different variants of the wall and for specific values of axial loading. The upper value of the presented scale actually corresponds to the crushing limit (grey values).
Therefore, it can be considered that concrete stresses in these areas are actually zero. For the uncracked wall (left column of Figure 8) it can be noticed that the most heavily deformed region is the lower right corner. It is clear that in this case the wall exhibits a typical bending type deformation behaviour (cracking at the lower left region, crushing at the lower right corner).

Figure 8 Plastic strains for various values of the vertical loading, for the cases of the uncracked wall (left column) and the cracked walls (3rd resolution: middle column and 4th resolution: right column).

On the other hand, the cracked walls seem to deform significantly in the vicinity of the crack. This phenomenon is more pronounced in the case of the 3rd resolution of the fractal wall, and especially in the case of heavy axial loading, where it can be noticed that the vicinity of the crack is in the crushed state; that is, the forces are transmitted solely by the steel mesh in this region. For the case of the 4th resolution, this phenomenon is rather limited; that is, it can be concluded that in this case the crack partially retains its ability to transfer shear and compressive forces through the contact and friction phenomena developed at the interface and through the mechanical interlocking that occurs between the two interface parts.

In the sequel, it is interesting to examine the deformations that have occurred at the steel mesh. Figure 9 displays the steel mesh for the cases studied above. The presented deformations, which correspond to the last load step, have been magnified by a factor of 10 so that the differences between the examined cases are visible. It is obvious that, for low vertical loading values, the deformations of the steel meshes are actually very similar. However, for moderate values of the vertical loading (N = 1500 kN), there exist some differences. The steel meshes of the cracked walls seem to be distorted in the vicinity of the right part of the formed crack.
In this region, the vertical rebars above and below the crack present an offset which can be attributed to the inability of the interface to transfer shear forces. For the case of heavy vertical loading, the situation is much different. The wall corresponding to the 4th resolution of the fractal crack has a deformation similar to that of the case of moderate loading. However, the steel mesh of the wall corresponding to the 3rd resolution of the fractal crack exhibits significant deformations along the crack. More specifically, the upper vertical rebars present a significant horizontal offset with respect to the lower ones. This horizontal offset is obvious even in the leftmost part of the wall. Moreover, the horizontal rebars of the upper part present a vertical offset with respect to the ones of the lower part. This deformation pattern of the steel mesh verifies the findings that were noticed in Figure 8 concerning the excessive strains in the vicinity of the crack (which had values well above the crushing strain limit) resulting from the inability of concrete to transfer any loading in this case.Figure 9 Deformation of the steel mesh for various values of the vertical loading, for the cases of the uncracked wall (left column) and the cracked walls (3rd resolution: middle column and 4th resolution: right column).Now, the difference in the response between the 3rd and the 4th resolutions of the fractal crack for the case of heavy vertical loading will be explained. As it has already been noticed, higher vertical loading leads to higher values of the horizontal loading. The increased horizontal forces have to be transferred from the upper part of the cracked wall to its lower part. 
In this respect, three mechanisms develop in order to facilitate the horizontal load transfer:

(i) exploitation of the tensile strength of the horizontal rebars;
(ii) development of friction on the part of the crack where contact forces occur;
(iii) mechanical interlock between the two interfaces of the crack.

The first two mechanisms are almost identical in both cracked walls. However, it is obvious from Figure 6 that at higher resolutions the fractal crack improves its capacity to transfer forces through the mechanical interlock mechanism. In the authors' opinion, this fact is the main reason for the difference in the response between the walls corresponding to the 3rd and the 4th resolutions of the fractal crack. For lower vertical load values the differences are rather limited; however, as the vertical loading increases, the response is completely different, because the increased vertical forces are combined with the increased horizontal forces and completely "destroy" the vicinity of the interface.

Figures 10, 11, and 12 display the forces developed at the horizontal and vertical rebars of the steel mesh for the three variants of the wall examined here, for displacements of 5 mm and 20 mm.
The left column of each figure corresponds to low values of the axial loading (N = 500 kN), the middle column to moderate loading values (N = 1500 kN), and the right column to heavy axial loading (N = 2500 kN).

Figure 10: Forces developed in the steel mesh for various values of the vertical loading for the uncracked wall (left column: N = 500 kN, middle column: N = 1500 kN, and right column: N = 2500 kN).

Figure 11: Forces developed in the steel mesh for various values of the vertical loading for the 3rd resolution of the crack (left column: N = 500 kN, middle column: N = 1500 kN, and right column: N = 2500 kN).

Figure 12: Forces developed in the steel mesh for various values of the vertical loading for the 4th resolution of the crack (left column: N = 500 kN, middle column: N = 1500 kN, and right column: N = 2500 kN).

For the case of the uncracked wall (Figure 10), it is noticed that, in the early horizontal loading steps (δ = 5 mm), only the vertical rebars are significantly loaded. The rebars on the left side of the wall carry tensile forces while the rebars on the right side develop compressive forces, as a result of the bending of the wall. For δ = 20 mm, after the development of cracking in various parts of the initially uncracked wall, the horizontal rebars are also stressed, mainly in the areas where the corresponding cracks have reduced or zeroed the ability of concrete to transfer shear forces.

For the case of the 3rd resolution of the crack (Figure 11), it is noticed that the vertical rebars are stressed only for small axial loading values. For moderate and heavy axial loading, the vertical rebars are only partially stressed. It deserves to be noticed that the rebar stresses are negative in the vicinity of the crack, a fact which verifies that the concrete is unable to transfer even compressive loading.
Moreover, it is noticed that the rebars on the right side of the wall no longer develop compressive stresses, due to the fact that the magnitude of bending that develops in this case is significantly smaller than that in the case of the uncracked wall. The horizontal rebars are stressed only in specific areas, near the crack and in the regions where cracking strains have developed. In any case, a closer look at the forces that have developed in the rebars verifies the significantly decreased bending capacity of this wall.

The situation is rather different in the case of the 4th resolution of the fractal crack (Figure 12). It can easily be verified that, for small values of the axial loading, the picture of the forces in the vertical rebars is quite similar to that of the uncracked wall. The same also holds for the forces in the horizontal rebars. For moderate axial load values, the forces in the vertical rebars exhibit discontinuities. At the right part of the crack, it can be noticed that in some rebars the forces are compressive, indicating once again the partial inability of concrete in this region to transfer compressive loading. The horizontal rebars are mainly stressed in the upper part of the cracked wall and in the vicinity of the crack. This result is fully compatible with the remarks given for the cracked areas in Figure 7.

## 5. Conclusions

In this paper, the finite element analysis of a typical shear wall element which follows the construction practices applied in Greece during the 70s was presented, assuming that a certain crack has developed as a result of an earthquake action. The crack was modelled using tools from the theory of fractals. Three different resolutions of the fractal crack were considered by taking into account the aggregate sizes of the concrete, and their results were compared to those of the initially uncracked wall.
The main finding of the paper is that the cracked wall still has the capacity to sustain monotonic horizontal loading. For small axial loading values, this capacity is similar to that of the initially uncracked wall. However, for larger axial loading values, where the demands increase, it seems that a more accurate simulation of the geometry of the fractal crack (i.e., considering higher values of the resolution of the interfaces) leads to better results. With lower resolution values, the roughness of the interfaces is not taken into account, and therefore the mechanical interlock between the two faces of the crack is rather limited, leading the concrete in the vicinity of the crack to overstressing and gradually to a complete loss of its capacity to sustain any kind of forces. In this case the bending capacity of the wall is significantly limited.

---

*Source: 101484-2013-11-20.xml*
# Study of the Residual Strength of an RC Shear Wall with Fractal Crack Taking into Account Interlocking Interface Phenomena

**Authors:** O. Panagouli; K. Iordanidou

**Journal:** Mathematical Problems in Engineering (2013)

**Category:** Engineering & Technology

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2013/101484
---

## Abstract

In the present paper, the postcracking strength of an RC shear wall element which follows the construction practices applied in Greece during the 70s is examined by taking into account the complex geometry of the crack of the wall and the mixed friction-plastification mechanisms that develop in the vicinity of the crack. Due to the significance of the crack geometry, a multiresolution analysis based on fractal geometry is performed, taking into account the size of the aggregates of concrete. The materials (steel and concrete) are assumed to have elastic-plastic behaviour. For concrete, both cracking and crushing are taken into account in an accurate manner. On the interfaces of the crack, unilateral contact and friction conditions are assumed to hold. For every structure corresponding to each resolution of the interface, a classical Euclidean problem is solved. The obtained results lead to interesting conclusions concerning the influence of the simulation of the geometry of the fractal crack on the mechanical interlock between the two faces of the crack, a factor which seems to be very important to the postcracking strength of the lightly reinforced shear wall studied here.

---

## Body

## 1. Introduction

Many RC structures are facing a number of challenges, that is, earthquakes, hurricanes, and so forth, which may threaten their safety and serviceability. Therefore, modern structures built in seismic prone areas are designed to have significant bending and shear strength and ductility. However, existing structures designed according to earlier versions of the seismic codes and constructed using low strength materials usually have inadequate shear strength.
As a result, shear cracks appear in the shear wall elements of these structures, reducing their overall capacity.

Generally, cracks are of great interest in RC structures, since their properties reflect not only the condition of concrete as a material but also the condition of the entire system at the structural level. Crack width is commonly used as a convenient indicator of damage to RC elements, but it should be noted that the distribution and the geometry of the cracks are also important in measuring the extent of damage present in the structure [1, 2] and in calculating its residual strength.

It is well known that the geometry of the interfaces of a crack is of fundamental importance to the study of friction, wear, and also strength evaluation. Recent research on fractured surfaces of various materials provides a deeper insight into the geometry of cracks. The corresponding research on metals [3] and on concrete and rocks [4–8] showed that fractured surfaces hold fractal properties in a well-defined scale range. In the case of concrete, the effects of aggregate sizes and of the quality of concrete on the fractality of fractured surfaces were also investigated, in [9–12], respectively. Therefore, an accurate description of the geometry of crack interfaces is of great importance for the simulation of contact. It is important to mention here that the actual contact between two real interfaces is realized only over a small fraction in a discrete number of areas. Consequently, the real contact area is only a fraction of the apparent area [13, 14], and the parameters of the actual contact regions are strongly influenced by the roughness of the contacting surfaces.

The multiscale nature of the surface roughness suggests the use of fractal geometry.
The fractal approach adopted here for the simulation of the geometry of the cracks formed in a shear wall uses computer generated self-affine curves for the modelling of the interface roughness, which is strongly dependent on the values of the structural parameters of these curves. The computer generated fractal interfaces, which are based on a given discrete set of interface data, are characterized by a precise value of the resolution δ of the interface. This fact permits the study of the interface roughness on iteratively generated rough profiles, making this approach suitable for engineering problems, since it permits the satisfactory study of the whole problem with reliable numerical calculations.

The aim of this paper is to study how the resolution of a fractal crack ℱ affects the strength of a reinforced concrete shear wall element. On the interface between the two cracked surfaces, unilateral contact and friction conditions are assumed to hold. The applied approach takes into account the nonlinear behaviour of the materials, including the limited strength of concrete under tension. The shear wall is submitted to shear loading. As a result of the applied approach, the contribution of the friction between the cracked surfaces is taken into account, as well as the additional strength coming from the mechanical interlock between the two faces of the crack. For every structure resulting from each resolution of the interface, a classical Euclidean problem is solved by using a variational formulation [15]. It must be mentioned here that the finest resolution of the interface is related to the size of the aggregates.

## 2. Fractal Representation of Rough Surfaces

The fractal nature of material damage has been a matter of very intense research during the last three decades. The fractal nature of fractured surfaces in metals was shown more than 30 years ago by Mandelbrot et al. [3].
More specifically, it was shown that fractured surfaces in metals develop a fractal structure over more than three orders of magnitude. In quasibrittle materials, observations have shown that fractured surfaces display self-affine scale properties in a certain range of scales, which is in most cases very large and which greatly depends on the material microstructure. This is true for a large variety of quasibrittle materials such as rock, concrete, and ceramics [4–8, 16–18].

Fractal sets are characterized by noninteger dimensions [19]. The dimension of a fractal set in the plane can vary from 0 to 2. Accordingly, by increasing the resolution of a fractal set, its length tends to 0 if its dimension is smaller than 1 (totally disconnected set) or tends to infinity if it is larger than 1. In these cases, the length is a nominal, useless quantity, since it changes as the resolution increases. Conversely, the fractal dimension of a fractal set is a parameter of great importance because of its scale-independent character.

Many methods based on experimental or numerical calculations, such as the Richardson method [19], have been developed for the estimation of the fractal dimension of a curve. According to this method, dividers, which are set to a prescribed opening δ, are used. Moving with these dividers along the curve so that each new step starts where the previous step leaves off, one obtains the number of steps N(δ). The curve is said to be of fractal nature if, by repeating this procedure for different values of δ, the relation

(1) N(δ) ∼ δ^(−D)

is obtained in some interval δ* < δ < Δ*. The power D denotes the fractal dimension of the profile, which is in the range 1 < D < 2.
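The divider procedure just described can be sketched in code. The following is a minimal illustration (our own sketch, not taken from the paper): it walks dividers of opening δ along a polyline and fits the slope of log N(δ) against log(1/δ). A Koch curve is used as the test profile because its dimension is known analytically, D = log 4 / log 3 ≈ 1.26.

```python
import math

def koch(n):
    """Vertices of the Koch curve on the unit interval after n refinements."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(n):
        new = [pts[0]]
        for (ax, ay), (bx, by) in zip(pts, pts[1:]):
            dx, dy = (bx - ax) / 3.0, (by - ay) / 3.0
            mx, my = ax + 1.5 * dx, ay + 1.5 * dy          # midpoint of the middle third
            apex = (mx - dy * math.sqrt(3) / 2.0, my + dx * math.sqrt(3) / 2.0)
            new += [(ax + dx, ay + dy), apex, (ax + 2 * dx, ay + 2 * dy), (bx, by)]
        pts = new
    return pts

def divider_count(pts, delta):
    """Walk dividers of opening delta along the polyline; return the step count N(delta)."""
    cur, j, steps = pts[0], 0, 0
    while True:
        hit = None
        for k in range(j, len(pts) - 1):
            p, q = pts[k], pts[k + 1]
            if math.dist(cur, q) < delta:
                continue  # divider circle not yet crossed at this segment end
            # solve |p + t*(q - p) - cur| = delta for the forward root t in [0, 1]
            dx, dy = q[0] - p[0], q[1] - p[1]
            ex, ey = p[0] - cur[0], p[1] - cur[1]
            a, b = dx * dx + dy * dy, 2.0 * (dx * ex + dy * ey)
            c = ex * ex + ey * ey - delta * delta
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                t = (-b + math.sqrt(disc)) / (2.0 * a)
                if 0.0 <= t <= 1.0:
                    hit, j = (p[0] + t * dx, p[1] + t * dy), k
                    break
        if hit is None:
            return steps + 1  # count the final partial step
        cur, steps = hit, steps + 1

pts = koch(6)
deltas = [3.0 ** -m for m in (2, 3, 4)]
counts = [divider_count(pts, d) for d in deltas]
xs = [math.log(1.0 / d) for d in deltas]
ys = [math.log(c) for c in counts]
n = len(xs)
D = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
print(round(D, 2))
```

The fitted slope lands close to the theoretical log 4 / log 3; small deviations come from the divider steps not landing exactly on construction vertices.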
The relation between the fractal dimension D of this profile and the dimension of the corresponding surface is D_s = D + 1 [16].

In relation (1) there is an upper and a lower bound in the scaling range and, consequently, a transition from the fractal regime at the microscopic level to the Euclidean regime at the largest scales. The upper bound is represented by the macroscopic size of the set, while the lower one is related to the size of the smallest measurable particles, that is, the aggregates in the case of concrete. Mandelbrot [20] first pointed out the transition from a fractal regime characterized by noninteger dimensions to the homogeneous one characterized by classical topological dimensions, a fact which points out the main difference between mathematical and physical fractals.

The idea of self-affinity is very popular in studying surface roughness because experimental studies have shown that, under repeated magnifications, the profiles of real surfaces are usually statistically self-affine to themselves [3, 21]. Self-affine fractals were used in a number of papers as a tool for the description of rough surfaces [22–28]. Typically, such a profile can be measured by taking height data y_i with respect to an arbitrary datum at N equidistant discrete points x_i. In the sequel, fractal interpolation functions ℱ(x_i) = y_i, i = 0, 1, …, N are used for the passage from this discrete set of data {(x_i, y_i), i = 0, 1, 2, …, N} to a continuous model. According to the theory of Barnsley [29], the sequence of functions ℱ_{n+1}(x) = (Tℱ_n)(x) = c_i l_i^(−1)(x) + d_i ℱ_n(l_i^(−1)(x)) + g_i converges to a fractal curve ℱ as n → ∞. The transformation l_i maps [x_0, x_N] to [x_{i−1}, x_i], and it is defined by the relation l_i(x) = a_i x + b_i. The calculation of the parameters a_i, b_i, c_i, and g_i is based on the given set of data and on the free parameters d_i.

Fractal interpolation functions give profiles which look quite attractive from the viewpoint of a graphic roughness simulation.
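Barnsley's iteration can be sketched numerically. The following is a minimal illustration (the data points and vertical scaling factors d_i below are invented for demonstration, not the paper's crack data): each map w_i(x, y) = (a_i x + b_i, c_i x + d_i y + g_i) is fixed by requiring that it send the first and last data points to the endpoints of the i-th subinterval, and successive prefractal polylines are generated by applying every map to the current vertex set.

```python
import math

def fif_maps(xs, ys, ds):
    """Affine IFS maps w_i(x, y) = (a_i x + b_i, c_i x + d_i y + g_i)
    interpolating the data (x_i, y_i) with vertical scaling factors d_i."""
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    maps = []
    for i in range(1, len(xs)):
        d = ds[i - 1]
        a = (xs[i] - xs[i - 1]) / (xN - x0)
        b = xs[i - 1] - a * x0
        c = (ys[i] - ys[i - 1] - d * (yN - y0)) / (xN - x0)
        g = ys[i - 1] - c * x0 - d * y0
        maps.append((a, b, c, d, g))
    return maps

def prefractal(xs, ys, ds, n):
    """n-th prefractal polyline: apply every map to the current vertex set."""
    pts = sorted(zip(xs, ys))
    maps = fif_maps(xs, ys, ds)
    for _ in range(n):
        pts = sorted({(a * x + b, c * x + d * y + g)
                      for (a, b, c, d, g) in maps for (x, y) in pts})
    return pts

def length(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# illustrative data set (not the paper's crack profile)
xs, ys, ds = [0.0, 1.0, 2.0, 3.0], [0.0, 0.7, 0.2, 0.5], [0.4, 0.4, 0.4]
lengths = [length(prefractal(xs, ys, ds, n)) for n in range(5)]
print(lengths)
```

Because |d_i| = 0.4 exceeds the horizontal contraction ratio a_i = 1/3, the polyline length grows from generation to generation, which is the signature of a fractal dimension above 1, while every generation still passes through the original data points.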
In higher approximations, these profiles appear rougher, as shown in the next section, where the first to fifth approximations of a fractal interpolation function are presented. Moreover, the roughness of the profile is strongly affected by the free parameters d_i of the interpolation functions. As these parameters take larger values, the resulting profiles appear rougher, with sharper peaks.

Another model of self-affine profiles, which can be used for roughness description, is the multilevel hierarchical profile. This profile has a hierarchical structure and is constructed by using a certain iterative scheme presented in [24]. As in the case of the fractal interpolation functions, the surfaces produced by this scheme are characterized by a precise value of the resolution δ of the fractal curve. More specifically, in both cases, the iterative construction of the profiles permits us to analyse "prefractals" of arbitrary generation and, therefore, of arbitrary resolution δ_n.

It must be mentioned here that an important advantage of the fractal interpolation functions presented in [29] and of the multilevel hierarchical approach presented in [24] is that their fractal dimension can be obtained analytically and depends strongly on their construction parameters. Thus, in the case of the fractal interpolation functions which are used in this paper for the simulation of the geometry of the crack, the fractal dimension D is given by the relation

(2) ∑_{i=1}^{N} |d_i| a_i^(D−1) = 1.

## 3. Description of the Considered Problem

In Figure 1, an RC shear wall element which follows the construction practices applied in Greece during the 70s is presented. More specifically, the wall is reinforced by a double steel mesh consisting of horizontal and vertical rebars having a diameter of 8 mm and a spacing of 200 mm. The quality of the steel mesh is assumed to be S220 (typical for buildings of that age). At the two ends of the wall, the amount of reinforcement is higher.
Four 20 mm rebars of higher quality (S400) are used, without specific provisions to increase the confinement. The thickness of the wall is 200 mm and the quality of the concrete is assumed to be C16, typical for this kind of construction. The wall is fixed on the lower horizontal boundary.

Figure 1: The considered shear wall.

The considered shear wall is divided into two parts by a crack which has been formed due to shear failure of concrete. It is important to mention here that, in low strength concretes, as in the case examined here, the fractured surfaces are rougher compared to the fractured surfaces developed in high strength concretes [12], because in the first case the cracks develop in the contact zone between the aggregates and the cement paste, whereas in the second case the failure of the aggregates ensures a less rough interface. For the description of the roughness of the crack, the notion of fractals is used. More specifically, the crack is described by a fractal interpolation function which interpolates the set of data {(−1.0, 2.95), (0.4, 2.0), (1.8, 1.0), (3.2, 0.5)}. The free parameters of the function are taken to have the values d_1 = d_2 = d_3 = 0.50 in order for the interface to be rough (the fractal dimension of the interface turns out to be equal to 1.369).

The computer generated interfaces ℱ_n, n = 1, 2, … are "prefractal" images of the fractal set, characterized by a precise value of the resolution δ_n, which is related to the n-th iteration of the fractal interpolation function and represents the characteristic linear size of the interface. As shown in Figure 2, where five iterations of a fractal interface are given, the linear size of the interface changes rapidly when higher iterations are taken into account. In Table 1, the characteristics of each resolution are presented.
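The quoted dimension can be cross-checked numerically (our own sketch, not the authors' computation). For the data set above, the horizontal contraction ratios are a_i = (x_i − x_{i−1})/(x_N − x_0) = 1.4/4.2 = 1/3, so equation (2) reduces to 3 · 0.5 · (1/3)^(D−1) = 1, which can be solved by bisection:

```python
import math

xs = [-1.0, 0.4, 1.8, 3.2]          # interpolation abscissae from the paper
ds = [0.50, 0.50, 0.50]             # free vertical scaling parameters d_i
a = [(xs[i] - xs[i - 1]) / (xs[-1] - xs[0]) for i in range(1, len(xs))]

def eq2(D):
    """Left-hand side of equation (2) minus 1: sum_i |d_i| a_i^(D-1) - 1."""
    return sum(abs(d) * ai ** (D - 1.0) for d, ai in zip(ds, a)) - 1.0

lo, hi = 1.0, 2.0                   # a profile's fractal dimension lies in (1, 2)
for _ in range(60):                 # bisection on the monotone function eq2
    mid = 0.5 * (lo + hi)
    if eq2(mid) > 0.0:
        lo = mid
    else:
        hi = mid
D = 0.5 * (lo + hi)
print(round(D, 3))                  # → 1.369, matching the value quoted in the text
```

The closed-form solution of the reduced equation is D = 1 + log(3/2)/log 3 ≈ 1.3691, in agreement with the 1.369 stated in the paper.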
More specifically, in the second column the resolution of the interface δ_n is given, whereas in the third column the total crack length L_n is presented.

Table 1: Characteristics of the considered structures.

| Iteration (n) | Resolution δ_n (m) | Interface length L_n (m) |
| --- | --- | --- |
| 1st | 1.404 | 4.888 |
| 2nd | 0.468 | 4.946 |
| 3rd | 0.156 | 5.080 |
| 4th | 0.052 | 5.373 |
| 5th | 0.017 | 5.935 |

Figure 2: The first five resolutions of the fractal crack.

The objective here is to estimate the capacity of the shear wall under an action similar to the one that created the crack. For this reason, a horizontal displacement of 20 mm is applied on the upper side of the wall (see Figure 1). Moreover, a vertical distributed loading q_N is applied on the upper horizontal boundary, creating a compressive axial loading. The resultant of this loading is denoted by N. For N, six different values will be considered, from 0 to 2500 kN with a step of 500 kN.

For the modelling of the above problem, it is assumed that the opposite sides of the fracture are perfectly matching surfaces at a distance of 0.1 mm, and the finite element method is used. In order to avoid a much more complicated three-dimensional analysis, two-dimensional finite elements were employed; however, special consideration was given to the incorporation of the nonlinearities that govern the response of the wall. More specifically, the mass of concrete was modelled through quadrilateral and triangular plane stress elements. The finite element discretization density is similar for all the considered problems [30], in order for the discretization density not to affect the comparison between the results of the various analyses that were performed. The modulus of elasticity for the elements representing the mass of concrete was taken equal to E = 21 GPa and Poisson's ratio equal to ν = 0.16. The material was assumed to follow the nonlinear law depicted in Figure 3(a). Under compression, the material behaves elastoplastically, until a total strain of 0.004.
After this strain value, crushing develops in the concrete, reducing its strength to zero. A more complicated behaviour is considered under tension. More specifically, after the exhaustion of the tensile strength of concrete, a softening branch follows, having a slope k_s = 10 GPa. Progressively, the tensile strength of concrete is also zeroed. The above unidirectional nonlinear law is complemented by an appropriate yield criterion (Tresca) which takes into account the two-dimensional stress fields that develop in the considered problem. For the simulation of cracking, a smeared crack algorithm is used, in which the cracks are evenly distributed over the area of each finite element [31].

Figure 3: The adopted material laws: (a) C16 concrete, (b) S220 steel, and (c) S400 steel.

The steel rebars were modelled through two-dimensional beam elements, which were connected to the same grid of nodes as the plane stress elements simulating the concrete. At each position, the properties given to the steel rebars take into account the reinforcement that exists over the whole depth of the wall. For example, the horizontal and vertical elements that simulate the steel mesh are assigned an area of 100.48 mm², corresponding to the cross-sectional area of two 8 mm steel rebars. For simplicity, the edge reinforcements (4ϕ20) were simulated by a single row of beam elements that have an area of 1256 mm² (i.e., 4 × 314 mm²). For the steel rebars, a modulus of elasticity E = 210 GPa was assumed. Moreover, the nonlinear laws of Figures 3(b) and 3(c), which exhibit a hardening branch after the yield stress of the material is attained, were considered for the S220 and S400 steel qualities, respectively.

Figure 4 depicts the finite element discretizations for the structures that correspond to the third, fourth, and fifth iterations of the fractal crack. The grey lines in the finite element meshes correspond to the positions of the steel rebars.
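The uniaxial concrete law of Figure 3(a) can be sketched as follows. E = 21 GPa, the softening slope k_s = 10 GPa, and the 0.004 crushing strain are taken from the text, while the C16 strengths (f_c ≈ 16 MPa, f_t ≈ 1.9 MPa) are illustrative assumptions, since the paper does not quote them explicitly.

```python
def concrete_stress(eps, E=21e9, f_c=16e6, f_t=1.9e6, k_s=10e9, eps_crush=0.004):
    """Monotonic uniaxial stress for C16-type concrete (compression negative).
    f_c and f_t are assumed values; E, k_s, and eps_crush follow the text."""
    if eps < 0.0:                       # compression branch
        if -eps > eps_crush:
            return 0.0                  # crushed: strength drops to zero
        return max(E * eps, -f_c)       # elastic, then plastic plateau at -f_c
    eps_t = f_t / E                     # tension branch
    if eps <= eps_t:
        return E * eps                  # elastic in tension
    return max(f_t - k_s * (eps - eps_t), 0.0)  # linear softening down to zero

# elastic range, plateau, crushing, and partially softened tension:
print(concrete_stress(-0.0005))  # elastic: -10.5 MPa
print(concrete_stress(-0.002))   # plateau: -16 MPa
print(concrete_stress(-0.005))   # crushed: 0.0
print(concrete_stress(0.0002))   # tension, on the softening branch
```

In the actual analysis this unidirectional law is of course combined with the Tresca criterion and the smeared crack algorithm; the sketch only reproduces the one-dimensional backbone of Figure 3(a).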
Special attention was given in the modelling so that the steel rebars retain their initial horizontal and vertical positions; that is, no eccentricity exists between the corresponding rows of beam finite elements due to the formation of the crack.

Figure 4: FE discretizations for the third, fourth, and fifth resolutions of the fractal interface.

In this paper, only the finite element models corresponding to the 3rd, 4th, and 5th resolutions of the fractal crack were considered, because the 1st and 2nd resolutions are not meaningful from the engineering point of view. On the other hand, the 5th resolution gives a good lower bound of δ, because δ_5 is related to the size of the aggregates of the concrete.

At the interfaces, unilateral contact and friction conditions were assumed to hold. Coulomb's friction model was followed, with a coefficient equal to 0.6. At each scale, a classical Euclidean problem is solved by using a variational formulation [15].

For every value of the vertical loading N, a solution is obtained in terms of shear forces and horizontal displacements at the interface, for different values of the resolution of the cracked wall and for the case of the uncracked wall. The aim of this work is to study the behaviour of the shear wall, that is, the behaviour of the concrete and the forces in the rebars, as the vertical loading and the resolution of the interface change.

Two cases are considered:

(i) in the first case, the wall is uncracked;
(ii) in the second case, where a fractal crack ℱ has developed in the wall, different resolutions are taken into account in order to examine how the resolution of a fractal interface ℱ affects the strength of the RC shear wall element.

The solution of the above problems is obtained through the application of the Newton-Raphson iterative method. Due to the highly nonlinear nature of the problem, a very fine load incrementation was used.
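The incremental Newton-Raphson procedure can be sketched on a toy problem (a hedged illustration, not the actual FE solver): here two nonlinear springs in series are driven by a prescribed end displacement that is ramped in many small steps, and equilibrium of the free internal node is restored by Newton iterations at every step, mirroring the displacement-controlled scheme described above.

```python
def spring_force(x, k=1.0, alpha=0.3):
    """Illustrative nonlinear spring law f(x) = k*x + alpha*x**3."""
    return k * x + alpha * x ** 3

def spring_stiff(x, k=1.0, alpha=0.3):
    """Tangent stiffness df/dx of the illustrative spring."""
    return k + 3.0 * alpha * x ** 2

def solve_steps(u_max=2.0, n_steps=200, tol=1e-10):
    """Displacement control: ramp the end displacement u2 of two springs in
    series and re-equilibrate the internal node u1 by Newton-Raphson."""
    u1 = 0.0                                   # free internal DOF
    for s in range(1, n_steps + 1):
        u2 = u_max * s / n_steps               # prescribed displacement this step
        for _ in range(50):                    # Newton iterations
            r = spring_force(u1) - spring_force(u2 - u1)   # nodal residual
            if abs(r) < tol:
                break
            kt = spring_stiff(u1) + spring_stiff(u2 - u1)  # tangent stiffness
            u1 -= r / kt
        else:
            raise RuntimeError("Newton failed to converge")
    return u1, spring_force(u1)               # converged state and reaction

u1, P = solve_steps()
print(u1, P)
```

For identical springs the converged internal displacement is half the prescribed one, so the final state can be checked by symmetry; in the paper's analyses the same stepping idea is applied to the full nonlinear FE system with 2000 displacement increments.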
The maximum value of the horizontal displacement (20 mm) was applied in 2000 loading steps, while the total vertical loading was applied in the 1st load step and was kept constant in the subsequent steps.

## 4. Numerical Results

Figure 5 presents the applied horizontal load versus the corresponding displacement (P-δ curves) for the different values of the vertical loading N. Starting from the case of the uncracked wall, it must be mentioned that the value of the vertical loading plays a significant role. As the value of the vertical loading increases, the capacity of the wall to undertake horizontal loading increases as well. However, for higher load values (for N = 2000 and 2500 kN), strength degradations due to the exhaustion of the shear strength of concrete are noticed. In the sequel, the resistance of the wall increases again as a result of the transfer of the loading from the concrete to the horizontal steel rebars.

Figure 5: Load-displacement (P-δ) curves for the cases of the uncracked and cracked walls.

In the cases of the cracked walls, the beneficial effect of the normal compressive loading is once more verified. This result holds for the 3rd, the 4th, and the 5th resolutions of the fractal crack, but for small displacement values only. For larger displacement values, the three variants of the cracked wall behave differently. The 4th and the 5th resolutions appear to have a stable behaviour, without strength degradations. However, it is noticed that, in the case of the 3rd resolution and for heavy axial loading, significant strength degradation takes place.

The above results can be more easily understood if we compare in the same diagram the curves obtained for the four different structures studied here (uncracked, 3rd resolution, 4th resolution, and 5th resolution) for specific load levels. In Figure 6, the P-δ curves for three cases of axial loading (N = 0, N = 1500 kN, and N = 2500 kN) are presented.
We observe that, for low values of the compressive axial loading, there is actually no difference between the uncracked and the cracked walls. In all cases, the horizontal loading is easily transferred and no signs of strength degradation are noticed, because the wall works mainly in bending and the shear forces are well below the shear strength of the wall. For moderate axial loading values (i.e., for N = 1500 kN), it is noticed that the uncracked wall appears to have greater strength than the cracked variants examined here. It is also noticed that the 5th resolution of the fractal crack leads to greater ultimate strength, a result which can be attributed to the fact that the 5th resolution seems to lead to a greater degree of interlocking between the two interfaces of the crack.

Figure 6: Comparison of the behaviour of the four variants of the examined wall for specific values of the compressive axial loading.

However, the most interesting case is that in which heavy axial loading (N = 2500 kN) is applied to the wall. First, it can be noticed that the behaviour of the 4th and the 5th resolutions of the crack leads to results that are quite close to those of the uncracked wall. There exist some differences between the uncracked and the cracked walls for horizontal displacements in the range of 2 mm–6 mm, where the uncracked wall exhibits greater resistance. However, when the horizontal displacement reaches the value of 6 mm, the uncracked wall exhibits strength degradation, and after this displacement value the results of the 4th and the 5th resolutions of the fractal crack are again very close to those of the initially uncracked wall.

Significantly different is the behaviour of the 3rd resolution of the fractal crack. Although in the first loading steps the results are close to those of the 4th and 5th resolutions, after a displacement value of 2.5 mm significant strength degradation appears, having the form of successive vertical branches.
Moreover, the ultimate strength of this wall is significantly lower than that of the other variants.

It is interesting to try to explain this significantly different behaviour that appears between the walls corresponding to the 3rd and the higher resolutions of the fractal crack. For this reason, all the parameters affecting the behaviour of the wall will be comparatively studied in the sequel.

Figure 7 depicts the cracking strains of concrete for specific values of the axial loading. All the depicted results correspond to the end of the analysis; that is, they have been obtained for an applied horizontal displacement of 20 mm. First of all, it can be noticed that, for low values of axial loading, the cracking patterns that have developed in all the studied walls are rather similar. The larger cracking strain values (yellow and grey colours) have their origin in the bending deformation of the wall. For moderate axial loading, the cracking patterns are quite different. The uncracked wall presents a bending type cracking pattern. The cracked walls (3rd and 4th resolutions) seem to behave differently. Both of them exhibit significant cracking in the vicinity of the crack (grey colours). Moreover, shear type cracking patterns develop at the upper parts of the walls.

Figure 7: Cracking strains for various values of the vertical loading, for the cases of the uncracked wall (left column) and the cracked walls (3rd resolution: middle column and 4th resolution: right column).

The above results alone cannot explain the significantly different responses that the two cracked variants of the wall exhibit. For this reason, the plastic strains of concrete obtained for the same applied horizontal displacement are examined. Figure 8 depicts the plastic concrete strains for the three different variants of the wall and for specific values of the axial loading. The upper value of the presented scale corresponds to the crushing limit (grey values).
Therefore, it can be considered that concrete stresses in these areas are actually zero. For the uncracked wall (left column of Figure 8) it can be noticed that the more heavily deformed region is the lower right corner. It is clear that in this case the wall exhibits a typical bending type deformation behaviour (cracking at the lower left region, crushing at the lower right corner).Figure 8 Plastic strains for various values of the vertical loading, for the cases of the uncracked wall (left column) and the cracked walls (3rd resolution: middle column and 4th resolution: right column).On the other hand, the cracked walls seem to deform significantly in the vicinity of the crack. This phenomenon is more pronounced in the case of the 3rd resolution of the fractal wall and especially in the case of heavy axial loading, where it can be noticed that the vicinity of the crack is in the crushed state; that is, the forces are transmitted solely by the steel mesh in this region. For the case of the 4th resolution, this phenomenon is rather limited; that is, it can be concluded that in this case the crack retains partially its ability to transfer shear and compressive forces through the contact and friction phenomena developed in the interface and through the mechanical interlocking that occurs between the two interface parts.In the sequel, it is interesting to examine the deformations that have occurred at the steel mesh. Figure9 displays the steel mesh for the cases studied above. The presented deformations which correspond to the last load step have been magnified by a factor of 10 so that the differences between the examined cases are visible. It is obvious that, for low vertical loading values, the deformations of the steel meshes are actually very similar. However, for moderate values of the vertical loading (N=1.500 kN), there exist some differences. The steel meshes of the cracked walls seem to be distorted in the vicinity of the right part of the formed crack. 
In this region, the vertical rebars above and below the crack present an offset, which can be attributed to the inability of the interface to transfer shear forces. For the case of heavy vertical loading, the situation is quite different. The wall corresponding to the 4th resolution of the fractal crack has a deformation similar to that of the case of moderate loading. However, the steel mesh of the wall corresponding to the 3rd resolution of the fractal crack exhibits significant deformations along the crack. More specifically, the upper vertical rebars present a significant horizontal offset with respect to the lower ones. This horizontal offset is obvious even in the leftmost part of the wall. Moreover, the horizontal rebars of the upper part present a vertical offset with respect to the ones of the lower part. This deformation pattern of the steel mesh verifies the observations made in Figure 8 concerning the excessive strains in the vicinity of the crack (which had values well above the crushing strain limit), resulting from the inability of concrete to transfer any loading in this case.

Figure 9: Deformation of the steel mesh for various values of the vertical loading, for the cases of the uncracked wall (left column) and the cracked walls (3rd resolution: middle column and 4th resolution: right column).

Now, the difference in the response between the 3rd and the 4th resolutions of the fractal crack for the case of heavy vertical loading will be explained. As has already been noticed, higher vertical loading leads to higher values of the horizontal loading. The increased horizontal forces have to be transferred from the upper part of the cracked wall to its lower part.
In this respect, three mechanisms develop in order to facilitate the horizontal load transfer:

(i) exploitation of the tensile strength of the horizontal rebars;
(ii) development of friction on the part of the crack where contact forces occur;
(iii) mechanical interlock between the two interfaces of the crack.

The first two mechanisms are almost identical in both cracked walls. However, it is obvious from Figure 6 that, at higher resolutions, the fractal crack improves its capacity to transfer forces through the mechanical interlock mechanism. In the authors' opinion, this fact is the main reason for the difference in the response between the walls corresponding to the 3rd and the 4th resolutions of the fractal crack. For lower vertical load values the differences are rather limited; however, as the vertical loading increases, the response is completely different, because the increased vertical forces are combined with the increased horizontal forces and completely "destroy" the vicinity of the interface.

Figures 10, 11, and 12 display the forces developed at the horizontal and vertical rebars of the steel mesh for the three variants of the wall examined here, for displacements of 5 mm and 20 mm.
The left column of each figure corresponds to lower values of the axial loading (N=500 kN), the middle column to moderate loading values (N=1500 kN), and the right column to heavy axial loading (N=2500 kN).

Figure 10: Forces developed in the steel mesh for various values of the vertical loading for the uncracked wall (left column: N=500 kN, middle column: N=1500 kN, and right column: N=2500 kN).

Figure 11: Forces developed in the steel mesh for various values of the vertical loading for the 3rd resolution of the crack (left column: N=500 kN, middle column: N=1500 kN, and right column: N=2500 kN).

Figure 12: Forces developed in the steel mesh for various values of the vertical loading for the 4th resolution of the crack (left column: N=500 kN, middle column: N=1500 kN, and right column: N=2500 kN).

For the case of the uncracked wall (Figure 10), it is noticed that, in the early horizontal loading steps (δ=5 mm), only the vertical rebars are significantly loaded. The rebars on the left side of the wall carry tensile forces, while the rebars on the right side develop compressive forces, as a result of the bending of the wall. For δ=20 mm, after the development of cracking in various parts of the initially uncracked wall, the horizontal rebars are also stressed, mainly in the areas where the corresponding cracks have reduced or eliminated the ability of concrete to transfer shear forces.

For the case of the 3rd resolution of the crack (Figure 11), it is noticed that the vertical rebars are stressed only for small axial loading values. For moderate and heavy axial loading, the vertical rebars are only partially stressed. It is worth noting that the rebar stresses are negative in the vicinity of the crack, a fact which verifies that the concrete is unable to transfer even compressive loading.
Moreover, it is noticed that the rebars on the right side of the wall no longer develop compressive stresses, due to the fact that the magnitude of bending that develops in this case is significantly smaller than that in the case of the uncracked wall. The horizontal rebars are stressed only in specific areas, near the crack and in the regions where cracking strains have developed. In any case, a closer look at the forces that have developed in the rebars verifies the significantly decreased bending capacity of the specific wall.

The situation is rather different in the case of the 4th resolution of the fractal crack (Figure 12). It can easily be verified that, for small values of the axial loading, the picture of the forces of the vertical rebars is quite similar to that of the uncracked wall. The same holds also for the forces of the horizontal rebars. For moderate axial load values, the forces of the vertical rebars exhibit discontinuities. At the right part of the crack, it can be noticed that in some rebars the forces are compressive, indicating once again the partial inability of concrete in this region to transfer compressive loading. The horizontal rebars are mainly stressed in the upper part of the cracked wall and in the vicinity of the crack. This result is fully compatible with the remarks given for the cracked areas in Figure 7.

## 5. Conclusions

In this paper, the finite element analysis of a typical shear wall element, which follows the construction practices applied in Greece during the 70s, was presented, assuming that a certain crack has developed as a result of an earthquake action. The crack was modelled using tools from the theory of fractals. Three different resolutions of the fractal crack were considered by taking into account the aggregate sizes of the concrete, and their results were compared to those of the initially uncracked wall.
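The idea of successive "resolutions" of a fractal crack can be illustrated with a generic Koch-type refinement (this is only a hypothetical sketch, not the authors' actual crack geometry): each refinement step lengthens the interface, which is what increases the surface available for mechanical interlock between the two crack faces.

```python
# Illustrative sketch (assumed Koch-type geometry): successive resolutions of
# a fractal interface lengthen the crack path by a factor of 4/3 per step.

def koch_refine(points):
    """Replace each segment a->b with four segments through a Koch apex."""
    out = [points[0]]
    for a, b in zip(points, points[1:]):
        c = a + (b - a) / 3
        e = a + 2 * (b - a) / 3
        d = c + (e - c) * complex(0.5, 3 ** 0.5 / 2)  # rotate by 60 degrees
        out.extend([c, d, e, b])
    return out

def polyline_length(points):
    return sum(abs(b - a) for a, b in zip(points, points[1:]))

# Start from a straight (uncracked-equivalent) interface of unit length.
crack = [complex(0, 0), complex(1, 0)]
lengths = []
for resolution in range(1, 5):
    crack = koch_refine(crack)
    lengths.append(polyline_length(crack))

print(lengths)  # grows geometrically as (4/3)**k
```

The growing interface length at higher resolutions is consistent with the paper's observation that higher-resolution crack models transfer forces better through mechanical interlock.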
The main finding of the paper is that the cracked wall still has the capacity to sustain monotonic horizontal loading. For small axial loading values, this capacity is similar to that of the initially uncracked wall. However, for larger axial loading values, where the demands increase, it seems that a more accurate simulation of the geometry of the fractal crack (i.e., considering higher values of the resolution of the interfaces) leads to better results. With lower resolution values, the roughness of the interfaces is not taken into account, and therefore the mechanical interlock between the two faces of the crack is rather limited, leading the concrete in the vicinity of the crack to overstressing and gradually to a complete loss of its capacity to sustain any kind of forces. In this case the bending capacity of the wall is significantly limited.

---

*Source: 101484-2013-11-20.xml*
2013
# On Delay-Range-Dependent Stochastic Stability Conditions of Uncertain Neutral Delay Markovian Jump Systems

**Authors:** Xinghua Liu; Hongsheng Xi
**Journal:** Journal of Applied Mathematics (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101485

---

## Abstract

The delay-range-dependent stochastic stability of uncertain neutral Markovian jump systems with interval time-varying delays is studied in this paper. The uncertainties under consideration are assumed to be time varying but norm bounded. Starting from the nominal systems, a novel augmented Lyapunov functional which contains some triple-integral terms is introduced. Then, by employing some integral inequalities and the nature of convex combination, some less conservative stochastic stability conditions are presented in terms of linear matrix inequalities without introducing any free-weighting matrices. Finally, numerical examples are provided to demonstrate the effectiveness of the results and to show that the proposed conditions significantly improve the allowed upper bounds of the delay size over some existing ones in the literature.

---

## Body

## 1. Introduction

Time delays are frequently encountered in various engineering systems, such as chemical or process control systems, networked control systems, and manufacturing systems. In general, delays can appear in the state, input, or output variables (retarded systems), as well as in the state derivative (neutral systems). In fact, neutral delay systems constitute a more general class than those of the retarded type, and such systems arise in areas such as population ecology [1], distributed neural networks [2], heat exchangers, and robots in contact with rigid environments [3].
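What distinguishes a neutral system is that the delayed state derivative enters the dynamics, so a simulation must store the derivative history as well as the state history. A minimal fixed-step sketch of a scalar neutral delay equation illustrates this; the coefficients below are hypothetical, chosen so that the toy system is stable (strong damping and |c| < 1).

```python
# Toy scalar neutral delay differential equation (hypothetical parameters):
#   x'(t) = a*x(t) + b*x(t - d) + c*x'(t - tau)
# The derivative history dx must be buffered too, which is exactly what
# distinguishes a neutral system from a retarded one.
h = 0.005                   # Euler step size
a, b, c = -2.0, 0.5, 0.3    # illustrative coefficients, |c| < 1
d = tau = 0.5               # state delay and neutral delay
nd, ntau = int(d / h), int(tau / h)
steps = 4000                # simulate up to t = 20

x = [1.0]                   # constant initial history phi(s) = 1, so phi'(s) = 0
dx = []
for n in range(steps):
    x_del = x[n - nd] if n >= nd else 1.0        # x(t - d)
    dx_del = dx[n - ntau] if n >= ntau else 0.0  # x'(t - tau)
    dx.append(a * x[n] + b * x_del + c * dx_del)
    x.append(x[n] + h * dx[n])                   # explicit Euler step

print(abs(x[-1]))           # the trajectory decays for these parameters
```

For these illustrative values the trajectory decays; the stability analysis developed in the paper gives verifiable conditions for when this happens.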
Since it is known that the existence of delays in a dynamic system may result in instability, oscillations, or poor performance [3–5], the stability of time-delay systems has been an important problem of recurring interest for many years. Existing results on this topic can be roughly classified into two categories, namely, delay-independent criteria [6] and delay-dependent criteria, and it is generally recognized that the latter are less conservative. Actually, the stability of neutral time-delay systems, like that of singular systems [7–9], proves to be a more complex issue because such systems involve the derivative of the delayed state. So considerable attention has been devoted to the problems of robust delay-independent stability and delay-dependent stability and stabilization, via different approaches, for linear neutral systems with delayed state input and parameter uncertainties. Results are mainly presented based on the Lyapunov-Krasovskii (L-K) method; see, for example, [10–16] and the references therein. However, there is room for further investigation, because the conservativeness of the existing conditions can be further reduced by better techniques.

On the other hand, with the development of science and technology, many practical dynamics, for example, solar thermal central receivers, robotic manipulator systems, aircraft control systems, and economic systems, experience abrupt changes in their structures and parameters, caused by phenomena such as component failures or repairs, changes in subsystem interconnections, and sudden environmental changes. This class of systems is more appropriately described by Markovian jump systems (MJSs), which can be regarded as a special class of hybrid systems with finite operation modes. The system parameters jump among finite modes, and the mode switching is governed by a Markov process to represent the abrupt variation in their structures and parameters.
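The mode-switching mechanism of an MJS can be sketched with a toy Markovian jump linear system. The mode matrices and rates below are hypothetical; both modes are Hurwitz and share the common Lyapunov function ∥x∥² (A + Aᵀ < 0 in each mode), so this particular sample path contracts regardless of the realized switching — a deliberately easy case chosen only to show the structure.

```python
import numpy as np

# Two-mode Markovian jump linear system x'(t) = A(r_t) x(t), with
# hypothetical mode matrices; holding times in mode i are drawn from an
# exponential distribution, as the Markov process dictates.
rng = np.random.default_rng(0)
A = {1: np.array([[-1.0, 0.0], [0.0, -1.0]]),
     2: np.array([[-1.0, 1.0], [0.0, -1.0]])}
rates = {1: 1.0, 2: 2.0}    # exit rate of each mode

h, T = 0.01, 10.0
x = np.array([1.0, 1.0])
mode = 1
t, t_next_jump = 0.0, rng.exponential(1.0 / rates[1])
while t < T:
    if t >= t_next_jump:                 # Markovian switch to the other mode
        mode = 2 if mode == 1 else 1
        t_next_jump = t + rng.exponential(1.0 / rates[mode])
    x = x + h * A[mode] @ x              # explicit Euler step in current mode
    t += h

print(np.linalg.norm(x))                 # small: the trajectory has contracted
```

When one or more modes are unstable, or when delays enter the dynamics, such sample-path reasoning no longer suffices, which is where the stochastic stability conditions of this paper come in.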
With so many applications in engineering systems, a great deal of attention has been paid to the stability analysis and controller synthesis for Markovian jump systems (MJSs) in recent years. Many researchers have made considerable progress on Markovian jump delay systems and Markovian jump control theory; see, for example, [17–23] and references therein for more details. However, few of these papers have considered the effect of the neutral delay on the stability or stabilization of the corresponding neutral systems. Besides, to the best of the authors' knowledge, the problem of stochastic stability for neutral Markovian jump systems with interval time-varying delays has not been fully investigated, and it remains challenging. Motivated by the above discussion, this paper investigates the stochastic stability of neutral Markovian jump systems with interval time-varying delays, seeking stochastic stability conditions that are less conservative than some previous ones.

In order to simplify the treatment of the problem, in this paper we first investigate the nominal systems and construct a new augmented Lyapunov functional containing some triple-integral terms to reduce conservativeness. By some integral inequalities and the nature of convex combination, delay-range-dependent stochastic stability conditions are derived for the nominal neutral systems with Markovian jump parameters and interval time-varying delays. Then, the results are extended to the corresponding uncertain case on the basis of the obtained conditions. In addition, these conditions are expressed in linear matrix inequalities (LMIs), which can be easily checked by utilizing the LMI Toolbox in MATLAB.
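Stability certificates of this kind can be checked numerically. The paper relies on the MATLAB LMI Toolbox; as a lightweight stand-in (an assumption for illustration, not the paper's method), the sketch below solves the equality-form Lyapunov equation AᵀP + PA = −Q for a hypothetical Hurwitz matrix A with NumPy and verifies that the resulting P is symmetric positive definite — exactly the kind of feasibility check an LMI solver automates for the matrix inequalities derived later.

```python
import numpy as np

# Solve the Lyapunov equation A^T P + P A = -Q by vectorization:
# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P), with column-major vec.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # hypothetical Hurwitz matrix
Q = np.eye(2)
n = A.shape[0]
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten(order="F")).reshape(n, n, order="F")
P = (P + P.T) / 2                            # symmetrize against round-off

print(P)                                     # [[1.25, 0.25], [0.25, 0.25]]
print(np.linalg.eigvalsh(P).min() > 0)       # True: P > 0 certifies stability
```

An LMI solver generalizes this check from equalities to inequalities with many coupled matrix variables, which is why the conditions below are stated in LMI form.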
Numerical examples are given to show the effectiveness and the reduced conservativeness over some previous references.

The main contributions of this paper can be summarized as follows: (1) the proposed Lyapunov functional contains some triple-integral terms, which are very effective in reducing conservativeness and have not been used in the existing literature in the same context before; (2) the delay-range-dependent stability conditions are obtained in terms of LMIs without introducing any free-weighting matrices besides the Lyapunov matrices, which reduces the number of variables and decreases the complexity of computation; (3) the proposed results are expressed in a new representation and proved to be less conservative than some existing ones.

The remainder of this paper is organized as follows: Section 2 contains the problem statement and preliminaries; Section 3 presents the main results; Section 4 provides a numerical example to verify the effectiveness of the results; Section 5 draws a brief conclusion.

### 1.1. Notations

In this paper, ℝn denotes the n-dimensional Euclidean space and ℝm×n is the set of all m×n matrices. The notation X<Y (X>Y), where X and Y are both symmetric matrices, means that X-Y is negative (positive) definite. I denotes the identity matrix with proper dimensions. λmax(min)(A) is the eigenvalue of matrix A with maximum (minimum) real part. For a symmetric block matrix, we use the sign * to denote the terms introduced by symmetry. ℰ stands for the mathematical expectation, and ∥v∥ is the Euclidean norm of vector v, ∥v∥=(vTv)1/2, while ∥A∥ is the spectral norm of matrix A, ∥A∥=[λmax(ATA)]1/2. C([-ρ,0],ℝn) is the space of continuous functions from [-ρ,0] to ℝn. In addition, if not explicitly stated, matrices are assumed to have compatible dimensions.
## 2. Problem Statement and Preliminaries

Given a probability space {Ω,ℱ,P}, where Ω is the sample space, ℱ is the σ-algebra of events, and P is the probability measure defined on ℱ, {rt,t≥0} is a homogeneous, finite-state Markovian process with right-continuous trajectories, taking values in a finite set S={1,2,3,…,N}, with the mode transition probability matrix (1)P(rt+Δt=j∣rt=i)={πijΔt+o(Δt),i≠j,1+πiiΔt+o(Δt),i=j, where Δt>0, limΔt→0(o(Δt)/Δt)=0, and πij≥0(i,j∈S,i≠j) denotes the transition rate from mode i to j. For any state or mode i∈S, we have (2)πii=-∑j=1,j≠iN‍πij.

In this paper, we consider the following uncertain neutral systems with Markovian jump parameters and time-varying delay over the space {Ω,ℱ,P}: (3)x˙(t)-C(rt)x˙(t-τ)=[A(rt)+ΔA(rt)]x(t)+[B(rt)+ΔB(rt)]x(t-d(t)),(4)x(s)=φ(s),rs=r0,s∈[-ρ,0], where x(t)∈ℝn is the system state and τ>0 is a constant neutral delay. It is assumed that the time-varying delay d(t) satisfies (5)0<d1≤d(t)≤d2,d˙(t)≤μ, where d1<d2 and μ≥0 are constant real values. The initial condition φ(s) is a continuously differentiable vector-valued function. The continuous norm of φ(s) is defined as (6)∥φ∥c=maxs∈[-ρ,0]|φ(s)|, with ρ=max{τ,d2}. A(rt)∈ℝn×n, B(rt)∈ℝn×n, and C(rt)∈ℝn×n are known mode-dependent constant matrices, while ΔA(rt)∈ℝn×n and ΔB(rt)∈ℝn×n are uncertainties.
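Relations (1) and (2) can be checked numerically for a concrete (hypothetical) rate matrix: because each row of Π sums to zero, the short-time transition matrix P(Δt) ≈ I + ΠΔt + (ΠΔt)²/2 is row-stochastic, and its off-diagonal entries match πijΔt up to o(Δt).

```python
import numpy as np

# Hypothetical two-mode transition rate matrix; pi_ii = -sum_{j != i} pi_ij,
# as required by relation (2).
Pi = np.array([[-1.0, 1.0],
               [2.0, -2.0]])
assert np.allclose(Pi.sum(axis=1), 0.0)     # rows sum to zero: relation (2)

dt = 1e-3
# Truncated matrix exponential: P(dt) = I + Pi*dt + (Pi*dt)^2/2 + ...
P = np.eye(2) + Pi * dt + (Pi @ Pi) * dt ** 2 / 2

print(P.sum(axis=1))                        # each row sums to 1
print(abs(P[0, 1] - Pi[0, 1] * dt) / dt)    # o(dt)/dt term, vanishes as dt -> 0
```

This is only a numerical sanity check of the transition-rate conventions; the values of Π here are illustrative, not taken from the paper's examples.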
For notational simplicity, when rt=i∈S, A(rt), ΔA(rt), B(rt), ΔB(rt), and C(rt) are, respectively, denoted as Ai, ΔAi, Bi, ΔBi, and Ci. Throughout this paper, the parametric matrices satisfy ∥Ci∥<1, and the admissible parametric uncertainties are assumed to satisfy the following condition: (7)[ΔAi(t)ΔBi(t)]=HiFi(t)[EAiEBi], where Hi, EAi, and EBi are known mode-dependent constant matrices with appropriate dimensions and Fi(t) is an unknown and time-varying matrix satisfying (8)FiT(t)Fi(t)≤I,∀t. In particular, when we consider Fi(t)=0, we get the nominal systems, which can be described as (9)x˙(t)-Cix˙(t-τ)=Aix(t)+Bix(t-d(t)).

Before proceeding further, the following assumptions, definitions, and lemmas need to be introduced.

Assumption 1. The system matrix Ai (for all i∈S) is a Hurwitz matrix with all its eigenvalues having negative real parts for each mode. The matrix Hi (for all i∈S) is chosen as a full-row-rank matrix.

Assumption 2. The Markov process is irreducible, and the system mode rt is available at time t.

With regard to neutral systems, the operator 𝔇:C([-ρ,0],ℝn)→ℝn is defined to be (10)𝔇(xt)=x(t)-Cx(t-τ). Then, the stability of the operator 𝔇 is defined as follows.

Definition 3 (see [4]). The operator 𝔇 is said to be stable if the homogeneous difference equation (11)𝔇(xt)=0,t≥0,x0=ψ∈{ϕ∈C([-ρ,0],ℝn):𝔇ϕ=0} is uniformly asymptotically stable. In order to guarantee the stability of the operator 𝔇, one has assumed that ∥Ci∥<1, as previously mentioned; this condition was introduced in [24].

Definition 4 (see [25]). The systems described by (3) are said to be stochastically stable if there exists a positive constant Υ such that (12)ℰ{∫0∞‍∥x(rt,t)∥2dt∣φ(s),s∈[-ρ,0],r0}<Υ.

Definition 5 (see [26]).
In the Euclidean space {ℝn×S×R+}, where x(t)∈ℝn, rt∈S, and t∈R+, one introduces the stochastic Lyapunov-Krasovskii function of system (3) as V(x(t),rt=i,t>0)=V(xt,i,t), with the infinitesimal generator satisfying (13)𝔏V(x(t),i,t)=limΔt→0(1/Δt)[ℰ{V(x(t+Δt),rt+Δt,t+Δt)∣x(t)=x,rt=i}-V(x(t),i,t)]=∂∂tV(x(t),i,t)+∂∂xV(x(t),i,t)x˙(t)+∑j=1N‍πijV(x(t),j,t).

Lemma 6 (see [27, 28]). For any constant matrix H=HT>0 and scalars τ2>τ1>0 such that the following integrations are well defined, then (14)(a)-(τ2-τ1)∫t-τ2t-τ1‍xT(s)Hx(s)ds≤-[∫t-τ2t-τ1‍xT(s)ds]H[∫t-τ2t-τ1‍x(s)ds],(b)-12(τ22-τ12)∫-τ2-τ1‍∫t+θt‍xT(s)Hx(s)dsdθ≤-[∫-τ2-τ1‍∫t+θt‍xT(s)dsdθ]H[∫-τ2-τ1‍∫t+θt‍x(s)dsdθ].

Lemma 7 (see [19]). Suppose that 0≤τm≤τ(t)≤τM, and that Ξ1, Ξ2, and Ω are constant matrices of appropriate dimensions; then (15)(τ(t)-τm)Ξ1+(τM-τ(t))Ξ2+Ω<0 if and only if (τM-τm)Ξ1+Ω<0 and (τM-τm)Ξ2+Ω<0 hold.

Lemma 8 (see [29]). For given matrices Q=QT, M, and N with appropriate dimensions, (16)Q+MF(t)N+NTFT(t)MT<0 for all F(t) satisfying FT(t)F(t)≤I if and only if there exists a scalar δ>0 such that (17)Q+δ-1MMT+δNNT<0.

Lemma 9 (see [30]). Given constant matrices Ω1, Ω2, and Ω3, where Ω1=Ω1T and Ω2=Ω2T>0, then Ω1+Ω3TΩ2-1Ω3<0 if and only if (18)[Ω1Ω3T*-Ω2]<0or[-Ω2Ω3T*Ω1]<0.

## 3. Main Results

In this section, we first consider the nominal systems described by (9) and then extend the results to the uncertain case. The following theorems present sufficient conditions to guarantee the stochastic stability of the neutral systems with Markovian jump parameters and time-varying delays.

### 3.1. Stochastic Stability for the Nominal Systems

Theorem 10.
For the given finite set S of modes with transition rate matrix, scalars d1, d2, τ, and μ, the neutral systems with Markovian jump parameters and time-varying delays as described by (9) are stochastically stable if the operator 𝔇 is stable, and there exist symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,…,11) such that the following linear matrix inequalities hold: (19)Πi1+ΓTMΓ<0,Πi2+ΓTMΓ<0, where (20)Πi1=Πi0-2(e2-e4)R7(e2T-e4T)-(e3-e2)R7(e3T-e2T)-e8R9e8T-2e9R9e9T,Πi2=Πi0-(e2-e4)R7(e2T-e4T)-2(e3-e2)R7(e3T-e2T)-2e8R9e8T-e9R9e9T, where (21)Πi0=e1Ye1T-e2[(1-μ)R3]e2T+e1PiBie2T+e2BiTPie1T+e1PiCie11T+e11CiTPie1T+e3(R2+R3-R1)e3T-e4R2e4T+e5(R5-R4)e5T-e6R5e6T-e7R8e7T-e10Q1e10T-e11Q2e11T-e12Q3e12T-(e1-e10)Q4(e1T-e10T)-(e1-e3)R6(e1T-e3T)-(τe1-e12)Q5(τe1T-e12T)-(d1e1-e7)R10(d1e1T-e7T)-(d12e1-e8-e9)R11(d12e1T-e8T-e9T), where ei{i=1,2,…,12} are block entry matrices; for instance, (22)e2T=[0I0000000000],Y=AiTPi+PiAi+∑j=1N‍πijPj+Q1+τ2Q3+R1+d12R8+d122R9,M=Q2+τ2Q4+τ44Q5+R4+d12R6+d122R7+d144R10+dm2R11,Γ=[AiBi00000000Ci0],d12=d2-d1,dm=12(d22-d12).

Proof. Construct a novel Lyapunov functional as follows: (23)V(x(t),i,t)=Vr(xt,i)+Vτ(xt,i)+Vd1(xt,i)+Vd2(xt,i)+Vd3(xt,i), where (24)Vr(xt,i)=xT(t)Pix(t),Vτ(xt,i)=∫t-τt‍xT(s)Q1x(s)ds+∫t-τt‍x˙T(s)Q2x˙(s)ds+∫-τ0‍∫t+θt‍xT(s)[τQ3]x(s)dsdθ+∫-τ0‍∫t+θt‍x˙T(s)[τQ4]x˙(s)dsdθ+∫-τ0‍∫θ0‍∫t+λt‍x˙T(s)[τ22Q5]x˙(s)dsdλdθ,Vd1(xt,i)=∫t-d1t‍xT(s)R1x(s)ds+∫t-d2t-d1‍xT(s)R2x(s)ds+∫t-d(t)t-d1‍xT(s)R3x(s)ds+∫t-d1t‍x˙T(s)R4x˙(s)ds+∫t-d2t-d1‍x˙T(s)R5x˙(s)ds,Vd2(xt,i)=∫-d10‍∫t+θt‍x˙T(s)[d1R6]x˙(s)dsdθ+∫-d2-d1‍∫t+θt‍x˙T(s)[d12R7]x˙(s)dsdθ+∫-d10‍∫t+θt‍xT(s)[d1R8]x(s)dsdθ+∫-d2-d1‍∫t+θt‍xT(s)[d12R9]x(s)dsdθ,Vd3(xt,i)=∫-d10‍∫θ0‍∫t+λt‍x˙T(s)[d122R10]x˙(s)dsdλdθ+∫-d2-d1‍∫θ0‍∫t+λt‍x˙T(s)[dmR11]x˙(s)dsdλdθ.
From Definition 5, taking 𝔏 as the infinitesimal generator along the trajectory of system (9), from (23) and (24) we get the following equalities and inequalities: (25)𝔏V(x(t),i,t)=𝔏Vr(xt,i)+𝔏Vτ(xt,i)+𝔏Vd1(xt,i)+𝔏Vd2(xt,i)+𝔏Vd3(xt,i),(26)𝔏Vr(xt,i)=2[xT(t)AiT+xT(t-d(t))BiT+x˙T(t-τ)CiT]Pix(t)+∑j=1N‍πijxT(t)Pjx(t),(27)𝔏Vτ(xt,i)=xT(t)[Q1+τ2Q3]x(t)+x˙T(t)[Q2+τ2Q4+τ44Q5]x˙(t)-xT(t-τ)Q1x(t-τ)-x˙T(t-τ)Q2x˙(t-τ)-∫t-τt‍xT(s)[τQ3]x(s)ds-∫t-τt‍x˙T(s)[τQ4]x˙(s)ds-∫-τ0‍∫t+θt‍x˙T(s)[τ22Q5]x˙(s)dsdθ,𝔏Vd1(xt,i)=xT(t)R1x(t)+x˙T(t)R4x˙(t)+xT(t-d1)[R2+R3-R1]x(t-d1)+x˙T(t-d1)[R5-R4]x˙(t-d1)-x˙T(t-d2)R5x˙(t-d2)-xT(t-d2)R2x(t-d2)-(1-d˙(t))xT(t-d(t))R3x(t-d(t)),𝔏Vd2(xt,i)=xT(t)[d12R8+d122R9]x(t)+x˙T(t)[d12R6+d122R7]x˙(t)-∫t-d1t‍x˙T(s)[d1R6]x˙(s)ds-∫t-d2t-d1‍x˙T(s)[d12R7]x˙(s)ds-∫t-d1t‍xT(s)[d1R8]x(s)ds-∫t-d2t-d1‍xT(s)[d12R9]x(s)ds,𝔏Vd3(xt,i)=x˙T(t)[d144R10+dm2R11]x˙(t)-∫-d10‍∫t+θt‍x˙T(s)[d122R10]x˙(s)dsdθ-∫-d2-d1‍∫t+θt‍x˙T(s)[dmR11]x˙(s)dsdθ.

Let us define (28)ξ(t)=col{x(t),x(t-d(t)),x(t-d1),x(t-d2),x˙(t-d1),x˙(t-d2),∫t-d1t‍x(s)ds,∫t-d(t)t-d1‍x(s)ds,∫t-d2t-d(t)‍x(s)ds,x(t-τ),x˙(t-τ),∫t-τt‍x(s)ds}. Applying (a) of Lemma 6, we obtain (29)-∫t-τt‍xT(s)[τQ3]x(s)ds≤-[∫t-τt‍xT(s)ds]Q3[∫t-τt‍x(s)ds]=-ξT(t)e12Q3e12Tξ(t). Following the same procedure, we also obtain the inequalities as follows: (30)-∫t-τt‍x˙T(s)[τQ4]x˙(s)ds≤-ξT(t)(e1-e10)Q4(e1T-e10T)ξ(t),-∫t-d1t‍x˙T(s)[d1R6]x˙(s)ds≤-ξT(t)(e1-e3)R6(e1T-e3T)ξ(t),-∫t-d1t‍xT(s)[d1R8]x(s)ds≤-[∫t-d1t‍xT(s)ds]R8[∫t-d1t‍x(s)ds]. Applying (b) of Lemma 6, we have (31)-∫-τ0‍∫t+θt‍x˙T(s)[τ22Q5]x˙(s)dsdθ≤-[∫-τ0‍∫t+θt‍x˙T(s)dsdθ]Q5[∫-τ0‍∫t+θt‍x˙(s)dsdθ]=-[τxT(t)-∫t-τt‍xT(s)ds]Q5[τx(t)-∫t-τt‍x(s)ds]=-ξT(t)(τe1-e12)Q5(τe1T-e12T)ξ(t). Then the following inequalities are obtained by the same technique: (32)-∫-d10‍∫t+θt‍x˙T(s)[d122R10]x˙(s)dsdθ≤-ξT(t)(d1e1-e7)R10(d1e1T-e7T)ξ(t),-∫-d2-d1‍∫t+θt‍x˙T(s)[dmR11]x˙(s)dsdθ≤-ξT(t)(d12e1-e8-e9)R11(d12e1T-e8T-e9T)ξ(t).
Let λ(t)=(d(t)-d1)/d12; then we have (33)-∫t-d2t-d1‍x˙T(s)[d12R7]x˙(s)ds=-d12∫t-d2t-d(t)‍x˙T(s)R7x˙(s)ds-d12∫t-d(t)t-d1‍x˙T(s)R7x˙(s)ds=-(d2-d(t))∫t-d2t-d(t)‍x˙T(s)R7x˙(s)ds-(d(t)-d1)∫t-d2t-d(t)‍x˙T(s)R7x˙(s)ds-(d(t)-d1)∫t-d(t)t-d1‍x˙T(s)R7x˙(s)ds-(d2-d(t))∫t-d(t)t-d1‍x˙T(s)R7x˙(s)ds≤-ξT(t)(e2-e4)R7(e2T-e4T)ξ(t)-((d(t)-d1)/(d2-d(t)))ξT(t)(e2-e4)R7(e2T-e4T)ξ(t)-ξT(t)(e3-e2)R7(e3T-e2T)ξ(t)-((d2-d(t))/(d(t)-d1))ξT(t)(e3-e2)R7(e3T-e2T)ξ(t)≤-ξT(t)(e2-e4)R7(e2T-e4T)ξ(t)-λ(t)ξT(t)(e2-e4)R7(e2T-e4T)ξ(t)-ξT(t)(e3-e2)R7(e3T-e2T)ξ(t)-(1-λ(t))ξT(t)(e3-e2)R7(e3T-e2T)ξ(t)=-(1+λ(t))ξT(t)(e2-e4)R7(e2T-e4T)ξ(t)-(2-λ(t))ξT(t)(e3-e2)R7(e3T-e2T)ξ(t). Consistent with the technique of (33), we obtain (34)-∫t-d2t-d1‍xT(s)[d12R9]x(s)ds≤-(2-λ(t))ξT(t)(e8R9e8T)ξ(t)-(1+λ(t))ξT(t)(e9R9e9T)ξ(t), considering (35)x˙T(t)Mx˙(t)=(Aix(t)+Bix(t-d(t))+Cix˙(t-τ))T×M(Aix(t)+Bix(t-d(t))+Cix˙(t-τ))=ξT(t)ΓTMΓξ(t), where M and Γ have been defined as before. We take the previous equalities and inequalities (26)–(35) into (25); thus, we finally get (36)𝔏V(x(t),i,t)≤ξT(t)[λ(t)Πi1+(1-λ(t))Πi2+ΓTMΓ]×ξ(t). Since 0≤λ(t)≤1, by utilizing Lemma 7, we know that λ(t)Πi1+(1-λ(t))Πi2+ΓTMΓ<0 is equivalent to (19). So we choose (37)β=maxi∈S,λ(t)∈[0,1]λmax[λ(t)Πi1+(1-λ(t))Πi2+ΓTMΓ]. Then β<0 and (38)𝔏V(x(t),i,t)≤β∥ξ(t)∥2≤β∥x(t)∥2. According to (38), from Dynkin's formula [31], we obtain (39)ℰ{V(x(t),i,t)}-V(x0,r0)≤βℰ{∫0t‍∥x(s)∥2ds}. Letting t→∞, we have (40)limt→∞ℰ{∫0t‍∥x(s)∥2ds}≤(-β)-1V(x0,r0). From Definition 4, we know that the systems described by (9) are stochastically stable. This completes the proof.

Remark 11. Theorem 10 provides a delay-range-dependent stochastic stability criterion for nominal neutral systems with interval time-varying delays and Markovian jump parameters as described by (9). By utilizing a new Lyapunov functional, a less conservative criterion is obtained in terms of LMIs, as will be verified in Section 4.

Remark 12.
In the same context of stochastic stability for neutral systems with Markovian jumping parameters and time-varying delays, this type of augmented Lyapunov functional has not been used in the existing literature. Compared with existing Lyapunov functionals, the proposed one (23) contains some triple-integral terms, which, as shown in [28], are very effective in reducing conservativeness. Besides, the information on the lower bound of the delay is sufficiently used in the Lyapunov functional by introducing terms such as ∫t-d2t-d1‍xT(s)R2x(s)ds and ∫t-d(t)t-d1‍xT(s)R3x(s)ds.

In many circumstances, the information on the delay derivative may not be available. That is, μ is usually unknown in real systems. So we give the following result as a corollary, which can be obtained from Theorem 10 by setting R3=0.

Corollary 13. For the given finite set S of modes with transition rate matrix, scalars d1, d2, and τ, the neutral systems with Markovian jump parameters and time-varying delays as described by (9) are stochastically stable if the operator 𝔇 is stable, and there exist symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,4,5,…,11) such that the following linear matrix inequalities hold: (41)Π~i1+ΓTMΓ<0,Π~i2+ΓTMΓ<0, where (42)Π~i1=Π~i0-2(e2-e4)R7(e2T-e4T)-(e3-e2)R7(e3T-e2T)-e8R9e8T-2e9R9e9T,Π~i2=Π~i0-(e2-e4)R7(e2T-e4T)-2(e3-e2)R7(e3T-e2T)-2e8R9e8T-e9R9e9T, where (43)Π~i0=e1Ye1T+e1PiBie2T+e2BiTPie1T+e1PiCie11T+e11CiTPie1T+e3(R2-R1)e3T-e4R2e4T+e5(R5-R4)e5T-e6R5e6T-e7R8e7T-e10Q1e10T-e11Q2e11T-e12Q3e12T-(e1-e10)Q4(e1T-e10T)-(e1-e3)R6(e1T-e3T)-(τe1-e12)Q5(τe1T-e12T)-(d1e1-e7)R10(d1e1T-e7T)-(d12e1-e8-e9)R11(d12e1T-e8T-e9T), and the other notations are the same as in Theorem 10.

### 3.2. Stochastic Stability for the Uncertain Neutral Markovian Jump Systems

In this subsection, we consider the uncertain case, which can be described by (3).
Based on Theorem 10, we obtain the following theorem to guarantee the stochastic stability of the uncertain neutral systems with interval time-varying delays and Markovian jump parameters.

Theorem 14. For the given finite set S of modes with transition rate matrix, scalars d1, d2, τ, and μ, the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable if the operator 𝔇 is stable, and there exist scalars δ1>0, δ2>0, symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,…,11) such that the following matrix inequalities hold: (44)[Πi1+1δ1e1PiHiHiTPie1T+δ1εεT(ΓT+1δ1e1PiHiHiT)M*1δ1MHiHiTM-M]<0,(45)[Πi2+1δ2e1PiHiHiTPie1T+δ2εεT(ΓT+1δ2e1PiHiHiT)M*1δ2MHiHiTM-M]<0, where εT=EAie1T+EBie2T, and Πi1, Πi2, Γ, and M have been defined in Theorem 10.

Proof. On the basis of Theorem 10, we directly replace Ai and Bi with Ai+ΔAi(t) and Bi+ΔBi(t) and obtain (46)Πi1(t)+ΓT(t)MΓ(t)<0,(47)Πi2(t)+ΓT(t)MΓ(t)<0, where (48)Πi1(t)=Πi1+e1[ΔAiT(t)Pi+PiΔAi(t)]e1T+e1PiΔBi(t)e2T+e2ΔBiT(t)Pie1T,Πi2(t)=Πi2+e1[ΔAiT(t)Pi+PiΔAi(t)]e1T+e1PiΔBi(t)e2T+e2ΔBiT(t)Pie1T. Considering (46) and combining the uncertainty condition (7) by Lemma 9, we have (49)[Πi1ΓTMMΓ-M]+[e1Pi0M0]HiFi(t)εT+εFiT(t)HiT[Pie1TM00]<0. With (8), by Lemma 8 from (49), we obtain (50)[Πi1ΓTMMΓ-M]+1δ1[e1Pi0M0]HiHiT[Pie1TM00]+δ1[εεT000]<0. Obviously, (50) is equivalent to (44). Similarly, considering (47) and following the same procedure, we can get (45). Finally, following the latter part of the proof of Theorem 10, we know that the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable. This completes the proof.

Remark 15. It should be noted that (44) and (45) can be viewed as linear matrix inequalities by introducing new variables.
That is, define matrices (51)Pi(1)=1δ1PiHiHiTPi,Pi(2)=1δ2PiHiHiTPi,PiM(1)=1δ1PiHiHiTM,PiM(2)=1δ2PiHiHiTM,M(1)=1δ1MHiHiTM,M(2)=1δ2MHiHiTM, where Hi, i∈S, are known constant matrices which have been defined in (7). Then (44) and (45) can be easily solved by the LMI Toolbox in MATLAB.

Remark 16. It should be mentioned that Theorem 14 is an extension of Theorem 10 to uncertain neutral Markovian jump systems with interval time-varying delays. It provides a stochastic delay-range-dependent stability criterion for (3), and it will be verified to be less conservative than some existing ones in Section 4.

Consistent with the nominal systems, in the uncertain case we have the following result as a corollary for when the information on the delay derivative μ is not available. The corollary is also obtained by setting R3=0 in Theorem 14.

Corollary 17. For the given finite set S of modes with transition rate matrix, scalars d1, d2, and τ, the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable if the operator 𝔇 is stable, and there exist scalars δ1>0, δ2>0, symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,4,5,…,11) such that the following symmetric matrix inequalities hold: (52)[Π~i1+1δ1e1PiHiHiTPie1T+δ1εεT(ΓT+1δ1e1PiHiHiT)M*1δ1MHiHiTM-M]<0,[Π~i2+1δ2e1PiHiHiTPie1T+δ2εεT(ΓT+1δ2e1PiHiHiT)M*1δ2MHiHiTM-M]<0, where Π~i1 and Π~i2 have been defined in Corollary 13, and the remaining notations are the same as in Theorem 14.
In the same context of the stochastic stability for neutral systems with Markovian jumping parameters and time-varying delays, the type of augmented Lyapunov functional has not been used in any of the existing literatures. Compared with the existing Lyapunov functional, the proposed one (23) contains some triple-integral terms, which is very effective in the reduction of conservativeness in [28]. Besides, the information on the lower bound of the delay is sufficiently used in the Lyapunov functional by introducing the terms, such as ∫t-d2t-d1‍xT(s)R2x(s)ds and ∫t-d(t)t-d1‍xT(s)R3x(s)ds.In many circumstances, the information on the delay derivative may not be available. That is,μ is usually unknown in the real systems. So we give the following result as a corollary which can be obtained from Theorem 10 by setting R3=0.Corollary 13. For the given finite setS of modes with transition rates matrix, scalars d1, d2, and τ, the neutral systems with Markovian jump parameters and time-varying delays as described by (9) are stochastically stable if the operator 𝔇 is stable, and there exist symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,4,5,…,11) such that the following linear matrix inequalities hold: (41)Π~i1+ΓTMΓ<0,Π~i2+ΓTMΓ<0, where (42)Π~i1=Π~i0-2(e2-e4)R7(e2T-e4T)-(e3-e2)R7(e3T-e2T)-e8R9e8T-2e9R9e9T,Π~i2=Π~i0-(e2-e4)R7(e2T-e4T)-2(e3-e2)R7(e3T-e2T)-2e8R9e8T-e9R9e9T, where (43)Π~i0=e1Ye1T+e1PiBie2T+e2BiTPie1T+e1PiCie11T+e11CiTPie1T+e3(R2-R1)e3T-e4R2e4T+e5(R5-R4)e5T-e6R5e6T-e7R8e7T-e10Q1e10T-e11Q2e11T-e12Q3e12T-(e1-e10)Q4(e1T-e10T)-(e1-e3)R6(e1T-e3T)-(τe1-e12)Q5(τe1T-e12T)-(d1e1-e7)R10(d1e1T-e7T)-(d12e1-e8-e9)R11(d12e1T-e8T-e9T), and other notations are the same as Theorem 10. ## 3.2. Stochastic Stability for the Uncertain Neutral Markovian Jump Systems In this subsection, we consider the uncertain case which can be described by (3). 
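Theorem 14 below handles the norm-bounded uncertainty through the bounding step of Lemma 8, which replaces the F(t)-dependent cross terms MFN + NᵀFᵀMᵀ by the F-free bound δ⁻¹MMᵀ + δNᵀN. A minimal numeric sanity check of that inequality, with illustrative dimensions and random data (none of the values below come from the paper, and NᵀN is written here for the lemma's second term under the shape convention M: n×p, N: q×n):

```python
import numpy as np

# Check: M F N + N^T F^T M^T  <=  (1/delta) M M^T + delta N^T N  for all F^T F <= I.
# This follows from expanding (delta^{-1/2} M - delta^{1/2} N^T F^T)(...)^T >= 0.
rng = np.random.default_rng(0)
n, p, q = 4, 2, 3                 # illustrative dimensions: M is n x p, F is p x q, N is q x n
M = rng.standard_normal((n, p))
N = rng.standard_normal((q, n))
delta = 0.7                       # any delta > 0 yields a valid bound

bound = (1.0 / delta) * M @ M.T + delta * N.T @ N
for _ in range(200):
    F = rng.standard_normal((p, q))
    F /= max(1.0, np.linalg.norm(F, 2))   # enforce ||F||_2 <= 1, i.e. F^T F <= I
    cross = M @ F @ N
    gap = bound - (cross + cross.T)       # must be positive semidefinite
    assert np.linalg.eigvalsh(gap).min() >= -1e-9

print("S-procedure bound verified on 200 random samples")
```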
Based on Theorem 10, we obtain the following theorem to guarantee the stochastic stability for the uncertain neutral systems with interval time-varying delays and Markovian jump parameters.Theorem 14. For the given finite setS of modes with transition rates matrix, scalars d1, d2, τ, and μ, the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable if the operator 𝔇 is stable, and there exist scalars δ1>0, δ2>0, symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,…,11) such that the following matrix inequalities hold: (44)[Πi1+1δ1e1PiHiHiTPie1T+δ1εεT(ΓT+1δ1e1PiHiHiT)M*1δ1MHiHiTM-M]<0,(45)[Πi2+1δ2e1PiHiHiTPie1T+δ2εεT(ΓT+1δ2e1PiHiHiT)M*1δ2MHiHiTM-M]<0, where εT=EAie1T+EBie2T, Πi1, Πi2, Γ, and M have been defined in Theorem 10.Proof. On the basis of Theorem10, we directly replace Ai and Bi with Ai+ΔAi(t), Bi+ΔBi(t) and obtain (46)Πi1(t)+ΓT(t)MΓ(t)<0,(47)Πi2(t)+ΓT(t)MΓ(t)<0, where (48)Πi1(t)=Πi1+e1[ΔAiT(t)Pi+PiΔAi(t)]e1T+e1PiΔBi(t)e2T+e2ΔBiT(t)Pie1T,Πi2(t)=Πi2+e1[ΔAiT(t)Pi+PiΔAi(t)]e1T+e1PiΔBi(t)e2T+e2ΔBiT(t)Pie1T. Considering (46) and combining the uncertainties condition (7) by Lemma 9, we have (49)[Πi1ΓTMMΓ-M]+[e1Pi0M0]HiFi(t)εT+εFiT(t)HiT[Pie1TM00]<0. With (8), by Lemma 8 from (49), we obtain (50)[Πi1ΓTMMΓ-M]+1δ1[e1Pi0M0]HiHiT[Pie1TM00]+δ1[εεT000]<0. Obviously, (50) is equivalent to (44). Similarly, considering (46) and following the same procedure, we can get (45). Finally, following from the latter proof of Theorem 10, we know that the uncertain neutral systems with Markovian jump parameters and time-varying delay as described by (3) are stochastically stable. This completes the proof.Remark 15. It should be noted that (44) and (45) can be viewed as linear matrix inequalities by introducing new variables. 
That is, define matrices (51)Pi(1)=1δ1PiHiHiTPi,Pi(2)=1δ2PiHiHiTPi,PiM(1)=1δ1PiHiHiTM,PiM(2)=1δ2PiHiHiTM,M(1)=1δ1MHiHiTM,M(2)=1δ2MHiHiTM, where Hi, i∈S, are known constant matrices and have been defined in (7). Then (44) and (45) can be easily solved by the LMI Toolbox in MATLAB.Remark 16. It should be mentioned that Theorem14 is an extension of Theorem 10 to uncertain neutral Markovian jump systems with interval time-varying delays. It provides a stochastic delay-range-dependent stability criterion for (3) and it will be verified to be less conservative than some existing ones in Section 4.Consistent with the nominal systems, in the uncertain case, we have the following result as a corollary if the information on the delay derivativeμ may not be available. The corollary is also obtained by setting R3=0 in Theorem 14.Corollary 17. For the given finite setS of modes with transition rates matrix, scalars d1, d2, and τ, the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable if the operator 𝔇 is stable, and there exist scalars δ1>0, δ2>0, symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,4,5,…,11) such that the following symmetric matrix inequalities hold: (52)[Π~i1+1δ1e1PiHiHiTPie1T+δ1εεT(ΓT+1δ1e1PiHiHiT)M*1δ1MHiHiTM-M]<0,[Π~i2+1δ2e1PiHiHiTPie1T+δ2εεT(ΓT+1δ2e1PiHiHiT)M*1δ2MHiHiTM-M]<0, where Π~i1 and Π~i2 have been defined in Corollary 13, the remaining notations are the same as Theorem 14. ## 4. Numerical Examples In this section, numerical examples are given to show that the proposed theoretical results in this paper are effective and less conservative than some previous ones in the literature.Example 1. 
Consider the nominal system in the form of (9) described as follows: (53) x˙(t)-Cix˙(t-0.1)=Aix(t)+Bix(t-d(t)), where i∈S={1,2} and the mode switching is governed by the rate matrix [πij]2×2=[-5 5; 4 -4], which is described by Figure 1. The mode matrices in (54) are A1=[2 5; -2 -3], B1=[-0.3 0.5; -0.2 -0.3], C1=[-0.2 0; 0.1 -0.2], A2=[-5 -1.6; 2 -4], B2=[-0.3 0.5; -0.2 -0.3], C2=[-0.1 0.2; 0 -0.2]. Given the time-varying delay d(t)=0.5(2+sin³(t)), from the graph of d(t) with t∈[0,2π] in Figure 2, we easily obtain d1=0.5 and d2=1.5. In addition, we have d˙(t)=1.5sin²(t)cos(t), whose maximum is μ=√3/3. Figure 1: Operation modes of Example 1. Figure 2: Interval time-varying delay d(t) of Example 1. By Theorem 10, with the help of the LMI Toolbox in MATLAB, we solve (19) and obtain a group of solution matrices that guarantee the stochastic stability of system (53); for simplicity, we only list the matrices Pi, i∈S={1,2}, and Qj, j=1,2,…,5, in (55): P1=[1.4857 -0.3614; * 0.6329], P2=[2.0645 -0.3761; * 0.5086], Q1=[0.7342 -0.1546; * 0.2978], Q2=[0.4083 -0.1873; * 0.3652], Q3=[0.4165 -0.2137; * 0.3056], Q4=[0.4576 -0.2539; * 0.3684], Q5=[0.2673 -0.0845; * 0.1766]. Therefore, the system (53) is determined to be stochastically stable by Theorem 10.

Example 2. As noted in the literature [32], with the abrupt variation in its structures and parameters, the partial element equivalent circuit (PEEC) model can be presented as a stochastic jump one. Then, a general form of the PEEC model is given by (3), where we assume that the neutral delay of the PEEC model is constant. Consider the stochastic neutral PEEC model described by the following equation: (56) x˙(t)-Cix˙(t-0.3)=(Ai+ΔA(t))x(t)+(Bi+ΔB(t))x(t-d(t)), where i∈S={1,2} and the mode switching is governed by the rate matrix [πij]2×2=[-4 4; 3 -3], which is described by Figure 3. The parameters in (57) are A1=[-5 0; 0 -6], B1=[-1.6 0; -1.8 -1.5], C1=0.5I, H1=[0.2; 0.2], A2=[-4 0; 0 -5], B2=[-2 0; -0.9 -1.2], C2=0.3I, H2=[0; -0.3], EA1=[0.2 0], EA2=[0 0.2], EB1=[-0.3 0.3], EB2=[0.2 0.2].
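As a quick check that the Example 2 data satisfy the standing assumptions (Assumption 1 and the spectral-norm condition ∥Ci∥<1 needed for stability of the operator 𝔇), the 2×2 entries of (57) read row-wise give:

```python
import numpy as np

# Mode matrices of Example 2, equation (57), entries read row-wise
A1 = np.array([[-5.0, 0.0], [0.0, -6.0]])
A2 = np.array([[-4.0, 0.0], [0.0, -5.0]])
C1 = 0.5 * np.eye(2)
C2 = 0.3 * np.eye(2)

# Assumption 1: each A_i is Hurwitz (all eigenvalues in the open left half-plane)
for A in (A1, A2):
    assert np.linalg.eigvals(A).real.max() < 0

# Stability of the operator D requires the spectral norm ||C_i|| < 1
for C in (C1, C2):
    assert np.linalg.norm(C, 2) < 1

print("Assumption 1 and ||C_i|| < 1 hold for both modes")
```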
Given ∥Fi(t)∥<1 and the time-varying delay d(t)=0.5(2+cos³(t)), from the graph of d(t) with t∈[0,2π] in Figure 4, we easily obtain d1=0.5 and d2=1.5. In addition, we have d˙(t)=-1.5cos²(t)sin(t), whose maximum is μ=√3/3. Figure 3: Operation modes of Example 2. Figure 4: Interval time-varying delay d(t) of Example 2. By Theorem 14, with the help of the LMI Toolbox in MATLAB, we solve (44) and (45) and obtain a group of solution matrices that guarantee the stochastic stability of system (56); for simplicity, we only list the matrices Pi, i∈S={1,2}, and Qj, j=1,2,…,5, in (58): P1=[2.4327 -0.4713; * 0.7846], P2=[2.7685 -0.4617; * 0.7432], Q1=[0.5122 -0.1558; * 0.3976], Q2=[0.6083 -0.1898; * 0.3976], Q3=[0.3164 -0.2058; * 0.4751], Q4=[0.2563 -0.1584; * 0.3476], Q5=[0.1574 -0.0713; * 0.1798]. Therefore, according to Theorem 14, the uncertain neutral PEEC system presented by (3) is stochastically stable.

Example 3. In the study of practical electrical circuit systems, a small test circuit consisting of a partial element equivalent circuit (PEEC) was considered in [33], which can be described in the following form: (59) x˙(t)-Cx˙(t-τ)=Ax(t)+Bx(t-d). Compared with (9), (59) can be regarded as the case i∈S={1} and d(t)=d. So we have d1=d2=d and μ=0, and we utilize Theorem 10 to compute the maximum discrete delay for system stability.

Remark 18. It should be pointed out that we required d1≠d2 and d1≠0 in order to organize this paper conveniently. But from the results of the theorems and corollaries in this paper, we know that they are applicable to many special cases, such as d(t)≡d, d1=d2, or d1=0, τ=0. Actually, we just need to delete the corresponding integral terms in the Lyapunov functional and obtain homologous results. Consider (59) with the following parameters: (60) A=[-0.9 0.2; 0.1 -0.9], B=[-1.1 -0.2; -0.1 -1.1], C=[-0.2 0; 0.2 -0.1]. For given τ, by Theorem 10, the maximum d which satisfies the LMIs in (19) can be calculated by solving a quasiconvex optimization problem. This neutral system was considered in references [34–36].
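The delay bounds quoted in Examples 1 and 2 can be reproduced numerically. A short sketch for Example 2's d(t)=0.5(2+cos³(t)) (the sin³ delay of Example 1 behaves identically), confirming d1=0.5, d2=1.5, and the derivative bound μ=√3/3≈0.577:

```python
import numpy as np

# Time-varying delay of Example 2: d(t) = 0.5*(2 + cos^3(t)) over one period
t = np.linspace(0.0, 2.0 * np.pi, 200001)
d = 0.5 * (2.0 + np.cos(t) ** 3)
d_dot = -1.5 * np.cos(t) ** 2 * np.sin(t)   # analytic derivative of d(t)

d1, d2 = d.min(), d.max()                    # lower and upper delay bounds
mu = d_dot.max()                             # upper bound on d'(t)

assert abs(d1 - 0.5) < 1e-6 and abs(d2 - 1.5) < 1e-6
assert abs(mu - np.sqrt(3) / 3) < 1e-6       # sqrt(3)/3 ~ 0.5774
print(d1, d2, mu)
```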
The results on the maximum upper bound of d are compared in Table 1. From Table 1, we know that the maximum upper bound of the delay is d=2.3026 in this paper by setting τ=0.1, while the maximum upper bound is d=1.7100 for [36], d=2.1229 for [34], and d=2.2951 for [35]. The results are also given for τ=0.5 and τ=1, and it is found that the maximum upper bound in this paper is larger than those in [34–36]. So it can be demonstrated that the stability condition of Theorem 10 in this paper yields less conservative results than the previous ones.

Table 1: Maximum upper bound of d with different neutral delay τ.

| Methods | τ = 0.1 | τ = 0.5 | τ = 1 |
| --- | --- | --- | --- |
| He et al. [36] | 1.7100 | 1.6718 | 1.6543 |
| Han [34] | 2.1229 | 2.1229 | 2.1229 |
| Li et al. [35] | 2.2951 | 2.3471 | 2.3752 |
| Theorem 10 | 2.3026 | 2.3547 | 2.3835 |

Example 4. As a further consideration of Example 3, if we take the parameter uncertainties commonly existing in the modeling of a real system into account, a general form of the PEEC model is given by (61) x˙(t)-Cx˙(t-τ)=(A+ΔA(t))x(t)+(B+ΔB(t))x(t-d), where A, B, and C are given in Example 3 and the uncertain matrices ΔA(t) and ΔB(t) satisfy (62) ∥ΔA(t)∥≤κ, ∥ΔB(t)∥≤κ, κ≥0. Moreover, in the form of (7) and (8), we assume that (63) H=κI, EA=EB=I, 0≤κ≤1. Consider (61); for given τ and κ, by Theorem 14, the maximum upper bound of d which satisfies the LMIs in (44) and (45) can be calculated by solving a quasiconvex optimization problem. When τ=1.0, Table 2 gives the comparisons of the maximum allowed delay d for various parameters κ in different methods. From Table 2, provided that τ=1.0, we know that the maximum upper bound of the delay is d=1.5316 in this paper by setting κ=0.10, while the maximum upper bound is d=1.3864 for [37], d=1.4385 for [36], and d=1.5047 for [38]. The results are also given for κ=0.15, κ=0.20, and κ=0.25, and it is found that the maximum upper bound in this paper is larger than those in [36–38].
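For the uncertainty structure (63) of Example 4, any admissible perturbation ΔA(t)=HF(t)EA with ∥F(t)∥≤1 automatically satisfies the norm bound (62). A brief numeric confirmation (the sampled F matrices below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 0.2
n = 2
H = kappa * np.eye(n)          # H = kappa*I and E_A = E_B = I, as in (63)
E = np.eye(n)

# For any F(t) with ||F||_2 <= 1, DeltaA = H F E satisfies ||DeltaA||_2 <= kappa
for _ in range(100):
    F = rng.standard_normal((n, n))
    F /= max(1.0, np.linalg.norm(F, 2))   # project onto the unit spectral-norm ball
    dA = H @ F @ E
    assert np.linalg.norm(dA, 2) <= kappa + 1e-12

print("||DeltaA|| <= kappa for all sampled F")
```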
So it can be seen that the delay-range-dependent stability condition of Theorem 14 in this paper is less conservative than some earlier reported ones in the literature.

Table 2: Maximum upper bound of d with τ = 1.0 and different parameter κ.

| Methods | κ = 0.10 | κ = 0.15 | κ = 0.20 | κ = 0.25 |
| --- | --- | --- | --- | --- |
| Han [37] | 1.3864 | 1.2705 | 1.1607 | 1.0456 |
| He et al. [36] | 1.4385 | 1.3309 | 1.2396 | 1.1547 |
| Xu et al. [38] | 1.5047 | 1.4052 | 1.2998 | 1.2136 |
| Theorem 14 | 1.5316 | 1.4089 | 1.3028 | 1.2217 |

Example 5. In this example, to compare the stochastic stability result in Theorem 10 with those in [23, 39, 40], we consider the nominal system (9) with C(rt)=0 and d1=0; that is, the systems considered here are no longer of neutral type. System (9) is given the following parameters: (64) A1=[-3.49 0.81; -0.65 -3.27], A2=[-2.49 0.29; 1.34 -0.02], B1=[-0.86 -1.29; -0.68 -2.07], B2=[-2.83 0.50; -0.84 -1.01], C1=C2=0, Pij=[πij]2×2, i,j∈S={1,2}. As described previously, for given π22=-0.8, different values of π11, and different values of μ, by Theorem 10 the maximum d2 which satisfies the LMIs in (19) can be calculated by solving a quasiconvex optimization problem. Tables 3 and 4 give the contrastive results. From Table 3, provided that μ=0, we know that the maximum upper bound of the delay is d2=0.6853 in this paper by setting π11=-0.10, while the maximum upper bound is d2=0.5012 for [40], d2=0.5012 for [39], and d2=0.6797 for [23]. The results are also given for π11=-0.50, π11=-0.80, and π11=-1.00, and it is found that the maximum upper bound of the delay d2 in this paper is larger than those in [23, 39, 40]. So it can also be shown that the stochastic stability result in Theorem 10 is less conservative than the results in [23, 39, 40]. From Table 4, provided that μ=1.5, we know that the maximum upper bound of the delay is d2=0.3953 in this paper by setting π11=-0.10, while the methods in [39, 40] are not applicable to the case μ≥1, and the maximum upper bound is d2=0.3860 for [23].
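The π11 sweep in Tables 3 and 4 amounts to changing how long the chain dwells in mode 1. As a side computation (an illustration, not taken from the paper's tables), the stationary distribution of the two-mode chain with π22=-0.8 and π11=-0.10 follows directly from the generator:

```python
import numpy as np

# Generator of the two-mode Markov process: rows sum to zero,
# so pi_12 = -pi_11 and pi_21 = -pi_22
pi11, pi22 = -0.10, -0.8
Q = np.array([[pi11, -pi11],
              [-pi22, pi22]])

# Stationary distribution solves nu @ Q = 0 with nu summing to 1;
# for a 2-state chain this is nu = (pi_21, pi_12) / (pi_21 + pi_12)
nu = np.array([-pi22, -pi11])
nu = nu / nu.sum()

assert np.allclose(nu @ Q, 0.0)
print(nu)   # ~ [0.8889, 0.1111]: the chain spends most of its time in mode 1
```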
So it can be shown that Theorem 10 in this paper is less conservative and can be applied to time-varying delays without the requirement μ<1.

Table 3: Maximum upper bound of d2 with μ = 0 and different parameter π11.

| Methods | π11 = -0.10 | π11 = -0.50 | π11 = -0.80 | π11 = -1.00 |
| --- | --- | --- | --- | --- |
| Cao et al. [40] | 0.5012 | 0.4941 | 0.4915 | 0.4903 |
| Chen et al. [39] | 0.5012 | 0.4941 | 0.4915 | 0.4903 |
| Xu et al. [23] | 0.6797 | 0.5794 | 0.5562 | 0.5465 |
| Theorem 10 | 0.6853 | 0.5874 | 0.5625 | 0.5574 |

Table 4: Maximum upper bound of d2 with μ = 1.5 and different parameter π11.

| Methods | π11 = -0.10 | π11 = -0.50 | π11 = -0.80 | π11 = -1.00 |
| --- | --- | --- | --- | --- |
| Cao et al. [40] | — | — | — | — |
| Chen et al. [39] | — | — | — | — |
| Xu et al. [23] | 0.3860 | 0.3656 | 0.3487 | 0.3378 |
| Theorem 10 | 0.3953 | 0.3746 | 0.3502 | 0.3449 |

## 5. Conclusions

In this paper, some new delay-range-dependent conditions have been provided to guarantee the stochastic stability of neutral systems with Markovian jumping parameters and interval time-varying delays. A novel augmented Lyapunov-Krasovskii functional containing some triple-integral terms is constructed. By some integral inequalities and the nature of convex combination, some less conservative delay-range-dependent stochastic stability criteria are obtained. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of our results.

---

*Source: 101485-2013-07-10.xml*
**Title:** On Delay-Range-Dependent Stochastic Stability Conditions of Uncertain Neutral Delay Markovian Jump Systems
**Authors:** Xinghua Liu; Hongsheng Xi
**Journal:** Journal of Applied Mathematics (2013)
**Category:** Mathematical Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101485
--- ## Abstract The delay-range-dependent stochastic stability for uncertain neutral Markovian jump systems with interval time-varying delays is studied in this paper. The uncertainties under consideration are assumed to be time varying but norm bounded. To begin with the nominal systems, a novel augmented Lyapunov functional which contains some triple-integral terms is introduced. Then, by employing some integral inequalities and the nature of convex combination, some less conservative stochastic stability conditions are presented in terms of linear matrix inequalities without introducing any free-weighting matrices. Finally, numerical examples are provided to demonstrate the effectiveness and to show that the proposed results significantly improve the allowed upper bounds of the delay size over some existing ones in the literature. --- ## Body ## 1. Introduction Time delays are frequently encountered in various engineering systems, such as chemical or process control systems, networked control systems, and manufacturing systems. To sum up, delays can appear in the state, input, or output variables (retarded systems), as well as in the state derivative (neutral systems). In fact, neutral delay systems constitute a more general class than those of the retarded type because such systems can be found in places such as population ecology [1], distributed neural networks [2], heat exchangers, and robots in contact with rigid environments [3]. Since it is shown that the existence of delays in a dynamic system may result in instability, oscillations, or poor performances [3–5], the stability of time-delay systems has been an important problem of recurring interest for many years. Existing results on this topic can be roughly classified into two categories, namely, delay-independent criteria [6] and delay-dependent criteria, and it is generally recognized that the latter cases are less conservative. 
Actually, the stability of neutral time-delay systems proves to be a more complex issue as well as singular systems [7–9] because the systems involve the derivative of the delayed state. So considerable attention has been devoted to the problem of robust delay-independent stability or delay-dependent stability and stabilization via different approaches for linear neutral systems with delayed state input and parameter uncertainties. Results are mainly presented based on Lyapunov-Krasovskii (L-K) method; see, for example, [10–16] and the references therein. However, there is room for further investigation because the conservativeness of the neutral systems can be further reduced by a better technique.On the other hand, with the development of science and technology, many practical dynamics, for example, solar thermal central receivers, robotic manipulator systems, aircraft control systems, economic systems, and so on, experience abrupt changes in their structures, whose parameters are caused by phenomena such as component failures or repairs, changes in subsystem interconnections, and sudden environmental changes. This class of systems is more appropriate to be described as Markovian jump systems (MJSs), which can be regarded as a special class of hybrid systems with finite operation modes. The system parameters jump among finite modes, and the mode switching is governed by a Markov process to represent the abrupt variation in their structures and parameters. With so many applications in engineering systems, a great deal of attention has been paid to the stability analysis and controller synthesis for Markovian jump systems (MJSs) in recent years. Many researchers have made a lot of progress on Markovian jump delay systems and Markovian jump control theory; see, for example, [17–23] and references therein for more details. However, a few of these papers have considered the effect of delay on the stability or stabilization for the corresponding neutral systems. 
Besides, to the best of the authors’ knowledge, it seems that the problem of stochastic stability for neutral Markovian jumping systems with interval time-varying delays has not been fully investigated and it is very challenging. Motivated by the previous description, this paper investigates the stochastic stability of neutral Markovian jumping systems with interval time-varying delays to seek less conservative stochastic stability conditions than some previous ones.In order to simplify the treatment of the problem, in this paper, we first investigate the nominal systems and construct a new augmented Lyapunov functional containing some triple-integral terms to reduce conservativeness. By some integral inequalities and the nature of convex combination, the delay-range-dependent stochastic stability conditions are derived for the nominal neutral systems with Markovian jump parameters and interval time-varying delays. Then, the results are extended to the corresponding uncertain case on the basis of obtained conditions. In addition, these conditions are expressed in linear matrix inequalities (LMIs), which can be easily checked by utilizing the LMI Toolbox in MATLAB. 
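As a stand-in illustration of such a feasibility check (using the classical delay-free Lyapunov inequality AᵀP + PA < 0 rather than the paper's full LMI conditions, and plain NumPy in place of the MATLAB LMI Toolbox; the Hurwitz matrix A is borrowed from Example 3 later in the paper):

```python
import numpy as np

# For a Hurwitz A, the Lyapunov LMI  A^T P + P A < 0, P > 0  is feasible; a feasible
# P comes from solving the Lyapunov equation A^T P + P A = -I by vectorization:
# (I (x) A^T + A^T (x) I) vec(P) = -vec(I).
A = np.array([[-0.9, 0.2], [0.1, -0.9]])
n = A.shape[0]
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -np.eye(n).flatten()).reshape(n, n)
P = 0.5 * (P + P.T)               # symmetrize against round-off

assert np.linalg.eigvalsh(P).min() > 0                   # P > 0
assert np.linalg.eigvalsh(A.T @ P + P @ A).max() < 0     # the LMI holds strictly
print("Lyapunov LMI feasible for this A")
```

A dedicated semidefinite-programming solver (the LMI Toolbox, or CVX-style tools) is what one would use for the paper's full conditions; the sketch above only shows the shape of a feasibility check.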
Numerical examples are given to show the effectiveness and reduced conservativeness over some previous references.The main contributions of this paper can be summarized as follows: (1) the proposed Lyapunov functional contains some triple-integral terms which is very effective in the reduction of conservativeness, and has not been used in any of the existing literatures in the same context before; (2) the delay-range-dependent stability conditions are obtained in terms of LMIs without introducing any free-weighting matrices besides the Lyapunov matrices, which will reduce the number of variables and decrease the complexity of computation; (3) the proposed results are expressed in a new representation and proved to be less conservative than some existing ones.The remainder of this paper is organized as follows: Section 2 contains the problem statement and preliminaries; Section 3 presents the main results; Section 4 provides a numerical example to verify the effectiveness of the results; Section 5 draws a brief conclusion. ### 1.1. Notations In this paper, ℝn denotes the n dimensional Euclidean space and ℝm×n is for the set of all m×n matrices. The notation X<Y (X>Y), where X and Y are both symmetric matrices, means that X-Y is negative (positive) definite. I denotes the identity matrix with proper dimensions. λmax(min)(A) is the eigenvalue of matrix A with maximum (minimum) real part. For a symmetric block matrix, we use the sign * to denote the terms introduced by symmetry. ℰ stands for the mathematical expectation, and ∥v∥ is the Euclidean norm of vector v, ∥v∥=(vTv)1/2, while ∥A∥ is spectral norm of matrix A, ∥A∥=[λmax(ATA)]1/2. C([-ρ,0],ℝn) is the space of continuous function from [-ρ,0] to ℝn. In addition, if not explicitly stated, matrices are assumed to have compatible dimensions. ## 2. Problem Statement and Preliminaries Given a probability space {Ω,ℱ,P} where Ω is the sample space, ℱ is the algebra of events and P is the probability measure defined on ℱ. {rt,t≥0} is a homogeneous, finite-state Markovian process with right continuous trajectories and taking values in a finite set S={1,2,3,…,N}, with the mode transition probability matrix (1)P(rt+Δt=j∣rt=i)={πijΔt+o(Δt)i≠j,1+πiiΔt+o(Δt)i=j, where Δt>0, limΔt→0(o(Δt)/Δt)=0, and πij≥0(i,j∈S,i≠j) denote the transition rate from mode i to j. For any state or mode i∈S, we have (2)πii=-∑j=1,j≠iN‍πij.In this paper, we consider the following uncertain neutral systems with Markovian jump parameters and time-varying delay over the space {Ω,ℱ,P} as follows: (3)x˙(t)-C(rt)x˙(t-τ)=[A(rt)+ΔA(rt)]x(t)+[B(rt)+ΔB(rt)]x(t-d(t)),(4)x(s)=φ(s),rs=r0,s∈[-ρ,0], where x(t)∈ℝn is the system state and τ>0 is a constant neutral delay. It is assumed that the time-varying delay d(t) satisfies (5)0<d1≤d(t)≤d2,d˙(t)≤μ, where d1<d2 and μ≥0 are constant real values. The initial condition φ(s) is a continuously differentiable vector-valued function. The continuous norm of φ(s) is defined as (6)∥φ∥c=maxs∈[-ρ,0]|φ(s)|,ρ=max{τ,d2};A(rt)∈ℝn×n, B(rt)∈ℝn×n, and C(rt)∈ℝn×n are known mode-dependent constant matrices, while ΔA(rt)∈ℝn×n and ΔB(rt)∈ℝn×n are uncertainties.
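The transition mechanism (1)-(2) can be simulated directly: the chain stays in mode i for an exponentially distributed holding time with rate -πii, and then jumps to j≠i with probability πij/(-πii). A minimal sketch (the two-mode generator used here is the one appearing later in Example 1):

```python
import numpy as np

# Transition rate matrix of a two-mode chain; rows sum to zero as required by (2)
PI = np.array([[-5.0, 5.0],
               [4.0, -4.0]])

def sample_path(PI, r0, T, rng):
    """Sample one trajectory of the mode process {r_t} on [0, T]."""
    t, r, path = 0.0, r0, [(0.0, r0)]
    while t < T:
        rate = -PI[r, r]
        t += rng.exponential(1.0 / rate)       # Exp(rate) holding time in mode r
        probs = PI[r].clip(min=0.0) / rate     # jump distribution over modes j != r
        r = int(rng.choice(len(PI), p=probs))
        path.append((t, r))
    return path

rng = np.random.default_rng(0)
path = sample_path(PI, 0, 10.0, rng)
assert all(PI[r].sum() == 0 for r in range(2))   # valid generator: zero row sums
print(len(path), "mode switches sampled")
```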
For notational simplicity, when rt=i∈S, A(rt), ΔA(rt), B(rt), ΔB(rt), and C(rt) are, respectively, denoted as Ai, ΔAi, Bi, ΔBi, and Ci. Throughout this paper, the parametric matrix ∥Ci∥<1 and the admissible parametric uncertainties are assumed to satisfy the following condition: (7)[ΔAi(t)ΔBi(t)]=HiFi(t)[EAiEBi], where Hi, EAi, and EBi are known mode-dependent constant matrices with appropriate dimensions and Fi(t) is an unknown and time-varying matrix satisfying (8)FiT(t)Fi(t)≤I,∀t. Particularly, when we consider Fi(t)=0, we get the nominal systems which can be described as (9)x˙(t)-Cix˙(t-τ)=Aix(t)+Bix(t-d(t)).Before proceeding further, the following assumptions, definitions, and lemmas need to be introduced.Assumption 1. The system matrixAi(foralli∈S) is Hurwitz matrix with all the eigenvalues having negative real parts for each mode. The matrix Hi(foralli∈S) is chosen as a full row rank matrix.Assumption 2. The Markov process is irreducible, and the system modert is available at time t.With regard to neutral systems, the operator𝔇:C([-ρ,0],ℝn)→ℝn is defined to be (10)𝔇(xt)=x(t)-Cx(t-τ). Then, the stability of operator 𝔇 is defined as follows.Definition 3 (see [4]). The operator𝔇 is said to be stable if the homogeneous difference equation (11)𝔇(xt)=0,t≥0,x0=ψ∈{ϕ∈C([-ρ,0],ℝn):𝔇ϕ=0} is uniformly asymptotically stable. In order to guarantee the stability of the operator 𝔇, one has assumed that ∥Ci∥<1 as previosuly mentioned, which was introduced in [24].Definition 4 (see [25]). The systems which are described in (3) are said to be stochastically stable if there exists a positive constant Υ such that (12)ℰ{∫0∞‍∥x(rt,t)∥2dt∣φ(s),s∈[-ρ,0],r0}<Υ.Definition 5 (see [26]). 
In the Euclidean space{ℝn×S×R+}, where x(t)∈ℝn, rt∈S, and t∈R+, one introduces the stochastic Lyapunov-Krasovskii function of system (3) as V(x(t),rt=i,t>0)=V(xt,i,t), the infinitesimal generator satisfying (13)𝔏V(x(t),i,t)=limΔt→01Δt[ℰ{V(x(t+Δt),rt+Δt,t+Δt)∣x(t)=x,rt=i}ppppppppp-V(x(t),i,t)ℰ{V(x(t+Δt),rt+Δt,t+Δt)∣x(t)=x,rt=i}]=∂∂tV(x(t),i,t)+∂∂xV(x(t),i,t)x˙(t)+∑j=1N‍πijV(x(t),j,t).Lemma 6 (see [27, 28]). For any constant matrixH=HT>0 and scalars τ2>τ1>0 such that the following integrations are well defined, then (14)(a)-(τ2-τ1)∫t-τ2t-τ1‍xT(s)Hx(s)ds≤-[∫t-τ2t-τ1‍xT(s)ds]H[∫t-τ2t-τ1‍x(s)ds],(b)-12(τ22-τ12)∫-τ2-τ1‍∫t+θt‍xT(s)Hx(s)dsdθ≤-[∫-τ2-τ1‍∫t+θt‍xT(s)dsdθ]H[∫-τ2-τ1‍∫t+θt‍x(s)dsdθ].Lemma 7 (see [19]). Suppose that0≤τm≤τ(t)≤τM, Ξ1, Ξ2, and Ω are constant matrices of appropriate dimensions, then (15)(τ(t)-τm)Ξ1+(τM-τ(t))Ξ2+Ω<0 if and only if (τM-τm)Ξ1+Ω<0 and (τM-τm)Ξ2+Ω<0 hold.Lemma 8 (see [29]). For given matricesQ=QT, M, and N with appropriate dimensions, (16)Q+MF(t)N+NTFT(t)MT<0 for all F(t) satisfying FT(t)F(t)≤I if and only if there exists a scalar δ>0 such that (17)Q+δ-1MMT+δNNT<0.Lemma 9 (see [30]). Given constant matricesΩ1, Ω2, and Ω3, where Ω1=Ω1T and Ω2=Ω2T>0, then Ω1+Ω3TΩ2-1Ω3<0 if and only if (18)[Ω1Ω3T*-Ω2]<0or[-Ω2Ω3T*Ω1]<0. ## 3. Main Results In this section, we first consider the nominal systems described by (9) and extend to the uncertain case. The following theorems present sufficient conditions to guarantee the stochastic stability for the neutral systems with Markovian jump parameters and time-varying delays. ### 3.1. Stochastic Stability for the Nominal Systems Theorem 10. 
For the given finite setS of modes with transition rates matrix, scalars d1, d2, τ, and μ, the neutral systems with Markovian jump parameters and time-varying delays as described by (9) are stochastically stable if the operator 𝔇 is stable, and there exist symmetric positive matrices Pi>0(i∈S), Qj>0(j=1,2,3,4,5), and Rk>0(k=1,2,…,11) such that the following linear matrix inequalities hold: (19)Πi1+ΓTMΓ<0,Πi2+ΓTMΓ<0, where (20)Πi1=Πi0-2(e2-e4)R7(e2T-e4T)-(e3-e2)R7(e3T-e2T)-e8R9e8T-2e9R9e9T,Πi2=Πi0-(e2-e4)R7(e2T-e4T)-2(e3-e2)R7(e3T-e2T)-2e8R9e8T-e9R9e9T, where (21)Πi0=e1Ye1T-e2[(1-μ)R3]e2T+e1PiBie2T+e2BiTPie1T+e1PiCie11T+e11CiTPie1T+e3(R2+R3-R1)e3T-e4R2e4T+e5(R5-R4)e5T-e6R5e6T-e7R8e7T-e10Q1e10T-e11Q2e11T-e12Q3e12T-(e1-e10)Q4(e1T-e10T)-(e1-e3)R6(e1T-e3T)-(τe1-e12)Q5(τe1T-e12T)-(d1e1-e7)R10(d1e1T-e7T)-(d12e1-e8-e9)R11(d12e1T-e8T-e9T), where ei{i=1,2,…,12} are block entry matrices; for instance, (22)e2T=[0I0000000000],Y=AiTPi+PiAi+∑j=1N‍πijpj+Q1+τ2Q3+R1+d12R8+d122R9,M=Q2+τ2Q4+τ44Q5+R4+d12R6+d122R7+d144R10+dm2R11,Γ=[AiBi00000000Ci0],d12=d2-d1,dm=12(d22-d12).Proof. Construct the novel Lyapunov functional as follows:(23)V(x(t),i,t)=Vr(xt,i)+Vτ(xt,i)+Vd1(xt,i)+Vd2(xt,i)+Vd3(xt,i), where (24)Vr(xt,i)=xT(t)Pix(t),Vτ(xt,i)=∫t-τt‍xT(s)Q1x(s)ds+∫t-τt‍x˙T(s)Q2x˙(s)ds+∫-τ0‍∫t+θt‍xT(s)[τQ3]x(s)dsdθ+∫-τ0‍∫t+θt‍x˙T(s)[τQ4]x˙(s)dsdθ+∫-τ0‍∫θ0‍∫t+λt‍x˙T(s)[τ22Q5]x˙(s)dsdλdθ,Vd1(xt,i)=∫t-d1t‍xT(s)R1x(s)ds+∫t-d2t-d1‍xT(s)R2x(s)ds+∫t-d(t)t-d1‍xT(s)R3x(s)ds+∫t-d1t‍x˙T(s)R4x˙(s)ds+∫t-d2t-d1‍x˙T(s)R5x˙(s)ds,Vd2(xt,i)=∫-d10‍∫t+θt‍x˙T(s)[d1R6]x˙(s)dsdθ+∫-d2-d1‍∫t+θt‍x˙T(s)[d12R7]x˙(s)dsdθ+∫-d10‍∫t+θt‍xT(s)[d1R8]x(s)dsdθ+∫-d2-d1‍∫t+θt‍xT(s)[d12R9]x(s)dsdθ,Vd3(xt,i)=∫-d10‍∫θ0‍∫t+λt‍x˙T(s)[d122R10]x˙(s)dsdλdθ+∫-d2-d1‍∫θ0‍∫t+λt‍x˙T(s)[dmR11]x˙(s)dsdλdθ. 
From Definition 5, taking 𝔏 as the infinitesimal generator along the trajectory of system (9), we obtain from (23) and (24) the following equalities and inequalities:

(25) 𝔏V(x(t),i,t)=𝔏Vr(xt,i)+𝔏Vτ(xt,i)+𝔏Vd1(xt,i)+𝔏Vd2(xt,i)+𝔏Vd3(xt,i),

(26) 𝔏Vr(xt,i)=2[xT(t)AiT+xT(t-d(t))BiT+ẋT(t-τ)CiT]Pix(t)+∑_{j=1}^{N} πij xT(t)Pjx(t),

(27) 𝔏Vτ(xt,i)=xT(t)[Q1+τ²Q3]x(t)+ẋT(t)[Q2+τ²Q4+(τ⁴/4)Q5]ẋ(t)-xT(t-τ)Q1x(t-τ)-ẋT(t-τ)Q2ẋ(t-τ)-∫_{t-τ}^{t} xT(s)[τQ3]x(s)ds-∫_{t-τ}^{t} ẋT(s)[τQ4]ẋ(s)ds-∫_{-τ}^{0}∫_{t+θ}^{t} ẋT(s)[(τ²/2)Q5]ẋ(s)dsdθ,

𝔏Vd1(xt,i)=xT(t)R1x(t)+ẋT(t)R4ẋ(t)+xT(t-d1)[R2+R3-R1]x(t-d1)+ẋT(t-d1)[R5-R4]ẋ(t-d1)-ẋT(t-d2)R5ẋ(t-d2)-xT(t-d2)R2x(t-d2)-(1-ḋ(t))xT(t-d(t))R3x(t-d(t)),

𝔏Vd2(xt,i)=xT(t)[d1²R8+d12²R9]x(t)+ẋT(t)[d1²R6+d12²R7]ẋ(t)-∫_{t-d1}^{t} ẋT(s)[d1R6]ẋ(s)ds-∫_{t-d2}^{t-d1} ẋT(s)[d12R7]ẋ(s)ds-∫_{t-d1}^{t} xT(s)[d1R8]x(s)ds-∫_{t-d2}^{t-d1} xT(s)[d12R9]x(s)ds,

𝔏Vd3(xt,i)=ẋT(t)[(d1⁴/4)R10+dm²R11]ẋ(t)-∫_{-d1}^{0}∫_{t+θ}^{t} ẋT(s)[(d1²/2)R10]ẋ(s)dsdθ-∫_{-d2}^{-d1}∫_{t+θ}^{t} ẋT(s)[dmR11]ẋ(s)dsdθ.

Let us define

(28) ξ(t)=col{x(t), x(t-d(t)), x(t-d1), x(t-d2), ẋ(t-d1), ẋ(t-d2), ∫_{t-d1}^{t} x(s)ds, ∫_{t-d(t)}^{t-d1} x(s)ds, ∫_{t-d2}^{t-d(t)} x(s)ds, x(t-τ), ẋ(t-τ), ∫_{t-τ}^{t} x(s)ds}.

Applying (a) of Lemma 6, we obtain

(29) -∫_{t-τ}^{t} xT(s)[τQ3]x(s)ds ≤ -[∫_{t-τ}^{t} xT(s)ds]Q3[∫_{t-τ}^{t} x(s)ds] = -ξT(t)e12Q3e12Tξ(t).

Following the same procedure, we also obtain the inequalities

(30) -∫_{t-τ}^{t} ẋT(s)[τQ4]ẋ(s)ds ≤ -ξT(t)(e1-e10)Q4(e1T-e10T)ξ(t),
-∫_{t-d1}^{t} ẋT(s)[d1R6]ẋ(s)ds ≤ -ξT(t)(e1-e3)R6(e1T-e3T)ξ(t),
-∫_{t-d1}^{t} xT(s)[d1R8]x(s)ds ≤ -[∫_{t-d1}^{t} xT(s)ds]R8[∫_{t-d1}^{t} x(s)ds].

Applying (b) of Lemma 6, we have

(31) -∫_{-τ}^{0}∫_{t+θ}^{t} ẋT(s)[(τ²/2)Q5]ẋ(s)dsdθ ≤ -[∫_{-τ}^{0}∫_{t+θ}^{t} ẋT(s)dsdθ]Q5[∫_{-τ}^{0}∫_{t+θ}^{t} ẋ(s)dsdθ] = -[τxT(t)-∫_{t-τ}^{t} xT(s)ds]Q5[τx(t)-∫_{t-τ}^{t} x(s)ds] = -ξT(t)(τe1-e12)Q5(τe1T-e12T)ξ(t).

Then the following inequalities are obtained by the same technique:

(32) -∫_{-d1}^{0}∫_{t+θ}^{t} ẋT(s)[(d1²/2)R10]ẋ(s)dsdθ ≤ -ξT(t)(d1e1-e7)R10(d1e1T-e7T)ξ(t),
-∫_{-d2}^{-d1}∫_{t+θ}^{t} ẋT(s)[dmR11]ẋ(s)dsdθ ≤ -ξT(t)(d12e1-e8-e9)R11(d12e1T-e8T-e9T)ξ(t).
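The single-integral bound of Lemma 6(a), as used in (29) and (30), can be sanity-checked numerically on a sample trajectory. In the sketch below (weight H and trajectory chosen arbitrarily, with a plain Riemann sum standing in for the integrals), the quadratic-integral side dominates the Jensen side exactly as the lemma asserts:

```python
import numpy as np

# Arbitrary positive definite weight and sample trajectory x(s) in R^2.
H = np.array([[2.0, 0.3], [0.3, 1.0]])
tau1, tau2 = 0.5, 1.5
s = np.linspace(-tau2, -tau1, 2000, endpoint=False)  # grid on [t - tau2, t - tau1], t = 0
ds = (tau2 - tau1) / len(s)
x = np.stack([np.sin(3 * s), np.cos(2 * s)])          # one column of x per grid point

# (tau2 - tau1) * integral of x^T H x  versus  [integral of x]^T H [integral of x].
quad_side = (tau2 - tau1) * np.einsum('in,ij,jn->', x, H, x) * ds
ix = x.sum(axis=1) * ds
jensen_side = ix @ H @ ix

print(quad_side >= jensen_side)  # the Jensen-type bound holds on this sample
```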
Let λ(t)=(d(t)-d1)/d12; then we have

(33) -∫_{t-d2}^{t-d1} ẋT(s)[d12R7]ẋ(s)ds = -d12∫_{t-d2}^{t-d(t)} ẋT(s)R7ẋ(s)ds - d12∫_{t-d(t)}^{t-d1} ẋT(s)R7ẋ(s)ds = -(d2-d(t))∫_{t-d2}^{t-d(t)} ẋT(s)R7ẋ(s)ds - (d(t)-d1)∫_{t-d2}^{t-d(t)} ẋT(s)R7ẋ(s)ds - (d(t)-d1)∫_{t-d(t)}^{t-d1} ẋT(s)R7ẋ(s)ds - (d2-d(t))∫_{t-d(t)}^{t-d1} ẋT(s)R7ẋ(s)ds ≤ -ξT(t)(e2-e4)R7(e2T-e4T)ξ(t) - ((d(t)-d1)/(d2-d(t)))ξT(t)(e2-e4)R7(e2T-e4T)ξ(t) - ξT(t)(e3-e2)R7(e3T-e2T)ξ(t) - ((d2-d(t))/(d(t)-d1))ξT(t)(e3-e2)R7(e3T-e2T)ξ(t) ≤ -ξT(t)(e2-e4)R7(e2T-e4T)ξ(t) - λ(t)ξT(t)(e2-e4)R7(e2T-e4T)ξ(t) - ξT(t)(e3-e2)R7(e3T-e2T)ξ(t) - (1-λ(t))ξT(t)(e3-e2)R7(e3T-e2T)ξ(t) = -(1+λ(t))ξT(t)(e2-e4)R7(e2T-e4T)ξ(t) - (2-λ(t))ξT(t)(e3-e2)R7(e3T-e2T)ξ(t).

Consistent with the technique of (33), we obtain

(34) -∫_{t-d2}^{t-d1} xT(s)[d12R9]x(s)ds ≤ -(2-λ(t))ξT(t)e8R9e8Tξ(t) - (1+λ(t))ξT(t)e9R9e9Tξ(t).

Considering

(35) ẋT(t)Mẋ(t)=(Aix(t)+Bix(t-d(t))+Ciẋ(t-τ))T M (Aix(t)+Bix(t-d(t))+Ciẋ(t-τ))=ξT(t)ΓTMΓξ(t),

where M and Γ have been defined before, and substituting the equalities and inequalities (26)–(35) into (25), we finally get

(36) 𝔏V(x(t),i,t) ≤ ξT(t)[λ(t)Πi1+(1-λ(t))Πi2+ΓTMΓ]ξ(t).

Since 0≤λ(t)≤1, by utilizing Lemma 7, we know that λ(t)Πi1+(1-λ(t))Πi2+ΓTMΓ<0 is equivalent to (19). So we choose

(37) β=max_{i∈S, λ(t)∈[0,1]} λmax[λ(t)Πi1+(1-λ(t))Πi2+ΓTMΓ].

Then β<0 and

(38) 𝔏V(x(t),i,t) ≤ β∥ξ(t)∥² ≤ β∥x(t)∥².

According to (38) and Dynkin's formula [31], we obtain

(39) ℰ{V(x(t),i,t)}-V(x0,r0) ≤ βℰ{∫_{0}^{t} ∥x(s)∥²ds}.

Letting t→∞, we have

(40) lim_{t→∞} ℰ{∫_{0}^{t} ∥x(s)∥²ds} ≤ (-β)⁻¹V(x0,r0).

From Definition 4, we know that the systems described by (9) are stochastically stable. This completes the proof.

Remark 11. Theorem 10 provides a delay-range-dependent stochastic stability criterion for nominal neutral systems with interval time-varying delays and Markovian jump parameters as described by (9). By utilizing a new Lyapunov functional, a less conservative criterion is obtained in terms of LMIs, as verified in Section 4.

Remark 12.
In the context of stochastic stability for neutral systems with Markovian jumping parameters and time-varying delays, this type of augmented Lyapunov functional has not been used in the existing literature. Compared with existing Lyapunov functionals, the proposed one (23) contains some triple-integral terms, which, as shown in [28], are very effective in reducing conservativeness. Besides, the information on the lower bound of the delay is fully used in the Lyapunov functional by introducing terms such as ∫_{t-d2}^{t-d1} xT(s)R2x(s)ds and ∫_{t-d(t)}^{t-d1} xT(s)R3x(s)ds.

In many circumstances, information on the delay derivative may not be available; that is, μ is often unknown in real systems. So we give the following corollary, which can be obtained from Theorem 10 by setting R3=0.

Corollary 13. For the given finite set S of modes with transition rates matrix and scalars d1, d2, and τ, the neutral systems with Markovian jump parameters and time-varying delays as described by (9) are stochastically stable if the operator 𝔇 is stable and there exist symmetric positive matrices Pi>0 (i∈S), Qj>0 (j=1,2,3,4,5), and Rk>0 (k=1,2,4,5,…,11) such that the following linear matrix inequalities hold:

(41) Π~i1+ΓTMΓ<0, Π~i2+ΓTMΓ<0,

where

(42) Π~i1=Π~i0-2(e2-e4)R7(e2T-e4T)-(e3-e2)R7(e3T-e2T)-e8R9e8T-2e9R9e9T,
Π~i2=Π~i0-(e2-e4)R7(e2T-e4T)-2(e3-e2)R7(e3T-e2T)-2e8R9e8T-e9R9e9T,

with

(43) Π~i0=e1Ye1T+e1PiBie2T+e2BiTPie1T+e1PiCie11T+e11CiTPie1T+e3(R2-R1)e3T-e4R2e4T+e5(R5-R4)e5T-e6R5e6T-e7R8e7T-e10Q1e10T-e11Q2e11T-e12Q3e12T-(e1-e10)Q4(e1T-e10T)-(e1-e3)R6(e1T-e3T)-(τe1-e12)Q5(τe1T-e12T)-(d1e1-e7)R10(d1e1T-e7T)-(d12e1-e8-e9)R11(d12e1T-e8T-e9T),

and the other notations are the same as in Theorem 10.

### 3.2. Stochastic Stability for the Uncertain Neutral Markovian Jump Systems

In this subsection, we consider the uncertain case described by (3).
Based on Theorem 10, we obtain the following theorem, which guarantees the stochastic stability of the uncertain neutral systems with interval time-varying delays and Markovian jump parameters.

Theorem 14. For the given finite set S of modes with transition rates matrix and scalars d1, d2, τ, and μ, the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable if the operator 𝔇 is stable and there exist scalars δ1>0, δ2>0 and symmetric positive matrices Pi>0 (i∈S), Qj>0 (j=1,2,3,4,5), and Rk>0 (k=1,2,…,11) such that the following matrix inequalities hold:

(44) [Πi1+(1/δ1)e1PiHiHiTPie1T+δ1εεT, (ΓT+(1/δ1)e1PiHiHiT)M; *, (1/δ1)MHiHiTM-M] < 0,

(45) [Πi2+(1/δ2)e1PiHiHiTPie1T+δ2εεT, (ΓT+(1/δ2)e1PiHiHiT)M; *, (1/δ2)MHiHiTM-M] < 0,

where εT=EAie1T+EBie2T and Πi1, Πi2, Γ, and M have been defined in Theorem 10.

Proof. On the basis of Theorem 10, we directly replace Ai and Bi with Ai+ΔAi(t) and Bi+ΔBi(t) and obtain

(46) Πi1(t)+ΓT(t)MΓ(t)<0,

(47) Πi2(t)+ΓT(t)MΓ(t)<0,

where

(48) Πi1(t)=Πi1+e1[ΔAiT(t)Pi+PiΔAi(t)]e1T+e1PiΔBi(t)e2T+e2ΔBiT(t)Pie1T,
Πi2(t)=Πi2+e1[ΔAiT(t)Pi+PiΔAi(t)]e1T+e1PiΔBi(t)e2T+e2ΔBiT(t)Pie1T.

Considering (46) and combining the uncertainty condition (7) by Lemma 9, we have

(49) [Πi1, ΓTM; MΓ, -M] + [e1Pi; M]HiFi(t)[εT, 0] + [ε; 0]FiT(t)HiT[Pie1T, M] < 0.

With (8), by Lemma 8 from (49), we obtain

(50) [Πi1, ΓTM; MΓ, -M] + (1/δ1)[e1Pi; M]HiHiT[Pie1T, M] + δ1[εεT, 0; 0, 0] < 0.

Obviously, (50) is equivalent to (44). Similarly, considering (47) and following the same procedure, we get (45). Finally, following the latter part of the proof of Theorem 10, we conclude that the uncertain neutral systems with Markovian jump parameters and time-varying delay as described by (3) are stochastically stable. This completes the proof.

Remark 15. It should be noted that (44) and (45) can be viewed as linear matrix inequalities by introducing new variables.
That is, defining the matrices

(51) Pi(1)=(1/δ1)PiHiHiTPi, Pi(2)=(1/δ2)PiHiHiTPi, PiM(1)=(1/δ1)PiHiHiTM, PiM(2)=(1/δ2)PiHiHiTM, M(1)=(1/δ1)MHiHiTM, M(2)=(1/δ2)MHiHiTM,

where Hi, i∈S, are the known constant matrices defined in (7), the inequalities (44) and (45) can be easily solved by the LMI Toolbox in MATLAB.

Remark 16. It should be mentioned that Theorem 14 is an extension of Theorem 10 to uncertain neutral Markovian jump systems with interval time-varying delays. It provides a stochastic delay-range-dependent stability criterion for (3), which is verified in Section 4 to be less conservative than some existing ones.

Consistent with the nominal case, in the uncertain case we have the following corollary for when information on the delay derivative μ is not available. The corollary is obtained by setting R3=0 in Theorem 14.

Corollary 17. For the given finite set S of modes with transition rates matrix and scalars d1, d2, and τ, the uncertain neutral systems with Markovian jump parameters and time-varying delays as described by (3) are stochastically stable if the operator 𝔇 is stable and there exist scalars δ1>0, δ2>0 and symmetric positive matrices Pi>0 (i∈S), Qj>0 (j=1,2,3,4,5), and Rk>0 (k=1,2,4,5,…,11) such that the following symmetric matrix inequalities hold:

(52) [Π~i1+(1/δ1)e1PiHiHiTPie1T+δ1εεT, (ΓT+(1/δ1)e1PiHiHiT)M; *, (1/δ1)MHiHiTM-M] < 0,
[Π~i2+(1/δ2)e1PiHiHiTPie1T+δ2εεT, (ΓT+(1/δ2)e1PiHiHiT)M; *, (1/δ2)MHiHiTM-M] < 0,

where Π~i1 and Π~i2 have been defined in Corollary 13, and the remaining notations are the same as in Theorem 14.
## 4. Numerical Examples

In this section, numerical examples are given to show that the proposed theoretical results in this paper are effective and less conservative than some previous ones in the literature.

Example 1.
Consider the nominal system in the form of (9) described as follows:

(53) ẋ(t)-Ciẋ(t-0.1)=Aix(t)+Bix(t-d(t)),

where i∈S={1,2} and the mode switching is governed by the rate matrix [πij]2×2=[-5 5; 4 -4], depicted in Figure 1, with

(54) A1=[2 5; -2 -3], B1=[-0.3 0.5; -0.2 -0.3], C1=[-0.2 0; 0.1 -0.2], A2=[-5 -1.6; 2 -4], B2=[-0.3 0.5; -0.2 -0.3], C2=[-0.1 0.2; 0 -0.2].

Given the time-varying delay d(t)=0.5(2+sin³t), from the graph of d(t) with t∈[0,2π] in Figure 2, we easily obtain d1=0.5 and d2=1.5. In addition, we have ḋ(t)=1.5sin²t cos t and its maximum μ=√3/3.

Figure 1: Operation modes of Example 1.

Figure 2: Interval time-varying delay d(t) of Example 1.

By Theorem 10, with the help of the LMI toolbox in MATLAB, we solve (19) and obtain a group of solution matrices that guarantee the stochastic stability of the system (53); for simplicity, we only list the matrices Pi, i∈S={1,2}, and Qj, j=1,2,…,5:

(55) P1=[1.4857 -0.3614; * 0.6329], P2=[2.0645 -0.3761; * 0.5086], Q1=[0.7342 -0.1546; * 0.2978], Q2=[0.4083 -0.1873; * 0.3652], Q3=[0.4165 -0.2137; * 0.3056], Q4=[0.4576 -0.2539; * 0.3684], Q5=[0.2673 -0.0845; * 0.1766].

Therefore, the system (9) is determined to be stochastically stable by Theorem 10.

Example 2. As noted in [32], with abrupt variations in its structures and parameters, the partial element equivalent circuit (PEEC) model can be presented as a stochastic jump model. A general form of the PEEC model is then given by (3), where we assume that the neutral delay of the PEEC model is constant. Consider the stochastic neutral PEEC model described by the following equation:

(56) ẋ(t)-Ciẋ(t-0.3)=(Ai+ΔA(t))x(t)+(Bi+ΔB(t))x(t-d(t)),

where i∈S={1,2} and the mode switching is governed by the rate matrix [πij]2×2=[-4 4; 3 -3], depicted in Figure 3, with

(57) A1=[-5 0; 0 -6], B1=[-1.6 0; -1.8 -1.5], C1=0.5I, H1=[0.2; 0.2], A2=[-4 0; 0 -5], B2=[-2 0; -0.9 -1.2], C2=0.3I, H2=[0; -0.3], EA1=[0.2 0], EA2=[0 0.2], EB1=[-0.3 0.3], EB2=[0.2 0.2].
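The switching signal rt of (56) is a two-state continuous-time Markov chain with the rate matrix above; its sample paths are what Figure 3 depicts. A minimal sketch of how such a mode trajectory can be generated (modes indexed 0 and 1 here; the sojourn time in mode i is exponential with rate -πii):

```python
import random

# Transition rate matrix of Example 2 (mode i leaves at rate -pi[i][i]).
pi = [[-4.0, 4.0], [3.0, -3.0]]

def simulate_modes(t_end, mode=0, seed=2014):
    """Sample path [(switch_time, mode), ...] of the 2-mode chain on [0, t_end]."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, mode)]
    while True:
        t += rng.expovariate(-pi[mode][mode])  # sojourn time in the current mode
        if t >= t_end:
            return path
        mode = 1 - mode                        # two modes: the jump target is the other one
        path.append((t, mode))

path = simulate_modes(5.0)
print(len(path) - 1, "switches; head of the path:", path[:4])
```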
Given ∥Fi(t)∥<1 and the time-varying delay d(t)=0.5(2+cos³t), from the graph of d(t) with t∈[0,2π] in Figure 4, we easily obtain d1=0.5 and d2=1.5. In addition, we have ḋ(t)=-1.5cos²t sin t and its maximum μ=√3/3.

Figure 3: Operation modes of Example 2.

Figure 4: Interval time-varying delay d(t) of Example 2.

By Theorem 14, with the help of the LMI toolbox in MATLAB, we solve (44) and (45) and obtain a group of solution matrices that guarantee the stochastic stability of the system (56); for simplicity, we only list the matrices Pi, i∈S={1,2}, and Qj, j=1,2,…,5:

(58) P1=[2.4327 -0.4713; * 0.7846], P2=[2.7685 -0.4617; * 0.7432], Q1=[0.5122 -0.1558; * 0.3976], Q2=[0.6083 -0.1898; * 0.3976], Q3=[0.3164 -0.2058; * 0.4751], Q4=[0.2563 -0.1584; * 0.3476], Q5=[0.1574 -0.0713; * 0.1798].

Therefore, according to Theorem 14, the uncertain neutral PEEC system presented by (3) is stochastically stable.

Example 3. In the study of practical electrical circuit systems, a small test circuit consisting of a partial element equivalent circuit (PEEC) was considered in [33]; it can be described in the following form:

(59) ẋ(t)-Cẋ(t-τ)=Ax(t)+Bx(t-d).

Compared with (9), (59) can be regarded as the case i∈S={1} and d(t)=d. So we have d1=d2=d and μ=0 and utilize Theorem 10 to compute the maximum discrete delay for system stability.

Remark 18. It should be pointed out that we required d1≠d2 and d1≠0 in order to organize this paper conveniently. But from the results of the theorems and corollaries in this paper, we know that they are applicable to many special cases, such as d(t)≡d, d1=d2, or d1=0, τ=0. Actually, we just need to delete the corresponding integral terms in the Lyapunov functional to obtain the homologous results. Consider (59) with the following parameters:

(60) A=[-0.9 0.2; 0.1 -0.9], B=[-1.1 -0.2; -0.1 -1.1], C=[-0.2 0; 0.2 -0.1].

For given τ, by Theorem 10, the maximum d which satisfies the LMIs in (19) can be calculated by solving a quasiconvex optimization problem. This neutral system was considered in references [34–36].
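Two quick sanity checks for Example 3 can be run before solving the LMIs: the operator 𝔇 is stable when the spectral radius of C is strictly below one, and stability for small d requires A+B to be Hurwitz (the d → 0 limit of (59)). Both hold for the data in (60), as the sketch below confirms:

```python
import numpy as np

# Parameters of the neutral system (60).
A = np.array([[-0.9, 0.2], [0.1, -0.9]])
B = np.array([[-1.1, -0.2], [-0.1, -1.1]])
C = np.array([[-0.2, 0.0], [0.2, -0.1]])

# Stability of the difference operator D: spectral radius of C strictly below 1.
rho_C = max(abs(np.linalg.eigvals(C)))

# Hurwitz test for A + B: all eigenvalues in the open left half-plane.
eigs_AB = np.linalg.eigvals(A + B)

print("rho(C) =", rho_C, "< 1:", rho_C < 1)
print("Re(eig(A+B)) =", eigs_AB.real, "Hurwitz:", bool(np.all(eigs_AB.real < 0)))
```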
The results for the maximum upper bound of d are compared in Table 1. From Table 1, the maximum upper bound of the delay obtained in this paper is d=2.3026 for τ=0.1, while it is d=1.7100 for [36], d=2.1229 for [34], and d=2.2951 for [35]. The results are also given for τ=0.5 and τ=1, and the maximum upper bound in this paper is again larger than those in [34–36]. This demonstrates that the stability condition of Theorem 10 yields less conservative results than the previous ones.

Table 1: Maximum upper bound of d for different neutral delays τ.

| Methods | τ = 0.1 | τ = 0.5 | τ = 1 |
| --- | --- | --- | --- |
| He et al. [36] | 1.7100 | 1.6718 | 1.6543 |
| Han [34] | 2.1229 | 2.1229 | 2.1229 |
| Li et al. [35] | 2.2951 | 2.3471 | 2.3752 |
| Theorem 10 | 2.3026 | 2.3547 | 2.3835 |

Example 4. Continuing Example 3, if we take into account the parameter uncertainties commonly present in the modeling of a real system, a general form of the PEEC model is given by

(61) ẋ(t)-Cẋ(t-τ)=(A+ΔA(t))x(t)+(B+ΔB(t))x(t-d),

where A, B, and C are given in Example 3 and the uncertain matrices ΔA(t) and ΔB(t) satisfy

(62) ∥ΔA(t)∥≤κ, ∥ΔB(t)∥≤κ, κ≥0.

Moreover, in the form of (7) and (8), we assume that

(63) H=κI, EA=EB=I, 0≤κ≤1.

Consider (61); for given τ and κ, by Theorem 14, the maximum upper bound of d which satisfies the LMIs in (44) and (45) can be calculated by solving a quasiconvex optimization problem. For τ=1.0, Table 2 compares the maximum allowed delay d for various parameters κ across different methods. From Table 2, with τ=1.0, the maximum upper bound of the delay obtained in this paper is d=1.5316 for κ=0.10, while it is d=1.3864 for [37], d=1.4385 for [36], and d=1.5047 for [38]. The results are also given for κ=0.15, κ=0.20, and κ=0.25, and the maximum upper bound in this paper is again larger than those in [36–38].
So it can be seen that the delay-range-dependent stability condition of Theorem 14 is less conservative than some earlier ones reported in the literature.

Table 2: Maximum upper bound of d with τ=1.0 and different parameters κ.

| Methods | κ = 0.10 | κ = 0.15 | κ = 0.20 | κ = 0.25 |
| --- | --- | --- | --- | --- |
| Han [37] | 1.3864 | 1.2705 | 1.1607 | 1.0456 |
| He et al. [36] | 1.4385 | 1.3309 | 1.2396 | 1.1547 |
| Xu et al. [38] | 1.5047 | 1.4052 | 1.2998 | 1.2136 |
| Theorem 14 | 1.5316 | 1.4089 | 1.3028 | 1.2217 |

Example 5. In this example, to compare the stochastic stability result of Theorem 10 with those in [23, 39, 40], we consider the nominal system (9) with Crt=0 and d1=0; that is, the systems considered here are no longer neutral delay systems. System (9) is given the following parameters:

(64) A1=[-3.49 0.81; -0.65 -3.27], A2=[-2.49 0.29; 1.34 -0.02], B1=[-0.86 -1.29; -0.68 -2.07], B2=[-2.83 0.50; -0.84 -1.01], C1=C2=0, with transition rate matrix [πij]2×2, i,j∈S={1,2}.

As described previously, for given π22=-0.8 and different values of π11 and μ, the maximum d2 which satisfies the LMIs in (19) of Theorem 10 can be calculated by solving a quasiconvex optimization problem. Tables 3 and 4 give the comparative results. From Table 3, with μ=0, the maximum upper bound of the delay obtained in this paper is d2=0.6853 for π11=-0.10, while it is d2=0.5012 for [40], d2=0.5012 for [39], and d2=0.6797 for [23]. The results are also given for π11=-0.50, π11=-0.80, and π11=-1.00, and the maximum upper bound of d2 in this paper is larger than those in [23, 39, 40], which shows that the stochastic stability result of Theorem 10 is less conservative than the results in [23, 39, 40]. From Table 4, with μ=1.5, the maximum upper bound of the delay obtained in this paper is d2=0.3953 for π11=-0.10 and d2=0.3860 for [23], while the methods in [39, 40] are not applicable to the case μ≥1.
So it can be shown that Theorem 10 is less conservative and can be applied to time-varying delays without the requirement μ<1.

Table 3: Maximum upper bound of d2 with μ=0 and different parameters π11.

| Methods | π11 = -0.10 | π11 = -0.50 | π11 = -0.80 | π11 = -1.00 |
| --- | --- | --- | --- | --- |
| Cao et al. [40] | 0.5012 | 0.4941 | 0.4915 | 0.4903 |
| Chen et al. [39] | 0.5012 | 0.4941 | 0.4915 | 0.4903 |
| Xu et al. [23] | 0.6797 | 0.5794 | 0.5562 | 0.5465 |
| Theorem 10 | 0.6853 | 0.5874 | 0.5625 | 0.5574 |

Table 4: Maximum upper bound of d2 with μ=1.5 and different parameters π11.

| Methods | π11 = -0.10 | π11 = -0.50 | π11 = -0.80 | π11 = -1.00 |
| --- | --- | --- | --- | --- |
| Cao et al. [40] | — | — | — | — |
| Chen et al. [39] | — | — | — | — |
| Xu et al. [23] | 0.3860 | 0.3656 | 0.3487 | 0.3378 |
| Theorem 10 | 0.3953 | 0.3746 | 0.3502 | 0.3449 |

## 5. Conclusions

In this paper, some new delay-range-dependent conditions have been provided to guarantee the stochastic stability of neutral systems with Markovian jumping parameters and interval time-varying delays. A novel augmented Lyapunov-Krasovskii functional containing some triple-integral terms is constructed. By means of some integral inequalities and the nature of convex combinations, less conservative delay-range-dependent stochastic stability criteria are obtained. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of our results.

---

*Source: 101485-2013-07-10.xml*
# Pattern-Reversal Visual Evoked Potentials Tests in Persons with Type 2 Diabetes Mellitus with and without Diabetic Retinopathy

**Authors:** Raghda S. Al-Najjar; Nehaya M. Al-Aubody; Salah Z. Al-Asadi; Majid Alabbood

**Journal:** Neurology Research International (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1014857

---

## Abstract

Background. Diabetic retinopathy (DR) is now widely recognized as a neurovascular rather than a microvascular diabetic complication, with an increasing need for enhanced detection approaches. The pattern-reversal visual evoked potentials (PRVEPs) test, an objective electrophysiological measure of optic nerve and retinal function, can be of great value in the detection of diabetic retinal changes. Objectives. To use two checkerboard sizes in PRVEP testing to detect neurological changes in persons with type 2 diabetes mellitus (T2DM) with and without clinically detected DR, and to compare the results according to the candidates' age and the duration and glycemic status of T2DM. Methods. This study included 50 candidates with T2DM and no clinically detected DR (group A), 50 candidates with T2DM and clinically detected early DR (group B), and 50 controls who were neither diabetic nor had any other medical or ophthalmic condition that might affect PRVEP test results. The PRVEPs were recorded in the ophthalmology consultant unit of Almawani Teaching Hospital. Monocular PRVEP testing of both eyes was done using large (60 min) and small (15 min) checks to measure N75 latency and P100 latency and amplitude. Results. There was a statistically significant P100 latency delay and P100 amplitude reduction in both groups A and B in comparison with the controls. The difference between groups A and B was also significant.
In both groups A and B, the proportion of abnormal P100 latency was higher than that of abnormal P100 amplitude, with higher abnormal proportions in the 15 min test. Conclusions. The PRVEP test detected neurological changes, mainly conductive alterations affecting mostly the foveal region, prior to any overt clinical DR changes, and these alterations were heightened by the presence of clinical DR changes.

---

## Body

## 1. Background

In the recent past, diabetic retinopathy (DR) was frequently categorized as a microvascular complication of diabetes mellitus (DM). In the last few years, however, DR has come to be recognized as a neurovascular impairment, or a sensory neuropathy subsequent to the neurovascular impairment [1]. It is well documented that hyperglycemia and its related metabolic abnormalities have a major harmful effect on the retinal neurovascular unit, including neuronal, vascular, glial, and immune cells, and not just a microvascular effect. This hypothesis opens a new window to manage DR [2]. Many studies have shown that electrophysiological procedures are sensitive tools for the early identification of diabetic neural alterations well before the clinical vascular changes become apparent on fundoscopy. However, their use in regular screening is still low, and they have received much less attention than tests for diabetic peripheral neuropathy [3, 4].

The visual evoked potentials (VEPs) test is the primary tool, and is superior to magnetic resonance imaging (MRI), for assessing the functional integrity of the anterior visual pathways [5]. The pattern-reversal VEPs (PRVEPs) test is the standard and ideal modality for most clinical uses, as it is less variable in timing and waveform than other VEP modalities. The use of large and small check sizes is recommended by the International Society for Clinical Electrophysiology of Vision (ISCEV) standards [6].
The large check size (60 min of arc) mainly stimulates the retinal neural elements responsible for peripheral vision (parafovea), while the small size (15 min of arc) mainly stimulates those responsible for central vision (fovea) [7, 8]. The most prominent component of the PRVEP waveform is the P100, a positive peak with relatively minimal variability. Increased P100 latency indicates a retinocortical conduction decrement, as occurs in demyelinating processes. On the other hand, P100 amplitude and waveform abnormalities may indicate axon loss in the visual pathway [9]. The aim of this study was to detect neurological changes in persons with type 2 diabetes mellitus (T2DM) with and without clinically detected DR, using two checkerboard sizes in PRVEP testing, and to compare the results according to the candidates' age and the duration and glycemic status of T2DM. ## 2. Subjects and Methods ### 2.1. Study Design This is a prospective study conducted in Basra Governorate, Iraq, from December 1, 2017, to October 1, 2018. All candidates were interviewed, and informed consent was taken from each of them. The study included 150 participants who attended the ophthalmology consultant unit in Almawani Teaching Hospital. The candidates' age was restricted to forty years and above at the time DM was first diagnosed, to limit the study to T2DM [10]. The candidates were divided into group A, which included 50 persons with T2DM and no clinically detected DR, and group B, which included 50 persons with T2DM and clinically detected mild-moderate nonproliferative DR (NPDR) [11] (Supplementary Figure 1), plus 50 candidates as the control group who were free from DM and had none of the exclusion criteria.
Both eyes of the controls and of group A were included, while only the eyes that had clinical features of mild-moderate NPDR [11] were included in group B. The PRVEPs were recorded using the RETI-port/scan 21 machine (Roland Consult, Brandenburg/Havel, Germany). Recording was done according to ISCEV standards [6], using a full-field pattern of black and white checks with a central red fixation point. The checkerboard stimulus was of two sizes: large (60 min of arc) and small (15 min of arc) checks. Monocular recording of both eyes was done using a single-channel gold-plated electrode, with a four-channel amplifier whose band-pass filters were set at 1–50 Hz. The contrast was 97%, the plot time was 300 ms, and the stimulus frequency was 1.53872 reversals per second. These test parameters were customized by the manufacturer and designed to measure the N75 latency, P100 latency, and P100 amplitude. In this study, we concentrate on the P100 component, as the P100 is a prominent feature with relatively minimal variability [6]. ### 2.2. Exclusion Criteria Candidates with significant ocular disease, such as severe NPDR, proliferative DR, macular disease, vitreous opacities, visually significant cataract, glaucoma, optic neuropathy, best-corrected visual acuity less than 20/20, or amblyopia, were excluded from the study. Also excluded was any medical illness that can affect PRVEP findings, such as multiple sclerosis, epilepsy, thyroid disease, type 1 DM (T1DM), a past history of head trauma or cerebrovascular accident, or uncontrolled hypertension (blood pressure (BP) above 140/90 mmHg). In addition, alcoholics, users of drugs such as heroin, morphine, cough syrups, pain killers, and sedatives (due to their negative impact on neural transmission) [12], and pregnant women were also excluded. ### 2.3. Data Collection Each candidate underwent a thorough history taking; BP, weight, height, fasting plasma glucose (FPG), and glycated hemoglobin (HbA1c) measurement; and a comprehensive ophthalmic examination including refraction and visual acuity, intraocular pressure (IOP), and anterior segment and fundus examinations after mydriasis. ### 2.4. Subjects and Testing Room Preparation Verbal consent was taken from all subjects, who were briefed about the procedure and instructed to fast the night before the tests and to avoid hair oils and cycloplegic drops. Subjects were seated comfortably in a stable position approximately 100 cm away from the monitor screen, with the tested eye in proper alignment with the central fixation point and focusing precisely on it during testing. Subjects with refractive errors were asked to wear their corrective glasses. The testing room was kept quiet and dimly lit, with no other operating instruments during the test. ### 2.5. Electrode Placement Electrodes were placed according to the International 10/20 system [6, 7]: the scalp sites were gently scrubbed with a piece of cotton and skin preparation gel, and the electrodes, coated with electrode paste, were placed with the active electrode at the occipital scalp (Oz), the reference electrode at the frontal scalp (Fz), and the ground at the vertex (Cz). The electrode impedance was checked and kept ≤5 kΩ, and the impedance difference among electrodes was kept ≤3 kΩ. ### 2.6. Statistical Analysis The data were analyzed using SPSS version 20. One-way analysis of variance (ANOVA) was used to test for significant differences among the three groups.
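As a minimal sketch of the analysis used in this section (one-way ANOVA followed by a pairwise least significant difference comparison), using made-up P100 latency values rather than the study's raw data, and with critical values taken from standard F and t tables:

```python
# Sketch of the reported analysis: one-way ANOVA across the three groups,
# followed by a pairwise least-significant-difference (LSD) comparison.
# The sample values below are illustrative, not the study's raw data.
import math

controls = [103.1, 104.0, 105.2, 104.8, 103.9, 104.5]
group_a = [108.2, 109.1, 107.9, 108.8, 108.4, 109.5]
group_b = [117.0, 118.2, 116.5, 117.8, 118.9, 116.9]
groups = [controls, group_a, group_b]

def mean(xs):
    return sum(xs) / len(xs)

# Between- and within-group sums of squares for one-way ANOVA.
grand = mean([x for g in groups for x in g])
ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ssw = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
f_stat = (ssb / df_between) / (ssw / df_within)

# Critical values from standard tables at alpha = 0.05:
# F(2, 15) = 3.68 for the ANOVA, t(15, two-sided) = 2.131 for the LSD.
F_CRIT, T_CRIT = 3.68, 2.131
msw = ssw / df_within

def lsd(g1, g2):
    """Least significant difference between two group means."""
    return T_CRIT * math.sqrt(msw * (1 / len(g1) + 1 / len(g2)))

diff_ab = abs(mean(group_a) - mean(group_b))
print(f"F = {f_stat:.1f} (critical 3.68)")
print(f"|mean A - mean B| = {diff_ab:.2f}, LSD = {lsd(group_a, group_b):.2f}")
```

The study itself ran these tests in SPSS; this sketch only mirrors the arithmetic behind the reported F and LSD comparisons, with a mean difference declared significant when it exceeds the LSD.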
Significant differences between each pair of groups were then evaluated with the post hoc Tukey test, with the least significant difference (LSD) reported, and a P value < 0.05 was considered statistically significant. The proportions of normal and abnormal results were estimated by comparison with the control means of this study; i.e., the longest normal P100 latency was calculated as the control mean + 2 SE, and the lowest normal P100 amplitude as the control mean − 2 SE. ## 3. Results The baseline characteristics of the participants are presented in Table 1.

Table 1. Baseline characteristics of the participants.
| Variable | Controls (N = 50) | Group A (N = 50) | Group B (N = 50) | *P* value |
|---|---|---|---|---|
| Age 40–60 yrs (mean ± SD) | 50.5 ± 3.6 | 54.7 ± 4.2 | 55.7 ± 4.1 | 0.001 |
| Age 40–60 yrs, N (%) | 31 (62%) | 30 (60%) | 27 (54%) | — |
| Age >60 yrs (mean ± SD) | 63.5 ± 3.3 | 64.1 ± 3.1 | 65.7 ± 4.5 | 0.022 |
| Age >60 yrs, N (%) | 19 (38%) | 20 (40%) | 23 (46%) | — |
| Sex (male/female) | 25/25 | 24/26 | 26/24 | — |
| Systolic BP (mmHg, mean ± SD) | 125.5 ± 11.3 | 122.8 ± 10.8 | 122 ± 10 | 0.08 |
| Diastolic BP (mmHg, mean ± SD) | 78.5 ± 8.1 | 77.2 ± 9.3 | 79.5 ± 9 | 0.182 |
| BMI (kg/m², mean ± SD) | 30.7 ± 5.4 | 29.4 ± 5.1 | 28.7 ± 3.8 | 0.01 |
| FPG (mg/dl) | 88.6 ± 9.9 | 163.8 ± 30.8 | 177.6 ± 34 | 0.001 |
| HbA1c (%) | 4.07 ± 0.6 | 8.5 ± 1.7 | 9.4 ± 2.8 | 0.001 |
| T2DM duration (yrs, mean ± SD) | — | 7.7 ± 9.7 | 9.7 ± 4.4 | 0.001 |

BP: blood pressure; BMI: body mass index; FPG: fasting plasma glucose; HbA1c: glycated hemoglobin.

The test results of both eyes are presented together without right/left discrimination, as there was no statistically significant difference in the mean values of the three parameters of either PRVEP test between the right and left eyes of each group. Additional tables show this in more detail (Supplementary Tables 1 and 2). Table 2 shows that, in both the 60 min and 15 min tests, the mean P100 latency was significantly longer and the mean P100 amplitude significantly lower in group B than in group A and the controls; the differences between group A and the controls were also significant. With regard to the mean N75 latency, no statistically significant difference was detected among the three studied groups.

Table 2. The PRVEP test results of both eyes in each group.
| Parameter | Controls (N = 100 eyes) | Group A (N = 100 eyes) | Group B (N = 76 eyes) | LSD | *P* values |
|---|---|---|---|---|---|
| **60 min PRVEP test** | | | | | |
| N75 latency (ms) | 68.3 ± 0.67 a | 69.4 ± 0.67 ab | 71 ± 0.96 b | 3.01 | P1 = 0.276, P2 = 0.006, P3 = 0.084 |
| P100 latency (ms) | 104.32 ± 0.6 c | 108.63 ± 0.58 b | 117.5 ± 0.9 a | 4.31 | P1 = 0.001, P2 = 0.001, P3 = 0.001 |
| P100 amplitude (μV) | 12.6 ± 0.5 a | 10.4 ± 0.46 b | 8.2 ± 0.46 c | 2.14 | P1 = 0.001, P2 = 0.001, P3 = 0.003 |
| **15 min PRVEP test** | | | | | |
| N75 latency (ms) | 82.57 ± 0.63 | 81.8 ± 1.1 | 79.6 ± 1.3 | — | P1 = 0.615, P2 = 0.053, P3 = 0.141 |
| P100 latency (ms) | 110.4 ± 0.54 c | 121.5 ± 0.58 b | 127.2 ± 0.45 a | 5.7 | P1 = 0.001, P2 = 0.001, P3 = 0.001 |
| P100 amplitude (μV) | 15.35 ± 0.73 a | 11 ± 0.54 b | 7.7 ± 0.55 c | 3.05 | P1 = 0.001, P2 = 0.001, P3 = 0.001 |

Values are expressed as mean ± SE. Different letters represent a significant difference at P < 0.05; LSD: least significant difference among the three groups. P1 = P value between controls and group A; P2 = P value between controls and group B; P3 = P value between group A and group B.

By calculating the upper limit of normal P100 latency for the 60 min test (105.52) and for the 15 min test (111.48), and the lower limit of normal P100 amplitude for the 60 min test (11.6) and for the 15 min test (13.86), we can use these as the cutoff points between normal and abnormal results. The proportions of normal and abnormal P100 latency and amplitude for both tests are shown in Table 3.

Table 3. The proportions of normal and abnormal PRVEP test results.

| Parameter | Result | Controls | Group A | Group B |
|---|---|---|---|---|
| 60 min test, P100 latency | Normal | 58 (58%) | 24 (24%) | 5 (6.6%) |
| | Abnormal | 42 (42%) | 76 (76%) | 71 (93.4%) |
| 60 min test, P100 amplitude | Normal | 51 (51%) | 39 (39%) | 15 (19.7%) |
| | Abnormal | 49 (49%) | 61 (61%) | 61 (80.3%) |
| 15 min test, P100 latency | Normal | 50 (50%) | 4 (4%) | 0 (0%) |
| | Abnormal | 50 (50%) | 96 (96%) | 76 (100%) |
| 15 min test, P100 amplitude | Normal | 49 (49%) | 24 (24%) | 8 (10.5%) |
| | Abnormal | 51 (51%) | 76 (76%) | 68 (89.5%) |

As the ISCEV standards recommend, adult age ranges from 18 to 60 years, and older than 60 years is considered elderly and compared separately [6].
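As a minimal sketch (in Python, not part of the study's software) of how the cutoff points above follow from the control means and standard errors in Table 2, with an illustrative classification helper:

```python
# Cutoffs for "normal" results, derived from the control means and standard
# errors (SE) reported in Table 2: longest normal P100 latency = mean + 2*SE,
# lowest normal P100 amplitude = mean - 2*SE.
CONTROLS = {
    # test: (P100 latency mean, latency SE, P100 amplitude mean, amplitude SE)
    "60min": (104.32, 0.6, 12.6, 0.5),
    "15min": (110.4, 0.54, 15.35, 0.73),
}

def cutoffs(test):
    """Return (max normal latency, min normal amplitude) for a check size."""
    lat_mean, lat_se, amp_mean, amp_se = CONTROLS[test]
    return round(lat_mean + 2 * lat_se, 2), round(amp_mean - 2 * amp_se, 2)

def classify(test, latency_ms, amplitude_uv):
    """Label one eye's result per parameter (illustrative helper)."""
    max_latency, min_amplitude = cutoffs(test)
    return {
        "latency": "normal" if latency_ms <= max_latency else "abnormal",
        "amplitude": "normal" if amplitude_uv >= min_amplitude else "abnormal",
    }

print(cutoffs("60min"))  # (105.52, 11.6)
print(classify("60min", latency_ms=108.6, amplitude_uv=10.4))
```

Note that the 15 min amplitude cutoff computed this way is 13.89 (15.35 − 2 × 0.73), slightly different from the reported 13.86, presumably due to rounding of the reported SE.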
By dividing each of the three groups into two categories by age, adult (40–60 yrs) and elderly (>60 yrs), we can evaluate them separately. In both the 60 and 15 min PRVEP tests, the differences among the controls and groups A and B remained statistically significant in both age categories with respect to P100 latency and amplitude, with the longest latency and lowest amplitude in group B. This suggests that the differences between the groups are not due to age differences. Table 4 shows the 60 and 15 min PRVEP test parameters in groups A and B according to good glycemic control (HbA1c < 7.5%) and poor glycemic control (HbA1c ≥ 7.5%) [13].

Table 4. The 60 min and 15 min PRVEP test parameters in groups A and B according to good and poor glycemic control (mean ± SD).

| Group (HbA1c level) | 60 min P100 latency (ms) | 60 min P100 amplitude (μV) | 15 min P100 latency (ms) | 15 min P100 amplitude (μV) |
|---|---|---|---|---|
| Group A, <7.5% (N = 28) | 109.5 ± 5.6 | 12.8 ± 5 | 120.7 ± 5.5 | 13.8 ± 5.2 |
| Group A, ≥7.5% (N = 72) | 108.3 ± 6 | 9.4 ± 4.1 | 121.7 ± 6 | 9.8 ± 5 |
| *P* value | 0.337 | 0.001 | 0.424 | 0.001 |
| Group B, <7.5% (N = 12) | 114.7 ± 7 | 6.8 ± 2.3 | 123.6 ± 6.3 | 8.4 ± 4.8 |
| Group B, ≥7.5% (N = 64) | 118 ± 8 | 8.5 ± 4.2 | 127.2 ± 3 | 7.5 ± 4.8 |
| *P* value | 0.196 | 0.176 | 0.001 | 0.537 |

*P* value < 0.05 considered statistically significant.

Table 5 shows the 60 and 15 min PRVEP test parameters in groups A and B according to the duration of T2DM.

Table 5. The 60 min and 15 min PRVEP test parameters in groups A and B according to the duration of T2DM (mean ± SD).
| Group (T2DM duration) | 60 min P100 latency (ms) | 60 min P100 amplitude (μV) | 15 min P100 latency (ms) | 15 min P100 amplitude (μV) |
|---|---|---|---|---|
| Group A, ≤5 yrs (N = 26) | 108.1 ± 3.8 | 13 ± 4.1 a | 121.2 ± 5 | 14.3 ± 6.2 a |
| Group A, 6–10 yrs (N = 38) | 108 ± 5.1 | 9.3 ± 4.1 b | 121.3 ± 6.5 | 10 ± 4.4 b |
| Group A, >10 yrs (N = 36) | 109.7 ± 7.5 | 9.6 ± 5 b | 121.9 ± 5.8 | 9.5 ± 4.8 b |
| *P* value | 0.383 | 0.004 | 0.877 | 0.001 |
| Group B, ≤5 yrs (N = 16) | 119.5 ± 8.2 | 9.2 ± 3 | 126.7 ± 3.2 | 9.3 ± 4.4 |
| Group B, 6–10 yrs (N = 19) | 118.4 ± 8.3 | 8.1 ± 3.6 | 128.5 ± 2.7 | 7.1 ± 4.2 |
| Group B, >10 yrs (N = 41) | 116.3 ± 7.8 | 8 ± 4.5 | 126.7 ± 4.5 | 7.3 ± 5.1 |
| *P* value | 0.337 | 0.541 | 0.225 | 0.325 |

*P* value < 0.05 considered statistically significant. Different letters represent a significant difference at P < 0.05.

## 4. Discussion Although the clinical diagnosis of DR is based mainly on subjective detection of microvascular changes, functional tests such as electrophysiological measures have the potential to be early alternative determinants [14]. According to Hari Kumar et al. [15], VEP changes were evident even in short-term hyperglycemia, in pregnant females with gestational DM or T2DM compared with normoglycemic pregnant females, despite all being free from DR. In this study, both the 60 min and 15 min test results of group A revealed a statistically significant delay in P100 latency and a decrease in P100 amplitude compared with the controls; these results accord with those of Gupta et al. [16] for the 60 min test and with those of Heravian et al. [17] for the 15 min test. In addition, the presence of early NPDR clinical findings in group B was associated with more deranged PRVEP test parameters, in accordance with other studies' results [17, 18]. These data show that neurological alterations occur before the development of clinically significant DR and are further altered in the presence of DR. Daniel et al. [19], who used mid-size checks (24–32 min), detected a significant delay in P100 latency but did not find any significant decrease in P100 amplitude. This may be attributed to factors affecting the P100 amplitude, which is more influenced by technical factors and subject cooperation than the P100 latency [7]. In both test results of groups A and B, the proportions of abnormal P100 latency were higher than those of abnormal P100 amplitude, with higher abnormal proportions in the 15 min test. These proportions were greater than those measured in other studies [17, 18]; this variability could be explained by variation in the inclusion and exclusion criteria, DR diagnosis, recording conditions, and stimulus parameters. As the proportions of abnormal P100 latency for group A (96%) and group B (100%) in the 15 min test were higher than those in the 60 min test, this could suggest that the foveal region is affected much earlier by DM, and is more altered by the presence of DR changes, than the parafoveal region. This contrasts with Balta et al. [20], who found a significant difference in P100 latency only for the 60 min check size and no significant difference for the other check sizes tested in the right eyes of diabetic patients with no DR. Also, as the latency is more affected than the amplitude in group A, the pattern mainly resembles the features of multiple sclerosis. In group B, the presence of early NPDR clinical features was associated with a further delay in P100 latency and a further decrease in P100 amplitude in both tests; these results also follow the VEP changes in multiple sclerosis, in which the VEPs are progressively delayed and then, as demyelination progresses, the amplitude is attenuated [5]. Thus, early diabetic neural involvement seems to be conductive damage at the myelin sheath level of the optic nerve fibers [17].
These results contradict the VEP findings in ischemic optic neuropathy, which mainly reduces the P100 amplitude with a much smaller effect on P100 latency than demyelination has [21]. Changes in the myelin sheath of the optic nerve were described for the first time in experimental diabetes by Fernandez et al. [22], who identified extensive myelin irregularities and axonal loss, with oligodendrocyte and astrocyte abnormalities, at the distal portion of the optic nerve, all preceding retinal ganglion cell loss; these changes were detectable in animal models after only six weeks of diabetes. More recently, reactive gliosis and neuronal apoptosis have been hypothesized as early DR processes, implying that DR is a neurovascular complication [23]. These neural alterations have also been detected anatomically using spectral-domain optical coherence tomography (OCT) in many studies, which found thinning of the inner retinal layers as a result of DM [24, 25]. Van Dijk et al. [26] reported that, in eyes with minimal DR, there was thinning of the retinal nerve fiber layer (RNFL), inner plexiform layer (IPL), and ganglion cell layer (GCL) in the pericentral zone of the macula, while in the peripheral zone of the macula only the RNFL and IPL were thinner compared with normal eyes. These results further suggest that the foveal region is affected more than the parafoveal region and that the loss of RNFL and IPL precedes the loss of GCL. Compared with other diabetic neuropathies, the process seems to follow the same path as polyneuropathy of the peripheral nerves: Valls-Canals et al. [27] concluded that diabetic polyneuropathy is of two kinds, a demyelination that occurs with and without symptoms and an axonal loss that is the main cause of symptoms. DR pathology seems to be an actual central neuropathy similar to that of the peripheral nerves [3].
The perception of neural alterations as an early stage of DR raises the possibility of finding other treatments to prevent vision loss [28]. In the near future, it is very likely that DR management will be based on neuroprotective agents [29]. In Tables 4 and 5, in both test results of group A, higher amplitudes were detected in patients with good glycemic control and with ≤5 yrs DM duration, whereas the differences in P100 latency were nonsignificant. However, the latency was significantly prolonged in group B with poor glycemic control, only on the 15 min test. These results contrast with those of Heravian et al. [17], who found no significant difference in the PRVEP parameters with the duration and glycemic status of DM; however, their study depended on FPG to assess the patients' glycemic status, whereas the gold standard for assessing glycemic status is measurement of the HbA1c level [30]. As the P100 latency showed no significant difference in group A according to the duration and glycemic status of T2DM in both PRVEP tests, this could indicate that the retinocortical conduction is affected early by DM, independently of glycemic status, whereas the P100 amplitude is affected by increasing DM duration and poor glycemic control. In group B, poor glycemic control was associated with more conduction delay in the 15 min test, indicating a greater damaging effect of hyperglycemia on the retinocortical conduction, affecting mostly the foveal region. ## 5. Conclusions Collectively, the results of the PRVEP tests in this study strongly confirm the presence of neural alteration in the retina and/or optic nerve, before any clinically diagnosed DR changes, mainly as a conductive defect. In addition, these tests are noninvasive, quick, objective, and cheap, and they do not require mydriasis.
Therefore, PRVEP tests could be considered a valid tool for detecting early neurological changes, which could be of great value in the prevention of permanent neuronal loss and blindness. In addition, the results of the 60 min test were not the same as the results of the 15 min test in both patient groups; this could indicate that the effect of T2DM on the different parts of the retina is not uniform, with more impact on the foveal region. ## 6. Recommendations Further studies are required with the simultaneous use of pattern electroretinography (PERG) and PRVEP tests, to distinguish purely optic nerve changes from those of retinal origin, in addition to the use of OCT angiography to evaluate any subclinical macular edema. ## 7. Limitations (1) In the ophthalmology consultant unit of Almawani Teaching Hospital, unfortunately, the PERG software in the VEP machine needs an updated setup; OCT angiography is also not available in the unit. (2) Because we chose to evaluate patients with T2DM, all older than 40 years, it was very difficult to find subjects free from all the exclusion criteria, so the number of candidates was limited to 50 in each group. --- *Source: 1014857-2020-08-24.xml*
1014857-2020-08-24_1014857-2020-08-24.md
29,068
Pattern-Reversal Visual Evoked Potentials Tests in Persons with Type 2 Diabetes Mellitus with and without Diabetic Retinopathy
Raghda S. Al-Najjar; Nehaya M. Al-Aubody; Salah Z. Al-Asadi; Majid Alabbood
Neurology Research International (2020)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1014857
1014857-2020-08-24.xml
--- ## Abstract Background. Currently, diabetic retinopathy (DR) has a wide recognition as a neurovascular rather than a microvascular diabetic complication with an increasing need for enhanced detection approaches. Pattern-reversal visual evoked potentials (PRVEPs) test, as an objective electrophysiological measure of the optic nerve and retinal function, can be of great value in the detection of diabetic retinal changes. Objectives. The use of two sizes of checkerboard PRVEPs testing to detect any neurological changes in persons with type 2 diabetes mellitus (T2DM) with and without a clinically detected DR. Also, to compare the results according to the candidate age, duration, and glycemic status of T2DM. Methods. This study included 50 candidates as group A with T2DM and did not have a clinically detected DR and 50 candidates as group B with T2DM and had a clinically detected early DR and 50 candidates as controls who were neither diabetic nor had any other medical or ophthalmic condition that might affect PRVEPs test results. The PRVEPs were recorded in the consultant unit of ophthalmology in Almawani Teaching Hospital. Monocular PRVEPs testing of both eyes was done by using large (60 min) and small (15 min) checks to measure N75 latency and P100 latency and amplitude. Results. There was a statistically significant P100 latency delay and P100 amplitude reduction in both groups A and B in comparison with the controls. The difference between groups A and B was also significant. In both test results of groups A and B, the proportions of abnormal P100 latency were higher than those of P100 amplitude with a higher abnormal proportions in 15 min test. Conclusions. The PRVEP test detected neurological changes, mainly as conductive alterations affecting mostly the foveal region prior to any overt DR clinical changes, and these alterations were heightened by the presence of DR clinical changes. --- ## Body ## 1. 
Background In the recent past, diabetic retinopathy (DR) is frequently categorized as a microvascular complication of diabetes mellitus (DM). However, in the last few years, DR is recognized as a neurovascular impairment or sensory neuropathy subsequent to the neurovascular impairment [1]. It is well documented that hyperglycemia and its related metabolic abnormalities have a major harmful effect on retinal neurovascular unit including neuronal, vascular, glial, and immune cells, and not just a microvascular effect. This hypothesis opens a new window to manage DR [2]. Many studies showed that electrophysiological procedures are sensitive tools in the early identification of diabetic neural alterations way before the clinical vascular changes become apparent on fundoscopy. Albeit, its use in regular screening is still low and have obtained a much less attention than the tests for diabetic peripheral neuropathy [3, 4].The visual evoked potentials (VEPs) test is the primary tool and is superior to the magnetic resonance imaging (MRI) in assessing the functional integrity of the anterior visual pathways [5]. The pattern-reversal VEPs (PRVEPs) test is the standard and ideal modality for most clinical uses as it is less variable in timing and waveform than other VEP modalities. The use of large and small size checks is recommended by the International Society for Clinical Electrophysiology of Vision (ISCEV) standards [6]. The large size (60 min) mainly stimulates the retinal neural elements responsible for peripheral vision (parafovea), while the small size (15 min) mainly stimulates the retinal neural elements responsible for central vision (fovea) [7, 8].The most prominent component of PRVEPs wave is the P100 as a positive peak with relatively minimal variability. The increased P100 latency is an indicator for a retinocortical conduction decrement as occurring in the demyelinating process. 
On the other hand, the P100 amplitude and waveform abnormalities may indicate axon loss in the visual pathway [9].The aim of this study was to detect any neurological changes in persons with type 2 diabetes mellitus (T2DM) with and without a clinically detected DR through the use of two sizes of checkerboard PRVEP testing and also, to compare the results according to the candidates age, duration, and glycemic status of T2DM. ## 2. Subjects and Methods ### 2.1. Study Design This is a prospective study conducted in Basra Governorate, Iraq, from December 1, 2017, to October 1, 2018. All candidates were interviewed, and informed consents were taken from them. The study included 150 participants randomly who attended the ophthalmology consultant unit in Almawani Teaching Hospital. The age of the candidates was restricted to forty years and above at time of DM first diagnosed to limit the study to T2DM [10]. The candidates were divided into group A which included 50 persons with T2DM and did not have a clinically detected DR and group B which included 50 persons with T2DM and had a clinically detected mild-moderate nonproliferative DR (NPDR) [11] (Supplementary Figure 1) and 50 candidates as the control group who were free from DM and did not have any of the exclusion criteria. Both eyes of the controls and group A were included, while only the eyes which had the clinical features of mild-moderate NPDR [11] were included in group B.The PRVEPs were recorded using the RETI-port/scan 21 machine (Roland Consult, Brandenburg/Havel, Germany). It was done according to ISCEV standards [6], by using a full field pattern of black and white checks with central red fixation point. The checkerboard stimulus was of two sizes, large (60 min) and small (15 min) size checks. Monocular recording of both eyes were done by using a single-channel electrode of gold-plated type, with a four-channel amplifier whose band-pass filters were set at 1–50 Hz. 
The contrast was 97%, the plot time was (300msec), and the stimulus frequency was 1.53872 reversals per second. These test parameters were customized by the manufacturer and designated to measure the N75 latency, P100 latency, and amplitude. In this study, we will concentrate on P100 components as P100 is a prominent feature with relatively minimal variability [6]. ### 2.2. Exclusion Criteria Significant ocular diseases such as severe NPDR, proliferative DR, macular disease, vitreous opacities, visually significant cataract, glaucoma, optic neuropathy disease, best-corrected visual acuity less than 20/20, and amblyopia, all these conditions were excluded from the study. Any medical illness that can affect PRVEPs findings such as multiple sclerosis, epilepsy, thyroid disease, type 1 DM (T1DM), patients with a past history of head trauma or cerebrovascular accident, and uncontrolled hypertension (blood pressure (BP) above 140/90 mmHg) were excluded. In addition, alcoholics and drug addicts using such as heroin, morphine, cough syrups, pain killers, and sedatives (due to their negative impact on neural transmission) [12] and pregnant women were also excluded. ### 2.3. Data Collection Each candidate underwent a thorough history taking, BP, weight, height, fasting plasma glucose (FPG), and glycated hemoglobin (HbA1c) measurement, and a comprehensive ophthalmic examination including refraction and visual acuity, intraocular pressure (IOP), and anterior and fundus segment examinations after mydriasis. ### 2.4. Subjects and Testing Room Preparation Verbal consents were taken from all subjects who had a briefing about the procedure and instructed to fast the night before the tests and to avoid hair oils and cycloplegic drops. Subjects were seated comfortably in a stable position approximately 100 cm away from the monitoring screen, and the tested eye was in a proper alignment to the central fixation point with a precise focusing on it during testing. 
Subjects with refractive errors were asked to wear their corrective glasses. The testing room was kept quiet and dimly lit, with no other instruments operating during the test.

### 2.5. Electrode Placement

Electrodes were placed according to the International 10/20 system [6, 7]: the scalp sites were gently scrubbed with a piece of cotton and skin preparation gel, and the electrodes, applied with electrode paste, were positioned with the active electrode at the occipital scalp (Oz), the reference electrode at the frontal scalp (Fz), and the ground at the vertex (Cz). The electrode impedance was checked and kept ≤5 kohm, and the impedance difference among electrodes was ≤3 kohm.

### 2.6. Statistical Analysis

The data were analyzed using SPSS version 20. One-way ANOVA was used to test for significant differences between the three groups. Significant differences between each pair of groups were then evaluated by the post hoc Tukey test to measure the least significant difference (LSD), and a P value < 0.05 was considered statistically significant. The proportions of normal and abnormal results were estimated by comparison with the control means of this study; i.e., the longest normal P100 latency was calculated as control mean + 2SE, and the lowest normal P100 amplitude as control mean − 2SE.

## 3. Results

The baseline characteristics of the participants are presented in Table 1.

Table 1. Baseline characteristics of the participants.

| Variable | Controls (N = 50) | Group A (N = 50) | Group B (N = 50) | *P value |
| --- | --- | --- | --- | --- |
| Age 40–60 yrs (mean ± SD) | 50.5 ± 3.6 | 54.7 ± 4.2 | 55.7 ± 4.1 | 0.001 |
|  | N = 31 (62%) | N = 30 (60%) | N = 27 (54%) | — |
| Age >60 yrs (mean ± SD) | 63.5 ± 3.3 | 64.1 ± 3.1 | 65.7 ± 4.5 | 0.022 |
|  | N = 19 (38%) | N = 20 (40%) | N = 23 (46%) | — |
| Sex (male/female) | 25/25 | 24/26 | 26/24 | — |
| Systolic BP (mmHg, mean ± SD) | 125.5 ± 11.3 | 122.8 ± 10.8 | 122 ± 10 | 0.08 |
| Diastolic BP (mmHg, mean ± SD) | 78.5 ± 8.1 | 77.2 ± 9.3 | 79.5 ± 9 | 0.182 |
| BMI (kg/m², mean ± SD) | 30.7 ± 5.4 | 29.4 ± 5.1 | 28.7 ± 3.8 | 0.01 |
| FPG (mg/dl) | 88.6 ± 9.9 | 163.8 ± 30.8 | 177.6 ± 34 | 0.001 |
| HbA1c (%) | 4.07 ± 0.6 | 8.5 ± 1.7 | 9.4 ± 2.8 | 0.001 |
| T2DM duration (yrs, mean ± SD) | — | 7.7 ± 9.7 | 9.7 ± 4.4 | 0.001 |

BP: blood pressure; BMI: body mass index; FPG: fasting plasma glucose; HbA1c: glycated hemoglobin.

The test results of both eyes are presented together without right/left discrimination, as there was no statistically significant difference in the mean values of the three parameters of both PRVEP tests between the right and left eyes of each group.
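The normal-limit calculation described in Section 2.6 (control mean ± 2SE) is simple enough to sketch directly. The following minimal Python sketch uses the control means and SEs reported in Table 2; the function names are illustrative, not from the study, which performed its analysis in SPSS.

```python
# Sketch of the Section 2.6 cutoff rule: the longest normal P100 latency is
# control mean + 2*SE, and the lowest normal P100 amplitude is mean - 2*SE.
# Control (mean, SE) values below come from Table 2; the computed limits match
# the 105.52, 111.48, and 11.6 cutoffs quoted in the text (small rounding
# differences are possible for the 15 min amplitude limit).

def upper_limit(mean, se):
    """Longest normal latency: mean + 2*SE."""
    return round(mean + 2 * se, 2)

def lower_limit(mean, se):
    """Lowest normal amplitude: mean - 2*SE."""
    return round(mean - 2 * se, 2)

# Control values (mean, SE) from Table 2
lat_cutoff_60 = upper_limit(104.32, 0.6)   # 105.52 ms
lat_cutoff_15 = upper_limit(110.4, 0.54)   # 111.48 ms
amp_cutoff_60 = lower_limit(12.6, 0.5)     # 11.6 uV

def classify(latency_ms, amplitude_uv, lat_cutoff, amp_cutoff):
    """An eye is abnormal if latency exceeds, or amplitude falls below, the limit."""
    return ("abnormal latency" if latency_ms > lat_cutoff else "normal latency",
            "abnormal amplitude" if amplitude_uv < amp_cutoff else "normal amplitude")
```

Applying `classify` to each eye against the appropriate test's cutoffs yields the normal/abnormal proportions of the kind reported in Table 3.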
Additional tables show this in more detail (Supplementary Tables 1 and 2).

Table 2 shows that, in both the 60 min and 15 min tests, the mean P100 latency was significantly longer and the mean P100 amplitude significantly lower in group B than in group A and the controls. The differences between group A and the controls were also significant. With regard to the N75 latency mean value, no statistically significant difference was detected among the three studied groups.

Table 2. The PRVEP test results of both eyes in each group.

| Test | Parameter | Controls (N = 100 eyes) | Group A (N = 100 eyes) | Group B (N = 76 eyes) | LSD | P values |
| --- | --- | --- | --- | --- | --- | --- |
| 60 min | N75 latency (ms) | 68.3 ± 0.67 a | 69.4 ± 0.67 ab | 71 ± 0.96 b | 3.01 | P1 = 0.276, P2 = 0.006, P3 = 0.084 |
| 60 min | P100 latency (ms) | 104.32 ± 0.6 c | 108.63 ± 0.58 b | 117.5 ± 0.9 a | 4.31 | P1 = 0.001, P2 = 0.001, P3 = 0.001 |
| 60 min | P100 amplitude (μV) | 12.6 ± 0.5 a | 10.4 ± 0.46 b | 8.2 ± 0.46 c | 2.14 | P1 = 0.001, P2 = 0.001, P3 = 0.003 |
| 15 min | N75 latency (ms) | 82.57 ± 0.63 | 81.8 ± 1.1 | 79.6 ± 1.3 | — | P1 = 0.615, P2 = 0.053, P3 = 0.141 |
| 15 min | P100 latency (ms) | 110.4 ± 0.54 c | 121.5 ± 0.58 b | 127.2 ± 0.45 a | 5.7 | P1 = 0.001, P2 = 0.001, P3 = 0.001 |
| 15 min | P100 amplitude (μV) | 15.35 ± 0.73 a | 11 ± 0.54 b | 7.7 ± 0.55 c | 3.05 | P1 = 0.001, P2 = 0.001, P3 = 0.001 |

Values are expressed as mean ± SE. Different letters represent a significant difference at P value < 0.05; LSD: least significant difference between the three groups. P1 = P value between controls and group A; P2 = P value between controls and group B; P3 = P value between group A and group B.

By calculating the upper limit of normal P100 latency for the 60 min test (105.52 ms) and the 15 min test (111.48 ms), and the lower limit of normal P100 amplitude for the 60 min test (11.6 μV) and the 15 min test (13.86 μV), we can use these values as the cutoff points between normal and abnormal results. The proportions of normal and abnormal P100 latency and amplitude in both tests are shown in Table 3.

Table 3. The proportions of normal and abnormal PRVEP test results.
| Test | Parameter | Result | Controls | Group A | Group B |
| --- | --- | --- | --- | --- | --- |
| 60 min | P100 latency | Normal | N = 58 (58%) | N = 24 (24%) | N = 5 (6.6%) |
| 60 min | P100 latency | Abnormal | N = 42 (42%) | N = 76 (76%) | N = 71 (93.4%) |
| 60 min | P100 amplitude | Normal | N = 51 (51%) | N = 39 (39%) | N = 15 (19.7%) |
| 60 min | P100 amplitude | Abnormal | N = 49 (49%) | N = 61 (61%) | N = 61 (80.3%) |
| 15 min | P100 latency | Normal | N = 50 (50%) | N = 4 (4%) | N = 0 (0%) |
| 15 min | P100 latency | Abnormal | N = 50 (50%) | N = 96 (96%) | N = 76 (100%) |
| 15 min | P100 amplitude | Normal | N = 49 (49%) | N = 24 (24%) | N = 8 (10.5%) |
| 15 min | P100 amplitude | Abnormal | N = 51 (51%) | N = 76 (76%) | N = 68 (89.5%) |

As the ISCEV standards recommend, adult age ranges from 18 to 60 years, and those older than 60 years are considered elderly and compared separately [6]. By dividing each of the three groups into two categories according to age, adult (40–60 yrs) and elderly (>60 yrs), we can evaluate them separately. In both the 60 and 15 min PRVEPs, the difference remained statistically significant between the controls and groups A and B in both age categories with respect to P100 latency and amplitude, with the longest latency and lowest amplitude in group B. This suggests that the differences between groups are not related to age.

Table 4 shows the results of the 60 and 15 min PRVEP test parameters in group A and group B according to good glycemic control (HbA1c < 7.5%) and poor glycemic control (HbA1c ≥ 7.5%) [13].

Table 4. The 60 min and 15 min PRVEP test parameters in group A and group B according to good and poor glycemic control (mean ± SD).
| Group (HbA1c level) | 60 min P100 latency (ms) | 60 min P100 amplitude (μV) | 15 min P100 latency (ms) | 15 min P100 amplitude (μV) |
| --- | --- | --- | --- | --- |
| Group A, <7.5% (N = 28) | 109.5 ± 5.6 | 12.8 ± 5 | 120.7 ± 5.5 | 13.8 ± 5.2 |
| Group A, ≥7.5% (N = 72) | 108.3 ± 6 | 9.4 ± 4.1 | 121.7 ± 6 | 9.8 ± 5 |
| *P value | 0.337 | 0.001 | 0.424 | 0.001 |
| Group B, <7.5% (N = 12) | 114.7 ± 7 | 6.8 ± 2.3 | 123.6 ± 6.3 | 8.4 ± 4.8 |
| Group B, ≥7.5% (N = 64) | 118 ± 8 | 8.5 ± 4.2 | 127.2 ± 3 | 7.5 ± 4.8 |
| *P value | 0.196 | 0.176 | 0.001 | 0.537 |

*P value < 0.05 considered statistically significant.

Table 5 shows the results of the 60 and 15 min PRVEP test parameters in group A and group B according to the duration of T2DM.

Table 5. The 60 min and 15 min PRVEP test parameters in group A and group B according to the duration of T2DM (mean ± SD).

| Group (T2DM duration) | 60 min P100 latency (ms) | 60 min P100 amplitude (μV) | 15 min P100 latency (ms) | 15 min P100 amplitude (μV) |
| --- | --- | --- | --- | --- |
| Group A, ≤5 yrs (N = 26) | 108.1 ± 3.8 | 13 ± 4.1 a | 121.2 ± 5 | 14.3 ± 6.2 a |
| Group A, 6–10 yrs (N = 38) | 108 ± 5.1 | 9.3 ± 4.1 b | 121.3 ± 6.5 | 10 ± 4.4 b |
| Group A, >10 yrs (N = 36) | 109.7 ± 7.5 | 9.6 ± 5 b | 121.9 ± 5.8 | 9.5 ± 4.8 b |
| *P value | 0.383 | 0.004 | 0.877 | 0.001 |
| Group B, ≤5 yrs (N = 16) | 119.5 ± 8.2 | 9.2 ± 3 | 126.7 ± 3.2 | 9.3 ± 4.4 |
| Group B, 6–10 yrs (N = 19) | 118.4 ± 8.3 | 8.1 ± 3.6 | 128.5 ± 2.7 | 7.1 ± 4.2 |
| Group B, >10 yrs (N = 41) | 116.3 ± 7.8 | 8 ± 4.5 | 126.7 ± 4.5 | 7.3 ± 5.1 |
| *P value | 0.337 | 0.541 | 0.225 | 0.325 |

*P value < 0.05 considered statistically significant. Different letters represent a significant difference at P value < 0.05.

## 4. Discussion

Although the main clinical diagnosis of DR is based on subjective detection of microvascular changes, functional tests such as electrophysiological measures have the potential to be an early alternative determinant [14]. According to Hari Kumar et al.
[15], VEP changes were evident even in short-term hyperglycemia in gestational DM and T2DM pregnant females, in comparison with normoglycemic pregnant females, despite all being free from DR.

In this study, both the 60 min and 15 min test results of group A revealed a statistically significant delay in the P100 latency and a decrease in the P100 amplitude when compared with the controls; these results are in accordance with those of Gupta et al. [16] for the 60 min test and with those of Heravian et al. [17] for the 15 min test. In addition, the presence of early NPDR clinical findings in group B was associated with more deranged PRVEP test parameters; these results are in accordance with other studies' results [17, 18]. These data show that neurological alterations occurred prior to the development of clinically significant DR and were more pronounced in the presence of DR. Although Daniel et al. [19], who used mid-size checks (24–32 min), detected a significant delay in P100 latency, they did not find any significant decrease in P100 amplitude. This may be attributed to factors affecting the P100 amplitude, as it is more influenced by technical factors and subject cooperation than the P100 latency [7].

In both test results of groups A and B, the proportions of abnormal P100 latency were higher than those of P100 amplitude, with higher abnormal proportions in the 15 min test. These proportions were greater than those measured in other studies [17, 18]. This variability could be explained by variation in the inclusion and exclusion criteria, DR diagnosis, recording conditions, and stimulus parameters.

As the proportions of abnormal P100 latency for group A (96%) and group B (100%) in the 15 min test were higher than those of the 60 min test, this could suggest that the foveal region is affected much earlier by DM and more altered by the presence of DR changes than the parafoveal region, unlike Balta et al.
[20], who found a significant difference in P100 latency only in the 60 min check size and no significant difference in the other check sizes tested in the right eye of diabetic patients with no DR.

Also, as the latency is more affected than the amplitude in group A, this mainly resembles the features of multiple sclerosis. In group B, the presence of early NPDR clinical features was associated with a further delay in the P100 latency and a further decrease in P100 amplitude in both tests; these results also follow the VEP changes in multiple sclerosis, in which the VEPs are progressively delayed and then, as demyelination progresses, the amplitude is attenuated [5]. Thus, early diabetic neural involvement seems to be a conductive damage at the level of the myelin sheath of the optic nerve fibers [17]. These results contradict the VEP findings of ischemic optic neuropathy, which mainly reduces the P100 amplitude with a much lower effect on the P100 latency than demyelination does [21].

Changes in the myelin sheath of the optic nerve were stated for the first time in experimental diabetes by Fernandez et al. [22], who identified extensive myelin irregularities and axonal loss with oligodendrocyte and astrocyte abnormalities at the distal portion of the optic nerve, all preceding retinal ganglion cell loss; these changes were detectable in animal models after only six weeks of diabetes. More recently, reactive gliosis and neuronal apoptosis have been hypothesized as early DR processes, implying that DR is a neurovascular complication [23]. These neural alterations were also detected anatomically by spectral domain optical coherence tomography (OCT) in many studies, which concluded that there is thinning of the inner retinal layer as a result of DM [24, 25]. Van Dijk et al.
[26] reported that, in eyes with minimal DR, there was thinning of the retinal nerve fiber layer (RNFL), inner plexiform layer (IPL), and ganglion cell layer (GCL) in the pericentral zone of the macula, while in the peripheral zone of the macula, only the RNFL and IPL were thinner compared with normal eyes. These results further suggest that the foveal region is affected more than the parafoveal region and that the loss of RNFL and IPL precedes the loss of GCL.

Compared with other diabetic neuropathies, the process seems to follow the same path as polyneuropathy of the peripheral nerves, as Valls-Canals et al. [27] concluded that diabetic polyneuropathy is of two kinds: a demyelination which occurs with and without symptoms, and an axonal loss which is the main cause of symptoms. DR pathology seems to be an actual central neuropathy similar to that of the peripheral nerves [3]. The perception of neural alterations as an early stage of DR raises the possibility of finding other treatments to prevent vision loss [28]. In the near future, it is very likely that DR management will be established on neuroprotective agents [29].

In Tables 4 and 5, in both test results of group A, higher amplitudes were detected in patients with good glycemic control and with ≤5 yrs DM duration, whereas the difference was nonsignificant in the P100 latency results. However, the latency was significantly prolonged in group B with poor glycemic control only on the 15 min test. These results contrast with Heravian et al.'s [17] results, where no significant difference was found in the PRVEP parameters with the duration and glycemic status of DM.
However, their study depended on FPG to assess the glycemic status of the patients, whereas the gold standard investigation to assess glycemic status is measuring the HbA1c level [30].

As the P100 latency showed no significant difference in group A according to the duration and glycemic status of T2DM in both PRVEP tests, this could indicate that retinocortical conduction is affected early by DM, unrelated to glycemic status, whereas the P100 amplitude is affected by increasing DM duration and poor glycemic control. In group B, poor glycemic control was associated with more conduction delay in the 15 min test, indicating a higher damaging effect of hyperglycemia on retinocortical conduction, affecting mostly the foveal region.

## 5. Conclusions

Collectively, the results of the PRVEP tests in this study strongly confirm the presence of neural alteration in the retina and/or optic nerve before any clinically diagnosed DR changes, mainly as a conductive defect. In addition, these tests are noninvasive, quick, objective, and cheap, and they do not require mydriasis. Therefore, PRVEP tests could be considered a valid tool to detect early neurological changes, which could be of great value in the prevention of permanent neuronal loss and blindness. In addition, the results of the 60 min test were not the same as those of the 15 min test in both patient groups; this could indicate that the effect of T2DM on the different parts of the retina is not uniform, with more impact on the foveal region.

## 6. Recommendations

Further studies are required with the simultaneous use of pattern electroretinography (PERG) and PRVEP tests to distinguish purely optic nerve changes from those of retinal origin, in addition to the use of OCT angiography to evaluate any subclinical macular edema.

## 7. Limitations

(1) In the ophthalmology consultancy of the Almawani Teaching Hospital, unfortunately, the PERG software in the VEP machine needs an updated setup, and OCT angiography is not available in the consultancy. (2) Because we chose to evaluate patients with T2DM, all older than 40 years, it was difficult to find subjects free from all the exclusion criteria; thus, the number of candidates was limited to 50 in each group.

---

*Source: 1014857-2020-08-24.xml*
2020
# Effects of Attitude, Motivation, and Eagerness for Physical Activity among Middle-Aged and Older Adults

**Authors:** Md Mizanur Rahman; Dongxiao Gu; Changyong Liang; Rao Muhammad Rashid; Monira Akter

**Journal:** Journal of Healthcare Engineering (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1014891

---

## Abstract

Background. Although physical activity (PA) is a noninvasive and cost-effective method of improving the quality of health, global statistics show that only a few middle-aged and older adults engage in the recommended PAs, due to a lack of motivation and companionship. Objective. This study analyses the attitudes and self-determined motivation of Chinese middle-aged and older adults towards PAs, and their eagerness to participate in PAs such as sports, exercise, and recreational and cultural activities (RCAs), from the perspective of attitudinal, eagerness, and motivational objectives of PAs. Methods. A cross-sectional study was carried out on 840 middle-aged (35–54 years) and older adults (55+ years). To determine their attitude, eagerness, and self-determined motivation for PA, we used attitudinal measures, the Eagerness for Physical Activity Scale (EPAS), and the Situational Motivation Scale (SIMS). The data were analyzed with SPSS 23.0. Results. The results show that 39.1% of the participants were not satisfied with PAs. Compared with females, males reported a less positive attitude towards PAs. Moreover, a positive attitude decreases with age. Participants' motivation and eagerness for activities such as RCAs, exercise, and sports are decreasing. Regarding self-determined motivation, there are gender differences in RCAs but none for exercise and sports participation. Conclusion. The findings show the importance of RCAs and of the support of family and friends in enhancing the eagerness, attitude, and motivation to participate in PAs.
Furthermore, the findings can help to create more effective PA programs for middle-aged and older adults. By engaging in RCAs, participants can reap the benefits of PAs, and participating in RCAs can lead to social equity in health.

---

## Body

## 1. Introduction

Globally, 13% of adults are obese, and 39% are overweight; these numbers are expected to increase in the coming years [1]. For instance, according to the World Health Organization (WHO), 36.2% of Chinese adults are overweight, and 5.9% are obese [2]. As they advance in age, adults face several debilitating diseases, such as diabetes, cardiovascular disease, and dementia. Despite age-related changes, they need to maintain a good quality of health and life and improve their wellbeing. Recent studies show that when adults engage in PAs, this leads to health benefits, such as an increase in muscle power [3] and improvements in mental health, physical health, cognitive function, and self-assurance. PAs decrease depression, anxiety, dementia, and coronary heart disease [4]. According to Yu and Lin [5], "the specific type of PA (e.g., walking) itself may be a key in promoting PA for older adults and the general adult population" (p. 483). Others note that PAs are connected with some sort of enjoyment [6], self-determined motivation [7, 8], and the fulfillment of basic psychological requirements. Several theories are used to guide, encourage, and enhance participation in PAs and to evaluate adults' active behaviors [9].

Several scholars have attempted to investigate the fundamental dynamics of the attitudes of participants in PAs [10, 11] and the benefits of PAs [3, 12, 13], but only a few studies have examined the role of attitude and motivation in enhancing PA participation. Attitude has been considered an interrelated concept, while enjoyment [6] is regarded as an essential characteristic of self-determined motivation, which motivates people to engage in PAs [14].
Most studies have used self-determination theory (SDT) [7, 8] to examine people's approach to PAs by analyzing self-determined motivation [15]. SDT explains motivation and human behavior with a critical focus on a differentiated approach to motivation.

SDT distinguishes both controlled and autonomous motivation from amotivation, in which behavior is initiated and controlled by forces outside a person's control. Both forms of motivation may be regulated to participate in individual activities, and they can also generate higher participation. Participants who are autonomously motivated show signs of satisfaction and developing health [16]. A fundamental characteristic of SDT is the relationship between amotivation and the joy of fulfillment [7]. Although many autonomously motivated participants complete their essential PA requirements, a significant number of participants need support to improve their motivation and minimize amotivation [17, 18]. To increase PAs, we should decrease amotivation in participants so that they can reach an improved level of satisfaction.

### 1.1. Attitudes towards PAs

Attitude can determine one's involvement in a specific behavior [11]. In PAs, a positive disposition leads to a constructive attitude, while a negative disposition yields a destructive attitude. Studies on attitudes towards PAs [10, 19] have approached the idea from the positive perspective of enjoyment. This approach is vital in establishing optimistic involvement in PAs and hence promotes participation. In the current scenario, it is significant to identify the relationship between attitude towards PAs and engagement in PAs.

### 1.2. Self-Determination and Attitudes towards PAs

According to relational theories of human behavior development, participants' self-determination and positive attitude development depend on the PA environment [20].
As middle-aged (35–54) and older adults (55+) carry traits of individualism, the organizing process should consider their aspirations and needs in all elements of PAs. Middle-aged and older adults' self-determined motivation and attitude towards PAs can be influenced by different variables, such as age, gender, the influence of family and friends, and attachment to cultural activities (dance, dance exercise, wushu (武术), tai chi chuan (太极拳), and qigong (气功)) [21–24]. From this perspective, attitude, motivation, and the location of the activity play a significant role in gradually improving middle-aged and older adults' level of participation in PAs.

### 1.3. Eagerness towards PAs

Eagerness is a way of recognizing behavior that influences participants to undertake a particular action, in contrast to rationally or instrumentally driven practices. This idea is theoretically attached to lived understandings [25], representing the person's situation and evaluation base when encountering new occurrences. Eagerness also indicates a regulatory tendency towards behaviors that are evaluated as personally important or are significant in themselves. Eagerness for PAs encourages participants' mental condition and enhances passion and a strong longing or desire for PAs, which is good for health [25]. Hope is significant in understanding the participant's drive for learning and improvement [26]. Moreover, eagerness is associated with the encouragement of desirable behavior rather than the prevention of negative behavior [27]. In PAs, the concept of eagerness illustrates the motivation for a particular action, which is fulfilling and satisfying. Furthermore, the psychological qualities of eagerness towards PAs, namely hope and a positive intention to maintain PAs in the future, possess significant potential in predicting sustainable involvement and participation in PAs.
Researchers [25] show that eagerness for PAs has predictive validity above self-determined motivation.

As stated above, attitude, motivation, and eagerness for PAs are assumed to be relevant predictors of PAs. However, questions such as "Do these factors motivate and facilitate middle-aged and older adults to engage in PAs [24]?" remain unanswered. Moreover, existing studies [24] lack a comprehensive theoretical representation of the factors that could enhance participants' attitude and motivation and their effects on increasing PA levels. Only a few studies identify the effects of attitude and motivation on PAs, and mostly in adolescents. Therefore, it is vital to examine the effect of the support of family and friends on middle-aged and older adults' attitudes, eagerness, and motivation for PAs. It is essential to consider how participants realize and adopt their family's and friends' PA-related attitudes and behaviors in their values, understandings, and intentions to be physically active. To bridge this research gap, we integrate PA constructs in different groups, such as sports, exercise, and RCAs, with participants' enjoyment of PAs, attitude, motivation, the support of family and friends, and eagerness for PAs. Hence, this study examines the antecedents and effects of attitude, motivation, support of family and friends, and eagerness of middle-aged and older adults for PAs.

Considering the points mentioned above, this study investigates the level of PAs in middle-aged and older adults in China. The objectives of this study are threefold. The first is how the enjoyment of and access to PAs affect the attitude of middle-aged and older adults towards PAs. The second is the participants' self-determined motivation and their eagerness to participate in different types of PAs, such as sports, exercise, and RCAs.
The last objective is how participating in RCAs is correlated with sports and exercises, and how these three groups affect participants’ motivation for PAs. Overall, this study also measures the relationship between the attitudes, self-determined motivation, and eagerness with actual involvement in PA. ## 1.1. Attitudes towards PAs Attitude can determine one’s involvement in a specific behavior [11]. In PAs, positive disposition leads to a constructive attitude, while a negative disposition yields a destructive attitude. Studies about the attitude towards PAs [10, 19] have approached the idea from the positive perspective of enjoyment. This approach is vital in establishing optimistic involvements in PAs and hence promotes participation. In the current scenario, it will be significant to identify the relationship between attitude towards PAs and engagement in PAs. ## 1.2. Self-Determination and Attitudes towards PAs According to the relational theories of human behavior development, participants’ self-determination and positive attitude development depend on the PA’s environments [20]. As middle-aged (35–54) and older adults (55+) carry traits of individualism, the organizing process should consider their aspiration and needs in the entire elements of PAs. Middle-aged and older adults’ self-determined motivation and attitude towards PAs can be influenced by different variables, such as age, gender, family, and friends influence and attachment with cultural activities (dance, dance exercise, wushu (武术), tai chi chuan (太极拳), and qigong (气功)) [21–24]. In this perspective, attitude, motivation, and location of the activity play a significant role in gradually improving middle-age and older adults’ level of participation in PAs. ## 1.3. Eagerness towards PAs Eagerness is the way of recognizing behavior that influences participants to undertake a particular action that contrasts rationally or instrumentally driven practices. 
This idea is theoretically attached to live understandings [25] to represent the persons’ situation and evaluation base when encountering new occurrences. Eagerness also indicates a regulatory tendency towards behaviors that evaluates personal importance or is in itself significant. Eagerness for PAs encourages participants’ mental condition, enhances passion, and incredible longing or desire for PAs, which is good [25] for health. Hope is significant in understanding the participant’s drive for learning and improvement [26]. Moreover, eagerness is associated with the encouragement of desirable behavior rather than the prevention of negative behavior [27]. In PAs, the concept of eagerness illustrates the motivation for a particular action, which is fulfilling and satisfying. Furthermore, the psychological qualities of eagerness towards PAs, which is hope and positive intention to maintain PAs in the future, possess significant potential in predicting sustainable involvement and participation in PAs. Researchers [25] show that eagerness for PAs has predictive validity above self-determined motivation.As stated above, attitude, motivation, and eagerness for PAs are assumed to be relevant predictors of PAs. However, question such as “Do these factors motivate and facilitate middle-aged and older adults to engage in PAs [24]?” remains unanswered. Moreover, existing studies [24] lack the comprehensive theoretical representation of factors that could enhance participants’ attitude, motivation, and their effects to increase their PA levels. Only a few studies identify the effects of attitude and motivation on PAs in adolescents. Therefore, it is vital to examine the effect of the support of family and friends on these middle-aged and older adults’ attitudes, eagerness, and motivation for PAs. 
It is essential to consider how participants internalize their family’s and friends’ PA-related attitudes and behaviors in their values, understandings, and intentions to be physically active. To bridge this research gap, we integrate PA constructs in different groups, such as sports, exercise, and RCAs, with participants’ enjoyment of PAs, attitude, motivation, the support of family and friends, and eagerness for PAs. Hence, this study examines the antecedents and the effects of attitude, motivation, support of family and friends, and eagerness of middle-aged and older adults for PAs.

By considering the points mentioned above, this study investigates the level of PAs in middle-aged and older adults in China. The objectives of this study are three-fold. The first is how the enjoyment of and access to PAs affect the attitude of middle-aged and older adults towards PAs. The second is the participants’ self-determined motivation and their eagerness to participate in different types of PAs, such as sports, exercise, and RCAs. The last objective is how participation in RCAs correlates with sports and exercise, and how these three groups affect participants’ motivation for PAs. Overall, this study also measures the relationship of attitudes, self-determined motivation, and eagerness with actual involvement in PA.

## 2. Materials and Methods

### 2.1. Sample Selection and Data Collection

In the first part of the questionnaire, the participants were asked to state their gender, age, marital status, social environment motivators, and physical limitations. In the second part, we used the International Physical Activity Questionnaire (IPAQ) (“In a typical week, how many hours do you spend participating in physical activity?”) to identify participants’ PA levels. After that, we used the SIMS [28] and EPAS [25] questionnaires to determine participants’ PA motivation and eagerness for PAs.
Before the primary survey, we conducted a pilot study based on a focus group of teachers and PhD students from the school of management who have specialized skills in survey design. Subsequently, based on the recommendations of the focus group, minor changes were made to the sequence and wording of some of the questions. Second, to obtain feedback and confirm the content’s validity with respect to the participants’ attitude, motivation, and eagerness for PAs, 40 randomly sampled pilot responses were analyzed. After calculating Cronbach’s alpha, we found that the mean, standard deviation, and factor loading values were sufficiently high; hence, we proceeded with further investigation. After this analysis, we approved the final version of the questionnaire. The authors are fluent in English and Chinese. They translated the English version of the questionnaire into the local language (simplified Chinese) and translated it back to English to check the quality of the translation. This back-translation approach was adopted on the recommendations of the translating committee [29]. After we changed some wording, the questionnaire reached its final version. The final version of the questionnaire was sent to different groups of people via a web link and barcode. The questionnaires were administered in different places, such as playgrounds, parks, malls, and recreational centers, where middle-aged and older people engage in different kinds of PAs.

### 2.2. Participants

Self-administered questionnaires were used to collect the data from large cities in China, such as Shanghai, Beijing, Hefei, Bozhou, and Guangzhou. The survey was conducted with the help of trained students who have experience in data collection. The main target population comprised people close to parks, malls, playgrounds, and recreational centers.
For wushu, tai chi chuan, and qigong, we collected the data from the Huangshan International Wushu Competition, the Bozhou International Health Qigong Expo 2019, and the 11th Huatuo Wuqinxi health and festival exchange competition. We targeted middle-aged and older respondents; the justification for this choice is that these people prefer PAs. With the help of the researchers, the respondents were given a gift card (红包) after completing the survey. A total of 894 questionnaires were collected, and 54 of those were rejected based on screening criteria and missing data. Hence, 840 questionnaires were used for the final analysis. The details of the demographics are shown in Table 1.

Table 1. Demographics with immediate response to like/dislike of PA and participants’ reflection. Columns 2–4 give the percentage with the number of participants in parentheses; RCAs = recreational and cultural activities.

| | Mean (SD) | I do not like PA, % (n) | I like PA, but it provides some difficulty, % (n) | I like PA, and this should remain in the long run, % (n) |
| --- | --- | --- | --- | --- |
| Total number (N = 840/894) | 5.6 (1.4) | 11.1 (93) | 27.0 (227) | 61.9 (520) |
| Males (N = 327/347) (38.9%) | 5.2 (1.2) | 11.9 (39) | 34.2 (112) | 53.9 (176) |
| Females (N = 513/547) (61.1%) | 5.7 (1.4) | 9.0 (46) | 21.0 (108) | 70.0 (359) |
| Middle age (35–54) (273/295) (32.7%) | 5.7 (1.3) | 8.8 (24) | 34.1 (93) | 57.1 (156) |
| Older adults 55+ (567/599) (67.3%) | 5.4 (1.4) | 12.0 (68) | 21.9 (124) | 66.1 (375) |
| Sports (132/140) (15.7%) | 4.8 (1.2) | 9.1 (12) | 43.2 (57) | 47.7 (63) |
| Exercise (273/291) (32.5%) | 5.1 (1.3) | 9.9 (27) | 29.7 (81) | 60.4 (165) |
| RCAs (435/463) (51.8%) | 5.9 (1.4) | 4.9 (21) | 21.4 (93) | 73.8 (321) |

### 2.3. Measures

A structured questionnaire was designed, and we used Likert-type response formats. For attitude, motivation, and eagerness, we used a seven-point response that ranges from “strongly disagree” (1) to “strongly agree” (7). For the Self-Determination Motivation Index (SDMI), we used a scale ranging from −18 to +18.
For the total amount of PAs, we used a six-point range; for the support of family and friends, we used a five-point response ranging from “strongly disagree” (1) to “strongly agree” (5). To maintain consistency with prior work, we took the constructs from existing studies; in particular, the items for the attitude and motivation dimensions were adopted from Guay et al. [28]. The items for the participants’ eagerness were adopted from Säfvenbom et al. [25]. The construct items for the total amount of PAs and the support of family and friends were adopted from Rahman et al. [24].

#### 2.3.1. Total Amount of PAs

Participants gave information about their weekly PA levels. PA levels were evaluated using the IPAQ [30], a widely used instrument for quantifying PA with established reliability and validity. To evaluate the participants’ total weekly level of PAs, we asked “In a usual week, in total, how many hours do you spend participating in physical activity: 1-2, 3-4, 5-6, 7-8, 9-10, or 11 hours in each week?” [24] (p. 3). The responses were categorized into six segments, 1, 2, …, 6 (see Table 2).

Table 2. Physical activity per week and Cronbach’s alpha values of the study variables.

| Construct | Items | Hours/week | Sports (n = 132) | Exercises (n = 273) | RCAs (n = 435) | Cronbach’s alpha (α) |
| --- | --- | --- | --- | --- | --- | --- |
| (1) Physical activity per week | 7 | 1–2 | 30 | 50 | 81 | 0.83 |
| | | 3–4 | 61 | 95 | 123 | |
| | | 5–6 | 23 | 75 | 135 | |
| | | 7–8 | 12 | 39 | 66 | |
| | | 9–10 | 4 | 12 | 23 | |
| | | ≥11 | 2 | 2 | 7 | |
| | | α | 0.82 | 0.84 | 0.87 | |
| (2) Attitude towards PA | 3 | | | | | 0.79 |
| (3) Intrinsic motivation | 4 | | | | | 0.87 |
| (4) Identified regulation | 4 | | | | | 0.93 |
| (5) Extrinsic motivation | 4 | | | | | 0.81 |
| (6) Amotivation | 4 | | | | | 0.82 |
| (7) Family and friends support | 5 | | | | | 0.78 |
| (8) Eagerness of PA | 9 | | | | | 0.95 |

#### 2.3.2. Self-Determined Motivation

We used the Situational Motivation Scale (SIMS) [28], which contains 16 questions, to assess the participants’ self-determined motivation for PAs. Previous studies on PAs used SIMS as a reliable and suitable tool [25, 28, 31]; it uses four subdimensions to assess and measure why participants engage in PAs.
The subdimensions are (1) intrinsic motivation (“I feel good when I engage in this activity”), (2) identified regulation (“I believe that this activity is good for me”), (3) extrinsic motivation (“this is something that I have to do”), and (4) amotivation (“I do this activity; however, I am not sure if it is worth it”).

#### 2.3.3. Self-Determination Motivation Index (SDMI)

To determine the participants’ PA position on the self-determination range, we used an SDMI built from the four SIMS subscales. The SDMI illustrates the strength of the participants’ self-determination and is expressed as follows:

SDMI = 2 × IM + IR − ER − 2 × AM,  (1)

where IM, IR, ER, and AM denote the intrinsic motivation, identified regulation, external regulation (extrinsic motivation), and amotivation subscale scores, respectively. This SDMI uses simple weighting and ranges from −18 to +18, where a higher score indicates stronger self-determination.

#### 2.3.4. Support of Family and Friends for PAs

The support of family and friends for PAs was measured by five questions, for example, “How often do your family members or your friends inspire you to do PA, such as exercise and sports, or to participate in RCAs?” and “How often do you and your family members go out for PAs during holidays?” These were used to evaluate the perceived verbal and behavioral support of family and friends of middle-aged and older adults for engaging in exercise, sports, and RCAs in daily life and on weekends.

#### 2.3.5. Involvement in PA Contexts

Middle-aged and older adults’ PA levels were calculated for three groups of PAs. Group 1 (exercise) consists of structured and unstructured body movements, such as walking, jogging, cycling, running, weight lifting, and water aerobics. Group 2 (sports) includes participating in PAs taking place casually or in an organized manner, such as badminton, basketball, tennis, racquetball, table tennis, soccer, and golf, to maintain and develop physical capabilities. Group 3 (RCAs) includes activities such as dance, dance exercise, wushu (武术), tai chi (太极), tai chi chuan (太极拳), and qigong (气功) [21, 22, 24].

#### 2.3.6. Eagerness for PAs (EPA)

The EPAS [25], which is one-dimensional and contains nine questions, was used to assess eagerness for PAs. The EPAS items were used to evaluate four significant correlates: identity, emotional experience, cognitive evaluation, and behavior in participating in PAs, such as exercise, sports, and RCAs. The primary aim of using this scale is to estimate the affective and cognitive aspects of the participant’s desire to be physically active, enjoy the activity, and build self-identity by engaging in a particular PA. We analyzed behavioral issues, such as the intentions and expectations of the participant in keeping up with PAs (“I enjoy keeping fit or being physically active”).

### 2.4. Data Analysis

To check and measure the levels of PAs and the motivation to engage in them, we analyzed the data with SPSS 23.0. Statistical analysis was utilized to find the preferred type of PA and determine what most motivates middle-aged and older people to engage in PAs. To compare the participants’ PA levels, we conducted an F-test and one-way ANOVA to assess the PA groups (sports, exercise, and RCAs). The significance level was α ≤ 0.05. We also performed a post hoc Bonferroni test, a series of individual tests that compare the mean of each group to the mean of every other group [32]. We used the adapted editions of SIMS and EPAS to evaluate the participants’ motivation and eagerness, respectively. We used multivariate regression analysis to predict motivation (SDMI) for PAs. The different types of PAs (sports, exercise, and RCAs) are the independent variables, and the SDMI scores are the dependent variables. This analysis enabled us to examine the effect of a particular type of PA on the motivational scales. The results are considered significant when p ≤ 0.05.
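Equation (1) from Section 2.3.3 can be made concrete with a short sketch; the `sdmi` helper name is ours, not the paper’s:

```python
def sdmi(im, ir, er, am):
    """Self-Determination Motivation Index, equation (1):
    SDMI = 2*IM + IR - ER - 2*AM, where each subscale mean
    lies on the 1-7 SIMS response scale."""
    return 2 * im + ir - er - 2 * am

print(sdmi(7, 7, 1, 1))   # 18: the most self-determined profile
print(sdmi(1, 1, 7, 7))   # -18: the least self-determined profile

# With the RCA group subscale means reported in Table 3
# (IM = 6.0, IR = 5.9, EM = 4.2, AM = 2.1), the index is about 9.5,
# consistent with that group's reported SDMI.
print(round(sdmi(6.0, 5.9, 4.2, 2.1), 1))  # 9.5
```

The ±18 range follows directly from the weighting: with subscale means bounded by 1 and 7, the maximum is 2 × 7 + 7 − 1 − 2 × 1 = 18 and the minimum is its mirror image.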
## 3. Results

### 3.1. Common Method Bias

According to MacKenzie and Podsakoff [33], when data are gathered from only one source at the same time, common method bias may affect the validity of the study. To check for common method bias, we performed Harman’s single-factor test. The outcome shows a value of 31.8%, which is below the cutoff rate. Thus, the outcome confirms that there is no severe concern about common method bias. We then checked the reliability of all the variables; the results were satisfactory.

### 3.2. Attitudes towards PAs

First, we used the IPAQ to test the participants’ PA levels over the last seven days. The correlation assessments show that the IPAQ’s short edition has a Cronbach’s alpha of α = 0.83. We calculated the alpha (α) values for sports (α = 0.82), exercise (α = 0.84), and RCAs (α = 0.87) (Table 2).
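The alpha values reported above follow the standard formula α = (k/(k − 1))(1 − Σ item variances / total-score variance). A minimal pure-Python sketch with hypothetical 7-point item responses (the raw item-level data are not published in the paper):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items; `items` holds one list of
    respondent scores per item, aligned across respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(var(it) for it in items)
    totals = [sum(it[r] for it in items) for r in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# Three hypothetical attitude items answered by five respondents:
items = [[5, 6, 4, 7, 5],
         [6, 6, 5, 7, 4],
         [5, 7, 4, 6, 5]]
print(round(cronbach_alpha(items), 2))  # 0.87
```

Higher alphas indicate that the items move together across respondents; values around 0.8 or above, as in Table 2, are conventionally read as good internal consistency.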
This research found diversity in PA levels between sports, exercise, and RCAs. The PAs of the participants included sports (n = 132), exercise (n = 273), and RCAs (n = 435). The descriptive statistics of PA involvement show that the participants perform vigorous PAs (M = 3.53) and moderate PAs (M = 3.84) every week.

Middle-aged and older adults’ satisfaction in engaging in PA is shown in Table 1. From the left, Table 1 shows the average score of the respondents (5.60) (column 1). The results show that 11.1% of participants do not like PA (column 2), and 27% of the participants reported that it is challenging for them to engage in PAs (column 3). The results also show that 38.1% of the participants are not fully satisfied (column 2 plus column 3) with PAs; on the other hand, 61.9% of the middle-aged and older adults are satisfied with PAs (column 4).

Compared with older adults (M = 5.4), the middle-aged participants (M = 5.7) indicated a higher enjoyment level (t = 3.6; p<0.001) and engaged in PAs more frequently (χ2 = 44.7; p<0.001). The descriptive statistics show that the female participants (M = 5.7) have the highest scores in the level of enjoyment (t = 5.6; p<0.001). The male participants (M = 5.2) also gave positive responses about PAs (χ2 = 114.2; p<0.001). In addition, participants who engage in RCAs reported the highest enjoyment score (M = 5.9) in PAs (t = 3.5; p<0.001) compared with participants who engage in exercise (M = 5.1) and sports (M = 4.8). One-way ANOVA shows a difference in the enjoyment level in PAs (F = 47.7; p<0.001) for middle-aged and older adults. The post hoc Bonferroni test showed that participants who engage in RCAs outside the home have a higher enjoyment score (t = 3.9, p<0.001) in PA (M = 5.9) than those who participate in the organized movement activities, exercise (M = 5.1) and sports (M = 4.8), the latter showing the lowest level of enjoyment (t = 3.1, p<0.001).

### 3.3. Self-Determined Motivation in PAs

The self-determined motivations for sports, exercise, and RCAs are M = 6.6, M = 7.0, and M = 7.6, respectively. A regression analysis (including variance) for eagerness and self-determined motivation for outdoor PAs as functions of age and gender was conducted. The descriptive statistics show gender differences for weekly PA levels and the subdimensions of the SDMI. The statistics also show strong relationships among RCAs, participants’ involvement, support of family and friends, eagerness, the total amount of PAs, and motivation for involvement in different types of movement activities. The SDMI in PAs and all its subdimensions are shown in Table 3.

Table 3. Mean and standard deviation of key study variables.

| | Males | Females | Sports | Exercise | Cultural activities | Scale |
| --- | --- | --- | --- | --- | --- | --- |
| Total number (N = 840) | 327 (38.9) | 513 (61.1) | 132 (15.7) | 273 (32.5) | 435 (51.8) | |
| Males, % (N = 327) | | | 102 (77.3) | 183 (67.1) | 42 (23.4) | |
| Females, % (N = 513) | | | 30 (22.7) | 90 (32.9) | 393 (76.6) | |
| Middle age, % (35–54) 273 (32.5%) | 115 (35.2) | 158 (30.8) | 81 (61.3) | 87 (31.9) | 105 (24.1) | |
| Older adults, % (55+) 567 (67.5%) | 212 (64.8) | 355 (69.2) | 51 (38.6) | 186 (68.1) | 330 (75.8) | |
| Family and friends support, % (651; 77.5%) | 253 (38.7) | 399 (61.3) | 99 (15.2) | 210 (32.2) | 342 (52.5) | |
| Friends and family support | 3.5 (1.4) | 3.8 (1.8)∗ | 3.4 (1.1) | 3.8 (1.0) | 4.0 (0.9) | 1–5 |
| Eagerness | 4.8 (1.0) | 5.2 (1.0)∗∗∗ | 3.2 (0.9)a | 4.8 (0.8)a | 5.7 (0.9)a | 1–7 |
| Intrinsic motivation (IM) | 5.2 (1.0) | 5.8 (0.9)∗∗ | 4.4 (1.0)a | 5.2 (1.1)a | 6.0 (0.9)a | 1–7 |
| Identified regulation (IR) | 5.1 (1.0) | 5.7 (1.1) | 4.5 (1.0)a | 5.2 (1.1)a | 5.9 (0.9)a | 1–7 |
| Extrinsic motivation (EM) | 4.1 (1.2) | 4.4 (1.1)∗ | 4.6 (1.0)b | 4.5 (1.2)a | 4.2 (1.2)ab | 1–7 |
| Amotivation (AM) | 2.6 (0.8) | 2.4 (0.9) | 3.2 (0.9)a | 2.5 (1.0)a | 2.1 (0.9)a | 1–7 |
| SDMI in PA | 7.1 (2.9) | 7.2 (2.8) | 2.5 (2.3)a | 6.1 (2.9)a | 9.5 (1.9)a | −18–18 |
| Total amount of PA | 3.4 (2.3) | 3.8 (2.5)∗∗∗ | 2.2 (1.5)a | 3.6 (1.8)a | 4.5 (2.2)a | 1–6 |

SDMI = 2 × IM + IR − ER − 2 × AM. Here, a, b, and ab denote homogeneous subsets indicating significant differences (one-way ANOVA, Bonferroni post hoc test, p ≤ 0.05).
∗p<0.05; ∗∗p<0.01; ∗∗∗P<0.001.Table4 shows the multivariate regression analysis. The upper part shows the overall model explained by 40.6% of the total variance in SDI (F = 81.16; p<0.001). According to this table, there are gender differences, but there is no significant difference in middle-aged and older adults. This table also shows that the model took into account the gender, age, the total amount of PAs, support of family and friends, and eagerness for movement activities. Participation in sports and exercise has no significant relationship between middle-aged and older adult’s SDI in PAs, but participation in RCAs has significant results (StB = 0.114; p<0.007).Table 4 Multivariate regression analysis for predicting motivation (SDMI) in PA. CoefficientsaModelUnstandardized coefficientsStandardized coefficientstSig.BSEBOverall model:R2 = 0.406; F = 81.16; p<0.001(Constant)0.2490.5240.4750.635Gender0.3210.1640.0541.9570.051Middle-aged and older adults−0.5540.165−0.090−3.3500.001Friends and family support0.4730.0830.1945.6730.000Total amount of physical activity−0.2300.064−0.113−3.5810.000Eagerness1.3000.0740.54017.5890.000Sports participants0.0960.2400.0120.4000.689Exercise participants0.0820.1740.0140.4690.639Recreation and cultural activity participants0.1850.0680.1142.7270.007Model male:R2 = 0.485; F = 42.9; p<0.001(Constant)0.7750.8880.8730.384Middle-aged and older adults−0.6390.285−0.092−2.2370.026Friends and family support0.6450.1490.2634.3190.000Total amount of physical activity−0.4600.194−0.142−2.3680.018Eagerness1.5860.1150.64213.7660.000Sports participants0.0070.3530.0010.0210.633Exercise participants−0.3720.302−0.058−1.2300.220Recreation and cultural activity participants0.1900.0870.0471.0360.031Model female:R2 = 0.346; F = 38.1; p<0.001(Constant)2.0390.6343.2150.001Middle-aged and older adults−0.5650.203−0.100−2.7780.006Friends and family support0.4860.1140.2014.2550.000Total amount of physical 
activity−0.1800.068−0.107−2.6470.008Eagerness1.1560.1120.49210.3200.000Sports participants−0.4360.329−0.050−1.3230.186Exercise participants−0.0740.224−0.013−0.3320.740Recreation and cultural activity participants0.2010.1200.0821.6770.025aDependent variable: SDMI.The distinct subanalysis for genders showed an interaction outcome among participation in RCAs and gender’s self-determined motivation to participate in PAs. The outcomes are displayed in the lower half of Table4. The RCAs have a significant effect on male and female middle-aged and older adults’ self-determined motivation for PAs; StB = 0.047; p<0.031 and StB = 0.082; p<0.025 for males and females, respectively. Furthermore, the descriptive analysis showed no significant differences in males (M = 7.1) and females (M = 7.2) and SDI scores between middle-aged and older adults (see Table 1). Moreover, the group that engages in sports has a lower SDI score (M = 2.5) than the score associated with exercise (M = 6.1). However, middle-aged and older adults who engage in RCAs as their major form of PA had significant scores on their SDI score (M = 9.5). For RCAs participants, there were differences in the average participation values for males (M = 5.6) and females (M = 7.6) with higher enjoyment level (t = 3.6; p<0.009). ## 3.1. Common Method Bias According to Mackenzie and Podsakoff [33], when data are gathered from only one source at the same time, the issue of common method bias may impact the validity of the study. To check the common method bias, we performed Harman’s single-factor test. The outcome shows a value of 31.8%, which is below the cutoff rate. Thus, the outcome confirms that there is no severe concern about common method bias. We then checked the reliability of all the variables; the results were satisfactory. ## 3.2. Attitudes towards PAs First, we used IPAQ to test the participants’ PAs levels over the last seven days. 
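The two checks reported in Section 3.1 (Harman's single-factor test and scale reliability via Cronbach's alpha) can be sketched in code. The following is an illustrative Python sketch on synthetic Likert-style data, not the study's own analysis; it uses the common operationalization of Harman's test as the variance share of the first unrotated factor:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def harman_first_factor_share(items: np.ndarray) -> float:
    """Harman's single-factor check: share of total variance captured by the
    first unrotated component (largest eigenvalue of the item correlation
    matrix divided by the number of items)."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # returned in ascending order
    return float(eigvals[-1]) / items.shape[1]

# Synthetic data: 840 respondents, 6 items driven by one common factor plus noise.
rng = np.random.default_rng(0)
common = rng.normal(size=(840, 1))
scores = common + rng.normal(scale=1.5, size=(840, 6))

alpha = cronbach_alpha(scores)
share = harman_first_factor_share(scores)
print(f"alpha = {alpha:.2f}, first-factor share = {share:.1%}")
```

With one moderate common factor, alpha lands in the acceptable range while the first-factor share stays below the customary 50% threshold, mirroring the logic behind the 31.8% result reported in Section 3.1.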
The correlation assessments show that the IPAQ short form has good internal consistency (Cronbach's α = 0.83). We also calculated α values for sports (α = 0.82), exercise (α = 0.84), and RCAs (α = 0.87) (Table 2). This research found that PA levels differ across sports, exercise, and RCAs. The PAs of the participants comprised sports (n = 132), exercise (n = 273), and RCAs (n = 435). The descriptive statistics of PA involvement show that the participants perform vigorous PAs (M = 3.53) and moderate PAs (M = 3.84) every week.

Middle-aged and older adults' satisfaction with engaging in PAs is shown in Table 1. From the left, Table 1 shows the average score of the respondents (5.60) (column 1). The results show that 11.1% of the participants do not like PA (column 2) and that 27% find it challenging to engage in PAs (column 3). The results also show that 39.1% of the participants are not fully satisfied with PAs (column 2 plus column 3); on the other hand, 61.9% of the middle-aged and older adults are satisfied with PAs (column 4).

Compared with older adults (M = 5.4), the middle-aged participants (M = 5.7) indicated a higher enjoyment level (t = 3.6; p<0.001) and engaged in PAs more frequently (χ2 = 44.7; p<0.001). The descriptive statistics show that the middle-aged and older female participants (M = 5.7) have the highest enjoyment scores (t = 5.6; p<0.001). The male participants (M = 5.2) also gave positive responses about PAs (χ2 = 114.2; p<0.001). In addition, participants who engage in RCAs reported the highest enjoyment score for PAs (M = 5.9; t = 3.5; p<0.001), compared with participants who engage in exercise (M = 5.2) and sports (M = 4.8). One-way ANOVA confirms group differences in enjoyment of PAs among middle-aged and older adults (F = 47.7; p<0.001).
A post hoc Bonferroni test showed that participants who engage in RCAs outside the home report higher enjoyment of PA (M = 5.9; t = 3.9, p<0.001) than those who participate in organized movement activities, namely exercise (M = 5.1) and sports (M = 4.8), with sports participants reporting the lowest enjoyment (t = 3.1, p<0.001).

## 3.3. Self-Determined Motivation in PAs

The self-determined motivations for sports, exercise, and RCAs are M = 6.6, M = 7.0, and M = 7.6, respectively. A regression analysis (including variance) for eagerness and self-determined motivation for outdoor PAs as functions of age and gender was conducted. The descriptive statistics show gender differences for weekly PA levels and the subdimensions of the SDI. The statistics also show strong relationships among RCAs, participants' involvement, support of family and friends, eagerness, the total amount of PAs, and motivation for involvement in different types of movement activities. The SDMI in PAs and all its subdimensions are shown in Table 3.

Table 3. Mean and standard deviation of key study variables.

| | Males | Females | Sports | Exercise | Cultural activities | Scale |
| --- | --- | --- | --- | --- | --- | --- |
| Total number (N = 840) | 327 (38.9) | 513 (61.1) | 132 (15.7) | 273 (32.5) | 435 (51.8) | |
| Males, % (N = 327) | | | 102 (77.3) | 183 (67.1) | 42 (23.4) | |
| Females, % (N = 513) | | | 30 (22.7) | 90 (32.9) | 393 (76.6) | |
| Middle age (35–54), 273 (32.5%) | 115 (35.2) | 158 (30.8) | 81 (61.3) | 87 (31.9) | 105 (24.1) | |
| Older adults (55+), 567 (67.5%) | 212 (64.8) | 355 (69.2) | 51 (38.6) | 186 (68.1) | 330 (75.8) | |
| Family and friends support (651; 77.5%) | 253 (38.7) | 399 (61.3) | 99 (15.2) | 210 (32.2) | 342 (52.5) | |
| Friends and family support | 3.5 (1.4) | 3.8 (1.8)* | 3.4 (1.1) | 3.8 (1.0) | 4.0 (0.9) | 1–5 |
| Eagerness | 4.8 (1.0) | 5.2 (1.0)*** | 3.2 (0.9)a | 4.8 (0.8)a | 5.7 (0.9)a | 1–7 |
| Intrinsic motivation (IM) | 5.2 (1.0) | 5.8 (0.9)** | 4.4 (1.0)a | 5.2 (1.1)a | 6.0 (0.9)a | 1–7 |
| Identified regulation (IR) | 5.1 (1.0) | 5.7 (1.1) | 4.5 (1.0)a | 5.2 (1.1)a | 5.9 (0.9)a | 1–7 |
| Extrinsic motivation (EM) | 4.1 (1.2) | 4.4 (1.1)* | 4.6 (1.0)b | 4.5 (1.2)a | 4.2 (1.2)ab | 1–7 |
| Amotivation (AM) | 2.6 (0.8) | 2.4 (0.9) | 3.2 (0.9)a | 2.5 (1.0)a | 2.1 (0.9)a | 1–7 |
| SDMI in PA | 7.1 (2.9) | 7.2 (2.8) | 2.5 (2.3)a | 6.1 (2.9)a | 9.5 (1.9)a | −18 to 18 |
| Total amount of PA | 3.4 (2.3) | 3.8 (2.5)*** | 2.2 (1.5)a | 3.6 (1.8)a | 4.5 (2.2)a | 1–6 |

SDMI = 2 × IM + IR − EM − 2 × AM. Superscripts a, b, and ab denote homogeneous subsets with significant differences (one-way ANOVA with Bonferroni post hoc test, p ≤ 0.05). *p<0.05; **p<0.01; ***p<0.001.

Table 4 shows the multivariate regression analysis. In the upper part, the overall model explains 40.6% of the total variance in SDI (F = 81.16; p<0.001). According to this table, the gender effect is marginal (p = 0.051), while the middle-aged versus older adult contrast is significant (p = 0.001). The model took into account gender, age, the total amount of PAs, support of family and friends, and eagerness for movement activities. Participation in sports and exercise shows no significant relationship with middle-aged and older adults' SDI in PAs, but participation in RCAs does (StB = 0.114; p = 0.007).

Table 4. Multivariate regression analysis for predicting motivation (SDMI) in PA.

| Model | B | SE | β | t | Sig. |
| --- | --- | --- | --- | --- | --- |
| **Overall model: R² = 0.406; F = 81.16; p<0.001** | | | | | |
| (Constant) | 0.249 | 0.524 | | 0.475 | 0.635 |
| Gender | 0.321 | 0.164 | 0.054 | 1.957 | 0.051 |
| Middle-aged and older adults | −0.554 | 0.165 | −0.090 | −3.350 | 0.001 |
| Friends and family support | 0.473 | 0.083 | 0.194 | 5.673 | 0.000 |
| Total amount of physical activity | −0.230 | 0.064 | −0.113 | −3.581 | 0.000 |
| Eagerness | 1.300 | 0.074 | 0.540 | 17.589 | 0.000 |
| Sports participants | 0.096 | 0.240 | 0.012 | 0.400 | 0.689 |
| Exercise participants | 0.082 | 0.174 | 0.014 | 0.469 | 0.639 |
| Recreation and cultural activity participants | 0.185 | 0.068 | 0.114 | 2.727 | 0.007 |
| **Model male: R² = 0.485; F = 42.9; p<0.001** | | | | | |
| (Constant) | 0.775 | 0.888 | | 0.873 | 0.384 |
| Middle-aged and older adults | −0.639 | 0.285 | −0.092 | −2.237 | 0.026 |
| Friends and family support | 0.645 | 0.149 | 0.263 | 4.319 | 0.000 |
| Total amount of physical activity | −0.460 | 0.194 | −0.142 | −2.368 | 0.018 |
| Eagerness | 1.586 | 0.115 | 0.642 | 13.766 | 0.000 |
| Sports participants | 0.007 | 0.353 | 0.001 | 0.021 | 0.633 |
| Exercise participants | −0.372 | 0.302 | −0.058 | −1.230 | 0.220 |
| Recreation and cultural activity participants | 0.190 | 0.087 | 0.047 | 1.036 | 0.031 |
| **Model female: R² = 0.346; F = 38.1; p<0.001** | | | | | |
| (Constant) | 2.039 | 0.634 | | 3.215 | 0.001 |
| Middle-aged and older adults | −0.565 | 0.203 | −0.100 | −2.778 | 0.006 |
| Friends and family support | 0.486 | 0.114 | 0.201 | 4.255 | 0.000 |
| Total amount of physical activity | −0.180 | 0.068 | −0.107 | −2.647 | 0.008 |
| Eagerness | 1.156 | 0.112 | 0.492 | 10.320 | 0.000 |
| Sports participants | −0.436 | 0.329 | −0.050 | −1.323 | 0.186 |
| Exercise participants | −0.074 | 0.224 | −0.013 | −0.332 | 0.740 |
| Recreation and cultural activity participants | 0.201 | 0.120 | 0.082 | 1.677 | 0.025 |

B and SE are the unstandardized coefficient and its standard error; β is the standardized coefficient. Dependent variable: SDMI.

A separate subanalysis by gender showed an interaction between participation in RCAs and self-determined motivation to participate in PAs. The outcomes are displayed in the lower half of Table 4. RCAs have a significant effect on both male and female middle-aged and older adults' self-determined motivation for PAs (StB = 0.047, p = 0.031 for males; StB = 0.082, p = 0.025 for females).
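The index definition in the note to Table 3 is simple arithmetic. The following sketch (illustrative only; it reads the note's "ER" as the extrinsic-motivation subscale EM listed in the table) shows how the SDMI weights the four 1–7 subscales and why its range is −18 to 18:

```python
def sdmi(im: float, ir: float, em: float, am: float) -> float:
    """Self-determined motivation index: SDMI = 2*IM + IR - EM - 2*AM,
    where each subscale is scored 1-7 (note to Table 3)."""
    return 2 * im + ir - em - 2 * am

# Theoretical bounds match the -18..18 scale reported for SDMI in Table 3:
print(sdmi(7, 7, 1, 1))  # most self-determined profile: 18
print(sdmi(1, 1, 7, 7))  # least self-determined profile: -18

# Illustrative profile built from the sports group's subscale means in Table 3
# (an index of means, so it need not equal the reported group mean of 2.5):
print(round(sdmi(4.4, 4.5, 4.6, 3.2), 1))  # 2.3
```

The ±2 weights on intrinsic motivation and amotivation make the index most sensitive to the two poles of the self-determination continuum, with identified regulation and extrinsic motivation contributing singly.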
Furthermore, the descriptive analysis showed no significant differences in SDI scores between males (M = 7.1) and females (M = 7.2) or between middle-aged and older adults (see Table 3). Moreover, the group that engages in sports has a lower SDI score (M = 2.5) than the exercise group (M = 6.1), whereas middle-aged and older adults who engage in RCAs as their major form of PA had the highest SDI score (M = 9.5). For RCA participants, there were differences in the average participation values for males (M = 5.6) and females (M = 7.6), with a higher enjoyment level (t = 3.6; p<0.009).

## 4. Discussion and Implications

Our results corroborate the existing body of literature [34–36]. The primary aim of this study was to assess the attitudes of middle-aged and older adults towards different groups of PAs. The effect of attitude varies according to the type of activity, age, and physical condition. According to our analysis, there is a positive relationship between the participants' intentions and attitudes towards PAs. Furthermore, the results show high scores for attitudes towards PA, which indicates that Chinese middle-aged and older adults have positive attitudes towards PAs. In particular, compared with males, middle-aged and older female participants have a more encouraging attitude towards PAs; this supports existing studies [34, 35]. Moreover, this study shows that middle-aged and older adults who do not enjoy movement activities, or who channel their self-interest into competitive movement activities, may still take part in PAs; in the long run, however, this can affect their PA involvement. They may develop a negative attitude towards PAs and eventually become demotivated to participate.

Individuals whose primary activities are RCAs have the highest proportion of PAs (Table 1).
This emphasizes that more middle-aged and older adults participate in RCAs than in sports (51.8% vs. 15.7%), owing to a lack of enjoyment, health conditions, and limited interest. Engaging in RCAs is highly correlated with motives related to social engagement and satisfaction. Moreover, middle-aged and older adults' attitudes and eagerness are critical in motivating them to engage in RCAs, but less essential for exercise and even less so for sports. Although PAs are fun and enjoyable, they are not without challenges. People willingly engage in activities with playfulness and thereby need little or no extrinsic motivation to do so. In particular, such activities involve enjoyment, which builds a sense of eagerness and creates opportunities for social relations (RCAs and sports). The results of this study indicate that the participants might be eager to engage in PAs but do not yet get involved in competitive sports or exercise, for personal reasons such as basic psychological needs. However, participants who engage in RCAs show more eagerness, and this strengthens a positive attitude towards participating in PAs to maintain better health.

The analysis of this study also validates earlier studies on the relationship between SDT and PA [21]. Consistent with SDT, the results indicate that when participants are supported to feel autonomous, they are more likely to be intrinsically motivated. A supportive surrounding not only promotes their autonomous motivation but also positively strengthens their beliefs (attitudes) towards that behavior. There is no significant difference in the self-determined motivation index for PAs between male and female participants overall, but the activity groups show large differences.
Male participants interested in sports or exercise and female participants interested in RCAs reported considerably better scores on self-determined PA. Furthermore, these findings highlight the significance of motivation, eagerness, and attitude towards RCAs in maintaining better health. Female participants show high levels of interest in PAs through RCAs but a lower level in sports, which supports earlier studies [37]. Moreover, this study shows that Chinese female participation in RCAs is a key predictor of middle-aged and older adults' participation in PAs. Finally, the results show a negative association between eagerness and amotivation towards PAs, which is contrary to earlier studies [25]. This indicates that eagerness is shaped by demands, recommendations, and positive experiences. Therefore, participating in RCAs could be one of the critical predictors for middle-aged and older adults.

This study also investigates the relationship between the support of family and friends and participation in PAs. We found that the support of family and friends is associated with PAs; friends' engagement in PAs influences participants significantly because it provides a better source of gatherings and company [38]. As they advance in age, older adults' dependency increases. Hence, compared with friends, family members were a better source of social control (reducing risky health behaviors) as well as of instrumental and emotional support. We found that participants who regularly engage in PAs with friends or family members have more opportunities to achieve a high level of physical activeness.

### 4.1. Implications for Scholars

Based on the above discussion, this study provides some theoretical enlightenment. First, the results confirm the importance of PAs in middle-aged and older adults by integrating people's motivation in the model to measure their intentions and actual attitudes towards PAs.
Moreover, this study highlights why it is necessary to consider the effects of attitude and eagerness on the different dimensions of motivation when investigating people's intentions and actual attitudes towards PAs. Our analysis shows that motivation, attitude, and eagerness play a significant role in developing people's attitudes towards PAs.

Second, from the perspective of PA values, the findings show that sports and exercise intention values have a more profound influence on the participants' PA motivational intentions than RCA values. These outcomes are contrary to those seen in existing studies [31]. Moreover, this study shows that the influence of RCAs varies significantly with participants' age and gender.

Third, participating in RCAs is enjoyable, and this has a significant effect on PA motivational intentions in middle-aged and older Chinese adults. A possible explanation is that when participants find RCAs easy, they may develop a positive attitude towards the effectiveness of the PA. We recommend that scholars apply these constructs in their research to gain more awareness of their target audiences and add new facts about PA styles.

Finally, RCAs are a new phenomenon in China, and they should be studied along with demographic aspects such as gender, region (for example, urban or rural), and religion, which may affect their adoption. Researchers can develop mobile applications using artificial intelligence digital tools (such as a smartwatch) to guide or monitor how people engage in PAs to maintain better health in their everyday lives. With these applications or tools, participants can ask questions in their national language about a method without the difficulty of engaging in the activity.

### 4.2. Implications for Managers

This study also has several implications for project managers.
First, project managers are strongly encouraged to improve their RCA techniques [39], because this aspect is essential for participants who use RCA platforms in their PAs. Participants are getting used to making decisions on an "anywhere-anytime" basis; that is, whenever they get the time, they engage in PAs. For example, wushu and dance exercise programs have included structural assurances in their PA policies, which has made those PAs successful.

Second, the findings offer practical information for PA decision-makers in China, since the study provides detailed information on different ways of managing and developing PA strategies. For example, this study confirms that RCAs have a significant influence on middle-aged and older adults' attitudes, eagerness, and motivation for PAs. The results show that RCA participants may be more likely to engage in PAs just for fun and enjoyment. In contrast, active participants may have a positive attitude and eagerness, which motivate them to use RCAs to enhance their psychophysical productivity. Thus, trainers and decision-makers should ensure quality and a diversified range of activities and services, as well as other activities related to effective PA values. More precisely, given utilitarian PAs, project managers and decision-makers should offer multipurpose services and free training systems for participants. Moreover, due to cultural beliefs, it is easy for Chinese people to go outside and engage in PAs. RCAs such as dance, wushu, and tai chi provide a platform where Chinese people can easily and conveniently engage in long-lasting PAs, with all their norms and cultural values, at any place of their choice.

Finally, it is vital for PA managers and decision-makers to adopt RCAs, one of the most attention-grasping PA trends for Chinese people. China must understand the need to integrate people's PA systems before it is too late. This could assist people in developing optimistic views of PA values.
This trend can reshape the PA sector.

## 5. Conclusions and Future Directions

The purpose of this study was to determine the attitude, motivation, and eagerness for different types of PA and how these groups of activities affect participants' motivation for PAs. This study confirms that gender differences play a vital role in shaping middle-aged and older adults' attitudes towards PA. Middle-aged and older adults' attitudes and eagerness for PAs decline with age. Participants' engagement in competitive, enjoyable PAs, such as RCAs, and motivation for PAs are the primary benefits of PAs. In RCA participation, there are gender differences, with female participants benefiting significantly; in sports, female participants' PAs show the opposite result. This study emphasizes the importance of middle-aged and older adults' engagement in PAs and seems to favor those participants who are already engaged in cultural and aerobic activities, such as RCAs. Participants who engage in RCAs believe that these activities could be one of the sources of healthy behavior. Moreover, attitude and eagerness for PAs can be developed through the influence of family and friends. These will gradually improve middle-aged and older adults' attitudes, motivation, and eagerness, which might increase PA levels in the near future.

First, future studies can use cross-cultural data.
Second, we only considered middle-aged and older adults; future studies can focus on young people. Finally, we used SPSS to analyze the data; future studies can use AMOS. Future research can study events such as sports, exercise, and RCAs in terms of thoughts of mastery, competence, and ability to participate, and analyze how these are influenced and predicted by diverse motives and their role in improving health.

---
# Effects of Attitude, Motivation, and Eagerness for Physical Activity among Middle-Aged and Older Adults

**Authors:** Md Mizanur Rahman; Dongxiao Gu; Changyong Liang; Rao Muhammad Rashid; Monira Akter

**Journal:** Journal of Healthcare Engineering (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1014891
---

## Abstract

Background. Although physical activity (PA) is a noninvasive and cost-effective method of improving the quality of health, global statistics show that only a few middle-aged and older adults engage in the recommended PAs, owing to a lack of motivation and companionship. Objective. This study analyses the attitudes and self-determined motivation of Chinese middle-aged and older adults towards PAs, and their eagerness to participate in PAs such as sports, exercise, and recreational and cultural activities (RCAs), from the perspective of the attitudinal, eagerness, and motivational objectives of PAs. Methods. A cross-sectional study was carried out on 840 middle-aged (35–54 years) and older adults (55+ years). To determine their attitude, eagerness, and self-determined motivation for PA, we used an attitudinal scale, the Eagerness for Physical Activity Scale (EPAS), and the Situational Motivation Scale (SIMS). The data were analyzed with SPSS 23.0. Results. The results show that 39.1% of the participants were not satisfied with PAs. Compared with females, males reported a less positive attitude towards PAs. Moreover, a positive attitude decreases with age. Participants' motivation and eagerness for activities such as RCAs, exercise, and sports are decreasing. Regarding self-determined motivation, there are gender differences in RCAs, but none for exercise and sports participation. Conclusion. The findings show the importance of RCAs and the support of family and friends in enhancing the eagerness, attitude, and motivation to participate in PAs. Furthermore, the findings can help to create more effective PA programs for middle-aged and older adults. By engaging in RCAs, participants can reap the benefits of PAs. Participating in RCAs can lead to social equity in health.

---

## Body

## 1. Introduction

Globally, 13% of adults are obese, and 39% are overweight; these numbers are expected to increase in the coming years [1].
For instance, according to the World Health Organization (WHO), 36.2% of Chinese adults are overweight, and 5.9% are obese [2]. As they advance in age, adults face several debilitating health-related diseases, such as diabetes, cardiovascular disease, and dementia. Despite age-related changes, they need to maintain a good quality of health and life and improve their wellbeing. Recent studies show that when adults engage in PAs, they gain health benefits such as increased muscle power [3] and improvements in mental health, physical health, cognitive function, and self-assurance. PAs decrease depression, anxiety, dementia, and coronary heart disease [4]. According to Yu and Lin [5], "the specific type of PA (e.g., walking) itself may be a key in promoting PA for older adults and the general adult population" (p. 483). Others note that PAs are connected with some form of enjoyment [6], self-determined motivation [7, 8], and the fulfillment of basic psychological needs. Several theories are used to guide, encourage, and enhance participation in PAs and to evaluate adults' active behaviors [9].

Several scholars have attempted to investigate the fundamental dynamics of the attitudes of PA participants [10, 11] and the benefits of PAs [3, 12, 13], but only a few studies have examined the role of attitude and motivation in enhancing PA participation. Attitude has been considered an interrelated concept, while enjoyment [6] is regarded as an essential characteristic of self-determined motivation, which drives people to engage in PAs [14]. Most studies used self-determination theory (SDT) [7, 8] to examine people's approach to PAs by analyzing self-determined motivation [15].
SDT explains motivation and human behavior with a critical focus on providing a differentiated view of motivation. It distinguishes controlled and autonomous motivation from amotivation, which is initiated and governed by forces outside a person's internal control. Both forms of motivation may be regulated towards participation in individual activities, and they can also generate higher participation. Participants who are autonomously motivated show signs of satisfaction and developmental health [16]. A fundamental characteristic of SDT is the relationship between amotivation and the joy of fulfillment [7]. Although many autonomously motivated participants do complete their essential PA requirements, a significant number of participants want support to improve their controlled motivation and minimize autonomous motivation [17, 18]. To increase PAs, we should decrease amotivation in participants so that they can focus on an improved level of satisfaction.

### 1.1. Attitudes towards PAs

Attitude can determine one's involvement in a specific behavior [11]. In PAs, a positive disposition leads to a constructive attitude, while a negative disposition yields a destructive attitude. Studies of attitudes towards PAs [10, 19] have approached the idea from the positive perspective of enjoyment. This approach is vital in establishing optimistic involvement in PAs and hence promotes participation. In the current scenario, it is significant to identify the relationship between attitudes towards PAs and engagement in PAs.

### 1.2. Self-Determination and Attitudes towards PAs

According to relational theories of human behavior development, participants' self-determination and positive attitude development depend on the PA environment [20]. As middle-aged (35–54) and older adults (55+) carry traits of individualism, the organizing process should consider their aspirations and needs in all elements of PAs.
Middle-aged and older adults' self-determined motivation and attitudes towards PAs can be influenced by different variables, such as age, gender, the influence of family and friends, and attachment to cultural activities (dance, dance exercise, wushu (武术), tai chi chuan (太极拳), and qigong (气功)) [21–24]. From this perspective, attitude, motivation, and the location of the activity play a significant role in gradually improving middle-aged and older adults' level of participation in PAs.

### 1.3. Eagerness towards PAs

Eagerness is a way of recognizing behavior that influences participants to undertake a particular action, in contrast with rationally or instrumentally driven practices. This idea is theoretically linked to lived experience [25], representing a person's situation and evaluation base when encountering new occurrences. Eagerness also indicates a regulatory tendency towards behaviors that carry personal importance or are significant in themselves. Eagerness for PAs encourages participants' mental condition and enhances passion and a strong longing or desire for PAs, which is good for health [25]. Hope is significant in understanding a participant's drive for learning and improvement [26]. Moreover, eagerness is associated with the encouragement of desirable behavior rather than the prevention of negative behavior [27]. In PAs, the concept of eagerness illustrates the motivation for a particular action that is fulfilling and satisfying. Furthermore, the psychological qualities of eagerness towards PAs, namely hope and a positive intention to maintain PAs in the future, possess significant potential for predicting sustainable involvement and participation in PAs. Researchers [25] show that eagerness for PAs has predictive validity above self-determined motivation.

As stated above, attitude, motivation, and eagerness for PAs are assumed to be relevant predictors of PAs.
However, questions such as "Do these factors motivate and facilitate middle-aged and older adults to engage in PAs?" [24] remain unanswered. Moreover, existing studies [24] lack a comprehensive theoretical representation of the factors that could enhance participants' attitude and motivation, and of the effects of these factors on PA levels. Only a few studies identify the effects of attitude and motivation on PAs, and mostly in adolescents. Therefore, it is vital to examine the effect of the support of family and friends on middle-aged and older adults' attitudes, eagerness, and motivation for PAs. It is essential to consider how participants internalize their family's and friends' PA-related attitudes and behaviors in their values, understandings, and intentions to be physically active. To bridge this research gap, we integrate PA constructs across different groups, such as sports, exercise, and RCAs, with participants' enjoyment of PAs, attitude, motivation, the support of family and friends, and eagerness for PAs. Hence, this study examines the antecedents and effects of attitude, motivation, the support of family and friends, and eagerness of middle-aged and older adults for PAs.

In light of the points mentioned above, this study investigates the level of PAs among middle-aged and older adults in China. The objectives of this study are threefold. The first is how enjoyment of and access to PAs affect the attitude of middle-aged and older adults towards PAs. The second is the participants' self-determined motivation and their eagerness to participate in different types of PAs, such as sports, exercise, and RCAs. The last objective is how participating in RCAs is correlated with sports and exercise, and how these three groups affect participants' motivation for PAs. Overall, this study also measures the relationship between attitudes, self-determined motivation, and eagerness and actual involvement in PA.

## 2. Materials and Methods

### 2.1. Sample Selection and Data Collection

In the first part of the questionnaire, the participants were asked to state their gender, age, marital status, social environment motivators, and physical limitations. In the second part, we used the International Physical Activity Questionnaire (IPAQ) ("In a typical week, how many hours do you spend participating in physical activity?") to identify participants' PA levels. After that, we used the SIMS [28] and EPAS [25] questionnaires to determine participants' PA motivation and eagerness for PAs. Before the main survey, we conducted a study based on a focus group of teachers and PhD students from the school of management with specialized skills in survey design. Based on the recommendations of the focus group, minor changes were made to the sequence and wording of some of the questions.
Second, to obtain feedback and confirm the content validity of the items on participants' attitude, motivation, and eagerness for PAs, 40 random sample responses were analyzed. After calculating Cronbach's alpha, we found that the mean, standard deviation, and factor loading values were satisfactorily high; hence, we proceeded with the investigation. After this analysis, we approved the final version of the questionnaire. The authors, who are fluent in English and Chinese, translated the English version of the questionnaire into the local language (simplified Chinese) and then translated it back into English to check the quality of the translation. This approach follows the recommendations of the translation committee method [29]. After changes to some words, the questionnaire reached its final version. The final version of the questionnaire was sent to different groups of people via a web link and a barcode. The questionnaires were administered in different places, such as playgrounds, parks, malls, and recreational centers, where middle-aged and older people engage in different kinds of PAs.

### 2.2. Participants

Self-administered questionnaires were used to collect data from several large cities in China, such as Shanghai, Beijing, Hefei, Bozhou, and Guangzhou. The survey was conducted with the help of trained students with experience in data collection. The main target population comprises people who live close to parks, malls, playgrounds, and recreational centers. For wushu, tai chi chuan, and qigong, we collected data at the Huangshan International Wushu Competition, the Bozhou International Health Qigong Expo 2019, and the 11th Huatuo WuqinXi health and festival exchange competition. We targeted middle-aged and older respondents because these groups prefer such PAs. With the help of the researchers, the respondents were given a gift card (红包) after completing the survey.
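The reliability checks described above rely on Cronbach's alpha (the per-scale values appear in Table 2). The following is a minimal, purely illustrative sketch of that computation in plain Python, not the authors' SPSS workflow; the item responses are hypothetical 7-point scores for a three-item scale:

```python
# Illustrative sketch (not the authors' code): Cronbach's alpha for one scale,
# computed from synthetic item responses. `responses` holds one row per
# participant, each row containing that participant's item scores.

def cronbach_alpha(responses):
    k = len(responses[0])            # number of items in the scale

    def variance(xs):                # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item across participants.
    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    # Variance of the participants' total scores.
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 7-point responses for a 3-item attitude scale.
data = [
    [6, 5, 6], [7, 6, 7], [4, 4, 5], [5, 5, 5],
    [6, 6, 7], [3, 4, 3], [7, 7, 6], [5, 6, 5],
]
alpha = cronbach_alpha(data)
```

Values close to 1 indicate that the items vary together, which is the property the alpha values in Table 2 (0.78–0.95) summarize for each construct.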
A total of 894 questionnaires were collected, of which 54 were rejected on the basis of screening principles and missing data. Hence, 840 questionnaires were used for the final analysis. The demographic details are shown in Table 1.

Table 1: Demographics, with immediate responses to the like/dislike-of-PA item and participants' reflections. RCAs = recreational and cultural activities.

| Group (response rate) | Mean (SD) | I do not like PA, % (n) | I like PA, but it provided some difficulty, % (n) | I like PA, and this should remain in the long run, % (n) |
| --- | --- | --- | --- | --- |
| Total number (N = 840/894) | 5.6 (1.4) | 11.1 (93) | 27.0 (227) | 61.9 (520) |
| Males (N = 327/347) (38.9%) | 5.2 (1.2) | 11.9 (39) | 34.2 (112) | 53.9 (176) |
| Females (N = 513/547) (61.1%) | 5.7 (1.4) | 9.0 (46) | 21.0 (108) | 70.0 (359) |
| Middle age (35–54) (273/295) (32.7%) | 5.7 (1.3) | 8.8 (24) | 34.1 (93) | 57.1 (156) |
| Older adults (55+) (567/599) (67.3%) | 5.4 (1.4) | 12.0 (68) | 21.9 (124) | 66.1 (375) |
| Sports (132/140) (15.7%) | 4.8 (1.2) | 9.1 (12) | 43.2 (57) | 47.7 (63) |
| Exercise (273/291) (32.5%) | 5.1 (1.3) | 9.9 (27) | 29.7 (81) | 60.4 (165) |
| RCAs (435/463) (51.8%) | 5.9 (1.4) | 4.9 (21) | 21.4 (93) | 73.8 (321) |

### 2.3. Measures

A structured questionnaire with Likert-type response formats was designed. For attitude, motivation, and eagerness, we used a seven-point response format ranging from "strongly disagree" (1) to "strongly agree" (7). For the Self-Determination Motivation Index (SDMI), we used a scale ranging from −18 to +18. For the total amount of PAs, we used a six-point range; for the support of family and friends, we used a five-point response format ranging from "strongly disagree" (1) to "strongly agree" (5). To maintain a degree of rationality, we took the constructs from existing studies: the items for the attitude and motivation dimensions were adopted from Guay et al. [28], the items for participants' eagerness from Säfvenbom et al. [25], and the construct items for the total amount of PAs and the support of family and friends from Rahman et al. [24].

#### 2.3.1. Total Amount of PAs

Participants gave information about their weekly PA levels, which were evaluated using the IPAQ [30], an instrument with established reliability and validity for weighting PA. To evaluate the participants' total weekly level of PAs, we asked, "In a usual week, in total, how many hours do you spend to participate in physical activity: 1-2, 3-4, 5-6, 7-8, 9-10, or 11 hours in each week?" [24] (p. 3). The responses were categorized into six segments, coded 1, 2, …, 6 (see Table 2).

Table 2: Weekly physical activity and Cronbach's alpha values for the study variables.

| Construct | Items | Hours/week | Sports (n = 132) | Exercises (n = 273) | RCAs (n = 435) | Cronbach's alpha (α) |
| --- | --- | --- | --- | --- | --- | --- |
| (1) Physical activity per week | 7 | 1–2 | 30 | 50 | 81 | 0.83 |
| | | 3–4 | 61 | 95 | 123 | |
| | | 5–6 | 23 | 75 | 135 | |
| | | 7–8 | 12 | 39 | 66 | |
| | | 9–10 | 4 | 12 | 23 | |
| | | ≥11 | 2 | 2 | 7 | |
| | | α | 0.82 | 0.84 | 0.87 | |
| (2) Attitude towards PA | 3 | | | | | 0.79 |
| (3) Intrinsic motivation | 4 | | | | | 0.87 |
| (4) Identified regulation | 4 | | | | | 0.93 |
| (5) Extrinsic motivation | 4 | | | | | 0.81 |
| (6) Amotivation | 4 | | | | | 0.82 |
| (7) Family and friends support | 5 | | | | | 0.78 |
| (8) Eagerness of PA | 9 | | | | | 0.95 |

#### 2.3.2. Self-Determined Motivation

We used the Situational Motivation Scale (SIMS) [28], which contains 16 questions, to assess the participants' self-determined motivation for PAs. Previous studies on PAs have used the SIMS as a reliable and suitable tool [25, 28, 31]; it uses four subdimensions to assess and measure why participants engage in PAs. The subdimensions are (1) intrinsic motivation ("I feel good when I engage in this activity"), (2) identified regulation ("I believe that this activity is good for me"), (3) extrinsic motivation ("this is something that I have to do"), and (4) amotivation ("I do this activity; however, I am not sure if it is worth it").

#### 2.3.3. Self-Determination Motivation Index (SDMI)

To locate the participants' PA motivation on the self-determination continuum, we used an SDMI built from the four subscales of the SIMS.
The SDMI reflects the strength of the participants' self-determination and is expressed as follows:

SDMI = 2 × IM + IR − ER − 2 × AM. (1)

The SDMI uses simple weighting and ranges from −18 to +18, where a higher score indicates stronger self-determination.

#### 2.3.4. Support of Family and Friends for PAs

The support of family and friends for PAs was measured by five questions, for example, "How often do your family members or your friends inspire you to do PA, such as exercise and sports, or to participate in RCAs?" and "How often do you and your family members go out for PAs during holidays?" These were used to evaluate the perceived verbal and behavioral support of family and friends of middle-aged and older adults for engaging in exercise, sports, and RCAs in daily life and on weekends.

#### 2.3.5. Involvement in PA Contexts

Middle-aged and older adults' PA levels were calculated for three groups of PAs. Group 1 (exercise) consists of structured and unstructured body movements, such as walking, jogging, cycling, running, weight lifting, and water aerobics. Group 2 (sports) includes PAs undertaken casually or in an organized manner, such as badminton, basketball, tennis, racquetball, table tennis, soccer, and golf, to maintain and develop physical capabilities. Group 3 (RCAs) includes activities such as dance, dance exercise, wushu (武术), tai chi (太极), tai chi chuan (太极拳), and qigong (气功) [21, 22, 24].

#### 2.3.6. Eagerness for PAs (EPA)

The EPAS [25], a one-dimensional scale containing nine questions, was used to assess eagerness for PAs. The EPAS items were used to evaluate four significant correlates of participating in PAs such as exercise, sports, and RCAs: identity, emotional experience, cognitive evaluation, and behavior. The primary aim of using this scale is to estimate the affective and cognitive aspects of a participant's desire to be physically active, enjoy the activity, and build self-identity by engaging in a particular PA.
We also analyzed behavioral aspects, such as the participants' intentions and expectations of keeping up with PAs ("I enjoy keeping fit or being physically active").

### 2.4. Data Analysis

To check and measure the levels of PAs and the motivation to engage in them, we analyzed the data with SPSS 23.0. Statistical analysis was used to identify the preferred type of PA and to determine what most motivates middle-aged and older people to engage in PAs. To compare the participants' PA levels, we conducted an F-test and one-way ANOVA across the PA groups (sports, exercise, and RCAs). The significance level was set at α ≤ 0.05. We also performed post hoc Bonferroni tests, a series of individual tests comparing the mean of each group with the mean of every other group [32]. We used the adapted editions of the SIMS and EPAS to evaluate the participants' motivation and eagerness, respectively. We used multivariate regression analysis to predict motivation (SDMI) for PAs, with the different types of PAs (sports, exercise, and RCAs) as independent variables and the SDMI scores as dependent variables. This analysis enabled us to examine the effect of a particular type of PA on the motivational scales. Results are considered significant when p ≤ 0.05.

## 3. Results

### 3.1. Common Method Bias

According to MacKenzie and Podsakoff [33], when data are gathered from only one source at the same time, common method bias may affect the validity of the study. To check for common method bias, we performed Harman's single-factor test. The outcome shows a value of 31.8%, which is below the cutoff rate. Thus, the outcome confirms that common method bias is not a severe concern. We then checked the reliability of all the variables; the results were satisfactory.

### 3.2. Attitudes towards PAs

First, we used the IPAQ to test the participants' PA levels over the last seven days. The correlation assessments show that the short edition of the IPAQ has a Cronbach's alpha of α = 0.83. We calculated the alpha (α) values for sports (α = 0.82), exercise (α = 0.84), and RCAs (α = 0.87) (Table 2). This research found diversity in PA levels between sports, exercise, and RCAs. The participants' PAs comprised sports (n = 132), exercise (n = 273), and RCAs (n = 435). The descriptive statistics of PA involvement show that the participants perform vigorous PAs (M = 3.53) and moderate PAs (M = 3.84) every week.

Middle-aged and older adults' satisfaction with engaging in PA is shown in Table 1. From the left, Table 1 shows the average score of the respondents (5.60) (column 1). The results show that 11.1% of participants do not like PA (column 2), and 27% of the participants reported that it is challenging for them to engage in PAs (column 3).
The results also show that 39.1% of the participants are not fully satisfied with PAs (column 2 plus column 3); on the other hand, 61.9% of the middle-aged and older adults are satisfied with PAs (column 4). Compared with older adults (M = 5.4), middle-aged participants (M = 5.7) indicated a higher enjoyment level (t = 3.6; p<0.001) and engaged in PAs more frequently (χ² = 44.7; p<0.001). The descriptive statistics show that female participants (M = 5.7) have the highest scores for enjoyment (t = 5.6; p<0.001), while male participants (M = 5.2) also gave positive responses about PAs (χ² = 114.2; p<0.001). In addition, participants who engage in RCAs reported the highest enjoyment score (M = 5.9) in PAs (t = 3.5; p<0.001) compared to participants who engage in exercise (M = 5.2) and sports (M = 4.8). One-way ANOVA confirms differences in enjoyment levels in PAs (F = 47.7; p<0.001) among middle-aged and older adults. The post hoc Bonferroni test showed that participants who engage in RCAs outside the home have a higher enjoyment score in PA (M = 5.9; t = 3.9, p<0.001) than those who participate in organized movement activities, namely exercise (M = 5.1) and sports (M = 4.8), with sports showing the lowest level of enjoyment (t = 3.1, p<0.001). ### 3.3. Self-Determined Motivation in PAs The self-determined motivations for sports, exercise, and RCAs are M = 6.6, M = 7.0, and M = 7.6, respectively. A regression analysis (including variance) for eagerness and self-determined motivation for outdoor PAs as functions of age and gender was conducted. The descriptive statistics show gender differences for weekly PA levels and the subdimensions of the SDMI, as well as strong relationships among RCAs, participants’ involvement, support of family and friends, eagerness, the total amount of PAs, and motivation for involvement in different types of movement activities. The SDMI in PAs and all its subdimensions are shown in Table 3.

Table 3: Mean and standard deviation of key study variables; values are M (SD) or n (%).

| | Males | Females | Sports | Exercise | Cultural activities | Scale |
|---|---|---|---|---|---|---|
| Total number (N = 840) | 327 (38.9) | 513 (61.1) | 132 (15.7) | 273 (32.5) | 435 (51.8) | |
| Males, % (N = 327) | | | 102 (77.3) | 183 (67.1) | 42 (23.4) | |
| Females, % (N = 513) | | | 30 (22.7) | 90 (32.9) | 393 (76.6) | |
| Middle age, % (35–54), 273 (32.5%) | 115 (35.2) | 158 (30.8) | 81 (61.3) | 87 (31.9) | 105 (24.1) | |
| Older adults, % (55+), 567 (67.5%) | 212 (64.8) | 355 (69.2) | 51 (38.6) | 186 (68.1) | 330 (75.8) | |
| Family and friends support, % (651; 77.5%) | 253 (38.7) | 399 (61.3) | 99 (15.2) | 210 (32.2) | 342 (52.5) | |
| Friends and family support | 3.5 (1.4) | 3.8 (1.8)∗ | 3.4 (1.1) | 3.8 (1.0) | 4.0 (0.9) | 1–5 |
| Eagerness | 4.8 (1.0) | 5.2 (1.0)∗∗∗ | 3.2 (0.9)a | 4.8 (0.8)a | 5.7 (0.9)a | 1–7 |
| Intrinsic motivation (IM) | 5.2 (1.0) | 5.8 (0.9)∗∗ | 4.4 (1.0)a | 5.2 (1.1)a | 6.0 (0.9)a | 1–7 |
| Identified regulation (IR) | 5.1 (1.0) | 5.7 (1.1) | 4.5 (1.0)a | 5.2 (1.1)a | 5.9 (0.9)a | 1–7 |
| Extrinsic motivation (EM) | 4.1 (1.2) | 4.4 (1.1)∗ | 4.6 (1.0)b | 4.5 (1.2)a | 4.2 (1.2)ab | 1–7 |
| Amotivation (AM) | 2.6 (0.8) | 2.4 (0.9) | 3.2 (0.9)a | 2.5 (1.0)a | 2.1 (0.9)a | 1–7 |
| SDMI in PA | 7.1 (2.9) | 7.2 (2.8) | 2.5 (2.3)a | 6.1 (2.9)a | 9.5 (1.9)a | −18–18 |
| Total amount of PA | 3.4 (2.3) | 3.8 (2.5)∗∗∗ | 2.2 (1.5)a | 3.6 (1.8)a | 4.5 (2.2)a | 1–6 |

SDMI = 2 × IM + IR − EM − 2 × AM. Here, a, b, and ab denote homogeneous subsets indicating significant differences (one-way ANOVA, Bonferroni post hoc test, p ≤ 0.05). ∗p<0.05; ∗∗p<0.01; ∗∗∗p<0.001.

Table 4 shows the multivariate regression analysis. The upper part shows that the overall model explained 40.6% of the total variance in the SDMI (F = 81.16; p<0.001). According to this table, there are gender differences, but there is no significant difference between middle-aged and older adults. The model took into account gender, age, the total amount of PAs, support of family and friends, and eagerness for movement activities.
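The SDMI composite defined in the footnote of Table 3 can be written as a small helper. The function below is an illustrative sketch (not the authors' code); with each subscale scored 1–7, the composite necessarily spans −18 to 18, matching the scale column reported in Table 3.

```python
def sdmi(im, ir, em, am):
    """Self-determined motivation index: SDMI = 2*IM + IR - EM - 2*AM.

    im, ir, em, am: mean scores (each on a 1-7 scale) for intrinsic
    motivation, identified regulation, extrinsic motivation, and
    amotivation, respectively.
    """
    for score in (im, ir, em, am):
        if not 1 <= score <= 7:
            raise ValueError("subscale scores must lie in [1, 7]")
    return 2 * im + ir - em - 2 * am

# Theoretical bounds match the -18 to 18 scale column in Table 3:
print(sdmi(7, 7, 1, 1))  # most autonomous profile -> 18
print(sdmi(1, 1, 7, 7))  # most controlled profile -> -18
```

Note that group-level SDMI values in Table 3 are means of individual composites, so they need not equal the composite of the reported subscale means.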
Participation in sports and exercise showed no significant relationship with middle-aged and older adults’ SDMI in PAs, but participation in RCAs did (StB = 0.114; p<0.007).

Table 4: Multivariate regression analysis for predicting motivation (SDMI) in PA.ᵃ

| Model | B | SE | β | t | Sig. |
|---|---|---|---|---|---|
| **Overall model: R² = 0.406; F = 81.16; p<0.001** | | | | | |
| (Constant) | 0.249 | 0.524 | | 0.475 | 0.635 |
| Gender | 0.321 | 0.164 | 0.054 | 1.957 | 0.051 |
| Middle-aged and older adults | −0.554 | 0.165 | −0.090 | −3.350 | 0.001 |
| Friends and family support | 0.473 | 0.083 | 0.194 | 5.673 | 0.000 |
| Total amount of physical activity | −0.230 | 0.064 | −0.113 | −3.581 | 0.000 |
| Eagerness | 1.300 | 0.074 | 0.540 | 17.589 | 0.000 |
| Sports participants | 0.096 | 0.240 | 0.012 | 0.400 | 0.689 |
| Exercise participants | 0.082 | 0.174 | 0.014 | 0.469 | 0.639 |
| Recreation and cultural activity participants | 0.185 | 0.068 | 0.114 | 2.727 | 0.007 |
| **Model male: R² = 0.485; F = 42.9; p<0.001** | | | | | |
| (Constant) | 0.775 | 0.888 | | 0.873 | 0.384 |
| Middle-aged and older adults | −0.639 | 0.285 | −0.092 | −2.237 | 0.026 |
| Friends and family support | 0.645 | 0.149 | 0.263 | 4.319 | 0.000 |
| Total amount of physical activity | −0.460 | 0.194 | −0.142 | −2.368 | 0.018 |
| Eagerness | 1.586 | 0.115 | 0.642 | 13.766 | 0.000 |
| Sports participants | 0.007 | 0.353 | 0.001 | 0.021 | 0.633 |
| Exercise participants | −0.372 | 0.302 | −0.058 | −1.230 | 0.220 |
| Recreation and cultural activity participants | 0.190 | 0.087 | 0.047 | 1.036 | 0.031 |
| **Model female: R² = 0.346; F = 38.1; p<0.001** | | | | | |
| (Constant) | 2.039 | 0.634 | | 3.215 | 0.001 |
| Middle-aged and older adults | −0.565 | 0.203 | −0.100 | −2.778 | 0.006 |
| Friends and family support | 0.486 | 0.114 | 0.201 | 4.255 | 0.000 |
| Total amount of physical activity | −0.180 | 0.068 | −0.107 | −2.647 | 0.008 |
| Eagerness | 1.156 | 0.112 | 0.492 | 10.320 | 0.000 |
| Sports participants | −0.436 | 0.329 | −0.050 | −1.323 | 0.186 |
| Exercise participants | −0.074 | 0.224 | −0.013 | −0.332 | 0.740 |
| Recreation and cultural activity participants | 0.201 | 0.120 | 0.082 | 1.677 | 0.025 |

ᵃDependent variable: SDMI. B = unstandardized coefficient; SE = standard error; β = standardized coefficient.

A separate subanalysis by gender showed an interaction between participation in RCAs and gender on self-determined motivation to participate in PAs. The outcomes are displayed in the lower half of Table 4.
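Table 4 reports both unstandardized (B) and standardized (β) coefficients. The relationship between the two, β = B · SD(x)/SD(y), can be illustrated with an ordinary-least-squares fit on synthetic data. The variables, coefficients, and sample below are hypothetical (only the sample size mirrors the study), and NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 840  # same sample size as the study; the data themselves are simulated

# Hypothetical predictors: an eagerness score (1-7) and an RCA dummy (0/1).
eagerness = rng.uniform(1, 7, n)
rca = rng.integers(0, 2, n).astype(float)
sdmi = 0.5 + 1.3 * eagerness + 0.2 * rca + rng.normal(0, 1, n)

# Unstandardized OLS coefficients via least squares.
X = np.column_stack([np.ones(n), eagerness, rca])
b, *_ = np.linalg.lstsq(X, sdmi, rcond=None)

# Standardized coefficient for eagerness: beta = B * SD(x) / SD(y).
beta = b[1] * eagerness.std() / sdmi.std()
print(f"B = {b[1]:.3f}, beta = {beta:.3f}")
```

Because β rescales each predictor to unit variance, it allows the relative strength of predictors measured on different scales (e.g., a 1–7 eagerness score vs. a 0/1 participation dummy) to be compared within one model, which is how the β column in Table 4 is read.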
The RCAs have a significant effect on male and female middle-aged and older adults’ self-determined motivation for PAs; StB = 0.047; p<0.031 and StB = 0.082; p<0.025 for males and females, respectively. Furthermore, the descriptive analysis showed no significant differences between males (M = 7.1) and females (M = 7.2) in SDMI scores, nor between middle-aged and older adults (see Table 1). Moreover, the group that engages in sports has a lower SDMI score (M = 2.5) than the score associated with exercise (M = 6.1). However, middle-aged and older adults who engage in RCAs as their major form of PA had notably high SDMI scores (M = 9.5). For RCA participants, there were differences in the average participation values between males (M = 5.6) and females (M = 7.6), with a higher enjoyment level (t = 3.6; p<0.009). ## 4. Discussion and Implications Our results validate the existing body of literature [34–36]. The primary aim of this study is to assess the attitudes of middle-aged and older adults towards different groups of PAs. The effect of attitudes varies according to the type of activity, age, and physical condition. According to our analysis, there is a positive relationship between the participant’s intentions and attitude towards PAs. Furthermore, the results show that attitude towards PA has high scores, which indicates that Chinese middle-aged and older adults have positive attitudes towards PAs. In particular, compared with males, middle-aged and older female adult participants have a more encouraging attitude towards PAs; this supports existing studies [34, 35].
Moreover, this study shows that middle-aged and older adults who do not enjoy movement activities, or who channel their self-interest into competitive movement activities, may still take part in PAs; in the long run, however, this can affect their PA involvement. Hence, they can develop a negative attitude towards PAs and eventually become demotivated to participate. Individuals whose primary activities are RCAs have the highest proportion of PAs (Table 1). This emphasizes that more middle-aged and older adults participate in RCAs than in sports (51.8% vs. 15.7%), owing to a lack of enjoyment, health conditions, and limited interest in sports. Engaging in RCAs is highly correlated with motives related to social engagement and satisfaction. Moreover, middle-aged and older adults’ attitudes and eagerness are critical in motivating them to engage in RCAs, but less essential in exercise and even less so in sports. Although PAs are fun and enjoyable, they are not without challenges. People willingly engage in playful activities and thereby need little or no extrinsic motivation to do so. In particular, such activities involve enjoyment, which builds a sense of eagerness and creates opportunities for social relations (RCAs and sports). The results of this study indicate that participants might be eager to engage in PAs but do not yet get involved in competitive sports or exercise, for personal reasons such as basic psychological needs. However, participants who engage in RCAs show more eagerness, and this strengthens a positive attitude towards participating in PAs to maintain better health. The analysis of this study also validates earlier studies on the relationship between SDT and PA [21]. According to SDT, the results of this study indicate that when participants are supported to feel autonomous, they are more likely to be intrinsically motivated.
A supportive environment not only promotes their autonomous motivation but also positively strengthens their beliefs (attitudes) towards that behavior. There is no significant difference in the self-determined motivation index for PAs between male and female participants overall, but across activity groups there are great differences: male participants interested in sports or exercise and female participants interested in RCAs reported considerably better scores on self-determined PA. Furthermore, these findings highlight the significance of motivation, eagerness, and attitude towards RCAs in maintaining better health. Moreover, female participants show high levels of interest in RCAs but a lower level in sports, which supports earlier studies [37]. This study also shows that Chinese female participation in RCAs is a key predictor of middle-aged and older adults’ participation in PAs. Finally, eagerness and amotivation show a negative association with respect to PAs, which is contrary to earlier studies [25]; this indicates that eagerness is affected by demands, recommendations, and positive experiences. Therefore, participating in RCAs could be one of the critical predictors for middle-aged and older adults. This study also investigates the relationship between the support of family and friends and participation in PAs. We found that the support of family and friends is associated with PAs; friends’ engagement in PAs is particularly influential because friends are a better source of gatherings and company [38]. As they advance in age, older adults’ dependency increases. Hence, compared to friends, family members were a better source of social control (reducing risky health behaviors), as well as of instrumental and emotional support. We found that participants who regularly engage in PAs with friends or family members have more opportunities to achieve a high level of physical activeness. ### 4.1.
Implications for Scholars Based on the above discussion, this study provides some theoretical insight. The results confirm the importance of PAs for middle-aged and older adults by integrating people’s motivation into the model used to measure their intentions and actual attitude towards PAs. Moreover, this study highlights why it is necessary to consider the effect of attitude and eagerness on the different dimensions of motivation when investigating people’s motivation to engage in PAs. Our analysis shows that motivation, attitude, and eagerness play a significant role in developing people’s attitude towards PAs. Second, from the perspective of the role of PA values, the findings show that sports and exercise intention values have a more profound influence on the participants’ PA motivational intentions than RCA values. These outcomes are contrary to those seen in existing studies [31]. Moreover, this study shows that RCAs have a significant direct association with participants’ age and gender. Third, it is enjoyable to participate in RCAs, and this has a significant effect on PA motivational intentions in middle-aged and older Chinese adults. A possible explanation could be that when participants find RCAs easy, they may develop a positive attitude towards the effectiveness of the PA. We recommend that scholars apply these constructs in their research to gain more awareness of their target audiences and add new facts about PA styles. Finally, RCAs are a new phenomenon in China, and they are studied along with demographic aspects such as gender, region (for example, urban or rural), and religion, which may affect their adoption. Researchers can develop mobile applications using artificial intelligence digital tools (such as a smartwatch) to guide or monitor how people should engage in PAs to maintain better health in their everyday life.
With these applications or tools, participants can ask questions about a method in their national language without the difficulty of engaging in the activity. ### 4.2. Implications for Managers This study also has several implications for project managers. First, project managers are strongly encouraged to improve their RCA techniques [39] because this aspect is essential for participants who use RCA platforms in their PAs. Participants are getting used to making decisions on an “anywhere-anytime” basis; that is, whenever they get the time, they engage in PAs. For example, wushu and dance exercise programs have included structural assurances in their PA policies that have made PAs successful. Second, the findings offer handy information for PA decision-makers in China, since the study provides detailed information on different ways of managing and developing PA strategies. For example, this study confirms that RCAs have a significant influence on middle-aged and older adults’ attitudes, eagerness, and motivations for PAs. The results show that RCA participants may be more likely to engage in PAs just for fun and enjoyment. In contrast, active participants may have a positive attitude and eagerness, which motivate them to use RCAs to enhance their psychophysical productivity. Thus, trainers and decision-makers should ensure quality and a diversified range of activities and services, as well as other activities related to effective PA values. More precisely, for utilitarian PAs, project managers and decision-makers should offer multipurpose services and free training for participants. Moreover, due to cultural beliefs, it is easy for Chinese people to go outside and engage in PAs.
RCAs, such as dance, wushu, and tai chi, provide a platform where Chinese people can easily and conveniently engage in long-lasting PAs with all their norms and cultural values at any place of their choice. Finally, it is vital for PA managers and decision-makers to adopt one of the most attention-grasping trends of PA for the Chinese, namely RCAs. People must understand the need to integrate PA systems before it is too late. This could assist people in developing optimistic views of PA values. This trend can reshape the PA sector. ## 5. Conclusions and Future Directions The purpose of this study was to determine the attitude, motivation, and eagerness for different types of PA and how these groups of activities affect participants’ motivation for PAs. This study confirms that gender differences play a vital role in shaping middle-aged and older adults’ attitudes towards PA. The tendency of middle-aged and older adults’ attitudes and eagerness for PAs declines with age. Participants’ engagement in competitive, enjoyable PAs, such as RCAs, and motivation for PAs are the primary benefits of PAs. In RCA participation, there are gender differences, where female participants significantly benefit. On the other hand, in sports, female participants’ PAs show an opposite result. This study emphasizes the importance of middle-aged and older adults’ engagement in PAs and seems to favor those participants who are already engaged in cultural and aerobic activities, such as RCAs.
Participants who engage in RCAs believe that these activities could be one of the sources of healthy behavior. Moreover, attitude and eagerness for PAs can be developed through the influence of family and friends. In addition, these will gradually improve middle-aged and older adults’ attitudes, motivation, and eagerness, which might increase PA levels over time. This study has some limitations. First, future studies can use cross-cultural data. Second, we only considered middle-aged and older adults; future studies can focus on young people. Finally, we used SPSS to analyze the data; future studies can use AMOS. Future research can also study activities such as sports, exercise, and RCAs in terms of perceptions of mastery, competence, and ability to participate, and analyze how these are influenced and predicted by diverse motives and their role in improving health. --- *Source: 1014891-2020-09-01.xml*
2020
# The Effects of the Combination of a Refined Carbohydrate Diet and Exposure to Hyperoxia in Mice **Authors:** Nicia Pedreira Soares; Keila Karine Duarte Campos; Karina Braga Pena; Ana Carla Balthar Bandeira; André Talvani; Marcelo Eustáquio Silva; Frank Silva Bezerra **Journal:** Oxidative Medicine and Cellular Longevity (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1014928 --- ## Abstract Obesity is a multifactorial disease with genetic, social, and environmental influences. This study aims at analyzing the effects of the combination of a refined carbohydrate diet and exposure to hyperoxia on the pulmonary oxidative and inflammatory response in mice. Twenty-four mice were divided into four groups: control group (CG), hyperoxia group (HG), refined carbohydrate diet group (RCDG), and refined carbohydrate diet + hyperoxia group (RCDHG). The experimental diet was composed of 10% sugar, 45% standard diet, and 45% sweet condensed milk. For 24 hours, the HG and RCDHG were exposed to hyperoxia and the CG and RCDG to ambient air. After the exposures were completed, the animals were euthanized, and blood, bronchoalveolar lavage fluid, and lungs were collected for analyses. The HG showed higher levels of interferon-γ in adipose tissue as compared to other groups and higher levels of interleukin-10 and tumor necrosis factor-α compared to the CG and RCDHG. SOD and CAT activities in the pulmonary parenchyma decreased in the RCDHG as compared to the CG. There was an increase of lipid peroxidation in the HG, RCDG, and RCDHG as compared to the CG. A refined carbohydrate diet combined with hyperoxia promoted inflammation and redox imbalance in adult mice. --- ## Body ## 1. Introduction Obesity is a public health problem and is correlated with several comorbidities, such as heart failure [1, 2] which, in most cases, requires oxygen supplementation [3]. 
However, when administering oxygen, professionals should follow a careful protocol to assess the necessity, duration, and dose to be given. Oxygen at high concentrations (hyperoxia) can trigger oxidative lung damage, including damage to components of the extracellular matrix, epithelial and endothelial cell injuries, and lung inflammation [4–6]. According to the World Health Organization, worldwide obesity has doubled since 1980 [7]. In 2005, about 1.6 billion adults over 18 years were overweight, and over 400 million were obese [8]. In 2014, the numbers of overweight and obese cases increased to more than 1.9 billion and 600 million, respectively [7]. The experimental model that most closely resembles human obesity is based on foods rich in refined carbohydrates and lipids [9]. These macronutrients are responsible for the systemic, chronic low-grade inflammation associated with obesity [10]. Carbohydrates trigger lipogenic enzymes through activation of the carbohydrate-responsive element-binding protein (ChREBP), thus favoring the development of obesity [11]. In obesity, adipocytes release free fatty acids (FFAs) that activate the signaling pathways of inflammation. When FFAs bind to receptors in the cell membrane of macrophages, they activate a complex of kinase enzymes and protein-coding genes involved in the inflammatory response, such as tumor necrosis factor-α (TNF-α). These proteins activate adipocytes, leading to lipolysis that releases more fatty acids and induces several inflammatory genes [12–14]. TNF-α activates the mitogen-activated protein kinase (MAPK) pathway responsible for inflammatory gene transcription [15] and can stimulate the infiltration and accumulation of macrophages in adipose tissue as part of the inflammation of obesity [16]. In addition, obesity leads to hypertrophy and hyperplasia of adipocytes, which, in turn, cause hypoperfusion and tissue hypoxia [17, 18].
This process causes a decrease in adiponectin production and an increase in the proinflammatory cytokines responsible for inflammation [16, 19]. Obesity and hyperoxia are known to increase reactive oxygen species (ROS) [20, 21]. ROS can be of exogenous or endogenous origin; endogenous ROS are usually produced as a result of cell metabolism [22, 23]. At low to moderate concentrations, they participate in physiological cellular processes and have a beneficial role in aerobic organisms because of their participation in the regulation of cell signaling, gene expression, and apoptotic mechanisms. However, at high concentrations, ROS may damage cell constituents such as lipids, proteins, and DNA [22]. To counteract ROS, cells have an antioxidant defense system that is either enzymatic or nonenzymatic. Enzymes involved in the primary antioxidant defense system include superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase [22–24]. Extra care should be taken when administering medicinal oxygen to obese patients, who already present chronic low-grade inflammation [21] and increased ROS [20] and may therefore develop more severe conditions. Thus, this study aimed to analyze the oxidative and inflammatory effects of a high refined carbohydrate diet in mice exposed to hyperoxia. ## 2. Materials and Methods ### 2.1. Experimental Design Twenty-four BALB/c mice (male, adult, 5–7 weeks old) were housed under controlled conditions in standard laboratory cages (Laboratory of Experimental Nutrition, Department of Food, School of Nutrition, Federal University of Ouro Preto) and given free access to water and food. All in vivo experimental protocols conducted on the animals at the Federal University of Ouro Preto were approved by the ethics committee (#2013/58).
The animals were divided into two groups: the first group (G1) received a standard diet, and the second (G2) received a diet rich in refined carbohydrates, composed of 10% sugar, 45% standard diet, and 45% sweet condensed milk, for twelve weeks. Body weight and food intake were measured weekly. After the dietary treatment, G1 was randomly divided into the control group (CG) and hyperoxia group (HG), and G2 was randomly divided into the refined carbohydrate diet group (RCDG) and refined carbohydrate diet + hyperoxia group (RCDHG). For 24 hours, the HG and RCDHG were exposed to 100% oxygen, while the CG and RCDG were exposed to ambient air. ### 2.2. Composition of Diets, Food Intake, and Regulation of Body Mass The animals of the CG and HG were fed standard chow (Labina, Purina; Evialis Group, São Paulo, Brazil), and the RCDG and RCDHG received a highly palatable feed, composed of 10% granulated sugar, 45% standard feed, and 45% condensed milk (Nestlé®, São Paulo, Brazil), used to promote obesity in animals [25, 26]. Food intake and body weight gain were measured once a week using a digital scale (Mark®, Series M; Bel Equipment Analytical LTDA, São Paulo, Brazil). To monitor intake, the diets were weighed before being served to the animals and again after one week. ### 2.3. Oral Glucose Tolerance Test (OGTT) A week before the end of the experiment, the animals were submitted to the OGTT, as described by Menezes-Garcia and colleagues [25] and Oliveira and colleagues [25, 26], to investigate their insulin sensitivity. ### 2.4. Exposure to Oxygen All mice (except the CG and RCDG, which inhaled ambient air) were placed in an acrylic inhalation chamber (30 cm long, 20 cm wide, and 15 cm high) and removed after 24 h. 100% oxygen was purchased from White Martins® (White Martins Praxair Inc., São Paulo, Brazil). The oxygen tank was coupled to the inhalation chamber using a silicone conduit [5, 6, 27].
The oxygen concentration was measured continuously with an oxygen cell (C3, Middlesbrough, England). The mice received water and food ad libitum, were kept in individual cages with controlled temperature and humidity (21 ± 2°C and 50 ± 10%, respectively), and were submitted to inverted 12 h light/dark cycles (artificial lights, 7 p.m. to 7 a.m.). ### 2.5. Euthanasia After 24 hours of oxygen exposure, all animals were anesthetized with ketamine (130 mg/kg) and xylazine (0.3 mg/kg) and euthanized by exsanguination. The blood, bronchoalveolar lavage fluid (BALF), and adipose tissues (retroperitoneal, epididymal, and mesenteric) were removed. ### 2.6. Blood Collection To obtain plasma, two aliquots of blood were collected from each animal in polypropylene tubes containing 15 µL of anticoagulant. One aliquot was sent to the Clinical Analysis Lab Pilot (LAPAC-UFOP) for measurement of the blood count and white blood cell count. The other aliquot was centrifuged at 10,000 rpm for 15 min, and the supernatant was removed for cholesterol measurement. ### 2.7. Hemogram and Biochemical Analyses of Blood and Plasma For the complete blood count, whole blood was diluted with saline (1:2), and the erythrocyte hematological parameters, hematocrit and hemoglobin, were evaluated using an electronic counting device (ABX Diagnostics, micro 60, HORIBA®, Tokyo, Japan) at LAPAC-UFOP. Cholesterol concentrations were determined by automatic spectrophotometry using the Random Access Clinical Analyzer (CM-200; Wiener Lab, Rosario, Argentina) and by the enzymatic colorimetric method using a specific kit (Bioclin®; Quibasa, Belo Horizonte, Brazil). ### 2.8. Assessment and Analysis of the BALF Immediately after euthanasia, the chest of each animal was opened to collect the BALF. The left lung was clamped, the trachea cannulated, and the right lung perfused with 1.5 mL of saline solution. The samples were kept on ice until the end of the procedure to avoid cell lysis.
Total, mononuclear, and polymorphonuclear cells were stained with trypan blue, enumerated in a Neubauer chamber (Sigma-Aldrich, MA, USA), and stained again using a fast panoptic coloration kit (Laborclin, Pinhais, Paraná, Brazil) [28, 29]. Differential cell counts were performed on cytospin preparations (Shandon, Waltham, MA, USA) stained with the fast panoptic coloration kit [30]. ### 2.9. Tissue Processing and Homogenization The right lung was clamped, and a cannula was inserted into the trachea. The airspaces were washed with buffered saline solution (final volume 1.5 mL) maintained on ice. The left lung and epididymal adipose tissue (EAT) were removed and immersed in a fixative solution for 48 hr [6, 30]. The tissue was then processed as follows: tap water bath for 30 min, 70% and 90% alcohol baths for 1 hr each, two baths in 100% ethanol for 1 hr each, and embedding in paraffin. For histologic analyses, serial 5 μm sagittal sections were obtained from the left lung and stained with hematoxylin and eosin. The right lung was subsequently homogenized in 1 mL potassium phosphate buffer (pH 7.5) and centrifuged at 1500 ×g for 10 min. The supernatant was collected, and the final volume of all samples was adjusted to 1.5 mL with phosphate buffer. The samples were stored in a freezer (–80°C) for biochemical analyses [30].
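The BALF cell concentrations reported in the Results (cells ×10³/mL) derive from the Neubauer-chamber counts described above. The paper does not spell out the conversion, so the chamber factors below are the standard ones and purely illustrative:

```python
def neubauer_cells_per_ml(cells_counted, large_squares=4, dilution_factor=1):
    """Standard Neubauer-chamber estimate: mean count per large square,
    corrected for dilution, times 1e4 (each large square holds 0.1 uL)."""
    mean_per_square = cells_counted / large_squares
    return mean_per_square * dilution_factor * 1e4

# e.g. 56 cells counted over 4 large squares, undiluted:
total = neubauer_cells_per_ml(56)  # 140,000 cells/mL, i.e. 140 x 10^3/mL
```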
## 3. Antioxidant Defense and Oxidative Stress Biomarkers in Lung Homogenates We used the formation of thiobarbituric acid reactive substances (TBARS) as an index of lipid peroxidation during an acid-heating reaction, as previously described by Valenca et al. [31]. Briefly, the TBARS level was estimated in accordance with the method described by Lean et al. [32]. The lung homogenate supernatants (1.0 mL) were mixed with 2.0 mL of TCA-TBA-HCl (15% w/v trichloroacetic acid (TCA); 0.375% w/v thiobarbituric acid (TBA); and 0.25 N hydrochloric acid (HCl)). The solution was heated for 15 min in a boiling water bath.
After cooling, the precipitates were removed via centrifugation, and the absorbance of the sample at 535 nm was measured. The TBARS level was calculated using the molar absorption coefficient of malondialdehyde (1.56 × 10⁵ M⁻¹ cm⁻¹) [28]. The lung homogenates were used to determine CAT activity. This method was based on the enzymatic decomposition of hydrogen peroxide (H2O2) observed spectrophotometrically at 240 nm for 5 min. Ten microliters of the homogenate supernatant was added to a cuvette containing 100 mM phosphate buffer (pH 7.2), and the reaction was initiated by the addition of 10 mM H2O2. H2O2 decomposition was calculated using the molar absorption coefficient 39.4 M⁻¹ cm⁻¹. The results were expressed as activity per mg of protein. One unit of CAT was equivalent to the hydrolysis of 1 μmol of H2O2 per min [33]. SOD activity was assayed by the spectrophotometric method of Marklund and Marklund [34] using an improved pyrogallol autoxidation inhibition assay. SOD reacts with the superoxide radical (O2−), which slows down the rate of formation of o-hydroxy-o-benzoquinone and other polymer products. One unit of SOD is defined as the amount of enzyme that reduces the rate of autoxidation of pyrogallol by 50%. ### 3.1. Adiposity Index The adipose pads were removed and weighed to determine the adiposity index, calculated as the sum of the epididymal, retroperitoneal, and mesenteric adipose tissue masses divided by body weight and multiplied by 100 [26]. ### 3.2. Immunoassays of Epididymal Adipose Tissue (EAT) The epididymal adipose tissue was used to determine the concentrations of the inflammatory mediators TNF-α, IFN-γ, and IL-10, and the plasma was used to determine leptin levels. For the analysis, the samples were thawed and excess proteins were removed by acid/salt precipitation, as previously described [10].
Briefly, equal volumes of epididymal adipose tissue, plasma, and 1.2% trifluoroacetic acid/1.35 M NaCl were mixed, incubated at room temperature for 10 min, and centrifuged for 5 min at 10,000 rpm. The salt content of the supernatant was adjusted to 0.14 M sodium chloride and 0.01 M sodium phosphate at pH 7.4 prior to determination of the concentrations of TNF-α, IFN-γ, and IL-10 using commercially available ELISA kits (Bio Source International, Inc., CA, USA) and of leptin (PeproTech, London, United Kingdom) according to the manufacturer’s guidelines. All samples were measured in duplicate [9, 35]. ### 3.3. Morphometric and Stereological Analyses Twenty random images obtained from the histological slides of the lungs and EAT were digitized using a Leica DM5000B optical microscope with Leica Application Suite software and a CM300 digital microcamera (Multiuser Laboratory of the Research Center for Biological Sciences of the Federal University of Ouro Preto). The images of the lung and EAT were scanned with 40x and 10x objective lenses, respectively. Using ImageJ® software (National Institutes of Health, Bethesda, MD, USA), a representative image at 40x magnification containing a 100 µm scale bar was used to calibrate measurements in pixels, such that 434 pixels equaled 100 μm. Five alveolar areas in each slide prepared from each animal were measured [36, 37]. Six fields from each animal image were captured with a digital camera coupled to a microscope (200x). The adipocyte area was obtained by randomly measuring 50 adipocytes per slide using the same software. The volume density of the alveolar air space (Vv[a]) and the volume density of the alveolar septa (Vv[sa]) were analyzed on a test system consisting of sixteen points and a known test area, in which the boundary line was considered forbidden in order to avoid overestimation of the number of structures. The test system was matched to a monitor attached to a microscope.
The number of points (PP) that touched the alveolar septa was assessed according to the total number of test points (PT) in the system using the equation Vv = PP/PT. To obtain uniform and proportional lung samples, we analyzed 18 random fields in a cycloid test system attached to the monitor screen. The reference volume was estimated by point counting, using the test point system. A total area of 1.94 mm² was analyzed to determine Vv[sa] in slides stained with hematoxylin and eosin [38]. ### 3.4. Statistical Analysis Data with a normal distribution were analyzed by unpaired t-test, univariate analysis of variance (one-way ANOVA), or two-way ANOVA followed by Bonferroni’s multiple comparison post hoc test, and are expressed as the mean ± standard error of the mean. For discrete data, we used the Kruskal-Wallis test followed by Dunn’s post hoc test and expressed them as median, minimum, and maximum values. In both cases, the difference was considered significant when the P value was less than 0.05. All analyses were performed with GraphPad Prism, version 5.00 for Windows 7 (GraphPad Software; San Diego, CA, USA).
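Both the TBARS and CAT readouts in Section 3 reduce to Beer-Lambert conversions, A = ε·c·l, rearranged for concentration. A minimal sketch using the coefficients quoted above (1.56 × 10⁵ M⁻¹ cm⁻¹ for the MDA-TBA adduct and 39.4 M⁻¹ cm⁻¹ for H2O2), assuming a 1 cm optical path (the path length is not stated in the text):

```python
def beer_lambert_molar(absorbance, epsilon, path_cm=1.0):
    """Concentration (mol/L) from absorbance: c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

# TBARS: absorbance at 535 nm against the MDA-TBA adduct coefficient
mda = beer_lambert_molar(0.078, 1.56e5)   # about 5e-07 M, i.e. 0.5 uM MDA
# CAT: decline in absorbance at 240 nm against the H2O2 coefficient
h2o2 = beer_lambert_molar(0.394, 39.4)    # about 0.01 M H2O2 decomposed
```

Dividing the resulting amounts by the sample's protein content gives the per-mg-protein units reported in Table 6.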
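The two numerical conversions in Section 3.3, the pixel calibration (434 pixels = 100 μm) and the point-counting estimator Vv = PP/PT, can be sketched as follows (the function names are illustrative, not the authors'):

```python
PIXELS_PER_100_UM = 434  # calibration reported for the 40x lung images

def pixels_to_um(px):
    """Convert an ImageJ length measured in pixels to micrometers."""
    return px * 100.0 / PIXELS_PER_100_UM

def volume_density(points_on_structure, total_test_points):
    """Stereological volume density Vv = PP/PT: the fraction of test
    points falling on the structure of interest."""
    return points_on_structure / total_test_points

# e.g. a 217-pixel adipocyte diameter corresponds to 50 um, and 8 of 16
# test points on alveolar septa gives Vv[sa] = 0.5 (reported as 50%).
```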
## 4. Results ### 4.1. Food Intake and Body Weight Gain The animals were weighed and their food intake measured weekly to evaluate whether the high refined carbohydrate diet influenced food intake and body weight gain. As shown in Figure 1, the RCDG had a higher body weight gain than the CG from the second week of the experiment, and this gain was maintained through the 12 weeks. However, no significant difference was observed in the amount (g) consumed by the experimental groups (Figure 1). Figure 1 Body weight gain (a) and food intake (b) over 12 weeks. CG: control group; RCDG: refined carbohydrate diet group. Data are presented as mean ± standard error of the mean of six animals per group. Pa<0.05 significantly different values for RCDG and CG. Comparisons were performed using two-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. (a) (b) ### 4.2.
Body Adiposity Index, Adipocyte Area, and Leptin The diet model induced obesity, as confirmed by the body adiposity index, which increased in the RCDG as compared to the other groups (Figure 2), and by the adipocyte area, which increased in the RCDHG compared to the CG and HG, as evaluated by morphometric analysis of EAT sections (Figures 2 and 3). There was also an increase of leptin in the RCDG and RCDHG compared to the CG (Figure 4). Figure 2 Body adiposity (a) and adipocyte area (b). CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data are presented as mean ± standard error of the mean of six animals per group. Pa,b,d<0.05 significantly different values for RCDG versus CG, HG, and RCDHG, respectively. Pa,b<0.05 significantly different values for RCDHG versus CG and HG. Comparisons were performed using one-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. (a) (b) Figure 3 Histological analysis of the epididymal adipose tissue sections stained with hematoxylin and eosin. Bar = 50 μm. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Figure 4 Plasma leptin levels. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data are presented as mean ± standard error of the mean of six animals per group. Pa<0.05 significantly different values for RCDG and RCDHG in relation to the CG. Comparisons were performed using one-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. ### 4.3. Glucose and Cholesterol Metabolism According to the OGTT results, the RCDG presented higher glycemic levels at 15, 30, and 60 min after glucose overload as compared to the CG.
There was also a significant increase in total cholesterol in the RCDG, showing that this diet was able to induce insulin resistance and hypercholesterolemia in these animals (Figure 5). Figure 5 Plasma glucose (a) and cholesterol (b) levels. CG: control group; RCDG: refined carbohydrate diet group. Data are presented as mean ± standard error of the mean of six animals per group. Pa<0.05 significantly different values for RCDG compared to the CG. Comparisons were made using two-way ANOVA with Bonferroni’s multiple comparison post hoc test (a) and unpaired t-test (b). (a) (b) ### 4.4. Total and Differential Cell Count in the BALF The dynamics of cell recruitment into the BALF was evaluated by identifying leukocytes, lymphocytes, neutrophils, and macrophages after the high refined carbohydrate diet and exposure to hyperoxia. As shown in Table 1, there was an increase in the number of total leukocytes in the RCDHG as compared to the CG and HG, as well as an increase in macrophages in the RCDG and RCDHG as compared to the CG and HG.

Table 1 The inflammatory cells in the bronchoalveolar lavage of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| Leukocytes (×10³/mL) | 140.0 ± 5.3 | 146.3 ± 6.0 | 161.3 ± 5.2 | 178.3 ± 4.8^a,b | <0.05 |
| Macrophages (×10³/mL) | 92.1 ± 9.3 | 99.6 ± 7.4 | 128.5 ± 5.0^a,b | 152.9 ± 7.4^a,b | <0.05 |
| Lymphocytes (×10³/mL) | 18.2 ± 4.5 | 9.7 ± 3.5 | 7.1 ± 2.3 | 7.6 ± 3.5 | >0.05 |
| Neutrophils (×10³/mL) | 29.7 ± 6.3 | 37.0 ± 5.1 | 25.7 ± 5.0 | 17.9 ± 2.6 | >0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a,b: significant differences (P<0.05) compared to CG and HG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. ### 4.5. Total and Differential Blood Cell Count The total and differential number of cells in the blood was counted.
The total leukocytes and lymphocytes in the blood decreased in the HG compared to the CG and RCDG and decreased in the RCDHG compared to the CG (Table 2). Exposure to hyperoxia promoted the significant decrease in neutrophils observed in the HG compared to the CG. Furthermore, the monocytes decreased in the HG and RCDHG compared to the CG.

Table 2 Total and differential leukocyte count in the blood of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| Leukocytes (×10³/mL) | 7.2 ± 0.6 | 2.7 ± 0.4^a,c | 5.4 ± 0.8 | 4.2 ± 0.5^a | <0.05 |
| Lymphocytes (×10³/mL) | 4.9 ± 4.0 | 1.6 ± 0.3^a,c | 3.6 ± 0.6 | 2.5 ± 0.3^a | <0.05 |
| Neutrophils (×10³/mL) | 1.7 ± 0.1 | 0.9 ± 0.1^a | 1.3 ± 0.2 | 1.4 ± 0.2 | <0.05 |
| Monocytes (×10³/mL) | 0.6 ± 0.1 | 0.2 ± 0.0^a | 0.5 ± 0.1 | 0.3 ± 0.0^a | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a: significant differences compared to CG. ^a,c: significant differences compared to CG and RCDG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. ### 4.6. Stereological Evaluations of Lung Parenchyma of the Experimental Groups Morphometric analysis showed no significant differences in the alveolar air volume density (Vv[a]) and Vv[sa] in the HG, RCDG, and RCDHG as compared to the CG (Table 3 and Figure 6).

Table 3 Comparison of alveolar airspace volume and density of alveolar septa of the groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| Vv[a] (%) | 37.5 (18.7/43.7) | 39.1 (25.0/46.8) | 39.1 (31.2/46.8) | 34.4 (25.0/37.5) | >0.05 |
| Vv[sa] (%) | 53.1 (43.7/56.2) | 45.3 (43.7/62.5) | 43.7 (40.6/50.0) | 50.0 (43.7/50.0) | >0.05 |

Vv[a]: alveolar airspace volume; Vv[sa]: density of alveolar septa. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group.
Data were expressed as median, minimum, and maximum (n=6) and were analyzed by Kruskal-Wallis test followed by Dunn’s post hoc test (P=0.95). Figure 6 Photomicrographs (400x) of hematoxylin and eosin stained lung sections of the CG, HG, RCDG, and RCDHG. Bar = 50 μm. Cell influx into the lung parenchyma. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Arrows indicate the influx of inflammatory cells into the lung parenchyma in the groups exposed to hyperoxia and given a refined carbohydrate diet. ### 4.7. CBC and Biochemical Analysis of the Blood Clinical hematology is used to evaluate the general state of health of the animal as well as to detect specific diseases [39]. The HG animals had lower erythrocyte counts, hemoglobin, and hematocrit compared to the other groups (Table 4).

Table 4 Blood count of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| RBC (×10⁶/µL) | 9.5 ± 0.1 | 7.6 ± 0.4^a,c,d | 9.3 ± 0.3 | 9.7 ± 0.2 | <0.05 |
| Hemoglobin (g/dL) | 17.3 ± 0.3 | 14.4 ± 0.8^a,c,d | 17.7 ± 0.3 | 17.6 ± 0.3 | <0.05 |
| Hematocrit (%) | 52.9 ± 1.0 | 44.2 ± 2.4^a,c,d | 55.4 ± 1.2 | 54.4 ± 1.4 | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a,c,d: significant differences (P<0.05) compared to CG, RCDG, and RCDHG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. ### 4.8. Immunoenzymatic Assays on the EAT The immunoenzymatic assays performed on the EAT showed that the HG had higher amounts of IFN-γ as compared to the CG, RCDG, and RCDHG and higher levels of IL-10 and TNF-α as compared to the CG and RCDHG (Table 5). Table 5 Levels of inflammatory markers in epididymal adipose tissue.
| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| IFN-γ | 549.4 ± 84.8 | 1,027.0 ± 149.4^a,c,d | 558.1 ± 40.7 | 384.1 ± 88.4 | <0.05 |
| IL-10 | 1,150.0 ± 343.0 | 2,606.0 ± 568.1^a,d | 1,253.0 ± 166.8 | 592.5 ± 201.0 | <0.05 |
| TNF-α | 528.0 ± 148.7 | 1,180.0 ± 245.6^a,d | 608.7 ± 60.7 | 247.2 ± 79.5 | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a,c,d: significant differences (P<0.05) compared to CG, RCDG, and RCDHG, respectively. ^a,d: significant differences (P<0.05) compared to CG and RCDHG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni’s multiple comparison post hoc test. ### 4.9. Analysis of Redox Imbalance and Damage Caused by Oxidation The antioxidant enzymes SOD and CAT are generally regulated by oxidative stress and are responsible for the oxidative balance in the lungs. As shown in Table 6, SOD activity in the lung parenchyma decreased in the RCDHG as compared to the CG and HG, and CAT activity decreased in the RCDHG as compared to the CG. The TBARS levels revealed a progressive increase in lipid peroxidation in the HG, RCDG, and RCDHG compared to the CG, in the RCDG compared to the HG, and in the RCDHG compared to the HG and RCDG (Table 6).

Table 6 Activities of SOD and CAT and TBARS levels in lung samples from the CG, HG, RCDG, and RCDHG groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| SOD (U/mg prot) | 26.1 ± 1.8 | 24.4 ± 2.0 | 20.9 ± 1.7 | 17.5 ± 1.4^a,b | <0.05 |
| CAT (U/mg prot) | 0.8 ± 0.0 | 0.7 ± 0.1 | 0.6 ± 0.1 | 0.5 ± 0.1^a | <0.05 |
| TBARS (nM/mg prot) | 0.1 ± 0.0 | 0.3 ± 0.0^a | 0.6 ± 0.0^a,b | 0.8 ± 0.0^a,b,c | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a: significant differences (P<0.05) compared to CG. ^a,b: significant differences (P<0.05) compared to CG and HG, respectively. ^a,b,c: significant differences (P<0.05) compared to CG, HG, and RCDG, respectively.
## 4.1. Food Intake and Body Weight Gain

The animals were weighed and their food intake was measured weekly to evaluate whether the high refined carbohydrate diet influenced food intake and body weight gain. As shown in Figure 1, the RCDG had a higher body weight gain from the second week of the experiment compared to the CG, and this gain was maintained for 12 weeks. However, no significant difference was observed in the amount (g) consumed by the experimental groups (Figure 1).

Figure 1: Body weight gain (a) and food intake (b) over 12 weeks. CG: control group; RCDG: refined carbohydrate diet group. Data are presented as mean ± standard error of the mean of six animals per group. ^a: P < 0.05, significantly different values for RCDG versus CG. Comparisons were performed using two-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 4.2. Body Adiposity Index, Adipocyte Area, and Leptin

The diet model induced obesity, as confirmed by the body adiposity index, which increased in the RCDG compared to the other groups (Figure 2), and by the adipocyte area, which increased in the RCDHG compared to the CG and HG, as evaluated by morphometric analysis of EAT sections (Figures 2 and 3). There was also an increase of leptin in the RCDG and RCDHG compared to the CG (Figure 4).

Figure 2: Body adiposity (a) and adipocyte area (b). CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data are presented as mean ± standard error of the mean of six animals per group. ^a,b,d: P < 0.05, significantly different values for RCDG versus CG, HG, and RCDHG, respectively. ^a,b: P < 0.05, significantly different values for RCDHG versus CG and HG. Comparisons were performed using one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
Figure 3: Histological analysis of the epididymal adipose tissue sections stained with hematoxylin and eosin. Bar = 50 μm. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group.

Figure 4: Plasma leptin levels. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data are presented as mean ± standard error of the mean of six animals per group. ^a: P < 0.05, significantly different values for RCDG and RCDHG in relation to CG. Comparisons were performed using one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 4.3. Glucose and Cholesterol Metabolism

According to the OGTT results, the RCDG presented higher glycemic levels at 15, 30, and 60 min after glucose overload compared to the CG. There was also a significant increase in total cholesterol in the RCDG, showing that this diet was able to induce insulin resistance and hypercholesterolemia in these animals (Figure 5).

Figure 5: Plasma glucose (a) and cholesterol (b) levels. CG: control group; RCDG: refined carbohydrate diet group. Data are presented as mean ± standard error of the mean of six animals per group. ^a: P < 0.05, significantly different values for RCDG compared to the CG. Comparisons were made using two-way ANOVA with Bonferroni's multiple comparison post hoc test (a) and unpaired t-test (b).

## 4.4. Total and Differential Cell Count in the BALF

The dynamics of cell recruitment into the BALF, in which leukocytes, lymphocytes, neutrophils, and macrophages were identified, was evaluated under the high refined carbohydrate diet and exposure to hyperoxia.
As shown in Table 1, there was an increase in the number of total leukocytes in the RCDHG compared to the CG and HG, as well as an increase in macrophages in the RCDG and RCDHG compared to the CG and HG.

Table 1: Inflammatory cells in the bronchoalveolar lavage of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| Leukocytes (×10³/mL) | 140.0 ± 5.3 | 146.3 ± 6.0 | 161.3 ± 5.2 | 178.3 ± 4.8^a,b | <0.05 |
| Macrophages (×10³/mL) | 92.1 ± 9.3 | 99.6 ± 7.4 | 128.5 ± 5.0^a,b | 152.9 ± 7.4^a,b | <0.05 |
| Lymphocytes (×10³/mL) | 18.2 ± 4.5 | 9.7 ± 3.5 | 7.1 ± 2.3 | 7.6 ± 3.5 | >0.05 |
| Neutrophils (×10³/mL) | 29.7 ± 6.3 | 37.0 ± 5.1 | 25.7 ± 5.0 | 17.9 ± 2.6 | >0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a,b: significant differences (P < 0.05) compared to CG and HG, respectively. Data were expressed as mean ± SEM (n = 6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 4.5. Total and Differential Blood Cell Count

The total and differential number of cells in the blood was counted. Total leukocytes and lymphocytes in the blood decreased in the HG compared to the CG and RCDG and decreased in the RCDHG compared to the CG (Table 2). Exposure to hyperoxia promoted the significant decrease in neutrophils observed in the HG compared to the CG. Furthermore, monocytes decreased in the HG and RCDHG compared to the CG.

Table 2: Total and differential leukocyte count in the blood of the experimental groups.
| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| Leukocytes (×10³/mL) | 7.2 ± 0.6 | 2.7 ± 0.4^a,c | 5.4 ± 0.8 | 4.2 ± 0.5^a | <0.05 |
| Lymphocytes (×10³/mL) | 4.9 ± 4.0 | 1.6 ± 0.3^a,c | 3.6 ± 0.6 | 2.5 ± 0.3^a | <0.05 |
| Neutrophils (×10³/mL) | 1.7 ± 0.1 | 0.9 ± 0.1^a | 1.3 ± 0.2 | 1.4 ± 0.2 | <0.05 |
| Monocytes (×10³/mL) | 0.6 ± 0.1 | 0.2 ± 0.0^a | 0.5 ± 0.1 | 0.3 ± 0.0^a | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a: significant differences compared to CG. ^a,c: significant differences compared to CG and RCDG, respectively. Data were expressed as mean ± SEM (n = 6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 4.6. Stereological Evaluations of the Lung Parenchyma of the Experimental Groups

Morphometric analysis showed no significant differences in the alveolar airspace volume density (Vva) and the density of alveolar septa (Vvsa) in the HG, RCDG, and RCDHG compared to the CG (Table 3 and Figure 6).

Table 3: Comparison of alveolar airspace volume and density of alveolar septa of the groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| Vva (%) | 37.5 (18.7/43.7) | 39.1 (25.0/46.8) | 39.1 (31.2/46.8) | 34.4 (25.0/37.5) | >0.05 |
| Vvsa (%) | 53.1 (43.7/56.2) | 45.3 (43.7/62.5) | 43.7 (40.6/50.0) | 50.0 (43.7/50.0) | >0.05 |

Vva: alveolar airspace volume; Vvsa: density of alveolar septa. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data were expressed as median, minimum, and maximum (n = 6) and were analyzed by Kruskal-Wallis test followed by Dunn's post hoc test (P = 0.95).

Figure 6: Photomicrographs (400x) of hematoxylin and eosin stained lung sections of the CG, HG, RCDG, and RCDHG. Bar = 50 μm. Cell influx into the lung parenchyma. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group.
Arrows indicate the influx of inflammatory cells into the lung parenchyma in the groups exposed to hyperoxia and given a refined carbohydrate diet.

## 4.7. CBC and Biochemical Analysis of the Blood

Clinical hematology is used to evaluate the general state of health of the animal as well as to detect specific diseases [39]. The HG animals had lower levels of erythrocytes, hemoglobin, and hematocrit compared to the other groups (Table 4).

Table 4: Blood count of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| RBC (×10⁶/µL) | 9.5 ± 0.1 | 7.6 ± 0.4^a,c,d | 9.3 ± 0.3 | 9.7 ± 0.2 | <0.05 |
| Hemoglobin (g/dL) | 17.3 ± 0.3 | 14.4 ± 0.8^a,c,d | 17.7 ± 0.3 | 17.6 ± 0.3 | <0.05 |
| Hematocrit (%) | 52.9 ± 1.0 | 44.2 ± 2.4^a,c,d | 55.4 ± 1.2 | 54.4 ± 1.4 | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a,c,d: significant differences (P < 0.05) compared to CG, RCDG, and RCDHG, respectively. Data were expressed as mean ± SEM (n = 6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 4.8. Immunoenzymatic Assays on the EAT

The immunoenzymatic assays performed on the EAT showed that the HG had higher amounts of IFN-γ compared to the CG, RCDG, and RCDHG and higher levels of IL-10 and TNF-α compared to the CG and RCDHG (Table 5).

Table 5: Levels of inflammatory markers in epididymal adipose tissue.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| IFN-γ | 549.4 ± 84.8 | 1027.0 ± 149.4^a,c,d | 558.1 ± 40.7 | 384.1 ± 88.4 | <0.05 |
| IL-10 | 1,150.0 ± 343.0 | 2,606.0 ± 568.1^a,d | 1,253.0 ± 166.8 | 592.5 ± 201.0 | <0.05 |
| TNF-α | 528.0 ± 148.7 | 1,180.0 ± 245.6^a,d | 608.7 ± 60.7 | 247.2 ± 79.5 | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a,c,d: significant differences (P < 0.05) compared to CG, RCDG, and RCDHG, respectively. ^a,d: significant differences (P < 0.05) compared to CG and RCDHG, respectively.
Data were expressed as mean ± SEM (n = 6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 4.9. Analysis of Redox Imbalance and Damage Caused by Oxidation

The antioxidant enzymes SOD and CAT are generally regulated by oxidative stress and are responsible for the oxidative balance in the lungs. As shown in Table 6, SOD activity in the lung parenchyma decreased in the RCDHG compared to the CG and HG, and CAT activity decreased in the RCDHG compared to the CG. The TBARS levels revealed a progressive increase in lipid peroxidation in the HG, RCDG, and RCDHG compared to the CG, as well as in the RCDG compared to the HG and RCDHG (Table 6).

Table 6: Activities of SOD and CAT and TBARS levels in lung samples from the CG, HG, RCDG, and RCDHG groups.

| | CG | HG | RCDG | RCDHG | P |
|---|---|---|---|---|---|
| SOD (U/mg prot) | 26.1 ± 1.8 | 24.4 ± 2.0 | 20.9 ± 1.7 | 17.5 ± 1.4^a,b | <0.05 |
| CAT (U/mg prot) | 0.8 ± 0.0 | 0.7 ± 0.1 | 0.6 ± 0.1 | 0.5 ± 0.1^a | <0.05 |
| TBARS (nM/mg prot) | 0.1 ± 0.0 | 0.3 ± 0.0^a | 0.6 ± 0.0^a,b | 0.8 ± 0.0^a,b,c | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. ^a: significant differences (P < 0.05) compared to CG. ^a,b: significant differences (P < 0.05) compared to CG and HG, respectively. ^a,b,c: significant differences (P < 0.05) compared to CG, HG, and RCDG, respectively. Data were expressed as mean ± SEM (n = 6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.

## 5. Discussion

This study showed that a high refined carbohydrate diet increased body mass and adiposity in the experimental animals. This diet has been employed as an obesity induction model in rats and mice and has been accompanied by increased body weight, adiposity, leptin levels, and plasma concentrations of cholesterol, glucose, and insulin [26, 40].
Our findings showed that a high refined carbohydrate diet resulted in insulin resistance, hypercholesterolemia, and increased levels of leptin in BALB/c mice, corroborating the findings of Oliveira et al. [26]. Obesity results in several complications in glucose and lipid metabolism, such as the development of insulin resistance, type 2 diabetes, and hyperlipidemia, leading to metabolic syndrome and cardiovascular diseases [21, 38].

Studies have shown that the increased adiposity caused by a high palatability diet seems to promote greater food intake [25, 26]; however, food intake was not affected in our experimental model. We can speculate about the effects of diet composition on metabolic responses, since nutrients can act systemically as cellular signals [26]. The higher amount of sucrose in the high refined carbohydrate diet compared to that in the control diet could promote an increase in the activation of lipogenic enzymes due to the activation of ChREBP, as well as an increased inflammatory response and insulin resistance [11, 26]. In addition, De Lima et al. (2008) pointed out that high amounts of sucrose in the diet lead to a higher glycemic index, resulting in the elevation of blood glucose and postprandial insulin concentrations and favoring fat storage [41]. Therefore, the high concentration of carbohydrates in the high palatability feed possibly contributes to increased body weight, adiposity, adipocyte area, leptin levels, glucose intolerance, and hypercholesterolemia.

Studies have shown that both obesity and hyperoxia trigger inflammatory processes [20, 21]. To evaluate whether hyperoxia and/or a high refined carbohydrate diet cause an influx of peripheral blood cells into the lung parenchyma, the total and differential leukocytes present in the bronchoalveolar lavage (BAL) and blood of the animals were determined.
Our results corroborate those of a previous study, which reported an increase of macrophages and neutrophils in the BAL of BALB/c mice exposed to hyperoxia for 24 h, although stereological analysis showed no significant difference in Vva among the groups [6]. Our data suggest that there was blood cell recruitment to an inflammation site, evidenced by the decrease in the number of leukocytes and lymphocytes in the blood. Our data also showed that hyperoxia might promote inflammation in the adipose tissue of eutrophic mice, as evidenced by an increase in the proinflammatory cytokines IFN-γ and TNF-α. Macrophages are the major source of TNF-α and other proinflammatory molecules in adipose tissue [42]. Therefore, we suggest that, in addition to the recruitment of macrophages to the lung parenchyma, there was also migration of these cells to adipose tissue. Oliveira et al. (2013) demonstrated that, along with an increase in macrophages in adipose tissue, there is an increase in the number of regulatory T lymphocytes (Tregs), indicating a counterregulatory mechanism to suppress acute inflammation [26]. Since Tregs are directly related to the increase of IL-10 [43], this explains the increase of IL-10 in the animals exposed to hyperoxia.

Interferon gamma (IFN-γ) is known to be released by inflammatory cells such as lymphocytes after exposure to hyperoxia [44]. Our results showed an increase in the levels of IFN-γ in the adipose tissue of the HG and a decrease of lymphocytes in the blood, indicating possible migration of these cells into the adipose tissue. However, when the animals were subjected to two proinflammatory factors (diet and hyperoxia), there were no significant differences in cytokine levels in adipose tissue, probably because adipose tissue is a secondary site of the injury caused by hyperoxia. Nagato et al. (2012) reported that BALB/c mice exposed to 24 hours of hyperoxia showed an increase in the levels of TNF-α and IL-6 in the lung [6]. Furthermore, Naura et al.
(2009) showed that animals subjected to a high-fat diet showed an increase in the levels of IFN-γ and TNF-α in the BALF [47]. Thus, we believe that the increase in the levels of cytokines in the lungs of the animals subjected to hyperoxia and given a refined carbohydrate diet occurred due to the recruitment of inflammatory cells into the lung, without a corresponding increase in adipose tissue.

The exposure to hyperoxia promoted a significant decrease of erythrocytes, hemoglobin, and hematocrit in this study. Some studies describe that, in response to a low partial pressure of oxygen (arterial hypoxia), hypoxia-inducible factor-1 (HIF-1), the main transcriptional regulator of the hypoxic environment, is activated and stimulates the production of erythropoietin. This erythrocyte-stimulating hormone, released via the kidneys, acts on the marrow of the long bones, stimulating the production of erythrocytes to compensate for the low concentration of oxygen in the blood. When the cellular oxygen level is adequate, HIF-1 is degraded [43, 46]. It is possible that there was a decrease in HIF-1 and, consequently, in the production of erythrocytes in the animals exposed to hyperoxia.

In this study, the activities of SOD and CAT were evaluated to better understand their contributions to the redox imbalance during exposure to hyperoxia. SOD is the major pulmonary defense against the detrimental effects of superoxide and converts the superoxide anion into H2O2, a substrate of CAT. CAT prevents H2O2 accumulation by converting it into water and oxygen. The accumulation of H2O2 possibly generates, via Fenton and Haber-Weiss reactions, the hydroxyl radical (OH•), which preferentially attacks amino acid side chains such as those of cysteine, histidine, tryptophan, methionine, and phenylalanine, damaging proteins and, as a consequence, causing the loss of enzyme activity [47].
Diets with high concentrations of lipids and carbohydrates lead to an increase in free fatty acids (FFAs), resulting in an increase of mitochondrial β-oxidation and an overload of the electron transport chain, which increases ROS production [39]. Hyperoxia exposes the body to high levels of reactive oxygen species [5, 6], and high levels of ROS can inhibit the activity of antioxidant enzymes [48]. Thus, hyperoxia in animals subjected to a high carbohydrate diet causes cell injury, likely by overloading the cellular antioxidant defense, leading to an increase of ROS. In addition, the oxidative load created can reduce the levels of antioxidant enzymes and inhibit their activities [5, 6]. Our results were similar to those of Nagato et al. (2012), who also reported no significant differences in CAT activity in BALB/c mice exposed to hyperoxia for 24 hours, although there was a decrease in SOD activity [6]. On the other hand, Nagato et al. (2009) observed a decrease of CAT and SOD in Wistar rats exposed to hyperoxia for 90 minutes [5]. Unlike these previous studies, the animals in this study were exposed to hyperoxia after receiving a high carbohydrate diet for 12 weeks. Thus, we believe that the decrease in the activity of these enzymes was due to the association of these two factors.

Besides redox imbalance, hyperoxia causes damage owing to oxidation in the airways, which has been supported by studies in BALB/c mice and Wistar rats. This damage can be detected experimentally by monitoring lipid peroxidation products, such as malondialdehyde [46]. Our results corroborate those of Nagato et al. (2012), who found increased malondialdehyde in BALB/c mice exposed to hyperoxia for 12 and 24 hours [6].

The results of this study, together with those of previous studies [5, 6, 44], suggest that supplemental oxygen is extremely important in clinical practice.
However, special attention should be paid to obese patients, who already have low-intensity chronic inflammation [21] and increased ROS [20], which induce lung inflammation [5, 6, 44].

## 6. Conclusions

This study is the first to report the combined effects of the administration of a high refined carbohydrate diet and the exposure to a high oxygen concentration in adult BALB/c mice. However, more studies should be performed to analyze these effects in other organs and biological systems.

---
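The group comparisons reported throughout the results (one-way ANOVA followed by Bonferroni's multiple comparison test for normally distributed endpoints, and Kruskal-Wallis for the stereological data) can be sketched in Python with SciPy. This is an illustrative sketch only: the group values below are hypothetical placeholders with n = 6 per group, not the study's data, and the Bonferroni step is implemented as manually corrected pairwise t-tests, a common but not unique realization of the procedure.

```python
# Illustrative sketch of the reported statistical workflow (not the authors' code).
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical n=6 measurements per group (placeholders, NOT the study's data).
groups = {
    "CG":    np.array([7.1, 7.4, 6.8, 7.5, 7.0, 7.3]),
    "HG":    np.array([2.5, 2.9, 2.4, 3.1, 2.6, 2.8]),
    "RCDG":  np.array([5.0, 5.6, 5.2, 5.9, 5.1, 5.5]),
    "RCDHG": np.array([4.0, 4.4, 3.9, 4.6, 4.1, 4.3]),
}

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(*groups.values())

# Bonferroni correction: 6 pairwise comparisons among 4 groups,
# so each pairwise t-test is judged against alpha / 6.
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    flag = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {p:.4f} ({flag})")

# Non-normal endpoints (e.g., Table 3) were instead compared with
# Kruskal-Wallis followed by Dunn's post hoc test; the omnibus step is:
h_stat, p_kw = stats.kruskal(*groups.values())
```

SciPy does not ship Dunn's post hoc test; in practice that step is typically delegated to a dedicated package or implemented from the rank sums.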
**Title:** The Effects of the Combination of a Refined Carbohydrate Diet and Exposure to Hyperoxia in Mice
**Authors:** Nicia Pedreira Soares; Keila Karine Duarte Campos; Karina Braga Pena; Ana Carla Balthar Bandeira; André Talvani; Marcelo Eustáquio Silva; Frank Silva Bezerra
**Journal:** Oxidative Medicine and Cellular Longevity (2016)
**Category:** Medical & Health Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2016/1014928
---

## Abstract

Obesity is a multifactorial disease with genetic, social, and environmental influences. This study aims at analyzing the effects of the combination of a refined carbohydrate diet and exposure to hyperoxia on the pulmonary oxidative and inflammatory response in mice. Twenty-four mice were divided into four groups: control group (CG), hyperoxia group (HG), refined carbohydrate diet group (RCDG), and refined carbohydrate diet + hyperoxia group (RCDHG). The experimental diet was composed of 10% sugar, 45% standard diet, and 45% sweet condensed milk. For 24 hours, the HG and RCDHG were exposed to hyperoxia and the CG and RCDG to ambient air. After the exposures were completed, the animals were euthanized, and blood, bronchoalveolar lavage fluid, and lungs were collected for analyses. The HG showed higher levels of interferon-γ in adipose tissue as compared to the other groups and higher levels of interleukin-10 and tumor necrosis factor-α compared to the CG and RCDHG. SOD and CAT activities in the pulmonary parenchyma decreased in the RCDHG as compared to the CG. There was an increase of lipid peroxidation in the HG, RCDG, and RCDHG as compared to the CG. A refined carbohydrate diet combined with hyperoxia promoted inflammation and redox imbalance in adult mice.

---

## Body

## 1. Introduction

Obesity is a public health problem and is correlated with several comorbidities, such as heart failure [1, 2], which, in most cases, requires oxygen supplementation [3]. However, when administering oxygen, professionals should follow a careful method to assess the necessity, time, and dose to be given. Oxygen at high concentrations (hyperoxia) can trigger lung oxidative damage, including damage to components of the extracellular matrix, epithelial and endothelial cell injuries, and lung inflammation [4–6].

According to the World Health Organization, worldwide obesity has doubled since 1980 [7].
In 2005, about 1.6 billion adults over 18 years were overweight, and over 400 million were obese [8]. In 2014, the numbers of overweight and obese cases increased to more than 1.9 billion and 600 million, respectively [7].

The experimental model of obesity that most closely resembles human obesity is based on foods with high levels of refined carbohydrates and lipids [9]. These macronutrients are responsible for the systemic, chronic low-grade inflammation associated with obesity [10]. Carbohydrates trigger lipogenic enzymes through the activation of the carbohydrate-responsive element-binding protein (ChREBP), thus favoring the development of obesity [11]. In obesity, adipocytes release free fatty acids (FFAs) that activate the signaling pathways of inflammation. When FFAs bind to receptors in the cell membrane of macrophages, they activate a complex of kinase enzymes and protein-coding genes involved in the inflammatory response, such as tumor necrosis factor-α (TNF-α). These proteins activate adipocytes, leading to lipolysis that releases more fatty acids and induces several inflammatory genes [12–14]. TNF-α activates the pathway of mitogen-activated protein kinases (MAPKs), responsible for inflammatory gene transcription [15], and can stimulate the infiltration and accumulation of macrophages in adipose tissue because of the inflammation in obesity [16]. In addition, obesity leads to hypertrophy and hyperplasia of adipocytes, which, in turn, cause hypoperfusion and tissue hypoxia [17, 18]. This process causes a decrease in adiponectin production and an increase in the proinflammatory cytokines responsible for inflammation [16, 19].

Obesity and hyperoxia are known to increase reactive oxygen species (ROS) [20, 21]. ROS can be of exogenous or endogenous origin. Endogenous ROS are usually produced as a result of cell metabolism [22, 23].
At low to moderate concentrations, they participate in physiological cellular processes and have a beneficial role in aerobic organisms because of their participation in the regulation of cell signaling, gene expression, and apoptotic mechanisms. However, at high concentrations, ROS may cause damage to cell constituents such as lipids, proteins, and DNA [22]. To counteract ROS, cells have an antioxidant defense system that is either enzymatic or nonenzymatic. Enzymes involved in the primary antioxidant defense system include superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase [22–24].

Extra care should be taken when administering medicinal oxygen to obese patients, who already have chronic low-grade inflammation [21] and increased ROS [20] and may suffer more severe conditions. Thus, this study aimed to analyze the oxidative and inflammatory effects of a high refined carbohydrate diet in mice exposed to hyperoxia.

## 2. Materials and Methods

### 2.1. Experimental Design

Twenty-four BALB/c mice (male, adult, 5–7 weeks old) were housed under controlled conditions in standard laboratory cages (Laboratory of Experimental Nutrition, Department of Food, School of Nutrition, Federal University of Ouro Preto) and given free access to water and food. All in vivo experimental protocols conducted on the animals at the Federal University of Ouro Preto were approved by the ethics committee (#2013/58). The animals were divided into two groups: the first group (G1) received a standard diet, and the second (G2) received a diet rich in refined carbohydrates, composed of 10% sugar, 45% standard diet, and 45% sweet condensed milk, for twelve weeks. Animal body weight and food intake were measured weekly. After the dietary treatment, G1 was randomly divided into the control group (CG) and hyperoxia group (HG), and G2 was randomly divided into the refined carbohydrate diet group (RCDG) and refined carbohydrate diet + hyperoxia group (RCDHG).
For 24 hours, the HG and RCDHG were exposed to 100% oxygen, and the CG and RCDG were exposed to ambient air.

### 2.2. Composition of Diets and Food Intake and Regulation of Body Mass

The animals of the CG and HG were fed standard chow (Labina, Purina; Evialis Group, São Paulo, Brazil), and the RCDG and RCDHG received a high palatability feed, composed of 10% granulated sugar, 45% standard feed, and 45% condensed milk (Nestlé®, São Paulo, Brazil), used to promote obesity in animals [25, 26]. Food intake and body weight gain were measured once a week using a digital scale (Mark®, Series M; Bel Equipment Analytical LTDA, São Paulo, Brazil). To control intake, the diets were weighed before being served to the animals and again after a week.

### 2.3. Oral Glucose Tolerance Test (OGTT)

A week before the end of the experiment, the animals were submitted to the OGTT, as described by Menezes-Garcia and colleagues [25] and Oliveira and colleagues [25, 26], to investigate their insulin sensitivity.

### 2.4. Exposure to Oxygen

All mice (except the CG and RCDG, which inhaled ambient air) were placed in the inhalation chamber and removed after 24 h. An acrylic inhalation chamber (30 cm long, 20 cm wide, and 15 cm high) was used to expose the animals to hyperoxia. 100% oxygen was purchased from White Martins® (White Martins Praxair Inc., São Paulo, Brazil). The oxygen tank was coupled to the inhalation chamber using a silicone conduit [5, 6, 27]. The oxygen concentration was measured continuously with an oxygen cell (C3, Middlesbrough, England). The mice received water and food ad libitum, were kept in individual cages with controlled temperature and humidity (21 ± 2°C and 50 ± 10%, respectively), and were submitted to inverted 12 h light/dark cycles (artificial lights, 7 p.m. to 7 a.m.).

### 2.5. Euthanasia

After 24 hours of oxygen exposure, all animals were subjected to anesthesia with ketamine (130 mg/kg) and xylazine (0.3 mg/kg) and euthanized by exsanguination.
The blood, bronchoalveolar lavage fluid (BALF), and adipose tissues (retroperitoneal, epididymal, and mesenteric) were removed.

### 2.6. Blood Collection

To obtain plasma, two aliquots of blood were collected from each animal in polypropylene tubes containing 15 µL of anticoagulant. One aliquot was sent to the Clinical Analysis Pilot Lab (LAPAC-UFOP) for blood count and white blood cell count measurements. The other aliquot was centrifuged at 10,000 rpm for 15 min, and the supernatant was removed for cholesterol measurement.

### 2.7. Hemogram and Biochemical Analyses of Blood and Plasma

For the complete blood count, whole blood was diluted with saline (1:2), and the erythrocyte hematological parameters, hematocrit and hemoglobin, were evaluated using an electronic counting device (ABX Diagnostics, micro 60, HORIBA®, Tokyo, Japan) at LAPAC-UFOP. Cholesterol concentrations were determined by automatic spectrophotometry using the Random Access Clinical Analyzer (CM-200; Wiener Lab, Rosario, Argentina) and by the enzymatic colorimetric method using a specific kit (Bioclin®; Quibasa, Belo Horizonte, Brazil).

### 2.8. Assessment and Analysis of the BALF

Immediately after euthanasia, the chest of each animal was opened to collect the BALF. The left lung was clamped, the trachea cannulated, and the right lung perfused with 1.5 mL of saline solution. The samples were kept on ice until the end of the procedure to avoid cell lysis. Total, mononuclear, and polymorphonuclear cells were stained with trypan blue, enumerated in a Neubauer chamber (Sigma-Aldrich, MA, USA), and stained again using a fast panoptic coloration kit (Laborclin, Pinhais, Paraná, Brazil) [28, 29]. Differential cell counts were performed on cytospin preparations (Shandon, Waltham, MA, USA) stained with the fast panoptic coloration kit [30].

### 2.9. Tissue Processing and Homogenization

The right lung was clamped, and a cannula was inserted into the trachea.
The airspaces were washed with buffered saline solution (final volume 1.5 mL) maintained on ice. The left lung and epididymal adipose tissue (EAT) were removed and immersed in a fixative solution for 48 hr [6, 30]. The tissue was then processed as follows: tap water bath for 30 min, 70% and 90% alcohol baths for 1 hr each, 2 baths in 100% ethanol for 1 hr each, and embedding in paraffin. For histologic analyses, serial 5 μm sagittal sections were obtained from the left lung and stained with hematoxylin and eosin. The right lung was subsequently homogenized in 1 ml potassium phosphate buffer (pH 7.5) and centrifuged at 1500 ×g for 10 min. The supernatant was collected, and the final volume of all samples was adjusted to 1.5 ml with phosphate buffer. The samples were stored in a freezer (–80°C) for biochemical analyses [30]. ## 2.1. Experimental Design Twenty-four BALB/c mice (male, adults, and 5–7 weeks old) were housed under controlled conditions in standard laboratory cages (Laboratory of Experimental Nutrition, Department of Food, School of Nutrition, Federal University of Ouro Preto) and given free access to water and food. Allin vivo experimental protocols conducted on the animals at the Federal University of Ouro Preto were approved by the ethics committee (#2013/58). The animals were divided into two groups: the first group (G1) received a standard diet, and the second (G2) received a diet rich in refined carbohydrates, composed of 10% sugar, 45% standard diet, and 45% sweet condensed milk, for twelve weeks. The animal body weight and food intake were measured weekly. After dietary treatment, G1 was randomly divided into the control group (CG) and hyperoxia group (HG), and G2 was randomly divided into the refined carbohydrate diet group (RCDG) and refined carbohydrate diet + hyperoxia group (RCDHG). For 24 hours, the HG and RCDHG were exposed to 100% oxygen, and the CG and RCDG were just exposed to ambient air. ## 2.2. 
Composition of Diets and Food Intake and Regulation of Body Mass The animals of the CG and HG were fed standard chow (Labina, Purina; Evialis Group, São Paulo, Brazil), and the RCDG and RCDHG received a high palatability feed, composed of 10% granulated sugar, 45% standard feed, and 45% condensed milk (Nestlé®, São Paulo, Brazil), used to promote obesity in animals [25, 26]. Food intake and body weight gain were measured once a week using a digital scale (Mark®, Series M; Bel Equipment Analytical LTDA, São Paulo, Brazil). To control intake, the diets were weighed before serving to the animals and after a week. ## 2.3. Oral Glucose Tolerance Test (OGTT) A week before the end of the experiment, the animals were submitted to OGTT, as described by Menezes-Garcia and colleagues [25] and Oliveira and colleagues [25, 26], to investigate their insulin sensitivity. ## 2.4. Exposure to Oxygen All mice (except the CG and RCDG, which inhaled ambient air) were placed in the inhalation chamber and removed after 24 h. An acrylic inhalation chamber was used to expose the animals to hyperoxia (30 cm long, 20 cm wide, and 15 cm high). Oxygen 100% was purchased from White Martins® (White Martins Praxair Inc., São Paulo, Brazil). The oxygen tank was coupled to the inhalation chamber using a silicone conduit [5, 6, 27]. The oxygen concentration was measured continuously through an oxygen cell (C3, Middlesbrough, England). The mice received water and foodad libitum, were kept in individual cages with controlled temperature and humidity (21 ± 2°C, 50 ± 10%, respectively), and were submitted to inverted 12 h cycles of light/dark (artificial lights, 7 p.m. to 7 a.m.). ## 2.5. Euthanasia After 24 hours of oxygen exposure, all animals were subjected to anesthesia with ketamine (130 mg/kg) and xylazine (0.3 mg/kg) and euthanized by exsanguination. The blood, bronchoalveolar lavage fluid (BALF), and adipose tissues (retroperitoneal, epididymal, and mesenteric) were removed. ## 2.6. 
Blood Collection
To obtain plasma, two aliquots of blood were collected from each animal in polypropylene tubes containing 15 µL of anticoagulant. One aliquot was sent to the Pilot Laboratory of Clinical Analyses (LAPAC-UFOP) for blood count and white blood cell count measurements. The other aliquot was centrifuged at 10,000 rpm for 15 min, and the supernatant was removed for cholesterol measurement.
## 2.7. Hemogram and Biochemical Analyses of Blood and Plasma
For the complete blood count, whole blood was diluted with saline (1:2), and the erythrocyte hematological parameters, hematocrit, and hemoglobin were evaluated using an electronic counting device (ABX Diagnostics, micro 60, HORIBA®, Tokyo, Japan) at LAPAC-UFOP. Cholesterol concentrations were determined by automatic spectrophotometry using a Random Access Clinical Analyzer (CM-200; Wiener Lab, Rosario, Argentina) and an enzymatic colorimetric method with a specific kit (Bioclin®; Quibasa, Belo Horizonte, Brazil).
## 2.8. Assessment and Analysis of the BALF
Immediately after euthanasia, the chest of each animal was opened to collect the BALF. The left lung was clamped, the trachea was cannulated, and the right lung was perfused with 1.5 mL of saline solution. The samples were kept on ice until the end of the procedure to avoid cell lysis. Total, mononuclear, and polymorphonuclear cells were stained with trypan blue, enumerated in a Neubauer chamber (Sigma-Aldrich, MA, USA), and then stained using a fast panoptic coloration kit (Laborclin, Pinhais, Paraná, Brazil) [28, 29]. Differential cell counts were performed on cytospin preparations (Shandon, Waltham, MA, USA) stained with the fast panoptic coloration kit [30].
## 2.9. Tissue Processing and Homogenization
The right lung was clamped, and a cannula was inserted into the trachea. The airspaces were washed with buffered saline solution (final volume 1.5 mL) maintained on ice.
The left lung and epididymal adipose tissue (EAT) were removed and immersed in a fixative solution for 48 hr [6, 30]. The tissue was then processed as follows: a tap water bath for 30 min, 70% and 90% alcohol baths for 1 hr each, two baths in 100% ethanol for 1 hr each, and embedding in paraffin. For histologic analyses, serial 5 μm sagittal sections were obtained from the left lung and stained with hematoxylin and eosin. The right lung was subsequently homogenized in 1 mL of potassium phosphate buffer (pH 7.5) and centrifuged at 1500 ×g for 10 min. The supernatant was collected, and the final volume of all samples was adjusted to 1.5 mL with phosphate buffer. The samples were stored in a freezer (−80°C) for biochemical analyses [30].
## 3. Antioxidant Defense and Oxidative Stress Biomarkers in Lung Homogenates
We used the formation of thiobarbituric acid reactive substances (TBARS) as an index of lipid peroxidation during an acid-heating reaction, as previously described by Valenca et al. [31]. Briefly, the TBARS level was estimated in accordance with the method described by Lean et al. [32]. The lung homogenate supernatants (1.0 mL) were mixed with 2.0 mL of TCA-TBA-HCl (15% w/v trichloroacetic acid (TCA), 0.375% w/v thiobarbituric acid (TBA), and 0.25 N hydrochloric acid (HCl)). The solution was heated for 15 min in a boiling water bath. After cooling, the precipitates were removed via centrifugation, and the absorbance of the sample at 535 nm was measured. The TBARS level was calculated using the molar absorption coefficient of malondialdehyde (1.56 × 10⁵ M⁻¹ cm⁻¹). The lung homogenates were also used to determine CAT activity. This method was based on the enzymatic decomposition of hydrogen peroxide (H2O2), observed spectrophotometrically at 240 nm for 5 min. Ten microliters of the homogenate supernatant was added to a cuvette containing 100 mM phosphate buffer (pH 7.2), and the reaction was initiated by the addition of 10 mM H2O2.
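Both the TBARS readout and the H2O2 decomposition measured for CAT come down to a Beer-Lambert conversion of absorbance into molar concentration. A minimal sketch of that conversion, assuming a 1 cm cuvette path length and purely illustrative absorbance values (not data from the study):

```python
# Beer-Lambert law: concentration (M) = absorbance / (epsilon * path length)
MDA_EPSILON = 1.56e5   # M^-1 cm^-1, malondialdehyde read at 535 nm (TBARS)
H2O2_EPSILON = 39.4    # M^-1 cm^-1, hydrogen peroxide read at 240 nm (CAT)

def absorbance_to_molar(absorbance: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Convert a spectrophotometer absorbance reading to a molar concentration."""
    return absorbance / (epsilon * path_cm)

# Illustrative readings (assumed values, for demonstration only):
tbars_molar = absorbance_to_molar(0.078, MDA_EPSILON)   # MDA concentration in M
h2o2_molar = absorbance_to_molar(0.394, H2O2_EPSILON)   # remaining H2O2 in M
print(f"TBARS: {tbars_molar * 1e9:.0f} nM, H2O2: {h2o2_molar * 1e3:.0f} mM")
```

Dividing the resulting concentrations by the protein content of each homogenate then yields the per-mg-protein values reported for TBARS and CAT.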
H2O2 decomposition was calculated using the molar absorption coefficient of 39.4 M⁻¹ cm⁻¹, and the results were expressed as activity per mg of protein. One unit of CAT was equivalent to the decomposition of 1 μmol of H2O2 per min [33]. SOD activity was assayed by the spectrophotometric method of Marklund and Marklund [34], using an improved pyrogallol autoxidation inhibition assay. SOD reacts with the superoxide radical (O2⁻), which slows the rate of formation of o-hydroxy-o-benzoquinone and other polymer products. One unit of SOD is defined as the amount of enzyme that reduces the rate of pyrogallol autoxidation by 50%.
### 3.1. Adiposity Index
The adipose pads were removed and weighed to determine the adiposity index, calculated as the sum of the epididymal, retroperitoneal, and mesenteric adipose tissue masses divided by the body weight and multiplied by 100 [26].
### 3.2. Immunoassays of Epididymal Adipose Tissue (EAT)
The epididymal adipose tissue was used to determine the concentrations of the inflammatory mediators TNF-α, IFN-γ, and IL-10, and the plasma was used to determine the leptin levels. For the analysis, the samples were thawed, and excess proteins were removed by acid/salt precipitation, as previously described [10]. Briefly, equal volumes of epididymal adipose tissue or plasma and 1.2% trifluoroacetic acid/1.35 M NaCl were mixed, incubated at room temperature for 10 min, and centrifuged for 5 min at 10,000 rpm. The salt content of the supernatant was adjusted to 0.14 M sodium chloride and 0.01 M sodium phosphate at pH 7.4 prior to determination of the concentrations of TNF-α, IFN-γ, and IL-10 using commercially available ELISA kits (Bio Source International, Inc., CA, USA) and of leptin using a kit from PeproTech (London, United Kingdom), according to the manufacturers' guidelines. All samples were measured in duplicate [9, 35].
### 3.3.
Morphometric and Stereological Analyses
Twenty random images obtained from the histological slides of the lungs and EAT were digitized using a Leica DM5000B optical microscope with Leica Application Suite software and a CM300 digital microcamera (Multiuser Laboratory of the Research Center for Biological Sciences of the Federal University of Ouro Preto). The lung and EAT images were scanned with 40x and 10x objective lenses, respectively. With the aid of Image J software, a representative image at 40x magnification containing a 100 µm ruler was used to calibrate the pixel scale, such that 434 pixels equaled 100 μm. Five alveolar areas were measured in each slide prepared from each animal [36, 37]. Six fields from each animal were captured with a digital camera coupled to a microscope (200x), and the adipocyte area was obtained by randomly measuring 50 adipocytes per slide using the Image J® software (National Institutes of Health, Bethesda, MD, USA). The volume densities of the alveolar air space (Vv[a]) and of the alveolar septa (Vv[sa]) were determined with a test system consisting of sixteen points and a known test area, in which the boundary line was considered forbidden in order to avoid overestimating the number of structures. The test system was matched to a monitor attached to the microscope. The number of points (PP) touching the alveolar septa was related to the total number of test points (PT) in the system using the equation Vv = PP/PT. To obtain uniform and proportional lung samples, we analyzed 18 random fields using a cycloid test system attached to the monitor screen. The reference volume was estimated by point counting using the test point system. A total area of 1.94 mm² was analyzed to determine Vv[sa] in slides stained with hematoxylin and eosin [38].
### 3.4.
Statistical Analysis
Data with a normal distribution were analyzed by unpaired t-test, one-way analysis of variance (ANOVA), or two-way ANOVA followed by Bonferroni's multiple comparison post hoc test and were expressed as mean ± standard error of the mean. For discrete data, we used the Kruskal-Wallis test followed by Dunn's post hoc test and expressed the results as median, minimum, and maximum values. In both cases, the difference was considered significant when the P value was less than 0.05. All analyses were performed with GraphPad Prism, version 5.00 for Windows 7 (GraphPad Software; San Diego, CA, USA).
## 4. Results
### 4.1. Food Intake and Body Weight Gain
The animals were weighed and their food intake was measured weekly to evaluate whether the high refined carbohydrate diet influenced food intake and body weight gain. As shown in Figure 1, the RCDG had a higher body weight gain than the CG from the second week of the experiment, and this gain was maintained over the 12 weeks. However, no significant difference was observed in the amount (g) consumed by the experimental groups (Figure 1).
Figure 1: Body weight gain (a) and food intake (b) over 12 weeks. CG: control group; RCDG: refined carbohydrate diet group. Data are presented as mean ± standard error of the mean of six animals per group. a: P<0.05, significantly different values for RCDG versus CG. Comparisons were performed using two-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
### 4.2. Body Adiposity Index, Adipocyte Area, and Leptin
The diet model induced obesity, as confirmed by the body adiposity index, which increased in the RCDG compared to the other groups (Figure 2), and by the adipocyte area, evaluated by morphometric analysis of EAT sections, which increased in the RCDHG compared to the CG and HG (Figures 2 and 3). There was also an increase in leptin in the RCDG and RCDHG compared to the CG (Figure 4).
Figure 2 Body adiposity (a) and adipocyte area (b).
CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data are presented as mean ± standard error of the mean of six animals per group. a,b,d: P<0.05, significantly different values for RCDG versus CG, HG, and RCDHG, respectively. a,b: P<0.05, significantly different values for RCDHG versus CG and HG. Comparisons were performed using one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
Figure 3: Histological analysis of the epididymal adipose tissue sections stained with hematoxylin and eosin. Bar = 50 μm. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group.
Figure 4: Plasma leptin levels. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data are presented as mean ± standard error of the mean of six animals per group. a: P<0.05, significantly different values for RCDG and RCDHG in relation to CG. Comparisons were performed using one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
### 4.3. Glucose and Cholesterol Metabolism
According to the OGTT results, the RCDG presented higher glycemic levels at 15, 30, and 60 min after the glucose overload compared to the CG. There was also a significant increase in total cholesterol in the RCDG, showing that this diet was able to induce insulin resistance and hypercholesterolemia in these animals (Figure 5).
Figure 5: Plasma glucose (a) and cholesterol (b) levels. CG: control group; RCDG: refined carbohydrate diet group. Data are presented as mean ± standard error of the mean of six animals per group. a: P<0.05, significantly different values for RCDG compared to the CG. Comparisons were made using two-way ANOVA with Bonferroni's multiple comparison post hoc test (a) and an unpaired t-test (b).
### 4.4.
Total and Differential Cell Count in the BALF
The dynamics of cell recruitment into the BALF, identified by the presence of leukocytes, lymphocytes, neutrophils, and macrophages, were evaluated under the high refined carbohydrate diet and exposure to hyperoxia. As shown in Table 1, there was an increase in the number of total leukocytes in the RCDHG compared to the CG and HG, as well as an increase in macrophages in the RCDG and RCDHG compared to the CG and HG.
Table 1: Inflammatory cells in the bronchoalveolar lavage of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
| --- | --- | --- | --- | --- | --- |
| Leukocytes (×10³/mL) | 140.0 ± 5.3 | 146.3 ± 6.0 | 161.3 ± 5.2 | 178.3 ± 4.8 a,b | <0.05 |
| Macrophages (×10³/mL) | 92.1 ± 9.3 | 99.6 ± 7.4 | 128.5 ± 5.0 a,b | 152.9 ± 7.4 a,b | <0.05 |
| Lymphocytes (×10³/mL) | 18.2 ± 4.5 | 9.7 ± 3.5 | 7.1 ± 2.3 | 7.6 ± 3.5 | >0.05 |
| Neutrophils (×10³/mL) | 29.7 ± 6.3 | 37.0 ± 5.1 | 25.7 ± 5.0 | 17.9 ± 2.6 | >0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. a,b: significant differences (P<0.05) compared to CG and HG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
### 4.5. Total and Differential Blood Cell Count
The total and differential numbers of cells in the blood were counted. The total leukocytes and lymphocytes in the blood decreased in the HG compared to the CG and RCDG and decreased in the RCDHG compared to the CG (Table 2). Exposure to hyperoxia promoted the significant decrease in neutrophils observed in the HG compared to the CG. Furthermore, the monocytes decreased in the HG and RCDHG compared to the CG.
Table 2: Total and differential leukocyte count in the blood of the experimental groups.
| | CG | HG | RCDG | RCDHG | P |
| --- | --- | --- | --- | --- | --- |
| Leukocytes (×10³/mL) | 7.2 ± 0.6 | 2.7 ± 0.4 a,c | 5.4 ± 0.8 | 4.2 ± 0.5 a | <0.05 |
| Lymphocytes (×10³/mL) | 4.9 ± 4.0 | 1.6 ± 0.3 a,c | 3.6 ± 0.6 | 2.5 ± 0.3 a | <0.05 |
| Neutrophils (×10³/mL) | 1.7 ± 0.1 | 0.9 ± 0.1 a | 1.3 ± 0.2 | 1.4 ± 0.2 | <0.05 |
| Monocytes (×10³/mL) | 0.6 ± 0.1 | 0.2 ± 0.0 a | 0.5 ± 0.1 | 0.3 ± 0.0 a | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. a: significant differences compared to CG; a,c: significant differences compared to CG and RCDG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
### 4.6. Stereological Evaluations of Lung Parenchyma of the Experimental Groups
Morphometric analysis showed no significant differences in the alveolar airspace volume density (Vv[a]) or in Vv[sa] in the HG, RCDG, and RCDHG compared to the CG (Table 3 and Figure 6).
Table 3: Alveolar airspace volume density and volume density of the alveolar septa of the groups.

| | CG | HG | RCDG | RCDHG | P |
| --- | --- | --- | --- | --- | --- |
| Vv[a] (%) | 37.5 (18.7/43.7) | 39.1 (25.0/46.8) | 39.1 (31.2/46.8) | 34.4 (25.0/37.5) | >0.05 |
| Vv[sa] (%) | 53.1 (43.7/56.2) | 45.3 (43.7/62.5) | 43.7 (40.6/50.0) | 50.0 (43.7/50.0) | >0.05 |

Vv[a]: alveolar airspace volume density; Vv[sa]: volume density of the alveolar septa. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. Data were expressed as median (minimum/maximum) (n=6) and were analyzed by the Kruskal-Wallis test followed by Dunn's post hoc test (P=0.95).
Figure 6: Photomicrographs (400x) of hematoxylin and eosin stained lung sections of the CG, HG, RCDG, and RCDHG. Bar = 50 μm. Cell influx into the lung parenchyma. CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group.
Arrows indicate the influx of inflammatory cells into the lung parenchyma in the groups exposed to hyperoxia and given a refined carbohydrate diet.
### 4.7. CBC and Biochemical Analysis of the Blood
Clinical hematology is used to evaluate the general state of health of the animal as well as to detect specific diseases [39]. The HG animals had lower levels of erythrocytes, hemoglobin, and hematocrit compared to the other groups (Table 4).
Table 4: Blood count of the experimental groups.

| | CG | HG | RCDG | RCDHG | P |
| --- | --- | --- | --- | --- | --- |
| RBC (×10⁶/µL) | 9.5 ± 0.1 | 7.6 ± 0.4 a,c,d | 9.3 ± 0.3 | 9.7 ± 0.2 | <0.05 |
| Hemoglobin (g/dL) | 17.3 ± 0.3 | 14.4 ± 0.8 a,c,d | 17.7 ± 0.3 | 17.6 ± 0.3 | <0.05 |
| Hematocrit (%) | 52.9 ± 1.0 | 44.2 ± 2.4 a,c,d | 55.4 ± 1.2 | 54.4 ± 1.4 | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. a,c,d: significant differences (P<0.05) compared to CG, RCDG, and RCDHG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
### 4.8. Immunoenzymatic Assays on the EAT
The immunoenzymatic assays performed on the EAT showed that the HG had higher amounts of IFN-γ compared to the CG, RCDG, and RCDHG and higher levels of IL-10 and TNF-α compared to the CG and RCDHG (Table 5).
Table 5: Levels of inflammatory markers in epididymal adipose tissue.

| | CG | HG | RCDG | RCDHG | P |
| --- | --- | --- | --- | --- | --- |
| IFN-γ | 549.4 ± 84.8 | 1027.0 ± 149.4 a,c,d | 558.1 ± 40.7 | 384.1 ± 88.4 | <0.05 |
| IL-10 | 1,150.0 ± 343.0 | 2,606.0 ± 568.1 a,d | 1,253.0 ± 166.8 | 592.5 ± 201.0 | <0.05 |
| TNF-α | 528.0 ± 148.7 | 1,180.0 ± 245.6 a,d | 608.7 ± 60.7 | 247.2 ± 79.5 | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. a,c,d: significant differences (P<0.05) compared to CG, RCDG, and RCDHG; a,d: significant differences (P<0.05) compared to CG and RCDHG, respectively.
Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
### 4.9. Analysis of Redox Imbalance and Damage Caused by Oxidation
The antioxidant enzymes SOD and CAT are generally regulated by oxidative stress and are responsible for the oxidative balance in the lungs. As shown in Table 6, SOD activity in the lung parenchyma decreased in the RCDHG compared to the CG and HG, and CAT activity decreased in the RCDHG compared to the CG. The TBARS levels revealed a progressive increase in lipid peroxidation in the HG, RCDG, and RCDHG compared to the CG, as well as in the RCDG compared to the HG and RCDHG (Table 6).
Table 6: Activities of SOD and CAT and TBARS levels in lung samples from the CG, HG, RCDG, and RCDHG.

| | CG | HG | RCDG | RCDHG | P |
| --- | --- | --- | --- | --- | --- |
| SOD (U/mg prot) | 26.1 ± 1.8 | 24.4 ± 2.0 | 20.9 ± 1.7 | 17.5 ± 1.4 a,b | <0.05 |
| CAT (U/mg prot) | 0.8 ± 0.0 | 0.7 ± 0.1 | 0.6 ± 0.1 | 0.5 ± 0.1 a | <0.05 |
| TBARS (nM/mg prot) | 0.1 ± 0.0 | 0.3 ± 0.0 a | 0.6 ± 0.0 a,b | 0.8 ± 0.0 a,b,c | <0.05 |

CG: control group; HG: hyperoxia group; RCDG: refined carbohydrate diet group; RCDHG: refined carbohydrate diet + hyperoxia group. a: significant difference (P<0.05) compared to CG; a,b: significant differences (P<0.05) compared to CG and HG; a,b,c: significant differences (P<0.05) compared to CG, HG, and RCDG, respectively. Data were expressed as mean ± SEM (n=6) and were analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test.
## 5. Discussion
This study showed that a high refined carbohydrate diet increased body mass and adiposity in the experimental animals. This diet has been employed as an obesity induction model in rats and mice and has been accompanied by increased body weight, adiposity, leptin levels, and plasma concentrations of cholesterol, glucose, and insulin [26, 40]. Our findings showed that a high refined carbohydrate diet resulted in higher resistance to insulin, hypercholesterolemia, and increased levels of leptin in BALB/c mice, corroborating those of Oliveira et al. [26].
Obesity results in several complications of glucose and lipid metabolism, such as insulin resistance, type 2 diabetes, and hyperlipidemia, leading to metabolic syndrome and cardiovascular diseases [21, 38].

Studies have shown that the increased adiposity caused by a highly palatable diet tends to be accompanied by greater food intake [25, 26]; however, food intake was not affected in our experimental model. We can instead speculate on the effects of diet composition on metabolic responses, since nutrients can act systemically as cellular signals [26]. The higher amount of sucrose in the high refined carbohydrate diet, compared to the control diet, could promote increased activation of lipogenic enzymes through the activation of ChREBP, as well as an increased inflammatory response and insulin resistance [11, 26]. In addition, De Lima et al. (2008) pointed out that high amounts of sucrose in the diet lead to a higher glycemic index, elevating blood glucose and postprandial insulin concentrations and favoring fat storage [41]. Therefore, the high carbohydrate concentration of the highly palatable feed likely contributes to increased body weight, adiposity, adipocyte area, leptin levels, glucose intolerance, and hypercholesterolemia.

Studies have shown that both obesity and hyperoxia trigger inflammatory processes [20, 21]. To evaluate whether hyperoxia and/or a high refined carbohydrate diet cause an influx of peripheral blood cells into the lung parenchyma, the total and differential leukocytes present in the bronchoalveolar lavage (BAL) and blood of the animals were determined. Our results corroborate those of a previous study, which reported an increase of macrophages and neutrophils in the BAL of BALB/c mice exposed to hyperoxia for 24 h, although stereological analysis showed no significant difference in Vva among the groups [6].
Our data suggest that blood cells were recruited to an inflammation site, as evidenced by the decrease in the number of leukocytes and lymphocytes in the blood. Our data also showed that hyperoxia may promote inflammation in the adipose tissue of eutrophic mice, as evidenced by an increase in the proinflammatory cytokines IFN-γ and TNF-α. Macrophages are the major source of TNF-α and other proinflammatory molecules in adipose tissue [42]. Therefore, we suggest that, in addition to being recruited to the lung parenchyma, these cells also migrated to adipose tissue. Oliveira et al. (2013) demonstrated that, along with an increase in macrophages in adipose tissue, there is an increase in the number of regulatory T lymphocytes (Tregs), indicating a counterregulatory mechanism to suppress acute inflammation [26]. Since Tregs are directly related to the increase of IL-10 [43], this explains the increase of IL-10 in the animals exposed to hyperoxia.

Interferon gamma (IFN-γ) is known to be released by inflammatory cells such as lymphocytes after exposure to hyperoxia [44]. Our results showed an increase in IFN-γ levels in the adipose tissue of the HG and a decrease of lymphocytes in the blood, indicating possible migration of these cells into the adipose tissue. However, when the animals were subjected to both proinflammatory factors (diet and hyperoxia), there were no significant differences in cytokine levels in adipose tissue, probably because the adipose tissue injury was secondary to that caused by hyperoxia. Nagato et al. (2012) reported that BALB/c mice exposed to 24 hours of hyperoxia showed an increase in the levels of TNF-α and IL-6 in the lung [6]. Furthermore, Naura et al. (2009) showed that animals fed a high-fat diet had increased levels of IFN-γ and TNF-α in BALF [47].
Thus, we believe that the increase in cytokine levels in the lungs of the animals subjected to hyperoxia and given a refined carbohydrate diet occurred due to the recruitment of inflammatory cells into the lung, without a corresponding increase in adipose tissue.

The exposure to hyperoxia promoted a significant decrease of erythrocytes, hemoglobin, and hematocrit in this study. Some studies describe that, in response to the low partial oxygen pressure of arterial hypoxia, hypoxia-inducible factor-1 (HIF-1), the main transcriptional regulator of the hypoxic response, is activated. HIF-1 stimulates renal production of erythropoietin, the erythrocyte-stimulating hormone, which acts on the marrow of the long bones to increase erythrocyte production and compensate for the low oxygen concentration in the blood. When the cellular oxygen level is adequate, HIF-1 is degraded [43, 46]. It is possible that HIF-1, and consequently erythrocyte production, decreased in the animals exposed to hyperoxia.

In this study, the activities of SOD and CAT were evaluated to better understand their contributions to the redox imbalance during exposure to hyperoxia. SOD is the major pulmonary defense against the detrimental effects of superoxide and converts it to H2O2, a substrate of CAT. CAT prevents H2O2 accumulation by converting it into water and oxygen. The accumulation of H2O2 can generate, via Fenton and Haber-Weiss reactions, the hydroxyl radical (OH∙), which attacks amino acid side chains, preferentially cysteine, histidine, tryptophan, methionine, and phenylalanine, damaging proteins and consequently causing loss of enzyme activity [47].
Diets with high concentrations of lipids and carbohydrates lead to an increase in free fatty acids (FFAs), increasing mitochondrial β-oxidation and overloading the electron transport chain, which increases ROS production [39]. Hyperoxia exposes the body to high levels of reactive oxygen species [5, 6], and high levels of ROS can inhibit the activity of antioxidant enzymes [48]. Thus, hyperoxia in animals subjected to a high carbohydrate diet causes cell injury, likely by overloading the cellular antioxidant defense and thereby increasing ROS. In addition, the oxidative load created can reduce the levels of antioxidant enzymes and inhibit their activities [5, 6]. Our results are similar to those of Nagato et al. (2012), who also reported no significant difference in CAT activity in BALB/c mice exposed to hyperoxia for 24 hours, although SOD activity decreased [6]. On the other hand, Nagato et al. (2009) observed a decrease of both CAT and SOD in Wistar rats exposed to hyperoxia for 90 minutes [5]. Unlike these previous studies, the animals in this study were exposed to hyperoxia after receiving a high carbohydrate diet for 12 weeks. Thus, we believe that the decrease in the activity of these enzymes was due to the association of the two factors. Besides redox imbalance, hyperoxia causes oxidative damage in the airways, as supported by studies in BALB/c mice and Wistar rats. This damage can be detected experimentally by monitoring lipid peroxidation products, such as malondialdehyde [46]. Our results corroborate those of Nagato et al. (2012), who found increased malondialdehyde in BALB/c mice exposed to hyperoxia for 12 and 24 hours [6].

The results of this study, together with previous studies [5, 6, 44], suggest that supplemental oxygen is extremely important in clinical practice.
However, special attention should be paid to obese patients, who already have low-intensity chronic inflammation [21] and increased ROS [20], both of which favor lung inflammation [5, 6, 44].

## 6. Conclusions

This study is the first to report the combined effects of the administration of a high refined carbohydrate diet and exposure to a high oxygen concentration in adult BALB/c mice. However, more studies should be performed to analyze these effects in other organs and biological systems.

---

*Source: 1014928-2016-11-29.xml*
2016
# Replication Past the γ-Radiation-Induced Guanine-Thymine Cross-Link G[8,5-Me]T by Human and Yeast DNA Polymerase η

**Authors:** Paromita Raychaudhury; Ashis K. Basu
**Journal:** Journal of Nucleic Acids (2010)
**Publisher:** SAGE-Hindawi Access to Research
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.4061/2010/101495

---

## Abstract

The γ-radiation-induced intrastrand guanine-thymine cross-link, G[8,5-Me]T, hinders replication in vitro and is mutagenic in mammalian cells. Herein we report in vitro translesion synthesis of G[8,5-Me]T by human and yeast DNA polymerase η (hPol η and yPol η). dAMP misincorporation opposite the cross-linked G by yPol η was preferred over correct incorporation of dCMP, but further extension was 100-fold less efficient for G∗:A compared to G∗:C. For hPol η, both incorporation and extension were more efficient with the correct nucleotides. To evaluate translesion synthesis in the presence of all four dNTPs, we have developed a plasmid-based DNA sequencing assay, which showed that yPol η was more error-prone. Mutational frequencies of yPol η and hPol η were 36% and 14%, respectively. Targeted G→T was the dominant mutation for both DNA polymerases, but yPol η induced targeted G→T at a frequency of 23%, compared to 4% for hPol η. For yPol η, targeted G→T and G→C constituted 83% of the mutations. By contrast, with hPol η, semitargeted mutations (7.2%), that is, mutations at bases near the lesion, occurred at a frequency comparable to that of the targeted mutations (6.9%). The kinds of mutations detected with hPol η showed significant similarities to the mutational spectrum of G[8,5-Me]T in human embryonic kidney cells.

---

## Body

## 1. Introduction

DNA-DNA interstrand and intrastrand cross-links are strong blocks of DNA replication, and understanding the details of polymerase bypass of these complex lesions is of major interest [1–5].
Double-base DNA lesions are formed at substantial frequency by ionizing radiation and by metal-catalyzed H2O2 reactions (reviewed in [6]). A major type of DNA damage under anoxic conditions is an intrastrand cross-link in which C8 of guanine is linked to the 5-methyl group of an adjacent thymine; the G[8,5-Me]T cross-link is formed at a much higher rate than the T[5-Me,8]G cross-link (Figure 1) [7]. Additional thymine-purine cross-links have been isolated from γ-irradiated DNA in oxygen-free aqueous solution [8]. Wang and coworkers identified structurally similar guanine-cytosine and guanine-5-methylcytosine cross-links in DNA exposed to γ- or X-rays [9–11]. The G[8,5-Me]T cross-link is formed in a dose-dependent manner in human cells exposed to γ-rays [12], and the G[8,5]C cross-link is formed at a slightly lower level [13].

Figure 1: Chemical structures of the two γ-radiation-induced intrastrand cross-links, G[8,5-Me]T and T[5-Me,8]G.

These intrastrand cross-links destabilize the DNA double helix [14], and UvrABC, the excision nuclease proteins from Escherichia coli, can excise them [15, 16]. Using purified DNA polymerases, it was shown that G[8,5-Me]T and G[8,5]C are strong blocks of replication in vitro [12, 17]. For G[8,5-Me]T, primer extension is terminated after incorporation of dAMP opposite the 3′-T by exo-free Klenow fragment and Pol IV (dinB) of Escherichia coli, whereas Taq polymerase is completely blocked at the nucleotide preceding the cross-link [17]. However, yeast polymerase η (yPol η), a Y-family DNA polymerase from Saccharomyces cerevisiae, can bypass both G[8,5-Me]T and G[8,5]C cross-links with reduced efficiency [12, 18].
For both lesions, nucleotide incorporation opposite the 3′ base of the cross-link is accurate, but incorporation of dAMP and dGMP opposite the cross-linked G is favored by yPol η over that of the correct nucleotide, dCMP [12, 18].

We have recently compared translesion synthesis of G[8,5-Me]T with T[5-Me,8]G in simian and human embryonic kidney cells and found that both cross-links are strongly mutagenic and that the two lesions show interesting patterns of mutations, including a high frequency of semitargeted mutations occurring a few bases 5′ or 3′ to the cross-link [19]. One can anticipate a role for one or more Y-family DNA polymerases in bypassing these replication-blocking lesions, and we noted that purified human DNA polymerase η (hPol η) incorporates dCMP preferentially opposite the G of the G[8,5-Me]T cross-link, in contrast to yPol η, which incorporates dAMP and dGMP much more readily [12, 19]. However, the previous preliminary studies did not examine the kinetics of polymerase extension beyond the lesion site, nor were the full-length extension products analyzed. The kinetics of nucleotide incorporation are influenced by DNA damage not only at the lesion site but up to at least 3 bases 5′ to the lesion [20]. Therefore, the incorporation pattern opposite the lesion provides only part of the information on lesion bypass. In the current paper, we have evaluated translesion synthesis of the G[8,5-Me]T cross-link by these two DNA polymerases more critically by determining single nucleotide incorporation kinetics and characterizing the full-length extension products in the presence of all four dNTPs. We report herein that G[8,5-Me]T bypass by yPol η is much more error-prone than by hPol η. We also show that the mutational signatures of these two polymerases are different.

## 2. Materials and Methods

### 2.1. Materials

[γ-32P] ATP was supplied by Du Pont New England Nuclear (Boston, MA).
Recombinant human and yeast DNA polymerases η were purchased from Enzymax, LLC (Lexington, KY). EcoR V restriction endonuclease, T4 DNA ligase, and T4 polynucleotide kinase were obtained from New England Biolabs (Beverly, MA). E. coli DL7 (AB1157, lacΔU169, uvr+) was from J. Essigmann (MIT, Cambridge, MA). The pMS2 phagemid was a gift from Masaaki Moriya (SUNY, Stony Brook, NY).

### 2.2. Methods

#### 2.2.1. Synthesis and Characterization of Oligonucleotides

The lesion-containing oligonucleotides were synthesized and characterized as reported in [15]. Unmodified oligonucleotides were analyzed by MALDI-TOF MS, which gave a molecular ion with a mass within 0.005% of theoretical, whereas adducted oligonucleotides were analyzed by ESI-MS in addition to digestion followed by HPLC analysis.

#### 2.2.2. Construction of 26-mer and 36-mer Containing the G[8,5-Me]T Cross-Link

The 26-mer G[8,5-Me]T template, 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′, was constructed by ligating a 5′-phosphorylated 14-mer, 5′-ATCGCTTGCAGGGG-3′ (~7.5 nmol), to the G[8,5-Me]T cross-linked 12-mer, 5′-GTGCĜTGTTTGT-3′ (~5 nmol), in the presence of an 18-nucleotide complementary oligonucleotide, 5′-GCAAGCGATACAAACACG-3′ (~7.5 nmol), as described [19, 21]. Similarly, a 12-mer, 5′-CCUGGAAGCGAU-3′ (~7.5 nmol), a 5′-phosphorylated G[8,5-Me]T 12-mer (~5 nmol), and a 5′-phosphorylated 12-mer, 5′-AUCGCUGCUACC-3′ (~7.5 nmol), were annealed to a complementary 26-mer, 5′-GCAGCGATACAAACACGCACATCGCT-3′ (~7.5 nmol), and ligated in the presence of T4 DNA ligase to prepare a G[8,5-Me]T cross-linked 36-mer, 5′-CCUGGAAGCGAUGTGCĜTGTTTGTAUCGCUGCUACC-3′. The oligonucleotides were separated by electrophoresis on a 16% polyacrylamide-8 M urea gel. The ligated product bands were visualized by UV shadowing and excised. The 26-mers and 36-mers were desalted on a Sephadex G-25 (Sigma) column and stored at -20°C until further use.
#### 2.2.3. In Vitro Nucleotide Incorporation and Chain Extension

To determine the nucleotide preferentially incorporated opposite the G[8,5-Me]T cross-link, steady-state kinetic analyses were performed by the method of Goodman and coworkers [22, 23]. The primed template was obtained by annealing a 5-fold molar excess of the modified or control 26-mer template (~20 ng) to a complementary 5′-32P-labeled primer. Primer extension under standing start conditions was carried out with hPol η or yPol η (6.4 nM) with individual dNTPs or a mixture of all four dNTPs in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. The reactions were terminated by adding an equal volume of 95% (v/v) formamide, 20 mM EDTA, 0.02% (w/v) xylene cyanol, and 0.02% (w/v) bromophenol blue and heating at 90°C for 2 min, and the products were resolved on a 20% polyacrylamide gel containing 8 M urea. The DNA bands were visualized and quantitated using a PhosphorImager. The dNTP concentration and incubation time were optimized to ensure that primer extension was less than 20%. Km and kcat were extrapolated from the Michaelis-Menten plot of the kinetic data.

#### 2.2.4. Analysis of the Full-Length Bypass Products Using the pMS2 Vector

The ss pMS2 shuttle vector DNA (58 pmol, 100 μg) was digested with an excess of EcoR V (300 pmol, 4.84 μg) for 1 h at 37°C, followed by overnight incubation at room temperature. A 36-mer scaffold oligonucleotide containing the G[8,5-Me]T cross-link (or a control) was annealed overnight at 16°C to form the gapped DNA. The gapped plasmid was incubated with hPol η or yPol η and a mixture of all four dNTPs (25 mM each) in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. DNA ligase (200 units) was added, and the pMS2 mixture containing the DNA polymerase, dNTPs, and so forth was ligated overnight at 16°C.
The scaffold oligonucleotide was digested by treatment with uracil glycosylase and exonuclease III, the proteins were extracted with phenol/chloroform, and the DNA was precipitated with ethanol. The final construct was dissolved in deionized water and used to transform E. coli DL7 cells. The transformants were randomly picked and analyzed by DNA sequencing.

## 3. Results

### 3.1. In Vitro Bypass by DNA Polymerase η

A 26-mer template, 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′, which contained the G[8,5-Me]T cross-link (ĜT) at the 5th and 6th bases from the 5′ end, was constructed. The DNA sequence of the first 12 nucleotides in this template was taken from codons 272–275 of the p53 gene, in which the G[8,5-Me]T cross-link was incorporated at the second and third nucleotides of codon 273, a well-known mutational hotspot for human cancer [24]. We used both running start and standing start conditions to evaluate bypass of the cross-link. Template-primer complex (50 nM) was incubated with increasing concentrations of hPol η and yPol η at 37°C for 30 min in the presence of all four dNTPs (100 μM). For the running start experiments, a 5′-32P-radiolabeled 14-mer primer, 5′-CTGCAAGCGATACA-3′, was annealed to the template so that its 3′ end was 3 bases 3′ to the cross-link.
As shown in Figure 2, G[8,5-Me]T was a strong block for both DNA polymerases. With 5 nM hPol η, 80% of the primer on the control template extended to 22-mer and 23-mer (full-length) products, whereas on the G[8,5-Me]T template less than 1% extended to the full-length product, and a major block was at the cross-linked G (19-mer). With 20 nM hPol η, nearly 75% was blocked after incorporating a base opposite the cross-linked G (19-mer), and the full-length product increased only to ~10%. The full-length product increased to ~18% with 50 nM hPol η. In a similar experiment with yPol η, unlike with the human enzyme, the major blocks were at the 19-mer and 20-mer (i.e., opposite the cross-linked G and its 5′ neighbor). With 50 nM yPol η, 8% of the primer extended to the full-length 23-mer product.

Figure 2: Extension of a 14-mer primer by varying concentrations (5, 10, 15, 20, and 50 nM) of hPol η (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs. The experiments were carried out at 37°C for 30 min.

With hPol η and yPol η at 50 nM, a substantial fraction (18% and 8%, resp.) of the primer extended to full-length products in 30 min, so we chose 50 nM Pol η for the subsequent experiments. As shown in Figure 3, in the presence of all four dNTPs, extension of a 14-mer primer on the control template rapidly generated a full-length extension product (a 23-mer) as well as a blunt-end addition product (a 24-mer) in 5 min with 50 nM hPol η, whereas on the cross-linked template extension stalled after adding bases opposite the cross-linked T and G, generating a 19-mer. Interestingly, hPol η did not stall before either of the cross-linked bases but was unable to continue synthesis after incorporating a dNMP opposite the cross-linked G. Longer incubation allowed further extension, including a small fraction of full-length product, but even after 2 h the 19-mer band was the most pronounced extension product.
The result was qualitatively similar with yPol η, except that the amount of full-length product increased only marginally with time and the enzyme stalled both after incorporating a nucleotide opposite the cross-linked G (19-mer) and after incorporating a nucleotide opposite its 5′ neighbor (20-mer). Standing start experiments were carried out, and the extension of the primer by one nucleotide was plotted against increasing dNTP concentration to determine the initial velocity of the polymerase-catalyzed reaction, as shown in Figure 4. From these plots, the steady-state kinetic parameters Km and kcat for nucleotide incorporation opposite the cross-linked G, and the same for the control, were determined (Tables 1 and 2). For hPol η, the catalytic efficiency (kcat/Km) of dCMP incorporation opposite the cross-linked G was decreased 17-fold, whereas extension to the next base was decreased 5-fold relative to control. By contrast, for yPol η, dCMP incorporation was decreased 1,000-fold and extension to the next base was decreased 12-fold relative to control. This suggests that yPol η had more difficulty in bypassing G[8,5-Me]T than hPol η. As reported before [12, 19], in contrast to hPol η, which preferentially incorporates the correct nucleotide opposite G[8,5-Me]T, yPol η was much more error-prone, and insertion of dAMP opposite the cross-linked G was favored over that of the correct nucleotide, dCMP (Tables 1 and 2). In fact, dAMP misincorporation opposite the cross-linked G was more than 20 times as efficient as dCMP incorporation by yPol η. However, with yPol η the extension was 100-fold slower for the G*:A pair compared to the G*:C pair, whereas for hPol η it was about 13-fold slower. In each case, the higher catalytic efficiency was due to a much smaller Km.
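The fold-differences quoted above can be rechecked directly from the kcat and Km values reported in Tables 1 and 2 (catalytic efficiency = kcat/Km, in μM⁻¹ min⁻¹); the recomputed ratios come out near the quoted ~20-fold, ~100-fold, and ~13-fold figures. The helper name `eff` is ours, not the paper's:

```python
# Recompute efficiency ratios from the tabulated kcat (min^-1) and Km (uM).
def eff(kcat, km):
    """Catalytic efficiency kcat/Km in uM^-1 min^-1."""
    return kcat / km

# yPol eta, incorporation opposite the cross-linked G (Table 2)
y_inc_dCTP = eff(1.99, 11.2)   # correct nucleotide
y_inc_dATP = eff(2.2, 0.59)    # misincorporation
print(f"yPol eta, dATP vs dCTP incorporation: {y_inc_dATP / y_inc_dCTP:.0f}-fold")

# Extension past the lesion from G*:C vs G*:A primer termini
y_ext_ratio = eff(1.6, 0.31) / eff(1.2, 22.0)   # yPol eta (Table 2)
h_ext_ratio = eff(2.0, 0.24) / eff(1.7, 2.6)    # hPol eta (Table 1)
print(f"G*:A extension slower than G*:C: yPol eta ~{y_ext_ratio:.0f}-fold, "
      f"hPol eta ~{h_ext_ratio:.0f}-fold")
```

Note that in each case the large ratio is driven mostly by the difference in Km rather than kcat, consistent with the statement above.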
When nucleotide incorporation fidelity opposite the cross-linked G and its 5′ base was considered, dCMP incorporation over dAMP misincorporation was 200-fold more efficient for hPol η, whereas it was only 5-fold more efficient for yPol η. Nevertheless, it seems that dCMP was preferred opposite the cross-linked G for bypass of G[8,5-Me]T by both DNA polymerases, although the ability of yPol η to discriminate against the wrong nucleotide was not high.

Table 1: Kinetic parameters for dCTP and dATP incorporation and chain extension by human DNA polymerase η (6.4 nM) on undamaged and G[8,5-Me]T cross-link-containing substrates. Template: 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′, annealed to a 5′-32P-labeled primer; X denotes the nucleotide paired opposite the (cross-linked) G.

Nucleotide incorporation:

| Substrate | dNTP | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Finc a |
|---|---|---|---|---|---|
| Undamaged | dCTP | 7.6 ± 0.1 | 0.02 ± 0.003 | 380 | 1.0 |
| Undamaged | dATP | 6.25 ± 0.01 | 6.5 ± 0.03 | 0.96 | 2.5 × 10⁻³ |
| G[8,5-Me]T | dCTP | 2.43 ± 0.02 | 0.11 ± 0.002 | 22.1 | 5.8 × 10⁻² |
| G[8,5-Me]T | dATP | 1.75 ± 0.1 | 1.07 ± 0.01 | 1.63 | 4.2 × 10⁻³ |

Chain extension b:

| Substrate | X:G | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Fext a |
|---|---|---|---|---|---|
| Undamaged | C:G | 3.9 ± 0.2 | 0.09 ± 0.005 | 43.4 | 1.0 |
| Undamaged | A:G | 4.7 ± 0.1 | 5.1 ± 0.03 | 0.9 | 2.1 × 10⁻² |
| G[8,5-Me]T | C:G* | 2.0 ± 0.02 | 0.24 ± 0.004 | 8.3 | 0.2 |
| G[8,5-Me]T | A:G* | 1.7 ± 0.01 | 2.6 ± 0.03 | 0.65 | 1.5 × 10⁻² |

a Fidelity (F) of incorporation or extension was determined as (kcat/Km)incorrect/(kcat/Km)correct. b Steady-state kinetics for dGTP incorporation opposite the C immediately following the X:G or X:G* pair.

Table 2: Kinetic parameters for dCTP and dATP incorporation and chain extension by yeast DNA polymerase η (6.4 nM) on undamaged and G[8,5-Me]T cross-link-containing substrates. Template and primer as in Table 1.

Nucleotide incorporation:

| Substrate | dNTP | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Finc a |
|---|---|---|---|---|---|
| Undamaged | dCTP | 7.3 ± 0.02 | 0.04 ± 0.002 | 182.5 | 1.0 |
| Undamaged | dATP | 5.3 ± 0.03 | 9.5 ± 0.002 | 0.56 | 3.1 × 10⁻³ |
| G[8,5-Me]T | dCTP | 1.99 ± 0.001 | 11.2 ± 0.01 | 0.17 | 9.3 × 10⁻⁴ |
| G[8,5-Me]T | dATP | 2.2 ± 0.01 | 0.59 ± 0.005 | 3.72 | 2.0 × 10⁻² |

Chain extension b:

| Substrate | X:G | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Fext a |
|---|---|---|---|---|---|
| Undamaged | C:G | 4.4 ± 0.04 | 0.07 ± 0.001 | 62.8 | 1.0 |
| Undamaged | A:G | 3.7 ± 0.04 | 1.3 ± 0.01 | 2.8 | 4.4 × 10⁻² |
| G[8,5-Me]T | C:G* | 1.6 ± 0.1 | 0.31 ± 0.002 | 5.2 | 8.2 × 10⁻² |
| G[8,5-Me]T | A:G* | 1.2 ± 0.009 | 22.0 ± 1.0 | 0.05 | 7.9 × 10⁻⁴ |

a Fidelity (F) of incorporation or extension was determined as (kcat/Km)incorrect/(kcat/Km)correct. b Steady-state kinetics for dGTP incorporation opposite the C immediately following the X:G or X:G* pair.

Figure 3: Extension of a 14-mer primer by 50 nM hPol η (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs for the indicated time at 37°C.

Figure 4: Single nucleotide incorporation and extension assay. Template-primer (50 nM) was incubated with 6.4 nM hPol η or yPol η for various times with increasing concentrations of dNTP. Steady-state kinetics for single nucleotide incorporation opposite the cross-linked G (solid line) or control G (dashed line) are shown in (a), (b), (e), and (f): (a) and (b) represent dCTP and dATP incorporation, respectively, for hPol η, whereas (e) and (f) represent the same for yPol η. Steady-state kinetics for dGTP incorporation opposite the C immediately following the cross-linked G (solid line) or control G (dashed line) are shown in (c), (d), (g), and (h): (c) and (d) represent extension of G*:C and G*:A pairs, respectively, for hPol η, whereas (g) and (h) represent the same for yPol η. Error bars show the standard deviation of at least three experiments.

### 3.2.
Analysis of the Full-Length Bypass Products Although steady-state kinetics provides useful information on the ability to incorporate a nucleotide opposite a lesion and further extension, it is important to determine the sequences of full-length bypass products in the presence of all four dNTPs. In mammalian cells, replication of G[8,5-Me]T-containing DNA also generates significant level of semitargeted mutations [19], and it would be of interest to determine if pol η causes errors not only opposite the cross-link but also near the lesion. Guengerich and colleagues have developed an elegant LC-ESI/MS/MS-based method to analyze the polymerase extension products [25–30]. In the current paper, we report a plasmid-based approach to accomplish the same goal. The principle of this approach is shown in Scheme 1. The pMS2 plasmid was linearized by digestion with EcoR V. A scaffold 36-mer, containing two 12-nucleotide regions complementary to the two ends of the digested plasmid, was annealed to generate a gapped circular DNA, in which the G[8,5-Me]T cross-link was located in the middle of a 12-nucleotide gap. The scaffold G[8,5-Me]T-36-mer contained the same local DNA sequence near the G[8,5-Me]T cross-link as the 26-mer used in the steady-state kinetic assay. It also contained several uracils replacing thymines at the two ends where it annealed with the plasmid. The circular scaffold plasmid DNA was incubated with 50 nM hPol η or yPol η and a mixture of all four dNTPs (25 mM each) in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. We expected a large fraction of the control construct to extend to full-length circular product whereas a much smaller fraction of the cross-linked construct was able to do the same. 
The full-length extension product extended up to the 3′ end of the circular DNA, and the nick between the two ends was sealed by ligation overnight at 16°C in the presence of an excess of T4 DNA ligase to generate covalently closed circular ss plasmid. Although the DNA polymerase was not inactivated, both hPol η and yPol η were inefficient in continuing further extension at 16°C (data not shown). The scaffold 36-mer was digested by treatment with uracil DNA glycosylase and exonuclease III. The removal of the lesion-containing scaffold was considered critical to avoid any potential in vivo replication of the lesion. Therefore, we analyzed the products by agarose gel electrophoresis after uracil DNA glycosylase followed by exonuclease III treatment and confirmed that the plasmid was quantitatively linearized when either Pol η or DNA ligase was absent (data not shown). The proteins were extracted with phenol and chloroform, and the DNA was precipitated with ethanol. The DNA was used to transform repair-competent E. coli DL7, and the transformants were analyzed by DNA sequencing.Scheme 1 General protocol for analyzing the full-length extension products.The number of colonies recovered upon transformation inE. coli of the plasmid incubated with hPol η for different times is shown in Figure 5. Since linear ss DNA is inefficient in transfecting E. coli, no colonies were recovered from the zero time point from both the control and the G[8,5-Me]T scaffold whereas increasing numbers of colonies were recovered as incubation times with the DNA polymerase were increased. The number of colonies reflected the extent of full-length product that was ligated, and relative to the control 36-mer scaffold, the G[8,5-Me]T scaffold generated only 9% progeny at 15 min, which increased to 18% at 30 min and to 27% after 2 h (Figure 4). (For this calculation, the number of colonies obtained from the 120 min extension of the control 36-mer was considered 100%.) 
This suggests that with increased time of incubation, more DNA polymerase can bypass the G[8,5-Me]T cross-link, as we have also noted in the primer extension experiment with the G[8,5-Me]T 26-mer.Figure 5 The number of colonies obtained from extension by hPolη of a control scaffold (black) was compared with the G[8,5-Me]T scaffold (white) at different time points in the bar graph. The number of colonies obtained from the 120 min extension of the control 36-mer was arbitrarily considered 100%. The zero time point showing no colonies ensures that colonies only originated from the extension products.DNA sequencing results of the 2 h incubation products from two independent experiments with hPolη and yPol η are shown in Figure 6. The types and numbers of mutants from two different experiments are shown in Figure 6(a) whereas Figure 6(b) shows the combined result in a bar graph. As noted in the kinetic studies, yPol η was found to be more error-prone than hPol η. Mutational frequencies of yPol η and hPol η were 36% and 14%, respectively, for the G[8,5-Me]T cross-link whereas no mutants were recovered from the control after sequencing in excess of one hundred colonies following extension with each DNA polymerase. The pattern of mutagenesis from the G[8,5-Me]T cross-link was significantly different for these two polymerases. yPol η induced targeted G→T as the major mutagenic event, followed by targeted G→C; these two base substitutions, taken together, constituted 83% of the mutations. By contrast, in the case of hPol η, semitargeted mutations (7.2%) occurred at equal frequency as the targeted mutations (6.9%). With hPol η, though most frequent mutation was G→T (4%), approximately half as many G→A (2.2%) was also detected. It is interesting that even a single targeted G→A could not be detected in the extension by yPol η. Similarly, targeted G→C was completely absent with hPol η. 
For the cross-linked T, yPol η bypass was completely error-free whereas low (0.6%) level of T→G transversions was detected with hPol η. With yPol η, semitargeted mutations were restricted to the immediate 5′-C and 3′-G of the cross-link, but with hPol η, errors were noted as far as two bases 5′ and five bases 3′ to the cross-link. In sum, despite the similarity of targeted G→T transversions, the mutational profile of the two Y-family DNA polymerases exhibited distinct patterns.(a) Types and frequencies of mutations induced by G[8,5-Me]T as determined from the full-length extension products generated by hPolη (top) and yPol η (bottom). It is noteworthy that no mutants were isolated from the control batches after sequencing in excess of one hundred colonies. (b) The combined data in (a) is represented in bar graph showing the percentages of each type of single-base substitution or deletion induced by G[8,5-Me]T by hPol η (top) and yPol η (bottom). The colors represent T (green), A (blue), G (red), C (orange), and one-base deletion (yellow). The T deletion in a run of three thymines by hPol η was arbitrarily shown here at the T closest to the lesion. (a)(b) ## 3.1.In Vitro Bypass by DNA Polymerase η A 26-mer template,5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′, which contained the G[8,5-Me]T cross-link (ĜT) at the 5th and 6th bases from the 5′ end, was constructed. The DNA sequence of the first 12-nucleotides in this template was taken from codon 272–275 of the p53 gene, in which the G[8,5-Me]T cross-link was incorporated at the second and third nucleotide of codon 273, a well-known mutational hotspot for human cancer [24]. We used both running start and standing start conditions to evaluate bypass of the cross-link. Template-primer complex (50 nM) was incubated with increasing concentration of hPol η and yPol η at 37°C for 30 min in the presence of all four dNTPs (100 μM). 
For the running start experiments, a 5′-32P-radiolabeled 14-mer primer, 5′-CTGCAAGCGATACA-3′, was annealed to the template so that its 3′ end was 3 bases 3′ to the cross-link. As shown in Figure 2, G[8,5-Me]T strongly blocked both DNA polymerases. With 5 nM hPol η, 80% of the control template was extended to 22-mer and 23-mer (full-length) products, whereas with G[8,5-Me]T less than 1% was extended to the full-length product, and a major block occurred at the cross-linked G (19-mer). With 20 nM hPol η, nearly 75% was blocked after incorporating a base opposite the cross-linked G (19-mer), and the full-length product increased only to ~10%; it rose to ~18% with 50 nM hPol η. In a similar experiment with yPol η, unlike the human enzyme, the major blocks were at the 19-mer and 20-mer (i.e., opposite the cross-linked G and its 5′ neighbor). With 50 nM yPol η, 8% of the primer was extended to the full-length 23-mer product.

Figure 2 Extension of a 14-mer primer by varying concentrations (5, 10, 15, 20, and 50 nM) of hPol η (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs. The experiments were carried out at 37°C for 30 min.

With hPol η and yPol η at 50 nM, a substantial fraction (18% and 8%, respectively) of the primer was extended to full-length products in 30 min, so 50 nM Pol η was used for the subsequent experiments. As shown in Figure 3, in the presence of all four dNTPs, extension of a 14-mer primer on the control template rapidly generated a full-length extension product (a 23-mer) as well as a blunt-end addition product (a 24-mer) within 5 min with 50 nM hPol η, whereas extension on the cross-linked template stalled after adding a base opposite the cross-linked T and G, generating a 19-mer. Interestingly, hPol η did not stall before either of the cross-linked bases, but it was unable to continue synthesis after incorporating a dNMP opposite the cross-linked G.
Longer incubation allowed further extension, including a small fraction of full-length product, but even after 2 h the 19-mer band remained the most pronounced extension product. The result was qualitatively similar with yPol η, except that the extent of full-length product increased only marginally with time, and the enzyme stalled both after incorporation of a nucleotide opposite the cross-linked G (19-mer) and after incorporation of a nucleotide opposite its 5′ neighbor (20-mer). Standing start experiments were carried out, and the extension of the primer by one nucleotide was plotted against increasing dNTP concentration to determine the initial velocity of the polymerase-catalyzed reaction, as shown in Figure 4. From these plots, the steady-state kinetic parameters, Km and kcat, for nucleotide incorporation opposite the cross-linked G and for the control were determined (Tables 1 and 2). For hPol η, the catalytic efficiency (kcat/Km) of dCMP incorporation decreased 17-fold opposite the cross-linked G, whereas extension to the next base decreased 5-fold relative to control. By contrast, for yPol η dCMP incorporation decreased roughly 1,000-fold, and extension to the next base decreased 12-fold relative to control. This suggests that yPol η had more difficulty in bypassing G[8,5-Me]T than hPol η. As reported before [12, 19], in contrast to hPol η, which preferentially incorporates the correct nucleotide opposite G[8,5-Me]T, yPol η was much more error-prone, and insertion of dAMP opposite the cross-linked G was favored over that of the correct nucleotide, dCMP (Tables 1 and 2). In fact, dAMP misincorporation opposite the cross-linked G was more than 20-fold more efficient than dCMP incorporation by yPol η. However, with yPol η extension of the G*:A pair was 100-fold slower than that of the G*:C pair, whereas for hPol η it was about 13-fold slower. In each case, the higher catalytic efficiency was due to a much smaller Km.
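The fold comparisons above follow directly from the catalytic efficiencies (kcat/Km) reported in Tables 1 and 2. A minimal sketch that recomputes them (efficiency values transcribed from the tables; the dictionary layout and helper are ours, for illustration only):

```python
# Catalytic efficiencies (kcat/Km, μM^-1 min^-1) transcribed from Tables 1 and 2.
eff = {
    ("hPol eta", "control",   "insert dCTP"): 380.0,
    ("hPol eta", "crosslink", "insert dCTP"): 22.1,
    ("hPol eta", "crosslink", "insert dATP"): 1.63,
    ("hPol eta", "crosslink", "extend C:G*"): 8.3,
    ("hPol eta", "crosslink", "extend A:G*"): 0.65,
    ("yPol eta", "control",   "insert dCTP"): 182.5,
    ("yPol eta", "crosslink", "insert dCTP"): 0.17,
    ("yPol eta", "crosslink", "insert dATP"): 3.72,
    ("yPol eta", "crosslink", "extend C:G*"): 5.2,
    ("yPol eta", "crosslink", "extend A:G*"): 0.05,
}

def fold(a, b):
    """Fold difference between two catalytic efficiencies."""
    return eff[a] / eff[b]

# Blocking effect of the cross-link on correct (dCMP) insertion.
h_block = fold(("hPol eta", "control", "insert dCTP"), ("hPol eta", "crosslink", "insert dCTP"))
y_block = fold(("yPol eta", "control", "insert dCTP"), ("yPol eta", "crosslink", "insert dCTP"))

# yPol eta prefers dAMP misinsertion opposite the cross-linked G ...
y_misinsert = fold(("yPol eta", "crosslink", "insert dATP"), ("yPol eta", "crosslink", "insert dCTP"))
# ... but extension of the resulting G*:A pair is far slower than of G*:C.
y_ext_penalty = fold(("yPol eta", "crosslink", "extend C:G*"), ("yPol eta", "crosslink", "extend A:G*"))
h_ext_penalty = fold(("hPol eta", "crosslink", "extend C:G*"), ("hPol eta", "crosslink", "extend A:G*"))

print(f"hPol eta dCMP insertion blocked ~{h_block:.0f}-fold")   # ~17-fold
print(f"yPol eta dCMP insertion blocked ~{y_block:.0f}-fold")   # ~1,000-fold
print(f"yPol eta dAMP preference ~{y_misinsert:.0f}-fold")      # >20-fold
print(f"G*:C vs G*:A extension: yPol eta ~{y_ext_penalty:.0f}x, hPol eta ~{h_ext_penalty:.0f}x")
```

Running this reproduces the ~17-fold and ~1,000-fold blocks, the >20-fold dAMP preference of yPol η, and the ~100-fold versus ~13-fold extension penalties quoted in the text.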
When fidelity of nucleotide incorporation opposite the cross-linked G combined with extension past it was considered, dCMP incorporation was 200-fold more efficient than dAMP misincorporation for hPol η, whereas it was only 5-fold more efficient for yPol η. Nevertheless, dCMP was preferred opposite the cross-linked G for bypass of G[8,5-Me]T by both DNA polymerases, although the ability of yPol η to discriminate against the wrong nucleotide was not high.

**Table 1** Kinetic parameters for dCTP and dATP incorporation and chain extension by human DNA polymerase η (6.4 nM) on undamaged and G[8,5-Me]T cross-link-containing substrates. Template: 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′ (Ĝ marks the cross-linked G); primer (written 3′→5′): ACAAACATAGCGAAC-p32 for nucleotide incorporation and X-ACAAACATAGCGAAC-p32 (X = C or A opposite Ĝ) for chain extension.

Nucleotide incorporation:

| Substrate | dNTP | kcat (min−1) | Km (μM) | kcat/Km (μM−1 min−1) | Finc(a) |
|---|---|---|---|---|---|
| Undamaged | dCTP | 7.6 ± 0.1 | 0.02 ± 0.003 | 380 | 1.0 |
| Undamaged | dATP | 6.25 ± 0.01 | 6.5 ± 0.03 | 0.96 | 2.5 × 10−3 |
| G[8,5-Me]T | dCTP | 2.43 ± 0.02 | 0.11 ± 0.002 | 22.1 | 5.8 × 10−2 |
| G[8,5-Me]T | dATP | 1.75 ± 0.1 | 1.07 ± 0.01 | 1.63 | 4.2 × 10−3 |

Chain extension(b):

| Substrate | X:G | kcat (min−1) | Km (μM) | kcat/Km (μM−1 min−1) | Fext(a) |
|---|---|---|---|---|---|
| Undamaged | C:G | 3.9 ± 0.2 | 0.09 ± 0.005 | 43.4 | 1.0 |
| Undamaged | A:G | 4.7 ± 0.1 | 5.1 ± 0.03 | 0.9 | 2.1 × 10−2 |
| G[8,5-Me]T | C:G* | 2.0 ± 0.02 | 0.24 ± 0.004 | 8.3 | 0.2 |
| G[8,5-Me]T | A:G* | 1.7 ± 0.01 | 2.6 ± 0.03 | 0.65 | 1.5 × 10−2 |

(a) Fidelity (F) of incorporation or extension was determined as (kcat/Km)incorrect/(kcat/Km)correct. (b) Steady-state kinetics for dGTP incorporation opposite the C immediately following the X:G or X:G* pair.

**Table 2** Kinetic parameters for dCTP and dATP incorporation and chain extension by yeast DNA polymerase η (6.4 nM) on undamaged and G[8,5-Me]T cross-link-containing substrates. Substrate design and footnotes as in Table 1.

Nucleotide incorporation:

| Substrate | dNTP | kcat (min−1) | Km (μM) | kcat/Km (μM−1 min−1) | Finc(a) |
|---|---|---|---|---|---|
| Undamaged | dCTP | 7.3 ± 0.02 | 0.04 ± 0.002 | 182.5 | 1.0 |
| Undamaged | dATP | 5.3 ± 0.03 | 9.5 ± 0.002 | 0.56 | 3.1 × 10−3 |
| G[8,5-Me]T | dCTP | 1.99 ± 0.001 | 11.2 ± 0.01 | 0.17 | 9.3 × 10−4 |
| G[8,5-Me]T | dATP | 2.2 ± 0.01 | 0.59 ± 0.005 | 3.72 | 2.0 × 10−2 |

Chain extension(b):

| Substrate | X:G | kcat (min−1) | Km (μM) | kcat/Km (μM−1 min−1) | Fext(a) |
|---|---|---|---|---|---|
| Undamaged | C:G | 4.4 ± 0.04 | 0.07 ± 0.001 | 62.8 | 1.0 |
| Undamaged | A:G | 3.7 ± 0.04 | 1.3 ± 0.01 | 2.8 | 4.4 × 10−2 |
| G[8,5-Me]T | C:G* | 1.6 ± 0.1 | 0.31 ± 0.002 | 5.2 | 8.2 × 10−2 |
| G[8,5-Me]T | A:G* | 1.2 ± 0.009 | 22.0 ± 1.0 | 0.05 | 7.9 × 10−4 |

Figure 3 Extension of a 14-mer primer by 50 nM hPol η (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs for the indicated times at 37°C.

Figure 4 Single nucleotide incorporation and extension assay. Template-primer (50 nM) was incubated with 6.4 nM hPol η or yPol η for various times with increasing concentrations of dNTP. Steady-state kinetics for single nucleotide incorporation opposite the cross-linked G (solid lines) or control G (dashed lines) are shown in (a), (b), (e), and (f): (a) and (b) show dCTP and dATP incorporation, respectively, for hPol η, whereas (e) and (f) show the same for yPol η. Steady-state kinetics for dGTP incorporation opposite the C immediately following the cross-linked G (solid lines) or control G (dashed lines) are shown in (c), (d), (g), and (h): (c) and (d) show extension of the G*:C and G*:A pairs, respectively, for hPol η, whereas (g) and (h) show the same for yPol η. Error bars show the standard deviation of at least three experiments.

### 3.2. Analysis of the Full-Length Bypass Products

Although steady-state kinetics provides useful information on the ability to incorporate a nucleotide opposite a lesion and to extend past it, it is important to determine the sequences of full-length bypass products generated in the presence of all four dNTPs. In mammalian cells, replication of G[8,5-Me]T-containing DNA also generates significant levels of semitargeted mutations [19], and it would be of interest to determine whether Pol η causes errors not only opposite the cross-link but also near the lesion. Guengerich and colleagues have developed an elegant LC-ESI/MS/MS-based method to analyze polymerase extension products [25–30]. In the current paper, we report a plasmid-based approach to accomplish the same goal. The principle of this approach is shown in Scheme 1. The pMS2 plasmid was linearized by digestion with EcoR V. A scaffold 36-mer, containing two 12-nucleotide regions complementary to the two ends of the digested plasmid, was annealed to generate a gapped circular DNA in which the G[8,5-Me]T cross-link was located in the middle of a 12-nucleotide gap. The scaffold G[8,5-Me]T 36-mer contained the same local DNA sequence near the G[8,5-Me]T cross-link as the 26-mer used in the steady-state kinetic assay. It also contained several uracils replacing thymines at the two ends where it annealed with the plasmid. The circular scaffold plasmid DNA was incubated with 50 nM hPol η or yPol η and a mixture of all four dNTPs (25 mM each) in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. We expected a large fraction of the control construct to extend to the full-length circular product, whereas a much smaller fraction of the cross-linked construct would be able to do the same.
The full-length extension product extended up to the 3′ end of the circular DNA, and the nick between the two ends was sealed by overnight ligation at 16°C in the presence of an excess of T4 DNA ligase to generate a covalently closed circular ss plasmid. Although the DNA polymerase was not inactivated, both hPol η and yPol η were inefficient in continuing extension at 16°C (data not shown). The scaffold 36-mer was digested by treatment with uracil DNA glycosylase and exonuclease III. Removal of the lesion-containing scaffold was considered critical to avoid any potential in vivo replication of the lesion. We therefore analyzed the products by agarose gel electrophoresis after uracil DNA glycosylase followed by exonuclease III treatment and confirmed that the plasmid was quantitatively linearized when either Pol η or DNA ligase was absent (data not shown). The proteins were extracted with phenol and chloroform, and the DNA was precipitated with ethanol. The DNA was used to transform repair-competent E. coli DL7, and the transformants were analyzed by DNA sequencing.

Scheme 1 General protocol for analyzing the full-length extension products.

The number of colonies recovered upon transformation into E. coli of the plasmid incubated with hPol η for different times is shown in Figure 5. Since linear ss DNA is inefficient in transfecting E. coli, no colonies were recovered from the zero time point for either the control or the G[8,5-Me]T scaffold, whereas increasing numbers of colonies were recovered as the incubation time with the DNA polymerase was increased. The number of colonies reflected the extent of full-length product that was ligated; relative to the control 36-mer scaffold, the G[8,5-Me]T scaffold generated only 9% progeny at 15 min, which increased to 18% at 30 min and to 27% after 2 h (Figure 5). (For this calculation, the number of colonies obtained from the 120 min extension of the control 36-mer was taken as 100%.)
This suggests that with increased incubation time, more DNA polymerase molecules can bypass the G[8,5-Me]T cross-link, as we also noted in the primer extension experiment with the G[8,5-Me]T 26-mer.

Figure 5 The number of colonies obtained from extension by hPol η of a control scaffold (black) compared with the G[8,5-Me]T scaffold (white) at different time points. The number of colonies obtained from the 120 min extension of the control 36-mer was arbitrarily set to 100%. The absence of colonies at the zero time point ensures that colonies originated only from the extension products.

DNA sequencing results of the 2 h incubation products from two independent experiments with hPol η and yPol η are shown in Figure 6. The types and numbers of mutants from the two experiments are shown in Figure 6(a), and Figure 6(b) shows the combined results in a bar graph. As noted in the kinetic studies, yPol η was found to be more error-prone than hPol η. Mutational frequencies of yPol η and hPol η were 36% and 14%, respectively, for the G[8,5-Me]T cross-link, whereas no mutants were recovered from the control after sequencing in excess of one hundred colonies following extension with each DNA polymerase. The pattern of mutagenesis from the G[8,5-Me]T cross-link was significantly different for the two polymerases. yPol η induced targeted G→T as the major mutagenic event, followed by targeted G→C; taken together, these two base substitutions constituted 83% of the mutations. By contrast, with hPol η, semitargeted mutations (7.2%) occurred at a frequency equal to that of the targeted mutations (6.9%). With hPol η, although the most frequent mutation was G→T (4%), G→A (2.2%) was detected at approximately half that frequency. Interestingly, not a single targeted G→A was detected in the extensions by yPol η. Similarly, targeted G→C was completely absent with hPol η.
For the cross-linked T, yPol η bypass was completely error-free, whereas a low level (0.6%) of T→G transversions was detected with hPol η. With yPol η, semitargeted mutations were restricted to the immediate 5′-C and 3′-G of the cross-link, but with hPol η, errors were noted as far as two bases 5′ and five bases 3′ to the cross-link. In sum, despite the similarity of targeted G→T transversions, the mutational profiles of the two Y-family DNA polymerases exhibited distinct patterns.

Figure 6 (a) Types and frequencies of mutations induced by G[8,5-Me]T as determined from the full-length extension products generated by hPol η (top) and yPol η (bottom). No mutants were isolated from the control batches after sequencing in excess of one hundred colonies. (b) The combined data in (a) represented in a bar graph showing the percentages of each type of single-base substitution or deletion induced by G[8,5-Me]T with hPol η (top) and yPol η (bottom). The colors represent T (green), A (blue), G (red), C (orange), and one-base deletion (yellow). The T deletion in a run of three thymines by hPol η is arbitrarily shown at the T closest to the lesion.

## 4. Discussion

In earlier studies it was shown that hPol η preferentially incorporates the correct nucleotide opposite each of the cross-linked bases, whereas yPol η, though it accurately incorporates dAMP opposite the cross-linked T, is highly error-prone in nucleotide incorporation opposite the cross-linked G [12, 19]. However, neither the kinetics of further extension of the primer nor the sequences of the full-length extension products were determined. Miller and Grollman [20] have shown that DNA polymerase functions can be affected by replication-blocking lesions remote from the lesion site.
In the current investigation, using steady-state kinetics, we determined that although dAMP incorporation opposite the G of G[8,5-Me]T by yPol η was more than 20-fold preferred over dCMP incorporation, further extension of the G*:A pair was 100-fold less efficient than extension of the G*:C pair. As a result, dCMP incorporation followed by further extension was 5 times as efficient as dAMP incorporation for yPol η; for hPol η, on the other hand, it was nearly 200 times as efficient.

In order to characterize the full-length extension products generated in the presence of all four dNTPs, we developed a novel method to sequence them. In this approach, as shown in Scheme 1, a single-stranded plasmid (e.g., pMS2) containing a restriction endonuclease site in a hairpin region is digested and linearized by the enzyme. A DNA adduct-containing scaffold is annealed to the linear DNA to create a gapped plasmid in which the lesion is situated in the middle of the gap. A DNA polymerase is allowed to extend the 3′ end of the plasmid to fill in the gap, which is then enzymatically ligated to create a closed circular plasmid or viral genome. The ss circular DNA is replicated in E. coli, and the progeny are subjected to DNA sequencing. The scaffold is quantitatively removed prior to transformation into E. coli to avoid biological processing of the lesion in vivo. The DNA sequencing result for the area that originally contained the gap reveals the nature of the extension products. It is worth mentioning that other plasmid-based sequencing techniques using PCR amplification have been developed and successfully used in recent years [31, 32]. However, we believe that the hallmark of our current approach is its simplicity: it neither requires expensive instrumentation nor is it technically demanding.
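The analysis of progeny sequences described here reduces to tallying each mutation as targeted (at one of the two cross-linked bases) or semitargeted (near the lesion) and normalizing by the number of colonies sequenced. A minimal sketch of that bookkeeping, using hypothetical sequencing calls rather than the paper's actual data:

```python
from collections import Counter

# Hypothetical sequencing calls: (offset from the cross-linked G, base change).
# Offsets 0 (the G) and +1 (the cross-linked T) count as targeted; anything
# else near the lesion counts as semitargeted.  Illustrative values only.
calls = [
    (0, "G->T"), (0, "G->T"), (0, "G->C"),   # targeted
    (-1, "C->T"), (+3, "T->G"),              # semitargeted
]
n_colonies = 50  # colonies sequenced (hypothetical)

def classify(offset):
    return "targeted" if offset in (0, 1) else "semitargeted"

counts = Counter(classify(offset) for offset, _ in calls)
frequency = {cls: 100.0 * n / n_colonies for cls, n in counts.items()}
print(counts)     # tally per class
print(frequency)  # percentage of progeny carrying each class of mutation
```

With real data, the same tally per base change (G→T, G→A, etc.) yields the per-type percentages plotted in Figure 6(b).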
While the sensitivity of mass spectral analysis is limited by the signal-to-noise ratio, which varies from experiment to experiment, the plasmid-based sequencing approach enables determination of misincorporations occurring at a frequency of less than 1%. However, the sequence determination depends on the efficiency of ligation, which is proficient only with full-length extension products. As a result, a limitation of the current plasmid-based approach is that it does not offer any information on incomplete extension products, which may be readily available by the MS approach. Using this method of sequencing, we showed that yPol η was much more error-prone in bypassing G[8,5-Me]T than hPol η. Targeted G→T was the major type of mutation for both DNA polymerases, but yPol η induced it nearly 6 times more frequently than hPol η. With hPol η, semitargeted mutations, that is, mutations near the lesion, occurred at approximately the same frequency as the targeted mutations, whereas more than 80% of the mutations were targeted with yPol η.

Several studies have established differences between the yeast and the human enzyme. For translesion synthesis of γ-hydroxypropanodeoxyguanosine, yPol η synthesizes past the adduct relatively accurately, whereas hPol η discriminates poorly between incorporation of correct and wrong nucleotides opposite the adduct [33]. The mechanistic bases of these two enzymes have been examined and shown to differ in several important respects [34]. hPol η has a 50-fold faster rate of nucleotide incorporation than yPol η but binds the nucleotide with an approximately 50-fold lower affinity. It is unclear how these differences influence nucleotide incorporation opposite the G[8,5-Me]T cross-link.

When the hPol η mutational spectrum was compared with the mutations detected in human embryonic kidney cells [19], significant similarities between the two results were apparent.
Notably, the high frequency of G→T followed by G→A and the semitargeted mutations 5′ to the cross-link, such as 5′-C→T and 5′-G→T, reflect a similar pattern in the in vitro studies using purified hPol η and in the cellular studies. These similarities notwithstanding, certain variations in the mutation profiles are also noteworthy. Targeted T→A and the substitutions at the adjacent 3′-G and the thymines noted in the mammalian cells were absent in the hPol η extensions. It has been suggested that binding to proliferating cell nuclear antigen (PCNA) via its PCNA-interacting protein domain is a prerequisite for hPol η's ability to function in translesion synthesis in human cells [35]. Therefore, certain differences between bypass of a DNA damage by purified hPol η in vitro and that in a cell should be anticipated. Although there is insufficient evidence to conclude that hPol η is responsible for the observed mutations of G[8,5-Me]T in human cells, it seems reasonable to postulate that this Y-family DNA polymerase is one of the DNA polymerases involved in the cellular bypass of this cross-link.
---

# Replication Past the γ-Radiation-Induced Guanine-Thymine Cross-Link G[8,5-Me]T by Human and Yeast DNA Polymerase η

**Authors:** Paromita Raychaudhury; Ashis K. Basu

**Journal:** Journal of Nucleic Acids (2010)

**Category:** Biological Sciences

**Publisher:** SAGE-Hindawi Access to Research

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.4061/2010/101495
---

## Abstract

γ-Radiation-induced intrastrand guanine-thymine cross-link, G[8,5-Me]T, hinders replication in vitro and is mutagenic in mammalian cells. Herein we report in vitro translesion synthesis of G[8,5-Me]T by human and yeast DNA polymerase η (hPol η and yPol η). dAMP misincorporation opposite the cross-linked G by yPol η was preferred over correct incorporation of dCMP, but further extension was 100-fold less efficient for G*:A compared to G*:C. For hPol η, both incorporation and extension were more efficient with the correct nucleotides. To evaluate translesion synthesis in the presence of all four dNTPs, we have developed a plasmid-based DNA sequencing assay, which showed that yPol η was more error-prone. Mutational frequencies of yPol η and hPol η were 36% and 14%, respectively. Targeted G→T was the dominant mutation by both DNA polymerases, but yPol η induced targeted G→T at 23% frequency compared with 4% by hPol η. For yPol η, targeted G→T and G→C constituted 83% of the mutations. By contrast, with hPol η, semitargeted mutations (7.2%), that is, mutations at bases near the lesion, occurred at the same frequency as the targeted mutations (6.9%). The kinds of mutations detected with hPol η showed significant similarities with the mutational spectrum of G[8,5-Me]T in human embryonic kidney cells.

---

## Body

## 1. Introduction

DNA-DNA interstrand and intrastrand cross-links are strong blocks of DNA replication, and understanding the details of polymerase bypass of these complex lesions is of major interest [1–5]. Double-base DNA lesions are formed at substantial frequency by ionizing radiation and by metal-catalyzed H2O2 reactions (reviewed in [6]). A major DNA lesion formed under anoxic conditions is an intrastrand cross-linked species in which C8 of Gua is linked to the 5-methyl group of an adjacent thymine; the G[8,5-Me]T cross-link is formed at a much higher rate than the T[5-Me,8]G cross-link (Figure 1) [7].
Additional thymine-purine cross-links have been isolated from γ-irradiated DNA in oxygen-free aqueous solution [8]. Wang and coworkers identified structurally similar guanine-cytosine and guanine-5-methylcytosine cross-links in DNA exposed to γ- or X-rays [9–11]. The G[8,5-Me]T cross-link is formed in a dose-dependent manner in human cells exposed to γ-rays [12], and the G[8,5]C cross-link is formed at a slightly lower level [13].

Figure 1 Chemical structures of the two γ-radiation-induced intrastrand cross-links, G[8,5-Me]T and T[5-Me,8]G.

These intrastrand cross-links destabilize the DNA double helix [14], and UvrABC, the excision nuclease from Escherichia coli, can excise them [15, 16]. Using purified DNA polymerases, it was shown that G[8,5-Me]T and G[8,5]C are strong blocks of replication in vitro [12, 17]. For G[8,5-Me]T, primer extension is terminated after incorporation of dAMP opposite the 3′-T by exo-free Klenow fragment and Pol IV (dinB) of Escherichia coli, whereas Taq polymerase is completely blocked at the nucleotide preceding the cross-link [17]. However, yeast polymerase η (yPol η), a member of the Y-family DNA polymerases from Saccharomyces cerevisiae, can bypass both the G[8,5-Me]T and G[8,5]C cross-links with reduced efficiency [12, 18]. For both of these lesions, nucleotide incorporation opposite the 3′-base of the cross-link is accurate, but incorporation of dAMP and dGMP opposite the cross-linked G is favored by yPol η over that of the correct nucleotide, dCMP [12, 18].

We have recently compared translesion synthesis of G[8,5-Me]T with that of T[5-Me,8]G in simian and human embryonic kidney cells and found that both cross-links are strongly mutagenic and show interesting patterns of mutations, including a high frequency of semitargeted mutations occurring a few bases 5′ or 3′ to the cross-link [19].
One can anticipate a role for one or more Y-family DNA polymerases in bypassing these replication-blocking lesions, and we noted that purified human DNA polymerase η (hPol η) incorporates dCMP preferentially opposite the G of the G[8,5-Me]T cross-link, in contrast to yPol η, which incorporates dAMP and dGMP much more readily [12, 19]. However, the previous preliminary studies did not examine the kinetics of polymerase extension beyond the lesion site, nor were the full-length extension products analyzed. The kinetics of nucleotide incorporation are influenced by DNA damage not only at the lesion site but also at least up to 3 bases 5′ to the lesion [20]. Therefore, the incorporation pattern opposite the lesion provides only part of the information on lesion bypass. In the current paper, we have evaluated translesion synthesis of the G[8,5-Me]T cross-link by these two DNA polymerases more critically by determining single nucleotide incorporation kinetics and characterizing the full-length extension products in the presence of all four dNTPs. We report herein that G[8,5-Me]T bypass by yPol η is much more error-prone than by hPol η. We also show that the mutational signatures of these two polymerases are different.

## 2. Materials and Methods

### 2.1. Materials

[γ-32P] ATP was supplied by Du Pont New England Nuclear (Boston, MA). Recombinant human and yeast DNA polymerases η were purchased from Enzymax, LLC (Lexington, KY). EcoR V restriction endonuclease, T4 DNA ligase, and T4 polynucleotide kinase were obtained from New England Biolabs (Beverly, MA). E. coli DL7 (AB1157, lacΔU169, uvr+) was from J. Essigmann (MIT, Cambridge, MA). The pMS2 phagemid was a gift from Masaaki Moriya (SUNY, Stony Brook, NY).

### 2.2. Methods

#### 2.2.1. Synthesis and Characterization of Oligonucleotides

The lesion-containing oligonucleotides were synthesized and characterized as reported in [15].
Unmodified oligonucleotides were analyzed by MALDI-TOF MS, which gave a molecular ion with a mass within 0.005% of the theoretical value, whereas adducted oligonucleotides were analyzed by ESI-MS in addition to enzymatic digestion followed by HPLC analysis.

#### 2.2.2. Construction of 26-mer and 36-mer Containing G[8,5-Me]T Cross-Link

The 26-mer G[8,5-Me]T template, 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′, was constructed by ligating a 5′-phosphorylated 14-mer, 5′-ATCGCTTGCAGGGG-3′ (~7.5 nmol), to the G[8,5-Me]T cross-linked 12-mer, 5′-GTGCĜTGTTTGT-3′ (~5 nmol), in the presence of an 18-nucleotide complementary oligonucleotide, 5′-GCAAGCGATACAAACACG-3′ (~7.5 nmol), as described [19, 21]. Similarly, a 12-mer, 5′-CCUGGAAGCGAU-3′ (~7.5 nmol), a 5′-phosphorylated G[8,5-Me]T 12-mer (~5 nmol), and a 5′-phosphorylated 12-mer, 5′-AUCGCUGCUACC-3′ (~7.5 nmol), were annealed to a complementary 26-mer, 5′-GCAGCGATACAAACACGCACATCGCT-3′ (~7.5 nmol), and ligated with T4 DNA ligase to prepare a G[8,5-Me]T cross-linked 36-mer, 5′-CCUGGAAGCGAUGTGCĜTGTTTGTAUCGCUGCUACC-3′. The oligonucleotides were separated by electrophoresis on a 16% polyacrylamide-8 M urea gel. The ligated product bands were visualized by UV shadowing and excised. The 26-mers and 36-mers were desalted on a Sephadex G-25 (Sigma) column and stored at -20°C until further use.

#### 2.2.3. In Vitro Nucleotide Incorporation and Chain Extension

To determine the nucleotide preferentially incorporated opposite the G[8,5-Me]T cross-link, steady-state kinetic analyses were performed by the method of Goodman and coworkers [22, 23]. The primed template was obtained by annealing a 5-fold molar excess of the modified or control 26-mer template (~20 ng) to a complementary 5′-32P-labeled primer.
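The splint-ligation design above depends on each complementary "splint" oligonucleotide annealing across the junction(s) being sealed. As a quick check (a minimal sketch, not part of the original protocol — the `revcomp` helper and variable names are ours), the following verifies with the sequences given in this section that each splint bridges its ligation junction, writing the cross-linked Ĝ as a plain G and treating U as pairing like T:

```python
# Sketch: confirm the splint oligos from Section 2.2.2 are complementary to
# the ligated products across the ligation junctions (Ĝ written as G; U
# treated as T for base pairing).

COMP = str.maketrans("ACGTU", "TGCAA")

def revcomp(seq: str) -> str:
    """Reverse complement, with U pairing like T."""
    return seq.translate(COMP)[::-1]

# 26-mer template = cross-linked 12-mer + 5'-phosphorylated 14-mer
template_26 = "GTGCGTGTTTGT" + "ATCGCTTGCAGGGG"
splint_18   = "GCAAGCGATACAAACACG"

# The 18-mer splint must anneal across the 12-mer/14-mer junction (index 12)
site = template_26.find(revcomp(splint_18))
assert site != -1 and site < 12 < site + 18

# 36-mer = 12-mer + cross-linked 12-mer + 12-mer (junctions at 12 and 24)
product_36 = ("CCUGGAAGCGAU" + "GTGCGTGTTTGT" + "AUCGCUGCUACC").replace("U", "T")
splint_26  = "GCAGCGATACAAACACGCACATCGCT"

site = product_36.find(revcomp(splint_26))
assert site != -1 and site < 12 and site + 26 > 24
print("splints bridge all ligation junctions")
```

Both assertions pass: the 18-mer splint spans the single junction of the 26-mer, and the 26-mer splint spans both junctions of the 36-mer.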
Primer extension under standing-start conditions was carried out with hPol η or yPol η (6.4 nM) and either individual dNTPs or a mixture of all four dNTPs in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. The reactions were terminated by adding an equal volume of 95% (v/v) formamide, 20 mM EDTA, 0.02% (w/v) xylene cyanol, and 0.02% (w/v) bromophenol blue and heating at 90°C for 2 min, and the products were resolved on a 20% polyacrylamide gel containing 8 M urea. The DNA bands were visualized and quantitated using a PhosphorImager. The dNTP concentration and incubation time were optimized to ensure that primer extension was less than 20%. Km and kcat were obtained from Michaelis-Menten plots of the kinetic data.

#### 2.2.4. Analysis of the Full-Length Bypass Products Using pMS2 Vector

The ss pMS2 shuttle vector DNA (58 pmol, 100 μg) was digested with an excess of EcoR V (300 pmol, 4.84 μg) for 1 h at 37°C and then overnight at room temperature. A 36-mer scaffold oligonucleotide containing the G[8,5-Me]T cross-link (or a control) was annealed overnight at 16°C to form the gapped DNA. The gapped plasmid was incubated with hPol η or yPol η and a mixture of all four dNTPs (25 mM each) in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. DNA ligase (200 units) was added, and the mixture containing the pMS2 DNA, the DNA polymerase, and the dNTPs was ligated overnight at 16°C. The scaffold oligonucleotide was digested by treatment with uracil glycosylase and exonuclease III, the proteins were extracted with phenol/chloroform, and the DNA was precipitated with ethanol. The final construct was dissolved in deionized water and used to transform E. coli DL7 cells. Transformants were randomly picked and analyzed by DNA sequencing.
## 3. Results

### 3.1. In Vitro Bypass by DNA Polymerase η

A 26-mer template, 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′, which contained the G[8,5-Me]T cross-link (ĜT) at the 5th and 6th bases from the 5′ end, was constructed. The DNA sequence of the first 12 nucleotides in this template was taken from codons 272–275 of the p53 gene, such that the G[8,5-Me]T cross-link occupied the second and third nucleotides of codon 273, a well-known mutational hotspot in human cancer [24]. We used both running-start and standing-start conditions to evaluate bypass of the cross-link. Template-primer complex (50 nM) was incubated with increasing concentrations of hPol η or yPol η at 37°C for 30 min in the presence of all four dNTPs (100 μM). For the running-start experiments, a 5′-32P-radiolabeled 14-mer primer, 5′-CTGCAAGCGATACA-3′, was annealed to the template so that its 3′ terminus was 3 bases 3′ to the cross-link. As shown in Figure 2, G[8,5-Me]T was a strong block to both DNA polymerases. With 5 nM hPol η, 80% of the primer on the control template was extended to 22-mer and 23-mer (full-length) products, whereas on the G[8,5-Me]T template less than 1% was extended to the full-length product, and a major block was at the cross-linked G (19-mer). With 20 nM hPol η, nearly 75% was blocked after incorporating a base opposite the cross-linked G (19-mer), and the full-length product increased only to ~10%; with 50 nM hPol η, the full-length product increased to ~18%. In a similar experiment with yPol η, unlike the human enzyme, the major blocks were at the 19-mer and 20-mer (i.e., opposite the cross-linked G and its 5′ neighbor).
With 50 nM yPol η, 8% of the primer was extended to the full-length 23-mer product.

Figure 2: Extension of a 14-mer primer by varying concentrations (5, 10, 15, 20, and 50 nM) of hPol η (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs. The experiments were carried out at 37°C for 30 min.

With hPol η and yPol η at 50 nM, a substantial fraction (18% and 8%, resp.) of the primer was extended to full-length products in 30 min, so we used 50 nM Pol η for the subsequent experiments. As shown in Figure 3, in the presence of all four dNTPs, extension of a 14-mer primer on the control template rapidly generated a full-length extension product (a 23-mer) as well as a blunt-end addition product (a 24-mer) in 5 min with 50 nM hPol η, whereas on the cross-linked template primer extension stalled after addition of bases opposite the cross-linked T and G, generating a 19-mer. Interestingly, hPol η did not stall before either of the cross-linked bases; it was unable to continue synthesis only after incorporating a dNMP opposite the cross-linked G. Longer incubation allowed further extension, including a small fraction of full-length product, but even after 2 h the 19-mer band was the most pronounced extension product. The result was qualitatively similar with yPol η, except that the extent of full-length product increased only marginally with time and the enzyme stalled both after incorporation of a nucleotide opposite the cross-linked G (19-mer) and after incorporation of a nucleotide opposite its 5′ neighbor (20-mer). Standing-start experiments were then carried out, in which extension of the primer by one nucleotide was measured at increasing dNTP concentrations to determine the initial velocity of the polymerase-catalyzed reaction (Figure 4).
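The standing-start analysis fits initial velocities to the Michaelis-Menten form v = kcat·[E]·[dNTP]/(Km + [dNTP]) to extract Km and kcat. The sketch below (Python; illustrative, not the authors' analysis code) shows this fit on synthetic data generated from parameters approximating the hPol η dCTP values in Table 1, using a coarse grid search in place of a proper nonlinear least-squares fit, and then computes the fidelity ratio F defined in the table footnotes:

```python
# Illustrative sketch: recover kcat and Km from standing-start velocities
# v = kcat * [E] * [S] / (Km + [S]), at [E] = 6.4 nM = 0.0064 uM.

E = 0.0064  # enzyme concentration, uM

def velocity(kcat, km, s):
    return kcat * E * s / (km + s)

# "Observed" velocities at several dNTP concentrations (uM), generated here
# from assumed true parameters (Table 1, hPol eta / dCTP) rather than a gel.
true_kcat, true_km = 7.6, 0.02
conc = [0.005, 0.01, 0.02, 0.05, 0.1, 0.5]
obs = [velocity(true_kcat, true_km, s) for s in conc]

# Coarse grid search minimizing the squared error.
best = min(
    ((kc / 100, km / 1000)
     for kc in range(500, 1001)   # kcat candidates: 5.00 to 10.00 /min
     for km in range(5, 51)),     # Km candidates: 0.005 to 0.050 uM
    key=lambda p: sum((velocity(p[0], p[1], s) - v) ** 2
                      for s, v in zip(conc, obs)),
)
kcat_fit, km_fit = best
print(f"kcat = {kcat_fit:.2f} /min, Km = {km_fit:.3f} uM, "
      f"kcat/Km = {kcat_fit / km_fit:.0f} /uM/min")   # 380, as in Table 1

# Fidelity, as in the table footnotes: F = (kcat/Km)_incorrect / (kcat/Km)_correct
F_inc = 0.96 / 380   # dATP vs dCTP on the undamaged template (Table 1)
print(f"Finc = {F_inc:.1e}")   # 2.5e-03, matching Table 1
```

With these assumed inputs the grid search recovers kcat = 7.60 min⁻¹ and Km = 0.020 μM exactly, giving kcat/Km = 380 μM⁻¹ min⁻¹ and Finc = 2.5 × 10⁻³, consistent with Table 1.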
From these plots, the steady-state kinetic parameters Km and kcat for nucleotide incorporation opposite the cross-linked G, and the same for the control, were determined (Tables 1 and 2). For hPol η, the catalytic efficiency (kcat/Km) of dCMP incorporation opposite the cross-linked G was decreased 17-fold, and extension to the next base was decreased 5-fold, relative to the control. By contrast, for yPol η dCMP incorporation was decreased 1,000-fold and extension to the next base 12-fold relative to the control. This suggests that yPol η had more difficulty than hPol η in bypassing G[8,5-Me]T. As reported before [12, 19], in contrast to hPol η, which preferentially incorporates the correct nucleotide opposite G[8,5-Me]T, yPol η was much more error-prone, and insertion of dAMP opposite the cross-linked G was favored over that of the correct nucleotide, dCMP (Tables 1 and 2). In fact, dAMP misincorporation opposite the cross-linked G by yPol η was more than 20-fold more efficient than dCMP incorporation. However, with yPol η extension of the G*:A pair was 100-fold slower than that of the G*:C pair, whereas for hPol η it was about 13-fold slower. In each case, the higher catalytic efficiency was due to a much smaller Km. When incorporation opposite the cross-linked G and extension to its 5′ base were considered together, correct dCMP incorporation was favored over dAMP misincorporation 200-fold for hPol η but only 5-fold for yPol η. Thus, dCMP was preferred opposite the cross-linked G during bypass of G[8,5-Me]T by both DNA polymerases, although the ability of yPol η to discriminate against the wrong nucleotide was not high.

Table 1: Kinetic parameters for dCTP and dATP incorporation and chain extension by human DNA polymerase η (6.4 nM) on undamaged and G[8,5-Me]T cross-link-containing substrates.
Template: 5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′; primer (written 3′→5′): ACAAACATAGCGAAC-32P, with X denoting the primer 3′-terminal base opposite the cross-linked G in the chain-extension substrate.

Nucleotide incorporation:

| Substrate | dNTP | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Finc^a |
| --- | --- | --- | --- | --- | --- |
| Undamaged | dCTP | 7.6 ± 0.1 | 0.02 ± 0.003 | 380 | 1.0 |
| Undamaged | dATP | 6.25 ± 0.01 | 6.5 ± 0.03 | 0.96 | 2.5 × 10⁻³ |
| G[8,5-Me]T | dCTP | 2.43 ± 0.02 | 0.11 ± 0.002 | 22.1 | 5.8 × 10⁻² |
| G[8,5-Me]T | dATP | 1.75 ± 0.1 | 1.07 ± 0.01 | 1.63 | 4.2 × 10⁻³ |

Chain extension^b:

| Substrate | X:G | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Fext^a |
| --- | --- | --- | --- | --- | --- |
| Undamaged | C:G | 3.9 ± 0.2 | 0.09 ± 0.005 | 43.4 | 1.0 |
| Undamaged | A:G | 4.7 ± 0.1 | 5.1 ± 0.03 | 0.9 | 2.1 × 10⁻² |
| G[8,5-Me]T | C:G* | 2.0 ± 0.02 | 0.24 ± 0.004 | 8.3 | 0.2 |
| G[8,5-Me]T | A:G* | 1.7 ± 0.01 | 2.6 ± 0.03 | 0.65 | 1.5 × 10⁻² |

^a Fidelity (F) of incorporation or extension was determined as (kcat/Km)incorrect/(kcat/Km)correct.
^b Steady-state kinetics for dGTP incorporation opposite the C immediately following the X:G or X:G* pair.

Table 2: Kinetic parameters for dCTP and dATP incorporation and chain extension by yeast DNA polymerase η (6.4 nM) on undamaged and G[8,5-Me]T cross-link-containing substrates. Template and primer are as in Table 1.

Nucleotide incorporation:

| Substrate | dNTP | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Finc^a |
| --- | --- | --- | --- | --- | --- |
| Undamaged | dCTP | 7.3 ± 0.02 | 0.04 ± 0.002 | 182.5 | 1.0 |
| Undamaged | dATP | 5.3 ± 0.03 | 9.5 ± 0.002 | 0.56 | 3.1 × 10⁻³ |
| G[8,5-Me]T | dCTP | 1.99 ± 0.001 | 11.2 ± 0.01 | 0.17 | 9.3 × 10⁻⁴ |
| G[8,5-Me]T | dATP | 2.2 ± 0.01 | 0.59 ± 0.005 | 3.72 | 2.0 × 10⁻² |

Chain extension^b:

| Substrate | X:G | kcat (min⁻¹) | Km (μM) | kcat/Km (μM⁻¹ min⁻¹) | Fext^a |
| --- | --- | --- | --- | --- | --- |
| Undamaged | C:G | 4.4 ± 0.04 | 0.07 ± 0.001 | 62.8 | 1.0 |
| Undamaged | A:G | 3.7 ± 0.04 | 1.3 ± 0.01 | 2.8 | 4.4 × 10⁻² |
| G[8,5-Me]T | C:G* | 1.6 ± 0.1 | 0.31 ± 0.002 | 5.2 | 8.2 × 10⁻² |
| G[8,5-Me]T | A:G* | 1.2 ± 0.009 | 22.0 ± 1.0 | 0.05 | 7.9 × 10⁻⁴ |

^a Fidelity (F) of incorporation or extension was determined as (kcat/Km)incorrect/(kcat/Km)correct.
^b Steady-state kinetics for dGTP incorporation opposite the C immediately following the X:G or X:G* pair.

Figure 3: Extension of a 14-mer primer by 50 nM hPol η (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs for the indicated times at 37°C.

Figure 4: Single-nucleotide incorporation and extension assay.
Template-primer (50 nM) was incubated with 6.4 nM hPol η or yPol η for various times with increasing concentrations of dNTP. Steady-state kinetics for single-nucleotide incorporation opposite the cross-linked G (solid line) or control G (dashed line) are shown in panels (a), (b), (e), and (f): (a) and (b) show dCTP and dATP incorporation, respectively, for hPol η, and (e) and (f) show the same for yPol η. Steady-state kinetics for dGTP incorporation opposite the C immediately following the cross-linked G (solid line) or control G (dashed line) are shown in panels (c), (d), (g), and (h): (c) and (d) show extension of the G*:C and G*:A pairs, respectively, for hPol η, and (g) and (h) show the same for yPol η. Error bars show the standard deviation of at least three experiments.

### 3.2. Analysis of the Full-Length Bypass Products

Although steady-state kinetics provides useful information on the ability to incorporate a nucleotide opposite a lesion and to extend past it, it is important to determine the sequences of the full-length bypass products generated in the presence of all four dNTPs. In mammalian cells, replication of G[8,5-Me]T-containing DNA also generates a significant level of semitargeted mutations [19], and it is therefore of interest to determine whether Pol η makes errors not only opposite the cross-link but also near it. Guengerich and colleagues have developed an elegant LC-ESI/MS/MS-based method to analyze polymerase extension products [25–30]. In the current paper, we report a plasmid-based approach to accomplish the same goal. The principle of this approach is shown in Scheme 1. The pMS2 plasmid was linearized by digestion with EcoR V. A scaffold 36-mer, containing two 12-nucleotide regions complementary to the two ends of the digested plasmid, was annealed to generate a gapped circular DNA in which the G[8,5-Me]T cross-link was located in the middle of a 12-nucleotide gap.
The scaffold G[8,5-Me]T 36-mer contained the same local DNA sequence near the G[8,5-Me]T cross-link as the 26-mer used in the steady-state kinetic assay. It also contained several uracils in place of thymines at the two ends where it annealed to the plasmid. The gapped circular plasmid DNA was incubated with 50 nM hPol η or yPol η and a mixture of all four dNTPs (25 mM each) in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. We expected a large fraction of the control construct, but a much smaller fraction of the cross-linked construct, to be extended to the full-length circular product. The full-length extension product extended up to the 3′ end of the circular DNA, and the nick between the two ends was sealed by overnight ligation at 16°C with an excess of T4 DNA ligase to generate a covalently closed circular ss plasmid. Although the DNA polymerase was not inactivated, both hPol η and yPol η were inefficient in continuing extension at 16°C (data not shown). The scaffold 36-mer was then digested by treatment with uracil DNA glycosylase and exonuclease III. Removal of the lesion-containing scaffold was considered critical to avoid any potential in vivo replication of the lesion. We therefore analyzed the products by agarose gel electrophoresis after uracil DNA glycosylase and exonuclease III treatment and confirmed that the plasmid was quantitatively linearized when either Pol η or DNA ligase was omitted (data not shown). The proteins were extracted with phenol and chloroform, and the DNA was precipitated with ethanol. The DNA was used to transform repair-competent E. coli DL7, and the transformants were analyzed by DNA sequencing.

Scheme 1: General protocol for analyzing the full-length extension products.

The number of colonies recovered upon transformation in E. coli of the plasmid incubated with hPol η for different times is shown in Figure 5.
Since linear ss DNA transfects E. coli inefficiently, no colonies were recovered at the zero time point from either the control or the G[8,5-Me]T construct, whereas increasing numbers of colonies were recovered as the incubation time with the DNA polymerase was increased. The number of colonies reflected the extent of full-length product that was ligated; relative to the control 36-mer scaffold, the G[8,5-Me]T scaffold generated only 9% progeny at 15 min, which increased to 18% at 30 min and to 27% after 2 h (Figure 5). (For this calculation, the number of colonies obtained from the 120 min extension of the control 36-mer was taken as 100%.) This suggests that with increased incubation time, more of the DNA polymerase can bypass the G[8,5-Me]T cross-link, as we also noted in the primer extension experiments with the G[8,5-Me]T 26-mer.

Figure 5: The number of colonies obtained from extension by hPol η of a control scaffold (black) compared with the G[8,5-Me]T scaffold (white) at different time points. The number of colonies obtained from the 120 min extension of the control 36-mer was arbitrarily taken as 100%. The absence of colonies at the zero time point ensures that colonies originated only from the extension products.

DNA sequencing results for the 2 h incubation products from two independent experiments with hPol η and yPol η are shown in Figure 6. The types and numbers of mutants from the two experiments are shown in Figure 6(a), and Figure 6(b) shows the combined results in a bar graph. As noted in the kinetic studies, yPol η was more error-prone than hPol η. Mutational frequencies for the G[8,5-Me]T cross-link were 36% for yPol η and 14% for hPol η, whereas no mutants were recovered from the control after sequencing more than one hundred colonies following extension with each DNA polymerase. The pattern of mutagenesis from the G[8,5-Me]T cross-link was significantly different for the two polymerases.
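The percentages above are simple ratios, and the sketch below (Python) makes the bookkeeping explicit. The colony counts used here are hypothetical placeholders chosen only to reproduce the reported percentages, since the raw counts appear graphically in Figure 5 rather than as numbers:

```python
# Hypothetical colony counts (placeholders, NOT the actual data) illustrating
# the normalization behind Figure 5: the control count at 120 min is 100%.
control_120 = 400                         # assumed control count at 120 min
crosslink = {15: 36, 30: 72, 120: 108}    # assumed G[8,5-Me]T counts

for t, n in sorted(crosslink.items()):
    pct = 100 * n / control_120
    print(f"{t:>3} min: {pct:.0f}% of control")   # 9%, 18%, 27%

# Mutational frequency = mutant colonies / total colonies sequenced.
def mut_freq(mutants: int, total: int) -> float:
    return 100 * mutants / total

# Frequencies of the order reported (36% for yPol eta, 14% for hPol eta),
# again with placeholder totals.
assert round(mut_freq(36, 100)) == 36
assert round(mut_freq(14, 100)) == 14
```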
yPol η induced targeted G→T as the major mutagenic event, followed by targeted G→C; these two base substitutions together constituted 83% of the mutations. By contrast, with hPol η, semitargeted mutations (7.2%) occurred at a frequency equal to that of the targeted mutations (6.9%). With hPol η, the most frequent mutation was G→T (4%), but approximately half as many G→A (2.2%) were also detected. Interestingly, not a single targeted G→A was detected in the extension by yPol η; similarly, targeted G→C was completely absent with hPol η. For the cross-linked T, bypass by yPol η was completely error-free, whereas a low level (0.6%) of T→G transversions was detected with hPol η. With yPol η, semitargeted mutations were restricted to the immediate 5′-C and 3′-G of the cross-link, but with hPol η, errors were noted as far as two bases 5′ and five bases 3′ to the cross-link. In sum, despite the shared preponderance of targeted G→T transversions, the mutational profiles of the two Y-family DNA polymerases exhibited distinct patterns.

Figure 6: (a) Types and frequencies of mutations induced by G[8,5-Me]T as determined from the full-length extension products generated by hPol η (top) and yPol η (bottom). No mutants were isolated from the control batches after sequencing more than one hundred colonies. (b) The combined data in (a) represented as a bar graph showing the percentage of each type of single-base substitution or deletion induced by G[8,5-Me]T with hPol η (top) and yPol η (bottom). The colors represent T (green), A (blue), G (red), C (orange), and one-base deletion (yellow). The T deletion in a run of three thymines by hPol η is arbitrarily shown at the T closest to the lesion.
The DNA sequence of the first 12-nucleotides in this template was taken from codon 272–275 of the p53 gene, in which the G[8,5-Me]T cross-link was incorporated at the second and third nucleotide of codon 273, a well-known mutational hotspot for human cancer [24]. We used both running start and standing start conditions to evaluate bypass of the cross-link. Template-primer complex (50 nM) was incubated with increasing concentration of hPol η and yPol η at 37°C for 30 min in the presence of all four dNTPs (100 μM). For the running start experiments, a 5′-32P-radiolabeled 14-mer primer, 5′-CTGCAAGCGATACA-3′, was annealed to the template so that it was 3 bases 3′ to the cross-link. As shown in Figure 2, G[8,5-Me]T was a strong block of both DNA polymerases. With 5 nM hPol η, 80% of the control template extended to a 22-mer and a 23-mer (full-length) products whereas for G[8,5-Me]T less than 1% extended to the full-length product, and a major block was at the cross-linked G (19-mer). With 20 nM hPol η, nearly 75% was blocked after incorporating a base opposite the cross-linked G (19-mer), and the full-length product increased only to ~10%. The full-length product increased to ~18% with 50 nM hPol η. In similar experiment using yPol η, unlike the human enzyme, the major blocks were at 19-mer and 20-mer (i.e., opposite the cross-linked G and its 5′ neighbor). With 50 nM yPol η, 8% of the primer extended to full-length 23-mer product.Extension of a 14-mer primer by varying concentration (5, 10, 15, 20, and 50 nM) of hPolη (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs. The experiments were carried out at 37°C for 30 min. (a)(b)With concentrations of hPolη and yPol η at 50 nM, a substantial fraction (18% and 8%, resp.) of the primer extended to full-length products in 30 min. So we chose to use 50 nM Pol η concentrations for the subsequent experiment. 
As shown in Figure 3, in the presence of all four dNTPs, extension of a 14-mer primer on the control template rapidly generated a full-length extension product (a 23-mer) as well as a blunt-end addition product (a 24-mer) in 5 min with 50 nM hPol η whereas the extension of the primer stalled after adding a base opposite the cross-linked T and G, generating a 19-mer. It is interesting that hPol η did not stall before either of the cross-linked bases, but it was unable to continue synthesis only after incorporating a dNMP opposite the cross-linked G. Longer incubation allowed further extension, including a small fraction of full-length product, but even after 2 h the 19-mer band was the most pronounced extension product. The result was qualitatively similar with yPol η, except that the extent of full-length product was only marginally increased with time and it stalled both after incorporation of a nucleotide opposite the cross-linked G (19-mer) and after incorporation of a nucleotide opposite its 5′-neighbor (20-mer). Standing start experiments were carried out, and the amount of extension of the primer by one nucleotide was plotted with increasing dNTP concentration to determine the initial velocity of the polymerase-catalyzed reaction, which is shown in Figure 4. From these plots, the steady-state kinetic parameters, Km and kcat, for nucleotide incorporation opposite cross-linked G and the same for the control were determined (Tables 1 and 2). For hPol η, catalytic efficiency (kcat/Km) of dCMP incorporation was 17-fold decreased opposite the cross-linked G whereas extension to the next base was decreased 5-fold relative to control. By contrast, for yPol η dCMP incorporation was decreased 1,000-fold, and extension to the next base was decreased 12-fold relative to control. This suggests that yPol η had more difficulty in bypassing G[8,5-Me]T than hPol η. 
As was reported before [12, 19], in contrast to hPol η, which incorporates the correct nucleotide preferentially opposite G[8,5-Me]T, yPol η was much more error-prone, and insertion of dAMP opposite the cross-linked G was favored over that of the correct nucleotide, dCMP (Tables 1 and 2). In fact, dAMP misincorporation opposite the cross-linked G was more than 20-times more efficient than dCMP incorporation by yPol η. However, with yPol η the extension was 100-fold slower for G*:A pair compared to G*:C pair whereas the same for hPol η was about 13-fold slower. In each case, the higher catalytic efficiency was due to a much smaller Km. When nucleotide incorporation fidelity opposite the cross-linked G and its 5′ base was considered, dCMP incorporation over dAMP misincorporation was 200-fold more efficient for hPol η whereas the same was only 5-fold more efficient for yPol η. Nevertheless, it seems that dCMP was preferred opposite the cross-linked G for bypass of G[8,5-Me]T by both DNA polymerases although the ability to discriminate against the wrong nucleotide by yPol η was not high.Table 1 Kinetic parameters for dCTP and dATP incorporation and chain extension by human DNA polymeraseη (6.4 nM) on an undamaged and G[8,5-Me]T cross-link containing substrate. 
dNTPkcat (min-1)Km (μM)kcat/Km (μM-1 min-1)FincaX:Gkcat (min-1)Km (μM)kcat/Km (μM-1 min-1)Fexta5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′ACAAACATAGCGAACp32X ACAAACATAGCGAACp32Nucleotide IncorporationChain ExtensionbUndamaged substratedCTP7.6± 0.10.02± 0.0033801.0C:G3.9± 0.20.09± 0.00543.41.0dATP6.25± 0.016.5± 0.030.962.5× 10-3A:G4.7± 0.15.1± 0.030.92.1× 10-2G[8,5-Me]T-containing substratedCTP2.43± 0.020.11± 0.00222.15.8× 10-2C:G*2.0± 0.020.24± 0.0048.30.2dATP1.75± 0.11.07± 0.011.634.2× 10-3A:G*1.7± 0.012.6± 0.030.651.5× 10-2aFidelity (F) of incorporation or extension was determined by the following equation: (kcat/Km)incorrect/(kcat/Km)correct.bSteady-state kinetics for dGTP incorporation opposite C immediately following the X:G or X:G* pair was determined.Table 2 Kinetic parameters for dCTP and dATP incorporation and chain extension by yeast DNA polymeraseη (6.4 nM) on an undamaged and G[8,5-Me]T cross-link containing substrate. dNTPkcat (min-1)Km (μM)kcat/Km (μM-1 min-1)FincaX:Gkcat (min-1)Km (μM)kcat/Km (μM-1 min-1)Fexta5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′5′-GTGCĜTGTTTGTATCGCTTGCAGGGG-3′ACAAACATAGCGAACp32X ACAAACATAGCGAACp32Nucleotide IncorporationChain ExtensionbUndamaged substratedCTP7.3± 0.020.04± 0.002182.51.0C:G4.4± 0.040.07± 0.00162.81.0dATP5.3± 0.039.5± 0.0020.563.1× 10-3A:G3.7± 0.041.3± 0.012.84.4× 10-2G[8,5-Me]T-containing substratedCTP1.99± 0.00111.2± 0.010.179.3× 10-4C:G*1.6± 0.10.31± 0.0025.28.2× 10-2dATP2.2± 0.010.59± 0.0053.722.0× 10-2A:G*1.2± 0.00922.0± 1.00.057.9× 10-4aFidelity (F) of incorporation or extension was determined by the following equation: (kcat/Km)incorrect/(kcat/Km)correct.bSteady-state kinetics for dGTP incorporation opposite C immediately following the X:G or X:G* pair was determined.Extension of a 14-mer primer by 50 nM hPolη (a) and yPol η (b) on control and G[8,5-Me]T templates in the presence of all four dNTPs for the indicated time at 37°C. (a)(b)Single nucleotide incorporation and extension assay. 
Template-primer (50 nM) was incubated with 6.4 nM hPol η or yPol η for various times with increasing concentrations of dNTP. Steady-state kinetics for single-nucleotide incorporation opposite the cross-linked G (solid line) or control G (dashed line) are shown in (a), (b), (e), and (f): (a) and (b) represent dCTP and dATP incorporation, respectively, for hPol η, whereas (e) and (f) represent the same for yPol η. Steady-state kinetics for dGTP incorporation opposite the C immediately following the cross-linked G (solid line) or control G (dashed line) are shown in (c), (d), (g), and (h): (c) and (d) represent extension of the G*:C and G*:A pairs, respectively, for hPol η, whereas (g) and (h) represent the same for yPol η. Error bars show the standard deviation of at least three experiments.

## 3.2. Analysis of the Full-Length Bypass Products

Although steady-state kinetics provides useful information on the ability to incorporate a nucleotide opposite a lesion and extend further, it is important to determine the sequences of full-length bypass products in the presence of all four dNTPs. In mammalian cells, replication of G[8,5-Me]T-containing DNA also generates a significant level of semitargeted mutations [19], and it would be of interest to determine whether Pol η causes errors not only opposite the cross-link but also near the lesion. Guengerich and colleagues have developed an elegant LC-ESI/MS/MS-based method to analyze polymerase extension products [25–30]. In the current paper, we report a plasmid-based approach to accomplish the same goal. The principle of this approach is shown in Scheme 1. The pMS2 plasmid was linearized by digestion with EcoRV. A scaffold 36-mer, containing two 12-nucleotide regions complementary to the two ends of the digested plasmid, was annealed to generate a gapped circular DNA in which the G[8,5-Me]T cross-link was located in the middle of a 12-nucleotide gap.
The scaffold G[8,5-Me]T-36-mer contained the same local DNA sequence near the G[8,5-Me]T cross-link as the 26-mer used in the steady-state kinetic assay. It also contained several uracils replacing thymines at the two ends where it annealed with the plasmid. The circular scaffold plasmid DNA was incubated with 50 nM hPol η or yPol η and a mixture of all four dNTPs (25 mM each) in 25 mM Tris-HCl buffer (pH 7.5), 5 mM MgCl2, and 5 mM dithiothreitol at 37°C for various times. We expected a large fraction of the control construct to extend to the full-length circular product, whereas a much smaller fraction of the cross-linked construct would be able to do the same. The full-length extension product extended up to the 3′ end of the circular DNA, and the nick between the two ends was sealed by ligation overnight at 16°C in the presence of an excess of T4 DNA ligase to generate a covalently closed circular single-stranded (ss) plasmid. Although the DNA polymerase was not inactivated, both hPol η and yPol η were inefficient in continuing further extension at 16°C (data not shown). The scaffold 36-mer was digested by treatment with uracil DNA glycosylase and exonuclease III. Removal of the lesion-containing scaffold was considered critical to avoid any potential in vivo replication of the lesion. Therefore, we analyzed the products by agarose gel electrophoresis after uracil DNA glycosylase followed by exonuclease III treatment and confirmed that the plasmid was quantitatively linearized when either Pol η or DNA ligase was absent (data not shown). The proteins were extracted with phenol and chloroform, and the DNA was precipitated with ethanol. The DNA was used to transform repair-competent E. coli DL7, and the transformants were analyzed by DNA sequencing.

Scheme 1. General protocol for analyzing the full-length extension products.

The number of colonies recovered upon transformation in E. coli of the plasmid incubated with hPol η for different times is shown in Figure 5.
Since linear ss DNA is inefficient in transfecting E. coli, no colonies were recovered at the zero time point from either the control or the G[8,5-Me]T scaffold, whereas increasing numbers of colonies were recovered as the incubation time with the DNA polymerase increased. The number of colonies reflected the extent of full-length product that was ligated; relative to the control 36-mer scaffold, the G[8,5-Me]T scaffold generated only 9% progeny at 15 min, which increased to 18% at 30 min and to 27% after 2 h (Figure 5). (For this calculation, the number of colonies obtained from the 120 min extension of the control 36-mer was considered 100%.) This suggests that with increased incubation time, more DNA polymerase molecules can bypass the G[8,5-Me]T cross-link, as we also noted in the primer extension experiment with the G[8,5-Me]T 26-mer.

Figure 5. The number of colonies obtained from extension by hPol η of a control scaffold (black) compared with the G[8,5-Me]T scaffold (white) at different time points. The number of colonies obtained from the 120 min extension of the control 36-mer was arbitrarily considered 100%. The zero time point showing no colonies ensures that colonies originated only from the extension products.

DNA sequencing results of the 2 h incubation products from two independent experiments with hPol η and yPol η are shown in Figure 6. The types and numbers of mutants from the two experiments are shown in Figure 6(a), whereas Figure 6(b) shows the combined result in a bar graph. As noted in the kinetic studies, yPol η was found to be more error-prone than hPol η. Mutational frequencies of yPol η and hPol η were 36% and 14%, respectively, for the G[8,5-Me]T cross-link, whereas no mutants were recovered from the control after sequencing in excess of one hundred colonies following extension with each DNA polymerase. The pattern of mutagenesis from the G[8,5-Me]T cross-link was significantly different for these two polymerases.
yPol η induced targeted G→T as the major mutagenic event, followed by targeted G→C; these two base substitutions, taken together, constituted 83% of the mutations. By contrast, in the case of hPol η, semitargeted mutations (7.2%) occurred at approximately the same frequency as the targeted mutations (6.9%). With hPol η, although the most frequent mutation was G→T (4%), approximately half as many G→A (2.2%) were also detected. It is interesting that not even a single targeted G→A was detected in the extension by yPol η. Similarly, targeted G→C was completely absent with hPol η. For the cross-linked T, yPol η bypass was completely error-free, whereas a low (0.6%) level of T→G transversions was detected with hPol η. With yPol η, semitargeted mutations were restricted to the immediate 5′-C and 3′-G of the cross-link, but with hPol η, errors were noted as far as two bases 5′ and five bases 3′ to the cross-link. In sum, despite the shared predominance of targeted G→T transversions, the mutational profiles of the two Y-family DNA polymerases exhibited distinct patterns.

Figure 6. (a) Types and frequencies of mutations induced by G[8,5-Me]T as determined from the full-length extension products generated by hPol η (top) and yPol η (bottom). No mutants were isolated from the control batches after sequencing in excess of one hundred colonies. (b) The combined data in (a) represented in a bar graph showing the percentage of each type of single-base substitution or deletion induced by G[8,5-Me]T with hPol η (top) and yPol η (bottom). The colors represent T (green), A (blue), G (red), C (orange), and one-base deletion (yellow). The T deletion in a run of three thymines by hPol η is arbitrarily shown at the T closest to the lesion.

## 4. Discussion

In earlier studies it was shown that hPol η preferentially incorporates the correct nucleotide opposite each of the cross-linked bases, whereas yPol η, though it accurately incorporates dAMP opposite the cross-linked T, is highly error-prone in nucleotide incorporation opposite the cross-linked G [12, 19]. However, neither the kinetics of further extension of the primer nor the sequences of the full-length extension products were determined. Miller and Grollman [20] have shown that DNA polymerase functions can be affected by replication-blocking lesions remote from the lesion site. In the current investigation, using steady-state kinetics, we determined that although dAMP incorporation opposite the G of G[8,5-Me]T by yPol η was more than 20-fold preferred over dCMP incorporation, further extension of the G*:A pair was 100-fold less efficient than extension of the G*:C pair. As a result, dCMP incorporation followed by further extension was 5-fold as efficient as dAMP incorporation for yPol η; for hPol η, on the other hand, it was nearly 200-fold as efficient.

In order to characterize the full-length extension products in the presence of all four dNTPs, we developed a novel method to sequence them. In this approach, as shown in Scheme 1, a single-stranded plasmid (e.g., pMS2) containing a restriction endonuclease site in a hairpin region is digested and linearized by the enzyme. A DNA adduct-containing scaffold is annealed to the linear DNA to create a gapped plasmid in which the lesion is situated in the middle of the gap. A DNA polymerase is allowed to extend the 3′ end of the plasmid to fill in the gap, which is then enzymatically ligated to create a closed circular plasmid or viral genome. The ss circular DNA is replicated in E. coli, and the progeny are subjected to DNA sequencing. The scaffold is quantitatively removed prior to transformation in E. coli to avoid biological processing of the lesion in vivo.
DNA sequencing of the region that originally contained the gap reveals the nature of the extension products. It is worth mentioning that other plasmid-based sequencing techniques using PCR amplification have been developed and successfully used in recent years [31, 32]. However, we believe that the hallmark of our current approach is its simplicity: it neither requires expensive instrumentation nor is it technically demanding. Whereas the sensitivity of mass spectral analysis is limited by the signal-to-noise ratio, which varies from experiment to experiment, the plasmid-based sequencing approach enables detection of misincorporations occurring at a frequency of less than 1%. However, sequence determination depends on the efficiency of ligation, which is proficient only with full-length extension products. As a result, a limitation of the current plasmid-based approach is that it offers no information on incomplete extension products, which may be readily available by the MS approach. Using this method of sequencing, we showed that yPol η was much more error-prone in bypassing G[8,5-Me]T than hPol η. Targeted G→T was the major type of mutation for both DNA polymerases, but yPol η induced it nearly 6-fold more efficiently than hPol η. With hPol η, semitargeted mutations, that is, mutations near the lesion, occurred at approximately the same frequency as the targeted mutations, whereas more than 80% of the mutations were targeted with yPol η.

Several studies have established differences between the yeast and the human enzyme. For translesion synthesis of γ-hydroxypropanodeoxyguanosine, yPol η synthesizes past the adduct relatively accurately, whereas hPol η discriminates poorly between incorporation of correct and wrong nucleotides opposite the adduct [33]. The mechanistic basis of catalysis by these two enzymes has been examined, and they were found to differ in several important respects [34].
hPol η has a 50-fold-faster rate of nucleotide incorporation than yPol η but binds the nucleotide with an approximately 50-fold-lower affinity. It is unclear how these differences influence nucleotide incorporation opposite the G[8,5-Me]T cross-link.

When the hPol η mutational spectrum was compared with the mutations detected in human embryonic kidney cells [19], significant similarities between the two results are apparent. Notably, the high frequency of G→T followed by G→A, and the semitargeted mutations 5′ to the cross-link such as 5′-C→T and 5′-G→T, reflect a similar pattern in the in vitro studies using purified hPol η and in the cellular studies. These similarities notwithstanding, certain variations in the mutation profiles are also noteworthy. Targeted T→A and substitutions at the adjacent 3′-G and the thymines noted in the mammalian cells were absent in the hPol η extensions. It has been suggested that, in a cell, binding to proliferating cell nuclear antigen (PCNA) via its PCNA-interacting protein domain is a prerequisite for hPol η’s ability to function in translesion synthesis in human cells [35]. Therefore, certain differences between bypass of a DNA lesion by purified hPol η in vitro and that in a cell should be anticipated. Although there is insufficient evidence to conclude that hPol η is responsible for the observed mutations of G[8,5-Me]T in human cells, it seems reasonable to postulate that this Y-family DNA polymerase is one of the DNA polymerases involved in the cellular bypass of this cross-link.
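The fold-preferences discussed above can be checked directly from the kcat/Km values in Tables 1 and 2. A minimal sketch follows; treating net bypass as the product of insertion and extension efficiencies is an illustrative simplification for this check, not a kinetic model from the paper:

```python
# kcat/Km values (uM^-1 min^-1) transcribed from Tables 1 and 2 for the
# G[8,5-Me]T (G*)-containing substrate: insertion of dCTP or dATP
# opposite G*, and extension of the resulting G*:C / G*:A termini.
eff = {
    "hPol eta": {"ins_C": 22.1, "ins_A": 1.63, "ext_C": 8.3, "ext_A": 0.65},
    "yPol eta": {"ins_C": 0.17, "ins_A": 3.72, "ext_C": 5.2, "ext_A": 0.05},
}

for pol, e in eff.items():
    # Ratio of dATP to dCTP insertion efficiency opposite G*
    # (>1 means dAMP misinsertion is favored, as seen for yPol eta).
    ins_ratio = e["ins_A"] / e["ins_C"]
    # Fold-slower extension of the mismatched G*:A terminus vs G*:C.
    ext_penalty = e["ext_C"] / e["ext_A"]
    # Net preference for the error-free branch, approximating bypass as
    # insertion followed by extension (product of efficiencies).
    net = (e["ins_C"] * e["ext_C"]) / (e["ins_A"] * e["ext_A"])
    print(f"{pol}: dATP/dCTP insertion ratio {ins_ratio:.2f}, "
          f"G*:A extension {ext_penalty:.0f}-fold slower, "
          f"error-free branch favored {net:.0f}-fold")
```

Run as-is, this reproduces the numbers quoted in the text: the more than 20-fold dAMP insertion preference and roughly 100-fold extension penalty for yPol η, the roughly 13-fold penalty for hPol η, and a net error-free preference of about 173-fold for hPol η (quoted as "nearly 200-fold") versus about 5-fold for yPol η.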
# Prevalence of Fabry Disease among Patients with Parkinson’s Disease

**Authors:** Alexandra Lackova; Christian Beetz; Sebastian Oppermann; Peter Bauer; Petra Pavelekova; Tatiana Lorincova; Miriam Ostrozovicova; Kristina Kulcsarova; Jana Cobejova; Martin Cobej; Petra Levicka; Simona Liesenerova; Daniela Sendekova; Viktoria Sukovska; Zuzana Gdovinova; Vladimir Han; Mie Rizig; Henry Houlden; Matej Skorvanek

**Journal:** Parkinson’s Disease (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1014950

---

## Abstract

Background. An increased prevalence of Parkinson’s disease (PD) has been previously reported in subjects with Fabry disease (FD) carrying alpha-galactosidase (GLA) mutations and their first-line relatives. Moreover, decreased alpha-galactosidase A (AGLA) enzymatic activity has been reported among cases with PD compared to controls. Objective. The aim of our study was to determine the prevalence of FD among patients with PD. Methods. We recruited 236 consecutive patients with PD from February 2018 to December 2020. Clinical and sociodemographic data, including MDS-UPDRS-III scores and Hoehn and Yahr (HY) stage, were collected, and in-depth phenotyping was performed in subjects with identified GLA variants. A multistep approach, including standard determination of AGLA activity and Lyso-Gb3 in males and next-generation sequencing-based GLA analysis in all females and in males with abnormal AGLA levels, was performed in a routine diagnostic setting. Results. The mean age of our patients was 68.9 ± 8.9 years, 130 were men (55.1%), and the mean disease duration was 7.77 ± 5.35 years. Among the 130 men, AGLA levels were low in 20 patients (15%), and subsequent Lyso-Gb3 testing showed values within the reference range for all tested subjects. In 126 subsequently genetically tested patients, four heterozygous p.(Asp313Tyr) GLA variants (3.2%, MAF 0.016) were identified; all carriers were female.
None of the 4 GLA variant carriers identified had any clinical manifestation suggestive of FD. Conclusions. The results of this study suggest a possible relationship between FD and PD in a small proportion of cases. Nevertheless, the GLA variant found in our cohort is classified as a variant of unknown significance; therefore, its pathogenic, causative role in the context of PD needs further elucidation, and these findings should be interpreted with caution.

---

## Body

## 1. Introduction

Fabry disease (FD) belongs to the group of genetically determined lysosomal storage disorders with X-linked inheritance, which lead to a deficiency of the lysosomal enzyme alpha-galactosidase A (AGLA), resulting in the accumulation of glycosphingolipids, especially globotriaosylceramide (Gb3), in vital organs [1]. Although FD is a disease with X-linked inheritance, clinical symptoms are very common in females as well. Some mutations cause classic disease, while others result in a milder phenotype with later onset [2]. The clinical spectrum of the disease is broad. Early signs include typical neurological manifestations, skin changes, renal involvement, and characteristic cardiovascular manifestations. Due to the complexity of the disease, multidisciplinary management is needed [3]. Within the group of lysosomal storage diseases, the association between mutations in glucocerebrosidase (GBA), which encodes the lysosomal enzyme glucocerebrosidase (GCase), and Parkinson’s disease (PD) has highlighted the importance of lysosomal function in PD pathogenesis [4]. Since lysosomes are involved in the degradation of alpha-synuclein, it is thought that their dysfunction may lead to its accumulation and to the subsequent development of PD [5]. In recent years, a growing number of studies have examined the interrelationship between parkinsonism and mutations in the alpha-galactosidase (GLA) gene in FD.
An increased prevalence of PD has been previously reported in subjects with FD and their first-line relatives [6]. Moreover, decreased AGLA enzymatic activity has been reported among cases with PD compared to controls [2]. This points to a potential relationship between these two disorders. The AGLA enzyme also represents a potential therapeutic target for genetically determined PD, as increasing the levels of such lysosomal enzymes leads to a reduction in the levels of alpha-synuclein [7]. Nevertheless, the prevalence of FD among PD patients has not been systematically studied so far. Therefore, we aimed to determine the prevalence of FD among patients with PD in a single tertiary movement disorders centre in Kosice, Slovakia.

## 2. Methods and Participants

### 2.1. Participants and Clinical Evaluation

Overall, from February 2018 to December 2020, we recruited 236 consecutive patients with PD diagnosed based on the MDS clinical criteria for PD [8] in a single tertiary movement disorders centre in Kosice, Slovakia, irrespective of their ethnicity, disease duration, disease stage, age of onset, or cognitive status. Motor examination was performed using the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) [9], including the Hoehn and Yahr (HY) scale to assess the disease stage. The MDS-UPDRS is a four-subscale combined scale that comprehensively assesses the symptoms of PD. It consists of the following: Part I—nonmotor experiences of daily living, Part II—motor experiences of daily living, Part III—motor examination, and Part IV—motor complications.
All items are scored on a scale from 0 (normal) to 4 (severe), and the total score for each part is obtained from the sum of the corresponding item scores. Basic sociodemographic data were recorded, as well as the age of onset and disease duration, disease subtype (tremor-dominant, akinetic-rigid, mixed, and PIGD [10]), and information about a family history of neurodegenerative parkinsonism. The presence of selected clinical parameters was evaluated, including dementia, based on the Montreal Cognitive Assessment (MoCA) [11] and the Parkinson’s Disease-Cognitive Rating Scale (PD-CRS) [12], both recommended for cognitive screening in PD by the Movement Disorder Society [13]. The MoCA is a one-page cognitive test assessing multiple cognitive domains, including short-term memory, executive functions, visuospatial abilities, naming, attention, working memory, language, concentration, verbal abstraction, and orientation, with a maximum score of 30 points and cutoffs of 25/26 points for PD-MCI (PD-mild cognitive impairment) (sensitivity: 90%; specificity: 75%) and 20/21 points for PD-D (PD-dementia) (sensitivity: 81%; specificity: 95%) [14]. The PD-CRS is a newer battery of cognitive scales to assess cognitive decline in PD patients. This battery consists of subtests assessing cortical functions (confrontation naming and clock copying) and subcortical functions (sustained attention, working memory, alternating and action verbal fluency, clock drawing, and immediate and delayed verbal memory with free recall) using a total of 9 tasks.
The maximum subcortical and cortical function scores are 104 and 30, respectively, with a cutoff score of ≤81 points showing a sensitivity of 79% and specificity of 80% for PD-MCI compared with healthy controls [15], and cutoffs of ≤62 and ≤64 points both showing a sensitivity and specificity of ≥94% for PD-D compared with healthy controls [16]. The study was performed according to the Declaration of Helsinki (1975); it was approved by the local ethics committee, and all patients signed written informed consent before enrolment.

### 2.2. Enzymatic Activity Assay

The following method was applied for the determination of AGLA (alpha-galactosidase A) activity in dried blood spots (DBS). The protocol involves extraction of AGLA from the DBS, incubation with a synthetic substrate for a defined amount of time, and fluorimetric detection of the enzymatic product. AGLA is extracted from dried blood spots in sodium acetate buffer in a 96-well plate at 37°C with agitation. A specific synthetic substrate (4-methylumbelliferyl-α-D-galactopyranoside) is added to the extracts, and the plates are incubated at 37°C for 4–6 h. The reaction is stopped by adding carbonate buffer (changing the pH to 10.7). Quantitation of the product (4-MU) was performed by fluorimetry on a Victor2 Fluorometer (PerkinElmer) using an external calibration line of 4-methylumbelliferone. The results of the enzymatic activity determination were expressed in μmol/L/h. Enzymatic AGLA levels were not analyzed in females, as previous studies have shown an inconsistent relationship between normal AGLA levels and GLA mutation status in female FD subjects. In men, on the other hand, normal AGLA levels predict a normal GLA mutation status, and thus further Lyso-Gb3 determination and genetic studies were performed in men only in the case of abnormal AGLA levels.

### 2.3. Determination of the Levels of Lyso-Gb3

Lyso-Gb3 levels were determined only in men with abnormal AGLA levels.
For each sample, three 3.2 mm punches from a dried blood spot are treated with 150 μL metabolite extraction buffer (50 μL DMSO/water 1 : 1 and 100 μL internal standard solution in ethanol) at 37°C for 30 min. Potential filter-card-derived debris is removed in a subsequent filtration step. The metabolite of interest is then quantified by liquid chromatography multiple-reaction-monitoring mass spectrometry (LC/MRM-MS) on an ultraperformance liquid chromatography (UPLC) system. Absolute concentrations are calculated based on an intraexperimental calibration line.

### 2.4. Genetic Studies

Genetic studies were performed in all females and in males with abnormal AGLA levels, as shown in the flowchart in Figure 1. The coding sequence of GLA, along with at least 50 base pairs of neighbouring intronic or UTR sequence, was analyzed by a diagnostically validated in-house assay as described in more detail previously [17].

Figure 1. Algorithm of laboratory testing in male and female subjects with PD.

### 2.5. Statistical Analysis

SPSS statistical software version 22.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. First, we described the basic sociodemographic characteristics of our study group. Subsequently, laboratory parameters and the prevalence of mutations in the GLA gene were analyzed. Lastly, we described the clinical characteristics of subjects with identified GLA variants.
## 3. Results

The study included 236 PD patients with a mean age of 68.9 ± 8.9 years, of whom 130 (55.1%) were male; the mean disease duration was 7.77 ± 5.35 years. Detailed characteristics of the PD sample are described in Table 1. Among the 130 men, AGLA levels (average 22.4 ± 7.57 μmol/L/h) were low in 20 patients (15%) (average 12.25 ± 2.51 μmol/L/h), and subsequent Lyso-Gb3 testing showed values within the reference range for all tested subjects (0.9–1.7 ng/mL). In 126 genetically tested patients (20 males with low AGLA levels and 106 females), four c.937G>T, p.
(Asp313Tyr) variants were identified. Interestingly, all of the positive patients were women, and none of them reported a family history of PD. The age of onset for all subjects carrying heterozygous GLA p.(Asp313Tyr) variants was >55 years, and most had a mixed PD phenotype (3/4). All GLA p.(Asp313Tyr)-positive subjects had a good response to dopaminergic medication, and all of them had developed fluctuations and dyskinesia at the time of clinical assessment. In terms of nonmotor symptoms, all patients complained of autonomic dysfunction, especially urinary problems (3/4 had urinary urgency), and the majority also complained of sleep-related issues, fatigue (3/4), and mood disorders (3/4) such as depression and anxiety. Cognitive status was normal in 2 GLA p.(Asp313Tyr)-positive subjects according to the MoCA and PD-CRS, while 2 patients scored in the PD mild cognitive impairment range. In terms of prodromal features, all four patients carrying the p.(Asp313Tyr) variant complained of increased sweating and three of smell loss; however, none reported symptoms of REM-sleep behavior disorder (RBD), and only one reported constipation.

Table 1. Characteristics of the PD population (N = 236).

| Characteristic | Mean ± SD or N (%) | Range |
|---|---|---|
| Age (years) | 68.9 ± 8.9 | 33–88 |
| Gender: male | 130 (55.1%) | |
| Gender: female | 106 (44.9%) | |
| Age of onset (years) | 61.13 ± 10.35 | 27–85 |
| Disease duration (years) | 7.77 ± 5.35 | 0–30 |
| MDS-UPDRS part III score | 27.58 ± 13.35 | 3–80 |
| GLA c.937G>T, p.(Asp313Tyr) variant (N = 126 tested) | 4 (3.17%) | |
| alpha-Galactosidase level in men (N = 129; <15.3 μmol/L/h pathological) | 22.40 ± 7.57 | 5.3–57.2 |
| Pathological AGLA level in men | 20 (15%); 12.25 ± 2.51 μmol/L/h | 5.3–15.2 |
| Lyso-Gb3 level in men (>1.8 ng/mL pathological) | 1.23 ± 0.22 | 0.9–1.7 |

MDS-UPDRS: Movement Disorder Society-Unified Parkinson’s Disease Rating Scale.

All patients, except one who refused, underwent brain MRI examinations, which revealed only mild subcortical white matter T2 hyperintensities, likely of vascular etiology.
In one of the subjects, a swallow-tail sign at the level of the substantia nigra was revealed. A DaT scan was performed in all 4 subjects, with pathological findings. All patients carrying the p.(Asp313Tyr) variant underwent cardiological (including echocardiography), nephrological, and ophthalmological examinations, which revealed no pathological findings that would support a diagnosis of FD. Detailed clinical characteristics of the patients harbouring the p.(Asp313Tyr) variant in our cohort are summarized in Table 2.

Table 2. Detailed clinical characteristics of PD patients harbouring the p.(Asp313Tyr) variant.

| | Patient 1 | Patient 2 | Patient 3 | Patient 4 |
|---|---|---|---|---|
| GLA variant | p.(Asp313Tyr) | p.(Asp313Tyr) | p.(Asp313Tyr) | p.(Asp313Tyr) |
| Origin | Slovak (ES) | Slovak (VA) | Slovak (AS) | Slovak (HR) |
| Family history | − | − | − | − |
| Age | 68 | 66 | 75 | 70 |
| Gender | F | F | F | F |
| Age of onset | 56 | 61 | 69 | 60 |
| PD subtype | PIGD | Mixed | Mixed | Mixed |
| Falls | + | − | − | − |
| Freezing | + | − | + | + |
| Tremor | − | + | − | + |
| MDS-UPDRS part III (ON) | 35 | 13 | 20 | 32 |
| HY stage | 4 | 1 | 2 | 2 |
| Motor fluctuations | Wearing off, PoD dyskinesia | PoD dyskinesia, wearing off | PoD dyskinesia | PoD dyskinesia, wearing off |
| NMS | Insomnia, fatigue, daytime sleepiness, frequent urination, nocturia, constipation, depression, apathy, anxious mood, vertigo-dizziness | Urinary urgency, nocturia, fatigue, mild anxious mood, mild hyposmia | Daytime sleepiness, fatigue, urinary urgency | Urinary urgency, depression, anxious mood, chronic pain of lower limbs |
| Cognition | MoCA 29/30, PD-CRS 99/134 | MoCA 27/30, PD-CRS 94/134 | MoCA 25/30, PD-CRS 62/134 | MoCA 21/30 |
| Initial motor features | Balance problems, left-side bradykinesia and rigidity | Pain and tremor of the left hand | Tremor of the right hand | Tremor of the right hand |
| Prodromal features | Smell loss, sweating | Fatigue, constipation, sweating, urinary urgency, anxiety | Smell loss, excessive sweating | Sweating, urinary urgency |
| Current medication | LCIG 4.2 mL/h | L/C 200 mg/d | L/C 875 mg/d, PPX 1.57 mg/d | L/C 500 mg/d, rasagiline 1 mg/d |

PD, Parkinson’s disease; MDS-UPDRS, Movement Disorder Society-Unified Parkinson’s Disease Rating Scale; HY, Hoehn and Yahr; NMS, nonmotor symptoms; F, female; PIGD, postural instability gait disorder; PoD, peak-of-dose; MoCA, Montreal Cognitive Assessment; PD-CRS, Parkinson’s Disease Cognitive Rating Scale; L/C, levodopa/carbidopa; PPX, pramipexole; LCIG, levodopa-carbidopa intestinal gel.

## 4. Discussion

Genetic screening of 127 PD patients for GLA variants previously associated with FD in our cohort resulted in the identification of 4 heterozygous GLA p.(Asp313Tyr) variants. The GLA p.(Asp313Tyr) allele frequency of 1.6% found in our PD population was significantly higher than the allele frequency in the general population reported in the major genetic databases: ExAC (0.3%), gnomAD (0.3%), TOPMed (0.3%), and the 1000 Genomes Project (0.2%) [18]. To the best of our knowledge, this is the first study surveying the prevalence of FD among patients with PD.

The exact etiology of PD is still not known. Several molecular pathways have been associated with the pathology of neurodegeneration, including mitochondrial dysfunction [19], oxidative and proteolytic stress, immune and inflammatory responses [20], and the autophagy-lysosomal pathway (ALP) [21]. The role of lysosomal dysfunction in the pathogenesis of parkinsonism is highlighted by the link between PD and heterozygous GBA carriers [6]. Although there is considerable literature supporting a relationship between GBA and synucleinopathies [4, 22, 23], much less is known about a possible association between synucleinopathies and FD, which may also be common [24, 25]. Parkinsonism has been observed in patients with FD in several studies [6, 26–29], which suggests that there may be an increased risk of developing PD in individuals carrying GLA mutations. Recent findings of decreased AGLA activity, characteristic of FD, in some patients with parkinsonism suggest a link between the two diseases [2]. Due to these findings, we surveyed patients with PD to determine the prevalence of FD. In four patients with PD who were genetically tested for FD, the p.
(Asp313Tyr) variant was identified, at an allele frequency higher than in the general population (1.6% vs. 0.2–0.3%). Interestingly, all of these patients were women carrying the same p.(Asp313Tyr) GLA variant. Previous studies reporting on the clinical features of parkinsonism in the few described FD subjects [27–30] showed a typical age of onset, mostly an akinetic-rigid type of parkinsonism, and typically a good response to dopaminergic treatment. Prodromal and nonmotor symptoms were less thoroughly described in these reports. This is in line with our findings showing a typically later onset of PD, with a good response to dopaminergic therapy, where nonmotor symptoms were dominated by autonomic symptoms as described previously. None of the previous reports or our own data, thus far, supports the presence of clinical features of parkinsonism that would be suggestive of GLA carrier status. According to the existing literature, more than 900 variants in the GLA gene have been described [31], but the specific clinical impact of most mutations has not yet been well explored [32]. Although substantial literature exists, it is still under debate whether the p.(Asp313Tyr) variant represents a disease-causing mutation, a low-pathogenic variant, or just a polymorphism [33]. In 1993, this mutation was reported as causative in a male patient with a classical manifestation of FD [34]. However, 10 years later, a more detailed analysis revealed a second missense mutation, p.(Cys172Gly), in the same patient, which called into question the pathogenicity of the p.(Asp313Tyr) variant [35]. Later, the mutation was identified as a “pseudodeficient allele,” suggesting that mutant enzyme activity is pH dependent [35, 36]. Similarly, Niemann et al. and Oder et al. describe this variant as not clinically relevant [37] or as nonpathogenic for FD [32], and the p.(Asp313Tyr) variant has also been referred to as a polymorphism [38].
The possible pathological significance of the p.(Asp313Tyr) variant in the GLA gene was not verified by Hasholt et al. in their examination of members of two Danish families; their findings support only the assumption that p.(Asp313Tyr) is a rare variant without pathological significance [39]. These findings contradict other studies showing that the p.(Asp313Tyr) variant may lead to FD-related nervous system manifestations [33, 39–41]. Koulousios et al. argued that the p.(Asp313Tyr) variant might be considered disease causing, finding that its prevalence among patients with FD is more than 35%, while its frequency in the general population is estimated at less than 1% [41]. According to Koulousios et al., this variant is associated with a milder phenotype and a later onset of the disease [41]. Data from the Moulin study showed that the p.(Asp313Tyr) GLA variant may lead to symptoms and organ manifestations compatible with FD [40]. Also, Zompola et al. reported two newly diagnosed Fabry cases with nervous system manifestations related to the p.(Asp313Tyr) variant, which strengthened the presumption of the pathogenicity of this mutation [33]. In a recent review by Effraimidis et al., the authors stated that, in comparison to the general population, the frequency of the p.(Asp313Tyr) variant is higher only in neurologic disorders [42]. In fact, patients carrying GLA variants may be asymptomatic or show a spectrum of mild clinical manifestations, including cerebrovascular disease, such as the recently reported cerebral hemodynamic changes in asymptomatic FD subjects at risk for cerebrovascular events [43]. Preclinical detection of neurovascular involvement in FD might allow appropriate management and prevention of future cerebrovascular complications and disability [43].
Due to the findings stated previously, patients carrying the p.(Asp313Tyr) variant should be monitored annually, as enzyme replacement therapy could be used if organ manifestations occur [33]. In summary, this study showed a higher prevalence of the p.(Asp313Tyr) variant than in the general population, further emphasizing the importance of studying the autophagy-lysosomal pathway in PD. None of our four patients showed cardiac hypertrophy, renal dysfunction, acroparesthesias, or corneal opacities. MRI did not reveal morphological or functional abnormalities specific for FD. Overall, the results of this study suggest that the p.(Asp313Tyr) variant is nonpathological or leads, at most, to a very mild variant of FD. On the other hand, this mutation may not always be reflected in a fully expressed FD phenotype and may not be accompanied by low levels of AGLA, as the residual activity of this enzyme is preserved [41]. Also, in the context of PD and FD, this variant of unknown significance may help clinicians and researchers in questioning the causative role of genetic variants within daily clinical and diagnostic settings [44]. Accordingly, as stated, its pathogenic, causative role in the context of PD needs to be further elucidated, and these findings should be interpreted with caution. Follow-up of the four female patients as asymptomatic carriers of p.(Asp313Tyr), and especially of their sons, is needed. In addition, the Fabry phenotype in women can vary from asymptomatic to severely affected, whereas in men the phenotype is usually more severe [2]. The diagnosis of this disease is challenging because the phenotypic manifestation of FD can range from typical to unusual, and an incorrect or late diagnosis can delay the necessary treatment [32].
Specific treatment with an orally administered α-galactosidase A inhibitor or recombinant human alpha-galactosidase (ERT) is available, with better results if therapy is initiated before organ damage occurs [2]. With the prospect of FD therapy acting as a disease modifier also for PD, screening for prodromal symptoms of PD is appropriate.

### 4.1. Strengths and Limitations

One limitation of this study is the lack of an ethnically matched healthy control group, which does not allow a direct comparison of prevalence between Slovak PD subjects and healthy controls, although the allele frequency of the GLA p.Asp313Tyr variant in the major genetic databases is rather constant at 0.2–0.3%. Also, as part of the clinical laboratory protocol, enzymatic AGLA levels were not analyzed in females, as previous studies have shown an inconsistent relationship between normal AGLA levels and GLA mutation status in FD subjects. On the other hand, normal AGLA levels in men predict a normal GLA mutation status, and thus genetic analyses were performed in men only in the case of abnormal AGLA levels. Another limitation is the wide range of ages of onset in our cohort; given that most young-onset PD is genetic, alternative etiologies of PD besides FD may contribute to the final prevalence.

## 5. Conclusion

While the prevalence of the GLA p.Asp313Tyr variant seems to be higher in PD patients compared to the general population, its pathogenic causative role in the context of PD needs to be further elucidated, and these findings should be interpreted with caution. The clinical significance of the abovementioned variant is still under debate. Patients carrying the p.(Asp313Tyr) variant should be monitored annually, as enzyme replacement therapy could be used if organ manifestations occur. --- *Source: 1014950-2022-01-24.xml*
# Prevalence of Fabry Disease among Patients with Parkinson’s Disease

**Authors:** Alexandra Lackova; Christian Beetz; Sebastian Oppermann; Peter Bauer; Petra Pavelekova; Tatiana Lorincova; Miriam Ostrozovicova; Kristina Kulcsarova; Jana Cobejova; Martin Cobej; Petra Levicka; Simona Liesenerova; Daniela Sendekova; Viktoria Sukovska; Zuzana Gdovinova; Vladimir Han; Mie Rizig; Henry Houlden; Matej Skorvanek

**Journal:** Parkinson’s Disease (2022)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1014950
---

## Abstract

Background. An increased prevalence of Parkinson’s disease (PD) has been previously reported in subjects with Fabry disease (FD) carrying alpha-galactosidase (GLA) mutations and in their first-line relatives. Moreover, decreased alpha-galactosidase A (AGLA) enzymatic activity has been reported among cases with PD compared to controls. Objective. The aim of our study was to determine the prevalence of FD among patients with PD. Methods. We recruited 236 consecutive patients with PD from February 2018 to December 2020. Clinical and sociodemographic data, including MDS-UPDRS-III scores and Hoehn and Yahr (HY) stage, were collected, and in-depth phenotyping was performed in subjects with identified GLA variants. A multistep approach, including standard determination of AGLA activity and Lyso-Gb3 in males and next-generation sequencing-based GLA analysis in all females and in males with abnormal AGLA levels, was performed in a routine diagnostic setting. Results. The mean age of our patients was 68.9 ± 8.9 years, 130 were men (55.1%), and the mean disease duration was 7.77 ± 5.35 years. Among the 130 men, AGLA levels were low in 20 patients (15%), and subsequent Lyso-Gb3 testing showed values within the reference range for all tested subjects. In 126 subsequently genetically tested patients, four heterozygous p.(Asp313Tyr) GLA variants (3.2%, MAF 0.016) were identified; all were females. None of the 4 GLA variant carriers identified had any clinical manifestation suggestive of FD. Conclusions. The results of this study suggest a possible relationship between FD and PD in a small proportion of cases. Nevertheless, the GLA variant found in our cohort is classified as a variant of unknown significance. Therefore, its pathogenic causative role in the context of PD needs further elucidation, and these findings should be interpreted with caution.

---

## Body

## 1. Introduction

Fabry disease (FD) belongs to the group of genetically determined lysosomal storage disorders with X-linked inheritance, which lead to a deficiency of the lysosomal enzyme alpha-galactosidase A (AGLA), resulting in the accumulation of glycosphingolipids, especially globotriaosylceramide (Gb3), in vital organs [1]. Although FD is a disease with X-linked inheritance, clinical symptoms are very common in females as well. Some mutations cause classic disease, while others result in a milder disease phenotype with later onset [2]. The clinical spectrum of the disease is broad. Early signs include typical neurological manifestations, skin changes, renal involvement, and characteristic cardiovascular manifestations. Due to the complexity of the disease, multidisciplinary management is needed [3]. Within the group of lysosomal storage diseases, the association between mutations in glucocerebrosidase (GBA), which encodes the lysosomal enzyme glucocerebrosidase (GCase), and Parkinson’s disease (PD) has highlighted the importance of lysosomal function in PD pathogenesis [4]. Since lysosomes are involved in the process of alpha-synuclein degradation, it is thought that their dysfunction may lead to its accumulation and the subsequent development of PD [5]. In recent years, there have been expanding studies on the interrelationship between parkinsonism and mutations of the alpha-galactosidase (GLA) gene in FD. An increased prevalence of PD has been previously reported in subjects with FD and their first-line relatives [6]. Moreover, decreased AGLA enzymatic activity has been reported among cases with PD compared to controls [2]. This points to a potential relationship between these two disorders. The function of the AGLA enzyme represents a new therapeutic approach for genetically determined PD, as increasing the levels of these enzymes leads to a reduction in the levels of alpha-synuclein [7].
Nevertheless, the prevalence of FD among PD patients has not been systematically studied so far. Therefore, we aimed to determine the prevalence of FD among patients with PD in a single tertiary movement disorders centre in Kosice, Slovakia.

## 2. Methods and Participants

### 2.1. Participants and Clinical Evaluation

Overall, from February 2018 to December 2020, we recruited 236 consecutive patients with PD diagnosed based on the MDS clinical criteria for PD [8] in a single tertiary movement disorders center in Kosice, Slovakia, irrespective of their ethnicity, disease duration, disease stage, age of onset, or cognitive status. Motor examination was performed using the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) [9], including the Hoehn and Yahr (HY) scale to assess the disease stage. The MDS-UPDRS is a four-subscale combined scale that comprehensively assesses the symptoms of PD. It consists of Part I (nonmotor experiences of daily living), Part II (motor experiences of daily living), Part III (motor examination), and Part IV (motor complications). All items are scored on a scale from 0 (normal) to 4 (severe), and the total score for each part is obtained from the sum of the corresponding item scores. Basic sociodemographic data were recorded, as well as the age of onset and disease duration, disease subtype (tremor-dominant, akinetic-rigid, mixed, and PIGD [10]), and information about family history of neurodegenerative parkinsonism. The presence of selected clinical parameters was evaluated, including dementia, based on the Montreal Cognitive Assessment (MoCA) [11] and the Parkinson’s Disease-Cognitive Rating Scale (PD-CRS) [12], both recommended for cognitive screening in PD by the Movement Disorder Society [13].
MoCA is a one-page cognitive screening test assessing multiple cognitive domains, including short-term memory, executive functions, visuospatial abilities, naming, attention, working memory, language, concentration, verbal abstraction, and orientation, with a maximum score of 30 points and cutoffs of 25/26 points for PD-MCI (PD mild cognitive impairment; sensitivity 90%, specificity 75%) and 20/21 points for PD-D (PD dementia; sensitivity 81%, specificity 95%) [14]. PD-CRS is a newer battery of cognitive scales to assess cognitive decline in PD patients. It consists of a total of 9 tasks assessing cortical functions (confrontation naming and clock copying) and subcortical functions (sustained attention, working memory, alternating and action verbal fluency, clock drawing, and immediate and delayed verbal memory with free recall). The maximum subcortical and cortical function scores are 104 and 30, with a cutoff score of ≤81 points showing a sensitivity of 79% and specificity of 80% for PD-MCI compared with healthy controls [15], and cutoffs of ≤62 and ≤64 points showing a sensitivity and specificity of ≥94% for PD-D compared with healthy controls [16]. The study was performed according to the Declaration of Helsinki (1975); it was approved by the local ethics committee, and all patients signed written informed consent before enrolment.

### 2.2. Enzymatic Activity Assay

The following method was applied for the determination of AGLA (alpha-galactosidase A) activity in DBS (dried blood spots). The protocol involves extraction of AGLA from DBS, incubation with a synthetic substrate for a defined amount of time, and detection of the enzymatic product by fluorimetry. AGLA enzymes are extracted from dried blood spots in sodium acetate buffer in a 96-well plate at 37°C with agitation. On top of the extracts, a specific synthetic substrate (4-methylumbelliferyl-α-D-galactopyranoside) is added, and the plates are incubated at 37°C for 4–6 h.
The reaction is stopped by adding carbonate buffer (changing the pH to 10.7). Quantitation of the product (4-methylumbelliferone, 4-MU) is performed by fluorimetry on a Victor2 Fluorometer (PerkinElmer) using an external calibration line of 4-methylumbelliferone. The results of the enzymatic activity determination are expressed in μmol/L/h. Enzymatic AGLA levels were not analyzed in females, as previous studies have shown an inconsistent relationship between normal AGLA levels and GLA mutation status in FD subjects. On the other hand, normal AGLA levels in men predict a normal GLA mutation status, and thus further Lyso-Gb3 level determination and genetic studies were performed in men only in the case of abnormal AGLA levels.

### 2.3. Determination of the Levels of Lyso-Gb3

Lyso-Gb3 levels were determined only in men with abnormal AGLA levels. For each sample, three 3.2 mm punches from a dried blood spot are treated with 150 μL metabolite extraction buffer (50 μL DMSO/water 1/1 and 100 μL internal standard (IS) solution in ethanol) at 37°C for 30 min. Potential filter card-derived debris is removed in a subsequent filtration step. The metabolite of interest is then quantified by liquid chromatography multiple-reaction-monitoring mass spectrometry (LC/MRM-MS) on an ultraperformance liquid chromatography (UPLC) system. Absolute concentrations are calculated based on an intraexperimental calibration line.

### 2.4. Genetic Studies

Genetic studies were performed in all females and in males with abnormal AGLA levels, as shown in the flowchart in Figure 1. The coding sequence of GLA, along with at least 50 base pairs of neighbouring intronic or UTR sequence, was analyzed by a diagnostically validated in-house assay, as described in more detail previously [17].

Figure 1: Algorithm of laboratory testing in male and female subjects with PD.

### 2.5. Statistical Analysis

SPSS statistical software version 22.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis.
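Both the AGLA assay (Section 2.2) and the Lyso-Gb3 quantification (Section 2.3) rely on a calibration line to convert raw instrument signal into a concentration, which is then normalized by incubation time for the activity readout. A minimal sketch of that computation follows; the calibration points, fluorescence value, and incubation time are illustrative assumptions, not the laboratory's actual parameters.

```python
# Hedged sketch: converting raw 4-MU fluorescence to AGLA activity (umol/L/h)
# via an external calibration line. All numeric values are illustrative.

def fit_calibration(concs, signals):
    """Least-squares line signal = slope * conc + intercept for 4-MU standards."""
    n = len(concs)
    mean_c = sum(concs) / n
    mean_s = sum(signals) / n
    slope = (sum((c - mean_c) * (s - mean_s) for c, s in zip(concs, signals))
             / sum((c - mean_c) ** 2 for c in concs))
    intercept = mean_s - slope * mean_c
    return slope, intercept

def activity_umol_per_l_per_h(signal, slope, intercept, incubation_h):
    """Product concentration (umol/L) formed over the incubation, per hour."""
    conc = (signal - intercept) / slope
    return conc / incubation_h

# Illustrative 4-MU standards (umol/L) vs. arbitrary fluorescence units
slope, intercept = fit_calibration([0, 25, 50, 100], [10, 260, 510, 1010])
print(activity_umol_per_l_per_h(910, slope, intercept, incubation_h=4.0))
```

With these made-up standards the line is signal = 10·conc + 10, so a signal of 910 corresponds to 90 μmol/L of product, or 22.5 μmol/L/h over a 4 h incubation.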
First, we described the basic sociodemographic characteristics of our study group. Subsequently, laboratory parameters and the prevalence of mutations in the GLA gene were analyzed. Lastly, we described the clinical characteristics of subjects with identified GLA variants.

## 3. Results

The study included 236 PD patients with a mean age of 68.9 ± 8.9 years, of whom 130 (55.1%) were male; the mean disease duration was 7.77 ± 5.35 years. Detailed characteristics of the PD sample are described in Table 1. Among the 130 men, the average AGLA level was 22.4 ± 7.57 μmol/L/h; levels were low in 20 patients (15%; average 12.25 ± 2.51 μmol/L/h), and subsequent Lyso-Gb3 testing showed values within the reference range (0.9–1.7 ng/mL) for all tested subjects. In 126 genetically tested patients (20 males with low AGLA levels and 106 females), four c.937G>T, p.(Asp313Tyr) variants were identified. Interestingly, all of the positive patients were women, and none of them reported a family history of PD. The age of onset for all subjects carrying heterozygous GLA p.Asp313Tyr variants was >55 years, and most had a mixed PD phenotype (3/4). All GLA p.Asp313Tyr positive subjects had a good response to dopaminergic medication, and all of them had developed fluctuations and dyskinesia at the time of clinical assessment. In terms of nonmotor symptoms, all patients complained of autonomic dysfunction, especially urinary problems (3/4 had urine urgency), and the majority of them also complained of sleep-related issues, fatigue (3/4), and mood disorders (3/4) such as depression and anxiety. Cognitive status was normal in 2 GLA p.Asp313Tyr positive subjects according to MoCA and PD-CRS, while 2 patients scored in the PD mild cognitive impairment range.
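The cognitive classification above follows the published cutoffs cited in the Methods (MoCA 25/26 for PD-MCI and 20/21 for PD-D; PD-CRS ≤81 for PD-MCI). The screening logic can be sketched as follows; the function name and the choice to let MoCA and PD-CRS flag PD-MCI independently are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch of the cited cognitive screening cutoffs:
# MoCA <= 20 suggests PD-D, MoCA <= 25 suggests PD-MCI (cutoffs 20/21 and 25/26);
# PD-CRS <= 81 suggests PD-MCI. Labels and combination rule are illustrative.

def classify_cognition(moca, pd_crs=None):
    if moca <= 20:
        return "PD-D"
    if moca <= 25 or (pd_crs is not None and pd_crs <= 81):
        return "PD-MCI"
    return "normal"

# The four p.(Asp313Tyr) carriers from Table 2:
print(classify_cognition(29, 99))   # normal
print(classify_cognition(27, 94))   # normal
print(classify_cognition(25, 62))   # PD-MCI
print(classify_cognition(21))       # PD-MCI
```

Applied to the scores in Table 2, this reproduces the reported split: two carriers classify as cognitively normal and two in the PD-MCI range.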
In terms of prodromal features, all four patients carrying the p.(Asp313Tyr) variant complained of increased sweating and three patients complained of smell loss; however, none of them reported symptoms of REM-sleep behavior disorder (RBD), and only one reported constipation.

Table 1: Characteristics of the PD population (N = 236).

| Characteristic | Mean ± SD (N, %) | Range |
| --- | --- | --- |
| Age (years) | 68.9 ± 8.9 | 33–88 |
| Gender: male | 130 (55.1%) | |
| Gender: female | 106 (44.9%) | |
| Age of onset (years) | 61.13 ± 10.35 | 27–85 |
| Disease duration (years) | 7.77 ± 5.35 | 0–30 |
| MDS-UPDRS part III score | 27.58 ± 13.35 | 3–80 |
| GLA mutation c.937G>T, p.(Asp313Tyr) (N = 126) | 4 (3.17%) | |
| alpha-Galactosidase level in men (N = 129; <15.3 μmol/L/h pathological) | 22.40 ± 7.57 | 5.3–57.2 |
| Pathological level of AGLA | 20 (15%); 12.25 ± 2.51 | 5.3–15.2 |
| Lyso-Gb3 level in men (>1.8 ng/mL pathological) | 1.23 ± 0.22 | 0.9–1.7 |

MDS-UPDRS: Movement Disorder Society-Unified Parkinson’s Disease Rating Scale.

All patients, except one who refused, underwent brain MRI examinations, which revealed only mild subcortical white matter T2 hyperintensities, likely of vascular etiology. In one of the subjects, a swallow-tail sign at the level of the substantia nigra was revealed. A DaT scan performed in all 4 subjects showed pathological findings. All patients carrying the p.(Asp313Tyr) variant underwent cardiological (including echocardiography), nephrological, and ophthalmological examinations, which were without any pathological findings that would support the diagnosis of FD. Detailed clinical characteristics of patients harboring p.(Asp313Tyr) mutations in our cohort are summarized in Table 2.

Table 2: Detailed clinical characteristics of PD patients harbouring the p.(Asp313Tyr) variant.
| Characteristic | Patient ES | Patient VA | Patient AS | Patient HR |
| --- | --- | --- | --- | --- |
| GLA variant | p.(Asp313Tyr) | p.(Asp313Tyr) | p.(Asp313Tyr) | p.(Asp313Tyr) |
| Origin | Slovak | Slovak | Slovak | Slovak |
| Family history | − | − | − | − |
| Age | 68 | 66 | 75 | 70 |
| Gender | F | F | F | F |
| Age of onset | 56 | 61 | 69 | 60 |
| PD subtype | PIGD | Mixed | Mixed | Mixed |
| Falls | + | − | − | − |
| Freezing | + | − | + | + |
| Tremor | − | + | − | + |
| MDS-UPDRS part III ON | 35 | 13 | 20 | 32 |
| HY stage | 4 | 1 | 2 | 2 |
| Motor fluctuations | Wearing off, PoD dyskinesia | PoD dyskinesia, wearing off | PoD dyskinesia | PoD dyskinesia, wearing off |
| NMS | Insomnia, fatigue, daytime sleepiness, frequent urination, nocturia, constipation, depression, apathy, anxious mood, vertigo-dizziness | Urine urgency, nocturia, fatigue, mild anxious mood, mild hyposmia | Daytime sleepiness, fatigue, urine urgency | Urine urgency, depression, anxious mood, chronic pain of lower limbs |
| Cognition | MoCA 29/30, PD-CRS 99/134 | MoCA 27/30, PD-CRS 94/134 | MoCA 25/30, PD-CRS 62/134 | MoCA 21/30 |
| Initial motor features | Balance problems, left-side bradykinesia and rigidity | Pain and tremor of the left hand | Tremor of the right hand | Tremor of the right hand |
| Prodromal features | Smell loss, sweating | Fatigue, constipation, sweating, urine urgency, anxiety | Smell loss, excessive sweating | Sweating, urine urgency |
| Current medication | LCIG 4.2 ml/hour | L/C 200 mg/D | L/C 875 mg/D; PPX 1.57 mg/D | L/C 500 mg/D; rasagiline 1 mg/D |

PD, Parkinson’s disease; MDS-UPDRS, Movement Disorder Society-Unified Parkinson’s Disease Rating Scale; HY, Hoehn and Yahr; NMS, nonmotor symptoms; F, female; PIGD, postural instability gait disorder; PoD, peak-of-dose; MoCA, Montreal Cognitive Assessment; PD-CRS, Parkinson’s Disease Cognitive Rating Scale; L/C, levodopa/carbidopa; PPX, pramipexole; LCIG, levodopa-carbidopa intestinal gel.

## 4. Discussion

Genetic screening of 127 PD patients in our cohort for GLA variants previously associated with FD resulted in the identification of 4 heterozygous GLA p.(Asp313Tyr) variants.
The GLA p.Asp313Tyr allele frequency of 1.6% found in our PD population was significantly higher than the allele frequency in the general population reported in the major genetic databases ExAC (0.3%), gnomAD (0.3%), TOPMed (0.3%), or 1000 Genomes Project (0.2%) [18]. To the best of our knowledge, this is the first study surveying the prevalence of FD among patients with PD. The exact etiology of PD is still not known. Several molecular pathways have been associated with the pathology of neurodegeneration, including mitochondrial dysfunction [19], oxidative and proteolytic stress, immune and inflammation responses [20], and the autophagy-lysosomal pathway (ALP) [21]. The role of lysosomal dysfunction in the pathogenesis of parkinsonism is highlighted by the link between PD and heterozygous GBA carriers [6]. Although there is considerable literature supporting a relationship between GBA and synucleinopathies [4, 22, 23], not much is known about a possible association between synucleinopathies and FD, which may also be common [24, 25]. Parkinsonism has been observed in patients with FD in several studies [6, 26–29], which suggests that there may be an increased risk of developing PD in individuals carrying GLA mutations. Recent findings of decreased AGLA activity, characteristic of FD, in some patients with parkinsonism suggest a link between the two diseases [2]. Due to these findings, we surveyed patients with PD to determine the prevalence of FD. In four patients with PD who were genetically tested for FD, the p.(Asp313Tyr) variant was identified, at an allele frequency higher than in the general population (1.6% vs. 0.2–0.3%). Interestingly, all of these patients were women carrying the same p.(Asp313Tyr) GLA variant.
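The enrichment claim above (carrier allele frequency ~1.6% vs. 0.2–0.3% in population databases) can be checked with a simple exact computation; the one-sided binomial test against an assumed gnomAD-like frequency of 0.003 is our illustrative choice, not the statistical method reported by the authors.

```python
# Hedged sketch: exact one-sided binomial test for enrichment of the
# p.(Asp313Tyr) allele: 4 variant alleles among 126 genotyped patients
# (2 * 126 = 252 alleles) vs. an assumed population allele frequency of 0.003.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_alleles = 2 * 126          # heterozygous carriers -> diploid allele count
observed = 4                 # p.(Asp313Tyr) alleles found
maf = observed / n_alleles   # ~0.016, matching the reported MAF
p_value = binom_sf(observed, n_alleles, 0.003)
print(round(maf, 3), round(p_value, 4))
```

Under these assumptions the probability of observing 4 or more variant alleles by chance is well below 0.05, consistent with the enrichment the authors report; a proper analysis would also account for database ancestry composition.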
Previous studies reporting on the clinical features of parkinsonism in the few described FD subjects [27–30] showed a typical age of onset, mostly an akinetic-rigid type of parkinsonism, and typically a good response to dopaminergic treatment. Prodromal and nonmotor symptoms were less often described in these reports. This is in line with our findings of a typical later onset of PD with a good response to therapy, where nonmotor symptoms were dominated by autonomic symptoms, as described previously. Neither the previous reports nor our own data, thus far, support the presence of clinical features of parkinsonism that would be suggestive of GLA carrier status. According to the existing literature, more than 900 variants in the GLA gene have been described [31], but the specific clinical impact of most mutations has not yet been well explored [32]. Although substantial literature exists, it is still under debate whether the p.(Asp313Tyr) variant represents a disease-causing mutation, a low-pathogenicity variant, or merely a polymorphism [33]. In 1993, this mutation was reported as causative in a male patient with a classical manifestation of FD [34]. However, 10 years later, a more detailed analysis revealed a second missense mutation, p.(Cys172Gly), in the same patient, which called the pathogenicity of the p.(Asp313Tyr) variant into question [35]. Later, the mutation was identified as a “pseudodeficient allele,” suggesting that mutant enzyme activity is pH dependent [35, 36]. Similarly, Niemann et al. and Oder et al. describe this variant as not clinically relevant [37] or as nonpathogenic for FD [32], and the p.(Asp313Tyr) variant has also been referred to as a polymorphism [38]. The possible pathological significance of the p.(Asp313Tyr) variant in the GLA gene was not confirmed in the Hasholt study examining members of two Danish families.
Their findings only support the assumption that p.(Asp313Tyr) is a rare variant without pathological significance [39]. These findings contradict other studies showing that the p.(Asp313Tyr) variant may lead to FD nervous system manifestations [33, 39–41]. Koulousios et al. have argued that the p.(Asp313Tyr) variant might be considered disease causing, finding that its prevalence among patients with FD is more than 35%, while the frequency in the general population is estimated to be less than 1% [41]. According to Koulousios et al., this variant is associated with a milder phenotype and a later onset of the disease [41]. Data in the Moulin study showed that the p.(Asp313Tyr) GLA variant may lead to symptoms and organ manifestations compatible with FD [40]. Also, Zompola et al. reported two newly diagnosed Fabry cases with nervous system manifestations related to the p.(Asp313Tyr) variant, which strengthened the presumption of the pathogenicity of this mutation [33]. In a recent review, Effraimidis et al. stated that the frequency of the p.(Asp313Tyr) variant is higher than in the general population only in neurologic disorders [42]. In fact, patients carrying GLA variants may be asymptomatic or show a spectrum of mild clinical manifestations, including cerebrovascular disease, such as the recently reported cerebral hemodynamic changes in asymptomatic FD subjects at risk for cerebrovascular events [43]. Preclinical detection of neurovascular involvement in FD might allow appropriate management and prevention of future cerebrovascular complications and disability [43]. Given these findings, patients carrying the p.(Asp313Tyr) variant should be monitored annually, as enzyme replacement therapy could be initiated if organ manifestations occur [33]. In summary, this study showed a higher incidence of the p.
(Asp313Tyr) variant in our PD cohort than in the general population, further emphasizing the importance of studying the autophagy-lysosomal pathway in PD. None of our four patients showed cardiac hypertrophy, renal dysfunction, acroparesthesias, or corneal opacities. MRI did not reveal morphological or functional abnormalities specific for FD. Overall, the results of this study suggest that the p.(Asp313Tyr) variant leads, at most, to a nonpathological or very mild variant of FD. On the other hand, this mutation may not always be reflected in a fully expressed FD phenotype and may not be accompanied by low levels of AGLA, as the residual activity of this enzyme is preserved [41]. Also, in the context of PD and FD, this variant of unknown significance may help clinicians and researchers question the causative role of genetic variants within daily clinical and diagnostic settings [44]. Accordingly, its pathogenic, causative role in the context of PD needs to be further elucidated, and these findings should be interpreted with caution. Follow-up of the four female patients as asymptomatic carriers of p.(Asp313Tyr), and especially of their sons, is needed. In addition, the Fabry phenotype in women can range from asymptomatic to severely affected, whereas in men the phenotype is usually more severe [2]. The diagnosis of this disease is challenging because the phenotypic manifestation of FD can range from typical to atypical presentations, and a late diagnosis can delay the necessary treatment [32]. Specific treatment, either an orally administered pharmacological chaperone of α-galactosidase A or enzyme replacement therapy (ERT) with recombinant human alpha-galactosidase, is available, with better results if therapy is initiated before organ damage occurs [2]. With the perspective of potential FD therapy acting as a disease modifier also for PD, screening for prodromal symptoms of PD is appropriate.

### 4.1.
Strengths and Limitations

One of the limitations of this study is the lack of an ethnically matched healthy control group, which does not allow a direct comparison of prevalence between Slovak PD subjects and healthy controls, although the allele frequency of the GLA p.Asp313Tyr variant in the major genetic databases is rather constant at 0.2–0.3%. Also, as part of the clinical laboratory protocol, enzymatic AGLA levels were not analyzed in females, as previous studies have shown an inconsistent relationship between normal AGLA levels and GLA mutation status in FD subjects. On the other hand, normal AGLA levels in men predict a normal GLA mutation status, and thus genetic analyses were performed in men only in the case of abnormal AGLA levels. A further limitation is that our cohort includes a wide range of ages of disease onset; taking into account that most young-onset PD is genetic, alternative etiologies of PD besides FD may explain the final prevalence.

## 5. Conclusion

While the prevalence of the GLA p.Asp313Tyr variant seems to be higher in PD patients than in the general population, its pathogenic, causative role in the context of PD needs to be further elucidated, and these findings should be interpreted with caution. The clinical significance of the abovementioned variant is still under debate. Patients carrying the p.(Asp313Tyr) variant should be monitored annually, as enzyme replacement therapy could be used if organ manifestations occur.

---

*Source: 1014950-2022-01-24.xml*
2022
# Molecular Imaging, Pharmacokinetics, and Dosimetry of 111In-AMBA in Human Prostate Tumor-Bearing Mice

**Authors:** Chung-Li Ho; I-Hsiang Liu; Yu-Hsien Wu; Liang-Cheng Chen; Chun-Lin Chen; Wan-Chi Lee; Cheng-Hui Chuang; Te-Wei Lee; Wuu-Jyh Lin; Lie-Hang Shen; Chih-Hsien Chang

**Journal:** Journal of Biomedicine and Biotechnology (2011)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2011/101497

---

## Abstract

Molecular imaging, with its promise of personalized medicine, can provide patient-specific information noninvasively, thus enabling treatment to be tailored to the specific biological attributes of both the disease and the patient. This study investigated the in vitro characterization of DO3A-CH2CO-G-4-aminobenzoyl-Q-W-A-V-G-H-L-M-NH2 (AMBA), together with micro-SPECT/CT imaging and the biological activities of 111In-AMBA in PC-3 prostate tumor-bearing SCID mice. The tumor uptake of 111In-AMBA peaked at 3.87 ± 0.65% ID/g at 8 h. Micro-SPECT/CT imaging studies showed that the uptake of 111In-AMBA was clearly visualized between 8 and 48 h postinjection. The distribution half-life (t1/2α) and the elimination half-life (t1/2β) of 111In-AMBA in mice were 1.53 h and 30.7 h, respectively. The Cmax and AUC of 111In-AMBA were 7.57% ID/g and 66.39 h·% ID/g, respectively. The effective dose was estimated at 0.11 mSv/MBq. We demonstrated good uptake of 111In-AMBA in GRPR-overexpressing PC-3 tumor-bearing SCID mice. 111In-AMBA is a safe, potential molecular image-guided diagnostic agent for human GRPR-positive tumors, with applications ranging from simple and straightforward biodistribution studies to improving the efficacy of combined-modality anticancer therapy.

---

## Body

## 1. Introduction

Prostate cancer is estimated to rank first in number of cancer cases and second in number of deaths due to cancer among men in the Western world [1].
Gastrin-releasing peptides (GRPs), including bombesin-like peptides (BLPs), are involved in the regulation of a large number of biological processes in the gut and central nervous system (CNS) [2]. They mediate their action on cells by binding to members of a superfamily of G protein-coupled receptors [3]. There are four known subtypes of BN-related peptide receptors, namely, the gastrin-releasing peptide receptor (GRPR, BB2, BRS-2), the neuromedin B receptor (NMBR, BB1, BRS-1), the orphan receptor (BRS-3), and the amphibian receptor (BB4-R) [4]. Except for BB4-R, all of these receptors are widely distributed, especially in the gastrointestinal (GI) tract and CNS. The receptors have a wide range of effects in both normal physiology and pathophysiological conditions [5]. GRPRs are normally expressed in nonneuroendocrine tissues of the pancreas and breast, and in neuroendocrine cells of the brain, GI tract, lung, and prostate, but are not normally expressed by epithelial cells in the colon, lung, or prostate [6, 7]. Molecular imaging enables the visualization of cellular function and the follow-up of molecular processes in living organisms without perturbing them [8]. Radionuclide molecular imaging is the most sensitive such technique and can provide target-specific information. The radiotracer could also be used for radionuclide therapy. Thus, the development of a personalized theranostic (image and treat) agent would allow greater accuracy in the selection of patients who may respond to treatment and in assessing the outcome of the therapeutic response [9]. Gastrin-releasing peptide receptors (GRPRs) are overexpressed in several primary human tumors and metastases [5]. Markwalder and Reubi reported that GRPRs are expressed at high density in invasive prostate carcinomas and in prostatic intraepithelial neoplasms, whereas normal prostate tissue and hyperplastic prostate tissue were predominantly GRPR negative [10].
These findings suggest that GRPR may be used as a molecular basis for diagnosing and staging prostate cancer, and further for imaging-guided personalized medicine using radiolabeled bombesin analogues. Previous studies have evaluated 111In-radiolabeled BN analogues, which bind rapidly to GRP receptor-positive tumor cells, including PC-3, CA20948, and AR42J, using gamma camera imaging after administration [11–14]. AMBA (DO3A-CH2CO-G-(4-aminobenzoyl)-QWAVGHLM-NH2) (Figure 1), a BBN-related peptide agonist, has a DO3A structure that can chelate tripositive metal isotopes, such as 68Ga, 90Y, 111In, and 177Lu. Thus, it can be formulated into many kinds of radiolabeled probes for various purposes [15]. Indium-111 emits γ-photons of two energies (172 and 245 keV) as well as Auger and internal conversion electrons. 111In-AMBA was initially used for diagnostic purposes but retains potential for radiotherapy. Auger electrons, with a maximum energy of <30 keV, are high linear energy transfer (LET) radiation with subcellular path lengths (2–500 nm) in tissue [16]. By imaging the presence or absence of GRPR, 111In-AMBA could be used for patient selection for subsequent radiotherapy (177Lu-AMBA), chemotherapy (BLP antagonists), or therapeutic response monitoring as imaging-guided personalized medicine. Although 111In-AMBA has been evaluated as an imaging agent [17–20], the pharmacokinetics and dosimetry of the agent have not yet been reported. In this study, 111In-AMBA, which retains only the last eight amino acids (Q-W-A-V-G-H-L-M-NH2) of native BN, was designed as an image-guided diagnostic agent for human GRPR-positive tumors. The pharmacokinetics, biodistribution, dosimetry, and micro-SPECT/CT imaging of 111In-AMBA were evaluated in human androgen-independent PC-3 prostate tumor-bearing SCID mice.

Figure 1: Representative structure of AMBA.

## 2. Materials and Methods

### 2.1.
Chemicals

Protected Nα-Fmoc-amino acid derivatives were purchased from Calbiochem-Novabiochem (Läufelfingen, Switzerland); Fmoc-amide resin and coupling reagent were purchased from Applied Biosystems Inc. (Foster City, CA, USA); and DOTA-tetra(tBu) ester was purchased from Macrocyclics (Dallas, TX, USA). Fmoc-4-abz-OH was obtained from Bachem (Hauptstrasse, Switzerland). Bombesin was purchased from Fluka (Buchs, Switzerland).

### 2.2. Synthesis of AMBA

AMBA was synthesized by solid-phase peptide synthesis (SPPS) on an Applied Biosystems Model 433A fully automated peptide synthesizer (Applied Biosystems, Foster City, CA, USA) employing the Fmoc (9-fluorenylmethoxycarbonyl) strategy. Carboxyl groups on Fmoc-protected amino acids were activated by 2-(1H-benzotriazol-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate (HBTU), forming a peptide bond with the N-terminal amino group of the growing peptide, which was anchored via its C-terminus to the resin; this provided for stepwise amino acid addition. Rink Amide resin (0.25 mmol) and Fmoc-protected amino acids (1.0 mmol) with appropriate side-chain protections, together with DOTA-tetra(tBu) ester, were used for the SPPS of the BBN conjugates. Side-chain protecting groups used in the synthesis were Trt for Gln and His, and Boc for Trp. The protected peptide-resin was cleaved and deprotected with a mixture of 50% trifluoroacetic acid (TFA), 45% chloroform, 3.75% anisole, and 1.25% 1,2-ethanedithiol (EDT) for 4 h at room temperature (RT). The crude peptide was isolated by precipitation with cold diethyl ether. After centrifugation, the collected precipitate was dried under vacuum. The crude peptide was purified by reverse-phase high-performance liquid chromatography (HPLC) on an XTerra Prep MS C18 column (5 μm, 18 × 50 mm; Waters Corp., MA, USA) with an acetonitrile/water gradient consisting of solvent A (0.1% TFA in H2O) and solvent B (0.1% TFA in acetonitrile), with a 14.8% yield; flow: 6 mL/min; gradient: 20%–40% B over 20 min.
The molecular weight was determined with a MALDI-TOF mass spectrometer (Bruker Daltonics Inc., Germany). The m/z determined for AMBA was 1,502.6 [M+H].

### 2.3. Radiolabeling of 111In-AMBA

AMBA was radiolabeled with 111In as previously described by Zhang et al. [21]. Briefly, AMBA was labeled with 111In (111InCl3, Institute of Nuclear Energy Research (INER), Taoyuan, Taiwan; 16,430 MBq/mL in 0.05 N HCl, pH 1.5–1.9) by reacting 6.66 × 10⁻⁴ μmol (1 μg) of peptide in 95 μL of 0.1 M NH4OAc (pH 5.5) with 64.75 MBq of 111InCl3 in 5 μL of 0.04 N HCl for 10 min at 95°C. The specific activity of 111In-AMBA was 9.72 × 10⁴ MBq/μmol. The radiolabeling efficiency was analyzed using instant thin-layer chromatography (ITLC-SG, Pall Corporation, New York, USA) with 0.1 M Na-citrate (pH 5.0) as the solvent (indium citrate and 111InCl3: Rf = 0.9–1.0; peptide-bound 111In: Rf = 0–0.1) [22]. Radio high-performance liquid chromatography (radio-HPLC) analysis was performed using a Waters 2690 chromatography system with a 2996 photodiode array detector (PDA), a Bioscan radiodetector (Washington, DC, USA), and a Gilson FC 203B fraction collector (Middleton, WI, USA). 111In-AMBA was purified on an Agilent (Santa Clara, CA, USA) Zorbax Bonus-RP HPLC column (4.6 × 250 mm, 5 μm) eluted with a gradient from 10% B to 40% B in 40 min. The flow rate was 1 mL/min at RT, and the retention time for 111In-AMBA was 22.5 min. After purification by HPLC, the acetonitrile was exchanged for 100% ethanol with a Waters Sep-Pak Light C18 cartridge (Milford, MA, USA). Normal saline was added after evaporation, and the pH was in the range 7–7.5.

### 2.4. Receptor Cold Competition Assay

The cold competition binding assay was performed using human bombesin 2 receptor expressed in HEK-293 cells as the source of GRP receptors (PerkinElmer, Boston, MA, USA). Assays were performed using FC96 plates and the Multiscreen system (Millipore, Bedford, MA).
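As a consistency check on the labeling figures above, the specific activity follows directly from the activity and the molar amount of peptide. A minimal stdlib-Python sketch using the values reported in the text:

```python
# Specific activity = labeled activity / moles of peptide.
activity_mbq = 64.75     # 111InCl3 activity used (MBq)
peptide_umol = 6.66e-4   # 1 µg AMBA at MW ≈ 1,502 g/mol is ≈ 6.66e-4 µmol
specific_activity = activity_mbq / peptide_umol  # MBq/µmol
print(f"specific activity ≈ {specific_activity:.3g} MBq/µmol")
```

This reproduces the reported 9.72 × 10⁴ MBq/μmol.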
Binding of 125I-Tyr4-bombesin (PerkinElmer, Boston, MA, USA) to the human bombesin 2 receptor (0.16 μg per well) was determined in the presence of increasing concentrations (0.001 nmol/L to 1000 nmol/L) of unlabeled AMBA in a buffer solution (20 mmol/L HEPES, pH 7.4, 3 mmol/L MgCl2, 1 mmol/L EDTA, and 0.3% BSA) with a total volume of 250 μL per well. After incubation for 60 min at RT, membranes were filtered and washed with ice-cold Tris-HCl buffer (50 mmol/L). The filters containing membrane-bound radioactivity were counted using a Cobra II gamma counter (Packard, Meriden, CT). The half-maximal inhibitory concentration (IC50) was calculated using a four-parameter curve-fitting routine in the KELL software for Windows, version 6 (Biosoft, Ferguson, MO, USA) [13].

### 2.5. Cell Culture and Animal Model

Human androgen-independent prostate cancer PC-3 cells (Bioresource Collection and Research Center, Taiwan) were cultured in Ham's F-12K medium supplemented with 10% heat-inactivated fetal bovine serum (all from GIBCO, Grand Island, NY, USA) at 37°C in 5% CO2. For animal inoculation, an aliquot was thawed, grown, and used within 10 passages. Five-week-old male ICR SCID (severe combined immunodeficient) outbred mice were obtained from the National Animal Center of Taiwan (Taipei, Taiwan, ROC) and maintained on a standard diet (Lab Diet; PMI Feeds, St. Louis, MO, USA) at RT, with free access to tap water, in the animal house of the INER. Thirty-three SCID mice were subcutaneously injected with 2 × 10⁶ PC-3 cells in the right hind flank. The animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of the INER.

### 2.6. Biodistribution Studies

At 4 weeks after PC-3 cell inoculation, the weight of the developed tumors ranged from 0.05 to 0.2 g. Twenty-five PC-3 xenograft SCID mice (n = 5 for each group) were injected with 0.37 MBq (0.1 μg) of 111In-AMBA in 100 μL of normal saline via the tail vein.
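The four-parameter curve fit referred to above has the standard four-parameter logistic (4PL) form. A minimal stdlib-Python sketch of the model itself (the top/bottom/hill parameters here are illustrative, not fitted data; only the IC50 of 0.82 nmol/L is taken from the Results):

```python
def four_pl(conc: float, bottom: float, top: float, ic50: float, hill: float) -> float:
    """Four-parameter logistic (4PL) competition-binding model.
    Returns the predicted signal at competitor concentration `conc`
    (same units as `ic50`)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# At conc == IC50, the signal is exactly halfway between top and bottom.
signal_at_ic50 = four_pl(0.82, bottom=0.0, top=100.0, ic50=0.82, hill=1.0)
print(signal_at_ic50)  # 50.0
```

In practice the four parameters are fitted to the measured binding curve (here by the KELL software), and the IC50 is read off as the fitted midpoint.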
The mice were sacrificed by CO2 asphyxiation, and tissues and organs were excised at 1, 4, 8, 24, and 48 h postinjection (p.i.). Subsequently, the tissues and organs were weighed, their radioactivity was counted in a Packard Cobra II gamma counter (Perkin-Elmer, Waltham, MA, USA), and the percentage of injected dose per gram (% ID/g) for each organ or tissue was calculated [23].

### 2.7. Pharmacokinetic Studies

Six PC-3 xenograft SCID mice were injected with 0.37 MBq (0.1 μg) of 111In-AMBA in 100 μL of normal saline via the tail vein. At 0.25, 1, 4, 16, 24, 48, 72, 96, and 168 h p.i., 20 μL of blood was collected by heart puncture; the blood was then weighed, its radioactivity was counted in the Cobra II gamma counter, and the percentage of injected dose per gram (% ID/g) was calculated. The data were fitted to a two-compartment model, and the pharmacokinetic parameters were derived with the WinNonlin 5.0 software (Pharsight Corporation, Mountain View, CA, USA).

### 2.8. Micro-SPECT/CT Imaging

Two male SCID mice bearing human PC-3 tumors of approximately 0.1 g were injected i.v. with 12.2 MBq (4 μg) of 111In-AMBA after purification by radio-HPLC. The SPECT and CT images were acquired on a micro-SPECT/CT scanner system (XSPECT; Gamma Medica-Ideas Inc., Northridge, CA, USA). SPECT imaging was performed using medium-energy, parallel-hole collimators at 1, 4, 8, 24, and 48 h. The source and detector were mounted on a circular gantry, allowing them to rotate 360 degrees around the subject (mouse) positioned on a stationary bed. The field of view (FOV) was 12.5 cm. Image acquisition comprised 64 projections at 90 seconds per projection. The energy windows were set at 173 keV ± 10% and 247 keV ± 10%. SPECT imaging was followed by CT imaging (X-ray source: 50 kV, 0.4 mA; 256 projections) with the animal in exactly the same position.
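The % ID/g figures used throughout can be computed from raw gamma counts as sketched below. A stdlib-Python sketch; the count values are hypothetical, and the decay correction to injection time using the 67.3 h physical half-life of 111In is an assumption of common practice, not a step the paper states explicitly:

```python
import math

IN111_HALF_LIFE_H = 67.3  # physical half-life of 111In (h); assumed here

def percent_id_per_g(organ_counts: float, injected_counts: float,
                     organ_weight_g: float, t_hours: float) -> float:
    """% ID/g: organ activity as a fraction of the injected dose, per gram,
    after correcting the measurement back to injection time for 111In decay."""
    decay_corrected = organ_counts * math.exp(math.log(2) * t_hours / IN111_HALF_LIFE_H)
    return 100.0 * decay_corrected / injected_counts / organ_weight_g

# Hypothetical numbers: 2.5e4 cpm in a 0.15 g tumor at 8 h, 1e6 cpm injected.
uptake = percent_id_per_g(2.5e4, 1e6, 0.15, 8.0)
print(f"{uptake:.2f} % ID/g")
```

The same formula, without the weight division, gives the % ID per whole organ used in the dosimetry extrapolation.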
A three-dimensional (3D) Feldkamp cone-beam algorithm was used for CT image reconstruction, and a two-dimensional (2D) filtered back-projection algorithm was used for SPECT image reconstruction. All image processing software, including SPECT/CT coregistration, was provided by Gamma Medica-Ideas Inc. (Northridge, CA, USA). After coregistration, both the fused SPECT and CT images had 256 × 256 × 256 voxels with an isotropic 0.3-mm voxel size.

### 2.9. Absorbed Radiation Dose Calculations

The relative organ mass scaling method was employed to extrapolate the animal data to humans [24, 25]. The mean absorbed dose in various tissues was calculated from the radionuclide concentration in the tissues/organs of interest, assuming a homogeneous distribution of the radionuclide within any source region [26]. The calculated mean percentage of injected activity per gram (% IA/g) for the organs in mice was extrapolated to the uptake in the organs of a 70-kg adult using the following formula [24]:

$$\left[\left(\frac{\%\,\mathrm{IA}}{g_{\mathrm{organ}}}\right)_{\mathrm{animal}} \times \left(\mathrm{kg}_{\mathrm{TB\ weight}}\right)_{\mathrm{animal}}\right] \times \left(\frac{g_{\mathrm{organ}}}{\mathrm{kg}_{\mathrm{TB\ weight}}}\right)_{\mathrm{human}} = \left(\%\,\mathrm{IA}_{\mathrm{organ}}\right)_{\mathrm{human}}. \tag{1}$$

The extrapolated values (% IA) in the human organs at 1, 4, 8, 24, and 48 h were fitted with exponential biokinetic models and integrated to obtain the number of disintegrations in the source organs. This information was entered into the OLINDA/EXM computer program. The integrals (MBq-s) for 15 organs, including heart contents (blood), brain, muscle, bone, heart, lung, spleen, pancreas, kidneys, liver, and the remainder of the body, were evaluated and used for the dosimetry evaluation. The code also displays the contributions of different source organs to the total dose of the target organs. For the estimation of the tumor absorbed dose, it was assumed that once the radiopharmaceutical is inside the tumor, there is no biological elimination.
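Formula (1) can be implemented directly. A small sketch in stdlib Python; the mouse body weight and the human organ mass fraction below are illustrative placeholders, not values from the paper:

```python
def extrapolate_percent_ia(pct_ia_per_g_animal: float,
                           animal_tb_weight_kg: float,
                           human_organ_g_per_kg_tb: float) -> float:
    """Relative organ mass scaling, formula (1):
    (%IA/g organ)_animal * (kg TB weight)_animal
        * (g organ / kg TB weight)_human = (%IA organ)_human."""
    return pct_ia_per_g_animal * animal_tb_weight_kg * human_organ_g_per_kg_tb

# Illustrative: pancreas at 54.9 %IA/g (the 8 h value in Table 1), an assumed
# 0.025 kg mouse, and a hypothetical human pancreas fraction of 100 g / 70 kg.
human_pct_ia = extrapolate_percent_ia(54.9, 0.025, 100 / 70)
print(f"extrapolated human pancreas uptake ≈ {human_pct_ia:.2f} %IA")
```

The resulting per-organ % IA time series is what gets fitted and integrated before entry into OLINDA/EXM.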
## 3. Results

### 3.1. Radiolabeling and In Vitro Receptor Binding Assay

The radiolabeling efficiency of 111In-AMBA was 95.43 ± 1.37% (n = 11).
The in vitro competitive binding assays were performed on the human bombesin 2 receptor using 125I-Tyr4-Bombesin as the GRPR-specific radiotracer, with unlabeled AMBA and native BN as competitors. The IC50 values of AMBA and native BN at the human bombesin 2 receptor (Figure 2) were 0.82±0.41 nmol/L and 0.13±0.10 nmol/L, respectively, both in the low nanomolar range, demonstrating high specificity and affinity for the GRP receptor. The Ki values of AMBA and native BN were 0.65±0.32 nmol/L and 0.10±0.08 nmol/L, respectively.Figure 2 Competitive binding assay of AMBA versus 125I-Tyr4-Bombesin with human bombesin 2 receptors. ### 3.2. Biodistribution 111In-AMBA accumulated significantly in tumor, adrenal, pancreas, small intestine, and large intestine (Table 1). Fast blood clearance and fast excretion through the kidneys were observed: high levels of radioactivity were found in the kidneys before 24 h, indicating that the radioactivity was excreted rapidly in the urine within 24 h. Tumor uptake peaked at 3.87±0.65% ID/g at 8 h and then declined rapidly. The highest tumor/muscle ratio (Tu/Mu) of 111In-AMBA was 11.79 at 8 h after injection, decreasing to 4.82 and 5.16 at 24 and 48 h after administration, respectively. Other GRPR-positive organs (small intestine and large intestine) also showed specific binding of 111In-AMBA (Table 1). The tumor/muscle ratio decreased conspicuously at 4 and 24 h postadministration.Table 1 Biodistribution of 111In-AMBA after intravenous injection in PC-3 prostate tumor-bearing SCID mice.

| Organ | 1 h | 4 h | 8 h | 24 h | 48 h |
|---|---|---|---|---|---|
| Blood | 0.95±0.09 | 0.50±0.06 | 0.42±0.02 | 0.19±0.03 | 0.09±0.02 |
| Brain | 0.06±0.01 | 0.05±0.01 | 0.05±0.00 | 0.02±0.00 | 0.03±0.00 |
| Skin | 0.99±0.22 | 0.60±0.16 | 0.55±0.02 | 0.36±0.02 | 0.28±0.02 |
| Muscle | 0.57±0.19 | 0.33±0.14 | 0.33±0.02 | 0.21±0.04 | 0.14±0.02 |
| Bone | 1.02±0.18 | 0.92±0.21 | 1.57±0.18 | 0.90±0.13 | 0.60±0.07 |
| Heart | 0.62±0.09 | 0.48±0.04 | 0.56±0.08 | 0.41±0.05 | 0.32±0.04 |
| Lung | 1.78±0.23 | 1.88±0.46 | 1.70±0.65 | 0.60±0.17 | 0.26±0.03 |
| Adrenals | 5.79±1.21 | 7.08±1.22 | 17.8±4.65 | 7.41±1.99 | 5.20±1.11 |
| Spleen | 2.80±1.49 | 6.90±1.87 | 8.90±2.34 | 4.41±0.58 | 2.19±0.51 |
| Pancreas | 6.14±0.99 | 12.9±2.44 | 54.9±2.51 | 15.9±1.94 | 9.80±2.21 |
| Kidney | 3.56±0.15 | 4.23±0.28 | 3.92±0.91 | 4.10±0.72 | 2.74±0.30 |
| Liver | 7.26±0.53 | 8.22±1.05 | 7.04±0.24 | 8.64±1.31 | 6.59±1.83 |
| Bladder | 7.63±2.94 | 1.75±0.75 | 1.07±0.12 | 0.64±0.06 | 0.46±0.10 |
| Stomach | 0.81±0.09 | 0.80±0.07 | 3.97±1.15 | 0.97±0.29 | 0.47±0.05 |
| SI | 1.67±0.22 | 1.56±0.17 | 4.42±0.61 | 1.48±0.27 | 0.74±0.10 |
| LI | 1.77±0.25 | 3.42±1.10 | 7.04±1.48 | 2.39±0.38 | 0.99±0.16 |
| Tumor (PC-3) | 2.24±0.66 | 1.86±0.71 | 3.87±0.65 | 1.02±0.09 | 0.75±0.08 |
| Tumor/muscle | 3.89 | 5.69 | 11.79 | 4.82 | 5.16 |

Values are expressed as % ID/g, mean ± SEM (n=4-5 at each time point). SI: small intestine; LI: large intestine.

### 3.3. Pharmacokinetic Studies

The radioactivity declined to below the detection limit after 24 h. The pharmacokinetic parameters derived from a two-compartment model [27] indicated that the distribution half-life (t1/2α) and the elimination half-life (t1/2β) of 111In-AMBA were 1.53±0.69 h and 30.73±8.56 h, respectively (Table 2).

Table 2 Pharmacokinetic parameters of plasma in PC-3 tumor-bearing mice after intravenous injection of 10 μCi/mouse 111In-AMBA (mean ± SEM, n=5).

| Parameter | Unit | Value |
|---|---|---|
| A | % ID/g | 6.15±0.69 |
| B | % ID/g | 1.43±0.61 |
| α | 1/h | 1.19±0.85 |
| β | 1/h | 0.03±0.01 |
| AUC0–168h | h × (% ID/g) | 66.4±17.3 |
| t1/2α | h | 1.53±0.69 |
| t1/2β | h | 30.7±8.56 |
| Cmax | % ID/g | 7.37±0.64 |

A, B, α, β: macro rate constants; t1/2α, t1/2β: distribution and elimination half-lives; AUC0–168h: area under the 111In-AMBA concentration versus time curve; Cmax: maximum concentration in plasma.

### 3.4.
Micro-SPECT/CT Imaging Micro-SPECT/CT imaging of 111In-AMBA indicated significant uptake in the tumors at 8 and 24 h after intravenous injection (Figure 3). The longitudinal micro-SPECT/CT imaging showed high accumulation of 111In-AMBA in the pancreas and gastrointestinal tract at 4, 8, 24, and 48 h after intravenous injection.Figure 3 Micro-SPECT/CT images of 111In-AMBA targeting PC-3 tumor-xenograft SCID mice. 12.2 MBq/4 μg of 111In-AMBA was administered to each mouse by intravenous injection. The images were acquired at 1, 4, 8, 24, and 48 h after injection. The energy windows were set at 173 keV ± 10% and 247 keV ± 10%; the image size was set at 80 × 80 pixels. The color map shows the SPECT pixel values from 0 to the maximum, expressed with an arbitrary value of 100. ### 3.5. Radiation Absorbed Dose Calculation The radiation-absorbed dose projections for the administration of 111In-AMBA to humans, determined from the residence times in mice, are shown in Table 3. The highest absorbed doses appear in the lower large intestine (0.12 mSv/MBq), upper large intestine (0.13 mSv/MBq), kidneys (0.12 mSv/MBq), osteogenic cells (0.22 mSv/MBq), and pancreas (0.25 mSv/MBq). The effective dose appears to be approximately 0.11 mSv/MBq. The red marrow absorbed dose is estimated to be 0.09 mSv/MBq. For a 2-g tumor, the unit-density sphere model was used and the estimated absorbed dose was 8.09 mGy/MBq.Table 3 Radiation dose estimates for 111In-AMBA in humans.

| Organ | Estimated dose (mSv/MBq)* |
|---|---|
| Adrenals | 1.5E-01 |
| Brain | 3.1E-02 |
| Breasts | 7.7E-02 |
| Gallbladder Wall | 1.5E-01 |
| LLI Wall | 1.2E-01 |
| Small Intestine | 1.3E-01 |
| Stomach Wall | 1.1E-01 |
| ULI Wall | 1.3E-01 |
| Heart Wall | 7.2E-02 |
| Kidneys | 1.2E-01 |
| Liver | 2.0E-01 |
| Lungs | 7.4E-02 |
| Muscle | 7.0E-02 |
| Ovaries | 1.2E-01 |
| Pancreas | 2.5E-01 |
| Red Marrow | 8.8E-02 |
| Osteogenic Cells | 2.2E-01 |
| Skin | 5.8E-02 |
| Spleen | 1.2E-01 |
| Testes | 5.9E-02 |
| Thymus | 9.0E-02 |
| Thyroid | 9.2E-02 |
| Urinary Bladder Wall | 1.1E-01 |
| Uterus | 1.3E-01 |
| Total Body | 9.2E-02 |
| Effective Dose | 1.1E-01 |

*Radiation-absorbed dose projections in humans were determined from residence times for 111In-AMBA in SCID mice and were calculated with the OLINDA/EXM version 1.0 computer program.
## 4. Discussion Growth factor receptors are involved in all steps of tumor progression, enhancing angiogenesis, local invasion, and distant metastasis. The overexpression of growth factor receptors on the cell surface of malignant cells might be associated with more aggressive behavior and a poor prognosis. For these reasons, tumor-related growth factor receptors can be taken as potential targets for therapeutic intervention. Over the last two decades, GRP and other BLPs have been shown to act as growth factors in many types of cancer. GRPR antagonists have been developed as anticancer candidate compounds, exhibiting impressive antitumoral activity both in vitro and in vivo in various murine and human tumors [28, 29]. Clinical trials with GRPR antagonists in cancer patients are in their initial phase, as anticipated by animal toxicology studies and preliminary evaluation in humans [29]. Presently, efforts at identifying the most suitable candidates for clinical trials and at improving drug formulation for human use are considered priorities. It may also be anticipated that GRPRs may be exploited as potential carriers for cytotoxins, immunotoxins, or radioactive compounds.
Thus, the visualization of these receptors through molecular image-guided diagnostic agents may become an interesting tool for tumor detection and staging in personalized medicine.The present study showed the highest accumulation of 111In-AMBA in the pancreas in mice (Table 1). However, interspecies differences in the structure and pharmacology of human and animal GRP receptors have been reported [30]. Because the pancreas is the primary normal tissue in these animals that expresses a high density of bloodstream-accessible GRPRs, the accumulation of 111In in the pancreas is a direct reflection of the efficacy of radiolabeled BN analogs for in vivo targeting of cell-surface-expressed GRPRs [31]. Retention of 111In-AMBA in the pancreas may reflect its character as a radioagonist, with effective internalization and cellular retention. Waser et al. reported that, in contrast to the strongly 177Lu-AMBA-labeled GRPR-positive mouse pancreas, the human pancreas did not bind 177Lu-AMBA unless chronic pancreatitis was diagnosed [32].The majority of research efforts into the design of bombesin-based radiopharmaceuticals have been carried out using GRPR agonists. The main reason for using agonists is that they undergo receptor-mediated endocytosis, enabling residualization of the attached radiometal within the targeted cell [33]. Micro-SPECT/CT imaging is a noninvasive imaging modality that can longitudinally monitor the behavior of GRPR expression in the same animal across different time points before and during therapy.
In the present study, tumor targeting and localization of 111In-AMBA were clearly imaged with micro-SPECT/CT from 1 to 48 h after administration, suggesting that micro-SPECT/CT imaging with 111In-AMBA is a good tool for studying tumor targeting, distribution, and real-time therapeutic response in vivo.The effective dose projected for the administration of 111In-AMBA to humans (0.11 mSv/MBq) (Table 3) is comparable to that for 111In-pentetreotide (0.12 mSv/MBq) [34], the only 111In-labeled peptide receptor-targeted radiotherapeutic agent in clinical use [35, 36]. The intestines, osteogenic cells, kidneys, and pancreas appear to receive absorbed doses of around 0.2 mSv/MBq of 111In-AMBA. At a maximum planned administration of 111 MBq for diagnostic imaging, the total radiation-absorbed dose to these organs would be about 12 mSv. The use of animal data to estimate human doses is a necessary first step, but such studies give only an estimate of the radiation doses to be expected in human subjects. More accurate human dosimetry must be established with imaging studies involving human volunteers or patients. The dosimetry data presented here will be valuable in the dose planning of such studies and for application of 111In-AMBA in Investigational New Drug (IND) research.Clinically, primary prostate cancer and its metastases may be heterogeneous, demonstrating a spectrum of phenotypes from androgen-sensitive to androgen-insensitive.177Lu-AMBA, a conjugated bombesin compound for imaging and systemic radiotherapy, is now in phase I clinical trials [15]. 177Lu-AMBA has been evaluated in early-stage prostate cancer represented by LNCaP [6], an androgen-dependent, prostate-specific antigen-secreting, hormone-sensitive cell line derived from a lymph node metastasis, and in the PC-3 cell line, which is derived from a bone metastasis, is androgen-independent, and is thought to represent late-stage hormone-refractory prostate cancer (HRPC) [37].
177Lu-AMBA may prove clinically efficacious as a single-agent radiotherapeutic for heterogeneous metastatic prostate cancer and a valuable adjunct to traditional chemotherapy. Thus, the visualization of GRPRs through 111In-AMBA as an image-guided agent may support the use of the radiotherapeutic 177Lu-AMBA and other traditional chemotherapy in personalized medicine.Targeted therapeutic and imaging agents are becoming more prevalent and are used to treat increasingly smaller populations of patients. This has led to dramatic increases in the costs of clinical trials. Biomarkers have great potential to reduce the number of patients needed to test novel targeted agents by predicting or identifying nonresponse early on, thus enriching the clinical trial population with patients more likely to respond. GRPRs are expressed on prostate tumor cells, making them a potential biomarker for cancer. Imaging with 111In-AMBA indicated the stage of prostate cancer, informing the choice of therapeutic approach and the monitoring of therapeutic efficacy. The expression of GRPR will vary from patient to patient with disease stage and individual differences. If patients could be prescreened with 111In-AMBA to identify those with higher tumor expression of GRPR, it would be possible to select cases for BLP-specific treatment, while cases with low tumor expression of GRPR could consider other treatment options. Consequently, the proposed approach enables optimized and individualized treatment protocols and can advance the development of image-guided personalized medicine.By visualizing how well drug-targeting systems deliver pharmacologically active agents to the pathological site, 111In-AMBA furthermore facilitates “personalized medicine” and patient individualization, as well as the efficacy of combination regimens.
Regarding personalized medicine, it can be reasoned that treatment should be continued only in patients showing high levels of target-site uptake with high expression of GRPR; otherwise, alternative therapeutic approaches should be considered. ## 5. Conclusion 111In-AMBA showed agonist characteristics, good bioactivity in vitro, and uptake in human GRPR-expressing tumors in vivo. This molecular image-guided diagnostic agent can be used for a range of purposes, from simple and straightforward biodistribution studies to extensive and elaborate experimental setups aiming to enable “personalized medicine” and to improve the efficacy of combined-modality anticancer therapy. ---
# Molecular Imaging, Pharmacokinetics, and Dosimetry of 111In-AMBA in Human Prostate Tumor-Bearing Mice

**Authors:** Chung-Li Ho; I-Hsiang Liu; Yu-Hsien Wu; Liang-Cheng Chen; Chun-Lin Chen; Wan-Chi Lee; Cheng-Hui Chuang; Te-Wei Lee; Wuu-Jyh Lin; Lie-Hang Shen; Chih-Hsien Chang

**Journal:** Journal of Biomedicine and Biotechnology (2011)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2011/101497
--- ## Abstract Molecular imaging, with its promise of personalized medicine, can provide patient-specific information noninvasively, thus enabling treatment to be tailored to the specific biological attributes of both the disease and the patient. This study investigated the in vitro characterization of DO3A-CH2CO-G-4-aminobenzoyl-Q-W-A-V-G-H-L-M-NH2 (AMBA), micro-SPECT/CT imaging, and the biological activities of 111In-AMBA in PC-3 prostate tumor-bearing SCID mice. The uptake of 111In-AMBA peaked at 3.87±0.65% ID/g at 8 h. Micro-SPECT/CT imaging studies showed that the uptake of 111In-AMBA was clearly visualized between 8 and 48 h postinjection. The distribution half-life (t1/2α) and the elimination half-life (t1/2β) of 111In-AMBA in mice were 1.53 h and 30.7 h, respectively. The Cmax and AUC of 111In-AMBA were 7.57% ID/g and 66.39 h × % ID/g, respectively. The effective dose appeared to be 0.11 mSv/MBq. We demonstrated good uptake of 111In-AMBA in GRPR-overexpressing PC-3 tumor-bearing SCID mice. 111In-AMBA is a safe and promising molecular image-guided diagnostic agent for human GRPR-positive tumors, with applications ranging from simple and straightforward biodistribution studies to improving the efficacy of combined-modality anticancer therapy. --- ## Body ## 1. Introduction Prostate cancer is estimated to rank first in the number of cancer cases and second in the number of deaths due to cancer among men in the Western world [1]. Gastrin-releasing peptides (GRPs), including bombesin-like peptides (BLPs), are involved in the regulation of a large number of biological processes in the gut and central nervous system (CNS) [2]. They mediate their action on cells by binding to members of a superfamily of G protein-coupled receptors [3]. There are four known subtypes of BN-related peptide receptors, namely, the gastrin-releasing peptide receptor (GRPR, BB2, BRS-2), the neuromedin B receptor (NMBR, BB1, BRS-1), the orphan receptor (BRS-3), and the amphibian receptor (BB4-R) [4].
Except for BB4-R, these receptors are widely distributed, especially in the gastrointestinal (GI) tract and central nervous system (CNS). The receptors have a large range of effects in both normal physiology and pathophysiological conditions [5]. GRPRs are normally expressed in nonneuroendocrine tissues of the pancreas and breast, and in neuroendocrine cells of the brain, GI tract, lung, and prostate, but are not normally expressed by epithelial cells in the colon, lung, or prostate [6, 7].Molecular imaging enables the visualization of cellular function and the follow-up of molecular processes in living organisms without perturbing them [8]. The radionuclide molecular imaging technique is the most sensitive and can provide target-specific information. The radiotracer could also be used for radionuclide therapy. Thus, the development of a personalized theranostic (image and treat) agent would allow greater accuracy in the selection of patients who may respond to treatment and in assessing the outcome of therapeutic response [9]. Gastrin-releasing peptide receptors (GRPRs) are overexpressed in several primary human tumors and metastases [5]. Markwalder and Reubi reported that GRPRs are expressed at high density in invasive prostate carcinomas and in prostatic intraepithelial neoplasms, whereas normal prostate tissue and hyperplastic prostate tissue are predominantly GRPR-negative [10]. These findings suggest that GRPR may be used as a molecular basis for diagnosing and staging prostate cancer, and further for imaging-guided personalized medicine using radiolabeled bombesin analogues.Previous studies have evaluated 111In-radiolabeled BN analogues, which bind rapidly to GRP receptor-positive tumor cells, including PC-3, CA20948, and AR42J, using gamma camera imaging after administration [11–14].
AMBA (DO3A-CH2CO-G-(4-aminobenzoyl)-QWAVGHLM-NH2) (Figure 1), a BBN-related peptide agonist, has a DO3A structure that can chelate tripositive metal isotopes, such as 68Ga, 90Y, 111In, and 177Lu. Thus, it can form many kinds of radiolabeled probes for various purposes [15]. Indium-111 emits γ-photons of two energies (172 and 245 keV) as well as Auger and internal conversion electrons. 111In-AMBA was initially used for diagnostic purposes but retains potential for radiotherapy. Auger electrons, with a maximum energy of <30 keV, are high linear energy transfer (LET) radiation with subcellular path lengths (2–500 nm) in tissues [16]. By imaging the presence or absence of GRPR, 111In-AMBA could be used for patient selection for further radiotherapy (177Lu-AMBA), chemotherapy (BLP antagonists), or therapeutic response monitoring as imaging-guided personalized medicine. Although 111In-AMBA has been evaluated as an imaging agent [17–20], the pharmacokinetics and dosimetry of the agent have not yet been reported. In this study, 111In-AMBA, which retains only the last eight amino acids (Q-W-A-V-G-H-L-M-NH2) of native BN, was designed as an image-guided diagnostic agent for human GRPR-positive tumors. The pharmacokinetics, biodistribution, dosimetry, and micro-SPECT/CT imaging of 111In-AMBA were evaluated in human androgen-independent PC-3 prostate tumor-bearing SCID mice.Figure 1 Representative structure of AMBA. ## 2. Materials and Methods ### 2.1. Chemicals Protected Nα-Fmoc-amino acid derivatives were purchased from Calbiochem-Novabiochem (Laufelfingen, Switzerland), Fmoc-amide resin and coupling reagent were purchased from Applied Biosystems Inc. (Foster City, CA, USA), and DOTA-tetra(tBu) ester was purchased from Macrocyclics (Dallas, TX, USA). Fmoc-4-abz-OH was obtained from Bachem (Chauptstrasse, Switzerland). Bombesin was purchased from Fluka (Buchs, Switzerland). ### 2.2.
Synthesis of AMBA AMBA was synthesized by solid-phase peptide synthesis (SPPS) using an Applied Biosystems Model 433A fully automated peptide synthesizer (Applied Biosystems, Foster City, CA, USA) employing the Fmoc (9-fluorenylmethoxycarbonyl) strategy. Carboxyl groups on Fmoc-protected amino acids were activated by 2-(1H-benzotriazol-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate (HBTU), forming a peptide bond with the N-terminal amino group of the growing peptide, which was anchored via the C-terminus to the resin, providing for stepwise amino acid addition. Rink amide resin (0.25 mmol) and Fmoc-protected amino acids (1.0 mmol) with appropriate side-chain protections, together with DOTA-tetra(tBu) ester, were used for SPPS of the BBN conjugates. Side-chain protecting groups used in the synthesis were Trt for Gln and His, and Boc for Trp.The protected peptide-resin was cleaved and deprotected with a mixture of 50% trifluoroacetic acid (TFA), 45% chloroform, 3.75% anisole, and 1.25% 1,2-ethanedithiol (EDT) for 4 h at room temperature (RT). The crude peptide was isolated by precipitation with cold diethyl ether. After centrifugation, the collected precipitate was dried under vacuum. The crude peptide was purified by reverse-phase high-performance liquid chromatography (HPLC) using an XTerra prep MS C18 column, 5 μm, 18 × 50 mm (Waters Corp., MA, USA) with an acetonitrile/water gradient consisting of solvent A (0.1% TFA in H2O) and solvent B (0.1% TFA in acetonitrile), in 14.8% yield; flow: 6 mL/min; gradient: 20%–40% B over 20 min. The molecular weight was determined with a MALDI-TOF mass spectrometer (Bruker Daltonics Inc., Germany). The m/z determined for AMBA was 1,502.6 [M+H]. ### 2.3. Radiolabeling of 111In-AMBA AMBA was radiolabeled with 111In as previously described by Zhang et al. [21].
Briefly, AMBA was labeled with 111In (111InCl3, Institute of Nuclear Energy Research (INER), Taoyuan, Taiwan; 16430 MBq/mL in 0.05 N HCl, pH 1.5–1.9) by reacting 6.66 × 10−4 μmol (1 μg) of peptide in 95 μL of 0.1 M NH4OAc (pH 5.5) with 64.75 MBq of 111InCl3 in 5 μL of 0.04 N HCl for 10 min at 95°C. The specific activity of 111In-AMBA was 9.72 × 104 MBq/μmol. The radiolabeling efficiency was analyzed using instant thin-layer chromatography (ITLC SG, Pall Corporation, New York, USA) with 0.1 M Na-citrate (pH 5.0) as the solvent (indium citrate and 111InCl3: Rf = 0.9–1.0; peptide-bound 111In: Rf = 0–0.1) [22]. Radio high-performance liquid chromatography (radio-HPLC) analysis was performed using a Waters 2690 chromatography system with a 2996 photodiode array detector (PDA), a Bioscan radiodetector (Washington, DC, USA), and a Gilson FC 203B fraction collector (Middleton, WI, USA). 111In-AMBA was purified on an Agilent (Santa Clara, CA, USA) Zorbax Bonus-RP HPLC column (4.6 × 250 mm, 5 μm) eluted with a gradient from 10% B to 40% B in 40 min. The flow rate was 1 mL/min at RT, and the retention time for 111In-AMBA was 22.5 min. After purification by HPLC, 100% ethanol was used instead of acetonitrile by solvent exchange with a Waters Sep-Pak Light C18 cartridge (Milford, MA, USA). Normal saline was added after evaporation, and the pH was in the range 7–7.5. ### 2.4. Receptor Cold Competition Assay The cold competition binding assay was performed using the human bombesin 2 receptor expressed in HEK-293 cells as the source of GRP receptors (PerkinElmer, Boston, MA, USA). Assays were performed using FC96 plates and the Multiscreen system (Millipore, Bedford, MA).
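The IC50 analysis in this assay uses a four-parameter curve fit. A minimal sketch of the four-parameter logistic (4PL) model, together with the Cheng-Prusoff conversion of an IC50 to an inhibition constant Ki, is given below; the function names and the example Kd/radioligand values are illustrative assumptions, not parameters from the KELL analysis.

```python
def four_pl(conc_nm, bottom, top, ic50_nm, hill):
    """Four-parameter logistic: bound signal vs. competitor concentration (nM)."""
    return bottom + (top - bottom) / (1.0 + (conc_nm / ic50_nm) ** hill)

def cheng_prusoff_ki(ic50_nm, radioligand_nm, kd_nm):
    """Ki = IC50 / (1 + [radioligand]/Kd): converts a competition IC50 to an
    inhibition constant, given the radioligand concentration and its Kd."""
    return ic50_nm / (1.0 + radioligand_nm / kd_nm)

# By construction, the 4PL curve passes through its midpoint at the IC50:
halfway = four_pl(0.82, bottom=0.0, top=100.0, ic50_nm=0.82, hill=1.0)  # 50.0
```

Note that the Cheng-Prusoff correction always yields Ki ≤ IC50, consistent with the reported Ki values (0.65 and 0.10 nmol/L) lying below the corresponding IC50 values (0.82 and 0.13 nmol/L).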
Binding of 125I-Tyr4-Bombesin (PerkinElmer, Boston, MA, USA) to the human bombesin 2 receptor (0.16 μg per well) was determined in the presence of increasing concentrations (0.001 nmol/L to 1000 nmol/L) of unlabeled AMBA in a buffer solution (20 mmol/L HEPES, pH 7.4, 3 mmol/L MgCl2, 1 mmol/L EDTA, and 0.3% BSA) with a total volume of 250 μL per well. After incubation for 60 min at RT, membranes were filtered and washed with ice-cold Tris-HCl buffer (50 mmol/L). The filters containing membrane-bound radioactivity were counted using a Cobra II gamma-counter (Packard, Meriden, CT). The concentration producing 50% inhibition (IC50) was calculated with a four-parameter curve-fitting routine in the KELL software for Windows, version 6 (Biosoft, Ferguson, MO, USA) [13]. ### 2.5. Cell Culture and Animal Model Human androgen-independent prostate cancer PC-3 cells (Bioresource Collection and Research Center, Taiwan) were cultured in Ham's F-12K medium supplemented with 10% heat-inactivated fetal bovine serum (all from GIBCO, Grand Island, NY, USA) in 5% CO2 at 37°C. For animal inoculation, an aliquot was thawed, grown, and used within 10 passages. Five-week-old male ICR SCID (severe combined immunodeficient) outbred mice were obtained from the National Animal Center of Taiwan (Taipei, Taiwan, ROC) and maintained on a standard diet (Lab Diet; PMI Feeds, St. Louis, MO, USA) at RT, with free access to tap water, in the animal house of INER. Thirty-three SCID mice were subcutaneously injected with 2 × 106 PC-3 cells in the right hind flank. The animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of INER. ### 2.6. Biodistribution Studies At 4 weeks after PC-3 cell inoculation, the weight of the developed tumors ranged from 0.05 to 0.2 g. Twenty-five PC-3 xenograft SCID mice (n=5 for each group) were injected with 0.37 MBq (0.1 μg) of 111In-AMBA in 100 μL of normal saline via the tail vein.
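The biodistribution readout used throughout (% ID/g) is straightforward bookkeeping: tissue counts as a fraction of injected-dose counts, normalized by tissue mass, with all counts decay-corrected to a common reference time. A minimal sketch, with an illustrative function name and made-up numbers:

```python
def percent_id_per_g(tissue_counts, injected_dose_counts, tissue_mass_g):
    """%ID/g = 100 * (tissue counts / injected-dose counts) / tissue mass (g).
    Counts are assumed decay-corrected to a common reference time."""
    return 100.0 * (tissue_counts / injected_dose_counts) / tissue_mass_g

# e.g. a 0.15 g tumor containing 0.5% of the injected counts:
uptake = percent_id_per_g(5_000, 1_000_000, 0.15)  # ~3.33 %ID/g
```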
The mice were sacrificed by CO2 asphyxiation, and tissues and organs were excised at 1, 4, 8, 24, and 48 h postinjection (p.i.). Subsequently, the tissues and organs were weighed, their radioactivity was counted in a Packard Cobra II gamma-counter (PerkinElmer, Waltham, MA, USA), and the percentage of injected dose per gram (% ID/g) for each organ or tissue was calculated [23]. ### 2.7. Pharmacokinetic Studies Six PC-3 xenograft SCID mice were injected with 0.37 MBq (0.1 μg) of 111In-AMBA in 100 μL of normal saline via the tail vein. At 0.25, 1, 4, 16, 24, 48, 72, 96, and 168 h p.i., 20 μL of blood was collected by heart puncture; the blood was weighed, its radioactivity was counted in the Cobra II gamma-counter, and the percentage of injected dose per gram (% ID/g) was calculated. The data were fitted to a two-compartment model, and the pharmacokinetic parameters were derived with the WinNonlin 5.0 software (Pharsight Corporation, Mountain View, CA, USA). ### 2.8. Micro-SPECT/CT Imaging Two male SCID mice bearing human PC-3 tumors of approximately 0.1 g were injected i.v. with 12.2 MBq/4 μg of 111In-AMBA after purification by radio-HPLC. The SPECT and CT images were acquired with a micro-SPECT/CT scanner system (XSPECT; Gamma Medica-Ideas Inc., Northridge, CA, USA). SPECT imaging was performed using medium-energy, parallel-hole collimators at 1, 4, 8, 24, and 48 h. The source and detector were mounted on a circular gantry, allowing them to rotate 360 degrees around the subject (mouse) positioned on a stationary bed. The field of view (FOV) was 12.5 cm. Image acquisition was accomplished using 64 projections at 90 seconds per projection. The energy windows were set at 173 keV ± 10% and 247 keV ± 10%. SPECT imaging was followed by CT imaging (X-ray source: 50 kV, 0.4 mA; 256 projections) with the animal in exactly the same position.
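The two-compartment analysis in Section 2.7 corresponds to a biexponential plasma curve C(t) = A·e^(−αt) + B·e^(−βt). The sketch below, which uses the mean macro constants later reported in Table 2 purely as illustrative inputs, shows how half-lives and AUC follow from the fitted constants. Because the paper averages per-animal estimates, ln 2 / mean(β) does not exactly equal the reported mean t1/2β.

```python
import math

# Two-compartment bolus model (illustrative; mean Table 2 values as inputs).
A, B = 6.15, 1.43         # intercepts, %ID/g
ALPHA, BETA = 1.19, 0.03  # distribution and elimination rate constants, 1/h

def conc(t_h):
    """Plasma concentration C(t) = A*exp(-alpha*t) + B*exp(-beta*t), in %ID/g."""
    return A * math.exp(-ALPHA * t_h) + B * math.exp(-BETA * t_h)

def half_life(rate_per_h):
    """t1/2 = ln 2 / rate constant."""
    return math.log(2) / rate_per_h

def auc_zero_to_inf():
    """AUC(0, inf) = A/alpha + B/beta for the biexponential model."""
    return A / ALPHA + B / BETA

c0 = conc(0.0)  # = A + B = 7.58 %ID/g at t = 0
```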
A three-dimensional (3D) Feldkamp cone beam algorithm was used for CT image reconstruction, and a two-dimensional (2D) filtered back projection algorithm was used for SPECT image reconstruction. All image processing softwares, including SPECT/CT coregistration, were provided by Gamma Medica-Ideas Inc (Northridge, CA, USA). After coregistration, both the fused SPECT and CT images had 256 × 256 × 256 voxels with an isotropic 0.3-mm voxel size. ### 2.9. Absorbed Radiation Dose Calculations The relative organ mass scaling method was employed to extrapolate the animal data to humans [24, 25]. The mean absorbed dose in various tissues was calculated from the radionuclide concentration in tissues/organs of interest, assuming a homogeneous distribution of the radionuclide within any source region [26]. The calculated mean value of percentage of injected activity per g (% IA/g) for the organs in mice was extrapolated to uptake in organs of a 70-kg adult using the following formula [24]: (1)[(%IAgorgan)animal×(KgTBweight)animal]×(gorganKgTBweight)human=(%IAorgan)human. The extrapolated values (% IA) in the human organs at 1, 4, 8, 24, and 48 h were fitted with exponential biokinetic models and integrated to obtain the number of disintegrations in the source organs. This information was entered into the OLINDA/EXM computer program. The integrals (MBq-s) for 15 organs, including heart contents (blood), brain, muscle, bone, heart, lung, spleen, pancreas, kidneys, liver, and remainder of body were evaluated and used for dosimetry evaluation. The code also displays contributions of different source organs to the total dose of target organs. For the estimation of the tumor absorbed dose, it was assumed that once the radiopharmaceutical is inside the tumor, there is no biological elimination. ## 2.1. 
Chemicals

Protected Nα-Fmoc-amino acid derivatives were purchased from Calbiochem-Novabiochem (Laufelfingen, Switzerland), Fmoc-amide resin and coupling reagent were purchased from Applied Biosystems Inc. (Foster City, CA, USA), and DOTA-tetra (tBu) ester was purchased from Macrocyclics (Dallas, TX, USA). Fmoc-4-abz-OH was obtained from Bachem (Chauptstrasse, Switzerland). Bombesin was purchased from Fluka (Buchs, Switzerland).

## 2.2. Synthesis of AMBA

AMBA was synthesized by solid-phase peptide synthesis (SPPS) on an Applied Biosystems Model 433A fully automated peptide synthesizer (Applied Biosystems, Foster City, CA, USA) employing the Fmoc (9-fluorenylmethoxycarbonyl) strategy. Carboxyl groups on Fmoc-protected amino acids were activated by 2-(1H-benzotriazol-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate (HBTU), forming a peptide bond with the N-terminal amino group of the growing peptide, which was anchored via its C-terminus to the resin, allowing stepwise amino acid addition. Rink Amide resin (0.25 mmole) and Fmoc-protected amino acids (1.0 mmole), with appropriate side-chain protections, and DOTA-tetra (tBu) ester were used for SPPS of the BBN conjugates. Side-chain protecting groups in the synthesis were Trt for Gln and His, and Boc for Trp. The protected peptide-resin was cleaved and deprotected with a mixture of 50% trifluoroacetic acid (TFA), 45% chloroform, 3.75% anisole, and 1.25% 1,2-ethanedithiol (EDT) for 4 h at room temperature (RT). The crude peptide was isolated by precipitation with cold diethyl ether. After centrifugation, the collected precipitate was dried under vacuum. The crude peptide was purified by reverse-phase high-performance liquid chromatography (HPLC) on an XTerra Prep MS C18 column (5 μm, 18 × 50 mm; Waters Corp., MA, USA) with an acetonitrile/water gradient consisting of solvent A (0.1% TFA in H2O) and solvent B (0.1% TFA in acetonitrile), with a 14.8% yield; flow: 6 mL/min; gradient: 20%–40% B over 20 min.
The molecular weight was determined with a MALDI-TOF mass spectrometer (Bruker Daltonics Inc., Germany); the m/z determined for AMBA was 1,502.6 [M+H].

## 2.3. Radiolabeling of 111In-AMBA

AMBA was radiolabeled with 111In as previously described by Zhang et al. [21]. Briefly, AMBA was labeled with 111In (111InCl3, Institute of Nuclear Energy Research (INER), Taoyuan, Taiwan; 16430 MBq/mL in 0.05 N HCl, pH 1.5–1.9) by reacting 6.66 × 10^−4 μmole (1 μg) of peptide in 95 μL 0.1 M NH4OAc (pH 5.5) with 64.75 MBq 111InCl3 in 5 μL 0.04 N HCl for 10 min at 95°C. The specific activity of 111In-AMBA was 9.72 × 10^4 MBq/μmole. The radiolabeling efficiency was analyzed using instant thin-layer chromatography (ITLC SG, Pall Corporation, New York, USA) with 0.1 M Na-citrate (pH 5.0) as solvent (indium citrate and 111InCl3: Rf = 0.9–1.0; peptide-bound 111In: Rf = 0–0.1) [22]. Radio high-performance liquid chromatography (radio-HPLC) analysis was performed using a Waters 2690 chromatography system with a 2996 photodiode array detector (PDA), a Bioscan radiodetector (Washington, DC, USA), and a Gilson FC 203B fraction collector (Middleton, WI, USA). 111In-AMBA was purified on an Agilent (Santa Clara, CA, USA) Zorbax Bonus-RP HPLC column (4.6 × 250 mm, 5 μm) eluted with a gradient from 10% B to 40% B in 40 min. The flow rate was 1 mL/min at RT, and the retention time for 111In-AMBA was 22.5 min. After purification by HPLC, acetonitrile was replaced with 100% ethanol by solvent exchange on a Waters Sep-Pak Light C18 cartridge (Milford, MA, USA). Normal saline was added after evaporation, and the pH was in the range 7–7.5.

## 2.4. Receptor Cold Competition Assay

The cold competition binding assay was performed using human bombesin 2 receptor expressed in HEK-293 cells as the source of GRP receptors (PerkinElmer, Boston, MA, USA). Assays were performed using FC96 plates and the Multiscreen system (Millipore, Bedford, MA).
Binding of 125I-Tyr4-bombesin (PerkinElmer, Boston, MA, USA) to human bombesin 2 receptor (0.16 μg per well) was determined in the presence of increasing concentrations (0.001 nmole/L to 1000 nmole/L) of unlabeled AMBA in a buffer solution (20 mmol/L HEPES, pH 7.4, 3 mmol/L MgCl2, 1 mmol/L EDTA, and 0.3% BSA) with a total volume of 250 μL per well. After incubation for 60 min at RT, membranes were filtered and washed with ice-cold Tris-HCl buffer (50 mmol/L). The filters containing membrane-bound radioactivity were counted using a Cobra II gamma counter (Packard, Meriden, CT). The half-maximal inhibitory concentration (IC50) was calculated with a four-parameter curve-fitting routine using the KELL software for Windows version 6 (Biosoft, Ferguson, MO, USA) [13].

## 2.5. Cell Culture and Animal Model

Human androgen-independent prostate cancer PC-3 cells (Bioresource Collection and Research Center, Taiwan) were cultured in Ham's F-12K medium supplemented with 10% heat-inactivated fetal bovine serum (all from GIBCO, Grand Island, NY, USA) in 5% CO2 at 37°C. For animal inoculation, an aliquot was thawed, grown, and used within 10 passages. Five-week-old male ICR SCID (severe combined immunodeficient) outbred mice were obtained from the National Animal Center of Taiwan (Taipei, Taiwan, ROC) and maintained on a standard diet (Lab Diet; PMI Feeds, St. Louis, MO, USA) at RT, with free access to tap water, in the animal house of the INER. Thirty-three SCID mice were subcutaneously injected with 2 × 10^6 PC-3 cells in the right hind flank. The animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of the INER.

## 2.6. Biodistribution Studies

At 4 weeks after PC-3 cell inoculation, the weight of the developed tumors ranged from 0.05 to 0.2 g. Twenty-five PC-3 xenograft SCID mice (n = 5 for each group) were injected with 0.37 MBq (0.1 μg) of 111In-AMBA in 100 μL normal saline via the tail vein.
The mice were sacrificed by CO2 asphyxiation, and tissues and organs were excised at 1, 4, 8, 24, and 48 h postinjection (p.i.). Subsequently, the tissues and organs were weighed, their radioactivity was counted in a Packard Cobra II gamma counter (PerkinElmer, Waltham, MA, USA), and the percentage of injected dose per gram (% ID/g) for each organ or tissue was calculated [23].

## 2.7. Pharmacokinetic Studies

Six PC-3 xenograft SCID mice were injected with 0.37 MBq (0.1 μg) of 111In-AMBA in 100 μL normal saline via the tail vein. At 0.25, 1, 4, 16, 24, 48, 72, 96, and 168 h p.i., 20 μL of blood was collected by cardiac puncture; the blood was weighed, its radioactivity was counted in the Cobra II gamma counter, and the percentage of injected dose per gram (% ID/g) was calculated. The data were fitted to a two-compartment model, and the pharmacokinetic parameters were derived with WinNonlin 5.0 software (Pharsight Corporation, Mountain View, CA, USA).

## 2.8. Micro-SPECT/CT Imaging

Two male SCID mice bearing human PC-3 tumors of approximately 0.1 g were injected i.v. with 12.2 MBq/4 μg 111In-AMBA after purification by radio-HPLC. The SPECT and CT images were acquired with a micro-SPECT/CT scanner system (XSPECT; Gamma Medica-Ideas Inc., Northridge, CA, USA). SPECT imaging was performed using medium-energy, parallel-hole collimators at 1, 4, 8, 24, and 48 h. The source and detector were mounted on a circular gantry, allowing them to rotate 360 degrees around the subject (mouse) positioned on a stationary bed. The field of view (FOV) was 12.5 cm. Image acquisition comprised 64 projections at 90 seconds per projection. The energy windows were set at 173 keV ± 10% and 247 keV ± 10%. SPECT imaging was followed by CT imaging (X-ray source: 50 kV, 0.4 mA; 256 projections) with the animal in exactly the same position.
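The two-compartment fit described in Section 2.7 corresponds to a biexponential blood curve C(t) = A·e^(−αt) + B·e^(−βt). A minimal sketch of the derived quantities (a Python stand-in for the WinNonlin fit; the parameter values below are purely illustrative, not the study's fitted output):

```python
import math

def biexponential(t, A, B, alpha, beta):
    """Two-compartment blood curve: C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def half_life(rate):
    """Half-life (h) associated with a first-order macro rate constant (1/h)."""
    return math.log(2) / rate

def auc_to_infinity(A, B, alpha, beta):
    """Analytic AUC of the biexponential from t = 0 to infinity: A/alpha + B/beta."""
    return A / alpha + B / beta

# Illustrative values (% ID/g and 1/h); hypothetical, chosen only to demonstrate the relations.
A, B, alpha, beta = 6.15, 1.43, 0.45, 0.0226
t_half_dist = half_life(alpha)   # distribution half-life
t_half_elim = half_life(beta)    # elimination half-life
auc = auc_to_infinity(A, B, alpha, beta)
```

The point of the sketch is the bookkeeping: t1/2 = ln 2 / λ for each phase, and the total exposure follows analytically from the fitted macro constants.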
A three-dimensional (3D) Feldkamp cone-beam algorithm was used for CT image reconstruction, and a two-dimensional (2D) filtered back-projection algorithm was used for SPECT image reconstruction. All image processing software, including SPECT/CT coregistration, was provided by Gamma Medica-Ideas Inc. (Northridge, CA, USA). After coregistration, both the fused SPECT and CT images had 256 × 256 × 256 voxels with an isotropic 0.3-mm voxel size.

## 2.9. Absorbed Radiation Dose Calculations

The relative organ mass scaling method was employed to extrapolate the animal data to humans [24, 25]. The mean absorbed dose in various tissues was calculated from the radionuclide concentration in the tissues/organs of interest, assuming a homogeneous distribution of the radionuclide within any source region [26]. The calculated mean percentage of injected activity per gram (% IA/g) for the organs in mice was extrapolated to uptake in the organs of a 70-kg adult using the following formula [24]:

$$\left[\left(\frac{\%\mathrm{IA}}{g_{\text{organ}}}\right)_{\text{animal}}\times\left(\mathrm{kg}_{\text{TB weight}}\right)_{\text{animal}}\right]\times\left(\frac{g_{\text{organ}}}{\mathrm{kg}_{\text{TB weight}}}\right)_{\text{human}}=\left(\%\mathrm{IA}_{\text{organ}}\right)_{\text{human}}.\quad(1)$$

The extrapolated values (% IA) in the human organs at 1, 4, 8, 24, and 48 h were fitted with exponential biokinetic models and integrated to obtain the number of disintegrations in the source organs. This information was entered into the OLINDA/EXM computer program. The integrals (MBq-s) for 15 organs, including heart contents (blood), brain, muscle, bone, heart, lung, spleen, pancreas, kidneys, liver, and the remainder of the body, were evaluated and used for dosimetry evaluation. The code also displays the contributions of different source organs to the total dose of the target organs. For the estimation of the tumor absorbed dose, it was assumed that once the radiopharmaceutical is inside the tumor, there is no biological elimination.

## 3. Results

### 3.1. Radiolabeling and In Vitro Receptor Binding Assay

The radiolabeling efficiency of 111In-AMBA was 95.43±1.37% (n=11).
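The organ-mass scaling of Equation (1) in Section 2.9 amounts to simple unit bookkeeping. A minimal sketch (the numbers in the example are hypothetical, not the study's data):

```python
def extrapolate_to_human(pct_ia_per_g_animal, animal_tb_kg,
                         human_organ_g, human_tb_kg=70.0):
    """Relative organ mass scaling, Eq. (1):

    [ (%IA/g_organ)_animal * (kg TB weight)_animal ]
        * (g_organ / kg TB weight)_human  =  (%IA organ)_human
    """
    return (pct_ia_per_g_animal * animal_tb_kg) * (human_organ_g / human_tb_kg)

# Hypothetical example: 10 %IA/g in a mouse organ, 0.025-kg mouse,
# 100-g human organ, 70-kg adult.
human_pct_ia = extrapolate_to_human(10.0, 0.025, 100.0)
```

The grams in the animal term and the kilograms in the human term cancel pairwise, leaving a dimensionless % IA per human organ.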
The in vitro competitive binding assays were performed on the human bombesin 2 receptor using 125I-Tyr4-bombesin as the GRPR-specific radiotracer, with unlabeled AMBA and native BBN as competitors. The IC50 values of AMBA and native BBN at the human bombesin 2 receptor (Figure 2) were 0.82±0.41 nmol/L and 0.13±0.10 nmol/L, respectively, both in the nanomolar range, demonstrating high specificity and affinity for the GRP receptor. The Ki values of AMBA and native BBN were 0.65±0.32 nmol/L and 0.10±0.08 nmol/L, respectively.

Figure 2: Competitive binding assay of AMBA versus 125I-Tyr4-bombesin with human bombesin 2 receptors.

### 3.2. Biodistribution

111In-AMBA accumulated significantly in the tumor, adrenals, pancreas, small intestine, and large intestine (Table 1). Fast blood clearance and fast excretion through the kidneys were observed. High levels of radioactivity were found in the kidneys before 24 h, indicating that the radioactivity was excreted rapidly in the urine within 24 h. Tumor uptake peaked at 3.87±0.65% ID/g at 8 h and then declined rapidly. The highest tumor/muscle ratio (Tu/Mu) of 111In-AMBA was 11.79 at 8 h after injection, decreasing progressively to 4.82 and 5.16 at 24 and 48 h after administration, respectively. Other GRPR-positive organs (small intestine and large intestine) also showed specific binding of 111In-AMBA (Table 1). The tumor/muscle ratios decreased conspicuously at 4 and 24 h postadministration.

Table 1: Biodistribution of 111In-AMBA after intravenous injection in PC-3 prostate tumor-bearing SCID mice.
| Organ | 1 h | 4 h | 8 h | 24 h | 48 h |
|---|---|---|---|---|---|
| Blood | 0.95±0.09 | 0.50±0.06 | 0.42±0.02 | 0.19±0.03 | 0.09±0.02 |
| Brain | 0.06±0.01 | 0.05±0.01 | 0.05±0.00 | 0.02±0.00 | 0.03±0.00 |
| Skin | 0.99±0.22 | 0.60±0.16 | 0.55±0.02 | 0.36±0.02 | 0.28±0.02 |
| Muscle | 0.57±0.19 | 0.33±0.14 | 0.33±0.02 | 0.21±0.04 | 0.14±0.02 |
| Bone | 1.02±0.18 | 0.92±0.21 | 1.57±0.18 | 0.90±0.13 | 0.60±0.07 |
| Heart | 0.62±0.09 | 0.48±0.04 | 0.56±0.08 | 0.41±0.05 | 0.32±0.04 |
| Lung | 1.78±0.23 | 1.88±0.46 | 1.70±0.65 | 0.60±0.17 | 0.26±0.03 |
| Adrenals | 5.79±1.21 | 7.08±1.22 | 17.8±4.65 | 7.41±1.99 | 5.20±1.11 |
| Spleen | 2.80±1.49 | 6.90±1.87 | 8.90±2.34 | 4.41±0.58 | 2.19±0.51 |
| Pancreas | 6.14±0.99 | 12.9±2.44 | 54.9±2.51 | 15.9±1.94 | 9.80±2.21 |
| Kidney | 3.56±0.15 | 4.23±0.28 | 3.92±0.91 | 4.10±0.72 | 2.74±0.30 |
| Liver | 7.26±0.53 | 8.22±1.05 | 7.04±0.24 | 8.64±1.31 | 6.59±1.83 |
| Bladder | 7.63±2.94 | 1.75±0.75 | 1.07±0.12 | 0.64±0.06 | 0.46±0.10 |
| Stomach | 0.81±0.09 | 0.80±0.07 | 3.97±1.15 | 0.97±0.29 | 0.47±0.05 |
| SI | 1.67±0.22 | 1.56±0.17 | 4.42±0.61 | 1.48±0.27 | 0.74±0.10 |
| LI | 1.77±0.25 | 3.42±1.10 | 7.04±1.48 | 2.39±0.38 | 0.99±0.16 |
| Tumor (PC-3) | 2.24±0.66 | 1.86±0.71 | 3.87±0.65 | 1.02±0.09 | 0.75±0.08 |
| Tumor/muscle | 3.89 | 5.69 | 11.79 | 4.82 | 5.16 |

Values are expressed as % ID/g, mean ± SEM (n = 4-5 at each time point). SI: small intestine; LI: large intestine.

### 3.3. Pharmacokinetic Studies

The radioactivity declined to below the detection limit after 24 h. The pharmacokinetic parameters derived from a two-compartment model [27] indicated that the distribution half-life (t1/2α) and elimination half-life (t1/2β) of 111In-AMBA were 1.53±0.69 h and 30.73±8.56 h, respectively (Table 2).

Table 2: Pharmacokinetic parameters of plasma in PC-3 tumor-bearing mice after intravenous injection of 10 μCi/mouse 111In-AMBA (mean ± SEM, n = 5).

| Parameter | Unit | Value |
|---|---|---|
| A | % ID/g | 6.15±0.69 |
| B | % ID/g | 1.43±0.61 |
| α | 1/h | 1.19±0.85 |
| β | 1/h | 0.03±0.01 |
| AUC0–168h | h × (% ID/g) | 66.4±17.3 |
| t1/2α | h | 1.53±0.69 |
| t1/2β | h | 30.7±8.56 |
| Cmax | % ID/g | 7.37±0.64 |

A, B, α, β: macro rate constants; t1/2α, t1/2β: distribution and elimination half-lives; AUC0–168h: area under the concentration of 111In-AMBA versus time curve; Cmax: maximum concentration in plasma.

### 3.4.
Micro-SPECT/CT Imaging

Micro-SPECT/CT imaging of 111In-AMBA indicated significant uptake in the tumors at 8 and 24 h after intravenous injection (Figure 3). The longitudinal micro-SPECT/CT imaging showed high accumulation of 111In-AMBA in the pancreas and gastrointestinal tract at 4, 8, 24, and 48 h after intravenous injection.

Figure 3: Micro-SPECT/CT images of 111In-AMBA targeting PC-3 tumor xenografts in SCID mice. 12.2 MBq/4 μg 111In-AMBA was administered to each mouse by intravenous injection. The images were acquired at 1, 4, 8, 24, and 48 h after injection. The energy windows were set at 173 keV ± 10% and 247 keV ± 10%; the image size was set at 80 × 80 pixels. The color map shows the SPECT pixel values from 0 to the maximum, expressed with an arbitrary value of 100.

### 3.5. Radiation Absorbed Dose Calculation

The radiation-absorbed dose projections for the administration of 111In-AMBA to humans, determined from the residence times in mice, are shown in Table 3. The highest absorbed doses appear in the lower large intestine (0.12 mSv/MBq), upper large intestine (0.13 mSv/MBq), kidneys (0.12 mSv/MBq), osteogenic cells (0.22 mSv/MBq), and pancreas (0.25 mSv/MBq). The effective dose appears to be approximately 0.11 mSv/MBq. The red marrow absorbed dose is estimated to be 0.09 mSv/MBq. For a 2-g tumor, the unit-density sphere model was used, and the estimated absorbed dose was 8.09 mGy/MBq.

Table 3: Radiation dose estimates for 111In-AMBA in humans.
| Organ | Estimated dose (mSv/MBq)* |
|---|---|
| Adrenals | 1.5E-01 |
| Brain | 3.1E-02 |
| Breasts | 7.7E-02 |
| Gallbladder Wall | 1.5E-01 |
| LLI Wall | 1.2E-01 |
| Small Intestine | 1.3E-01 |
| Stomach Wall | 1.1E-01 |
| ULI Wall | 1.3E-01 |
| Heart Wall | 7.2E-02 |
| Kidneys | 1.2E-01 |
| Liver | 2.0E-01 |
| Lungs | 7.4E-02 |
| Muscle | 7.0E-02 |
| Ovaries | 1.2E-01 |
| Pancreas | 2.5E-01 |
| Red Marrow | 8.8E-02 |
| Osteogenic Cells | 2.2E-01 |
| Skin | 5.8E-02 |
| Spleen | 1.2E-01 |
| Testes | 5.9E-02 |
| Thymus | 9.0E-02 |
| Thyroid | 9.2E-02 |
| Urinary Bladder Wall | 1.1E-01 |
| Uterus | 1.3E-01 |
| Total Body | 9.2E-02 |
| Effective Dose | 1.1E-01 |

*Radiation-absorbed dose projections in humans were determined from residence times for 111In-AMBA in SCID mice and were calculated using the OLINDA/EXM version 1.0 computer program.

## 4. Discussion

Growth factor receptors are involved in all steps of tumor progression, enhancing angiogenesis, local invasion, and distant metastasis. The overexpression of growth factor receptors on the cell surface of malignant cells may be associated with more aggressive behavior and a poor prognosis. For these reasons, tumor-related growth factor receptors can be taken as potential targets for therapeutic intervention. Over the last two decades, GRP and other BLPs have been shown to act as growth factors in many types of cancer. GRPR antagonists have been developed as anticancer candidate compounds, exhibiting impressive antitumoral activity both in vitro and in vivo in various murine and human tumors [28, 29]. Clinical trials with GRPR antagonists in cancer patients are in their initial phase, as anticipated by animal toxicology studies and preliminary evaluation in humans [29]. Presently, efforts at identifying the most suitable candidates for clinical trials and at improving drug formulation for human use are considered priorities. It may also be anticipated that GRPRs may be exploited as potential carriers for cytotoxins, immunotoxins, or radioactive compounds.
Thus, the visualization of these receptors through molecular image-guided diagnostic agents may become an interesting tool for tumor detection and staging in personalized medicine.

The present study showed the highest accumulation of 111In-AMBA in the pancreas in mice (Table 1). However, interspecies differences in the structure and pharmacology of human and animal GRP receptors have been reported [30]. Because the pancreas is the primary normal tissue in these animals that expresses a high density of bloodstream-accessible GRPRs, the accumulation of 111In in the pancreas is a direct reflection of the efficacy of radiolabeled BN analogs for in vivo targeting of cell-surface-expressed GRPRs [31]. Retention of 111In-AMBA in the pancreas may reflect its character as a radioagonist, with effective internalization and cell retention. Waser et al. reported that, in contrast to the strongly labeled GRPR-positive mouse pancreas, the human pancreas did not bind 177Lu-AMBA unless chronic pancreatitis was diagnosed [32].

The majority of research efforts into the design of bombesin-based radiopharmaceuticals have been carried out using GRPR agonists. The main reason for using agonists is that they undergo receptor-mediated endocytosis, enabling residualization of the attached radiometal within the targeted cell [33]. Micro-SPECT/CT imaging is a noninvasive imaging modality that can longitudinally monitor the behavior of GRPR expression in the same animal across different time points before and during therapy.
In the present study, tumor targeting and localization of 111In-AMBA were clearly imaged with micro-SPECT/CT from 1 to 48 h after administration, suggesting that micro-SPECT/CT imaging with 111In-AMBA is a good tool for studying tumor targeting, distribution, and real-time therapeutic response in vivo.

The effective dose projected for the administration of 111In-AMBA to humans (0.11 mSv/MBq) (Table 3) is comparable to that for 111In-pentetreotide (0.12 mSv/MBq) [34], the only 111In-labeled peptide receptor-targeted radiotherapeutic agent in clinical use [35, 36]. The intestines, osteogenic cells, kidneys, and pancreas appear to receive absorbed doses of around 0.2 mSv/MBq of 111In-AMBA. At a maximum planned administration of 111 MBq for diagnostic imaging, the total radiation-absorbed dose to these organs would be about 12 mSv. The use of animal data to estimate human doses is a necessary first step, but such studies give only an estimate of the radiation doses to be expected in human subjects. More accurate human dosimetry must be established with imaging studies involving human volunteers or patients. The dosimetry data presented here will be valuable in the dose planning of these studies and for the application of 111In-AMBA in Investigational New Drug (IND) research.

Clinically, primary prostate cancer and its metastases may be heterogeneous, demonstrating a spectrum of phenotypes from androgen-sensitive to androgen-insensitive. 177Lu-AMBA, a conjugated bombesin compound for imaging and systemic radiotherapy, is now in phase I clinical trials [15]. 177Lu-AMBA has been evaluated in early-stage prostate cancer, represented by the androgen-dependent, prostate-specific antigen-secreting, hormone-sensitive prostate cancer cell line LNCaP [6], derived from a lymph node metastasis, and also in the PC-3 cell line, which is derived from a bone metastasis, is androgen-independent, and is thought to represent late-stage hormone-refractory prostate cancer (HRPC) [37].
177Lu-AMBA may prove clinically efficacious as a single-agent radiotherapeutic for heterogeneous metastatic prostate cancer and a valuable adjunct to traditional chemotherapy. Thus, the visualization of GRP receptors with 111In-AMBA as an image-guided agent may support the use of the radiotherapeutic 177Lu-AMBA and other traditional chemotherapies in personalized medicine.

Targeted therapeutic and imaging agents are becoming more prevalent and are used to treat increasingly smaller populations of patients. This has led to dramatic increases in the costs of clinical trials. Biomarkers have great potential to reduce the number of patients needed to test novel targeted agents by predicting or identifying nonresponse early on, thus enriching the clinical trial population with patients more likely to respond. GRPRs are expressed on prostate tumor cells, making them a potential biomarker for cancer. Imaging with 111In-AMBA indicated the stage of prostate cancer, informing the choice of therapeutic approach and the monitoring of therapeutic efficacy. The expression of GRPR will vary from patient to patient with disease stage and individual differences. If such patients could be prescreened with 111In-AMBA to identify those with higher tumor expression of GRPR, it would be possible to select cases for BLP-specific treatment, while cases with low tumor expression of GRPR could consider other treatment options. Consequently, the proposed approaches enable optimized and individualized treatment protocols and can advance the development of image-guided personalized medicine.

By visualizing how well drug-targeting systems deliver pharmacologically active agents to the pathological site, 111In-AMBA furthermore facilitates "personalized medicine" and patient individualization, as well as improving the efficacy of combination regimens.
Regarding personalized medicine, it can be reasoned that treatment should be continued only in patients showing high levels of target-site uptake with high expression of GRPR; otherwise, alternative therapeutic approaches should be considered.

## 5. Conclusion

111In-AMBA showed the characteristics of an agonist, good bioactivity in vitro, and uptake in human GRPR-expressing tumors in vivo. The molecular image-guided diagnostic agent can be used for many different purposes, ranging from simple and straightforward biodistribution studies to extensive and elaborate experimental setups aiming to enable "personalized medicine" and to improve the efficacy of combined-modality anticancer therapy.

---

*Source: 101497-2011-05-24.xml*
2011
# The Data Reduction Pipeline of the Hamburg Robotic Telescope

**Authors:** Marco Mittag; Alexander Hempelmann; José Nicolás González-Pérez; Jürgen H. M. M. Schmitt
**Journal:** Advances in Astronomy (2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101502

---

## Abstract

The fully automatic reduction pipeline for the blue channel of the HEROS spectrograph of the Hamburg Robotic Telescope (HRT) is presented. This pipeline is started automatically after the night-time observations and calibrations are finished. It includes all procedures necessary for a reliable and complete data reduction, that is, bias, dark, and flat-field correction. The order definition, wavelength calibration, and data extraction are also included. The final output is written in FITS format and is ready for the astronomer to use. The reduction pipeline is implemented in IDL and based on the IDL reduction package REDUCE written by Piskunov and Valenti (2002).

---

## Body

## 1. Introduction

The HRT [1] was built by Halfmann Teleskoptechnik GmbH (Germany) and installed at Hamburg Observatory in 2005. This Cassegrain-Nasmyth F/8 type telescope has a 1.2 m aperture, an Alt/Az mounting, and direct drives with high-precision absolute encoders. The only instrumentation of the HRT is the Heidelberg Extended Range Optical Spectrograph (HEROS). It is connected to the telescope via a Polymicro FVP 50/70 μm fused-silica fibre, equipped with microlenses on both sides of the fibre to adapt the HRT-HEROS F-ratios. At present, only the blue channel of HEROS (380–570 nm) is operating, while the red channel will start its operations in July 2009. The spectral resolution of the device is R = 20000. The telescope is designed for long-term monitoring of active stars at its final location in Guanajuato, Mexico.

## 2.
Reduction Pipeline

The reduction pipeline is a fully automatic reduction pipeline including an automated wavelength calibration. It is started by the Central Control Software of the HRT system after the observations and calibrations have been finished. The reduced data are stored in an archive. The pipeline is implemented in IDL (Interactive Data Language) and uses the reduction package REDUCE by Piskunov and Valenti [2]. REDUCE is a powerful package providing the functionality required for the HEROS pipeline.

In the following, the main reduction steps of the pipeline are described and represented in flow charts.

### 2.1. Preparation of the Reduction

Before the main reduction starts, it is necessary to prepare the reduction. The flow chart in Figure 1 shows the main steps of this part.

Figure 1: Flow chart with the main steps of the preparation of the reduction.

The first step of the preparation is the definition of the directory holding the raw data. A temporary directory and the directory where results are saved to disk are also created.

Thereafter, the parameters and file names for the reduction procedure are defined. All reduction parameters are supplied with the same values to all routines called by the pipeline at the different steps of the reduction; this is done for consistency in the reduction. The parameter values are read from the parameter file for the blue channel.

### 2.2. Building the Master Calibration Images

The HRT system takes several calibration images (bias, dark, and flat field) before and after the observations. The respective images are averaged and then used as master calibration images. The main steps to build the master image are similar for bias, dark, and flat field. The flow chart in Figure 2 shows the main steps of this part.

Figure 2: Flow chart with the main steps of building the master calibration images.

The first step is a check of the variation in the calibration images.
The percentage of relative variation $rv_i$ between the arithmetic mean of a single image, $\langle im_i\rangle$, and the median of these values, $\operatorname{median}(\langle im_1\rangle \cdots \langle im_n\rangle)$, is calculated as

$$rv_i = \frac{\langle im_i\rangle - \operatorname{median}(\langle im_1\rangle \cdots \langle im_n\rangle)}{\operatorname{median}(\langle im_1\rangle \cdots \langle im_n\rangle)} \cdot 100. \tag{1}$$

For the calculation of $rv_i$ for the darks, the single images are corrected by the bias; for the $rv_i$ of the flats, by bias and dark. The percentage of relative variation is written to a log file. The calibration images are split into two lists. If the same number of images is taken before and after the observation, the images taken at the start of the observation are collected in the first list and those taken after the observation in the second list. Then the percentages of relative variation of the images are checked for both lists. If the percentage of relative variation is higher than a threshold, the corresponding image is not used for building the master calibration image. Normally the total number of images in both lists after the $rv_i$ check is greater than or equal to 3. The standard deviations of the percentages of relative variation, Equation (1), are calculated for both lists and used as the thresholds to build the master calibration image. If a standard deviation is less than 3.5, it is reset to the default value 3.5; this is the minimum threshold to find outliers in the images. If the total number of images in one list is less than 3, an error message is issued and the images are collected in a new list. The content of this new list is checked; there are three possibilities:

1. The total number of images is < 3: the reduction ends, because at least 3 calibration images are needed to build a master calibration image.
2. The total number of images is < 6: the new list is not split, because each list must contain at least 3 images.
3. The total number of images is ≥ 6: the new list is split into two lists.

Thereafter the single calibration images are averaged. 
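The $rv_i$ check of Equation (1) and the threshold-based rejection can be sketched in a few lines of Python (an illustration only, assuming the frames are NumPy arrays; the pipeline implements this in IDL):

```python
# Minimal sketch of the rv_i outlier check, Equation (1); illustrative only.
import numpy as np

def relative_variations(images):
    """Percentage deviation rv_i of each image's arithmetic mean from the
    median of all the means, as in Equation (1)."""
    means = np.array([im.mean() for im in images])
    med = np.median(means)
    return (means - med) / med * 100.0

def select_images(images, min_threshold=3.5):
    """Discard images whose |rv_i| exceeds the threshold.

    The threshold is the standard deviation of the rv_i values, but never
    less than the minimum value 3.5 described in the text.
    """
    rv = relative_variations(images)
    threshold = max(np.std(rv), min_threshold)
    return [im for im, r in zip(images, rv) if abs(r) <= threshold]
```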
As a next step, it is checked whether an error arose during the combination of the single calibration images. In this case the pipeline stops and an error message is written to a file. The master bias is subtracted from the average dark and flat field, and additionally the flat field is corrected for the dark contribution. The dark correction is not performed if the arithmetic mean of the master dark is smaller than a predefined threshold for the dark correction, because in this case the dark contribution is negligible. After the subtraction of the master bias, the dark is time normalised. The results are saved as master calibration images (hereafter bias, dark, and flat field). To monitor long-term changes, the arithmetic mean of the single images and the corresponding standard deviation are saved in a log file. The standard deviations of both lists and the arithmetic mean, the median, and the standard deviation of the master calibration images are also saved. ### 2.3. Order Definition The flat field is used for the order definition. The central positions of the individual spectral orders are located and defined in this step of the reduction pipeline. If a single order is not found, the pipeline stops and an error message is written to a file. The results of the order definition are saved in the order definition file. The flow chart in Figure 3 shows the main steps in this reduction part. Figure 3 Flow chart with the main steps of the order definition. ### 2.4. Blaze Extraction In the next step of the reduction pipeline the spectrum of the flat field lamp is extracted. The flow chart in Figure 4 shows the main steps in this part of the pipeline. This spectrum (hereafter the blaze) can be used as the blaze function. The blaze is used to eliminate the blaze function in the science spectrum and to correct for the difference in quantum efficiency of the pixels (Section 2.6). After the background correction the blaze is extracted like a science spectrum. 
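The combination logic for the master calibration images described above can be sketched as follows (a Python illustration under stated assumptions: frames are NumPy arrays of equal exposure, and the dark threshold value is invented; the pipeline reads the real value from its parameter file):

```python
# Sketch of the master-calibration combination; names and the threshold
# value are illustrative, not the pipeline's IDL routines.
import numpy as np

def build_masters(bias_imgs, dark_imgs, flat_imgs, dark_exptime,
                  dark_threshold=1.0):
    """Return master bias, time-normalised master dark, and master flat."""
    bias = np.mean(bias_imgs, axis=0)
    dark = np.mean(dark_imgs, axis=0) - bias      # bias-corrected master dark
    flat = np.mean(flat_imgs, axis=0) - bias      # bias-corrected master flat
    if dark.mean() >= dark_threshold:             # skip a negligible dark
        flat -= dark                              # correct flat for the dark
    dark /= dark_exptime                          # time-normalise the dark
    return bias, dark, flat
```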
The counts in the blaze are converted into electrons and the blaze is normalised by the exposure time. Finally the blaze is saved in the order definition file. Figure 4 Flow chart with the main steps of the blaze extraction. ### 2.5. Wavelength Calibration The spectrum of the ThAr lamp is used for the wavelength calibration. One ThAr image is taken before and one after the observations. The flow chart in Figure 5 shows the main steps in this part. The flow of the wavelength calibration can be split into two parts: the spectrum extraction and the new wavelength solution. Figure 5 Flow chart with the main steps of the wavelength calibration. ThAr Spectrum Extraction. The two ThAr spectra are reduced consecutively. The bias is subtracted from the ThAr image. If the arithmetic mean of the dark is above the threshold for dark subtraction, the dark is also subtracted from the ThAr image. Thereafter the ThAr spectrum is extracted and saved on disk. Wavelength Solution. For the automatic wavelength calibration a reference ThAr spectrum with 1D wavelength solutions of the individual orders is used. The extracted spectra are compared with the reference spectrum. The shifts are calculated for each order via a cross correlation. After that the order shifts of the two ThAr spectra are averaged for each order. The new 1D wavelength solutions of all orders are determined with the shifts and the 1D wavelength solution of the reference spectrum. Finally, the results, the shifts, and the spectral resolution of the reference ThAr spectrum are saved in a wavelength file. To check the results, the pipeline creates plots of the residuals of the 1D wavelength-solution fits and a file containing the arithmetic mean and standard deviation of the shifts and of the residuals of the 1D wavelength solution. ### 2.6. Spectrum Extraction The final part of the pipeline is the extraction of the actual spectra. First, the spectra of all images are extracted. 
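The per-order shift measurement via cross correlation, and the transfer of the reference wavelength solution, can be sketched as follows (a Python illustration; the sign convention and the interpolation are assumptions, not the pipeline's actual implementation):

```python
# Sketch of the cross-correlation shift and the shifted wavelength solution;
# illustrative only, for 1D NumPy spectra.
import numpy as np

def order_shift(spectrum, reference):
    """Pixel shift of an extracted ThAr order relative to the reference
    order, taken from the peak of the full cross correlation."""
    s = spectrum - spectrum.mean()
    r = reference - reference.mean()
    cc = np.correlate(s, r, mode="full")
    return cc.argmax() - (len(r) - 1)   # >0: spectrum shifted to the right

def shifted_solution(ref_wave, shift):
    """New 1D wavelength solution: the reference solution sampled at the
    pixel positions displaced by the measured shift (illustrative sign
    convention)."""
    pixels = np.arange(len(ref_wave))
    return np.interp(pixels - shift, pixels, ref_wave)
```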
The images of the same object are coadded and then a summed spectrum is created. Spectrum from the Single Science Image. The flow chart in Figure 6 shows the main steps in this part of the pipeline. The science images are collected in a list. The bias is subtracted from the science image, and if the arithmetic mean of the dark is greater than the threshold for dark subtraction, the dark is also subtracted from the science image. If sky images for this object exist, the average of the two sky images is created and subtracted from the science image. Next, a background correction is performed. After the background correction the pipeline identifies outliers in the science image and flags them. Thereafter, the spectrum is extracted, the counts in the spectrum are converted into electrons, and the spectrum is time normalised. In the following, the spectrum is divided by the blaze [3]. This step eliminates the blaze function; simultaneously, the correction for the quantum efficiency is achieved. If the keyword noautocal is not set, the wavelengths for the individual objects are corrected by the corresponding barycentric velocity shifts. Finally the spectra are saved in a FITS file. Figure 6 Flow chart with the main steps of the spectrum extraction from the single science image. Spectrum from the Coadded Science Images. This is the last part of the reduction pipeline. Here the spectrum from the summed science images of the same object is extracted. This reduction procedure is similar to the procedure for extracting the spectrum from a single science image, but the images are first coadded. The flow chart in Figure 7 shows the first steps in this reduction part. The rest is similar to the spectrum extraction of a single science image. The first difference between the two reduction steps is a check of which objects have more than one exposure. These objects are collected in a list. 
Another difference is the coaddition of the single images of the same object, after the bias and, if necessary, a dark and sky correction. The mean Julian date for the summed image and the barycentric velocity shift are computed. Figure 7 Flow chart of the first steps of the spectrum extraction of the coadded science images. 
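The reduction chain for a single science image (Section 2.6) can be sketched end to end as follows (a Python illustration with a trivial stand-in for the order-wise extraction; all names and the dark threshold are assumptions, not the pipeline's IDL routines):

```python
# Sketch of the single-science-image reduction chain; illustrative only.
import numpy as np

def reduce_science(image, bias, dark, blaze, exptime,
                   sky_images=None, dark_threshold=1.0):
    """Bias/dark/sky correction, extraction, time normalisation, and
    division by the blaze (removes blaze shape and pixel QE)."""
    img = image - bias
    if dark.mean() >= dark_threshold:        # dark subtraction only if significant
        img = img - dark * exptime           # master dark is time-normalised
    if sky_images is not None:               # average of the two sky frames
        img = img - np.mean(sky_images, axis=0)
    spectrum = img.sum(axis=0)               # stand-in for the real extraction
    spectrum = spectrum / exptime            # time-normalise the spectrum
    return spectrum / blaze                  # eliminate the blaze function
```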
## 3. Conclusions The data reduction pipeline for the blue channel works fully automatically and stably. If an error occurs at some point in the reduction flow, a message is written to a file. Log files are also written during the reduction. With these error and log files the astronomer can check the reduction flow. In Figure 8 an example of the final output of the pipeline is shown. A spectrum of Alcaid taken with HEROS and extracted with MIDAS [3] is shown in Figure 9 for comparison only. Figure 8 A sample result: Vega spectrum relative to the flat field spectrum and normalised to Figure 9. Figure 9 A comparative spectrum: Alcaid spectrum [3]. One problem in the reduction pipeline is the identification of faint outliers in the science image. Also the continuum normalisation of late-type stars may have problems finding the quasi-continuum segments of the spectrum. A future task will be the creation of the data reduction pipeline for the red channel. In general this pipeline will have the same structure as the pipeline for the blue channel. Finally, the optimisation and regular support of both pipelines are important to obtain the best possible outputs. --- *Source: 101502-2009-12-02.xml*
101502-2009-12-02_101502-2009-12-02.md
22,720
The Data Reduction Pipeline of the Hamburg Robotic Telescope
Marco Mittag; Alexander Hempelmann; José Nicolás González-Pérez; Jürgen H. M. M. Schmitt
Advances in Astronomy (2010)
Physical Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2010/101502
101502-2009-12-02.xml
--- ## Abstract The fully automatic reduction pipeline for the blue channel of the HEROS spectrograph of the Hamburg Robotic Telescope (HRT) is presented. This pipeline is started automatically after finishing the night-time observations and calibrations. The pipeline includes all necessary procedures for a reliable and complete data reduction, that is, Bias, Dark, and Flat Field correction. Also the order definition, wavelength calibration, and data extraction are included. The final output is written in a fits-format and ready to use for the astronomer. The reduction pipeline is implemented in IDL and based on the IDL reduction package REDUCE written by Piskunov and Valenti (2002). --- ## Body ## 1. Introduction The HRT [1] was built by Halfmann Teleskoptechnik GmbH (Germany) and installed at Hamburg Observatory in 2005. This Cassegrain-Nasmyth F/8 type telescope has a 1.2 m aperture, an Alt/Az mounting, and final direct drives with high-precision absolute encoders. The only instrumentation of the HRT is the Heidelberg Extended Range Optical Spectrograph (HEROS). It is connected with the telescope via a polymicro FVP 50/70 μ fused silica fibre, equipped with microlenses on both sides of the fibre to adapt the HRT-HEROS F-ratios. At present, only the blue channel of HEROS (380–570 nm) is operating, while the red channel will start its operations in July 2009. The spectral resolution of the device is R=20000. The telescope is designated for long-time monitoring of active stars at its final location Guanajuato in Mexico. ## 2. Reduction Pipeline The reduction pipeline is provided as fully automatic reduction pipeline including an automated wavelength calibration. It is started by the Central Control Software of the HRT system after the observations and calibrations have been finished. The reduced data are stored in an archive. This pipeline is implemented in IDL (Interactive Data Language) and uses the reduction package REDUCE, by Piskunov and Valenti [2]. 
REDUCE is a powerful package providing the required functionality for the HEROS pipeline.In the following the main reduction steps of the pipeline are described and these are represented in flow charts. ### 2.1. Preparation of the Reduction Before the main reduction starts, it is necessary to prepare the reduction. The flow chart in Figure1 shows the main steps of this part.Figure 1 Flow chart with the main steps of the preparation of the reduction.The first step of the preparation is the definition of the directory holding the raw data. Also a temporary directory and the directory are created, where results are saved on disk.Thereafter the parameters and file names for reduction procedure are defined. All reduction parameters are supplied with the same values to all routines called by the pipeline at the different steps of the reduction. This is made for consistency in the reduction. The parameter values are read out from the parameter file for the blue channel. ### 2.2. Building the Master Calibration Images The HRT system takes several calibration images: Bias, dark, and flat fields, before and after the observations. The respective images are averaged and then used as master calibration images. The main steps to build the master image are similar for bias, dark, and flat field. The flow chart in Figure2 shows the main steps in this part.Figure 2 Flow chart with the main steps of building the master calibration images.The first step is a check of the variation in the calibration images. The percentage of relative variationrvi between the arithmetic mean of the single images 〈imi〉 and the median of this values median(〈im1〉⋯〈imn〉) is calculated as(1)rvi=(〈imi〉-median(〈im1〉⋯〈imn〉)median(〈im1〉⋯〈imn)〉)·100. For the calculation of rvi for the darks the single images are corrected by bias and for the rvi for the flats by bias and dark. The percentage of relative variation is plotted in a log file.The calibration images are split in two lists. 
If the same number of images is taken before and after the observation, then in the first list the images taken at the start of the observation are collected and in the second list those taken after the observation. Then, the percentage of relative variations of the images are checked for both lists. If the percentage of relative variation is higher than a threshold, the corresponding image is not used for building the master calibration image. Normally the total number of images in both lists after the check ofrvi is greater or equals 3. The standard deviations of the percentage of relative variation of images, Equation (1), are calculated for both lists and used as the thresholds to build the master calibration image. If the standard deviation is less than 3.5, then it is reset to the default value 3.5. This is the minimum threshold to find outliers in the images.If the total number of images in one list is less than 3, an error message is obtained and the images are collected in a new list. The content of this new list is checked. Now there are 3 possibilities.(1) The total number of images is<3. The reduction ends, because one needs at least 3 calibration images to build a master calibration image.(2) The total number of images is<6. The new list is not split, because in the list must be at least 3 images.(3) The total number of images is≥6. The new list is split in two lists.Thereafter the single calibration images are averaged. As next step it is checked if an error arose during the combination of the single calibration images. In this case the pipeline stops and an error message is written to a file.The master bias is subtracted from the average dark and flat field and additionally the flat field is corrected from the dark contribution. The dark correction is not performed, if the arithmetic mean of master dark is smaller than a predefined threshold for the dark correction, because in this case a dark contribution is negligible. 
After the subtraction of the master bias, the dark is time normalised.The results are saved as master calibration images (hereafter, bias, dark, and flat field). To monitor long-term changes, the arithmetic mean of the single images and the corresponding standard deviation are saved in a log file. Also the standard deviation of both lists and the arithmetic mean, the median, and the standard deviation of the master calibration images are saved. ### 2.3. Order Definition The flat field is used for the order definition. The central positions of the individual spectral orders are located and defined in this step of the reduction pipeline. If a single order is not found, then the pipeline stops and an error message is written to a file. The results of the order definition are saved in the order definition file. The flow chart in Figure3 shows the main steps in this reduction part.Figure 3 Flow chart with the main steps of the order definition. ### 2.4. Blaze Extraction During the next step in the reduction pipeline the spectrum of the flat field lamp is extracted. The flow chart in Figure4 shows the main steps in this part of the pipeline. This spectrum (hereafter, blaze) can be used as blaze function. The blaze is used to eliminate the blaze function in the science spectrum and to correct for the difference in quantum efficiency of the pixels (Section 2.6). After the background correction the blaze is extracted like a science spectrum. The counts in the blaze are converted in electrons and the blaze is normalised by the exposure time. Finally the blaze is saved in the order definition file.Figure 4 Flow chart with the main steps of the blaze extraction. ### 2.5. Wavelength Calibration The spectrum of the ThAr lamp is used for the wavelength calibration. One ThAr image will be taken before and one after the observations. The flow chart in Figure5 shows the main steps in this part. 
The flow of the wavelength calibration can be split in two parts: the spectrum extraction and the new wavelength solution.Figure 5 Flow chart with the main steps of the wavelength calibration.ThAr Spectrum Extraction The two ThAr spectra are reduced consecutively. The bias is subtracted from the ThAr image. If the arithmetic mean of dark is above the threshold for dark subtraction, then the dark is also subtracted from the ThAr image. Thereafter the ThAr spectrum is extracted and saved on disk.Wavelength Solution For the automatic wavelength calibration a reference ThAr spectrum with 1D wavelength solutions of the several orders is used. The extracted spectra are compared with the reference spectrum. The shifts are calculated for each order via a cross correlation. After that the order shifts of the two ThAr spectra are averaged for each order. The new 1D wavelength solutions of all orders are determined with the shifts and the 1D wavelength solution of the reference spectrum. Finally, the results, the shifts, and the spectral resolution of the reference ThAr spectrum are saved in a wavelength file. To check the results, the pipeline creates plots with the residuals of the 1D wavelength solutions fits and a file containing the arithmetic mean and standard deviation of the shifts and the residuals of the 1D wavelength solution. ### 2.6. Spectrum Extraction The final part of the pipeline is the extraction of the actual spectra. First, the spectra of all images are extracted. The images of the same object are coadded and then a summed spectrum is created.Spectrum from the Single Science Image The flow chart in Figure6 shows the main steps in this part of the pipeline. The science images were collected in a list. The bias is subtracted from the science image, and if the arithmetic mean of dark is greater than a threshold for dark subtraction, the dark is also subtracted from the science image. 
If sky images from this image (object) exist, then the average of two sky images is created and subtracted from the science image. Next, a background correction is performed. After the background correction the pipeline identifies outliers in the science image and these are flagged. Thereafter, the spectrum is extracted, the counts in the spectrum are converted to electrons, and the spectrum is time normalised. In the following, the spectrum is divided by the blaze [3]. This step eliminates the blaze function. Simultaneously the correction for the quantum efficiency is achieved. If the keyword noautocal is not set, then the wavelengths for the individual objects are corrected by the corresponding barycentric velocity shifts. Finally the spectra are saved in fits file.Figure 6 Flow chart with the main steps of the spectrum extraction form the single science image.Spectrum from the Coadded Science Images This is the last part of the reduction pipeline. Here the spectrum from the summed science images of the same object is extracted. This reduction procedure is similar to the procedure of extraction the spectrum from the single science image, but additionally at first the images are coadded. The flow chart in Figure7 shows the first steps in this reduction part. The rest is similar to the spectrum extraction a single science image. The first difference between the both reduction steps is a check which objects have more than one exposure. These objects are collected in a list. Another difference is the coaddition of the single images for the same object, after the bias and, if necessary, a dark and sky correction. The mean Julian date for the summed image and the barycentric velocity shift are computed.Figure 7 Flow chart of the first steps of the spectrum extraction of the coadd science images. ## 2.1. Preparation of the Reduction Before the main reduction starts, it is necessary to prepare the reduction. 
The flow chart in Figure1 shows the main steps of this part.Figure 1 Flow chart with the main steps of the preparation of the reduction.The first step of the preparation is the definition of the directory holding the raw data. Also a temporary directory and the directory are created, where results are saved on disk.Thereafter the parameters and file names for reduction procedure are defined. All reduction parameters are supplied with the same values to all routines called by the pipeline at the different steps of the reduction. This is made for consistency in the reduction. The parameter values are read out from the parameter file for the blue channel. ## 2.2. Building the Master Calibration Images The HRT system takes several calibration images: Bias, dark, and flat fields, before and after the observations. The respective images are averaged and then used as master calibration images. The main steps to build the master image are similar for bias, dark, and flat field. The flow chart in Figure2 shows the main steps in this part.Figure 2 Flow chart with the main steps of building the master calibration images.The first step is a check of the variation in the calibration images. The percentage of relative variationrvi between the arithmetic mean of the single images 〈imi〉 and the median of this values median(〈im1〉⋯〈imn〉) is calculated as(1)rvi=(〈imi〉-median(〈im1〉⋯〈imn〉)median(〈im1〉⋯〈imn)〉)·100. For the calculation of rvi for the darks the single images are corrected by bias and for the rvi for the flats by bias and dark. The percentage of relative variation is plotted in a log file.The calibration images are split in two lists. If the same number of images is taken before and after the observation, then in the first list the images taken at the start of the observation are collected and in the second list those taken after the observation. Then, the percentage of relative variations of the images are checked for both lists. 
If the percentage of relative variation is higher than a threshold, the corresponding image is not used for building the master calibration image. Normally the total number of images in both lists after the check of $rv_i$ is greater than or equal to 3. The standard deviations of the percentage of relative variation of the images, equation (1), are calculated for both lists and used as the thresholds to build the master calibration image. If a standard deviation is less than 3.5, it is reset to the default value of 3.5; this is the minimum threshold to find outliers in the images.

If the total number of images in one list is less than 3, an error message is issued and the images are collected in a new list. The content of this new list is checked, with three possibilities:

1. The total number of images is < 3. The reduction ends, because at least 3 calibration images are needed to build a master calibration image.
2. The total number of images is < 6. The new list is not split, because each list must contain at least 3 images.
3. The total number of images is ≥ 6. The new list is split into two lists.

Thereafter the single calibration images are averaged. As a next step it is checked whether an error arose during the combination of the single calibration images; in this case the pipeline stops and an error message is written to a file.

The master bias is subtracted from the averaged dark and flat field, and additionally the flat field is corrected for the dark contribution. The dark correction is not performed if the arithmetic mean of the master dark is smaller than a predefined threshold for the dark correction, because in this case the dark contribution is negligible. After the subtraction of the master bias, the dark is time normalised.

The results are saved as master calibration images (hereafter bias, dark, and flat field). To monitor long-term changes, the arithmetic mean of the single images and the corresponding standard deviation are saved in a log file.
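The variation check and threshold clipping described above can be sketched in a few lines. This is a minimal illustration, assuming plain NumPy arrays for the frames; the function and variable names are ours and the bias/dark pre-correction and before/after list splitting are omitted.

```python
import numpy as np

def relative_variation(images):
    """Percentage deviation of each image's arithmetic mean from the
    median of all means, as in equation (1)."""
    means = np.array([img.mean() for img in images])
    med = np.median(means)
    return (means - med) / med * 100.0

def select_good_images(images, min_threshold=3.5):
    """Keep images whose relative variation lies within the sigma threshold;
    the threshold is never allowed below the 3.5 default."""
    rv = relative_variation(images)
    threshold = max(rv.std(), min_threshold)
    good = [img for img, r in zip(images, rv) if abs(r) <= threshold]
    if len(good) < 3:
        raise RuntimeError("need at least 3 calibration images for a master")
    return good

def master_frame(images):
    """Average the selected frames into a master calibration image."""
    return np.mean(select_good_images(images), axis=0)
```

With five constant frames and one strong outlier, the outlier fails the sigma check and the master is built from the remaining five.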
Also the standard deviations of both lists and the arithmetic mean, the median, and the standard deviation of the master calibration images are saved.

## 2.3. Order Definition

The flat field is used for the order definition. The central positions of the individual spectral orders are located and defined in this step of the reduction pipeline. If a single order is not found, the pipeline stops and an error message is written to a file. The results of the order definition are saved in the order definition file. The flow chart in Figure 3 shows the main steps in this reduction part.

Figure 3: Flow chart with the main steps of the order definition.

## 2.4. Blaze Extraction

In the next step of the reduction pipeline the spectrum of the flat-field lamp is extracted. The flow chart in Figure 4 shows the main steps in this part of the pipeline. This spectrum (hereafter blaze) can be used as the blaze function. The blaze is used to eliminate the blaze function in the science spectrum and to correct for the difference in quantum efficiency of the pixels (Section 2.6). After the background correction the blaze is extracted like a science spectrum. The counts in the blaze are converted to electrons and the blaze is normalised by the exposure time. Finally the blaze is saved in the order definition file.

Figure 4: Flow chart with the main steps of the blaze extraction.

## 2.5. Wavelength Calibration

The spectrum of the ThAr lamp is used for the wavelength calibration. One ThAr image is taken before and one after the observations. The flow chart in Figure 5 shows the main steps in this part. The wavelength calibration can be split into two parts: the spectrum extraction and the new wavelength solution.

Figure 5: Flow chart with the main steps of the wavelength calibration.

ThAr Spectrum Extraction. The two ThAr spectra are reduced consecutively. The bias is subtracted from the ThAr image.
If the arithmetic mean of the dark is above the threshold for dark subtraction, the dark is also subtracted from the ThAr image. Thereafter the ThAr spectrum is extracted and saved on disk.

Wavelength Solution. For the automatic wavelength calibration a reference ThAr spectrum with 1D wavelength solutions for the individual orders is used. The extracted spectra are compared with the reference spectrum, and the shift of each order is calculated via a cross-correlation. After that the order shifts of the two ThAr spectra are averaged for each order. The new 1D wavelength solutions of all orders are determined from these shifts and the 1D wavelength solution of the reference spectrum. Finally the results, the shifts, and the spectral resolution of the reference ThAr spectrum are saved in a wavelength file. To check the results, the pipeline creates plots with the residuals of the fits of the 1D wavelength solutions and a file containing the arithmetic mean and standard deviation of the shifts and of the residuals of the 1D wavelength solutions.

## 2.6. Spectrum Extraction

The final part of the pipeline is the extraction of the actual spectra. First, the spectra of all single images are extracted. Then the images of the same object are coadded and a summed spectrum is created.

Spectrum from the Single Science Image. The flow chart in Figure 6 shows the main steps in this part of the pipeline. The science images are collected in a list. The bias is subtracted from the science image, and if the arithmetic mean of the dark is greater than the threshold for dark subtraction, the dark is also subtracted from the science image. If sky images for this object exist, the average of two sky images is created and subtracted from the science image. Next, a background correction is performed. After the background correction the pipeline identifies outliers in the science image, and these are flagged.
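The wavelength-solution step above — finding each order's pixel shift by cross-correlation against the reference ThAr spectrum and then shifting the reference 1D wavelength solution — can be sketched as follows. This is an illustrative NumPy version, not the pipeline's actual code; the function names and the locally linear dispersion assumption are ours.

```python
import numpy as np

def order_shift(spectrum, reference):
    """Pixel shift of one extracted ThAr order relative to the reference
    order, taken at the peak of the cross-correlation."""
    s = spectrum - spectrum.mean()
    r = reference - reference.mean()
    cc = np.correlate(s, r, mode="full")
    # np.correlate with mode="full" has zero lag at index len(reference) - 1
    return int(np.argmax(cc)) - (len(reference) - 1)

def shifted_wavelengths(ref_wavelengths, shift):
    """Apply a pixel shift to the reference 1D wavelength solution,
    assuming the dispersion is locally linear."""
    dispersion = np.gradient(ref_wavelengths)
    return ref_wavelengths + shift * dispersion
```

For the two ThAr exposures, the per-order shifts from `order_shift` would be averaged before applying `shifted_wavelengths`, as described in the text.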
Thereafter the spectrum is extracted, the counts in the spectrum are converted to electrons, and the spectrum is time normalised. The spectrum is then divided by the blaze [3]; this step eliminates the blaze function and simultaneously corrects for the quantum efficiency. If the keyword noautocal is not set, the wavelengths for the individual objects are corrected by the corresponding barycentric velocity shifts. Finally the spectra are saved in a FITS file.

Figure 6: Flow chart with the main steps of the spectrum extraction from the single science image.

Spectrum from the Coadded Science Images. This is the last part of the reduction pipeline. Here the spectrum from the summed science images of the same object is extracted. The procedure is similar to the extraction of the spectrum from the single science image, but first the images are coadded. The flow chart in Figure 7 shows the first steps in this reduction part; the rest is similar to the spectrum extraction from a single science image. The first difference between the two reduction steps is a check of which objects have more than one exposure; these objects are collected in a list. Another difference is the coaddition of the single images of the same object after the bias and, if necessary, dark and sky corrections. The mean Julian date and the barycentric velocity shift for the summed image are computed.

Figure 7: Flow chart of the first steps of the spectrum extraction of the coadded science images.

## 3. Conclusions

The data reduction pipeline for the blue channel works fully automatically and stably. In case of an error at some position in the reduction flow, a message is written to a file. Log files are also written during the reduction. With these error and log files the astronomer can check the reduction flow. In Figure 8 an example of the final output of the pipeline is shown.
A spectrum of Alcaid taken with HEROS and extracted with MIDAS [3] is shown in Figure 9 for comparison only.

Figure 8: A sample result: Vega spectrum relative to the flat-field spectrum and normalised to Figure 9.

Figure 9: A comparative spectrum: Alcaid spectrum [3].

One problem in the reduction pipeline is the identification of faint outliers in the science image. The continuum normalisation of late-type stars may also have problems in finding the quasi-continuum segments of the spectrum.

A future task will be the creation of the data reduction pipeline for the red channel. In general this pipeline will have the same structure as the pipeline for the blue channel.

Finally, the optimisation and regular support of both pipelines are important to obtain the best possible outputs.

---

*Source: 101502-2009-12-02.xml*
2010
# The Evolution Model of Public Risk Perception Based on Pandemic Spreading Theory under Perspective of COVID-19

**Authors:** Yi-Cheng Zhang; Zhi Li; Guo-Bing Zhou; Nai-Ru Xu; Jia-Bao Liu
**Journal:** Complexity (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1015049

---

## Abstract

After the occurrence of public health emergencies, due to the uncertainty of the evolution of events and the asymmetry of pandemic information, the public's risk perception will fluctuate dramatically. Excessive risk perception often causes the public to overreact to emergencies, resulting in irrational behaviors, which have a negative impact on economic development and social order. However, low risk perception will reduce individual awareness of prevention and control, which is not conducive to the implementation of government pandemic prevention and control measures. Therefore, accurately evaluating public risk perception is of great significance for improving government risk management. This paper takes the evolution of public risk perception in the context of COVID-19 as the research object. First, we analyze the infectious-disease characteristics of the evolution of public risk perception in public health emergencies. Second, we analyze the characteristics of risk perception transmission in social networks. Third, we establish a dynamic model of public risk perception evolution based on SEIR, and the evolution mechanism of the public risk perception network is revealed through simulation experiments. Finally, we provide policy suggestions for government departments to deal with public health emergencies based on the conclusions of this study.

---

## Body

## 1. Introduction

After the occurrence of public health emergencies, due to the uncertainty of the evolution of events and the asymmetry of pandemic information, the public's risk perception will fluctuate dramatically.
The public takes various protective measures, such as collecting relevant information about the pandemic, forwarding and spreading information about the pandemic, producing violent emotional reactions, buying protective goods, and even leaving the pandemic area [1, 2]. In early March 2020, according to the Shanghai Mental Health Center, more than 50,000 people across the country were surveyed about their psychological stress and emotional state. The survey showed that about 35% of the interviewees suffered from psychological distress and had an obvious emotional stress reaction, and about 5.14% suffered from serious psychological distress. During COVID-19, there was frantic buying of face masks and disinfectants across the country and around the world. Moreover, public risk perceptions are highly contagious, and excessive risk perceptions by some members of the public can lead to irrational behavior by more members of the public, jeopardizing social harmony and stability. Therefore, we should pay attention to the public's risk perception and emotional guidance, face up to the public's psychological need to vent their emotions, and reasonably guide the public's emotional fluctuations and behavioral reactions. These have become an important task of COVID-19 pandemic prevention and control [3].

Public risk perception refers to the concern or anxiety expressed by the public about something [4], and reflects the process of the public's subjective evaluation of a specific risk state [5, 6]. When the public is aware of risks, a psychological state of coping with risks is stimulated, and demand for risk-related information and emergency behavior is further generated based on subjective judgment. Excessively high risk perception often leads the public to overreact to risk events, resulting in a variety of irrational and unnecessary behaviors, which have a negative impact on economic development and social stability.
However, when risk perception is too low, the public may give up taking effective self-protection behaviors. Public risk perception is a process of collecting, selecting, and understanding crisis information and responses [7, 8]. In the all-media information age dominated by the network, the public's information demand, information channels, and information content are characterized by diversification and complexity. This leads to dynamic change and unpredictability of public risk perception, which further increases the difficulty of health emergency prevention and control.

Therefore, after the occurrence of major public health emergencies, the dynamic evolution law of public risk perception along with the development of events should be grasped. This helps the government to adopt active and effective risk management policies and measures.

## 2. Literature Review

### 2.1. Risk Perception

Scholars generally believe that the public's risk perception is mainly affected by individual characteristics, time, event progress, risk information, and other factors [9]. A questionnaire survey based on a psychological scale is the most effective method to study the influencing factors and differences of risk perception. Peacock et al. take hurricanes as the research scenario and explore the influencing factors of the formation process of public risk perception from the two dimensions of socioeconomic and demographic characteristics [10]. In order to study the characteristics and influencing factors of public risk perception, Slovic carried out a series of empirical studies and summarized 15 different characteristics of risk perception [11].

In the field of behavioral science and psychology, many scholars focus on the important role of memory in individual behavioral decision-making [12]. Most of their research results support that the individual memory system has a decisive influence on behavioral decision-making [13]. Welch et al.
believed that the information obtained through news media and the informal communication channels of social networks all belongs to the information used for behavioral decision-making in the individual memory system [14]. The same conclusion can be reached when scholars introduce individual memory to build mathematical models. For example, Mullainathan took consumers' memories of previous products and wages as the basis for purchasing decisions and constructed a consumer memory decision model [15]. Mehta et al. studied the relationship between consumers' forgetting rate of brands and purchasing decisions and believed that when consumers are faced with many brands, their memory and perception of these brands play an important role in consumers' choices [16]. Wei et al. constructed an evolution model of individual memory perception and corporate reputation to study the optimal strategy of CSR activities of enterprises [17]. Wei et al. introduced the recency effect, association effect, and read-back effect and built a public risk perception evolution model based on crisis information flow. This model uses a crisis information growth model, a stakeholder influence model, and a stakeholder memory model to measure the process of crisis information release, information diffusion, and information perception. The diffusion coefficient and forgetting coefficient of crisis information are introduced to explain the transmission mechanism of crisis information in the population. It is found that there are lag effects, cumulative effects, and jump phenomena in the evolution of public risk perception [18].

### 2.2. Communicable Disease Models and Network Public Opinion Spread

The infectious disease model is a mathematical model that uses ordinary differential equations to describe the spread and prevalence of an infectious disease. Consider the similarity between the spread of information and the spread of infectious diseases. Daley et al.
applied the infectious disease dynamics model to information transmission, dividing individuals into three categories (susceptible, spreader, and immune) and constructing the classic DK model [19]. Subsequently, some scholars further refined the communication process and improved the model [20, 21]. However, with the rapid development of information technology and the explosion of social networks, the mode of information transmission has undergone profound changes. The classical infectious disease model can no longer accurately describe the geometric-progression fission propagation process of network information [22]. One important reason is that the spread of infectious diseases is unconscious: the transmission of diseases by infected people is not based on people's subjective will. The essence of information communication, however, is social communication, and further research needs to consider the attributes of network information content, public society, and other factors [23–25].

Shang et al. integrated the social network and communication dynamics model and proposed a simulation planning method taking public emergencies as scenarios [26]. Zhu et al. [27] established an infectious disease model based on the transmission rules of the Ebola virus. Wang et al. considered the interdependence of online and offline activities and constructed an information transmission model of a two-layer social network based on complex network theory and communication dynamics [28]. Liu et al. considered the influence of network dynamic evolution and constructed a dynamic network diffusion information transmission model of public emergencies [29]. Wang et al. defined the types of the public and the role of government intervention and, combined with the characteristics of emergency information communication, constructed a public opinion communication control model under government intervention [30]. Zhong et al.
considered the relationship between public status transitions and the influence of government intervention, constructed the SEIRS model of public opinion communication control under government intervention, and used control factors to realize effective intervention of online public opinion in emergencies [31]. Yin et al. [32] considered that users may enter into another related topic after discussing one topic, proposed a multi-information susceptibility discussion immunity (M-SDI) model, and effectively predicted the trend of online public opinion communication of public health emergencies through a fitting analysis of COVID-19 public opinion data obtained from China's Sina Weibo. Wang et al. analyzed the mutual influence of multiple public opinion communications and the rules of state transfer among different groups after an emergency occurred and proposed the 3SI3R model [33].

In summary, research on infectious disease models is relatively mature at present, and most of it is applied in the fields of information dissemination and network public opinion dissemination. However, there is little literature applying the infectious disease model to the evolution of public risk perception. Therefore, in the context of COVID-19, this paper analyzes the spread characteristics and rules of public risk perception by using the infectious disease model. Considering the propagation properties of the social network, such as the social reinforcement effect, containment mechanism, and forgetting mechanism, we construct an evolution dynamics model of public risk perception based on SEIR, which better delineates the evolution of public risk perception and provides decision-making suggestions for the government in formulating risk management of public health emergencies.

## 3. Model Construction

### 3.1. Characteristics of the Evolution of Public Risk Perception in the COVID-19 Context

The essence of an infectious disease is that the carrier of the pathogen transmits its germs to persons who come into contact with it. In the context of COVID-19, the spread of public risk perception has the characteristics of an infectious disease: individuals who perceive risk will transmit their perceived risk to other individuals who communicate with them through various communication channels. The transmission of infectious diseases between hosts needs to break a certain threshold, and the spread of the public's perceived risk in the context of COVID-19 also needs certain conditions, such as the perceived risk exceeding one's own tolerance.
Therefore, in the context of COVID-19, the spread of public risk perception has the characteristics of risk sources, transmission media, infectivity, and immunity.

#### 3.1.1. Risk Source

The source of risk is the precondition of risk transmission: if there is no source of risk, there is no risk transmission. Risk sources are equivalent to pathogens in infectious diseases. The public health emergency caused by the COVID-19 outbreak in late December 2019 is the risk source for the spread of public risk perception. As the core of the process of risk communication, the source of risk causes public panic and panic buying of medical equipment, depending on the communication media.

#### 3.1.2. Propagation Medium

The transmission medium is the carrier of risk source transmission. After the outbreak of COVID-19, the media of public risk perception are the Internet, TV, newspapers, Weibo, and other mass media. Pandemic information permeates the entire social cyberspace, and the public receives the pandemic information and transmits the perceived risk, sometimes incorrectly, thus causing panic among the general public.

#### 3.1.3. Infectivity

Infectivity is the most fundamental characteristic of infectious diseases. If there are only pathogens and transmission media, but the pathogens are not infectious, the disease is not an infectious disease. An infectious risk source will spread the risk to the environment; when individuals perceive more risk than they can bear, they will spread the risk perception to the outside world through their close kinship, work, and neighborhood relationships.

#### 3.1.4. Immunity

Some people are immune to certain infections because they have antibodies or have been vaccinated against them. In the process of risk transmission caused by the outbreak of COVID-19, individuals show different immunity based on their psychological quality and knowledge.
For example, individuals with poor mental health and inadequate knowledge of the novel coronavirus and the spread of the virus have much lower immunity than individuals with good mental health and abundant protective behaviors against COVID-19. At the same time, an individual's gender, personality, and living environment will affect their immune ability.

Therefore, in the context of COVID-19, the transmission process of public risk perception has the characteristics of the transmission process of infectious diseases. The infectious disease model is used to analyze and simulate the transmission process of risk perception, so as to understand the principles of risk perception transmission and provide a reference for the formulation of risk perception control measures.

### 3.2. Factors Influencing the Evolution of Public Risk Perception

Public risk perception spreads widely, through the Internet, TV, newspapers, Weibo, and other mass media, in social network spaces such as the circles of relatives, neighbors, and friends who are closely related to individuals. The dissemination of public risk perception is a complex process, which is affected not only by individual factors such as interindividual intimacy, knowledge background, and life experience [34–36] but also by social factors such as the information memory effect, social reinforcement effect, interest attenuation effect, containment mechanism, authority effect, broken window effect, and responsibility dispersion effect [37–40]. This paper focuses on the influence of the forgetting mechanism, social reinforcement effect, and containment mechanism on public risk perception transmission.

#### 3.2.1. Forgetting Mechanism

The German psychologist Ebbinghaus revealed, through the method of relearning, the nonlinear attenuation of information value with the passage of time. It reflects the significant impact of attenuation characteristics on information dissemination.
The relevant literature calls this phenomenon the forgetting mechanism [41], and it has been shown by simulation that this mechanism can inhibit information diffusion and reduce the scale of information dissemination [42]. Scholars have shown that the rate of forgetting has a significant impact on the density of spreaders and immunizers in rumor-spreading experiments: the higher the forgetting probability or the faster the forgetting speed, the weaker the spreading power of rumors [43]. In major public health emergencies, the transmission of public risk perception shows the same characteristics.

#### 3.2.2. Social Reinforcement Effect

In the process of information transmission, individuals tend to be skeptical of information, and the probability of transmitting information after receiving it only once is very limited. However, if neighbors repeatedly present the same information so that the individual receives it many times, the probability of the individual believing the information and spreading it greatly increases. In social networks, information is dense, and much of it mixes truth and falsehood. It is difficult for ordinary people to make a reasonable judgment; at this time, most people will use others' judgments to form their own opinions. Therefore, the social reinforcement effect is very obvious in the information dissemination of social networks.

The literature [44] constructed a rumor propagation model with social reinforcement and interest attenuation effects based on the social network, in which the two effects act simultaneously on propagation-state nodes: the interest attenuation effect converts the propagation state into the connected state, and the social reinforcement effect converts the connected state back into the propagation state.
Therefore, this paper defines the propagation probability function of public risk perception in the social network under the social reinforcement effect as

$$\lambda(m) = 1 - (1-\beta)e^{-b(m-1)}, \tag{1}$$

where $\beta$ is the initial transmission rate, representing the probability that an individual transmits the pandemic information after receiving it only once; $b$ is the reinforcement coefficient; and $m$ is the number of messages received. When $m = 1$, $\lambda(1) = \beta$.

Figure 1 shows how the propagation probability of individual risk perception changes with $m$ under different reinforcement coefficients $b$. The initial value $\lambda(1) = \beta = 0.5$ represents the transmission probability of risk perception for susceptible individuals receiving information about a pandemic.

Figure 1: Influence of different reinforcement coefficients on the propagation probability.

Based on the above considerations, this paper focuses on the local environment of individuals to describe the forgetting mechanism, social reinforcement effect, and containment mechanism in the spread of public risk perception, and analyzes through simulation how these factors affect that spread.

### 3.3. Dynamic Evolution Model of Public Risk Perception in COVID-19

Individuals in the social network are represented as nodes, and the relationships between individuals are represented by the links between nodes, so the social network is represented as a concrete network structure. In the process of propagation, risk perception can only spread between neighboring nodes.
When a node propagates risk perception to its neighbors, a neighbor that chooses to believe and accept the information continues to propagate the risk perception to its own neighbors; a neighbor that does not accept the risk perception does not propagate it further.

When risk perception spreads from the risk source through the whole network, nodes hold different psychological states toward the same information owing to their different interests and knowledge. Nodes therefore differ in whether they accept and spread the risk perception, which ultimately leads to different spreading trends. Accordingly, the nodes in the social network can be divided into four states: the susceptible state (S), the latent state (E), the onset state (I), and the recovered state (R). The susceptible state refers to the public who have not yet received the pandemic information. The latent state refers to those who have received the information but do not disseminate risk perception; that is, they have received pandemic information for the first time and perceive risk, but the risk has not exceeded their maximum risk tolerance. The onset state refers to the state of panic and anxiety upon receiving the information, in which risk perception is actively being spread.
The recovered state refers to the public who view the pandemic rationally and do not spread it.

The proportions of these four groups in the total population at time $t$ are, respectively, $S(t)$, $E(t)$, $I(t)$, and $R(t)$, with $S(t) + E(t) + I(t) + R(t) = 1$.

Considering the social reinforcement effect, forgetting mechanism, and containment mechanism of public risk perception, the following risk perception transmission rules are proposed:

(1) When a susceptible individual $S_i$ receives information transmitted from an onset individual $I_j$, the susceptible individual $S_i$ may change to the latent state $E_i$ with probability $\alpha$, or may change to the onset state $I_i$ with the initial transmission rate $\beta$ and transmit the pandemic information to other individuals. The state transitions can be expressed as

$$S_i + I_j \xrightarrow{\alpha} E_i + I_j, \qquad S_i + I_j \xrightarrow{\beta} I_i + I_j. \tag{2}$$

(2) Nodes in the latent state are suspicious of the pandemic information and will repeatedly receive information transmitted from neighboring onset nodes. Under the influence of the social reinforcement effect, a latent node $E_i$ is transformed into the onset state $I_i$ with transmission probability $\lambda$. A latent node $E_i$ that has not been transformed into the onset state may be transformed into the recovered state $R_i$ with probability $\theta$. The transition process of the latent state $E_i$ can be expressed as

$$E_i \xrightarrow{\lambda} I_i, \qquad E_i \xrightarrow{\theta} R_i. \tag{3}$$

(3) Since an onset node already believes the pandemic information and spreads it, its transmission is no longer affected by the social reinforcement effect. However, the onset state $I_i$ is affected by the social containment mechanism: a node in the onset state $I_i$ is transformed into the recovered state $R_i$ with probability $\varepsilon$. The transition process of the onset state $I_i$ can be expressed as

$$I_i \xrightarrow{\varepsilon} R_i. \tag{4}$$

(4) With the passage of time, the recovered state $R_i$ is affected by the forgetting mechanism and changes to the susceptible state $S_i$ with probability $\delta$.
The transition process of the recovered state $R_i$ can be expressed as

$$R_i \xrightarrow{\delta} S_i. \tag{5}$$

According to the above analysis, the evolution model of the public risk perception network in COVID-19 is shown in Figure 2.

Figure 2: State transition diagram of social network nodes.

In summary, the public risk perception transmission dynamics model is

$$\begin{aligned}
\frac{dS(t)}{dt} &= -\alpha S(t)I(t) - \beta S(t)I(t) + \delta R(t),\\
\frac{dE(t)}{dt} &= \alpha S(t)I(t) - \lambda E(t) - \theta E(t),\\
\frac{dI(t)}{dt} &= \beta S(t)I(t) + \lambda E(t) - \varepsilon I(t),\\
\frac{dR(t)}{dt} &= \theta E(t) + \varepsilon I(t) - \delta R(t).
\end{aligned} \tag{6}$$

### 3.4. Analysis of the Basic Reproduction Number of the Model

In this paper, the next-generation matrix method is used to calculate the basic reproduction number $R_0$. States 1, 2, 3, and 4 represent $E$, $I$, $S$, and $R$, respectively, and the density of state $i$ is denoted by $x_i$, that is, $x = (x_1, x_2, x_3, x_4)^T$. Construct $F(x)$, where $F_i(x)$ represents the rate of appearance of new diseased nodes in state $i$: when $i = 1$, the rate of new diseased nodes in the latent state $E$ is $\alpha SI$; when $i = 2$, the rate of new diseased nodes in the onset state is $\beta SI$; and when $i = 3, 4$, no new diseased nodes arise among susceptible and recovered nodes.
Therefore,

$$F(x) = \begin{pmatrix} F_1(x) \\ F_2(x) \\ F_3(x) \\ F_4(x) \end{pmatrix} = \begin{pmatrix} \alpha SI \\ \beta SI \\ 0 \\ 0 \end{pmatrix}. \tag{7}$$

Construct $V(x)$, where $V_1(x)$, $V_2(x)$, $V_3(x)$, and $V_4(x)$ represent the rates of change of the $E$, $I$, $S$, and $R$ state nodes, respectively, with $V_i(x) = V_i^-(x) - V_i^+(x)$, where $V_i^+(x)$ is the rate of transfer into state $i$ from other states and $V_i^-(x)$ is the rate of transfer out of state $i$. Therefore,

$$V(x) = \begin{pmatrix} V_1(x) \\ V_2(x) \\ V_3(x) \\ V_4(x) \end{pmatrix} = \begin{pmatrix} \lambda E + \theta E \\ \varepsilon I - \lambda E \\ \alpha SI + \beta SI - \delta R \\ \delta R - \theta E - \varepsilon I \end{pmatrix}. \tag{8}$$

Obviously, when there is no diseased node in the network, all nodes are susceptible; that is, $E_0 = (0, 0, S^*, 0)$ is the disease-free equilibrium of the system. The Jacobians of $F(x)$ and $V(x)$ at $E_0$ are

$$DF(E_0) = \begin{pmatrix} F & 0 \\ 0 & 0 \end{pmatrix}, \qquad DV(E_0) = \begin{pmatrix} V & 0 \\ J_3 & J_4 \end{pmatrix}, \tag{9}$$

where

$$F = \begin{pmatrix} 0 & \alpha \\ 0 & \beta \end{pmatrix}, \qquad V = \begin{pmatrix} \lambda + \theta & 0 \\ -\lambda & \varepsilon \end{pmatrix}. \tag{10}$$

Therefore,

$$FV^{-1} = \begin{pmatrix} \dfrac{\alpha\lambda}{(\lambda+\theta)\varepsilon} & \dfrac{\alpha}{\varepsilon} \\[2mm] \dfrac{\beta\lambda}{(\lambda+\theta)\varepsilon} & \dfrac{\beta}{\varepsilon} \end{pmatrix}. \tag{11}$$

Denoting the spectral radius of $FV^{-1}$ by $\rho(FV^{-1})$, the basic reproduction number $R_0$ is

$$R_0 = \rho(FV^{-1}) = \frac{\alpha\lambda + \beta(\lambda+\theta)}{\varepsilon(\lambda+\theta)}. \tag{12}$$

In the analysis of network information transmission, the basic reproduction number $R_0$ is an important parameter for judging whether network information can spread on a large scale. It represents the average number of people affected when one disease-state node is introduced into a fully susceptible network without intervention. When $R_0 < 1$, the network information will not diffuse widely; when $R_0 > 1$, the network information will show a large-scale diffusion trend.

Equation (12) shows that the basic reproduction number is closely related to the social effects acting on public risk perception, and these effects strongly influence whether public risk perception spreads on a large scale. In particular, with other factors unchanged, as the initial transmission rate of public risk perception increases, $R_0$ gradually passes from below 1 to above 1, and public panic gradually spreads; the larger $R_0$, the larger the diffusion scale.
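The closed form in equation (12) can be cross-checked against a direct spectral-radius computation. The sketch below (standard-library Python only) builds $F$ and $V$ as in equation (10), forms $FV^{-1}$, and compares its spectral radius with the closed form; the parameter values are those used later in the simulations and are otherwise illustrative:

```python
import math


def spectral_radius_check(alpha=0.2, beta=0.5, lam=0.5, theta=0.3, eps=0.2):
    """Verify R0 = rho(F V^{-1}) against the closed form in equation (12)
    for the 2x2 next-generation matrix of the model."""
    # F and V from equation (10).
    F = [[0.0, alpha],
         [0.0, beta]]
    V = [[lam + theta, 0.0],
         [-lam, eps]]
    # Invert the 2x2 matrix V.
    det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    Vinv = [[V[1][1] / det, -V[0][1] / det],
            [-V[1][0] / det, V[0][0] / det]]
    # M = F @ Vinv, i.e. equation (11).
    M = [[sum(F[i][k] * Vinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    # Eigenvalues of a 2x2 matrix from its characteristic polynomial.
    tr = M[0][0] + M[1][1]
    dt = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * dt, 0.0))
    rho = max(abs((tr + disc) / 2.0), abs((tr - disc) / 2.0))
    closed_form = (alpha * lam + beta * (lam + theta)) / (eps * (lam + theta))
    return rho, closed_form


if __name__ == "__main__":
    rho, r0 = spectral_radius_check()
    print(round(rho, 3), round(r0, 3))  # both round to 3.125
```

Since $FV^{-1}$ has determinant zero (its columns are proportional), its spectral radius equals its trace, which is exactly the expression in equation (12).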
## 4.
Numerical Simulation and Analysis

This section verifies the rationality and stability of the evolution model of public risk perception through simulation experiments and analyzes the propagation mechanism of risk perception. The initial conditions for the simulations are $S(0) = 1$, $E(0) = 0.00$, $I(0) = 0.00$, and $R(0) = 0.00$.

### 4.1. Analysis of the Density of State Nodes in the Network

Set the initial parameters to $\alpha=0.2$, $\beta=0.5$, $\lambda=0.5$, $\theta=0.3$, $\varepsilon=0.2$, $\delta=0.1$ and substitute into equation (12); the basic reproduction number is $R_0 = 3.125$. Theory therefore predicts that public risk perception undergoes large-scale diffusion, which is consistent with the results in the figure.

From Figure 3 it can be seen that the node density of the susceptible state decreases rapidly in the initial stage and eventually stabilizes; the node density of the latent state rises rapidly to a peak of 0.31, then gradually decreases and eventually tends to 0; the node density of the onset state rises rapidly in the early stage of propagation, reaches a peak of 0.63, then gradually decreases and eventually stabilizes at 0.32; and the node density of the recovered state rises rapidly during propagation and eventually stabilizes at 0.61.

Figure 3: Trend of the state changes of the various nodes in the SEIR model.

### 4.2. Impact of the Initial Transmission Rate on the Evolution of Public Risk Perception

The node density of the onset state represents how actively risk perception is being transmitted, while the node density of the recovered state represents the final extent of transmission. Therefore, the impact of public health emergencies on public risk perception is investigated by analyzing how the node densities of the onset and recovered states change with the initial transmission rate $\beta$.
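The trajectories described in Section 4.1, and the parameter sweeps that follow, can be reproduced with a minimal forward-Euler integration of equation (6). Note that the stated initial conditions ($S(0)=1$, $I(0)=0$) sit exactly at the disease-free equilibrium, so a small seed $I(0)=0.01$ is assumed here purely for illustration; exact peak values therefore differ from those quoted above, though the qualitative shape matches Figure 3:

```python
def simulate_seir(alpha=0.2, beta=0.5, lam=0.5, theta=0.3, eps=0.2, delta=0.1,
                  i0=0.01, dt=0.01, t_max=200.0):
    """Forward-Euler integration of the risk-perception model, equation (6).

    The paper's initial condition I(0) = 0 is an equilibrium of the ODE,
    so a small seed i0 is assumed (an illustrative choice, not from the paper).
    Returns the list of (S, E, I, R) states at each time step.
    """
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    history = [(s, e, i, r)]
    for _ in range(int(t_max / dt)):
        ds = -alpha * s * i - beta * s * i + delta * r
        de = alpha * s * i - lam * e - theta * e
        di = beta * s * i + lam * e - eps * i
        dr = theta * e + eps * i - delta * r
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
        history.append((s, e, i, r))
    return history


if __name__ == "__main__":
    hist = simulate_seir()
    peak_i = max(state[2] for state in hist)
    s, e, i, r = hist[-1]
    print(f"peak I = {peak_i:.2f}, final (S,E,I,R) = ({s:.2f},{e:.2f},{i:.2f},{r:.2f})")
```

The long-run onset density is nonzero because the forgetting rate $\delta$ recycles recovered nodes back into the susceptible pool, which is the endemic behavior visible at the right of Figure 3.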
Figures 4 and 5 depict how the node densities of the onset state and the recovered state vary over time under different initial transmission probabilities. Set the initial parameters $\alpha=0.2$, $\lambda=0.5$, $\theta=0.3$, $\varepsilon=0.2$, $\delta=0.1$; take the three cases $\beta=0.2$, $\beta=0.5$, and $\beta=0.8$; and substitute into equation (12). The resulting basic reproduction numbers $R_0$ are 1.625, 3.125, and 4.625, respectively; theory therefore predicts large-scale diffusion of public risk perception, which is consistent with the results in the figures.

As Figures 4 and 5 show, the density maxima of both the onset state and the recovered state $R$ increase with the initial transmission probability $\beta$. That is, the greater the initial transmission probability, the more the public is inclined to spread risk perceptions and the shorter the time needed to reach the peak. After the node density of the onset state peaks, as the government's emergency work advances, for example through the effective implementation of pandemic prevention and control measures, some members of the public come to believe that the pandemic is temporarily controllable, the number of people spreading pandemic information begins to decrease, and eventually they change to the recovered state. Therefore, the greater the initial transmission rate of public risk perception, the faster and more widespread the network transmission.

Figure 4: Effect of the initial transmission rate on the node density of the onset state.

Figure 5: Effect of the initial transmission rate on the node density of the recovered state.

### 4.3. The Impact of Social Reinforcement Effects on the Evolution of Public Risk Perception

Set the initial parameters to $\alpha=0.2$, $\beta=0.5$, $\theta=0.3$, $\varepsilon=0.2$, $\delta=0.1$ and take the three cases $b=1, m=1$; $b=1, m=2$; and $b=2, m=2$. Figure 6 depicts how the node density of the onset state changes over time under these different social reinforcement effects on public risk perception.
As can be seen in Figure 6, for the same reinforcement factor, the greater the number of times an individual receives information about a pandemic, the greater the density of morbidity-state nodes. Similarly, for the same number of exposures, the higher the reinforcement factor, the higher the density of onset nodes. This suggests that the public is more inclined to disseminate risk perceptions in the presence of social reinforcement effects.

Figure 6: Effect of reinforcement effects on the density of nodes in the onset state.

### 4.4. Impact of Containment Mechanisms on the Evolution of Public Risk Perceptions

Set the initial parameters to α = 0.2, β = 0.5, λ = 0.5, θ = 0.3, δ = 0.1 and take the three cases ε = 0.2, ε = 0.4, and ε = 0.6. Substituting these into equation (12), the corresponding basic reproduction numbers R0 are 3.125, 1.56, and 1.04, respectively, and the node densities of the onset state when public risk perception transmission reaches stability are 0.27, 0.16, and 0.1, respectively. From Figure 7, it can be seen that the maximum value of the onset-state node density decreases as ε increases. Since ε represents the strength of the containment mechanism, this means that the stronger the containment mechanism, the more easily onset-state nodes are influenced by other nodes and stop spreading, the faster the network reaches a stable state, and the smaller the density of onset-state nodes after the network stabilizes. Therefore, the containment mechanism curbs the spread of public risk perception to a certain extent, reducing the speed and scope of transmission.

Figure 7: Effect of containment mechanisms on node density in the onset state.

### 4.5. The Impact of Forgetting Mechanisms on the Evolution of Public Risk Perception

Set the initial parameters to α = 0.2, β = 0.5, λ = 0.5, θ = 0.3, ε = 0.2; take the three cases δ = 0.1, δ = 0.4, and δ = 0.7, and substitute them into equation (12); the corresponding basic reproduction numbers are all 3.125. The node densities of the onset state when network propagation is stable are 0.28, 0.52, and 0.61, respectively. As can be seen from Figure 8, the maximum value of the onset-state node density increases with the forgetting rate δ. Since δ represents the intensity of forgetting, the greater the degree of forgetting, the more easily sick-state nodes forget the pandemic information they have received and become susceptible nodes again, able to receive pandemic information and propagate risk perceptions anew; this makes the density of sick-state nodes greater when the network is stable.

Figure 8: Effect of forgetting mechanisms on the density of nodes in the onset state.
## 5. Research Conclusions and Policy Recommendations

### 5.1. Research Conclusions

In this paper, based on COVID-19, we first analyzed the infectious characteristics of public risk perception in public health emergencies. Second, according to the characteristics of public risk perception transmission in social networks, we established the evolution dynamics model of public risk perception and solved for the basic reproduction number.
Finally, we revealed the evolution mechanism of the public risk perception network through parameter selection and simulation experiments. The significance of this study is reflected in the following three aspects:

(1) We systematically summarize the characteristics of public risk perception of infectious diseases in public health emergencies, including risk sources, transmission media, infectivity, and immunity. This provides a theoretical basis for the establishment of a public risk perception model.

(2) We systematically analyze the influencing factors of public risk perception of public health emergencies. From the two dimensions of individual and social factors, we focus on the influence of the forgetting mechanism, social reinforcement effect, and containment mechanism on the spread of risk perception and establish the evolution dynamics model of public risk perception based on SEIR. The stability of the model is proved theoretically by solving for the basic reproduction number.

(3) The infectious disease model is applied to the evolution of risk perception of public health emergencies, and through simulation experiments we reveal the evolution mechanism of the public risk perception network: the greater the initial rate of diffusion, the greater the speed and scope of network diffusion; the stronger the social reinforcement effect, the greater the speed and scope of network diffusion; the higher the forgetting rate, the greater the speed and scope of network diffusion; and the stronger the containment rate, the slower the network diffusion and the smaller its scale.

### 5.2. Policy Recommendations

The conclusions of this study suggest the following for relevant government departments dealing with major public health emergencies:

(1) The government should immediately launch the emergency plan to rescue the affected public, shrink the scope of the emergency, reduce its level, cut down its influence, and weaken public risk perception, so as to reduce the negative impact of online public opinion about the emergency and maintain social harmony and stability.

(2) The government should undertake the responsibility of supervision, regulation, and management. First, on the premise of satisfying the public's right to know, it should gradually relax control over the news media and standardize the system of information disclosure and dissemination. In the process of information disclosure, the government should establish two-way communication channels: on the one hand, inform the public of the truth of the incident in a timely manner; on the other hand, invite experts to analyze it objectively and release authoritative information. Second, the government has a responsibility to understand the source of the public's fear and respect the public's perception of risk. Risks that the public may overestimate should be reduced through various means, so as to lower the public's risk perception level and relieve the public's panic.

(3) The news media should abide by professional ethics and report objectively and fairly, which helps reduce the public's risk perception. The media is the reporter of risk information in public health emergencies, the interpreter of dynamic information, and the guide of the public's risk perception. After public health emergencies occur, the public relies more on media reports because of asymmetric information.
Therefore, the media should report information objectively, accurately, and in a timely manner so that potential risks can be recognized by the public, cutting down the public's risk perception and panic and ultimately reducing the change from vulnerable groups to latent or disease groups.

(4) The public should maintain a positive and optimistic attitude, collect emergency information rationally and objectively, and reduce risk perception that is overestimated because of insufficient information. Instead of passively receiving information, the public should actively acquire and screen information and analyze it rationally and objectively so as to avoid herd behavior.

In a word, in the face of public health emergencies, the government should establish an early warning and response mechanism for public risk perception; the media should report emergencies objectively and fairly in order to guide public opinion correctly; and the public should remain positive and optimistic, enhance their awareness of discrimination, reduce unnecessary panic, and build the confidence to overcome difficulties.
---

*Source: 1015049-2021-11-29.xml*
**Title:** The Evolution Model of Public Risk Perception Based on Pandemic Spreading Theory under Perspective of COVID-19
**Authors:** Yi-Cheng Zhang; Zhi Li; Guo-Bing Zhou; Nai-Ru Xu; Jia-Bao Liu
**Journal:** Complexity (2021)
**Category:** Mathematical Sciences
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2021/1015049
---

## Abstract

After the occurrence of public health emergencies, due to the uncertainty of the evolution of events and the asymmetry of pandemic information, the public's risk perception will fluctuate dramatically. Excessive risk perception often causes the public to overreact to emergencies, resulting in irrational behaviors, which have a negative impact on economic development and social order. However, low risk perception reduces individual awareness of prevention and control, which is not conducive to the implementation of government pandemic prevention and control measures. Therefore, accurately evaluating public risk perception is of great significance for improving government risk management. This paper takes the evolution of public risk perception during COVID-19 as its research object. First, we analyze the infectious-disease-like characteristics of the evolution of public risk perception in public health emergencies. Second, we analyze the characteristics of risk perception transmission in social networks. Third, we establish the dynamic model of public risk perception evolution based on SEIR, and the evolution mechanism of the public risk perception network is revealed through simulation experiments. Finally, we provide policy suggestions for government departments dealing with public health emergencies based on the conclusions of this study.

---

## Body

## 1. Introduction

After the occurrence of public health emergencies, due to the uncertainty of the evolution of events and the asymmetry of pandemic information, the public's risk perception will fluctuate dramatically. The public takes various protective measures, such as collecting relevant information about the pandemic, forwarding and spreading information about the pandemic, producing violent emotional reactions, buying protective goods, and even leaving the pandemic area [1, 2].
In early March 2020, according to the Shanghai Mental Health Center, more than 50,000 people across the country were surveyed about their psychological stress and emotional state. The survey showed that about 35% of the interviewees suffered from psychological distress and had an obvious emotional stress reaction, and about 5.14% suffered from serious psychological distress. During COVID-19, there was frantic buying of face masks and disinfectants across the country and around the world. Moreover, public risk perceptions are highly contagious, and excessive risk perceptions by some members of the public can lead to irrational behavior by more members of the public, jeopardizing social harmony and stability. Therefore, we should pay attention to the public's risk perception and emotional guidance, face up to the psychological need of the public to vent their emotions, and reasonably guide the public's emotional fluctuations and behavioral reactions. This has become an important task of COVID-19 pandemic prevention and control [3].

Public risk perception refers to the concern or anxiety expressed by the public about something [4], reflecting the process of the public's subjective evaluation of a specific risk state [5, 6]. When the public becomes aware of risks, a psychological state of coping with risks is stimulated, and demand for risk-related information and emergency behavior is further generated based on subjective judgment. Too high a risk perception often leads the public to overreact to risk events, resulting in a variety of irrational and unnecessary behaviors, which have an unnecessary impact on economic development and social stability. However, when risk perception is too low, the public may give up taking effective self-protection behaviors. Public risk perception is a process of collecting, selecting, and understanding crisis information and responses [7, 8].
In the all-media information age dominated by the network, the public's information demands, information channels, and information content are characterized by diversification and complexity. This makes public risk perception dynamic and unpredictable, which further increases the difficulty of health emergency prevention and control. Therefore, after the occurrence of major public health emergencies, the dynamic evolution law of public risk perception along with the development of events should be grasped. This helps the government adopt active and effective risk management policies and measures.

## 2. Literature Review

### 2.1. Risk Perception

Scholars generally believe that the public's risk perception is mainly affected by individual characteristics, time, event progress, risk information, and other factors [9]. A questionnaire survey using a psychological scale is the most effective method for studying the influencing factors and differences of risk perception. Peacock et al. take hurricanes as the research scenario and explore the influencing factors of the formation of public risk perception from the two dimensions of socioeconomic and demographic characteristics [10]. In order to study the characteristics and influencing factors of public risk perception, Slovic carried out a series of empirical studies and summarized 15 different characteristics of risk perception [11].

In the fields of behavioral science and psychology, many scholars focus on the important role of memory in individual behavioral decision-making [12]. Most of their research results support the view that the individual memory system has a decisive influence on behavioral decision-making [13]. Welch et al. believed that the information obtained through news media and the informal communication channels of social networks all belongs to the information used for behavioral decision-making in the individual memory system [14].
The same conclusion can also be reached when scholars introduce individual memory to build mathematical models. For example, Mullainathan took consumers' memories of previous products and wages as the basis for purchasing decisions and constructed a consumer memory decision model [15]. Mehta et al. studied the relationship between consumers' rate of forgetting brands and their purchasing decisions and believed that, when consumers are faced with many brands, their memory and perception of these brands play an important role in consumers' choices [16]. Wei et al. constructed an evolution model of individual memory perception and corporate reputation to study the optimal strategy for CSR activities of enterprises [17]. Wei et al. introduced the recency effect, association effect, and read-back effect and built a public risk perception evolution model based on crisis information flow. This model uses a crisis information growth model, a stakeholder influence model, and a stakeholder memory model to measure the processes of crisis information release, information diffusion, and information perception. The diffusion coefficient and forgetting coefficient of crisis information are introduced to explain the transmission mechanism of crisis information in the population. It is found that there are lag effects, cumulative effects, and jump phenomena in the evolution of public risk perception [18].

### 2.2. Communicable Disease Models and Network Public Opinion Spread

The infectious disease model is a mathematical model that uses ordinary differential equations to describe the spread and prevalence of an infectious disease. Considering the similarity between the spread of information and the spread of infectious diseases, Daley et al. applied the infectious disease dynamics model to information transmission, dividing individuals into three categories (susceptible, spreader, and immune) and then constructing the classic DK model [19].
Subsequently, some scholars further refined the communication process and improved the model [20, 21]. However, with the rapid development of information technology and the explosion of social networks, the mode of information transmission has undergone profound changes, and the classical infectious disease model can no longer accurately describe the geometric-progression, fission-like propagation of network information [22]. One important reason is that the spread of infectious diseases is unconscious: the transmission of diseases by infected people is not based on people's subjective will. The essence of information communication, however, is social communication, and further research needs to consider the attributes of network information content, public social factors, and so on [23–25].

Shang et al. integrated the social network and communication dynamics models and proposed a simulation planning method taking public emergencies as scenarios [26]. Zhu et al. [27] established an infectious disease model based on the transmission rules of the Ebola virus. Wang et al. considered the interdependence of online and offline activities and constructed an information transmission model of a two-layer social network based on complex network theory and communication dynamics [28]. Liu et al. considered the influence of network dynamic evolution and constructed a dynamic network diffusion information transmission model for public emergencies [29]. Wang et al. defined the types of the public and the role of government intervention and, combined with the characteristics of emergency information communication, constructed a public opinion communication control model under government intervention [30]. Zhong et al.
considered the relationship between public status transitions and the influence of government intervention, constructed the SEIRS model of public opinion communication control under government intervention, and used control factors to realize effective intervention in online public opinion during emergencies [31]. Yin et al. [32] considered that users may move on to another related topic after discussing one topic and proposed a multi-information susceptible-discussing-immune (M-SDI) model, which effectively predicted the trend of online public opinion communication in public health emergencies through fitting analysis of COVID-19 public opinion data obtained from China's Sina Weibo. Wang et al. analyzed the mutual influence of multiple spreading public opinions and the rules of state transfer among different groups after an emergency and proposed the 3SI3R model [33].

In summary, research on infectious disease models is relatively mature at present, and most of it is applied in the fields of information dissemination and network public opinion dissemination. However, there is little literature applying the infectious disease model to the evolution of public risk perception. Therefore, in the context of COVID-19, this paper analyzes the spreading characteristics and rules of public risk perception using an infectious disease model. Considering the propagation properties of the social network, such as the social reinforcement effect, containment mechanism, and forgetting mechanism, we construct the evolution dynamics model of public risk perception based on SEIR, which better delineates the evolution of public risk perception and provides decision-making suggestions for the government in formulating risk management for public health emergencies.
A questionnaire survey through a psychological scale is the most effective method to study the influencing factors and differences of risk perception. Peacock et al. take hurricane as the research scenario and explore the influencing factors of the formation process of public risk perception from two dimensions of socioeconomic and demographic characteristics [10]. In order to study the characteristics and influencing factors of public risk perception, Slovic carried out a series of empirical studies and summarized 15 different characteristics of risk perception [11].In the field of behavioral science and psychology, many scholars focus on the important role of memory in individual behavioral decision-making [12], Most of their research results support that individual memory system has a decisive influence on behavioral decision-making [13]. Welch et al. believed that the information obtained through news media and informal communication channels of social networks all belonged to the information used for behavioral decision-making in the individual memory system [14]. The same conclusion can also be reached when scholars introduce individual memory to build mathematical models. For example, Mullainathan took consumers’ memories of previous products and wages as the basis for purchasing decisions and constructed a consumer memory decision model [15]. Mehta et al. studied the relationship between consumers’ forgetting rate of brands and purchasing decisions and believed that when consumers are faced with many brands, their memory and perception of these brands play an important role in consumers’ choice [16]. Wei et al. constructed an evolution model of individual memory perception and corporate reputation to study the optimal strategy of CSR activities of enterprises [17]. Wei et al. introduced the recency effect, Lenovo effect, and read-back effect and built the public risk perception evolution model based on crisis information flow. 
This model uses a crisis information growth model, a stakeholder influence model, and a stakeholder memory model to measure the processes of crisis information release, diffusion, and perception. The diffusion coefficient and forgetting coefficient of crisis information are introduced to explain the transmission mechanism of crisis information in the population, and the study finds lag effects, cumulative effects, and jump phenomena in the evolution of public risk perception [18].

## 2.2. Communicable Disease Models and Network Public Opinion Spread

The infectious disease model is a mathematical model that uses ordinary differential equations to describe the spread and prevalence of infectious diseases. Noting the similarity between the spread of information and the spread of infectious diseases, Daley et al. applied the infectious disease dynamics model to information transmission, dividing individuals into three categories (susceptible, spreader, and immune) and constructing the classic DK model [19]. Subsequently, some scholars further refined the communication process and improved the model [20, 21]. However, with the rapid development of information technology and the explosion of social networks, the mode of information transmission has undergone profound changes, and the classical infectious disease model can no longer accurately describe the fission-like, geometric spread of network information [22]. One important reason is that the spread of infectious diseases is unconscious: infected people do not transmit disease by subjective will. The essence of information communication, by contrast, is social communication, so further research must consider the attributes of network information content, the public’s social characteristics, and other factors [23–25]. Shang et al.
integrated the social network and communication dynamics models and proposed a simulation planning method taking public emergencies as the scenario [26]. Zhu et al. [27] established an infectious disease model based on the transmission rules of the Ebola virus. Wang et al. considered the interdependence of online and offline activities and constructed an information transmission model for a two-layer social network based on complex network theory and communication dynamics [28]. Liu et al. considered the influence of dynamic network evolution and constructed a dynamic network diffusion model of information transmission in public emergencies [29]. Wang et al. defined the types of the public and the role of government intervention and, combining these with the characteristics of emergency information communication, constructed a public opinion communication control model under government intervention [30]. Zhong et al. considered the relationship between public state transitions and the influence of government intervention, constructed an SEIRS model of public opinion communication control under government intervention, and used control factors to realize effective intervention in online public opinion during emergencies [31]. Yin et al. [32] considered that users may move on to another related topic after discussing one topic and proposed a multi-information susceptibility discussion immunity (M-SDI) model, which effectively predicted the trend of online public opinion in public health emergencies by fitting COVID-19 public opinion data from China’s Sina Weibo. Wang et al.
analyzed the mutual influence of multiple strands of public opinion and the rules of state transfer among different groups after an emergency and proposed the 3SI3R model [33].

In summary, research on infectious disease models is relatively mature, and most applications lie in information dissemination and the spread of online public opinion. However, few studies have applied infectious disease models to the evolution of public risk perception. Therefore, in the context of COVID-19, this paper analyzes the spreading characteristics and rules of public risk perception using an infectious disease model. Considering propagation properties of social networks such as the social reinforcement effect, containment mechanism, and forgetting mechanism, we construct an evolution dynamics model of public risk perception based on SEIR, which better delineates the evolution of public risk perception and provides decision-making suggestions for governments formulating risk management for public health emergencies.

## 3. Model Construction

### 3.1. Characteristics of the Evolution of Public Risk Perception in the Context of COVID-19

The essence of an infectious disease is that a carrier of the pathogen transmits its germs to the people it contacts. In the context of COVID-19, the spread of public risk perception has the characteristics of an infectious disease: individuals who perceive risk transmit their perceived risk, through various communication channels, to other individuals with whom they communicate. Just as the transmission of an infectious disease between hosts must cross a certain threshold, the spread of the public’s perceived risk under COVID-19 also requires certain conditions, such as the perceived risk exceeding an individual’s tolerance.
Therefore, in the context of COVID-19, the spread of public risk perception has the characteristics of a risk source, transmission media, infectivity, and immunity.

#### 3.1.1. Risk Source

The risk source is the precondition of risk transmission: without a risk source, there is no risk transmission. Risk sources are the analogue of pathogens in infectious diseases. The public health emergency caused by the COVID-19 outbreak in late December 2019 is the risk source for the spread of public risk perception. As the core of the risk communication process, the risk source, depending on the communication media, causes public panic and panic buying of medical supplies.

#### 3.1.2. Propagation Medium

The transmission medium is the carrier of the risk source. After the outbreak of COVID-19, the media of public risk perception are the Internet, TV, newspapers, Weibo, and other mass media. Pandemic information permeates the entire social cyberspace, and the public receives it and transmits perceived risk, sometimes inaccurately, thereby causing panic among the general public.

#### 3.1.3. Infectivity

Infectivity is the most fundamental characteristic of infectious diseases. If there are only pathogens and transmission media but the pathogens are not infectious, the disease is not contagious. A contagious risk source spreads risk into its environment; when individuals perceive more risk than they can bear, they transmit their risk perception outward through their close kinship, work, and neighborhood relationships.

#### 3.1.4. Immunity

Some people are immune to certain infections because they have antibodies or have been vaccinated. In the risk transmission process triggered by the COVID-19 outbreak, individuals show different degrees of immunity depending on their psychological quality and knowledge.
For example, individuals with poor mental health and inadequate knowledge of the novel coronavirus and its spread have much lower immunity than individuals with good mental health and ample knowledge of protective behaviors against COVID-19. An individual’s gender, personality, and living environment also affect this immunity.

Therefore, in the context of COVID-19, the transmission of public risk perception shares the characteristics of infectious disease transmission. The infectious disease model is used to analyze and simulate the transmission of risk perception, so as to understand its principles and provide a reference for formulating control measures.

### 3.2. Factors Influencing the Evolution of Public Risk Perception

Public risk perception spreads widely through the Internet, TV, newspapers, Weibo, and other mass media within social network spaces such as the circles of relatives, neighbors, and friends closely related to an individual. The dissemination of public risk perception is a complex process affected not only by individual factors such as interindividual intimacy, knowledge background, and life experience [34–36] but also by social factors such as the information memory effect, social reinforcement effect, interest attenuation effect, containment mechanism, authority effect, broken window effect, and responsibility dispersion effect [37–40]. This paper focuses on the influence of the forgetting mechanism, social reinforcement effect, and containment mechanism on the transmission of public risk perception.

#### 3.2.1. Forgetting Mechanism

Through relearning experiments, the German psychologist Ebbinghaus revealed that the value of information decays nonlinearly with the passage of time, reflecting the significant impact of attenuation on information dissemination.
The relevant literature calls this phenomenon the forgetting mechanism [41], and simulations have shown that it inhibits information diffusion and reduces the scale of dissemination [42]. Rumor-spreading experiments show that the forgetting rate has a significant impact on the densities of spreaders and immunizers: the higher the forgetting probability, or the faster the forgetting speed, the weaker the spreading power of rumors [43]. The transmission of public risk perception in major public health emergencies should share these characteristics.

#### 3.2.2. Social Reinforcement Effect

In the process of information transmission, individuals tend to be skeptical of information, and the probability of transmitting it after receiving it only once is very limited. However, if neighbors repeatedly prompt the same information so that an individual receives it many times, the probability that the individual believes and spreads it greatly increases. In social networks, information is dense and much of it mixes truth with falsehood; it is difficult for ordinary people to judge reasonably, so most people use others’ judgments to form their own opinions. The social reinforcement effect is therefore very pronounced in information dissemination on social networks. The study in [44] constructed a rumor propagation model with social reinforcement and interest attenuation effects on a social network, arguing that the two effects act simultaneously: interest attenuation converts the propagation state into the connected state, while social reinforcement converts the connected state back into the propagation state.
Therefore, this paper defines the propagation probability function of public risk perception in the social network caused by the social reinforcement effect as follows:

λ(m) = 1 − (1 − β)e^(−b(m−1)),  (1)

where β is the initial transmission rate, representing the probability that an individual transmits the pandemic information after receiving it only once; b is the reinforcement factor; and m is the number of times the information has been received. When m = 1, λ(1) = β.

Figure 1 shows how the propagation probability of individual risk perception changes with m under different reinforcement coefficients b. The initial value λ(1) = β = 0.5 represents the transmission probability of risk perception for a susceptible individual receiving pandemic information once.

Figure 1: Influence of different reinforcement coefficients on propagation probability.

Based on the above considerations, this paper focuses on the local environment of individuals to describe the forgetting mechanism, social reinforcement effect, and containment mechanism in the spread of public risk perception and analyzes through simulation how these factors affect that spread.

### 3.3. Dynamic Evolution Model of Public Risk Perception in COVID-19

Individuals in the social network are represented as nodes, and the relationships between individuals are represented by links between nodes, so the social network is represented as a concrete network structure. Risk perception can propagate only between neighboring nodes.
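To make the reinforcement function of equation (1) concrete, the following minimal Python sketch (not from the paper; the parameter values are illustrative assumptions) evaluates λ(m) for several reinforcement factors b, mirroring the comparison in Figure 1:

```python
import math

def propagation_probability(m, beta=0.5, b=1.0):
    """Equation (1): lambda(m) = 1 - (1 - beta) * exp(-b * (m - 1)).

    beta: initial transmission rate; b: reinforcement factor;
    m: number of times the pandemic information has been received.
    """
    return 1.0 - (1.0 - beta) * math.exp(-b * (m - 1))

# On first receipt the probability reduces to the initial rate beta.
assert abs(propagation_probability(1, beta=0.5, b=2.0) - 0.5) < 1e-12

# lambda(m) rises monotonically toward 1 as messages accumulate,
# and a larger b makes it saturate faster (cf. Figure 1).
for b in (0.5, 1.0, 2.0):
    row = [round(propagation_probability(m, 0.5, b), 3) for m in range(1, 6)]
    print(f"b={b}: {row}")
```

A larger reinforcement factor drives the transmission probability toward 1 after only a few repeated exposures, which is exactly the social reinforcement effect described above.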
When a node propagates risk perception to its neighbors, a neighbor that chooses to believe and accept the information continues to propagate risk perception to its own neighbors; a neighbor that does not accept it does not propagate it further. As risk perception spreads from the risk source through the whole network, nodes develop different psychological states toward the same information because of their different interests and knowledge. Nodes therefore take different attitudes toward accepting and spreading risk perception, ultimately producing different propagation trends. Thus, the nodes in the social network can be divided into four states: the susceptible state (S), the latent state (E), the onset state (I), and the recovering state (R). The susceptible state refers to the public who have not yet received the pandemic information. The latent state refers to those who have received information but do not disseminate risk perception; that is, they have received pandemic information for the first time and perceive risk, but it does not exceed their maximum risk tolerance. The onset state refers to those who panic and feel anxious upon receiving information and are spreading risk perception.
The recovering state refers to the public who view the pandemic rationally and do not spread risk perception.

The proportions of these four groups in the total population at time t are S(t), E(t), I(t), and R(t), respectively, with S(t) + E(t) + I(t) + R(t) = 1.

Considering the social reinforcement effect, forgetting mechanism, and containment mechanism of public risk perception, the following risk perception communication rules are proposed:

(1) When a susceptible individual S_i receives information transmitted from an onset individual I_j, S_i may change to the latent state E_i with probability α, or may change to the onset state I_i with the initial transmission rate β and transmit the pandemic information to other individuals. The state transitions can be expressed as:

S_i + I_j →(α) E_i + I_j,  S_i + I_j →(β) I_i + I_j.  (2)

(2) Nodes in the latent state are suspicious of the pandemic information and receive it repeatedly from neighboring onset nodes. Under the social reinforcement effect, a latent node E_i transforms into the onset state I_i with transmission probability λ; a latent node that has not transformed into the onset state may transform into the recovering state R_i with probability θ. The transitions of the latent state E_i can be expressed as:

E_i →(λ) I_i,  E_i →(θ) R_i.  (3)

(3) Since an onset node already believes and spreads the pandemic information, it is no longer affected by the social reinforcement effect. However, the onset state I_i is affected by the social containment mechanism: a node in the onset state transforms into the recovering state R_i with probability ε. The transition of the onset state I_i can be expressed as:

I_i →(ε) R_i.  (4)

(4) With the passage of time, the recovering state R_i is affected by the forgetting mechanism and changes back to the susceptible state S_i with probability δ.
The transition of the recovering state R_i can be expressed as:

R_i →(δ) S_i.  (5)

According to the above analysis, the evolution model of the public risk perception network under COVID-19 is shown in Figure 2.

Figure 2: State transition diagram of social network nodes.

In summary, the public risk perception communication dynamics model is:

dS(t)/dt = −αS(t)I(t) − βS(t)I(t) + δR(t),
dE(t)/dt = αS(t)I(t) − λE(t) − θE(t),
dI(t)/dt = βS(t)I(t) + λE(t) − εI(t),
dR(t)/dt = θE(t) + εI(t) − δR(t).  (6)

### 3.4. Analysis of the Basic Reproduction Number of the Model

In this paper, the next-generation matrix method is used to calculate the basic reproduction number R0. States 1, 2, 3, and 4 represent E, I, S, and R, respectively, and the densities of the four node classes are denoted x_i, that is, x = (x1, x2, x3, x4)^T. Define F(x), where F_i(x) represents the rate of appearance of new diseased nodes in state i. From the above, for i = 1, the rate of new latent nodes E is αSI; for i = 2, the rate of new onset nodes is βSI; and for i = 3, 4, there are no new diseased nodes among susceptible and recovering nodes.
Therefore,

F(x) = (F1(x), F2(x), F3(x), F4(x))^T = (αSI, βSI, 0, 0)^T.  (7)

Define V(x), where V1(x), V2(x), V3(x), and V4(x) represent the net transfer rates of the E, I, S, and R state nodes, with V_i(x) = V_i^−(x) − V_i^+(x); here V_i^+(x) is the rate of transfer into state i from other states and V_i^−(x) is the rate of transfer out of state i into other states. Therefore,

V(x) = (V1(x), V2(x), V3(x), V4(x))^T = (λE + θE, εI − λE, αSI + βSI − δR, δR − θE − εI)^T.  (8)

Obviously, when there is no diseased node in the network system, all nodes are susceptible; that is, E0 = (0, 0, S*, 0) is the disease-free equilibrium of the system. The Jacobians of F(x) and V(x) at E0 are

DF(E0) = [F 0; 0 0],  DV(E0) = [V 0; J3 J4],  (9)

where

F = [0 α; 0 β],  V = [λ+θ 0; −λ ε].  (10)

Therefore, we calculate

FV^(−1) = [αλ/(ε(λ+θ))  α/ε; βλ/(ε(λ+θ))  β/ε].  (11)

The spectral radius of FV^(−1) is denoted ρ(FV^(−1)); that is, the basic reproduction number R0 is

R0 = ρ(FV^(−1)) = (αλ + β(λ+θ)) / (ε(λ+θ)).  (12)

In the analysis of network information transmission, the basic reproduction number R0 is an important parameter for measuring whether network information can spread on a large scale. It represents the average number of individuals affected by introducing one onset-state node when the entire network is susceptible and no intervention occurs. When R0 < 1, the network information will not diffuse widely; when R0 > 1, it will show a large-scale diffusion trend.

Equation (12) shows that the basic reproduction number is closely related to the social effects on public risk perception, and these effects strongly influence whether public risk perception spreads on a large scale. In particular, with other factors unchanged, as the initial transmission rate of public risk perception increases, R0 gradually passes from below 1 to above 1 and public panic begins to spread; as R0 grows further, the diffusion scale becomes larger and larger.
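As a numerical sanity check on system (6) and the threshold in equation (12), the following Python sketch (the parameter values are illustrative assumptions, not taken from the paper) integrates the model with a simple forward-Euler scheme and contrasts a sub-threshold with a super-threshold setting:

```python
def r0(alpha, beta, lam, theta, eps):
    """Basic reproduction number of equation (12)."""
    return (alpha * lam + beta * (lam + theta)) / (eps * (lam + theta))

def simulate(alpha, beta, lam, theta, eps, delta,
             s=0.99, e=0.0, i=0.01, r=0.0, dt=0.01, steps=50_000):
    """Forward-Euler integration of the S/E/I/R dynamics in system (6)."""
    for _ in range(steps):
        ds = -alpha * s * i - beta * s * i + delta * r
        de = alpha * s * i - (lam + theta) * e
        di = beta * s * i + lam * e - eps * i
        dr = theta * e + eps * i - delta * r
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
    return s, e, i, r

# Two illustrative parameter sets, one below and one above the threshold.
low = dict(alpha=0.1, beta=0.1, lam=0.2, theta=0.3, eps=0.5)
high = dict(alpha=0.4, beta=0.4, lam=0.4, theta=0.1, eps=0.3)

print("R0 low :", round(r0(**low), 3))   # ≈ 0.28 (< 1)
print("R0 high:", round(r0(**high), 3))  # ≈ 2.4  (> 1)

# Below threshold the onset class I dies out; above it, I persists.
_, _, i_low, _ = simulate(delta=0.05, **low)
_, _, i_high, _ = simulate(delta=0.05, **high)
print("final I (low) :", i_low)
print("final I (high):", i_high)
```

The right-hand sides of system (6) sum to zero, so the scheme conserves S + E + I + R = 1 up to rounding; the two runs illustrate the qualitative claim above that R0 separates die-out from large-scale diffusion.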
## 3.1. Characteristics of the Evolution of Public Risk Perception in the COVID-19 The essence of an infectious disease is that the carrier of the pathogen transmits its own germs to the person who comes into contact with it through contact with other individuals. In the context of COVID-19, the spread of public risk perception has the characteristics of infectious disease, and individuals who perceive risk will transmit their perceived risk to other individuals who communicate with them through various communication channels. The transmission of infectious diseases between hosts needs to break a certain threshold, and the spread of the public’s perceived risk in the context of COVID-19 also needs certain conditions, as the perceived risk exceeds their own tolerance. Therefore, in the context of COVID-19, the spread of public risk perception has the characteristics of risk sources, transmission media, infectivity, and immunity. ### 3.1.1. Risk Source The source of risk is the precondition of risk transmission. If there is no source of risk, there is no risk transmission. Risk sources are equivalent to pathogens in infectious diseases. The public health emergency caused by the COVID-19 outbreak in late December 2019 is a risk source for the spread of public risk perception. As the core of the process of risk communication, the source of risk causes public panic and panic buying of medical equipment depending on the communication media. ### 3.1.2. Propagation Medium The transmission medium is the carrier of risk source transmission. After the outbreak of COVID-19, the media of public risk perception are the Internet, TV, newspapers, Weibo, and other mass media. The pandemic information permeates the entire social cyberspace, and the public receives the pandemic information and transmits the perceived risk incorrectly, thus causing panic among the general public. ### 3.1.3. Contagious Infectivity is the most fundamental characteristic of infectious diseases. 
If there are only pathogens and infectious media, but pathogens do not have infectivity, they do not belong to infectious disease. Risk source of contagion will spread the risk to the environment, when individuals perceive more risk than they can bear, they will spread the risk perception to the outside world through their closely related kinship, work and neighborhood relationships. ### 3.1.4. Immunity Some people are immune to certain infections because they have antibodies or have been vaccinated against them. In the process of risk transmission caused by the outbreak of COVID-19, some individuals show different immunity based on their psychological quality and knowledge. For example, individuals with poor mental health and inadequate knowledge of novel coronavirus and the spread of the virus have much lower immunity than individuals with high mental health and abundant protective behaviors against COVID-19. At the same time, the individual’s gender, personality, and living environment will affect their immune ability.Therefore, in the context of COVID-19, the transmission process of public risk perception has the characteristics of the transmission process of infectious diseases. The infectious disease model is used to analyze and simulate the transmission process of risk perception so as to understand the principle of risk perception transmission and provide a reference for the formulation of risk perception control measures. ## 3.1.1. Risk Source The source of risk is the precondition of risk transmission. If there is no source of risk, there is no risk transmission. Risk sources are equivalent to pathogens in infectious diseases. The public health emergency caused by the COVID-19 outbreak in late December 2019 is a risk source for the spread of public risk perception. As the core of the process of risk communication, the source of risk causes public panic and panic buying of medical equipment depending on the communication media. ## 3.1.2. 
Propagation Medium The transmission medium is the carrier of risk source transmission. After the outbreak of COVID-19, the media of public risk perception are the Internet, TV, newspapers, Weibo, and other mass media. The pandemic information permeates the entire social cyberspace, and the public receives the pandemic information and transmits the perceived risk incorrectly, thus causing panic among the general public. ## 3.1.3. Contagious Infectivity is the most fundamental characteristic of infectious diseases. If there are only pathogens and infectious media, but pathogens do not have infectivity, they do not belong to infectious disease. Risk source of contagion will spread the risk to the environment, when individuals perceive more risk than they can bear, they will spread the risk perception to the outside world through their closely related kinship, work and neighborhood relationships. ## 3.1.4. Immunity Some people are immune to certain infections because they have antibodies or have been vaccinated against them. In the process of risk transmission caused by the outbreak of COVID-19, some individuals show different immunity based on their psychological quality and knowledge. For example, individuals with poor mental health and inadequate knowledge of novel coronavirus and the spread of the virus have much lower immunity than individuals with high mental health and abundant protective behaviors against COVID-19. At the same time, the individual’s gender, personality, and living environment will affect their immune ability.Therefore, in the context of COVID-19, the transmission process of public risk perception has the characteristics of the transmission process of infectious diseases. The infectious disease model is used to analyze and simulate the transmission process of risk perception so as to understand the principle of risk perception transmission and provide a reference for the formulation of risk perception control measures. ## 3.2. 
Factors Influencing the Evolution of Public Risk Perception Public risk perception is widely spread through the Internet, TV, newspapers, Weibo, and other mass media in the social network space such as the circle of relatives, neighbors, and friends who are closely related to individuals. The dissemination of public risk perception is a complex process, which is not only affected by individual factors such as interindividual intimacy, knowledge background, and life experience [34–36] but also affected by social factors such as information memory effect, social reinforcement effect, interest attenuation effect, containment mechanism, authority effect, broken window effect, and responsibility dispersion effect [37–40]. This paper focuses on the influence of the forgetting mechanism, social reinforcement effect, and containment mechanism on public risk perception transmission. ### 3.2.1. Forgetting Mechanism German psychologist Ebbinghaus revealed the nonlinear attenuation of information value with the passage of time through the method of relearning. It reflects the significant impact of attenuation characteristics on information dissemination. Relevant literature call this phenomenon a forgetting mechanism [41], and it is proved by simulation that this mechanism can inhibit information diffusion and reduce the scale of information dissemination [42]. Scholars have shown that the rate of forgetting has a significant impact on the density of spreaders and immunizers in rumor-spreading experiments. The higher the forgetting probability or the faster the forgetting speed, the weaker the spreading power of rumors [43]. Then, in major public health emergencies, the public risk perception transmission characteristics will be the same. ### 3.2.2. Social Reinforcement Effect In the process of information transmission, individuals tend to be skeptical of information, and the probability of transmitting information after receiving it only once is very limited. 
However, if the neighbor repeatedly prompts the same information so that the individual receives the same information many times, the probability of the individual believing the information and spreading it will greatly increase. In social networks, information is dense, and a lot of information is mixed with truth and false. It is difficult for ordinary people to make a reasonable judgment. At this time, most people will use others’ judgment to form their own opinions. Therefore, the social reinforcement effect is very obvious in the information dissemination of social networks.Literature [44] constructed a rumor propagation model with social reinforcement effect and interest attenuation effect based on the social network and believed that the social reinforcement effect and interest attenuation effect would simultaneously act on the propagation state node, which would be converted into the connected state by interest attenuation effect, and the connected state would be converted into the propagation state by the social reinforcement effect. Therefore, this paper defines the propagation probability function of public risk perception in the social network caused by the social reinforcement effect as follows:(1)λm=1−1−βe−bm−1,where β is the initial transmission rate, which represents the probability that an individual will transmit the pandemic information after receiving it only once; b is the strengthening factor; and m is the number of messages received when m=1 and λ1=β.Figure1 shows that under the action of different reinforcement coefficient b, the propagation probability of individual risk perception changes with the change of m. 
Initial value λ1=β=0.5 represents the transmission probability of risk perception of susceptible individuals receiving information of an pandemic.Figure 1 Influence of different reinforcement coefficients on propagation probability.Based on the above considerations, this paper focuses on the local environment of individuals to describe the forgetting mechanism, social reinforcement effect, and containment mechanism of the spread of public risk perception and analyzes how these factors affect the spread of risk perception through simulation. ## 3.2.1. Forgetting Mechanism German psychologist Ebbinghaus revealed the nonlinear attenuation of information value with the passage of time through the method of relearning. It reflects the significant impact of attenuation characteristics on information dissemination. Relevant literature call this phenomenon a forgetting mechanism [41], and it is proved by simulation that this mechanism can inhibit information diffusion and reduce the scale of information dissemination [42]. Scholars have shown that the rate of forgetting has a significant impact on the density of spreaders and immunizers in rumor-spreading experiments. The higher the forgetting probability or the faster the forgetting speed, the weaker the spreading power of rumors [43]. Then, in major public health emergencies, the public risk perception transmission characteristics will be the same. ## 3.2.2. Social Reinforcement Effect In the process of information transmission, individuals tend to be skeptical of information, and the probability of transmitting information after receiving it only once is very limited. However, if the neighbor repeatedly prompts the same information so that the individual receives the same information many times, the probability of the individual believing the information and spreading it will greatly increase. In social networks, information is dense, and a lot of information is mixed with truth and false. 
It is difficult for ordinary people to make a reasonable judgment. At this time, most people will use others’ judgment to form their own opinions. Therefore, the social reinforcement effect is very obvious in the information dissemination of social networks.Literature [44] constructed a rumor propagation model with social reinforcement effect and interest attenuation effect based on the social network and believed that the social reinforcement effect and interest attenuation effect would simultaneously act on the propagation state node, which would be converted into the connected state by interest attenuation effect, and the connected state would be converted into the propagation state by the social reinforcement effect. Therefore, this paper defines the propagation probability function of public risk perception in the social network caused by the social reinforcement effect as follows:(1)λm=1−1−βe−bm−1,where β is the initial transmission rate, which represents the probability that an individual will transmit the pandemic information after receiving it only once; b is the strengthening factor; and m is the number of messages received when m=1 and λ1=β.Figure1 shows that under the action of different reinforcement coefficient b, the propagation probability of individual risk perception changes with the change of m. Initial value λ1=β=0.5 represents the transmission probability of risk perception of susceptible individuals receiving information of an pandemic.Figure 1 Influence of different reinforcement coefficients on propagation probability.Based on the above considerations, this paper focuses on the local environment of individuals to describe the forgetting mechanism, social reinforcement effect, and containment mechanism of the spread of public risk perception and analyzes how these factors affect the spread of risk perception through simulation. ## 3.3. 
Dynamic Evolution Model of Public Risk Perception in COVID-19

Individuals in the social network are represented as nodes, and the relationships between individuals are represented by the links between nodes, so that the social network is represented as a concrete network structure. Risk perception can propagate only between neighboring nodes. When a node propagates risk perception to a neighboring node, if the neighbor chooses to believe and accept the information, it continues to propagate the risk perception to its own neighbors; if it does not accept the risk perception, it does not propagate it further.

As risk perception spreads from the risk source through the whole network, nodes develop different psychological states toward the same information because of their differing interests and knowledge. Nodes therefore take different attitudes toward accepting and spreading risk perception, which ultimately leads to different propagation trends. Accordingly, the nodes in the social network can be divided into four states: the susceptible state (S), the latent state (E), the onset state (I), and the recovered state (R). The susceptible state refers to members of the public who have not yet received the pandemic information. The latent state refers to those who have received the information but do not disseminate risk perception; that is, they have received the pandemic information for the first time and perceive risk, but their maximum risk tolerance has not been exceeded. The onset state refers to those who panic and become anxious on receiving the information and are spreading risk perception.
The recovered state refers to members of the public who view the pandemic rationally and do not spread risk perception.

The proportions of these four groups in the total population at time t are, respectively, S(t), E(t), I(t), and R(t), with S(t) + E(t) + I(t) + R(t) = 1.

Considering the social reinforcement effect, forgetting mechanism, and containment mechanism of public risk perception, the following propagation rules are proposed:

(1) When a susceptible individual S_i receives information transmitted from an onset individual I_j, S_i may change to the latent state E_i with probability α, or may change to the onset state I_i with the initial transmission rate β and transmit the pandemic information to other individuals. These state transitions can be expressed as

(2) S_i + I_j → E_i + I_j (with rate α), S_i + I_j → I_i + I_j (with rate β).

(2) Nodes in the latent state are suspicious of the pandemic information and will receive it many times from neighboring onset nodes. Under the social reinforcement effect, a latent node E_i is transformed into the onset state I_i with transmission probability λ. A latent node E_i that has not transformed into the onset state may transform into the recovered state R_i with probability θ. The transitions of the latent state can be expressed as

(3) E_i → I_i (with rate λ), E_i → R_i (with rate θ).

(3) Since an onset node already believes and spreads the pandemic information, it is no longer affected by the social reinforcement effect. However, under the social containment mechanism, an onset node I_i is transformed into the recovered state R_i with probability ε:

(4) I_i → R_i (with rate ε).

(4) With the passage of time, a recovered node R_i is affected by the forgetting mechanism and changes back to the susceptible state S_i with probability δ.
The transition of the recovered state R_i can be expressed as

(5) R_i → S_i (with rate δ).

According to the above analysis, the evolution model of the public risk perception network in COVID-19 is shown in Figure 2.

Figure 2: State transition diagram of social network nodes.

In summary, the dynamic model of public risk perception transmission is

(6)
dS(t)/dt = −αS(t)I(t) − βS(t)I(t) + δR(t),
dE(t)/dt = αS(t)I(t) − λE(t) − θE(t),
dI(t)/dt = βS(t)I(t) + λE(t) − εI(t),
dR(t)/dt = θE(t) + εI(t) − δR(t).

## 3.4. Analysis of the Basic Reproduction Number of the Model

In this paper, the next-generation matrix method is used to calculate the basic reproduction number R0.

States 1, 2, 3, and 4 represent the states E, I, S, and R, respectively. The densities of the four node states are denoted x_i, that is, x = (x1, x2, x3, x4)^T. Construct F(x), where F_i(x) represents the rate of appearance of new infected nodes in state i. When i = 1, the rate of new infected nodes in the latent state E is αSI; when i = 2, the rate of new infected nodes in the onset state is βSI; and when i = 3, 4, there are no new infected nodes among susceptible and recovered nodes.
Therefore,

(7) F(x) = (F1(x), F2(x), F3(x), F4(x))^T = (αSI, βSI, 0, 0)^T.

Construct V(x), where V1(x), V2(x), V3(x), and V4(x) represent the net rates of change of the E, I, S, and R state nodes, respectively. Let V_i(x) = V_i^−(x) − V_i^+(x), where V_i^+(x) is the rate of transition into state i from other states and V_i^−(x) is the rate of transition out of state i. Therefore,

(8) V(x) = (V1(x), V2(x), V3(x), V4(x))^T = (λE + θE, εI − λE, αSI + βSI − δR, δR − θE − εI)^T.

When there are no infected nodes in the network, all nodes are susceptible; that is, E0 = (0, 0, S*, 0) is the disease-free equilibrium of the system. The Jacobians of F(x) and V(x) at E0 take the block form

(9) DF(E0) = [F 0; 0 0], DV(E0) = [V 0; J3 J4],

where

(10) F = [0 α; 0 β], V = [λ + θ 0; −λ ε].

Therefore, we calculate that

(11) FV^−1 = [αλ/((λ + θ)ε) α/ε; βλ/((λ + θ)ε) β/ε].

The spectral radius of FV^−1 is denoted ρ(FV^−1); the basic reproduction number R0 is then

(12) R0 = ρ(FV^−1) = (αλ + β(λ + θ))/(ε(λ + θ)).

In the analysis of network information transmission, the basic reproduction number R0 is an important parameter for measuring whether information can spread on a large scale. It represents the average number of individuals affected by introducing one onset node when the entire network is susceptible and no intervention is applied. When R0 < 1, the information will not diffuse widely; when R0 > 1, the information tends toward large-scale diffusion.

Equation (12) shows that the basic reproduction number is closely related to the social effects acting on public risk perception, and these effects strongly influence whether it spreads on a large scale. With other factors unchanged, as the initial transmission rate of public risk perception increases, R0 gradually changes from a value below 1 to a value above 1, and public panic begins to spread; the larger R0 becomes, the larger the diffusion scale.

## 4.
Numerical Simulation and Analysis

This section verifies the rationality and stability of the evolution model of public risk perception through simulation experiments and analyzes the propagation mechanism of risk perception. In the simulation experiments, the given initial conditions are S(0) = 1, E(0) = 0.00, I(0) = 0.00, and R(0) = 0.00.

### 4.1. Analysis of the Density of State Nodes in the Network

Set the initial parameters to α = 0.2, β = 0.5, λ = 0.5, θ = 0.3, ε = 0.2, δ = 0.1 and substitute into equation (12); the calculation gives a basic reproduction number R0 of 3.125. Theory therefore predicts that public risk perception undergoes large-scale diffusion, which is consistent with the results in the figure.

From Figure 3, it can be seen that the node density of the susceptible state decreases rapidly in the initial stage and eventually stabilizes; the node density of the latent state increases rapidly to a peak of 0.31, then gradually decreases, and eventually tends to 0; the node density of the onset state increases rapidly in the early stage of propagation and reaches a peak of 0.63, after which it gradually decreases and eventually stabilizes at 0.32; and the node density of the recovered state increases rapidly during propagation and eventually stabilizes at 0.61.

Figure 3: Trend of the state changes of the various nodes in the SEIR model.

### 4.2. Impact of the Initial Propagation Rate on the Evolution of Public Risk Perception

The node density of the onset state represents how actively risk perception is being transmitted; the node density of the recovered state represents the final extent of transmission. Therefore, the impact of public health pandemics on public risk perception is investigated by analyzing how the node densities of the onset and recovered states change with the initial transmission rate β.
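The R0 value quoted in Section 4.1, and the parameter sweeps that follow, can be checked numerically. The sketch below is an illustration, not the authors' code: it evaluates equation (12) both directly and as the spectral radius of FV^−1 built from (10), then integrates system (6). The small seed I(0) = 10⁻³ is an assumption, since the deterministic model cannot leave the state I(0) = 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

def r0(alpha, beta, lam, theta, eps):
    """Closed form of equation (12)."""
    return (alpha * lam + beta * (lam + theta)) / (eps * (lam + theta))

def seir_risk(t, y, alpha, beta, lam, theta, eps, delta):
    """Right-hand side of system (6) for the S, E, I, R densities."""
    S, E, I, R = y
    return [-alpha * S * I - beta * S * I + delta * R,
            alpha * S * I - (lam + theta) * E,
            beta * S * I + lam * E - eps * I,
            theta * E + eps * I - delta * R]

alpha, beta, lam, theta, eps, delta = 0.2, 0.5, 0.5, 0.3, 0.2, 0.1

# Cross-check (12) against the spectral radius of F V^{-1} from (10).
F = np.array([[0.0, alpha], [0.0, beta]])
V = np.array([[lam + theta, 0.0], [-lam, eps]])
rho = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
# rho and r0(...) both come out to ~3.125 for the Section 4.1 parameters.

# Integrate from a nearly all-susceptible start; because R0 > 1, the
# onset density I(t) rises far above the seed before the system settles.
I0 = 1e-3  # hypothetical small seed of onset nodes (an assumption)
sol = solve_ivp(seir_risk, (0.0, 200.0), [1.0 - I0, 0.0, I0, 0.0],
                args=(alpha, beta, lam, theta, eps, delta),
                rtol=1e-8, atol=1e-10)
S, E, I, R = sol.y  # densities stay normalized: S + E + I + R = 1
```

The same `r0` helper reproduces the sweeps of Sections 4.2 and 4.4: β ∈ {0.2, 0.5, 0.8} gives 1.625, 3.125, and 4.625, while ε ∈ {0.2, 0.4, 0.6} gives 3.125, about 1.56, and about 1.04.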
Figures 4 and 5 depict how the densities of the onset and recovered states vary over time under different initial transmission probabilities. Set the initial parameters to α = 0.2, λ = 0.5, θ = 0.3, ε = 0.2, δ = 0.1; take the three cases β = 0.2, β = 0.5, and β = 0.8; and substitute into equation (12). The calculation gives basic reproduction numbers R0 of 1.625, 3.125, and 4.625, respectively; theoretically, public risk perception undergoes large-scale diffusion in each case, which is consistent with the results in the figures.

As can be seen in Figures 4 and 5, the density maxima of both the onset state and the recovered state R increase with the initial transmission probability β. That is, the greater the initial transmission probability, the more inclined the public is to spread risk perception, and the shorter the time needed to reach the peak. After the node density of the onset state reaches its peak, as the government's emergency work advances, for example through the effective implementation of pandemic prevention and control measures, part of the public comes to believe that the pandemic is temporarily controllable; the number of people spreading pandemic information begins to decrease, and eventually all transition to the recovered state. Therefore, the greater the initial transmission rate of public risk perception, the faster and larger the scale of network transmission.

Figure 4: Effect of the initial transmission rate on the density of nodes in the onset state.

Figure 5: Effect of the initial propagation rate on the density of nodes in the recovered state.

### 4.3. The Impact of Social Reinforcement Effects on the Evolution of Public Risk Perception

Set the initial parameters to α = 0.2, β = 0.5, θ = 0.3, ε = 0.2, δ = 0.1 and take the three cases (b = 1, m = 1), (b = 1, m = 2), and (b = 2, m = 2). Figure 6 depicts the change in the density of onset-state nodes over time under these different social reinforcement effects on public risk perception.
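The three (b, m) cases map, through equation (1), to specific propagation probabilities. A minimal sketch (illustrative, not the authors' code) with β = 0.5, as in Figure 1:

```python
import math

def reinforcement_prob(m, b, beta=0.5):
    """Equation (1): lambda(m) = 1 - (1 - beta) * exp(-b * (m - 1))."""
    return 1.0 - (1.0 - beta) * math.exp(-b * (m - 1))

# The three (b, m) cases of Section 4.3.
cases = [(1, 1), (1, 2), (2, 2)]
probs = {(b, m): reinforcement_prob(m, b) for b, m in cases}
# (b=1, m=1) gives beta itself (0.5); a repeated exposure (m=2) and a
# larger reinforcement factor b both raise the propagation probability.
```

With b = 1, a second exposure already lifts the probability from 0.5 to roughly 0.82, consistent with the observation below that the onset-state density grows with both m and b.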
As can be seen in Figure 6, for the same reinforcement factor, the greater the number of times an individual receives information about a pandemic, the greater the density of onset-state nodes. Similarly, for the same number of received messages, the higher the reinforcement factor, the higher the density of onset-state nodes. This suggests that the public is more inclined to disseminate risk perception in the presence of social reinforcement effects.

Figure 6: Effect of reinforcement effects on the density of nodes in the onset state.

### 4.4. Impact of Containment Mechanisms on the Evolution of Public Risk Perception

Set the initial parameters to α = 0.2, β = 0.5, λ = 0.5, θ = 0.3, δ = 0.1 and take the three cases ε = 0.2, ε = 0.4, and ε = 0.6. Substituting into equation (12), the corresponding basic reproduction numbers R0 are 3.125, 1.56, and 1.04, respectively, and the node densities of the onset state when the transmission of public risk perception reaches stability are 0.27, 0.16, and 0.1, respectively. From Figure 7, it can be seen that the maximum onset-state node density decreases as the containment parameter ε increases. Since ε represents the strength of the containment mechanism, this means that the stronger the containment mechanism, the more easily onset-state nodes are influenced by other nodes into ceasing to spread, the faster the network reaches a stable state, and the smaller the onset-state node density once the network is stable. Therefore, the containment mechanism curbs the spread of public risk perception to a certain extent, reducing both the speed and the scope of transmission.

Figure 7: Effect of containment mechanisms on the density of nodes in the onset state.

### 4.5.
The Impact of Forgetting Mechanisms on the Evolution of Public Risk Perception

Set the initial parameters to α = 0.2, β = 0.5, λ = 0.5, θ = 0.3, ε = 0.2; take the three cases δ = 0.1, δ = 0.4, and δ = 0.7; and substitute into equation (12). The corresponding basic reproduction numbers are all 3.125, since δ does not appear in equation (12). The node densities of the onset state when network propagation is stable are 0.28, 0.52, and 0.61, respectively.

As can be seen from Figure 8, the maximum onset-state node density increases as the forgetting rate δ increases. Since δ represents the intensity of forgetting, this means that the greater the degree of forgetting, the more easily recovered nodes forget the received pandemic information and become susceptible again, ready to receive pandemic information and propagate risk perception, making the onset-state node density greater when the network is stable.

Figure 8: Effect of forgetting mechanisms on the density of nodes in the onset state.
## 5. Research Conclusions and Policy Recommendations

### 5.1. Research Conclusions

In this paper, taking COVID-19 as the background, we first analyzed the infectious characteristics of public risk perception in public health emergencies. Second, according to the characteristics of public risk perception transmission in social networks, we established a dynamic evolution model of public risk perception and solved for the basic reproduction number.
Finally, we revealed the evolution mechanism of the public risk perception network through parameter selection and simulation experiments. The significance of this study is reflected in the following three aspects:

(1) We systematically summarized the characteristics of public risk perception of infectious diseases in public health emergencies, including risk sources, transmission media, infectivity, and immunity, providing a theoretical basis for establishing a public risk perception model.

(2) We systematically analyzed the factors influencing public risk perception of public health emergencies. From the two dimensions of individual and social factors, we focused on the influence of the forgetting mechanism, social reinforcement effect, and containment mechanism on the spread of risk perception, and established a dynamic evolution model of public risk perception based on SEIR. The stability of the model is analyzed theoretically by solving for the basic reproduction number.

(3) We applied the infectious disease model to the evolution of risk perception in public health emergencies and, through simulation experiments, revealed the evolution mechanism of the public risk perception network: the greater the initial transmission rate, the greater the speed and scope of network diffusion; the stronger the social reinforcement effect, the greater the speed and scope of network diffusion; the higher the forgetting rate, the greater the speed and scope of network diffusion; and the stronger the containment mechanism, the slower the network diffusion and the smaller its scale.

### 5.2.
Policy Recommendations

The conclusions of this study suggest the following for government departments dealing with major public health emergencies:

(1) The government should immediately launch its emergency plan to rescue the affected public, shrink the scope of the emergency, reduce its level, cut down its influence, and weaken public risk perception, so as to reduce the negative impact of online public opinion about the emergency and maintain social harmony and stability.

(2) The government should undertake the responsibility of supervision, regulation, and management. First, on the premise of satisfying the public's right to know, it should gradually relax control over the news media and standardize the system of information disclosure and dissemination. In the process of information disclosure, the government should establish two-way communication channels: on the one hand, informing the public of the truth of the incident in a timely manner; on the other hand, inviting experts to analyze the situation objectively and release authoritative information. Second, the government has a responsibility to understand the sources of the public's fear and to respect the public's perception of risk. Risks that the public may overestimate should be reduced through various channels, lowering the public's risk perception level and relieving panic.

(3) The news media should abide by professional ethics and report objectively and fairly, which helps reduce the public's risk perception. The media are the reporters of risk information in public health emergencies, the interpreters of dynamic information, and the guides of the public's risk perception. After a public health emergency occurs, the public relies more heavily on media reports because of information asymmetry.
Therefore, the media should report information objectively, accurately, and in a timely manner, so that potential risks can be recognized by the public; this reduces the public's risk perception and panic and ultimately reduces the transition from susceptible groups to latent or onset groups.

(4) The public should maintain a positive and optimistic attitude, collect emergency information rationally and objectively, and avoid the overestimated risk perception that results from insufficient information. Instead of passively receiving information, the public should actively acquire and screen it, analyzing it rationally and objectively so as to avoid herd behavior.

In a word, in the face of public health emergencies, the government should establish an early warning and response mechanism for public risk perception; the media should report emergencies objectively and fairly in order to guide public opinion correctly; and the public should remain positive and optimistic, enhance their ability to discriminate, reduce unnecessary panic, and strengthen their confidence in overcoming difficulties.
---

*Source: 1015049-2021-11-29.xml*
2021
# Efficacy of Dexmedetomidine versus Ketofol for Sedation of Postoperative Mechanically Ventilated Patients with Obstructive Sleep Apnea

**Authors:** Hatem Elmoutaz Mahmoud; Doaa Abou Elkassim Rashwan
**Journal:** Critical Care Research and Practice (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1015054

---

## Abstract

Patients with sleep apnea are prone to postoperative respiratory complications, requiring restriction of sedatives during perioperative care. We performed a prospective randomized study on 24 patients with obstructive sleep apnea (OSA) who underwent elective surgery under general anesthesia. The patients were equally divided into two groups: Group Dex received a dexmedetomidine loading dose of 1 mcg/kg IV over 10 min followed by an infusion of 0.2–0.7 mcg/kg/hr; Group KFL received ketofol as an initial bolus dose of 500 mcg/kg IV (ketamine/propofol 1 : 1) and a maintenance dose of 5–10 mcg/kg/min. Sedation level (Ramsay sedation score), bispectral index (BIS), duration of mechanical ventilation, duration of surgical intensive care unit (SICU) stay, and mean time to extubation were evaluated. Complications (hypotension, hypertension, bradycardia, postextubation apnea, respiratory depression, and desaturation) and the number of patients requiring reintubation were recorded. There was a statistically significant difference between the two groups in BIS at the third hour only (Group DEX 63.00 ± 3.542 and Group KFL 66.42 ± 4.010, p value = 0.036). Duration of mechanical ventilation, SICU stay, and extubation time showed no statistically significant differences. No complications were recorded in either group. Thus, dexmedetomidine was associated with a shorter duration of mechanical ventilation and time to extubation than ketofol, but these differences were not statistically significant.

---

## Body

## 1.
Introduction

Obstructive sleep apnea (OSA) is a common condition [1] characterized by recurrent episodes of decrease in or cessation of airflow during sleep [2]. This condition causes a decrease in the blood oxygen level, leading to an increase in blood pressure and strain on the heart and lungs. The incidence of OSA is nearly 5%, and it is about 9% among surgical patients [3]. Patients with OSA have an increased incidence of perioperative complications [4]; they are susceptible to postoperative airway complications and require low doses of opioids and sedatives [5]. Sleep apnea is becoming a major concern for intensivists, as these patients need postoperative admission to the intensive care unit (ICU), mechanical ventilation, and sedation [6].

Dexmedetomidine is an α2-adrenoreceptor agonist; it has analgesic and sedative properties and is associated with limited respiratory depression [7–9]. Propofol is a sedative-hypnotic agent with rapid onset and short duration of action [10]. Ketamine, an NMDA receptor antagonist, binds to opioid and sigma receptors, leading to dissociative anesthesia [11], amnesia, and analgesia [12]. Its use as a single sedative agent has been limited because it causes emergence reactions [13]. Ketofol, a mixture of ketamine and propofol in a single syringe, has been shown to be effective in the operating theater and in day surgeries [14, 15]. It has the advantage of minimizing the respiratory and hemodynamic effects of its constituent drugs [16]. The combined administration of ketamine and propofol has been shown to reduce the dose of propofol needed for sedation [17]. However, the use of ketofol is a new practice for intensivists, and there are limited data on its use as a sedative in the ICU [18]. No previous reports have compared the efficacy of dexmedetomidine and ketofol for postoperative sedation of mechanically ventilated patients with OSA.
In this study, we compare the efficacy of dexmedetomidine and ketofol for postoperative sedation of mechanically ventilated patients with OSA in terms of sedation level, duration of mechanical ventilation, time of extubation, duration of surgical intensive care unit (SICU) stay, and occurrence of complications.

## 2. Materials and Methods

This single-center randomized study was conducted in the SICU of Benisuef University Hospital. We obtained approval from the ethics committee of the institution (The FM-BSU REC). The study was registered at ISRCTN (trial registration number: ISRCTN56992547). After obtaining consent, 24 patients diagnosed with OSA who underwent elective surgeries under general anesthesia from May 2016 to April 2017 were included. These patients were admitted to the SICU and were intubated, ventilated, and sedated according to the protocol followed in our department, as they may develop postoperative respiratory depression and/or obstruction and need reintubation.

### 2.1. Inclusion Criteria

The study included adult patients (18–50 years) with OSA requiring postoperative short-term sedation and mechanical ventilation (less than 12 hours).

### 2.2. Exclusion Criteria

(1) Requirement for prolonged sedation and mechanical ventilation (more than 12 hours); (2) epilepsy; (3) known allergies to the drugs being studied; (4) severe hepatic, renal, or central nervous system involvement, significant cardiac diseases, or arrhythmias; (5) pregnancy; (6) intake of other sedatives and anticonvulsant drugs.

Intraoperative analgesia was maintained in all patients with fentanyl 1 mcg/kg, followed by infusion of 1-2 mcg/kg/h; the administration was ceased at the end of the operation. On arrival to the SICU, the patients were connected to the mechanical ventilator; complete monitoring was performed using ECG, pulse oximetry, noninvasive and invasive arterial blood pressure measurement, and capnography. Bispectral index (BIS) electrodes were applied on the forehead.
A baseline 12-lead ECG, chest radiograph, ABGs, and CBC were obtained, and biochemical tests were performed. Patients were randomly allocated into two groups by a sealed opaque envelope technique:

Group Dex comprised twelve patients receiving a loading dose infusion of dexmedetomidine (Precedex, Abbot Laboratories Inc., Abbot Park, IL, USA; 2 ml, 200 mcg vial, 100 mcg/ml) 1 mcg/kg IV over 10 min, followed by infusion of 0.2–0.7 mcg/kg/hr [19].

Group KFL comprised twelve patients receiving ketofol as an initial bolus dose of 500 mcg/kg IV (ketamine/propofol 1 : 1), followed by infusion of 5–10 mcg/kg/min [18]. The mixture was prepared by combining 40 ml propofol 1% (10 mg/ml) with 8 ml ketamine (50 mg/ml) and 2 ml dextrose 5%, so that each ml of the mixture contained 8 mg propofol and 8 mg ketamine.

The degree of sedation was measured hourly using the Ramsay sedation score (RSS). In both groups, the target was to achieve and maintain an RSS of 4 or 5.

### 2.3. Ramsay Sedation Scale

Sedation levels are described as follows:

(1) Patient is anxious and agitated or restless, or both.
(2) Patient is cooperative, oriented, and tranquil.
(3) Patient responds to commands only.
(4) Patient exhibits a brisk response to light glabellar tap or loud auditory stimulus.
(5) Patient exhibits a sluggish response.
(6) Patient exhibits no response [20].

When the patients fulfilled the criteria for weaning and extubation [21], mechanical ventilation was discontinued and extubation was performed.
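The ketofol dilution is simple proportional arithmetic; as a quick sanity check (an illustrative sketch, not part of the study protocol), the per-ml concentrations of the 50 ml mixture can be verified:

```python
# Sanity-check the ketofol mixture described in the protocol:
# 40 ml propofol 1% (10 mg/ml) + 8 ml ketamine (50 mg/ml) + 2 ml dextrose 5%.
propofol_mg = 40 * 10          # 400 mg propofol in total
ketamine_mg = 8 * 50           # 400 mg ketamine in total
total_volume_ml = 40 + 8 + 2   # 50 ml of mixture

propofol_conc = propofol_mg / total_volume_ml  # mg/ml of propofol
ketamine_conc = ketamine_mg / total_volume_ml  # mg/ml of ketamine

print(propofol_conc, ketamine_conc)  # 8.0 8.0 -> 8 mg/ml each, i.e., 1 : 1
```

The equal milligram totals (400 mg each) in a common 50 ml volume are what make the mixture 1 : 1 at 8 mg/ml per drug, as stated in the protocol.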
We collected the following data:

(1) Demographic data: age, sex, body mass index, and type of surgery.
(2) Vital signs: heart rate, invasive mean arterial blood pressure, SpO2, and end-tidal CO2, continuously monitored and recorded at baseline (after admission to the SICU), at 1 and 3 hours after the start of sedation, and then every 3 hours.
(3) Sedation level: RSS, recorded at baseline, at 1 and 3 hours after the start of sedation, and then every 3 hours.
(4) BIS, recorded at baseline, at 1 and 3 hours after the start of sedation, and then every 3 hours.
(5) Duration of mechanical ventilation and of SICU stay (hours) (secondary outcomes).
(6) Mean time to extubation (the time from discontinuation of the sedative to extubation, in minutes) (primary outcome).
(7) Behavioral pain scale for pain assessment, recorded at baseline, at 1 and 3 hours after the start of sedation, and then every 3 hours (Table 1) [22].
(8) Complications, including hypotension (systolic blood pressure less than 90 mmHg), hypertension (systolic blood pressure more than 170 mmHg), and bradycardia (heart rate less than 50 beats/minute) [18].

Table 1 Behavioral pain scale for pain assessment.

| Item | Description | Score |
| --- | --- | --- |
| Facial expression | Relaxed | 1 |
| | Partially tightened (e.g., brow lowering) | 2 |
| | Fully tightened (e.g., eyelid closing) | 3 |
| | Grimacing | 4 |
| Upper limbs | No movement | 1 |
| | Partially bent | 2 |
| | Fully bent with finger flexion | 3 |
| | Permanently retracted | 4 |
| Compliance with ventilation | Tolerating movement | 1 |
| | Coughing but tolerating ventilation for most of the time | 2 |
| | Fighting ventilator | 3 |
| | Unable to control ventilation | 4 |

Additionally, the number of patients who required reintubation and those who had postextubation respiratory depression, apnea, or desaturation was recorded.

### 2.4.
Statistical Analysis

In a pilot study with three patients in each group, the mean ± SD extubation time was 32.3 ± 2.1 minutes in the dexmedetomidine-treated group and 39 ± 2.2 minutes in the ketofol group. Accordingly, we calculated that a minimum sample size of 10 participants in each arm would be able to detect a real difference of 13.2 minutes with 95% power at the α = 0.05 level using Student's t-test for independent samples. We increased the number to 12 patients in each group to allow for dropouts. The sample size calculation was performed using StatsDirect statistical software, version 2.7.2 for MS Windows (StatsDirect Ltd., Cheshire, UK). Analyses were performed using IBM SPSS (Statistical Package for the Social Sciences; IBM Corp., Armonk, NY, USA), release 22 for Microsoft Windows. Data are described as mean ± standard deviation (SD), median and range, or frequencies (number of cases) and percentages, as appropriate. Numerical variables were compared between the study groups using the Mann–Whitney U test for independent samples. Categorical data were compared using the chi-square (χ2) test; the exact test was used instead when an expected frequency was less than 5. p values less than 0.05 were considered statistically significant.

## 3. Results

We included 24 patients in this study. All cases completed the study (Figure 1).
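The numerical group comparisons reported below used the Mann–Whitney U test described in the statistical analysis section. As a minimal sketch of that test (with made-up hour-3 BIS values standing in for the study's unpublished raw data), the U statistic can be computed by hand:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` vs. `b` (average ranks for ties)."""
    n1 = len(a)
    combined = sorted((value, idx) for idx, value in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # Find each block of tied values and assign every member the average rank.
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    r1 = sum(ranks[:n1])            # rank sum of the first sample
    return r1 - n1 * (n1 + 1) / 2   # U statistic for sample `a`

# Hypothetical per-patient BIS readings at hour 3 (n = 12 per arm), shaped like
# the reported group means; these are NOT the study's raw data.
bis_kfl = [62, 64, 66, 67, 68, 70, 65, 71, 63, 69, 66, 66]
bis_dex = [58, 60, 61, 63, 64, 66, 62, 65, 59, 67, 61, 62]

u_kfl = mann_whitney_u(bis_kfl, bis_dex)
u_dex = mann_whitney_u(bis_dex, bis_kfl)
print(u_kfl, u_dex)  # the two U values always sum to n1 * n2 = 144
```

A statistics package such as the SPSS release named in the methods then converts the smaller U into a p value, either exactly or via a normal approximation for larger samples.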
There were no statistically significant differences in the demographic data or types of surgery between the two groups (Table 2). The heart rate was statistically significantly lower in Group DEX than in Group KFL at 1, 3, 6, 9, 12, and 18 hours (Table 3). The mean arterial blood pressure was statistically significantly lower in Group DEX than in Group KFL at 15, 18, and 21 hours (Table 4). There were no statistically significant differences between the two groups in SpO2 or end-tidal CO2, and none in the Ramsay sedation score (Table 5). There was a statistically significant difference between the two groups in BIS at 3 hours only: 63.00 ± 3.542 in Group DEX versus 66.42 ± 4.010 in Group KFL (p value = 0.036) (Table 6, Figure 2). There were no statistically significant differences in the behavioral pain scale between the two groups (Table 7). The duration of mechanical ventilation, extubation time (Figure 3), and length of SICU stay (Figure 4) were lower in Group DEX than in Group KFL, without statistically significant differences (Table 8). No hypotension, hypertension, bradycardia, postextubation respiratory depression, apnea, or desaturation was recorded, and no patient required reintubation in either group.

Figure 1 CONSORT flow participant diagram.

Table 2 Demographic data and surgical procedures in both groups.

| Variable | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| Age (years) | 36.58 ± 10.850 | 34.17 ± 8.111 | 0.644 |
| BMI (kg/m2) | 48.75 ± 9.343 | 44.58 ± 10.917 | 0.452 |
| Sex (M/F) | 6/6 | 5/7 | 1.000 |
| Type of surgery (laparoscopic gastric sleeve/uvulopalatoplasty/lumbar disc fixation) | 6/5/1 | 5/4/3 | — |

Data are presented as mean ± SD. p values ≤ 0.05 are considered statistically significant.

Table 3 Heart rate (bpm).
| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| 0 | 88.42 ± 5.125 | 87.75 ± 4.224 | 0.580 |
| 1 | 80.67 ± 5.774 | 73.00 ± 4.390 | 0.003∗ |
| 3 | 77.25 ± 4.137 | 66.00 ± 4.134 | 0.000∗ |
| 6 | 80.42 ± 2.778 | 71.67 ± 9.74 | 0.013∗ |
| 9 | 83.08 ± 4.055 | 76.67 ± 3.846 | 0.001∗ |
| 12 | 86.42 ± 4.274 | 82.42 ± 4.776 | 0.049∗ |
| 15 | 86.08 ± 2.875 | 84.17 ± 4.726 | 0.368 |
| 18 | 84.33 ± 4.418 | 79.08 ± 5.334 | 0.026∗ |
| 21 | 82.42 ± 4.295 | 78.83 ± 5.638 | 0.181 |
| 24 | 82.58 ± 4.055 | 83.25 ± 6.426 | 0.931 |
| 27 | 83.00 ± 4.090 | 84.08 ± 6.302 | 0.469 |
| 30 | 84.83 ± 4.196 | 82.50 ± 5.854 | 0.311 |

Data are presented as mean ± SD. ∗p values ≤ 0.05 are considered statistically significant. bpm = beats per minute.

Table 4 Mean arterial blood pressure (mmHg).

| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| 0 | 101.58 ± 13.714 | 101.25 ± 10.922 | 0.977 |
| 1 | 96.75 ± 6.524 | 90.33 ± 13.412 | 0.202 |
| 3 | 92.58 ± 6.802 | 89.08 ± 10.104 | 0.311 |
| 6 | 87.17 ± 3.857 | 83.92 ± 10.361 | 0.642 |
| 9 | 85.50 ± 6.488 | 84.83 ± 10.035 | 0.794 |
| 12 | 86.17 ± 3.512 | 83.25 ± 7.736 | 0.415 |
| 15 | 84.58 ± 6.317 | 78.17 ± 7.396 | 0.037∗ |
| 18 | 84.25 ± 5.379 | 79.08 ± 6.082 | 0.046∗ |
| 21 | 100.92 ± 13.358 | 92.58 ± 4.100 | 0.009∗ |
| 24 | 95.50 ± 9.060 | 92.00 ± 7.160 | 0.349 |
| 27 | 94.00 ± 7.032 | 94.25 ± 9.910 | 0.663 |
| 30 | 92.33 ± 4.119 | 94.17 ± 7.779 | 0.448 |

Data are presented as mean ± SD. ∗p values ≤ 0.05 are considered statistically significant. MAP = mean arterial blood pressure.

Table 5 Ramsay sedation score.

| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| 0 | 1 (1-2) | 1 (1-2) | 1.000 |
| 1 | 4 (3–5) | 4 (4–5) | 0.244 |
| 3 | 4 (4–5) | 4 (4–5) | 1.000 |
| 6 | 3 (2–4) | 4 (2–4) | 0.126 |
| 9 | 2 (2-3) | 2 (2-3) | 0.680 |
| 12 | 2 (1-2) | 2 (2-3) | 1.000 |

Data are presented as median and range. p values ≤ 0.05 are considered statistically significant.

Table 6 Bispectral index.
| Time (hours) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| 0 | 82.83 ± 3.243 | 82.75 ± 2.896 | 0.907 |
| 1 | 71.25 ± 4.827 | 67.83 ± 6.013 | 0.156 |
| 3 | 66.42 ± 4.010 | 63.00 ± 3.542 | 0.036 |
| 6 | 65.33 ± 2.964 | 66.17 ± 3.589 | 0.579 |
| 9 | 67.92 ± 4.757 | 68.00 ± 6.310 | 1.000 |

Data are presented as mean ± SD. p values ≤ 0.05 are considered statistically significant.

Figure 2 Mean BIS between the study groups over the study period.

Table 7 Behavioral pain scale.

| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| 1 | 1 (1–3) | 1 (1–3) | 0.156 |
| 3 | 1 (1-2) | 1 (1-2) | 0.950 |
| 6 | (1-2) | 1 (1-2) | 0.317 |
| 9 | 1 (1-2) | 1 (1-2) | 1.000 |

Data are presented as median and range. p values ≤ 0.05 are considered statistically significant.

Figure 3 Mean extubation time (min) between the study groups.

Figure 4 Mean SICU stay (hours) between the study groups.

Table 8 Extubation time, duration of mechanical ventilation, and SICU stay.

| Variable | Group KFL (n = 12) | Group DEX (n = 12) | p value |
| --- | --- | --- | --- |
| Extubation time (minutes) | 35.58 ± 3.895 | 33.00 ± 3.384 | 0.105 |
| Duration of mechanical ventilation (hr) | 7.88 ± 3.328 | 7.58 ± 3.183 | 0.838 |
| Stay in the SICU (hr) | 29.00 ± 2.954 | 29.25 ± 3.415 | 0.708 |

Data are presented as mean ± SD. p values ≤ 0.05 are considered statistically significant.

## 4.
Discussion

The results of the present study showed that both dexmedetomidine and ketofol were effective for sedation of postoperative mechanically ventilated patients with obstructive sleep apnea and provided hemodynamic stability without complications.

Obstructive sleep apnea is characterized by periodic, partial, or complete obstruction of the upper airway, resulting in disruption of sleep and hypoxemia [23]. Patients with OSA are prone to postoperative respiratory problems after general anesthesia [24, 25]. Sedation and analgesia used in critical care units provide patients with comfort and safety [26].

Dexmedetomidine, an alpha-2 agonist, may reduce the duration of mechanical ventilation [27]; it is a useful adjunct in surgical patients with OSA [5], as it has analgesic and sedative properties and limited respiratory depression. It is useful in patients with OSA undergoing surgeries associated with significant postoperative pain [28, 29].

Propofol and ketamine, used in combination, have provided effective sedation for spinal anesthesia and cardiovascular procedures [30]; the combination has also been used for sedation in awake craniotomy, where it maintained hemodynamic and respiratory stability and was associated with a rapid recovery profile [31].

Xu et al. [32] compared propofol with dexmedetomidine for sedation of adults who were mechanically ventilated after uvulopalatopharyngoplasty in the PACU; the bispectral index values were significantly lower in the dexmedetomidine group than in the propofol group, and the times to spontaneous breathing, awakening, and extubation were shorter in the dexmedetomidine group.
They concluded that dexmedetomidine is an effective sedative for mechanically ventilated adults following uvulopalatopharyngoplasty.

Eremenko and Chemova [33] compared the efficacy of dexmedetomidine and propofol for short-term sedation and analgesia after cardiac surgery; they reported no significant differences in the duration of mechanical ventilation or rate of awakening between the groups. Dexmedetomidine provided an analgesic effect and shortened the duration of ICU stay. Bradycardia was observed more often with dexmedetomidine, while arterial hypotension was more frequent with propofol.

Paliwal et al. [19] showed a statistically significantly lower heart rate in the dexmedetomidine group; the decrease in mean arterial pressure was greater in the propofol group. A study by Srivastava et al. [34] reported that dexmedetomidine maintained hemodynamic stability compared to propofol and midazolam for sedation of neurosurgical mechanically ventilated patients. Elbaradei et al. [35] showed that dexmedetomidine and propofol are safe sedatives for postoperative short-term ventilation and that dexmedetomidine resulted in lower heart rates than propofol.

In our study, ketofol was used for short-term sedation with no complications reported; similarly, Hamimy et al. [18] concluded that ketofol infusion provided adequate short-term sedation (less than 24 h) in mechanically ventilated patients, with rapid recovery and no significant complications.

## 5. Conclusion

Dexmedetomidine was associated with a shorter duration of mechanical ventilation and a shorter time to extubation than ketofol for sedation of postoperative mechanically ventilated patients with obstructive sleep apnea, but these differences were not statistically significant. Both drugs provided hemodynamic stability without complications. --- *Source: 1015054-2018-01-28.xml*
--- ## Abstract Patients with sleep apnea are prone to postoperative respiratory complications, requiring restriction of sedatives during perioperative care. We performed a prospective randomized study on 24 patients with obstructive sleep apnea (OSA) who underwent elective surgery under general anesthesia. The patients were equally divided into two groups:Group Dex: received dexmedetomidine loading dose 1 mcg/kg IV over 10 min followed by infusion of 0.2–0.7 mcg/kg/hr; Group KFL: received ketofol as an initial bolus dose 500 mcg/kg IV (ketamine/propofol 1 : 1) and maintenance dose of 5–10 mcg/kg/min. Sedation level (Ramsay sedation score), bispectral index (BIS), duration of mechanical ventilation, surgical intensive care unit (SICU) stay, and mean time to extubation were evaluated. Complications (hypotension, hypertension, bradycardia, postextubation apnea, respiratory depression, and desaturation) and number of patients requiring reintubation were recorded. There was a statistically significant difference between the two groups in BIS at the third hour only (Group DEX 63.00 ± 3.542 and Group KFL 66.42 ± 4.010, p value = 0.036). Duration of mechanical ventilation, SICU stay, and extubation time showed no statistically significant differences. No complications were recorded in both groups. Thus, dexmedetomidine was associated with lesser duration of mechanical ventilation and time to extubation than ketofol, but these differences were not statistically significant. --- ## Body ## 1. Introduction Obstructive sleep apnea (OSA) is a common condition [1] and is characterized by recurrent episodes of decrease in or cessation of airflow during sleep [2]. This condition causes a decrease in the oxygen level in the blood leading to an increase in the blood pressure and strain on the heart and lungs. 
The incidence of OSA is nearly 5% and about 9% among surgical patients [3].Patients with OSA have increased incidence of perioperative complications [4]; they are susceptible to postoperative airway complications and require use of low doses of opioids and sedatives [5]. Sleep apnea is becoming a major concern for intensivists, as these patients need postoperative admission to the intensive care unit (ICU), mechanical ventilation, and sedation [6]. Dexmedetomidine is an α2-adrenoreceptor agonist; it has analgesic and sedative properties and is associated with limited respiratory depression [7–9]. Propofol is a sedative-hypnotic agent with rapid onset and short duration of action [10]. Ketamine, an NMDA receptor antagonist, binds to opioid and sigma receptors, leading to dissociative anesthesia [11], amnesia, and analgesia [12]. Its use as a single sedative agent has been limited because it causes emergence reactions [13].Ketofol, which is a mixture of ketamine and propofol in a single syringe, has been shown to be effective in the operating theater and in day surgeries [14, 15]. It has the advantage of minimizing the respiratory and hemodynamic effects of the constituent drugs [16]. The combined administration of ketamine and propofol has been shown to reduce the dose of propofol needed for sedation [17]. However, the use of ketofol is a new practice for intensivists, and there are limited data on its use as a sedative in the ICU [18].No previous reports have compared the efficacy of dexmedetomidine and ketofol for postoperative sedation of mechanically ventilated patients with OSA. In this study, we compare the efficacy of dexmedetomidine and ketofol for postoperative sedation of mechanically ventilated patients with OSA in terms of sedation level, duration of mechanical ventilation, time of extubation, duration of surgical intensive care unit (SICU) stay, and occurrence of complications. ## 2. 
Materials and Methods This single-center randomized study was conducted in the SICU of Benisuef University Hospital. We obtained approval from the ethics committee of the institution (The FM-BSU REC). The study was registered at ISRCTN (trial registration number:ISRCTN56992547).After obtaining consent, 24 patients diagnosed with OSA, who underwent elective surgeries under general anesthesia from May 2016 to April 2017, were included. These patients were admitted to the SICU, and were intubated, ventilated, and sedated according to the protocol followed in our department, as they may develop postoperative respiratory depression and/or obstruction and need reintubation. ### 2.1. Inclusion Criteria The study included adult patients (18–50 years) with OSA requiring postoperative short-term sedation and mechanical ventilation (less than 12 hours). ### 2.2. Exclusion Criteria (1) Requirement for prolonged sedation and mechanical ventilation (more than 12 hours) (2) Epilepsy (3) Known allergies to the drugs being studied (4) Severe hepatic, renal, or central nervous system involvement, significant cardiac diseases, or arrhythmias (5) Pregnancy (6) Intake of other sedatives and anticonvulsant drugsIntraoperative analgesia was maintained in all patients with fentanyl 1 mcg/kg, followed by infusion of 1-2 mcg/kg/h; the administration was ceased at the end of the operation.On arrival to the SICU, the patients were connected to the mechanical ventilator; complete monitoring was performed using ECG, pulse oximetry, noninvasive and invasive arterial blood pressure measurement, and capnography. Bispectral index (BIS) electrodes were applied on the forehead. 
A baseline 12-lead ECG, chest radiograph, ABGs, and CBC were obtained, and biochemical tests were performed.Patients were randomly allocated into two groups by a sealed opaque envelop technique:Group Dex comprised twelve patients receiving a loading dose infusion of dexmedetomidine (Precedex, Abbot Laboratories Inc., Abbot Park, IL, USA; 2 ml, 200 mcg vial, 100 mcg/ml) 1 mcg/kg IV over 10 min, followed by infusion of 0.2–0.7 mcg/kg/hr [19]. Group KFL comprised twelve patients receiving ketofol as an initial bolus dose 500 mcg/kg IV (ketamine/propofol 1 : 1; ketamine 8 mg/ml and propofol 8 mg/ml, by mixing 40 ml propofol 1% (10 mg/ml)) with 8 ml ketamine (50 mg/ml) and 2 ml dextrose 5% (each ml of aliquot contained 8 mg propofol and 8 mg ketamine), followed by infusion of 5–10 mcg/kg/min [18].The degree of sedation was measured hourly using the Ramsay sedation score (RSS). In both groups, the target was to achieve and maintain RSS of 4 or 5. ### 2.3. Ramsay Sedation Scale Sedation level description is as follows:(1) Patient is anxious and agitated or restless, or both.(2) Patient is cooperative, oriented, and tranquil.(3) Patient responds to commands only.(4) Patient exhibits brisk response to light glabellar tap or loud auditory stimulus.(5) Patient exhibits a sluggish response.(6) Patient exhibits no response [20].When the patients fulfilled the criteria for weaning and extubation [21], mechanical ventilation was discontinued and extubation was performed. 
We collected the following data: (1) demographic data: age, sex, body mass index, and types of surgeries; (2) vital signs: heart rate, invasive mean arterial blood pressure, SpO2, and end-tidal CO2, which were continuously monitored and recorded at baseline (after admission to the SICU), at 1 hour and 3 hours after the start of sedation, and then every 3 hours; (3) sedation level: RSS was recorded at baseline, at 1 hour and 3 hours after the start of sedation, and then every three hours; (4) BIS was recorded at baseline, at 1 hour and three hours after the start of sedation, and then every three hours; (5) duration of mechanical ventilation, and stay in the SICU (hours) (secondary outcome); (6) mean time to extubation (the time of discontinuation of sedative to extubation in minutes) (primary outcome); (7) behavioral pain scale for pain assessment recorded at baseline, at 1 hour, and 3 hours after the start of sedation, and then every 3 hours (Table 1) [22]; (8) complications including hypotension (systolic blood pressure less than 90 mmHg), hypertension (systolic blood pressure more than 170 mmHg), and bradycardia (heart rate less than 50 b/minute) [18].Table 1 Behavioral pain scale for pain assessment. Item Description Score Facial expression Relaxed 1 Partially tightened (e.g., brow lowering) 2 Fully tightened (e.g., eyelid closing) 3 Grimacing 4 Upper limbs No movement 1 Partially bent 2 Fully bent with finger flexion 3 Permanently retracted 4 Compliance with ventilation Tolerating movement 1 Coughing but tolerating ventilation for most of the time 2 Fighting ventilator 3 Unable to control ventilation 4Additionally, the number of patients who required reintubation and those who had postextubation respiratory depression, apnea, and desaturation was recorded. ### 2.4. 
Statistical Analysis After a pilot study with three patients in each group, the mean ± SD of extubation time in dexmedetomidine treated group was 32.3 ± 2.1 minutes, while in ketofol group was 39 ± 2.2 minutes. Accordingly, we calculated that the minimum proper sample size was 10 participants in each arm to be able to detect a real difference of 13.2 minutes with 95% power atα = 0.05 level using Student’s t-test for independent samples. We increased the number to 12 patient in each group in case of drop of any case. Sample size calculation was done using Stats Direct statistical software version 2.7.2 for MS Windows, Stats Direct Ltd., Cheshire, UK. We performed analysis using computer program IBM SPSS (Statistical Package for the Social Science; IBM Corp, Armonk, NY, USA) release 22 for Microsoft Windows. Data were statistically described in terms of mean ± standard deviation (±SD), median and range, or frequencies (number of cases) and percentages when appropriate. Comparison of numerical variables between the study groups was done using the Mann–Whitney U test for independent samples. For comparing categorical data, the chi-square (χ2) test was performed. The exact test was used instead when the expected frequency is less than 5. p values less than 0.05 were considered statistically significant. ## 2.1. Inclusion Criteria The study included adult patients (18–50 years) with OSA requiring postoperative short-term sedation and mechanical ventilation (less than 12 hours). ## 2.2. 
Exclusion Criteria

(1) Requirement for prolonged sedation and mechanical ventilation (more than 12 hours); (2) epilepsy; (3) known allergies to the drugs being studied; (4) severe hepatic, renal, or central nervous system involvement, significant cardiac disease, or arrhythmias; (5) pregnancy; (6) intake of other sedatives or anticonvulsant drugs.

Intraoperative analgesia was maintained in all patients with fentanyl 1 mcg/kg, followed by an infusion of 1-2 mcg/kg/h; the administration was ceased at the end of the operation.

On arrival to the SICU, the patients were connected to the mechanical ventilator; complete monitoring was performed using ECG, pulse oximetry, noninvasive and invasive arterial blood pressure measurement, and capnography. Bispectral index (BIS) electrodes were applied on the forehead. A baseline 12-lead ECG, chest radiograph, ABGs, and CBC were obtained, and biochemical tests were performed.

Patients were randomly allocated into two groups by a sealed opaque envelope technique. Group Dex comprised twelve patients receiving a loading dose of dexmedetomidine (Precedex, Abbott Laboratories Inc., Abbott Park, IL, USA; 2 ml, 200 mcg vial, 100 mcg/ml) 1 mcg/kg IV over 10 min, followed by an infusion of 0.2–0.7 mcg/kg/hr [19]. Group KFL comprised twelve patients receiving ketofol as an initial bolus dose of 500 mcg/kg IV (ketamine/propofol 1 : 1, 8 mg/ml each, prepared by mixing 40 ml propofol 1% (10 mg/ml) with 8 ml ketamine (50 mg/ml) and 2 ml dextrose 5%, so that each ml of the aliquot contained 8 mg propofol and 8 mg ketamine), followed by an infusion of 5–10 mcg/kg/min [18].

The degree of sedation was measured hourly using the Ramsay sedation score (RSS). In both groups, the target was to achieve and maintain an RSS of 4 or 5.

## 2.3.
Ramsay Sedation Scale

Sedation level is described as follows: (1) patient is anxious and agitated or restless, or both; (2) patient is cooperative, oriented, and tranquil; (3) patient responds to commands only; (4) patient exhibits a brisk response to light glabellar tap or loud auditory stimulus; (5) patient exhibits a sluggish response; (6) patient exhibits no response [20].

When the patients fulfilled the criteria for weaning and extubation [21], mechanical ventilation was discontinued and extubation was performed. We collected the following data: (1) demographic data: age, sex, body mass index, and types of surgeries; (2) vital signs: heart rate, invasive mean arterial blood pressure, SpO2, and end-tidal CO2, which were continuously monitored and recorded at baseline (after admission to the SICU), at 1 hour and 3 hours after the start of sedation, and then every 3 hours; (3) sedation level: RSS recorded at baseline, at 1 hour and 3 hours after the start of sedation, and then every 3 hours; (4) BIS recorded at baseline, at 1 hour and 3 hours after the start of sedation, and then every 3 hours; (5) duration of mechanical ventilation and stay in the SICU in hours (secondary outcome); (6) mean time to extubation, that is, the time from discontinuation of the sedative to extubation in minutes (primary outcome); (7) behavioral pain scale for pain assessment, recorded at baseline, at 1 hour and 3 hours after the start of sedation, and then every 3 hours (Table 1) [22]; (8) complications, including hypotension (systolic blood pressure less than 90 mmHg), hypertension (systolic blood pressure more than 170 mmHg), and bradycardia (heart rate less than 50 beats/minute) [18].

Table 1 Behavioral pain scale for pain assessment.

| Item | Description | Score |
|---|---|---|
| Facial expression | Relaxed | 1 |
| | Partially tightened (e.g., brow lowering) | 2 |
| | Fully tightened (e.g., eyelid closing) | 3 |
| | Grimacing | 4 |
| Upper limbs | No movement | 1 |
| | Partially bent | 2 |
| | Fully bent with finger flexion | 3 |
| | Permanently retracted | 4 |
| Compliance with ventilation | Tolerating movement | 1 |
| | Coughing but tolerating ventilation for most of the time | 2 |
| | Fighting ventilator | 3 |
| | Unable to control ventilation | 4 |

Additionally, the number of patients who required reintubation and those who had postextubation respiratory depression, apnea, or desaturation was recorded.

## 2.4. Statistical Analysis

After a pilot study with three patients in each group, the mean ± SD of extubation time was 32.3 ± 2.1 minutes in the dexmedetomidine-treated group and 39 ± 2.2 minutes in the ketofol group. Accordingly, we calculated that the minimum proper sample size was 10 participants in each arm to detect a real difference of 13.2 minutes with 95% power at the α = 0.05 level using Student's t-test for independent samples. We increased the number to 12 patients in each group to allow for possible dropouts. The sample size calculation was done using StatsDirect statistical software, version 2.7.2 for MS Windows (StatsDirect Ltd., Cheshire, UK). We performed the analysis using IBM SPSS (Statistical Package for the Social Sciences; IBM Corp., Armonk, NY, USA), release 22 for Microsoft Windows. Data were statistically described in terms of mean ± standard deviation (SD), median and range, or frequencies (number of cases) and percentages, as appropriate. Comparison of numerical variables between the study groups was done using the Mann–Whitney U test for independent samples. For comparing categorical data, the chi-square (χ2) test was performed; the exact test was used instead when the expected frequency was less than 5. p values less than 0.05 were considered statistically significant.

## 3. Results

We included 24 patients in this study. All cases completed the study (Figure 1).
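As a sanity check on a sample-size calculation of the kind described in the methods, a minimal sketch of the standard normal-approximation formula for comparing two means is shown below. The function name and inputs are ours for illustration; StatsDirect's t-test-based routine, which the study actually used, applies different corrections and will not give identical numbers.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.95):
    """Approximate sample size per arm for comparing two means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

# Pilot figures reported above: means 32.3 vs 39 min, SDs about 2.1-2.2 min.
n = n_per_group(delta=39.0 - 32.3, sd=2.2)
```

Because the pilot difference (6.7 minutes) is several standard deviations wide, this approximation yields a very small n; the published calculation targeted a 13.2-minute difference with a t-based routine, so its n = 10 per arm is not directly reproducible from this sketch.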
There were no statistically significant differences in demographic data or types of surgery between the two groups (Table 2). The heart rate was significantly lower in Group DEX than in Group KFL at 1, 3, 6, 9, 12, and 18 hours (Table 3). The mean arterial blood pressure was significantly lower in Group DEX than in Group KFL at 15, 18, and 21 hours (Table 4). There were no statistically significant differences between the two groups in SpO2 or end-tidal CO2, and none in the Ramsay sedation score (Table 5). There was a statistically significant difference between the two groups in BIS at 3 hours only: 63.00 ± 3.542 in Group DEX versus 66.42 ± 4.010 in Group KFL (p = 0.036) (Table 6, Figure 2). There were no statistically significant differences in the behavioral pain scale between the two groups (Table 7). The duration of mechanical ventilation, extubation time (Figure 3), and length of SICU stay (Figure 4) were lower in Group DEX than in Group KFL, but the differences were not statistically significant (Table 8). No hypotension, hypertension, bradycardia, postextubation respiratory depression, apnea, or desaturation was recorded, and no patient required reintubation in either group.

Figure 1 Consort flow participant diagram.

Table 2 Demographic data and surgical procedures in both groups.

| Variable | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| Age (years) | 36.58 ± 10.850 | 34.17 ± 8.111 | 0.644 |
| BMI (kg/m2) | 48.75 ± 9.343 | 44.58 ± 10.917 | 0.452 |
| Sex (M/F) | 6/6 | 5/7 | 1.000 |
| Type of surgery (laparoscopic gastric sleeve/uvulopalatoplasty/lumbar disc fixation) | 6/5/1 | 5/4/3 | — |

Data are presented as mean ± SD. p values ≤ 0.05 are considered statistically significant.

Table 3 Heart rate (Bpm).

| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| 0 | 88.42 ± 5.125 | 87.75 ± 4.224 | 0.580 |
| 1 | 80.67 ± 5.774 | 73.00 ± 4.390 | 0.003∗ |
| 3 | 77.25 ± 4.137 | 66.00 ± 4.134 | 0.000∗ |
| 6 | 80.42 ± 2.778 | 71.67 ± 9.74 | 0.013∗ |
| 9 | 83.08 ± 4.055 | 76.67 ± 3.846 | 0.001∗ |
| 12 | 86.42 ± 4.274 | 82.42 ± 4.776 | 0.049∗ |
| 15 | 86.08 ± 2.875 | 84.17 ± 4.726 | 0.368 |
| 18 | 84.33 ± 4.418 | 79.08 ± 5.334 | 0.026∗ |
| 21 | 82.42 ± 4.295 | 78.83 ± 5.638 | 0.181 |
| 24 | 82.58 ± 4.055 | 83.25 ± 6.426 | 0.931 |
| 27 | 83.00 ± 4.090 | 84.08 ± 6.302 | 0.469 |
| 30 | 84.83 ± 4.196 | 82.50 ± 5.854 | 0.311 |

Data are presented as mean ± SD. ∗p values ≤ 0.05 are considered statistically significant. Bpm = beats per minute.

Table 4 Mean arterial blood pressure (mmHg).

| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| 0 | 101.58 ± 13.714 | 101.25 ± 10.922 | 0.977 |
| 1 | 96.75 ± 6.524 | 90.33 ± 13.412 | 0.202 |
| 3 | 92.58 ± 6.802 | 89.08 ± 10.104 | 0.311 |
| 6 | 87.17 ± 3.857 | 83.92 ± 10.361 | 0.642 |
| 9 | 85.50 ± 6.488 | 84.83 ± 10.035 | 0.794 |
| 12 | 86.17 ± 3.512 | 83.25 ± 7.736 | 0.415 |
| 15 | 84.58 ± 6.317 | 78.17 ± 7.396 | 0.037∗ |
| 18 | 84.25 ± 5.379 | 79.08 ± 6.082 | 0.046∗ |
| 21 | 100.92 ± 13.358 | 92.58 ± 4.100 | 0.009∗ |
| 24 | 95.50 ± 9.060 | 92.00 ± 7.160 | 0.349 |
| 27 | 94.00 ± 7.032 | 94.25 ± 9.910 | 0.663 |
| 30 | 92.33 ± 4.119 | 94.17 ± 7.779 | 0.448 |

Data are presented as mean ± SD. ∗p values ≤ 0.05 are considered statistically significant. MAP = mean arterial blood pressure.

Table 5 Ramsay sedation score.

| Time (hrs) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| 0 | 1 (1-2) | 1 (1-2) | 1.000 |
| 1 | 4 (3–5) | 4 (4–5) | 0.244 |
| 3 | 4 (4–5) | 4 (4–5) | 1.000 |
| 6 | 3 (2–4) | 4 (2–4) | 0.126 |
| 9 | 2 (2-3) | 2 (2-3) | 0.680 |
| 12 | 2 (1-2) | 2 (2-3) | 1.000 |

Data are presented as median and range. p values ≤ 0.05 are considered statistically significant.

Table 6 Bispectral index.

| Time (hours) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| 0 | 82.83 ± 3.243 | 82.75 ± 2.896 | 0.907 |
| 1 | 71.25 ± 4.827 | 67.83 ± 6.013 | 0.156 |
| 3 | 66.42 ± 4.010 | 63.00 ± 3.542 | 0.036 |
| 6 | 65.33 ± 2.964 | 66.17 ± 3.589 | 0.579 |
| 9 | 67.92 ± 4.757 | 68.00 ± 6.310 | 1.000 |

Data are presented as mean ± SD. p values ≤ 0.05 are considered statistically significant.

Figure 2 Mean BIS between the study groups over the study period.

Table 7 Behavioral pain scale.

| Time (hr) | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| 1 | 1 (1–3) | 1 (1–3) | 0.156 |
| 3 | 1 (1-2) | 1 (1-2) | 0.950 |
| 6 | (1-2) | 1 (1-2) | 0.317 |
| 9 | 1 (1-2) | 1 (1-2) | 1.000 |

Data are presented as median and range. p values ≤ 0.05 are considered statistically significant.

Figure 3 Mean extubation time (min) between the study groups.

Figure 4 Mean SICU stay (hours) between the study groups.

Table 8 Extubation time, duration of mechanical ventilation, and SICU stay.

| Variable | Group KFL (n = 12) | Group DEX (n = 12) | p value |
|---|---|---|---|
| Extubation time (minutes) | 35.58 ± 3.895 | 33.00 ± 3.384 | 0.105 |
| Duration of mechanical ventilation (hr) | 7.88 ± 3.328 | 7.58 ± 3.183 | 0.838 |
| Stay in the SICU (hr) | 29.00 ± 2.954 | 29.25 ± 3.415 | 0.708 |

Data are presented as mean ± SD. p values ≤ 0.05 are considered statistically significant.

## 4.
Discussion

The results of the present study showed that both dexmedetomidine and ketofol were effective for sedation of postoperative mechanically ventilated patients with obstructive sleep apnea and provided hemodynamic stability without complications.

Obstructive sleep apnea is characterized by periodic, partial, or complete obstruction of the upper airway, resulting in disruption of sleep and hypoxemia [23]. Patients with OSA are prone to postoperative respiratory problems after general anesthesia [24, 25]. Sedation and analgesia used in critical care units provide patients with comfort and safety [26].

Dexmedetomidine, an alpha-2 agonist, may reduce the duration of mechanical ventilation [27]; it is a useful adjunct in surgical patients with OSA [5], as it has analgesic and sedative properties with limited respiratory depression. It is useful in patients with OSA undergoing surgeries associated with significant postoperative pain [28, 29].

Propofol and ketamine used in combination have provided effective sedation for spinal anesthesia and cardiovascular procedures [30]; the combination has also been used for sedation in awake craniotomy, where it maintained hemodynamic and respiratory stability and was associated with a rapid recovery profile [31].

Xu et al. [32] compared propofol with dexmedetomidine for sedation of adults who were mechanically ventilated after uvulopalatopharyngoplasty in the PACU; the bispectral index values were significantly lower in the dexmedetomidine group than in the propofol group, and the times to spontaneous breathing, awakening, and extubation were shorter in the dexmedetomidine group.
They concluded that dexmedetomidine is an effective sedative for mechanically ventilated adults following uvulopalatopharyngoplasty.

Eremenko and Chemova [33] compared the efficacy of dexmedetomidine and propofol for short-term sedation and analgesia after cardiac surgery; they reported no significant differences in the duration of mechanical ventilation or rate of awakening between the groups. Dexmedetomidine provided an analgesic effect and shortened the duration of ICU stay. Bradycardia was observed more often with dexmedetomidine, and arterial hypotension in the propofol group.

Paliwal et al. [19] showed a statistically significantly lower heart rate in the dexmedetomidine group; the decrease in mean arterial pressure was greater in the propofol group. A study by Srivastava et al. [34] reported that dexmedetomidine maintained hemodynamic stability compared with propofol and midazolam for sedation of neurosurgical mechanically ventilated patients. Elbaradei et al. [35] showed that dexmedetomidine and propofol are safe sedatives for postoperative short-term ventilation and that dexmedetomidine resulted in lower heart rates than propofol.

In our study, ketofol was used for short-term sedation with no complications reported; similarly, Hamimy et al. [18] concluded that ketofol infusion provided adequate short-term sedation (less than 24 h) in mechanically ventilated patients, with rapid recovery and no significant complications.

## 5. Conclusion

Dexmedetomidine was associated with a shorter duration of mechanical ventilation and a shorter time to extubation than ketofol for sedation of postoperative mechanically ventilated patients with obstructive sleep apnea, but these differences were not statistically significant. Both drugs provided hemodynamic stability without complications.

---

*Source: 1015054-2018-01-28.xml*
# Vertebra Plana with Paraplegia in a Middle-Aged Woman Caused by B-Cell Lymphoma: A Case Report

**Authors:** Mohd. Zahid; Sohail Ahamed; Jitesh Kumar Jain; Ravish Chabra

**Journal:** Case Reports in Orthopedics (2012)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2012/101506

---

## Abstract

Vertebra plana is a rare presentation of spinal lymphoma. When the radiological picture of a patient with paraplegia shows vertebra plana, diagnosis becomes a challenge. In a developing country like India, tuberculosis should also be a consideration. Even histology sometimes fails to yield a diagnosis. Immunohistochemistry is of immense help in clinching the diagnosis.

---

## Body

## 1. Introduction

Vertebra plana is a radiological diagnosis representing flattening of a vertebral body with a relatively preserved intervertebral disc space. Eosinophilic granuloma is the most common cause of vertebra plana; other causes include metastatic disease, multiple myeloma, lymphoma, leukemia, Ewing's sarcoma, Gaucher disease, tuberculosis, and aneurysmal bone cyst.

Primary lymphoma of bone is rare. Lymphoma can occur at any age but becomes more common in the sixth and seventh decades of life. The incidence of lymphoma varies greatly from region to region and, for reasons that are unclear, appears to be increasing every year [1]. In one large series, primary lymphoma of bone accounted for 5 percent of all malignant bone tumors [2]. Tumor-related spinal cord injury (SCI) represents 25% of nontraumatic SCIs and 8% of all SCI cases [3]. Most patients with spinal lymphoma complain only of back pain, although they may also have nerve root or cord compression. In contrast to patients with multiple myeloma, which is seen in the same age group, patients with lymphoma feel otherwise healthy. Diagnosis of spinal lymphoma can be challenging, and MRI findings can produce a diagnostic dilemma.
Even histopathological studies sometimes fail to reach the correct diagnosis; immunohistochemistry often gives the clues needed to clinch a proper diagnosis. Correct identification of the tumor type allows for treatment planning and prognosis setting. The primary treatment of lymphoma is chemotherapy, with radiation required for local control of disease [4]. Surgical intervention is needed to relieve the symptoms of cord compression.

## 2. Case Report

The case concerns a 40-year-old woman who presented at our outpatient department with a 2-year history of low back pain and lower limb weakness for 1 month. She presented to us with bladder incontinence and inability to ambulate; the bladder dysfunction had started 7 days before hospital presentation.

On examination, she looked depressed and pale; there was no lymphadenopathy or apparent organomegaly. She had tenderness over the lower thoracic spinous processes. Neurological examination of the legs showed reduced tone and grade 1 paraplegia (MRC scale). Knee and ankle jerks were absent, both plantar reflexes were not elicitable, and there was a sensory deficit over both lower limbs below the mid-thigh level. No clinically significant past history was present. She was admitted and investigated.

Investigations showed normal serum biochemistry apart from a mild increase of alkaline phosphatase, which was 15 KAU/L (normal 2–13). The total white cell count was within normal limits; the differential count showed neutrophils 25%, lymphocytes 70%, monocytes 2%, eosinophils 3%, and basophils 0.0%, and no atypical lymphocytes were seen in the blood film. The erythrocyte sedimentation rate was 24 mm in the first hour.

A lumbosacral spine X-ray (Figure 1) showed vertebra plana of the T10 vertebra with sclerosis and maintained disc space. Abdominal ultrasound was normal; no organomegaly or enlarged lymph nodes were detected. A bone marrow biopsy was done, which showed normal bony trabeculae.
An MRI scan (Figure 3) showed vertebra plana of the T10 vertebra with complete marrow replacement of the vertebral body and posterior elements, with an associated homogeneously enhancing soft tissue component in the adjacent pre- and paravertebral space and in the ventral and dorsal epidural spaces, leading to severe cord compression and spinal canal stenosis. Although the radiological picture was not in favor of tuberculosis, we started antitubercular treatment, as tuberculosis is very common in India and we have seen cases of spinal tuberculosis with unusual presentations. However, the patient did not respond to antitubercular treatment.

Figure 1 X-ray of the dorsolumbar spine showing vertebra plana of the T10 vertebra. Disc space is well maintained.

The patient was operated on, and anterolateral decompression of the spinal cord was performed to relieve the symptoms of cord compression. Histopathological examination showed malignant small round cells (Figure 2) with some rosette formation, suggesting a diagnosis of small round cell tumor. Immunohistochemistry showed that the tumor cells were positive for leukocyte common antigen, distinguishing lymphoma from the other round cell tumors.

Figure 2 Histopathological examination of the excised tissues shows malignant small round cells with some rosette formation, suggesting a diagnosis of small round cell tumor.

Figure 3 MRI of the dorsolumbar spine showing complete marrow replacement of the vertebral body and posterior elements, with an associated homogeneously enhancing soft tissue component in the adjacent pre- and paravertebral space and in the ventral and dorsal epidural spaces, leading to severe cord compression and spinal canal stenosis.

The patient improved remarkably after surgery. Sensory symptoms improved, and motor power recovered to 4/5 at the hip and 5/5 at the knee and ankle (MRC scale). Bladder control had not been regained 30 days after surgery.

## 3.
Discussion

Spinal lymphomas commonly present as extradural disease, either because of an isolated deposit within the spinal canal or by extension from an adjacent nodal mass or bone involvement. Less commonly, non-Hodgkin's lymphoma may arise in the subdural space or within the spinal cord [5]. Spinal cord compression typically presents with back pain, leg numbness and tingling, and radicular pain, followed by extremity weakness, paresis, or paralysis. Lymphoma of the spine can also be asymptomatic at presentation [6].

Lymphomas are a heterogeneous group of malignancies of B cells or T cells. They usually originate in the lymph nodes but may originate in any organ of the body [7]. Extranodal disease is an adverse prognostic factor, particularly involvement of the central nervous system [6]. Between 5% and 10% of patients with a nodal presentation of lymphoma may develop CNS involvement.

Histologically, lymphomas may be subdivided into non-Hodgkin lymphomas and Hodgkin lymphomas. Although secondary involvement of bone is relatively common in Hodgkin lymphoma, primary Hodgkin bone lymphoma is extremely rare. Non-Hodgkin bone lymphomas are considered primary only if a complete systemic workup reveals no evidence of extraosseous involvement.

Histologically, the tumor consists of aggregates of malignant lymphoid cells replacing marrow spaces and osseous trabeculae. The cells contain irregular or even cleaved nuclei. The single most important procedure used to distinguish lymphoma from the other round cell tumors is the stain for leukocyte common antigen, because lymphoid cells are the only cells that stain positively.

Radiographically, spinal lymphoma produces a permeative or moth-eaten pattern of bone destruction or a purely osteolytic lesion, with or, more commonly, without a periosteal reaction. The affected vertebra can also present with an "ivory" appearance. Vertebra plana is an uncommon presentation.
Because lymphoma usually does not evoke significant periosteal new bone formation, this is an important feature in differentiating it from Ewing's sarcoma.

An intermediate- or high-grade histological type of non-Hodgkin's lymphoma and the presence of an underlying immune deficiency are the most significant risk factors for secondary CNS involvement [8]. CNS presentations may include spinal cord compression, leptomeningeal spread, or intracerebral mass lesions. In addition, other mechanisms of neuropathy should be considered, such as the effects of chemotherapy. Spinal cord compression is a rare presentation of non-Hodgkin's lymphoma.

---

*Source: 101506-2012-12-23.xml*
# Living Donor Liver Transplantation as a Backup Procedure: Treatment Strategy for Hepatocellular Adenomas Requiring Complex Resections

**Authors:** Eduardo A. Fonseca; Flavia Feier; Rodrigo Vincenzi; Helry L. L. Candido; Rodrigo L. Azambuja; Fabio Payao; Marcel R. Benavides; Karina M. O. Roda; Katia M. R. Leite; Cristiane M. F. Ribeiro; Maria D. Begnami; Charles E. Zurstrassen; Francisco C. Carnevale; Paulo Chapchap; João Seda-Neto

**Journal:** Case Reports in Surgery (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1015061

---

## Abstract

Background & Aims. The most dangerous complications of hepatocellular adenomas are hemorrhage and malignant transformation, both of which require surgical treatment. The surgical treatment strategy for patients with large or centrally located benign tumors is challenging because complex liver resections are required. The strategy of using a live donor as a backup procedure is described in this series. Methods. We present a series of three patients with large, centrally located hepatocellular adenoma lesions for whom living donor liver transplantation was prepared as a backup procedure. Results. Hepatocellular adenoma was confirmed by biopsy in all patients. Surgical resection was indicated because of the patients' symptoms and the lesions' size and growth. All patients had a lesion that was central or in close contact with major vessels. The final decision to proceed with the resection was made intraoperatively. A live donor was prepared for all three patients. Two patients underwent portal vein embolization associated with extended hepatectomy, and a total hepatectomy plus living donor liver transplantation was performed in one patient. All patients had good postoperative outcomes. Conclusions.
In the treatment of hepatocellular adenomas for which complex resections are necessary and resectability can only be confirmed intraoperatively, surgical safety can be improved through the use of a living donor backup. Center expertise with living donor liver transplantation is paramount for the success of this approach.

---

## Body

## 1. Introduction

A hepatocellular adenoma (HCA) is a benign tumor that typically develops in a healthy liver. It usually occurs in women between 15 and 45 years of age and is associated with the use of oral contraceptives [1, 2]. The most dangerous complications associated with HCA are hemorrhage and malignant transformation, both of which require surgical treatment [3, 4].

Liver transplantation (LT) is an alternative for patients with HCA or liver adenomatosis deemed unresectable, or when other treatment strategies have failed. However, large or central tumors usually require complex hepatectomies, with the potential risk of damaging the future liver remnant (FLR). Having a backup living donor for transplantation can inform the most appropriate strategy for extreme resections.

We report a series of three patients with large HCAs who were taken to the operating room with a living donor prepared for a backup procedure in case resection proved impossible.

### 1.1. Patient One

A 12-year-old girl, weighing 22 kg, presented with a 2-year history of pruritus and a right upper quadrant mass. The patient’s history did not include the use of hormones, and she did not present with jaundice, fecal acholia, or coluria. All laboratory liver tests and alpha-fetoprotein (AFP) levels were normal. Underlying liver disease was ruled out.

Magnetic resonance imaging (MRI) showed a large heterogeneous hypointense mass measuring 23 cm × 15.3 cm × 12.4 cm that compromised the right lobe (RL) of the liver and segments IV and I (Table 1). Additionally, the mass was in close contact with the FLR vessels (Figure 1(a)).
FLR volume was 105 cm3, representing 14% of the total liver volume (TLV) and 0.47% of the patient's body weight.

Table 1: Patients’ characteristics.

| | Patient 1 | Patient 2 | Patient 3 |
|---|---|---|---|
| Gender, age (y) | Female, 12 | Female, 12 | Female, 40 |
| Tumor diameter (cm) | 23 | 20 | 9.3 |
| Liver segments involved | I, IV to VIII | IVa, V to VIII | I, V to VIII |
| FLR (%) | 14 | 28 | 12.5 |
| PVE | Yes | No | Yes |
| Treatment | Extended right hepatectomy | LDLT (back-up) | Extended right hepatectomy |
| Follow-up | 5 y | 10 y | 6 y |

FLR: future liver remnant; PVE: portal vein embolization; LDLT: living donor liver transplantation.

Figure 1: (a) MRI images showing a large heterogeneous mass compromising the RL and segment IV and in close contact with the FLR vessels (arrow). (b) MRI image showing a large heterogeneous lesion occupying the whole RL and preserving segments II, III, and inferior IV. The left hepatic vein (arrow 1) and the left branch of the portal vein (arrow 2) are in close contact with the lesion. (c) Intraoperative aspect: tumor mass occupying almost the entire liver, preserving only segments II, III, and inferior IV. (d) MRI with hepatospecific contrast agent showing a lesion without contrast enhancement on the hepatobiliary phase in the caudate lobe, with involvement of the middle and right hepatic veins and in close contact with the left hepatic vein.

A liver biopsy of the mass was performed and showed features compatible with HCA, and the immunohistochemical (IHQ) result showed the HNF1-mutated subtype.

A right portal vein embolization (PVE) was performed to increase the FLR, thus allowing the extended hepatectomy.

On the third day after PVE, the patient presented with abdominal pain, hypotension, and a drop in serum hemoglobin concentration. Computerized tomography (CT) suggested intralesional active arterial bleeding, and the patient underwent transarterial embolization.
A new CT scan performed seven weeks later showed an FLR of 504 cm3, corresponding to 40% of the TLV and 2.3% of the patient's body weight.

Due to the close contact of the mass with major vessels, potential living related donors were assessed for a backup procedure. A 35-year-old female, a related family member who was ABO compatible, was selected as a living donor and prepared according to a previously published protocol [5].

An extended right hepatectomy with caudate resection was performed in March 2016, and the entire tumor was removed with no need to proceed with LT. Histopathological analysis confirmed the diagnosis and showed no signs of malignant transformation. The patient has been followed for 5 years since surgery and is clinically well, with normal liver function tests. A follow-up CT scan showed no residual tumor.

### 1.2. Patient Two

A 12-year-old girl, weighing 28 kg, presented with a six-month history of pain and abdominal growth associated with vomiting and progressive shortness of breath, featuring abdominal compartment syndrome.

Liver function tests and serum AFP levels were normal. Underlying liver disease was ruled out.

MRI showed a large heterogeneous lesion measuring 20 cm × 16 cm × 16 cm occupying the entire RL and preserving segments II, III, and inferior IV (Table 1). The tumor involved the right and middle hepatic veins, as well as the segment IV and right branches of the portal vein (PV). The left hepatic vein and the left branch of the PV were in close contact with the lesion (Figure 1(b)). Estimated FLR volume was 180 cm3, representing 28% of the TLV and 0.64% of the patient's body weight.

A liver biopsy was performed and showed features compatible with HCA, and the IHQ analysis identified an inflammatory subtype.

Because of the close contact of the mass with major vessels, a backup live donor was evaluated. The donor was a healthy 31-year-old female, ABO compatible.
The estimated volume of the donor's left liver was 450 cm3, corresponding to 33% of her TLV and a graft-to-recipient weight ratio (GRWR) of 1.5%.

The tumor mass occupied almost the entire liver, preserving only segments II, III, and IVb (Figure 1(c)). Intraoperative ultrasound demonstrated left PV involvement, compromising the FLR inflow. Based on these intraoperative findings, the decision was made to abort the planned liver resection, and a total hepatectomy followed by living donor liver transplantation (LDLT) was performed in January 2012.

A left liver graft was implanted with cold and warm ischemia times of 55 and 25 minutes, respectively. The operation lasted 10 hours. Both the donor and recipient postoperative courses were uneventful, and they were discharged after 6 and 11 days, respectively. Histopathological analysis confirmed the diagnosis and did not demonstrate signs of malignant transformation. Currently, the recipient is in the tenth year of follow-up, clinically stable, with normal liver tests.

### 1.3. Patient Three

A 40-year-old female, weighing 95 kg, presented with an eight-month history of abdominal pain. Oral contraceptives had been used for 20 years.

On physical examination, the patient was overweight, with a body mass index of 32.5 kg/m2. Liver function tests and serum AFP levels were normal. Underlying liver disease was ruled out.

An abdominal MRI showed the absence of underlying chronic liver disease and a solitary lesion with HCA characteristics measuring 4.6 cm in the caudate lobe.

The patient was instructed to discontinue oral contraceptives, lose weight, and repeat the imaging exams. After eight months, an abdominal MRI showed a solid hypervascularized lesion measuring 9.3 cm (Table 1), with involvement of the middle and right hepatic veins and close contact with the left hepatic vein (Figure 1(d)). Hepatic volumetry showed an FLR (segments II and III) of 291 cm3, corresponding to 12.5% of the TLV and 0.31% of the patient's body weight.
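The volumetric ratios quoted for each patient (FLR as a percentage of TLV, and liver volume relative to body weight, the same arithmetic that underlies the donor GRWR) are simple proportions. A minimal sketch of that arithmetic, under the common approximation that 1 cm3 of liver tissue weighs about 1 g; the 750 cm3 TLV below is back-calculated from patient 1's reported figures, not stated in the report:

```python
def flr_percent(flr_cm3: float, tlv_cm3: float) -> float:
    """Future liver remnant (FLR) as a percentage of total liver volume (TLV)."""
    return 100.0 * flr_cm3 / tlv_cm3

def volume_to_body_weight_percent(volume_cm3: float, body_weight_kg: float) -> float:
    """Liver volume relative to body weight (%), assuming 1 cm3 ~ 1 g of tissue."""
    return 100.0 * (volume_cm3 / 1000.0) / body_weight_kg

# Patient 1 before PVE: 105 cm3 FLR, ~750 cm3 TLV (back-calculated), 22 kg child
print(flr_percent(105, 750))                             # 14.0
print(round(volume_to_body_weight_percent(105, 22), 2))  # 0.48 (reported as 0.47%)
```

By the same approximation, patient 2's donor graft (450 cm3 for a 28 kg recipient) works out to roughly 1.6%, in line with the 1.5% GRWR reported, which was presumably based on the measured graft weight.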
The patient underwent a percutaneous biopsy, which revealed an HCA-inflammatory subtype. PVE of the right and segment IV portal branches was performed through a percutaneous ipsilateral approach. No complications were observed after PVE. CT performed 6 weeks later showed an FLR volume of 761 cm3, corresponding to 32.7% of the TLV and 0.82% of the patient's body weight.

Due to the close proximity of the tumor to the FLR left hepatic vein, a potential live donor was evaluated and worked up as part of a backup plan. A 19-year-old male (the patient’s son) who was ABO compatible was selected as a live donor. General tests used for donor preparation were normal.

The patient underwent surgical resection in September 2015, and the remnant left liver was macroscopically normal. Intraoperative ultrasound showed no involvement of the left hepatic vein by the tumor. Extended hepatectomy with caudate resection was performed. Histopathological analysis confirmed the diagnosis and showed no signs of malignant transformation. Currently, 6 years after surgery, she is in good general condition with normal liver function tests. An abdominal MRI performed fifteen months after the operation showed a hypertrophied left liver without residual tumor.
## 2. Discussion

Recent advances in imaging, as well as the acquisition of new molecular biology tools to complement the diagnosis of HCA subtypes [6, 7], have contributed to a greater understanding of the evolution of these tumors and, consequently, to the development of good clinical practices for their management.

The surgical approach is recommended for asymptomatic patients with HCAs that are larger than 5 cm, harbor hepatocellular carcinoma or dysplastic foci, are β-catenin activated, or show increasing size, imaging features of malignant transformation, or rising AFP, as well as for male patients and patients with glycogen storage disease [8–10].

Symptoms of pain and a large liver mass led to the decision to proceed with surgery in the patients described herein. The fact that the lesions were benign made the decision to proceed with surgery more difficult, because the central location of the tumors creates a technical challenge.

The management of patients with HCA is a controversial topic. A few publications and small patient series report complications in the evolution of these patients. Hemorrhage is the most prevalent complication in published studies and shows a direct correlation with the size and superficial location of these lesions [3, 4]. There is a tendency in the literature to recommend LT for patients with adenomatosis for whom surgical resection is complex [11, 12]. The largest report on LT for adenomatosis is from the European group: of 49 patients with histologically confirmed adenomatosis (33% with GSD), 17 (34.6%) presented with HCC in liver explants, and a total of 8 (16.3%) patients died after LT during follow-up [13]. Chiche et al. reported follow-up in 8 patients with adenomatosis; in 2 of them, liver transplantation was necessary due to symptoms of pain and hepatomegaly [14].

In patients with large tumors (isolated or multiple) or with tumors in close proximity to vascular structures, resection is associated with perioperative morbidity and eventual mortality. These outcomes are related not only to the proximity of the tumors to the hepatic inflow and outflow vessels but also to a small FLR and the potential for postoperative liver failure.

It has been reported that patients who received LT for unresectable HCA typically showed a small FLR at the time of resection [12]; however, LT must be reserved for patients with irrefutably unresectable tumors, because of the side effects associated with lifetime immunosuppressive therapy. Therefore, surgical resection remains the mainstay of curative treatment.

Currently, the clinical use of techniques that allow compensatory hypertrophy of the FLR, such as PVE, has become pivotal in preparing for major liver resections [15–17], preventing posthepatectomy liver failure in patients with a small FLR [18, 19]. In the patients who underwent PVE (patients 1 and 3), an increase in FLR volume was observed, which enabled total resection of the tumor to be performed safely.

However, PVE is an invasive procedure that is not free of risk. The most common complication of percutaneous transhepatic procedures is hemorrhage, which has been reported in 2-4% of patients. When there is massive bleeding, transarterial embolization is the most effective treatment, as occurred in patient 1 [20].

All patients in this series underwent surgical treatment, and a living donor served as a backup strategy because of uncertainties regarding tumor resectability and the viability of the FLR.

Although the patients who underwent PVE achieved a final FLR size sufficient for their metabolic demands after surgery, a living donor was kept as a backup strategy because of the close proximity of the lesion to the FLR vessels. In both patients, complete lesion resection was achieved without the need to use the living donor.

LDLT is well suited to this backup role because the procedure can be scheduled. For potentially resectable large tumors, this strategy adds a particular benefit: technical difficulties during tumor resection can delay implantation and prolong cold ischemia time when grafts from deceased donors are used, which often precludes the success of the LT. In this series, only one patient (patient 2) actually underwent LT, after intraoperative evidence of unresectability due to vascular involvement of the FLR. The living donor backup procedure is viable only for medical groups with proven experience in complex liver resection and LDLT. Our group has extensive experience with LDLT, with no procedure-related mortality and a low incidence of severe complications [5].

Thus, the living donor backup procedure should be part of the treatment strategy in complex resections for benign lesions when resectability is unclear, ensuring the safety of the liver resection.

## 3. Conclusion

HCA is a benign disease, and there are challenging aspects to the surgical treatment strategy for patients with large and central HCAs that require complex liver resections. Extended hepatectomy linked to the donor backup procedure provides security in complex liver resections and should become part of the surgical treatment approach for these patients. Additionally, this approach should be performed in centers with personnel who have expertise in hepatobiliary surgery and liver transplantation using a living donor.

---
*Source: 1015061-2022-02-17.xml*
1015061-2022-02-17_1015061-2022-02-17.md
24,166
Living Donor Liver Transplantation as a Backup Procedure: Treatment Strategy for Hepatocellular Adenomas Requiring Complex Resections
Eduardo A. Fonseca; Flavia Feier; Rodrigo Vincenzi; Helry L. L. Candido; Rodrigo L. Azambuja; Fabio Payao; Marcel R. Benavides; Karina M. O. Roda; Katia M. R. Leite; Cristiane M. F. Ribeiro; Maria D. Begnami; Charles E. Zurstrassen; Francisco C. Carnevale; Paulo Chapchap; João Seda-Neto
Case Reports in Surgery (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1015061
1015061-2022-02-17.xml
--- ## Abstract Background & Aims. The most dangerous complications of hepatocellular adenomas are hemorrhage and malignant transformation, both of which require surgical treatment. The surgical treatment strategy for patients with benign large or central tumors is challenging because complex liver resections are required. The strategy of using a live donor as a backup procedure is described in this series. Methods. We present a series of three patients with large hepatocellular adenoma lesions showing a central location, for which the living donor liver transplantation strategy was used as a backup procedure. Results. Hepatocellular adenoma was confirmed by biopsy in all patients. Surgical resection was indicated because of the patients’ symptoms and lesion size and growth. All patients had a lesion that was central or in close contact with major vessels. The final decision to proceed with the resection was made intraoperatively. A live donor was prepared for all three patients. Two patients underwent portal vein embolization associated with extended hepatectomy, and a total hepatectomy plus liver transplantation with a living donor was performed in one patient. All patients had good postoperative outcomes. Conclusions. In the treatment of hepatocellular adenomas for which complex resections are necessary and resectability can only be confirmed intraoperatively, surgical safety can be improved through the use of a living donor backup. Center expertise with living donor liver transplantation is paramount for the success of this approach. --- ## Body ## 1. Introduction A hepatocellular adenoma (HCA) is a benign tumor that typically develops in a healthy liver. This condition usually occurs in women between 15 and 45 years of age and is associated with the use of oral contraceptives [1, 2]. 
The most dangerous complications associated with HCA are hemorrhage and malignant transformation, both of which require surgical treatment [3, 4].Liver transplantation (LT) is an alternative for patients with HCA or liver adenomatosis deemed unresectable, or when other treatment strategies have failed. However, large or central tumors usually require complex hepatectomies, with the potential risk of damaging the future liver remnant (FLR). Having a back-up living donor for transplantation can inform the most appropriate strategy for extreme resections.We report a series of three patients with large HCA who were taken to the operating room with a living donor prepared for a backup procedure in case resectability was not possible. ### 1.1. Patient One A 12-year-old girl, weighing 22 kg, presented with a 2-year history of pruritus and a right upper quadrant mass. The patient’s history did not include the use of hormones, and she did not present with jaundice, fecal acholia, or coluria. All laboratory liver tests and alpha-fetoprotein (AFP) levels were normal. Underling liver disease was ruled out.Magnetic resonance imaging (MRI) showed a large heterogeneous hypointense mass measuring23cm×15.3cm×12.4cm that compromised the right lobe (RL) of the liver, segments IV and I (Table 1). Additionally, the mass was in close contact with the FLR vessels (Figure 1(a)). FLR volume was of 105 cm3, representing 14% of the total liver volume (TLV), and 0.47% of the patient weight ratio.Table 1 Patients’ characteristics. 
Patient 1Patient 2Patient 3Gender, age (Y)Female, 12Female, 12Female, 40Tumor diameter (cm)23209.3Liver segments involvedI, IV, to VIIIIVa, V, to VIIII, V, to VIIIFLR (%)142812.5PVEYesNoYesTreatmentExtended right hepatectomyLDLT–back-upExtended right hepatectomyFollow-up5 y10 y6 yFLR: future liver remnant; PVE: portal vein embolization; LDLT: living donor liver transplantation.Figure 1 (a) MRI images showing a large heterogeneous mass compromising the RL and segment IV and in close contact with the FLR vessels (Arrow). (b) MRI image showing a large heterogeneous lesion occupying the whole RL and preserving segments II, III, and inferior IV. The left hepatic vein (arrow 1) and the left branch of the portal vein (arrow 2) are in close contact with the lesion. (c) Intraoperative aspect: tumor mass occupying almost the entire liver, preserving only segments II, III, and inferior IV. (d) MRI with hepatospecific contrast agent showing a lesion without contrast enhancement on the hepatobiliary phase in the caudate lobe with involvement of the middle and right hepatic veins and in close contact with the left hepatic vein. (a)(b)(c)(d)A liver biopsy of the mass was performed and showed features compatible with HCA, and the immunohistochemical (IHQ) result showed the HNF1-mutated subtype.A right portal vein embolization (PVE) was performed to increase the FLR, thus allowing the extended hepatectomy.On the third day after PVE, the patient presented with abdominal pain, hypotension, and a drop in serum hemoglobin concentration. Computerized tomography (CT) suggested intralesional active arterial bleeding, and the patient underwent a transarterial embolization. A new CT scan was performed seven weeks later and showed an FLR of 504 cm3, which corresponded to 40% of the TLV and 2.3% of the patient weight ratio.Due to the close contact of the mass with major vessels, the assessment of potential living related donors for a backup procedure was performed. 
A 35-year-old female, a related family member who was ABO compatible, was selected as a living donor and prepared according to a previously published protocol [5].An extended right hepatectomy with caudate resection was performed in March/2016, and the entire tumor was removed with no need to proceed with the LT. Histopathological analysis confirmed the diagnosis and had no signs of malignant transformation. The patient has been followed for 5 years since surgery and is clinically well, with normal liver function tests. A follow-up CT scan showed no residual tumor. ### 1.2. Patient Two A 12-year-old girl, weighting 28 kg, presented with a six-month history of pain and abdominal growth associated with vomiting and progressive shortness of breath, featuring abdominal compartment syndrome.Liver function tests and serum AFP levels were normal. Underling liver disease was ruled out.MRI showed a large heterogeneous lesion measuring20cm×16cm×16cm occupying the entire RL and preserving segments II, III, and inferior IV (Table 1). The tumor involved the right and medium hepatic veins, as well as segment IV and the right branches of the portal vein (PV). The left hepatic vein and the left branch of the PV had close contact with the lesion (Figure 1(b)). Estimated FLR volume was of 180 cm3, which represented 28% of the TLV and 0.64% of the patient weight ratio.A liver biopsy was performed and showed features compatible with HCA, and the IHQ analysis identified an inflammatory subtype.Because of the close contact of the mass with major vessels, a back-up live donor was evaluated. The donor was a healthy 31-year-old female, ABO compatible. The estimated volume of the left liver was of 450 cm3, which corresponded to 33% of the TLV and a graft to recipient weight ratio (GRWR) of 1.5%.The tumor mass occupied almost the entire liver, preserving only segments II, III, and IVb (Figure1(c)). The intraoperative ultrasound demonstrated left PV involvement, compromising the FLR inflow. 
Based on these intraoperative results, the decision was made to abort the planned liver resection, and a total hepatectomy followed by living donor liver transplant (LDLT) was performed in January/2012.A left liver graft was implanted with a cold and warm ischemia time of 55 and 25 minutes, respectively. The operation lasted 10 hours. Both the donor and recipient postoperative courses were uneventful, and the patients were discharged after 6 and 11 days, respectively. Histopathological analysis confirmed the diagnosis and did not demonstrate signs of malignant transformation. Currently, the recipient is in the tenth year of follow-up, clinically stable, and with normal liver tests. ### 1.3. Patient Three A 40-year-old female, weighting 95 kg, presented with an eight-month history of abdominal pain. Oral contraceptives had been used for 20 years.On physical examination, the patient was overweight, with a body mass index of 32.5 kg/m2. Liver function tests and serum AFP levels were normal. Underling liver disease was ruled out.An abdominal MRI showed the absence of underlying chronic liver disease and a solitary lesion with HCA characteristics measuring 4.6 cm in the caudate lobe.The patient was instructed to discontinue oral contraceptives, lose weight, and repeat the imaging exams. After eight months, an abdominal MRI showed a solid hypervascularized lesion measuring 9.3 cm (Table1), with involvement of the middle and right hepatic veins and close contact with the left hepatic vein (Figure 1(d)). The hepatic volumetry showed an FLR (II and III) of 291 cm3, corresponding to 12.5% of the TLV and 0.31% of the patient weight ratio. The patient underwent a percutaneous biopsy, which revealed an HCA-inflammatory subtype. PVE of the right and segment IV portal branches was performed through a percutaneous ipsilateral approach. No complications were observed after PVE. 
## 1.1. Patient One

A 12-year-old girl, weighing 22 kg, presented with a 2-year history of pruritus and a right upper quadrant mass. The patient’s history did not include the use of hormones, and she did not present with jaundice, fecal acholia, or choluria. All laboratory liver tests and alpha-fetoprotein (AFP) levels were normal. Underlying liver disease was ruled out. Magnetic resonance imaging (MRI) showed a large heterogeneous hypointense mass measuring 23 cm × 15.3 cm × 12.4 cm that compromised the right lobe (RL) of the liver and segments IV and I (Table 1). Additionally, the mass was in close contact with the future liver remnant (FLR) vessels (Figure 1(a)). The FLR volume was 105 cm3, representing 14% of the total liver volume (TLV) and 0.47% of the patient weight ratio. Table 1 Patients’ characteristics.
| | Patient 1 | Patient 2 | Patient 3 |
|---|---|---|---|
| Gender, age (y) | Female, 12 | Female, 12 | Female, 40 |
| Tumor diameter (cm) | 23 | 20 | 9.3 |
| Liver segments involved | I, IV to VIII | IVa, V to VIII | I, V to VIII |
| FLR (%) | 14 | 28 | 12.5 |
| PVE | Yes | No | Yes |
| Treatment | Extended right hepatectomy | LDLT–back-up | Extended right hepatectomy |
| Follow-up | 5 y | 10 y | 6 y |

FLR: future liver remnant; PVE: portal vein embolization; LDLT: living donor liver transplantation.

Figure 1 (a) MRI images showing a large heterogeneous mass compromising the RL and segment IV and in close contact with the FLR vessels (arrow). (b) MRI image showing a large heterogeneous lesion occupying the whole RL and preserving segments II, III, and inferior IV. The left hepatic vein (arrow 1) and the left branch of the portal vein (arrow 2) are in close contact with the lesion. (c) Intraoperative aspect: tumor mass occupying almost the entire liver, preserving only segments II, III, and inferior IV. (d) MRI with hepatospecific contrast agent showing a lesion without contrast enhancement on the hepatobiliary phase in the caudate lobe with involvement of the middle and right hepatic veins and in close contact with the left hepatic vein.

A liver biopsy of the mass was performed and showed features compatible with hepatocellular adenoma (HCA), and the immunohistochemical (IHQ) result showed the HNF1-mutated subtype. A right portal vein embolization (PVE) was performed to increase the FLR, thus allowing the extended hepatectomy. On the third day after PVE, the patient presented with abdominal pain, hypotension, and a drop in serum hemoglobin concentration. Computerized tomography (CT) suggested intralesional active arterial bleeding, and the patient underwent transarterial embolization. A new CT scan performed seven weeks later showed an FLR of 504 cm3, which corresponded to 40% of the TLV and 2.3% of the patient weight ratio. Due to the close contact of the mass with major vessels, potential living related donors were assessed for a backup procedure.
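The remnant-volume figures quoted for each patient reduce to two simple ratios: the FLR as a fraction of the TLV, and the FLR relative to body weight. A minimal sketch in Python; the 1 cm3 ≈ 1 g liver-tissue equivalence and the back-calculated 750 cm3 TLV are our assumptions for illustration, not figures stated in the report:

```python
def flr_ratios(flr_cm3, tlv_cm3, body_weight_kg):
    """Future-liver-remnant (FLR) ratios used throughout the report.

    Returns (FLR as % of total liver volume, FLR-to-body-weight ratio in %).
    Assumes 1 cm3 of liver parenchyma weighs roughly 1 g, so volume in cm3
    can stand in for remnant mass in grams.
    """
    pct_tlv = 100.0 * flr_cm3 / tlv_cm3
    weight_ratio_pct = 100.0 * flr_cm3 / (body_weight_kg * 1000.0)
    return pct_tlv, weight_ratio_pct

# Patient 1 before PVE: FLR 105 cm3, reported as 14% of TLV (so TLV ~ 750 cm3),
# body weight 22 kg
pct_tlv, wr = flr_ratios(105, 750, 22)
print(f"{pct_tlv:.0f}% of TLV, {wr:.2f}% of body weight")
```

With the post-PVE values (FLR 504 cm3 against the correspondingly larger hypertrophied volume), the same function reproduces the reported 40% of TLV and roughly 2.3% weight ratio.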
A 35-year-old female, a related family member who was ABO compatible, was selected as a living donor and prepared according to a previously published protocol [5]. An extended right hepatectomy with caudate resection was performed in March 2016, and the entire tumor was removed with no need to proceed with liver transplantation (LT). Histopathological analysis confirmed the diagnosis and showed no signs of malignant transformation. The patient has been followed for 5 years since surgery and is clinically well, with normal liver function tests. A follow-up CT scan showed no residual tumor.

## 1.2. Patient Two

A 12-year-old girl, weighing 28 kg, presented with a six-month history of pain and abdominal growth associated with vomiting and progressive shortness of breath, featuring abdominal compartment syndrome. Liver function tests and serum AFP levels were normal. Underlying liver disease was ruled out. MRI showed a large heterogeneous lesion measuring 20 cm × 16 cm × 16 cm occupying the entire RL and preserving segments II, III, and inferior IV (Table 1). The tumor involved the right and middle hepatic veins, as well as segment IV and the right branches of the portal vein (PV). The left hepatic vein and the left branch of the PV were in close contact with the lesion (Figure 1(b)). The estimated FLR volume was 180 cm3, which represented 28% of the TLV and 0.64% of the patient weight ratio. A liver biopsy was performed and showed features compatible with HCA, and the IHQ analysis identified an inflammatory subtype. Because of the close contact of the mass with major vessels, a back-up live donor was evaluated. The donor was a healthy 31-year-old female, ABO compatible. The estimated volume of the left liver was 450 cm3, which corresponded to 33% of the TLV and a graft-to-recipient weight ratio (GRWR) of 1.5%. The tumor mass occupied almost the entire liver, preserving only segments II, III, and IVb (Figure 1(c)). Intraoperative ultrasound demonstrated left PV involvement, compromising the FLR inflow.
Based on these intraoperative findings, the decision was made to abort the planned liver resection, and a total hepatectomy followed by living donor liver transplantation (LDLT) was performed in January 2012. A left liver graft was implanted, with cold and warm ischemia times of 55 and 25 minutes, respectively. The operation lasted 10 hours. Both the donor and recipient postoperative courses were uneventful, and the patients were discharged after 6 and 11 days, respectively. Histopathological analysis confirmed the diagnosis and did not demonstrate signs of malignant transformation. Currently, the recipient is in the tenth year of follow-up, clinically stable, and with normal liver tests.

## 1.3. Patient Three

A 40-year-old female, weighing 95 kg, presented with an eight-month history of abdominal pain. Oral contraceptives had been used for 20 years. On physical examination, the patient was overweight, with a body mass index of 32.5 kg/m2. Liver function tests and serum AFP levels were normal. Underlying liver disease was ruled out. An abdominal MRI showed the absence of underlying chronic liver disease and a solitary lesion with HCA characteristics measuring 4.6 cm in the caudate lobe. The patient was instructed to discontinue oral contraceptives, lose weight, and repeat the imaging exams. After eight months, an abdominal MRI showed a solid hypervascularized lesion measuring 9.3 cm (Table 1), with involvement of the middle and right hepatic veins and close contact with the left hepatic vein (Figure 1(d)). Hepatic volumetry showed an FLR (segments II and III) of 291 cm3, corresponding to 12.5% of the TLV and 0.31% of the patient weight ratio. The patient underwent a percutaneous biopsy, which revealed an HCA-inflammatory subtype. PVE of the right and segment IV portal branches was performed through a percutaneous ipsilateral approach. No complications were observed after PVE.
CT performed 6 weeks later showed an FLR volume of 761 cm3, corresponding to 32.7% of the TLV and 0.82% of the patient weight ratio. Due to the close proximity of the tumor to the FLR left hepatic vein, a potential live donor was evaluated and worked up as part of a backup plan. A 19-year-old male who was ABO compatible (the patient’s son) was selected as a live donor. General tests used for donor preparation were normal. The patient underwent surgical resection in September 2015, and the remnant left liver was macroscopically normal. Intraoperative ultrasound showed no involvement of the left hepatic vein by the tumor. An extended hepatectomy with caudate resection was performed. Histopathological analysis confirmed the diagnosis and showed no signs of malignant transformation. Currently, 6 years after surgery, she is in good general condition with normal liver function tests. An abdominal MRI performed fifteen months after the operation showed a hypertrophied left liver without residual tumor.

## 2. Discussion

Recent advances in imaging, as well as the acquisition of new molecular biology tools to complement the diagnosis of HCA subtypes [6, 7], have contributed to a greater understanding of the evolution of these tumors and, consequently, to the development of good clinical practices for their management. The surgical approach is recommended for asymptomatic patients with the following HCA conditions: tumors larger than 5 cm, harboring hepatocellular carcinoma (HCC) or dysplastic foci, β-catenin activation, increasing size or imaging features of malignant transformation, rising AFP, male sex, and glycogen storage disease (GSD) [8–10]. Symptoms of pain and a large liver mass led to the decision to proceed with surgery in the patients described herein.
The fact that the lesion was benign made the decision to proceed with surgery more difficult, because the central location of the tumors creates a technical challenge. The management of patients with HCA is a controversial topic. A few publications and small patient series report complications in the evolution of these patients. Hemorrhage is the most prevalent complication in published studies and shows a direct correlation with the size and superficial location of these lesions [3, 4]. There is a tendency in the literature to recommend LT for patients with adenomatosis for whom surgical resection is complex [11, 12]. The largest report on LT for adenomatosis is from the European group: of 49 patients with histologically confirmed adenomatosis (33% with GSD), 17 (34.6%) presented with HCC in liver explants, and a total of 8 (16.3%) patients died after LT during follow-up [13]. Chiche et al. reported follow-up in 8 patients with adenomatosis; in 2 of them, liver transplantation was necessary because of symptoms of pain and hepatomegaly [14]. In patients with large tumors (isolated or multiple) or with tumors in close proximity to vascular structures, resection is associated with perioperative morbidity and eventual mortality. These outcomes are related not only to the proximity of the tumors to the hepatic inflow and outflow vessels but also to the small FLR and the potential for postoperative liver failure. It has been reported that patients who received LT for unresectable HCA typically showed a small FLR at the time of resection [12]; however, LT must be reserved for patients with irrefutably unresectable tumors, because of the side effects associated with lifelong immunosuppressive therapy.
Therefore, surgical resection remains the mainstay of curative treatment. Currently, the clinical use of techniques that allow compensatory hypertrophy of the FLR, such as PVE, has become pivotal in preparing for major liver resections [15–17], preventing posthepatectomy liver failure in patients with a small FLR [18, 19]. In the patients who underwent PVE (patients 1 and 3), an increase in FLR volume was observed, which enabled total resection of the tumor to be performed safely. However, PVE is an invasive procedure that is not free of risks. The most common complication of percutaneous transhepatic procedures is hemorrhage, reported in 2-4% of patients. When there is massive bleeding, transarterial embolization is the most effective treatment, as occurred in patient 1 [20]. All patients in this series underwent surgical treatment, with a living donor serving as a backup strategy because of uncertainties regarding tumor resectability and the viability of the FLR. Although the patients who underwent PVE achieved a final FLR size sufficient for their metabolic demands after surgery, a living donor was kept as a backup because of the close proximity of the lesion to the vessels of the FLR. In both patients, complete lesion resection was achieved without the need to use the living donor. LDLT is the more appropriate option for this strategy because the procedure can be scheduled. In potentially resectable large tumors, this approach adds a particular benefit given the technical difficulties that may arise during tumor resection; with grafts from deceased donors, delays can prolong the cold ischemia time, which often precludes the success of the LT. In this series, only one patient actually underwent LT (patient 2), who presented evidence of unresectability because of vascular involvement of the FLR.
The living donor backup procedure is viable only for medical groups with proven experience in complex liver resection and LDLT. Our group has extensive experience with LDLT, with no procedure-related mortality and a low incidence of severe complications [5]. Thus, the living donor backup procedure should be part of the treatment strategy in complex resections for benign lesions when resectability is unclear, ensuring the safety of the liver resection.

## 3. Conclusion

HCA is a benign disease, and there are challenging aspects to the surgical treatment strategy for patients with large and central HCAs that require complex liver resections. Extended hepatectomy linked to the donor backup procedure provides security in complex liver resections and should become part of the surgical treatment approach for these patients. Additionally, this approach should be performed in centers with personnel who have expertise in hepatobiliary surgery and liver transplantation using a living donor.

---

*Source: 1015061-2022-02-17.xml*
# Applied Pressure on Altering the Nano-Crystallization Behavior of Al86Ni6Y4.5Co2La1.5 Metallic Glass Powder during Spark Plasma Sintering and Its Effect on Powder Consolidation **Authors:** X. P. Li; M. Yan; G. Ji; M. Qian **Journal:** Journal of Nanomaterials (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/101508 --- ## Abstract Metallic glass powder of the composition Al86Ni6Y4.5Co2La1.5 was consolidated into 10 mm diameter samples by spark plasma sintering (SPS) at different temperatures under an applied pressure of 200 MPa or 600 MPa. The heating rate and isothermal holding time were fixed at 40°C/min and 2 min, respectively. Fully dense bulk metallic glasses (BMGs) free of particle-particle interface oxides and nano-crystallization were fabricated under 600 MPa. In contrast, residual oxides were detected at particle-particle interfaces (enriched in both Al and O) when fabricated under a pressure of 200 MPa, indicating the incomplete removal of the oxide surface layers during SPS at a low pressure. Transmission electron microscopy (TEM) revealed noticeable nano-crystallization of face-centered cubic (fcc) Al close to such interfaces. Applying a high pressure played a key role in facilitating the removal of the oxide surface layers and therefore full densification of the Al86Ni6Y4.5Co2La1.5 metallic glass powder without nano-crystallization. It is proposed that applied high pressure, as an external force, assisted in the breakdown of surface oxide layers that enveloped the powder particles in the early stage of sintering. This, together with the electrical discharge during SPS, may have benefitted the viscous flow of metallic glasses during sintering. --- ## Body ## 1. Introduction Metallic glasses (MGs) have been investigated for decades due to their intrinsically unique physical and chemical properties [1]. 
Al-based MGs are promising advanced materials that have attracted increasing attention for their ultrahigh specific strength and relatively low cost compared with most other MGs [2]. However, due to their low glass forming ability (GFA), fabrication of Al-based BMGs through a conventional cooling process from the liquid has proved challenging [3–5]. Although Al-based MGs were first reported in 1988 [7], the first conceptual Al-based BMG, 1 mm in diameter, was not fabricated until 2009, using a copper mold casting approach [6], and that alloy remains the best glass-forming Al-based BMG to date. The slow development of Al-based BMGs in terms of their GFA impedes the potential application of these materials. Since MG powder can be readily prepared by gas atomization [8], powder metallurgy (PM), especially the spark plasma sintering (SPS) technique, offers an alternative route to the fabrication of BMGs. Fully dense Ti-, Ni-, Cu-, and Fe-based BMGs with >10 mm diameters have been fabricated using SPS [9–12]. These MGs have much higher glass transition temperatures (Tg) [1, 3] than Al-based MGs and can therefore be readily consolidated at high sintering temperatures without nano-crystallization. For Al-based BMGs, because their Tg temperatures are generally <300°C, nano-crystallization occurs readily during SPS. Hence, few studies have succeeded in fabricating fully dense Al-based BMGs without crystallization [13–15]. On the other hand, a previous study [16] revealed that MG powder is enveloped by an oxide layer that inhibits the viscous flow of the amorphous material required for full densification. As a result, it is essential to remove this surface oxide layer to enable viscous flow for full densification of Al-based MG powder at low temperatures. It has been proposed [17] that the electrical discharge during SPS has a cleaning effect that can help remove the surface oxide layers on metallic powders.
In general, the higher the heating rate during SPS, the more effective this cleaning effect is [18]. However, due to the low Tg of Al-based MGs (<300°C), it is difficult to accurately control the temperature rise and avoid overshoot when a very high heating rate (>40°C/min) is used. Consequently, it is necessary to consider other options, such as the use of high pressure, to assist in the breakdown of the surface oxide layers that envelope the powder particles. In addition, applying high pressure during SPS is expected to favor viscous flow between the Al-based MG powder particles for enhanced densification. No study has yet examined the role of high applied pressure during the SPS of Al-based MG powder from these two perspectives. In this study, a 10 mm diameter (Φ10 mm) Al86Ni6Y4.5Co2La1.5 BMG was fabricated using SPS. The influence of applied pressure on the densification of Al86Ni6Y4.5Co2La1.5 MG powder was investigated through detailed characterization of the as-sintered samples using scanning electron microscopy (SEM) and transmission electron microscopy (TEM), focusing on selected particle-particle interfaces. The underlying reasons are discussed.

## 2. Experimental Procedure

Nitrogen-gas-atomized Al86Ni6Y4.5Co2La1.5 MG powder was used. To ensure a fully amorphous state, only powder particles finer than 25 μm in diameter were used, based on a previous study of the powder [8, 17]. The amorphous nature of the selected powder was further confirmed by X-ray diffraction (XRD) (D/max III, CuKα target, operated at 40 kV and 60 mA). The surfaces of the starting powder were studied using X-ray photoelectron spectroscopy (XPS) (Kratos Axis ULTRA XPS, monochromatic Al X-ray, C 1s at 285 eV used as a standard). XPS survey scans were taken at an analyzer pass energy of 160 eV over the binding energy range of 1200–0 eV, with 1.0 eV steps and 100 ms dwell time at each step.
The base pressure in the analysis chamber was maintained in the range of 1.33 × 10−7 Pa to 1.33 × 10−6 Pa during analysis. The SPS experiments were conducted on an SPS-1030 made by SPS SYNTEX INC., Japan. A tungsten carbide (WC) die (outer diameter 30 mm, inner diameter 10 mm, height 20 mm) was used. The Tg temperature of the Al86Ni6Y4.5Co2La1.5 MG powder varies with heating rate and was recorded to be 270°C at 40°C/min in argon [8]. To avoid temperature overshoot and maximize the cleaning effect of SPS, the heating rate was fixed at 40°C/min, based on a few preliminary heating trials with the SPS machine. The isothermal holding time was fixed at 2 min. To study the influence of sintering temperature and pressure on the densification of the Al86Ni6Y4.5Co2La1.5 MG powder, a range of sintering temperatures was chosen: 248.5, 258.5, 268.5, 278.5, 288.5, 298.5, and 308.5°C. The applied pressure was 200 MPa or 600 MPa. Table 1 summarizes the sintering parameters used.

Table 1 SPS experimental parameters used for this study.

| Parameter | Value |
|---|---|
| Sintering temperature (°C) | 248.5, 258.5, 268.5, 278.5, 288.5, 298.5, 308.5 |
| Pressure (MPa) | 200 or 600 |
| Holding time (min) | 2 |
| Heating rate (°C/min) | 40 |

The sintered density was measured using the Archimedes method. The SPS-processed samples were cut, ground, and polished. They were then characterized using SEM (JEOL 7001F, accelerating voltage 15 kV, working distance 10 mm) and TEM (JEOL JEM 2100, operated at 200 kV); the TEM samples were prepared using a precision ion polishing system (Gatan PIPS, operated at −50°C).

## 3. Results and Discussion

Figure 1(a) shows the morphology of the starting powder, and the XPS results are shown in Figure 1(b). Strong oxide signals are detected on the MG powder surfaces, consistent with the observations reported by Yan et al. [16].
An SPS-processed 10 mm diameter Al86Ni6Y4.5Co2La1.5 BMG sample (4 mm in height) is shown in Figure 1(c); it was fabricated by heating the powder to 248.5°C at 40°C/min and holding at temperature for 2 min under 600 MPa. The XRD results shown in Figure 1(d) indicate that the as-sintered Al86Ni6Y4.5Co2La1.5 BMG is essentially amorphous.

Figure 1 (a) SEM image of the starting Al86Ni6Y4.5Co2La1.5 MG powder used for fabrication; (b) XPS survey spectra of the Al86Ni6Y4.5Co2La1.5 MG powder; (c) a 10 mm diameter Al86Ni6Y4.5Co2La1.5 BMG disk (thickness: 4 mm) (sintering conditions: 2 min at 248.5°C under 600 MPa); and (d) XRD pattern of the fabricated Al86Ni6Y4.5Co2La1.5 sample.

Figure 2 shows the density of SPS-processed Al86Ni6Y4.5Co2La1.5 BMGs achieved at different sintering temperatures and pressures. Under an applied pressure of 200 MPa, the density of the BMGs increased with increasing sintering temperature from 248.5°C to 278.5°C. However, further increasing the sintering temperature to 308.5°C, which is above the Tg of the alloy and also above the peak temperature of the first crystallization stage of the Al86Ni6Y4.5Co2La1.5 MG powder [8], produced little further increase in sintered density. In contrast, increasing the applied pressure from 200 MPa to 600 MPa led to a substantial increase in the sintered density at each of the three temperatures tested. Increasing pressure was much more effective than increasing sintering temperature, implying that sintering pressure plays a key role in the densification of Al86Ni6Y4.5Co2La1.5 MG powder during SPS. In fact, the near-full density achieved by increasing pressure is difficult to attain by increasing sintering temperature alone without nano-crystallization.
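The sintered densities plotted in Figure 2 were obtained by the Archimedes method described in Section 2, which reduces to a one-line calculation. A minimal sketch; the weighing values and the 3.10 g/cm3 theoretical density below are illustrative placeholders, not data from the paper:

```python
WATER_DENSITY_G_CM3 = 0.9982  # distilled water near 20 °C

def archimedes_density(mass_in_air_g, apparent_mass_in_water_g,
                       rho_water=WATER_DENSITY_G_CM3):
    """Bulk density via the Archimedes principle.

    The buoyancy loss (mass in air minus apparent mass in water) equals
    the mass of displaced water, which yields the sample volume.
    """
    volume_cm3 = (mass_in_air_g - apparent_mass_in_water_g) / rho_water
    return mass_in_air_g / volume_cm3

# Illustrative weighing results (not from the paper)
rho = archimedes_density(1.200, 0.810)
relative_density = rho / 3.10  # 3.10 g/cm3: hypothetical theoretical density
print(f"{rho:.3f} g/cm3, {100 * relative_density:.1f}% dense")
```

Dividing the measured density by the alloy's theoretical density gives the relative (percentage) density usually reported for sintered compacts.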
To identify the underlying reasons for this marked difference, the as-sintered microstructure was analyzed in detail using SEM and TEM, with a special focus on the interfaces between powder particles.

Figure 2 Sintered densities of the SPS-processed samples as a function of SPS sintering temperature and pressure.

X-ray mapping was applied to all the constituent elements (i.e., Al, Ni, Y, Co, and La) as well as O in the samples sintered at 248.5°C under 200 MPa. Figure 3 shows the results for Al, Ni, and O. The distribution of Ni, Y, Co, and La is homogeneous in the microstructure, showing few features. However, Al and O are clearly enriched in areas close to the initial particle-particle interfaces (see Figures 3(b) and 3(d)). Furthermore, these Al- and O-enriched areas underwent only limited sintering, where the sintering necks are still recognizable between neighboring particles (see Figure 3(a)), compared with the well-sintered, oxygen-deficient areas. It can be deduced that the surface oxide layers hindered the densification of the Al86Ni6Y4.5Co2La1.5 MG powder during SPS, and that the applied pressure of 200 MPa ensured only limited removal of these oxide surface layers. To confirm this inference, TEM was used to investigate the interfaces between particles in the SPS-processed sample; the results are shown in Figures 4 and 5.

Figure 3 SEM mapping results of Al, Ni, and O in a selected area of an SPS-processed sample sintered at 248.5°C for 2 min under 200 MPa. The distribution of Y, Co, and La is generally homogeneous and similar to that of Ni.

Figure 4 TEM bright field (BF) images of particle-particle interfaces in SPS-processed samples: (a) sintered at 248.5°C under 200 MPa; (b) sintered at 248.5°C under 600 MPa, free of crystallization; and (c) TEM-EDX results obtained from the interface shown in (a), indicative of noticeable crystallization of fcc-Al.
Figure 5 TEM BF image of an SPS-processed sample sintered at 248.5°C under 200 MPa. The inset in (a) shows the corresponding SAED patterns for the amorphous matrix and the fcc-Al nanocrystals; (b) is an HRTEM image of the fcc-Al nanocrystals shown in (a).

Figure 4(a) shows that the interface between two particles in the same SPS-processed sample (sintered at 248.5°C under 200 MPa) underwent noticeable crystallization. The interface layer is about 50 nm thick, and oxygen can be detected at the interface using TEM energy dispersive X-ray spectroscopy (EDX) (see Figure 4(c)). This further confirms that the oxide surface layers were not completely removed during SPS under the applied pressure of 200 MPa. In contrast, clean interfaces between particles were uniformly observed in the SPS-processed samples under an applied pressure of 600 MPa; an example is shown in Figure 4(b).

Figure 5(a) shows a detailed view of the aforementioned crystallized interface area (Figure 4(a)) together with the surrounding amorphous matrix. Based on the selected area electron diffraction (SAED) patterns (inset in Figure 5(a)), these crystallized phases are indexed as fcc-Al. Figure 5(b) shows a high-resolution TEM image of these fcc-Al nanocrystals, which are about 10 nm in size. In contrast, no crystallization was detected in samples sintered at the same temperature (248.5°C) but under 600 MPa (see Figure 4(b)). The difference can be explained as follows. Under an applied pressure of 200 MPa and a heating rate of 40°C/min, it is difficult to completely remove the surface oxide layers on the powder particles, as evidenced by the results shown in Figures 3 and 4. The remaining oxide surface layers prevent viscous flow between the powder particles and therefore inhibit full densification. Consequently, the sintering necks between neighboring particles, because of the surrounding pores, have relatively high electrical resistance.
As a result, high local Joule heating (or an enhanced temperature gradient) arises in these contact areas [19–21], resulting in severe local nano-crystallization as shown above. With a high applied pressure of 600 MPa, the combined effect of the pressure and the electrical discharge during SPS can effectively disrupt the oxide surface layers on the powder particles, leading to their complete removal. Without the oxide surface layers, viscous flow occurs, making full densification possible. This eliminates overheated local areas and therefore prevents local nano-crystallization.

## 4. Summary

Al86Ni6Y4.5Co2La1.5 BMG disks (diameter: 10 mm; thickness: 4 mm) were fabricated from metallic glass powder of the same composition by SPS. The influence of applied pressure on the densification of Al86Ni6Y4.5Co2La1.5 metallic glass powder was investigated at different sintering temperatures at a fixed heating rate of 40°C/min. Applying a high pressure (600 MPa) assisted in the removal of the surface oxide layers that enveloped the starting metallic glass powder. This led to full densification of the metallic glass powder, free of particle-particle interface oxides and nano-crystallization. The mechanism was attributed to viscous flow between the powder particles during SPS. In contrast, both residual oxides and nanocrystalline Al phases were detected at particle-particle interfaces in the Al86Ni6Y4.5Co2La1.5 BMGs fabricated under a low pressure (200 MPa) with the same heating and isothermal sintering parameters. The applied pressure had a predominant influence on the removal of the surface oxide layers on the starting metallic glass powder during SPS, which is crucial to the consolidation of the metallic glass powder.

---

*Source: 101508-2013-02-28.xml*
101508-2013-02-28_101508-2013-02-28.md
15,293
Applied Pressure on Altering the Nano-Crystallization Behavior of Al86Ni6Y4.5Co2La1.5 Metallic Glass Powder during Spark Plasma Sintering and Its Effect on Powder Consolidation
X. P. Li; M. Yan; G. Ji; M. Qian
Journal of Nanomaterials (2013)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2013/101508
101508-2013-02-28.xml
--- ## Abstract Metallic glass powder of the composition Al86Ni6Y4.5Co2La1.5 was consolidated into 10 mm diameter samples by spark plasma sintering (SPS) at different temperatures under an applied pressure of 200 MPa or 600 MPa. The heating rate and isothermal holding time were fixed at 40°C/min and 2 min, respectively. Fully dense bulk metallic glasses (BMGs) free of particle-particle interface oxides and nano-crystallization were fabricated under 600 MPa. In contrast, residual oxides were detected at particle-particle interfaces (enriched in both Al and O) when fabricated under a pressure of 200 MPa, indicating the incomplete removal of the oxide surface layers during SPS at a low pressure. Transmission electron microscopy (TEM) revealed noticeable nano-crystallization of face-centered cubic (fcc) Al close to such interfaces. Applying a high pressure played a key role in facilitating the removal of the oxide surface layers and therefore full densification of the Al86Ni6Y4.5Co2La1.5 metallic glass powder without nano-crystallization. It is proposed that applied high pressure, as an external force, assisted in the breakdown of surface oxide layers that enveloped the powder particles in the early stage of sintering. This, together with the electrical discharge during SPS, may have benefitted the viscous flow of metallic glasses during sintering. --- ## Body ## 1. Introduction Metallic glasses (MGs) have been investigated for decades due to their intrinsically unique physical and chemical properties [1]. Al-based MGs are promising advanced materials which have attracted increasing attention for their ultrahigh specific strength and relatively low cost compared with most other MGs [2]. However, due to their low glass forming ability (GFA), fabrication of Al-based BMGs through a conventional cooling process from liquid has proved to be challenging [3–5]. 
The first conceptual Al-based BMG with 1 mm diameter was fabricated using a copper mold casting approach in 2009 [6] since the Al-based MG was first reported in 1988 [7] and the alloy reported [6] remains to be the best glass forming Al-based BMG to date. The slow development of Al-based BMGs in terms of their GFA impedes the potential application of these materials.Since MG powder can be readily prepared by gas-atomization [8], powder metallurgy (PM), especially the spark plasma sintering (SPS) technique, offers an alternative to the fabrication of BMGs. Fully dense Ti-, Ni-, Cu-, and Fe-based BMGs with >10 mm diameters have been fabricated using SPS [9–12]. These MGs have much higher glass transition temperatures (Tg) [1, 3] compared to Al-based MGs and therefore can be readily consolidated at high sintering temperatures without nano-crystallization. As for Al-based BMGs, because their Tg temperatures are generally <300°C, nano-crystallization is easy to occur during SPS. Hence, few studies have succeeded in fabricating fully dense Al-based BMGs without crystallization [13–15]. On the other hand, a previous study [16] has revealed that MG powder is enveloped by an oxide layer which would inhibit viscous flow of the amorphous material for full densification. As a result, it is essential to remove this surface oxide layer to enable viscous flow for full densification of Al-based MG powder at low temperatures.It has been proposed [17] that the electrical discharge during SPS has a cleaning effect which can help to remove the surface oxide layers on metallic powders. In general, the higher the heating rate during SPS, the more effective the cleaning effect will be [18]. However, due to the low Tg of Al-based MGs (<300°C), it is difficult to accurately control the temperature rise and avoid overshoot when a very high heating rate (>40°C/min) is used. 
Consequently, it is necessary to consider other options, such as the use of high pressure, to assist in the breakdown of the surface oxide layers that envelop the powder particles. In addition, applying high pressure during SPS is expected to favor viscous flow between the Al-based MG powder particles for enhanced densification. No study has yet been reported on the role of applying high pressure during the SPS of Al-based MG powder from these two perspectives. In this study, a 10 mm diameter (Φ10 mm) Al86Ni6Y4.5Co2La1.5 BMG was fabricated using SPS. The influence of applied pressure on the densification of Al86Ni6Y4.5Co2La1.5 MG powder was investigated through detailed characterization of the as-sintered samples using scanning electron microscopy (SEM) and transmission electron microscopy (TEM), focusing on selected particle-particle interfaces. The underlying mechanisms are discussed. ## 2. Experimental Procedure Nitrogen-gas-atomized Al86Ni6Y4.5Co2La1.5 MG powder was used. To ensure a fully amorphous state, only powder particles finer than 25 μm in diameter were used, based on a previous study of the powder [8, 17]. The amorphous nature of the selected powder was further confirmed by X-ray diffraction (XRD) (D/max III, CuKα target, operated at 40 kV and 60 mA). The surfaces of the starting powder were studied using X-ray photoelectron spectroscopy (XPS) (Kratos Axis ULTRA XPS, monochromatic Al X-ray source, C 1s at 285 eV used as a standard). XPS survey scans were taken at an analyzer pass energy of 160 eV over the binding energy range of 1200-0 eV, with 1.0 eV steps and 100 ms dwell time at each step. The base pressure in the analysis chamber was maintained in the range of 1.33 × 10−7 Pa to 1.33 × 10−6 Pa during analysis. The SPS experiments were conducted on an SPS-1030 made by SPS SYNTEX INC, Japan. A tungsten carbide (WC) die (outer diameter 30 mm, inner diameter 10 mm, and height 20 mm) was used.
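As a quick sanity check on the loading involved, the axial ram force needed to reach a given pressure on the Φ10 mm punch follows from F = p·A. A minimal sketch (the die bore diameter and the two pressures are from the text; the helper name is ours):

```python
import math

def ram_force_kN(pressure_MPa: float, punch_diameter_mm: float) -> float:
    """Axial force (kN) needed to reach a target pressure on a circular punch."""
    area_m2 = math.pi * (punch_diameter_mm * 1e-3 / 2.0) ** 2
    return pressure_MPa * 1e6 * area_m2 / 1e3

# For the 10 mm inner-diameter WC die used here:
# 200 MPa requires about 15.7 kN, and 600 MPa about 47.1 kN.
print(ram_force_kN(200, 10), ram_force_kN(600, 10))
```

These modest forces explain why such high pressures are practical in a small WC die even though 600 MPa far exceeds typical graphite-die SPS limits.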
The Tg of the Al86Ni6Y4.5Co2La1.5 MG powder varies with heating rate and was recorded to be 270°C at 40°C/min in argon [8]. To avoid temperature overshoot and maximize the cleaning effect of SPS, the heating rate was fixed at 40°C/min, based on a few preliminary heating trials with the SPS machine. The isothermal holding time was fixed at 2 min. To study the influence of sintering temperature and pressure on the densification of the Al86Ni6Y4.5Co2La1.5 MG powder, a range of sintering temperatures was chosen: 248.5, 258.5, 268.5, 278.5, 288.5, 298.5, and 308.5°C. The applied pressure was 200 MPa or 600 MPa. Table 1 summarizes the sintering parameters used.

Table 1. SPS experimental parameters used for this study.

| Parameter | Value |
| --- | --- |
| Sintering temperature (°C) | 248.5, 258.5, 268.5, 278.5, 288.5, 298.5, 308.5 |
| Pressure (MPa) | 200 or 600 |
| Holding time (min) | 2 |
| Heating rate (°C/min) | 40 |

The sintered density was measured using the Archimedes method. The SPS-processed samples were cut, ground, and polished. They were then characterized using SEM (JEOL 7001F, accelerating voltage 15 kV, working distance 10 mm) and TEM (JEOL JEM 2100, operated at 200 kV); the TEM samples were prepared using a precision ion polishing system (Gatan’s PIPS, operated at −50°C). ## 3. Results and Discussion Figure 1(a) shows the morphology of the starting powder, and the XPS results are shown in Figure 1(b). Strong oxide signals are detected on the MG powder surfaces, consistent with the observations reported by Yan et al. [16]. An SPS-processed 10 mm diameter Al86Ni6Y4.5Co2La1.5 BMG sample (4 mm in height) is shown in Figure 1(c); it was fabricated by heating the powder to 248.5°C at 40°C/min and holding at temperature for 2 min under 600 MPa.
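The Archimedes measurement mentioned above reduces to a simple calculation from the dry weight and the weight suspended in water. A minimal sketch (the function name, the mass readings, and the water density value are illustrative assumptions, not values from the paper):

```python
def archimedes_density(mass_air_g: float, mass_water_g: float,
                       rho_water_g_cm3: float = 0.9982) -> float:
    """Bulk density (g/cm^3) from the dry mass and the apparent mass
    measured while the sample is suspended in water."""
    return mass_air_g / (mass_air_g - mass_water_g) * rho_water_g_cm3

# Hypothetical readings for a small sintered disk:
rho = archimedes_density(1.000, 0.680)  # ~3.12 g/cm^3
```

Dividing such a value by the alloy's theoretical density would give the relative (percentage) density plotted against sintering temperature and pressure.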
The XRD results shown in Figure 1(d) indicate that the as-sintered Al86Ni6Y4.5Co2La1.5 BMG is essentially amorphous. Figure 1: (a) SEM image of the starting Al86Ni6Y4.5Co2La1.5 MG powder used for fabrication; (b) XPS survey spectra of the Al86Ni6Y4.5Co2La1.5 MG powder; (c) a 10 mm diameter Al86Ni6Y4.5Co2La1.5 BMG disk (thickness: 4 mm) (sintering conditions: 2 min at 248.5°C under 600 MPa); and (d) XRD pattern of the fabricated Al86Ni6Y4.5Co2La1.5 sample. Figure 2 shows the density of the SPS-processed Al86Ni6Y4.5Co2La1.5 BMGs achieved at different sintering temperatures and pressures. Under an applied pressure of 200 MPa, the density of the BMGs increased with increasing sintering temperature from 248.5°C to 278.5°C. Further increasing the sintering temperature to 308.5°C, which is above the Tg of the alloy and also above the peak temperature of the first crystallization stage of the Al86Ni6Y4.5Co2La1.5 MG powder [8], resulted in little further increase in the sintered density. In contrast, increasing the applied pressure from 200 MPa to 600 MPa led to a substantial increase in the sintered density at each of the three temperatures tested; increasing pressure was much more effective than increasing sintering temperature. This implies that sintering pressure plays a key role in the densification of Al86Ni6Y4.5Co2La1.5 MG powder during SPS. In fact, the near full density achieved by increasing pressure is difficult to achieve by increasing sintering temperature alone without nano-crystallization. To identify the underlying reasons for this difference, the as-sintered microstructure was analyzed in detail using SEM and TEM, with a special focus on the interfaces between powder particles. Figure 2: Sintered densities of the SPS-processed samples as a function of SPS sintering temperature and pressure. X-ray mapping was applied to all the constituent elements (i.e., Al, Ni, Y, Co, and La) as well as O in the samples sintered at 248.5°C under 200 MPa.
Figure 3 shows the results for Al, Ni, and O. The distribution of Ni, Y, Co, and La is homogeneous in the microstructure, showing few features. However, Al and O are clearly enriched in areas close to the initial particle-particle interfaces (see Figures 3(b) and 3(d)). Furthermore, these Al- and O-enriched areas underwent only limited sintering, where the sintering necks are still recognizable between neighboring particles (see Figure 3(a)), compared to the well-sintered oxygen-deficient areas. It can be deduced that the surface oxide layers hindered the densification of the Al86Ni6Y4.5Co2La1.5 MG powder during SPS, and that the 200 MPa applied pressure ensured only limited removal of these oxide surface layers. To confirm this inference, TEM was used to investigate the interfaces between the particles in the SPS-processed sample, and the results are shown in Figures 4 and 5. Figure 3: SEM mapping results of Al and Ni as well as O in a selected area of an SPS-processed sample sintered at 248.5°C for 2 min under 200 MPa. The distribution of Y, Co, and La is generally homogeneous and similar to that of Ni. Figure 4: TEM bright field (BF) images of particle-particle interfaces in SPS-processed samples: (a) sintered at 248.5°C under 200 MPa; (b) sintered at 248.5°C under 600 MPa, free of crystallization; and (c) TEM-EDX results obtained from the interface shown in (a), indicative of noticeable crystallization of fcc-Al. Figure 5: (a) TEM BF image of an SPS-processed sample sintered at 248.5°C under 200 MPa; the inset shows the corresponding SAED patterns for the amorphous matrix and the fcc-Al nanocrystals. (b) HRTEM image of the fcc-Al nanocrystals shown in (a). Figure 4(a) shows that the interface between two particles in the same SPS-processed sample (sintered at 248.5°C under 200 MPa) has undergone noticeable crystallization.
The interface layer is about 50 nm thick, and oxygen can be detected at the interface using TEM energy dispersive X-ray spectroscopy (EDX) (see Figure 4(c)). This further confirms that the oxide surface layers were not completely removed during SPS under the applied pressure of 200 MPa. In contrast, clean interfaces between particles were uniformly observed in the samples SPS-processed under an applied pressure of 600 MPa; an example is shown in Figure 4(b). Figure 5(a) shows a detailed view of the aforementioned crystallized interface area (Figure 4(a)) together with the surrounding amorphous matrix. Based on the selected area electron diffraction (SAED) patterns (inset in Figure 5(a)), these crystallized phases are indexed as fcc-Al. Figure 5(b) shows a high-resolution TEM image of these fcc-Al nanocrystals, which are about 10 nm in size. In contrast, no crystallization was detected in samples sintered at the same temperature (248.5°C) but under 600 MPa (see Figure 4(b)). This difference can be explained as follows. Under an applied pressure of 200 MPa and a heating rate of 40°C/min, it is difficult to completely remove the surface oxide layers on the powder particles, as evidenced by the results shown in Figures 3 and 4. The remaining oxide surface layers prevent viscous flow between the powder particles and therefore inhibit full densification. Consequently, the sintering necks between neighboring particles, because of the surrounding pores, have relatively high electrical resistance. This causes high local Joule heating (or an enhanced temperature gradient) in these contact areas [19–21], resulting in the severe local nano-crystallization shown above. With a high applied pressure of 600 MPa, the combined effect of the pressure and the electrical discharge during SPS can effectively disrupt the oxide surface layers on the powder particles, leading to complete removal of the surface oxides.
Without the oxide surface layers, viscous flow occurs, making full densification possible. This eliminates overheated local areas and therefore prevents local nano-crystallization. ## 4. Summary Al86Ni6Y4.5Co2La1.5 BMG disks (diameter: 10 mm; thickness: 4 mm) were fabricated from metallic glass powder of the same composition by SPS. The influence of applied pressure on the densification of Al86Ni6Y4.5Co2La1.5 metallic glass powder was investigated at different sintering temperatures at a fixed heating rate of 40°C/min. Applying a high pressure (600 MPa) assisted in the removal of the surface oxide layers that enveloped the starting metallic glass powder. This led to full densification of the metallic glass powder, free of particle-particle interface oxides and nano-crystallization; the mechanism was attributed to enhanced viscous flow between the powder particles during SPS. In contrast, both residual oxides and nanocrystalline Al phases were detected at particle-particle interfaces in the Al86Ni6Y4.5Co2La1.5 BMGs fabricated under a low pressure (200 MPa) with the same heating and isothermal sintering parameters. The applied pressure had a predominant influence on the removal of the surface oxide layers on the starting metallic glass powder during SPS, which is crucial to the consolidation of the metallic glass powder. --- *Source: 101508-2013-02-28.xml*
2013
# Wang-Bi Capsule Alleviates the Joint Inflammation and Bone Destruction in Mice with Collagen-Induced Arthritis **Authors:** Hua Cui; Haiyang Shu; Dancai Fan; Xinyu Wang; Ning Zhao; Cheng Lu; Aiping Lu; Xiaojuan He **Journal:** Evidence-Based Complementary and Alternative Medicine (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1015083 --- ## Abstract Wang-Bi Capsule (WB), a traditional Chinese medicine- (TCM-) based herbal formula, is currently used in the clinic for the treatment of rheumatoid arthritis (RA) with positive clinical effects. However, its pharmacological mechanism of action in RA is still obscure. Therefore, this study established a collagen-induced arthritis (CIA) mouse model to examine the efficacy of WB using the arthritis score, histological analysis, and micro-CT examination. Proinflammatory cytokine expression, osteoclast number, the OPG/RANKL system, and NF-κB activation were then examined to further investigate the mechanism of WB in RA treatment. The results indicated that WB could alleviate the erythema and swelling of paws in CIA mice. It also inhibited the infiltration of inflammatory cells and bone destruction and increased bone density in joints of CIA mice. Mechanistic studies showed that WB treatment decreased the production of IL-1β, IL-6, and TNF-α in serum and joints of CIA mice. Moreover, it reduced the osteoclast number, increased the OPG level, decreased the RANKL level, and inhibited the activation of NF-κB in joints of CIA mice. In conclusion, this study demonstrated that WB could effectively alleviate disease progression in CIA mice by decreasing the IL-1β, IL-6, and TNF-α levels, modulating the OPG/RANKL system, and inhibiting the activation of NF-κB. --- ## Body ## 1. Introduction Rheumatoid arthritis (RA) is a complex, chronic inflammatory disease with approximately 0.24% global prevalence [1].
The pathophysiology of RA is characterized by a variety of immune cells, together with aberrant inflammatory cytokines such as TNF-α, IL-1, IL-6, and IL-17, infiltrating the synovium of multiple joints; these inflammatory mediators lead to long-term inflammation and the formation of pannus, which ultimately result in irreversible joint and cartilage destruction and severe disability [2, 3]. Currently, the purpose of RA treatment is mainly to reduce the inflammatory response, inhibit the development of lesions and bone damage, and protect the function of joints. To date, it remains difficult to find an ideal drug that completely cures this disease: current anti-inflammatory drugs, whether nonsteroidal anti-inflammatory drugs (NSAIDs), steroids, or biological response modifiers, cannot meet the needs of all RA patients. Traditional Chinese medicine (TCM) has attracted more and more attention due to its advantages of safety and fewer adverse reactions. Wang-Bi Capsule (WB), a TCM-based herbal formula, has been used to treat RA in China for several years and has shown positive clinical effects. It is composed of seventeen herbal medicines: Rehmannia glutinosa Libosch., Rehmannia glutinosa (Gaetn.) Libosch. ex Fisch. et Mey., Dipsacus asper Wall. ex Henry, Aconitum carmichaeli Debx., Angelica pubescens Maxim. f. biserrata Shan et Yuan, Drynaria fortunei (Kunze) J. Sm., Cinnamomum cassia Presl, Epimedium brevicornu Maxim., Saposhnikovia divaricata (Turcz.) Schischk., Clematis chinensis Osbeck, Gleditsia sinensis Lam., Capra hircus Linnaeus, Paeonia lactiflora Pall., Cibotium barometz (L.) J. Sm., Anemarrhena asphodeloides Bunge, Lycopodium japonicum Thunb., and Carthamus tinctorius L. Recently, the pharmacological effects of some components of WB in RA have been reported.
Zhang and Dai reported that total paeoniflorin, the major active component of Paeonia lactiflora Pallas, could inhibit the proliferation of lymphocytes and fibroblast-like synoviocytes, as well as the production of matrix metalloproteinases [4]. Chi et al. indicated that icariin, isolated from the Epimedium family, decreased Th17 cells and the production of IL-17 through inhibiting STAT3 activation in collagen-induced arthritis (CIA) mice [5]. Kong et al. showed that Saposhnikovia divaricata chromone extract (SCE) reduced the protein level of NF-κB and inhibited p-ERK, p-JNK, and p-p38 expression in human fibroblast-like synoviocytes and CIA rats [6]. However, the pharmacological mechanism of action of WB in RA is still obscure. In this study, we aimed to determine the effect of WB in a CIA mouse model and to elucidate its mechanism of action. ## 2. Materials and Methods ### 2.1. Animals Male DBA/1J mice (6–8 weeks) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). Mice were housed in cages, and water and food were provided ad libitum. All mice were allowed to acclimatize for 1 week before the initiation of the experiment. All protocols in this study were approved by the Research Ethics Committee of the Institute of Basic Theory of Chinese Medicine, China Academy of Chinese Medical Sciences. ### 2.2. Induction of Collagen-Induced Arthritis Bovine type II collagen (Chondrex, Redmond, WA, USA) was emulsified in an equal volume of Freund’s complete adjuvant (Chondrex, Redmond, WA, USA) to a final concentration of 1 mg/mL. On day 0, mice were subcutaneously injected with 100 μL of the emulsion at the base of the tail. On day 21, the animals were given booster injections intraperitoneally with 50 μL of the emulsion [7]. ### 2.3. Grouping and Treatment WB was provided and identified by Liaoning China Resources Benxi Sanyao Co., Ltd. (Liaoning, China, No. 20180205).
WB was dissolved in double distilled water and fully blended again prior to use. Mice were randomly divided into six groups of ten after successful induction of the CIA model: control group, model group, methotrexate group (MTX, 0.3 mg/kg/d), WB low-dose group (WB-L, 0.536 g/kg/d), WB middle-dose group (WB-M, 1.073 g/kg/d, equivalent to the dose for RA patients), and WB high-dose group (WB-H, 2.146 g/kg/d). Drug administration started on day 28 after the first immunization and lasted four weeks. WB and MTX solutions were orally administered in a volume of 0.1 mL/10 g. The mice in the normal group and model group were administered the same volume of double distilled water. ### 2.4. Assessment of Arthritis Severity Arthritis severity was graded using a 5-point scale (0: normal; 1: erythema and mild swelling confined to the tarsals or ankle joints; 2: erythema and mild swelling extending from the ankle to the tarsals; 3: erythema and moderate swelling extending from the ankle to the metatarsal joints; 4: erythema and severe swelling encompassing the ankle, foot, and digits, or ankylosis of the limb) [8]. The total score of each mouse was calculated as the arthritic index, with a maximum possible score of 16 (4 points × 4 paws). ### 2.5. Enzyme-Linked Immunosorbent Assay Serum was collected after eyeball blood extraction in mice. Levels of IL-1β, IL-6, and TNF-α were detected with commercially available enzyme-linked immunosorbent assay (ELISA) kits (eBioscience, San Diego, CA, USA) according to the manufacturer’s instructions. ### 2.6. Histological Assessment The dissected hind paw joints of mice were fixed in 10% formalin solution and decalcified using 10% EDTA. After dehydration, specimens were paraffin-embedded, sectioned (5 μm thickness) for hematoxylin and eosin (H&E) staining, and observed under a light microscope. Histopathological characteristics were evaluated blindly as described previously [9]. ### 2.7.
Immunohistochemical Staining The sections were dewaxed and hydrated using xylene and a graded series of alcohols. Endogenous peroxidase activity was quenched with 3% H2O2. The tissues were then incubated with anti-IL-1β, anti-IL-6, and anti-TNF-α antibodies (Abcam, Cambridge, UK) overnight at 4°C. The final color product was developed with a DAB kit (ZSGB-BIO, Beijing, China), and sections were counterstained with hematoxylin (Leagene, Beijing, China). PBS was used instead of the primary antibodies for control staining. Immunohistochemical semiquantitative analysis was performed as previously described [10]. ### 2.8. TRAP Staining The sections were stained for TRAP using a TRAP staining kit (Sigma, St. Louis, MO, USA) according to the manufacturer’s protocol. Specimens were examined by computer image analysis using the Leica Qwin image analysis software (Leica Microsystems, Germany). TRAP-positive multinucleated cells containing more than 3 nuclei were identified as osteoclasts [11]. ### 2.9. Micro-CT Analysis The fixed hind paws were placed in a centrifuge tube with physiological saline. Micro-CT analyses were performed using a 1174 compact micro-CT (SkyScan, Aartselaar, Belgium). The micro-CT analysis procedures were performed according to international guidelines. ### 2.10. Western Blotting Proteins extracted from ankle joints were analyzed by western blotting using standard methods. Equivalent amounts of protein from each sample were separated in a 10% SDS-polyacrylamide gel and blotted onto a PVDF membrane. The membrane was blocked with 5% milk (BD, Sparks, MD, USA); incubated with antibodies against OPG, RANKL, p-p65, p-IKKα/β, IκBα, and β-actin (Abcam, Cambridge, UK) overnight; and then hybridized with an HRP-conjugated secondary antibody for 1 h. The immunoreactive bands were visualized using an ECL system (CLINX, Shanghai, China). The relative intensities of the bands were quantified using ImageJ. ### 2.11.
Statistical Analysis All of the data were expressed as the means ± standard deviation (SD) and analyzed using GraphPad Prism 6 software. Differences in the mean values of the various groups were analyzed using ANOVA. P values <0.05 were considered significant. ## 3. Results ### 3.1. WB Ameliorated Arthritis Severity of CIA Mice To evaluate the therapeutic effect of WB on RA, we used CIA mice, a typical RA animal model. As shown in Figure 1(a), WB-M and WB-H treatment starting from disease onset effectively suppressed CIA progression.
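The arthritis scores behind Figure 1(a) follow the index defined in Section 2.4: four per-paw grades of 0-4 summed to a maximum of 16 per mouse, then summarized per group as mean ± SD (Section 2.11). A minimal sketch of that bookkeeping (the per-mouse scores are hypothetical):

```python
from statistics import mean, stdev

def arthritic_index(paw_scores):
    """Arthritic index for one mouse: sum of four per-paw grades (0-4), max 16."""
    if len(paw_scores) != 4 or not all(0 <= s <= 4 for s in paw_scores):
        raise ValueError("expected four per-paw grades in the range 0-4")
    return sum(paw_scores)

# Hypothetical group of three mice:
group = [[1, 1, 3, 4], [0, 2, 2, 3], [1, 1, 2, 2]]
indices = [arthritic_index(m) for m in group]  # [9, 7, 6]
summary = (mean(indices), stdev(indices))      # group mean and SD
```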
Significant improvement in clinical signs was observed about two weeks after WB administration, and the effect persisted until the end of the experiment. Subsequently, we analyzed the histological changes of ankle joints from the hind paws to determine the effect of WB on joint inflammation and destruction. As shown in Figures 1(b) and 1(c), the infiltration of inflammatory cells and synovial hyperplasia, as well as cartilage and bone destruction, were evident in CIA mice. WB-M and WB-H administration alleviated these histological changes in the ankle joints of CIA mice. The histological score of the ankle joint was significantly lower in WB-M and WB-H treated mice than in CIA mice. Figure 1: WB ameliorated arthritis severity of CIA mice. (a) Arthritis score of each group. (b) Histological score of each group. (c) Representative histological findings of ankle joint from hind paws of each group. Original magnification 200x. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group. ### 3.2. WB Decreased the Levels of IL-1β, IL-6, and TNF-α in CIA Mice Multiple proinflammatory cytokines, such as IL-1β, IL-6, and TNF-α, not only induce and exacerbate inflammation but also cause cartilage damage and bone destruction in RA. We therefore detected the concentrations of these proinflammatory cytokines in the serum and joints of CIA mice treated with WB. As shown in Figure 2, the serum levels of IL-1β, IL-6, and TNF-α in CIA mice were markedly increased, whereas WB treatment significantly decreased the levels of all three cytokines. Similarly, WB treatment also lowered the levels of IL-1β, IL-6, and TNF-α, which were remarkably increased in synovium tissue sections of ankle joints from CIA mice (Figure 3). Figure 2: WB decreased (a) IL-1β, (b) IL-6, and (c) TNF-α levels in the serum of CIA mice. Serum obtained from all groups was measured by ELISA. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group.
Figure 3: (a) Representative immunohistochemistry images of IL-1β, IL-6, and TNF-α in each group. (b) Mean IOD of each group. Synovium tissue sections from ankle joints in each group were stained with anti-IL-1β, anti-IL-6, and anti-TNF-α. Original magnification 400x. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group. ### 3.3. WB Inhibited the Bone Destruction in CIA Mice Histological examination showed that WB not only alleviated the joint inflammation but also inhibited joint and bone destruction. To further investigate the effect of WB on bone destruction in CIA mice, we used micro-CT analysis and TRAP staining. As shown in Figure 4(a), the paws of the model group exhibited severe bone destruction and decreased bone density. In contrast, paws from the WB-treated groups, especially the WB-M group, exhibited reduced bone destruction and increased bone density. Moreover, TRAP staining showed that the number of osteoclasts in the ankle joint was remarkably higher in the model group than in the control group. WB-M treatment significantly lowered the osteoclast number in CIA mice (Figures 4(b) and 4(c)). Therefore, we used WB-M for further mechanism studies. Figure 4: WB inhibited bone destruction in CIA mice. (a) Representative three-dimensional images of ankle joint. (b) Representative osteoclast images of ankle joint by TRAP staining. (c) Osteoclast number of each group. ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group. ### 3.4. WB Regulated OPG/RANKL System To investigate the mechanism of WB in inhibiting bone destruction, we detected OPG and RANKL expression in the ankle joints of CIA mice by western blotting. As shown in Figure 5, the level of OPG was obviously decreased, whereas the level of RANKL was increased, in the model group compared with the control group. WB treatment significantly increased the OPG level and, at the same time, decreased the RANKL level compared with the model group. Figure 5: WB regulated OPG and RANKL levels in CIA mice.
(a) Representative bands of western blotting in different groups. (b) Semiquantitative analysis of western blotting in different groups. ∗P<0.05 vs. model group. ##P<0.01 vs. control group. ### 3.5. WB Suppressed the Activation of NF-κB Signaling Pathway The expression of p-IKKα/β, IκBα, and p-p65 was detected by western blotting. As shown in Figure 6, WB significantly decreased the expression levels of p-IKKα/β and p-p65 in the ankle joints of CIA mice. In addition, WB increased the level of IκBα in CIA mice. Figure 6: WB suppressed the activation of NF-κB in CIA mice. (a) Representative bands of western blotting in different treatment groups. (b) Semiquantitative analysis of western blotting in different treatment groups. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group. ## 4.
Discussion Joint inflammation and bone destruction are the two typical pathological features of RA. In this study, we found that WB could effectively suppress the disease progression in CIA mice. Histological analysis also showed that WB treatment alleviated inflammatory cells infiltration, as well as cartilage and bone destruction in CIA mice.Cytokines regulate a broad range of inflammatory processes in the pathogenesis of RA, especially some proinflammatory cytokines, TNF-α, IL-1β, and IL-6 [12]. TNF-α and IL-1β are central inflammatory cytokines in the pathogenesis of RA. They can exacerbate the inflammation through enhancing the release of some inflammatory mediators, including proinflammatory cytokines, chemokines, and PGE2. Simultaneously, they also participate in the bone destruction by promoting osteoclast activation [12, 13]. IL-6 was originally identified as a B cell regulatory factor, while recent findings indicate that it also acts as a regulator of CD4+ T cell proliferation, differentiation, and activation [14, 15]. Moreover, IL-6 is involved in the development of bone destruction, because it can induce osteoclast differentiation through receptor activator of NF-kappa B ligand (RANKL) expression [16]. In our study, we found that TNF-α, IL-1β, and IL-6 in serum and joint of CIA mice were significantly decreased in WB treatment group when compared with model group. These results further indicated the effectiveness of WB treatment in CIA mice.Osteoclasts-mediated bone destruction plays key role in RA progression. Proinflammatory cytokines such as TNF-α, IL-1β, and IL-6 were reported to be osteoclastogenic; besides, interactions between receptor activator of the nuclear factor kappa B (RANK) and its ligand RANKL are essential in osteoclastogenesis. RANKL is a TNF-family cytokine required for osteoclast formation. RANK on monocyte binds to RANKL, initiating osteoclast differentiation [17, 18]. 
Osteoprotegerin (OPG) is another important factor participating in the regulation of osteoclast differentiation. It acts as a decoy receptor to block the effect of RANKL, thus suspending the activation of osteoclasts [19]. Regulating the OPG-RANKL system is regarded as an effective strategy to inhibit the bone destruction in RA. In our study, we found that WB treatment could not only lower the number of osteoclasts but also increase the OPG level and, at the same time, decrease the RANKL level in the joints of CIA mice, which showed its effectiveness in alleviating the bone destruction in RA.The transcription factor NF-κB is crucial in the regulation of immune responses and bone destruction in RA. It can activate IκB phosphorylation by activation of IκB kinase (IKK) and promote the production of p65, thereby regulating the occurrence of inflammatory responses. On the other hand, NF-κB also participates in osteoclastogenesis, because it can mediate the effects of RANKL [20]. Previous studies have proved the activation of NF-κB in cultured synovial fibroblasts and synovial tissue from RA patients, as well as the arthritis animal models [21]. Moreover, blocking NF-κB could relieve inflammatory response and prevent bone destruction in arthritis animal models [22, 23]. Our results indicated that WB effectively inhibit the activation of NF-κB, which may partly explain its effect in inhibiting the progression of joint inflammation and bone destruction.In conclusion, this study demonstrated that WB could effectively alleviate disease progression of CIA mice by decreasing the IL-1β, IL-6, and TNF-α levels, modulating the OPG-RANKL system, and inhibiting the activation of NF-κB. --- *Source: 1015083-2020-03-19.xml*
# Wang-Bi Capsule Alleviates the Joint Inflammation and Bone Destruction in Mice with Collagen-Induced Arthritis

**Authors:** Hua Cui; Haiyang Shu; Dancai Fan; Xinyu Wang; Ning Zhao; Cheng Lu; Aiping Lu; Xiaojuan He

**Journal:** Evidence-Based Complementary and Alternative Medicine (2020)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2020/1015083
---

## Abstract

Wang-Bi Capsule (WB), a traditional Chinese medicine- (TCM-) based herbal formula, is currently used in the clinic for the treatment of rheumatoid arthritis (RA), with positive clinical effects. However, its pharmacological mechanism of action in RA is still obscure. Therefore, this study established a collagen-induced arthritis (CIA) mouse model to examine the efficacy of WB using the arthritis score, histological analysis, and micro-CT examination. Proinflammatory cytokine expression, osteoclast number, the OPG/RANKL system, and NF-κB activation were then examined to further investigate the mechanism of WB in RA treatment. The results indicated that WB could alleviate the erythema and swelling of the paws in CIA mice. It also inhibited the infiltration of inflammatory cells and bone destruction and increased bone density in the joints of CIA mice. Mechanistic studies showed that WB treatment decreased the production of IL-1β, IL-6, and TNF-α in the serum and joints of CIA mice. Moreover, it reduced the osteoclast number, increased the OPG level, decreased the RANKL level, and inhibited the activation of NF-κB in the joints of CIA mice. In conclusion, this study demonstrated that WB could effectively alleviate disease progression in CIA mice by decreasing IL-1β, IL-6, and TNF-α levels, modulating the OPG/RANKL system, and inhibiting the activation of NF-κB.

---

## Body

## 1. Introduction

Rheumatoid arthritis (RA) is a complex, chronic inflammatory disease with a global prevalence of approximately 0.24% [1]. The pathophysiology of RA is characterized by a variety of immune cells, together with aberrant inflammatory cytokines such as TNF-α, IL-1, IL-6, and IL-17, infiltrating the synovium of multiple joints; these inflammatory mediators lead to long-term inflammation and the formation of pannus, which ultimately result in irreversible joint and cartilage destruction and severe disability [2, 3].
Currently, the purpose of RA treatment is mainly to reduce the inflammatory response, inhibit the development of lesions and bone damage, and protect joint function. To date, no ideal drug has been found that can completely cure this disease, and current anti-inflammatory drugs, whether nonsteroidal anti-inflammatory drugs (NSAIDs), steroids, or biological response modifiers, cannot meet the needs of all RA patients.

Traditional Chinese medicine (TCM) has attracted increasing attention due to its safety and relatively few adverse reactions. Wang-Bi Capsule (WB), a TCM-based herbal formula, has been used to treat RA in China for several years and has shown positive clinical effects. It is composed of seventeen herbal medicines: Rehmannia glutinosa Libosch., Rehmannia glutinosa (Gaetn.) Libosch. ex Fisch. et Mey., Dipsacus asper Wall. ex Henry, Aconitum carmichaeli Debx., Angelica pubescens Maxim. f. biserrata Shan et Yuan, Drynaria fortunei (Kunze) J. Sm., Cinnamomum cassia Presl, Epimedium brevicornu Maxim., Saposhnikovia divaricata (Trucz.) Schischk., Clematis chinensis Osbeck, Gleditsia sinensis Lam., Capra hircus Linnaeus, Paeonia lactiflora Pall., Ciborium barometz (L.) J. Sm., Anemarrhena asphodeloides Bunge, Lycopodium japonicum Thunb., and Carthamus tinctorius L. Recently, the pharmacological effects of some components of WB in RA have been reported. Zhang and Dai reported that total paeoniflorin, the major active component of Paeonia lactiflora Pall., could inhibit the proliferation of lymphocytes and fibroblast-like synoviocytes, and the production of matrix metalloproteinases [4]. Chi et al. indicated that icariin, isolated from the Epimedium family, decreased Th17 cells and the production of IL-17 through inhibiting STAT3 activation in collagen-induced arthritis (CIA) mice [5]. Kong et al.
showed that Saposhnikovia divaricata chromone extract (SCE) reduced the protein level of NF-κB and inhibited p-ERK, p-JNK, and p-p38 expression in human fibroblast-like synoviocytes and CIA rats [6]. However, the pharmacological mechanism of action of WB in RA is still obscure. In this study, we aimed to determine the effect of WB in a CIA mouse model and to elucidate its mechanism of action.

## 2. Materials and Methods

### 2.1. Animals

Male DBA/1J mice (6–8 weeks old) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). Mice were housed in cages, and water and food were provided ad libitum. All mice were allowed to acclimatize for 1 week before the initiation of the experiment. All protocols in this study were approved by the Research Ethics Committee of the Institute of Basic Theory of Chinese Medicine, China Academy of Chinese Medical Sciences.

### 2.2. Induction of Collagen-Induced Arthritis

Bovine type II collagen (Chondrex, Redmond, WA, USA) was emulsified in an equal volume of Freund's complete adjuvant (Chondrex, Redmond, WA, USA) to a final concentration of 1 mg/mL. On day 0, mice were subcutaneously injected with 100 μL of the emulsion at the base of the tail. On day 21, the animals were given booster injections intraperitoneally with 50 μL of the emulsion [7].

### 2.3. Grouping and Treatment

WB was provided and identified by Liaoning China Resources Benxi Sanyao Co., Ltd. (Liaoning, China, No. 20180205). It was prepared by dissolving in double-distilled water, and the sample was fully blended again prior to use. After successful induction of the CIA model, mice were randomly divided into six groups with ten mice per group: control group, model group, methotrexate group (MTX, 0.3 mg/kg/d), WB low-dose group (WB-L, 0.536 g/kg/d), WB middle-dose group (WB-M, 1.073 g/kg/d, equivalent to the clinical dose for RA patients), and WB high-dose group (WB-H, 2.146 g/kg/d).
Drug administration started on day 28 after the first immunization and lasted four weeks. WB and MTX solutions were orally administered in a volume of 0.1 mL/10 g. The mice in the normal and model groups were administered the same volume of double-distilled water.

### 2.4. Assessment of Arthritis Severity

Arthritis severity was graded using a 5-point scale (0: normal; 1: erythema and mild swelling confined to the tarsals or ankle joint; 2: erythema and mild swelling extending from the ankle to the tarsals; 3: erythema and moderate swelling extending from the ankle to the metatarsal joints; 4: erythema and severe swelling encompassing the ankle, foot, and digits, or ankylosis of the limb) [8]. The total score of each mouse was calculated as the arthritic index, with a maximum possible score of 16 (4 points × 4 paws).

### 2.5. Enzyme-Linked Immunosorbent Assay

Serum was collected after eyeball blood sampling in mice. Levels of IL-1β, IL-6, and TNF-α were detected with commercially available enzyme-linked immunosorbent assay (ELISA) kits (eBioscience, San Diego, CA, USA) according to the manufacturer's instructions.

### 2.6. Histological Assessment

The dissected hind paw joints of mice were fixed in 10% formalin solution and decalcified using 10% EDTA. After dehydration, specimens were paraffin-embedded, sectioned (5 mm thickness) for hematoxylin and eosin (H&E) staining, and observed under a light microscope. Histopathological characteristics were evaluated blindly as described previously [9].

### 2.7. Immunohistochemical Staining

The sections were dewaxed and hydrated using xylene and a graded series of alcohols. Endogenous peroxidase activity was quenched with 3% H2O2. The tissues were then incubated with anti-IL-1β, anti-IL-6, and anti-TNF-α antibodies (Abcam, Cambridge, UK) overnight at 4°C. The final color product was developed with a DAB kit (ZSGB-BIO, Beijing, China), and sections were counterstained with hematoxylin (Leagene, Beijing, China).
Meanwhile, PBS was used instead of primary antibodies for control staining. Immunohistochemical semiquantitative analysis was performed as previously described [10].

### 2.8. TRAP Staining

The sections were stained for TRAP using a TRAP staining kit (Sigma, St. Louis, MO, USA) according to the manufacturer's protocol. Specimens were examined by computer image analysis using the Leica Qwin image analysis software (Leica Microsystems, Germany). TRAP-positive multinucleated cells containing more than 3 nuclei were identified as osteoclasts [11].

### 2.9. Micro-CT Analysis

The fixed hind paws were placed in a centrifuge tube with physiological saline. Micro-CT analyses were performed using a 1174 compact micro-CT (Skyscan, Aartselaar, Belgium). The micro-CT analysis procedures were performed according to the international guidelines.

### 2.10. Western Blotting

Analysis of the proteins extracted from ankle joints by western blotting was performed using standard methods. Equivalent amounts of protein from each sample were separated on a 10% SDS-polyacrylamide gel and blotted onto a PVDF membrane. The membrane was blocked with 5% milk (BD, Sparks, MD, USA); incubated with antibodies against OPG, RANKL, p-p65, p-IKKα/β, IκBα, and β-actin (Abcam, Cambridge, UK) overnight; and then hybridized with an HRP-conjugated secondary antibody for 1 h. The immunoreactive bands were visualized using an ECL system (CLINX, Shanghai, China), and their relative intensities were quantified using ImageJ.

### 2.11. Statistical Analysis

All data are expressed as means ± standard deviation (SD) and were analyzed using GraphPad Prism 6 software. Differences in the mean values of the various groups were analyzed by ANOVA. P values <0.05 were considered significant.
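The arthritis-index arithmetic (Section 2.4) and the group comparison by ANOVA (Section 2.11) can be sketched as follows. This is a minimal illustration: the function names, group names, and all scores are invented, and in practice the P value would be obtained from statistical software such as GraphPad Prism by comparing the F statistic against the F distribution.

```python
# Minimal sketch (not the study's code): arthritis index per Section 2.4
# and a one-way ANOVA F statistic per Section 2.11. All scores are invented.

def arthritis_index(paw_scores):
    """Sum of four per-paw scores (0-4 each); maximum index is 16."""
    assert len(paw_scores) == 4 and all(0 <= s <= 4 for s in paw_scores)
    return sum(paw_scores)

def anova_f(groups):
    """One-way ANOVA F statistic over a list of groups of index values."""
    values = [v for g in groups for v in g]
    grand_mean = sum(values) / len(values)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((v - m) ** 2
                    for g, m in zip(groups, group_means) for v in g)
    df_between = len(groups) - 1
    df_within = len(values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-mouse paw scores for two groups
model_group = [arthritis_index(p) for p in [(4, 3, 4, 3), (3, 3, 4, 4), (4, 4, 3, 2)]]
wb_m_group = [arthritis_index(p) for p in [(2, 1, 2, 1), (1, 2, 2, 2), (2, 2, 1, 1)]]
f_statistic = anova_f([model_group, wb_m_group])
# f_statistic is then compared against the F distribution to obtain a P value
```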
## 3. Results

### 3.1. WB Ameliorated Arthritis Severity of CIA Mice

To evaluate the therapeutic effect of WB on RA, we used CIA mice, a typical RA animal model. As shown in Figure 1(a), WB-M and WB-H treatment starting from disease onset effectively suppressed CIA progression. Significant improvement in clinical signs was observed about two weeks after WB administration, and the effect persisted until the end of the experiment. Subsequently, we analyzed the histological changes in ankle joints from the hind paws to determine the effect of WB on joint inflammation and destruction. As shown in Figures 1(b) and 1(c), the infiltration of inflammatory cells and synovial hyperplasia, as well as cartilage and bone destruction, were evident in CIA mice. WB-M and WB-H administration alleviated these histological changes in the ankle joints of CIA mice.
The histological score of the ankle joint was significantly lower in WB-M- and WB-H-treated mice than in CIA mice.

Figure 1: WB ameliorated arthritis severity of CIA mice. (a) Arthritis score of each group. (b) Histological score of each group. (c) Representative histological findings of the ankle joint from hind paws of each group. Original magnification 200x. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group.

### 3.2. WB Decreased the Levels of IL-1β, IL-6, and TNF-α in CIA Mice

Multiple proinflammatory cytokines, such as IL-1β, IL-6, and TNF-α, not only induce and exacerbate inflammation but also cause cartilage damage and bone destruction in RA. We therefore measured the concentrations of these proinflammatory cytokines in the serum and joints of CIA mice treated with WB. As shown in Figure 2, the serum levels of IL-1β, IL-6, and TNF-α in CIA mice were markedly increased, whereas WB treatment significantly decreased the levels of all three cytokines. Similarly, WB treatment also lowered the levels of IL-1β, IL-6, and TNF-α, which were remarkably increased in synovium tissue sections of ankle joints from CIA mice (Figure 3).

Figure 2: WB decreased (a) IL-1β, (b) IL-6, and (c) TNF-α levels in the serum of CIA mice. Serum obtained from all groups was measured by ELISA. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group.

Figure 3: (a) Representative immunohistochemistry images of IL-1β, IL-6, and TNF-α in each group. (b) IOD means of each group. Synovium tissue sections from ankle joints in each group were stained with anti-IL-1β, anti-IL-6, and anti-TNF-α. Original magnification 400x. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group.

### 3.3. WB Inhibited the Bone Destruction in CIA Mice

Histological examination showed that WB not only alleviated joint inflammation but also inhibited joint and bone destruction.
To further investigate the effect of WB on bone destruction in CIA mice, we used micro-CT analysis and TRAP staining. As shown in Figure 4(a), the paws of the model group exhibited severe bone destruction and decreased bone density, whereas paws from WB-treated groups, especially the WB-M group, exhibited reduced bone destruction and increased bone density. Moreover, TRAP staining showed that the number of osteoclasts in the ankle joint was remarkably higher in the model group than in the control group, and WB-M treatment significantly lowered the osteoclast number in CIA mice (Figures 4(b) and 4(c)). Therefore, we used WB-M for further mechanism studies.

Figure 4: WB inhibited bone destruction in CIA mice. (a) Representative three-dimensional images of the ankle joint. (b) Representative osteoclast images of the ankle joint by TRAP staining. (c) Osteoclast number of each group. ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group.

### 3.4. WB Regulated the OPG/RANKL System

To investigate the mechanism by which WB inhibits bone destruction, we detected OPG and RANKL expression in the ankle joints of CIA mice by western blotting. As shown in Figure 5, the level of OPG was obviously decreased, whereas the level of RANKL was increased, in the model group compared with the control group. WB treatment significantly increased the OPG level and, at the same time, decreased the RANKL level compared with the model group.

Figure 5: WB regulated OPG and RANKL levels in CIA mice. (a) Representative bands of western blotting in different groups. (b) Semiquantitative analysis of western blotting in different groups. ∗P<0.05 vs. model group. ##P<0.01 vs. control group.

### 3.5. WB Suppressed the Activation of the NF-κB Signaling Pathway

The expression of p-IKKα/β, IκBα, and p-p65 was detected by western blotting. As shown in Figure 6, WB significantly decreased the expression levels of p-IKKα/β and p-p65 in the ankle joints of CIA mice.
In addition, WB increased the level of IκBα in CIA mice.

Figure 6: WB suppressed the activation of NF-κB in CIA mice. (a) Representative bands of western blotting in different treatment groups. (b) Semiquantitative analysis of western blotting in different treatment groups. ∗P<0.05, ∗∗P<0.01 vs. model group. ##P<0.01 vs. control group.
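The osteoclast counting rule behind the TRAP results (Section 2.8: TRAP-positive multinucleated cells with more than three nuclei) amounts to a simple filter over per-cell records. The sketch below uses invented records and invented names, not measured data:

```python
# Sketch of the TRAP-based osteoclast counting criterion (Section 2.8).
# Each record is (trap_positive, nuclei_count); all values are invented.

def count_osteoclasts(cells):
    """Count TRAP-positive cells with more than 3 nuclei (osteoclasts)."""
    return sum(1 for trap_positive, nuclei in cells
               if trap_positive and nuclei > 3)

field_of_view = [(True, 5), (True, 2), (False, 6), (True, 4), (True, 3)]
n_osteoclasts = count_osteoclasts(field_of_view)  # counts the 5- and 4-nuclei cells
```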
## 4. Discussion

Joint inflammation and bone destruction are the two typical pathological features of RA. In this study, we found that WB could effectively suppress disease progression in CIA mice. Histological analysis also showed that WB treatment alleviated inflammatory cell infiltration, as well as cartilage and bone destruction, in CIA mice.

Cytokines regulate a broad range of inflammatory processes in the pathogenesis of RA, especially the proinflammatory cytokines TNF-α, IL-1β, and IL-6 [12]. TNF-α and IL-1β are central inflammatory cytokines in the pathogenesis of RA.
They exacerbate inflammation by enhancing the release of inflammatory mediators, including proinflammatory cytokines, chemokines, and PGE2. Simultaneously, they participate in bone destruction by promoting osteoclast activation [12, 13]. IL-6 was originally identified as a B cell regulatory factor, but recent findings indicate that it also acts as a regulator of CD4+ T cell proliferation, differentiation, and activation [14, 15]. Moreover, IL-6 is involved in the development of bone destruction, because it can induce osteoclast differentiation through receptor activator of NF-kappa B ligand (RANKL) expression [16]. In our study, we found that TNF-α, IL-1β, and IL-6 levels in the serum and joints of CIA mice were significantly decreased in the WB treatment group compared with the model group. These results further indicate the effectiveness of WB treatment in CIA mice. Osteoclast-mediated bone destruction plays a key role in RA progression. Proinflammatory cytokines such as TNF-α, IL-1β, and IL-6 have been reported to be osteoclastogenic; in addition, interactions between the receptor activator of nuclear factor kappa B (RANK) and its ligand RANKL are essential in osteoclastogenesis. RANKL is a TNF-family cytokine required for osteoclast formation. RANK on monocytes binds to RANKL, initiating osteoclast differentiation [17, 18]. Osteoprotegerin (OPG) is another important factor in the regulation of osteoclast differentiation. It acts as a decoy receptor that blocks the effect of RANKL, thus suppressing the activation of osteoclasts [19]. Regulating the OPG-RANKL system is therefore regarded as an effective strategy to inhibit bone destruction in RA.
In our study, we found that WB treatment not only lowered the number of osteoclasts but also increased the OPG level and decreased the RANKL level in the joints of CIA mice, demonstrating its effectiveness in alleviating bone destruction in RA. The transcription factor NF-κB is crucial in the regulation of immune responses and bone destruction in RA. Activation of IκB kinase (IKK) leads to phosphorylation of IκB and the release of p65, thereby driving inflammatory responses. NF-κB also participates in osteoclastogenesis, because it mediates the effects of RANKL [20]. Previous studies have demonstrated the activation of NF-κB in cultured synovial fibroblasts and synovial tissue from RA patients, as well as in arthritis animal models [21]. Moreover, blocking NF-κB could relieve the inflammatory response and prevent bone destruction in arthritis animal models [22, 23]. Our results indicated that WB effectively inhibited the activation of NF-κB, which may partly explain its effect in inhibiting the progression of joint inflammation and bone destruction. In conclusion, this study demonstrated that WB could effectively alleviate disease progression in CIA mice by decreasing IL-1β, IL-6, and TNF-α levels, modulating the OPG-RANKL system, and inhibiting the activation of NF-κB. --- *Source: 1015083-2020-03-19.xml*
2020
# Nonlinear Time Series: Computations and Applications **Authors:** Ming Li; Massimo Scalia; Cristian Toma **Journal:** Mathematical Problems in Engineering (2010) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2010/101523 --- ## Body --- *Source: 101523-2010-10-20.xml*
2010
# Cost Effectiveness of Bosentan for Pulmonary Arterial Hypertension: A Systematic Review **Authors:** Ruxu You; Xinyu Qian; Weijing Tang; Tian Xie; Fang Zeng; Jun Chen; Yu Zhang; Jinyu Liu **Journal:** Canadian Respiratory Journal (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1015239 --- ## Abstract Objectives. Although many studies have reported on the cost-effectiveness of bosentan for treating pulmonary arterial hypertension (PAH), a systematic review of economic evaluations of bosentan is currently lacking. Objective evaluation of current pharmacoeconomic evidence can assist decision makers in determining the appropriate place in therapy of a new medication. Methods. Systematic literature searches were conducted in English-language databases (MEDLINE, EMBASE, EconLit databases, and the Cochrane Library) and Chinese-language databases (China National Knowledge Infrastructure, WanFang Data, and Chongqing VIP) to identify studies assessing the cost-effectiveness of bosentan for PAH treatments. Results. A total of 8 published studies were selected for inclusion. Among them were two studies comparing bosentan with epoprostenol and treprostinil. Both results indicated that bosentan was more cost-effective than epoprostenol, while the results of bosentan and treprostinil were not consistent. Four studies compared bosentan with other endothelin receptor antagonists, which indicated ambrisentan might be the drug of choice for its economic advantages and improved safety profile. Only two economic evaluations provided data to compare bosentan versus sildenafil, and the results favored the use of sildenafil in PAH patients. Four studies compared bosentan with conventional, supportive, or palliative therapy, and whether bosentan was cost-effective was uncertain. Conclusions. Bosentan may represent a more cost-effective option compared with epoprostenol and conventional or palliative therapy. 
There was unanimous agreement that bosentan was not a cost-effective front-line therapy compared with sildenafil and other endothelin receptor antagonists. However, high-quality cost-effectiveness analyses that utilize long-term follow-up data and have no conflicts of interest are still needed. --- ## Body ## 1. Introduction Pulmonary arterial hypertension (PAH) is a relatively rare but life-threatening disease characterized by elevated arterial blood pressure in the pulmonary circulation that, when left untreated, results in right ventricular failure and death. The diagnosis is based on pressure measurements obtained by right heart catheterization and is defined as a mean pulmonary artery pressure of at least 25 mmHg, a pulmonary artery wedge pressure of not more than 15 mmHg, and a pulmonary vascular resistance (PVR) of at least 3 Wood units [1]. The pathological changes of PAH include lesions in distal pulmonary arteries, medial hypertrophy, intimal proliferative and fibrotic changes, and adventitial thickening with perivascular inflammatory infiltrates. Vasoconstriction, endothelial dysfunction, dysregulated smooth muscle cell growth, inflammation, and thrombosis are contributory mechanisms to disease progression [2]. A modified New York Heart Association (NYHA) functional classification system was adopted by the World Health Organization (WHO) in 1998 to facilitate evaluation of patients with PAH. Patients may have functional class (FC) I through IV, with increasing numbers reflecting increased severity. Although PAH affects males and females of all ethnicities and ages, the disease is more common in women aged between 20 and 40 years [3]. The prevalence of PAH has been reported to be between 15 and 50 cases per million population [4].
Currently, there is no cure for PAH, but the overall median survival rates have improved dramatically over the past years (from 2.8 to 7 years in an American registry) [5, 6], presumably due to a combination of significant advances in treatment strategies and patient-support strategies. Throughout the past 20 years, numerous specific pharmacological agents have been approved for the treatment of PAH, including prostacyclin pathway agonists (intravenous prostacyclin, synthetic analogs of prostacyclin, and nonprostanoid prostacyclin receptor agonists), endothelin receptor antagonists (ERAs), phosphodiesterase type-5 inhibitors (PDE-5Is), and the first soluble guanylate cyclase (sGC) stimulator (riociguat) [7]. As more novel therapies for PAH enter the market, it is necessary to evaluate their impacts on both economic and long-term health outcomes. Considering the limited availability of healthcare in the management of PAH, health technology assessment is increasingly important to determine whether treatments represent good value for money, as PAH is not only associated with morbidity, mortality, and overall reduced quality of life but also leads to increased healthcare expenditure [8]. Bosentan is a dual endothelin receptor antagonist and the first oral agent available in China for the treatment of PAH. Since the development of bosentan, the number of papers focused on its efficacy, short- and long-term costs, and cost-effectiveness has increased massively, providing scientific evidence for a deeper understanding of the therapy. Despite the potential benefit of targeted agents in the treatment of PAH, their application remains controversial due to their high prices. Hence, it is necessary to assess the economic impact of the use of these agents in PAH. The objective of this article is, therefore, to review and assess the economic evidence of treatments with the targeted agent bosentan in PAH.
The review was also conducted to provide insight into key drivers of cost-effectiveness ratios and to help healthcare decision-makers, patients, and health system leaders make well-informed decisions. ## 2. Methods The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines by Moher et al. were followed for review and reporting procedures [9]. ### 2.1. Eligibility Criteria To be included in this systematic review, articles had to meet the following criteria: (1) they were full economic evaluations that examined costs and their consequences and reported incremental cost-effectiveness ratios (ICERs) or incremental cost-utility ratios (ICURs); (2) they included a bosentan intervention, whether monotherapy or combination therapy; and (3) they were available in complete full-text format. Articles were excluded if they were systematic reviews, expert opinions, comments (commentaries), methodological articles, or conference abstracts and proceedings. ### 2.2. Literature Search We conducted a systematic literature search to identify all relevant studies estimating the cost-effectiveness of PAH therapies published between 1 January 2000 and 30 June 2017. The following databases were searched: MEDLINE (PubMed), EMBASE (Ovid), EconLit, and the Cochrane Library for English-language studies; and China National Knowledge Infrastructure (CNKI), WanFang Data, and Chongqing VIP (CQVIP) for Chinese-language studies. The literature search algorithm is detailed in Appendix S1. ### 2.3. Study Selection Titles and abstracts were screened for eligibility by two independent authors. Full-text copies of all potentially relevant articles were obtained and reviewed to determine whether they met the prespecified inclusion criteria. Disagreements were resolved by consensus through discussion. ### 2.4.
Data Collection Data on study and patient characteristics, as well as relevant outcomes, were extracted using a standardized data extraction form, including general information for the article (e.g., authors and publication year), characteristics of the study (e.g., design and sample size), type of economic evaluation, study objective, description of the intervention and comparators, measure of benefit, cost data and respective sources, methods for dealing with uncertainty, and cost and outcome results. ### 2.5. Quality Assessment The quality of reporting of all included studies was appraised using the 24-item Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Each item in the CHEERS checklist was scored as having met the criteria in full (“1”), not at all (“0”), or not applicable (NA). Quality assessment was performed by two authors, and the remaining authors resolved conflicts through discussion and consensus. Studies with a score higher than 75% were categorized as good, studies in the range 50%–74% were categorized as moderate, and studies with scores lower than 50% were categorized as low [10]. ### 2.6. Data Synthesis A narrative synthesis was used to summarize and evaluate the aims, methods, settings, and results of the studies reviewed. When possible, information was compared across studies with respect to the modeling technique, the cost perspective, the measures of benefit used, and incremental cost-effectiveness ratios. Cost/charges data are presented in US$ for the common price year 2017 using the “CCEMG-EPPI-Centre Cost Converter” Version 1.5 [11], a web-based tool that adjusts an estimate of cost expressed in one currency and price year to a target currency and/or price year. ## 3. Results ### 3.1. Studies Identified A total of 163 potential publications were identified with the search strategy, including 119 English-language studies and 44 Chinese-language studies; 18 were duplicates, and 131 were excluded after screening of titles and abstracts for not matching the eligibility criteria. A total of 8 articles were retrieved and analyzed (Figure 1). Figure 1 Flowchart of literature search. CNKI: China National Knowledge Infrastructure database; CQVIP: Chongqing VIP database; PAH: pulmonary arterial hypertension. ### 3.2. Description of Identified Studies The 8 included studies are detailed in Table 1. Two studies [12, 13] were conducted for the USA, and two for Canada [14, 15].
The remaining studies were conducted for Australia [16], the UK [17], China [18], and Italy [19]. Of the 8 studies included, five used a Markov model [12–14, 17, 18], two used an Excel model [16, 19], and one used a cost-minimization analysis [15]. Five studies were conducted from the perspective of healthcare payers, of which four were performed from the perspective of public payers (e.g., Canadian Healthcare System, National Health System) [14, 15, 17, 19], while one study used a third-party payer perspective [16]. In three studies [12, 13, 18] the perspective was not stated. The time horizons used in the Excel and Markov models were highly variable, ranging from 3 years to a lifetime. Four studies [12, 13, 15, 19] used a shorter 3- or 5-year time horizon, while the remaining studies [14, 16–18] chose longer modeling horizons such as 15 years. Table 1 General characteristics of the included studies. References Year published, country Perspective Model type Target population Treatment Comparator Cost components Time horizon Discount rate (%) Source of effectiveness and safety data Highland et al. [12] 2003, USA Unclear Markov model Patients with PAH Bosentan Epoprostenol, treprostinil Drug, diluent, per diem, hospitalization, home health, Hickman catheter, liver function One year NA Three studies Garin et al. [13] 2009, USA Unclear Markov model Patients with FC III and IV PAH Bosentan Epoprostenol, treprostinil, iloprost, sitaxentan, ambrisentan, sildenafil Drug, per diem, pain medications, hospitalization/clinic visit, intravenous line infections, laboratory tests One year NA Two RCTs Coyle et al.
[14] 2016, Canada Healthcare system Markov model Patients with FC II and III PAH Bosentan Ambrisentan, sildenafil, tadalafil, supportive care Drugs, monitoring/therapeutic procedures (includes liver function tests, pregnancy test, echocardiograms, renal function, and blood work), hospital/ER/clinic visits (includes general practitioner visits, specialist visits, nurse visits, hospitalizations, emergency room visits, therapeutic procedures), supportive care drugs Lifetime 5 A network meta-analysis Dranitsaris et al. [15] 2009, Canada Canadian healthcare system Cost-minimization analysis (CMA) Patients with FC II and III PAH Bosentan Ambrisentan, sitaxentan, sildenafil Drug acquisition, medical consultations and visits, laboratory and diagnostic procedures, functional studies, other healthcare-related resources, alternative pharmacotherapy 3 years 3 Nine placebo-controlled trials Wlodarczyk et al. [16] 2006, Australia A healthcare payer perspective An Excel model Patients with iPAH Bosentan Conventional therapy Exercise test, lung function, chest x-ray, echocardiogram, electrocardiogram, blood tests, specialist, total medical 15 years 5 Two pivotal clinical trials and their long-term open-label extensions Stevenson et al. [17] 2009, UK National Health Service Markov model Patients with iPAH or PAH-CTD of FC III Bosentan Palliative therapy Drug acquisition, home delivery, palliative care Lifetime 3.5 Two RCTs Fan et al. [18] 2016, China Unclear Markov model Patients with PAH Bosentan Palliative therapy Drugs, monitoring/therapeutic procedures Lifetime 3.5 Patient registration and follow-up data for charity project Barbieri et al. [19] 2014, Italy National Health System An Excel model Patients with FC II and III PAH Bosentan Ambrisentan Drug acquisition cost, direct medical costs (includes visits to professionals, laboratory tests, concomitant medications, hospitalizations) 3 years Unclear Two separate double-blind studies Note.
PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; RCT: randomized controlled trial; CTD: connective tissue disease; CMA: cost-minimization analysis. The endothelin receptor antagonist in all included studies was bosentan, and the most frequent comparators were prostanoids [12, 13], ambrisentan [13–15, 19], sildenafil [13, 14], and conventional, supportive, or palliative therapy [14, 16–18]. The majority of studies reported results as ICERs. Two studies were sponsored by Actelion Pharmaceuticals [16, 17], two were funded by GlaxoSmithKline [15, 19], one received funds from Actelion Pharmaceuticals, Encysive Pharmaceuticals, CoTherix, Gilead Sciences, United Therapeutics, and Pfizer [13], and one was funded by the Canadian Agency for Drugs and Technologies in Health (CADTH) [14]. Two studies [12, 18] did not disclose the source of funding. ### 3.3. Quality Assessment Based on the reporting quality assessment against the CHEERS statement, most of the studies were classified as good [13–17, 19] and two as moderate [12, 18]. Table 2 presents whether each item in the CHEERS checklist was reported sufficiently, partially, or not at all across the review. Two studies [12, 18] failed to report the source of funding, and no statement of conflicts of interest was given by five studies [12, 15–18]. Additionally, no study stated the setting and location, an item also required by the checklist when reporting the background and objectives of economic evaluations. Moreover, the study perspective was not stated by Highland et al. [12], Garin et al. [13], and Fan et al. [18]. Reasons for the choice of time horizon were not reported in three studies [12, 13, 19]. Only one study [17] performed a subgroup analysis to assess the impacts of bosentan in iPAH and PAH-CTD.
Finally, no studies [12–19] discussed the generalizability of the results, even though they reported the study findings and limitations.

Table 2 Quality of the economic evaluations (as assessed by the CHEERS statement).

| Item No. | Section/item | Highland KB et al. [12] | Garin MC et al. [13] | Coyle K et al. [14] | Dranitsaris G et al. [15] | Wlodarczyk JH et al. [16] | Stevenson MD et al. [17] | Fan et al. [18] | Barbieri M et al. [19] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Title | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | Abstract | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 3 | Background and objectives | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 4 | Target population and subgroups | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 5 | Setting and location | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | Study perspective | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 |
| 7 | Comparators | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 8 | Time horizon | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 9 | Discount rate | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 |
| 10 | Choice of health outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 11 | Measurement of effectiveness | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 12 | Measurement and valuation of preference-based outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 13 | Estimating resources and costs | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 14 | Currency, price date, and conversion | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 15 | Choice of model | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 16 | Assumptions | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 17 | Analytical methods | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 18 | Study parameters | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 19 | Incremental costs and outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 20 | Characterizing uncertainty | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 21 | Characterizing heterogeneity | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 22 | Study findings, limitations, generalizability, and current knowledge | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 23 | Source of funding | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 |
| 24 | Conflicts of interest | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
| | Overall quality | Moderate | Good | Good | Good | Good | Good | Moderate | Good |

Note. “1” meets the quality assessment criteria; “0” does not fully conform to the quality assessment criteria; CHEERS: Consolidated Health Economic Evaluation Reporting Standards. ### 3.4. Cost-Effectiveness Results of the Studies The results of the included studies for the cost-effectiveness analysis are summarized in Table 3. Table 3 Overview of economic evaluation outcomes of included studies.
References Comparison Effectiveness/benefits Costs (original currency; mean) Costs (2017 US$; mean) ICER (2017 US$ per QALY) Threshold of ICER (per QALY) Sensitivity or uncertainty analysis Highland et al. [12] (1) Bosentan vs. epoprostenol Incremental effectiveness: 11 QALYs per 100 patients Incremental costs: $3631900 per 100 patients/yr Incremental costs: $4641721.88 per 100 patients/yr Dominating NA Sensitivity analyses: results robust. (2) Bosentan vs. treprostinil Incremental effectiveness: 11 QALYs per 100 patients Incremental costs: $4873800 per 100 patients/yr Incremental costs: $6228922.62 per 100 patients/yr Dominating NA Garin et al. [13] (1) Bosentan vs. epoprostenol Incremental effectiveness: 5.77 QALYs per 100 patients Incremental costs: $408213 per 100 patients/yr Incremental costs: $452508.19 per 100 patients/yr Dominating NA Sensitivity analyses had minimal impact on these results. (2) Bosentan vs. treprostinil Incremental effectiveness: 5.92 QALYs per 100 patients Incremental costs: $434684 per 100 patients/yr Incremental costs: $481851.56 per 100 patients/yr $81393.84 $50000 (3) Bosentan vs. iloprost Incremental effectiveness: 3.09 QALYs per 100 patients Incremental costs: $3466486 per 100 patients/yr Incremental costs: $3842634.40 per 100 patients/yr Dominating NA (4) Bosentan vs. Sitaxentan Incremental effectiveness: 0.16 QALYs per 100 patients Incremental costs: $474 per 100 patients/yr Incremental costs: $525.43 per 100 patients/yr $3283.94 $50000 (5) Bosentan vs. ambrisentan Incremental effectiveness: 0 QALYs per 100 patients Incremental costs: $0 per 100 patients/yr Incremental costs: $0 per 100 patients/yr $0 NA (6) Bosentan vs. sildenafil Incremental effectiveness: 0 QALYs per 100 patients Incremental costs: $3153535 per 100 patients/yr Incremental costs: $3495725.08 per 100 patients/yr Dominated NA Coyle et al. [14] (1) Bosentan vs. 
ambrisentan 5 mg Patients with FC II Incremental effectiveness: −0.73 QALYs per person (treatment, 3.904 QALYs; comparator, 4.634 QALYs) Patients with FC II Incremental costs: Can$29095 per person (treatment, Can$406282; comparator, Can$377187) Patients with FC II Incremental costs: $24105.22 per person (treatment, $336604.81; comparator, $312499.59) Dominated NA Extensive sensitivity analyses: results robust. Probabilistic sensitivity analysis: results robust. Patients with FC III Incremental effectiveness: −0.22 QALYs per person (treatment, 2.960 QALYs; comparator, 3.180 QALYs) Patients with FC III Incremental costs: Can$61406 per person (treatment, Can$412979; comparator, Can$351573) Patients with FC III Incremental costs: $50874.90 per person (treatment, $342153.27; comparator, $291278.38) Dominated NA (2) Bosentan vs. ambrisentan 10 mg Patients with FC II Incremental effectiveness: −0.313 QALYs per person (treatment, 3.904 QALYs; comparator, 4.217 QALYs) Patients with FC II Incremental costs: Can$28759 per person (treatment, Can$406282; comparator, Can$377523) Patients with FC II Incremental costs: $23826.84 per person (treatment, $336604.81; comparator, $312777.96) Dominated NA Patients with FC III Incremental effectiveness: −0.083 QALYs per person (treatment, 2.960 QALYs; comparator, 3.043 QALYs) Patients with FC III Incremental costs: Can$36095 per person (treatment, Can$412979; comparator, Can$376884) Patients with FC III Incremental costs: $29904.70 per person (treatment, $342153.27; comparator, $312248.55) Dominated NA (3) Bosentan vs.
sildenafil Patients with FC II Incremental effectiveness: −0.7593 QALYs per person (treatment, 3.904 QALYs; comparator, 4.663 QALYs) Patients with FC II Incremental costs: Can$260028 per person (treatment, Can$406282; comparator, Can$146254) Patients with FC II Incremental costs: $215433.31 per person (treatment, $336604.81; comparator, $121171.50) Dominated NA Patients with FC III Incremental effectiveness: −0.324 QALYs per person (treatment, 2.960 QALYs; comparator, 3.284 QALYs) Patients with FC III Incremental costs: Can$231860 per person (treatment, Can$412979; comparator, Can$181119) Patients with FC III Incremental costs: $192096.11 per person (treatment, $342153.27; comparator, $150057.17) Dominated NA (4) Bosentan vs. tadalafil Patients with FC II Incremental effectiveness: −0.098 QALYs per person (treatment, 3.904 QALYs; comparator, 4.002 QALYs) Patients with FC II Incremental costs: Can$253037 per person (treatment, Can$406282; comparator, Can$153245) Patients with FC II Incremental costs: $209641.26 per person (treatment, $336604.81; comparator, $126963.55) Dominated NA Patients with FC III Incremental effectiveness: −0.053 QALYs per person (treatment, 2.960 QALYs; comparator, 3.013 QALYs) Patients with FC III Incremental costs: Can$212395 per person (treatment, Can$412979; comparator, Can$200584) Patients with FC III Incremental costs: $175969.35 per person (treatment, $342153.27; comparator, $166183.93) Dominated NA (5) Bosentan vs.
supportive care Patients with FC II Incremental effectiveness: 0.686 QALYs per person (treatment, 3.904 QALYs; comparator, 3.128 QALYs) Patients with FC II Incremental costs: Can$251126 per person (treatment, Can$406282; comparator, Can$155156) Patients with FC II Incremental costs: $208058.00 per person (treatment, $336604.81; comparator, $128815.82) $303291.55 $165700.08 (Can$200000) Patients with FC III Incremental effectiveness: 0.273 QALYs per person (treatment, 2.960 QALYs; comparator, 2.687 QALYs) Patients with FC III Incremental costs: Can$208694 per person (treatment, Can$412979; comparator, Can$204285) Patients with FC III Incremental costs: $172903.07 per person (treatment, $342153.27; comparator, $169250.20) $633344.58 $165700.08 (Can$200000) Dranitsaris et al. [15] (1) Bosentan vs. ambrisentan NA Incremental costs: Can$16302 per patient (treatment, Can$164745; comparator, Can$148443) Incremental costs: $14956.40 per patient (treatment, $151146.60; comparator, $136190.20) NA NA One-way sensitivity analysis: results sensitive to sildenafil dose, ambrisentan daily drug cost, and bosentan daily drug cost. (2) Bosentan vs. sitaxentan NA Incremental costs: Can$6307 per patient (treatment, Can$164745; comparator, Can$158444) Incremental costs: $5786.41 per patient (treatment, $151146.60; comparator, $145365.70) NA NA (3) Bosentan vs. sildenafil NA Incremental costs: Can$116394 per patient (treatment, Can$164745; comparator, Can$48351) Incremental costs: $106786.59 per patient (treatment, $151146.60; comparator, $44360.01) NA NA Wlodarczyk et al. [16] Bosentan vs.
conventional care At 5 years incremental effectiveness: 1.39 life-years At 5 years incremental costs: A$116929 for each patient At 5 years incremental costs: $101787.70 for each patient $73228.56 $41928.72 (A$60000) One-way sensitivity analysis: removing the PBS continuation rules from the model, halving the annual mortality rate in patients treated with conventional therapy, and changing mortality and hospitalization RR affected the results. At 10 years incremental effectiveness: 2.93 life-years At 10 years incremental costs: A$181808 for each patient At 10 years incremental costs: $158265.43 for each patient $54015.51 $41928.72 (A$60000) At 15 years incremental effectiveness: 3.87 life-years At 15 years incremental costs: A$216331 for each patient At 15 years incremental costs: $188318.00 for each patient $48660.98 $41928.72 (A$60000) Stevenson et al. [17] Bosentan vs. palliative therapy Patients with iPAH Incremental effectiveness: 0.37 QALYs per patient (treatment, 3.32 QALYs; comparator, 2.95 QALYs) Patients with iPAH Incremental costs: £69000 per patient (treatment, £134000; comparator, £203000) Patients with iPAH Incremental costs: $111639.98 per patient (treatment, $216808.07; comparator, $328448.05) Dominating NA The results were similar in both the deterministic and probabilistic analyses. Patients with PAH-CTD Incremental effectiveness: 0.15 QALYs per patient (treatment, 1.36 QALYs; comparator, 1.21 QALYs) Patients with PAH-CTD Incremental costs: £32000 per patient (treatment, £62000; comparator, £94000) Patients with PAH-CTD Incremental costs: $51775.06 per patient (treatment, $100314.18; comparator, $152089.25) Dominating NA Fan et al. [18] Bosentan vs.
palliative therapy Incremental effectiveness: 6.19 QALYs per person (treatment, 7.23 QALYs; comparator, 1.04 QALYs) Incremental costs: ¥439046.77 per patient (treatment, ¥504293.75; comparator, ¥65246.98) Incremental costs: $125227.26 per patient (treatment, $143837.35; comparator, $18610.09) $20230.58 $39815.46 (¥139593) Sensitivity analyses: results robust. Barbieri et al. [19] Bosentan vs. ambrisentan NA Incremental costs: €1112145 (treatment, €87594291; comparator, €86482146) Incremental costs: $1184990.49 (treatment, $93331717.06; comparator, $92146726.56) NA NA The sensitivity analysis corroborated the base case findings. Note. “Dominating” denotes bosentan treatment producing more QALYs at a lower cost, whereas “dominated” denotes bosentan producing less QALYs at a higher cost. ICER: incremental cost-effectiveness ratio; QALY: quality-adjusted life-year; yr: year; PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; CTD: connective tissue disease. #### 3.4.1. Bosentan versus Prostanoids Two studies [12, 13] conducted in the USA provided economic evaluation data for bosentan versus prostanoids (epoprostenol, treprostinil, and iloprost). Highland et al. [12] developed a Markov model to compare the cost-effectiveness of bosentan, epoprostenol, and treprostinil in treating PAH in 2003. These studies reported that bosentan was more cost-effective than either epoprostenol or treprostinil, with lower costs (a cost saving of $46417.21 or $62289.22, respectively. Adjusted to the year 2017 value) and a greater gain in quality-adjusted life-years (QALYs; 0.11 more QALYs gained) per patient. Garin et al. [13] improved the research by Highland et al. [12] in the updated Markov model in 2009. Bosentan was found to be dominant (lower costs and greater QALY) relative to other medications epoprostenol and iloprost. Additionally, in contrast to the findings by Highland et al. [12], Garin et al. 
[13] found that treatment with treprostinil resulted in an average annual savings of $4818.51 (adjusted to the year 2017 value) when used as an alternative to bosentan, with an ICER of $81393.84 per QALYs (adjusted to the year 2017 value) gained. The ICER of bosentan therapy was $81393.84 per QALY, adjusted to the year 2017 value, greater than the cost-effectiveness threshold of $50000 per QALY in the USA. #### 3.4.2. Bosentan versus Other Endothelin Receptor Antagonists Four studies [13–15, 19] compared bosentan with other available endothelin receptor antagonists, including ambrisentan and sitaxentan. Two studies [13, 14] used the cost-effectiveness analysis, while the other two used the cost-minimization analysis [15] and the budget impact analysis [19], respectively.Coyle et al. [14] estimated that, as first-line medications, both 5 mg and 10 mg ambrisentan provided more QALYs and cost-saving values than bosentan for patients with either FCII or III PAH. Garin et al. [13] found that the cost-effective value of bosentan was similar to that of ambrisentan. However, sitaxentan, as an alternative to bosentan, its annual cost saving $525.43 per 100 patients (adjusted to the year 2017 value) and the ICER was $3283.94 per QALYs (adjusted to the year 2017 value) which is within the threshold of acceptability in the USA. In the study of Dranitsaris et al. [15], which built a population-based CMA model to evaluate 3 years pharmacotherapy, revealed that bosentan would be associated with higher costs of $14956.40 and $5786.41 (adjusted to the year 2017 value) when used as an alternative to ambrisentan or sitaxentan, respectively. Moreover, the budget impact analysis reported by Barbieri et al. [19] demonstrated that the use of ambrisentan instead of bosentan for eligible patients might result in savings of about $1.1 million (adjusted to the year 2017 value) over a 3-year time horizon in Italy. #### 3.4.3. 
Bosentan versus Sildenafil Two studies [13, 14] provided economic evaluation data for bosentan versus sildenafil in patients with PAH in the USA and Canada. Garin et al. [13] built the Markov-type model to evaluate 1-year treatment; the economic analysis and sensitivity analysis indicated that treatment with bosentan resulted in the same gain in QALYs as sildenafil, but at a higher cost. In another study [14], a cost-utility analysis suggests that sildenafil was less costly (FC II $121171.50 versus $336604.81; FC III $150057.17 versus $342153.27, adjusted to the year 2017 value) and more effective (FC II 4.663 QALYs versus 3.904 QALYs; FC III 3.284 QALYs versus 2.960 QALYs) than bosentan in PAH patients with either functional class FC II or III disease. Therefore, the results show that the initiation of treatment with sildenafil is likely the most economical option. #### 3.4.4. Bosentan versus Conventional, Supportive or Palliative Therapy Four studies [14, 16–18] conducted in Canada, Australia, the UK, and China compared bosentan with conventional, supportive, or palliative therapy. Wlodarczyk et al. [16] assessed an average cost of $204237.00 (adjusted to the year 2017 value) per patient for bosentan and $15918.99 (adjusted to the year 2017 value) for conventional therapy alone in Australia, and bosentan was associated with an incremental life expectancy of 3.87 years over palliative therapy, with an ICER of $48660.98 per life-year (LY) gained over a 15-year time horizon. Stevenson et al. [17] indicated that bosentan was likely to be a more potential cost-effective first-line therapy for UK patients over the lifetime with iPAH or PAH-CTD within FC III than palliative care, with less costly (IPAH $216808.07 versus $328448.05 per patient; PAH-CTD $100314.18 versus $152089.25 per patient, adjusted to the year 2017 value) and better outcomes in QALYs (IPAH 3.32 versus 2.95; PAH-CTD 1.36 versus 1.21). The study by Fan et al. 
[18] showed that the utility value, which represented the health-related quality of life, was 7.23 QALYs treated with bosentan therapy and 1.04 QALYs with palliative therapy, respectively. Bosentan was associated with an incremental gain of 6.19 QALYs over palliative therapy. The estimated cost per patient was $143837.35 for patients treated with bosentan and $18610.09 for those given palliative therapy, a cost increase of $125227.26 per patient. The incremental cost-utility of bosentan relative to palliative therapy was $20230.58, which was less than one half gross domestic product (GDP) in China.In comparison, in the study of Coyle et al. [14], bosentan did not show a conclusive effect on cost-effectiveness. The study conducted in Canada showed that the ICER of bosentan versus supportive care in both patients with FC II and III diseases remained $303291.55 or $633344.58, which was higher than the willingness-to-pay threshold of $165700.08.Cost-effectiveness analyses of bosentan versus palliative therapy also suggested that bosentan was a potentially cost-effective intervention in Australia, UK, and China, which is not consistent with the results of Highland et al. [12] in the USA. ## 3.1. Studies Identified A total of 163 potential publications were identified with the search strategy used, including 119 English-language studies and 44 Chinese-language studies, among which 18 were duplicates and 131 were excluded after screening and analysis of titles and abstracts for not matching the eligibility criteria. A total of 8 articles were retrieved and analyzed (Figure1).Figure 1 Flowchart of literature search. CNKI China National Knowledge Infrastructure database, CQVIP Chongqing VIP database, and PAH pulmonary arterial hypertension. ## 3.2. Description of Identified Studies The 8 included studies are detailed in Table1. Two studies [12, 13] were conducted for the USA, and two for Canada [14, 15]. 
The remaining studies were conducted for Australia [16], the UK [17], China [18], and Italy [19]. Of the 8 studies included, five used a Markov model [12–14, 17, 18], two used an Excel model [16, 19], and one used a cost-minimization analysis [15]. Five studies were conducted from the perspective of healthcare payers: four from the perspective of public payers (e.g., the Canadian healthcare system or a national health system) [14, 15, 17, 19] and one from a third-party payer perspective [16]. In the remaining three studies [12, 13, 18], the perspective was not stated. The time horizons used in the Excel and Markov models were highly variable, ranging from one year to a lifetime. Four studies [12, 13, 15, 19] used a shorter time horizon of 1 to 3 years, while the remaining studies [14, 16–18] chose longer modeling horizons of 15 years or a lifetime.

Table 1: General characteristics of the included studies.

| References | Year published, country | Perspective | Model type | Target population | Treatment | Comparator | Cost components | Time horizon | Discount rate (%) | Source of effectiveness and safety data |
|---|---|---|---|---|---|---|---|---|---|---|
| Highland et al. [12] | 2003, USA | Unclear | Markov model | Patients with PAH | Bosentan | Epoprostenol, treprostinil | Drug, diluent, per diem, hospitalization, home health, Hickman catheter, liver function | One year | NA | Three studies |
| Garin et al. [13] | 2009, USA | Unclear | Markov model | Patients with FC III and IV PAH | Bosentan | Epoprostenol, treprostinil, iloprost, sitaxentan, ambrisentan, sildenafil | Drug, per diem, pain medications, hospitalization/clinic visit, intravenous line infections, laboratory tests | One year | NA | Two RCTs |
| Coyle et al. [14] | 2016, Canada | Healthcare system | Markov model | Patients with FC II and III PAH | Bosentan | Ambrisentan, sildenafil, tadalafil, supportive care | Drugs; monitoring/therapeutic procedures (liver function tests, pregnancy test, echocardiograms, renal function, blood work); hospital/ER/clinic visits (general practitioner, specialist, and nurse visits, hospitalizations, emergency room visits, therapeutic procedures); supportive care drugs | Lifetime | 5 | A network meta-analysis |
| Dranitsaris et al. [15] | 2009, Canada | Canadian healthcare system | Cost-minimization analysis (CMA) | Patients with FC II and III PAH | Bosentan | Ambrisentan, sitaxentan, sildenafil | Drug acquisition, medical consultations and visits, laboratory and diagnostic procedures, functional studies, other healthcare-related resources, alternative pharmacotherapy | 3 years | 3 | Nine placebo-controlled trials |
| Wlodarczyk et al. [16] | 2006, Australia | Healthcare payer | Excel model | Patients with iPAH | Bosentan | Conventional therapy | Exercise test, lung function, chest x-ray, echocardiogram, electrocardiogram, blood tests, specialist, total medical | 15 years | 5 | Two pivotal clinical trials and their long-term open-label extensions |
| Stevenson et al. [17] | 2009, UK | National Health Service | Markov model | Patients with iPAH or PAH-CTD of FC III | Bosentan | Palliative therapy | Drug acquisition, home delivery, palliative care | Lifetime | 3.5 | Two RCTs |
| Fan et al. [18] | 2016, China | Unclear | Markov model | Patients with PAH | Bosentan | Palliative therapy | Drugs, monitoring/therapeutic procedures | Lifetime | 3.5 | Patient registration and follow-up data for a charity project |
| Barbieri et al. [19] | 2014, Italy | National Health System | Excel model | Patients with FC II and III PAH | Bosentan | Ambrisentan | Drug acquisition cost, direct medical costs (visits to professionals, laboratory tests, concomitant medications, hospitalizations) | 3 years | Unclear | Two separate double-blind studies |

Note. PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; RCT: randomized controlled trial; CTD: connective tissue disease; CMA: cost-minimization analysis.

The endothelin receptor antagonist in the included studies was bosentan, and the most frequent comparators were prostanoids [12, 13], ambrisentan [13–15, 19], sildenafil [13, 14], and conventional, supportive, or palliative therapy [14, 16–18]. The majority of studies reported results as ICERs. Two studies were sponsored by Actelion Pharmaceuticals [16, 17], two were funded by GlaxoSmithKline [15, 19], one received funds from Actelion Pharmaceuticals, Encysive Pharmaceuticals, CoTherix, Gilead Sciences, United Therapeutics, and Pfizer [13], and one was funded by the Canadian Agency for Drugs and Technologies in Health (CADTH) [14]. Two studies [12, 18] did not disclose the source of funding.

## 3.3. Quality Assessment

Based on the reporting quality assessment from the CHEERS statement, most of the studies were classified as high quality [13–17, 19] and two as moderate [12, 18]. Table 2 presents whether each item in the CHEERS checklist was reported sufficiently, partially, or not at all. Two studies [12, 18] failed to report the source of funding, and five studies [12, 15–18] did not state conflicts of interest. Additionally, no study stated the setting and location, an item the checklist requires when reporting the background and objectives of economic evaluations. Moreover, the perspective was not stated in the studies of Highland et al. [12], Garin et al. [13], and Fan et al. [18]. Reasons for the choice of time horizon were not reported in three studies [12, 13, 19]. Only one study [17] performed a subgroup analysis to assess the impacts of bosentan in iPAH and PAH-CTD. No study [12–19] discussed the generalizability of the results, even though study findings and limitations were reported.

Table 2: Quality of the economic evaluations (as assessed by the CHEERS statement). Study columns: Highland et al. [12], Garin et al. [13], Coyle et al. [14], Dranitsaris et al. [15], Wlodarczyk et al. [16], Stevenson et al. [17], Fan et al. [18], Barbieri et al. [19].

| Item No. | Section/item | [12] | [13] | [14] | [15] | [16] | [17] | [18] | [19] |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Title | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | Abstract | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 3 | Background and objectives | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 4 | Target population and subgroups | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 5 | Setting and location | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | Study perspective | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 |
| 7 | Comparators | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 8 | Time horizon | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 9 | Discount rate | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 |
| 10 | Choice of health outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 11 | Measurement of effectiveness | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 12 | Measurement and valuation of preference-based outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 13 | Estimating resources and costs | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 14 | Currency, price date, and conversion | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 15 | Choice of model | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 16 | Assumptions | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 17 | Analytical methods | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 18 | Study parameters | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 19 | Incremental costs and outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 20 | Characterizing uncertainty | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 21 | Characterizing heterogeneity | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 22 | Study findings, limitations, generalizability, and current knowledge | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 23 | Source of funding | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 |
| 24 | Conflicts of interest | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
| | Overall quality | Moderate | Good | Good | Good | Good | Good | Moderate | Good |

Note. "1": meets the quality assessment criteria; "0": does not fully conform to the quality assessment criteria. CHEERS: Consolidated Health Economic Evaluation Reporting Standards.

## 3.4. Cost-Effectiveness Results of the Studies

The results of the included studies for the cost-effectiveness analysis are summarized in Table 3.

Table 3: Overview of economic evaluation outcomes of included studies.
| References | Comparison | Effectiveness/benefits | Costs (original currency; mean) | Costs (2017 US$; mean) | ICER (2017 US$ per QALY) | Threshold of ICER (per QALY) | Sensitivity or uncertainty analysis |
|---|---|---|---|---|---|---|---|
| Highland et al. [12] | (1) Bosentan vs. epoprostenol | 11 QALYs per 100 patients | $3631900 per 100 patients/yr | $4641721.88 per 100 patients/yr | Dominating | NA | Sensitivity analyses: results robust. |
| | (2) Bosentan vs. treprostinil | 11 QALYs per 100 patients | $4873800 per 100 patients/yr | $6228922.62 per 100 patients/yr | Dominating | NA | |
| Garin et al. [13] | (1) Bosentan vs. epoprostenol | 5.77 QALYs per 100 patients | $408213 per 100 patients/yr | $452508.19 per 100 patients/yr | Dominating | NA | Sensitivity analyses had minimal impact on these results. |
| | (2) Bosentan vs. treprostinil | 5.92 QALYs per 100 patients | $434684 per 100 patients/yr | $481851.56 per 100 patients/yr | $81393.84 | $50000 | |
| | (3) Bosentan vs. iloprost | 3.09 QALYs per 100 patients | $3466486 per 100 patients/yr | $3842634.40 per 100 patients/yr | Dominating | NA | |
| | (4) Bosentan vs. sitaxentan | 0.16 QALYs per 100 patients | $474 per 100 patients/yr | $525.43 per 100 patients/yr | $3283.94 | $50000 | |
| | (5) Bosentan vs. ambrisentan | 0 QALYs per 100 patients | $0 per 100 patients/yr | $0 per 100 patients/yr | $0 | NA | |
| | (6) Bosentan vs. sildenafil | 0 QALYs per 100 patients | $3153535 per 100 patients/yr | $3495725.08 per 100 patients/yr | Dominated | NA | |
| Coyle et al. [14] | (1) Bosentan vs. ambrisentan 5 mg, FC II | −0.73 QALYs per person (3.904 vs. 4.634) | Can$29095 per person (Can$406282 vs. Can$377187) | $24105.22 per person ($336604.81 vs. $312499.59) | Dominated | NA | Extensive sensitivity analyses: results robust. Probabilistic sensitivity analysis: results robust. |
| | (1) Bosentan vs. ambrisentan 5 mg, FC III | −0.22 QALYs per person (2.960 vs. 3.180) | Can$61406 per person (Can$412979 vs. Can$351573) | $50874.90 per person ($342153.27 vs. $291278.38) | Dominated | NA | |
| | (2) Bosentan vs. ambrisentan 10 mg, FC II | −0.313 QALYs per person (3.904 vs. 4.217) | Can$28759 per person (Can$406282 vs. Can$377523) | $23826.84 per person ($336604.81 vs. $312777.96) | Dominated | NA | |
| | (2) Bosentan vs. ambrisentan 10 mg, FC III | −0.083 QALYs per person (2.960 vs. 3.043) | Can$36095 per person (Can$412979 vs. Can$376884) | $29904.70 per person ($342153.27 vs. $312248.55) | Dominated | NA | |
| | (3) Bosentan vs. sildenafil, FC II | −0.7593 QALYs per person (3.904 vs. 4.663) | Can$260028 per person (Can$406282 vs. Can$146254) | $215433.31 per person ($336604.81 vs. $121171.50) | Dominated | NA | |
| | (3) Bosentan vs. sildenafil, FC III | −0.324 QALYs per person (2.960 vs. 3.284) | Can$231860 per person (Can$412979 vs. Can$181119) | $192096.11 per person ($342153.27 vs. $150057.17) | Dominated | NA | |
| | (4) Bosentan vs. tadalafil, FC II | −0.098 QALYs per person (3.904 vs. 4.002) | Can$253037 per person (Can$406282 vs. Can$153245) | $209641.26 per person ($336604.81 vs. $126963.55) | Dominated | NA | |
| | (4) Bosentan vs. tadalafil, FC III | −0.053 QALYs per person (2.960 vs. 3.013) | Can$212395 per person (Can$412979 vs. Can$200584) | $175969.35 per person ($342153.27 vs. $166183.93) | Dominated | NA | |
| | (5) Bosentan vs. supportive care, FC II | 0.686 QALYs per person (3.904 vs. 3.128) | Can$251126 per person (Can$406282 vs. Can$155156) | $208058.00 per person ($336604.81 vs. $128815.82) | $303291.55 | $165700.08 (Can$200000) | |
| | (5) Bosentan vs. supportive care, FC III | 0.273 QALYs per person (2.960 vs. 2.687) | Can$208694 per person (Can$412979 vs. Can$204285) | $172903.07 per person ($342153.27 vs. $169250.20) | $633344.58 | $165700.08 (Can$200000) | |
| Dranitsaris et al. [15] | (1) Bosentan vs. ambrisentan | NA | Can$16302 per patient (Can$164745 vs. Can$148443) | $14956.40 per patient ($151146.60 vs. $136190.20) | NA | NA | One-way sensitivity analysis: results sensitive to sildenafil dose, ambrisentan daily drug cost, and bosentan daily drug cost. |
| | (2) Bosentan vs. sitaxentan | NA | Can$6307 per patient (Can$164745 vs. Can$158444) | $5786.41 per patient ($151146.60 vs. $145365.70) | NA | NA | |
| | (3) Bosentan vs. sildenafil | NA | Can$116394 per patient (Can$164745 vs. Can$48351) | $106786.59 per patient ($151146.60 vs. $44360.01) | NA | NA | |
| Wlodarczyk et al. [16] | Bosentan vs. conventional care, at 5 years | 1.39 life-years | A$116929 per patient | $101787.70 per patient | $73228.56 | $41928.72 (A$60000) | One-way sensitivity analysis: removing the PBS continuation rules from the model, halving the annual mortality rate in patients treated with conventional therapy, and changing mortality and hospitalization RR affected the results. |
| | At 10 years | 2.93 life-years | A$181808 per patient | $158265.43 per patient | $54015.51 | $41928.72 (A$60000) | |
| | At 15 years | 3.87 life-years | A$216331 per patient | $188318.00 per patient | $48660.98 | $41928.72 (A$60000) | |
| Stevenson et al. [17] | Bosentan vs. palliative therapy, iPAH | 0.37 QALYs per patient (3.32 vs. 2.95) | £69000 per patient (£134000 vs. £203000) | $111639.98 per patient ($216808.07 vs. $328448.05) | Dominating | NA | The results were similar in both the deterministic and probabilistic analyses. |
| | Bosentan vs. palliative therapy, PAH-CTD | 0.15 QALYs per patient (1.36 vs. 1.21) | £32000 per patient (£62000 vs. £94000) | $51775.06 per patient ($100314.18 vs. $152089.25) | Dominating | NA | |
| Fan et al. [18] | Bosentan vs. palliative therapy | 6.19 QALYs per person (7.23 vs. 1.04) | ¥439046.77 per patient (¥504293.75 vs. ¥65246.98) | $125227.26 per patient ($143837.35 vs. $18610.09) | $20230.58 | $39815.46 (¥139593) | Sensitivity analyses: results robust. |
| Barbieri et al. [19] | Bosentan vs. ambrisentan | NA | €1112145 (€87594291 vs. €86482146) | $1184990.49 ($93331717.06 vs. $92146726.56) | NA | NA | The sensitivity analysis corroborated the base case findings. |

Note. Values in parentheses are treatment vs. comparator totals. "Dominating" denotes bosentan producing more QALYs at a lower cost, whereas "dominated" denotes bosentan producing fewer QALYs at a higher cost. ICER: incremental cost-effectiveness ratio; QALY: quality-adjusted life-year; yr: year; PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; CTD: connective tissue disease.

### 3.4.1. Bosentan versus Prostanoids

Two studies [12, 13] conducted in the USA provided economic evaluation data for bosentan versus prostanoids (epoprostenol, treprostinil, and iloprost). In 2003, Highland et al. [12] developed a Markov model to compare the cost-effectiveness of bosentan, epoprostenol, and treprostinil in treating PAH. The study reported that bosentan was more cost-effective than either epoprostenol or treprostinil, with lower costs (a cost saving of $46417.21 or $62289.22 per patient per year, respectively; adjusted to the year 2017 value) and a greater gain in quality-adjusted life-years (QALYs; 0.11 more QALYs gained) per patient. Garin et al. [13] updated the Markov model of Highland et al. [12] in 2009 and found bosentan to be dominant (lower costs and greater QALYs) relative to epoprostenol and iloprost. Additionally, in contrast to the findings of Highland et al. [12], Garin et al.
[13] found that treatment with treprostinil resulted in an average annual saving of $4818.51 (adjusted to the year 2017 value) when used as an alternative to bosentan; the resulting ICER of bosentan therapy was $81393.84 per QALY gained (adjusted to the year 2017 value), greater than the cost-effectiveness threshold of $50000 per QALY in the USA.

### 3.4.2. Bosentan versus Other Endothelin Receptor Antagonists

Four studies [13–15, 19] compared bosentan with other available endothelin receptor antagonists, including ambrisentan and sitaxentan. Two studies [13, 14] used cost-effectiveness analysis, while the other two used cost-minimization analysis [15] and budget impact analysis [19], respectively. Coyle et al. [14] estimated that, as first-line medications, both 5 mg and 10 mg ambrisentan provided more QALYs at lower cost than bosentan for patients with either FC II or III PAH. Garin et al. [13] found that the cost-effectiveness of bosentan was similar to that of ambrisentan. Substituting sitaxentan for bosentan, however, yielded an annual cost saving of $525.43 per 100 patients (adjusted to the year 2017 value), and the ICER of bosentan relative to sitaxentan was $3283.94 per QALY (adjusted to the year 2017 value), which is within the threshold of acceptability in the USA. Dranitsaris et al. [15], who built a population-based CMA model to evaluate 3 years of pharmacotherapy, revealed that bosentan would be associated with higher costs of $14956.40 and $5786.41 (adjusted to the year 2017 value) when used as an alternative to ambrisentan or sitaxentan, respectively. Moreover, the budget impact analysis reported by Barbieri et al. [19] demonstrated that the use of ambrisentan instead of bosentan for eligible patients might result in savings of about $1.1 million (adjusted to the year 2017 value) over a 3-year time horizon in Italy.

### 3.4.3. Bosentan versus Sildenafil

Two studies [13, 14] provided economic evaluation data for bosentan versus sildenafil in patients with PAH in the USA and Canada. Garin et al. [13] built a Markov-type model to evaluate 1-year treatment; the economic and sensitivity analyses indicated that treatment with bosentan resulted in the same gain in QALYs as sildenafil, but at a higher cost. In another study [14], a cost-utility analysis suggested that sildenafil was less costly (FC II $121171.50 versus $336604.81; FC III $150057.17 versus $342153.27; adjusted to the year 2017 value) and more effective (FC II 4.663 QALYs versus 3.904 QALYs; FC III 3.284 QALYs versus 2.960 QALYs) than bosentan in PAH patients with either FC II or III disease. These results suggest that initiating treatment with sildenafil is likely the most economical option.

### 3.4.4. Bosentan versus Conventional, Supportive, or Palliative Therapy

Four studies [14, 16–18] conducted in Canada, Australia, the UK, and China compared bosentan with conventional, supportive, or palliative therapy. Wlodarczyk et al. [16] assessed an average cost of $204237.00 (adjusted to the year 2017 value) per patient for bosentan and $15918.99 (adjusted to the year 2017 value) for conventional therapy alone in Australia; bosentan was associated with an incremental life expectancy of 3.87 years over conventional therapy, with an ICER of $48660.98 per life-year (LY) gained over a 15-year time horizon. Stevenson et al. [17] indicated that bosentan was likely to be a cost-effective first-line therapy, over a lifetime horizon, for UK patients with iPAH or PAH-CTD in FC III compared with palliative care, with lower costs (iPAH $216808.07 versus $328448.05 per patient; PAH-CTD $100314.18 versus $152089.25 per patient; adjusted to the year 2017 value) and better QALY outcomes (iPAH 3.32 versus 2.95; PAH-CTD 1.36 versus 1.21). The study by Fan et al.
[18] showed that the utility value, which represented the health-related quality of life, was 7.23 QALYs treated with bosentan therapy and 1.04 QALYs with palliative therapy, respectively. Bosentan was associated with an incremental gain of 6.19 QALYs over palliative therapy. The estimated cost per patient was $143837.35 for patients treated with bosentan and $18610.09 for those given palliative therapy, a cost increase of $125227.26 per patient. The incremental cost-utility of bosentan relative to palliative therapy was $20230.58, which was less than one half gross domestic product (GDP) in China.In comparison, in the study of Coyle et al. [14], bosentan did not show a conclusive effect on cost-effectiveness. The study conducted in Canada showed that the ICER of bosentan versus supportive care in both patients with FC II and III diseases remained $303291.55 or $633344.58, which was higher than the willingness-to-pay threshold of $165700.08.Cost-effectiveness analyses of bosentan versus palliative therapy also suggested that bosentan was a potentially cost-effective intervention in Australia, UK, and China, which is not consistent with the results of Highland et al. [12] in the USA. ## 3.4.1. Bosentan versus Prostanoids Two studies [12, 13] conducted in the USA provided economic evaluation data for bosentan versus prostanoids (epoprostenol, treprostinil, and iloprost). Highland et al. [12] developed a Markov model to compare the cost-effectiveness of bosentan, epoprostenol, and treprostinil in treating PAH in 2003. These studies reported that bosentan was more cost-effective than either epoprostenol or treprostinil, with lower costs (a cost saving of $46417.21 or $62289.22, respectively. Adjusted to the year 2017 value) and a greater gain in quality-adjusted life-years (QALYs; 0.11 more QALYs gained) per patient. Garin et al. [13] improved the research by Highland et al. [12] in the updated Markov model in 2009. 
Bosentan was found to be dominant (lower costs and greater QALY) relative to other medications epoprostenol and iloprost. Additionally, in contrast to the findings by Highland et al. [12], Garin et al. [13] found that treatment with treprostinil resulted in an average annual savings of $4818.51 (adjusted to the year 2017 value) when used as an alternative to bosentan, with an ICER of $81393.84 per QALYs (adjusted to the year 2017 value) gained. The ICER of bosentan therapy was $81393.84 per QALY, adjusted to the year 2017 value, greater than the cost-effectiveness threshold of $50000 per QALY in the USA. ## 3.4.2. Bosentan versus Other Endothelin Receptor Antagonists Four studies [13–15, 19] compared bosentan with other available endothelin receptor antagonists, including ambrisentan and sitaxentan. Two studies [13, 14] used the cost-effectiveness analysis, while the other two used the cost-minimization analysis [15] and the budget impact analysis [19], respectively.Coyle et al. [14] estimated that, as first-line medications, both 5 mg and 10 mg ambrisentan provided more QALYs and cost-saving values than bosentan for patients with either FCII or III PAH. Garin et al. [13] found that the cost-effective value of bosentan was similar to that of ambrisentan. However, sitaxentan, as an alternative to bosentan, its annual cost saving $525.43 per 100 patients (adjusted to the year 2017 value) and the ICER was $3283.94 per QALYs (adjusted to the year 2017 value) which is within the threshold of acceptability in the USA. In the study of Dranitsaris et al. [15], which built a population-based CMA model to evaluate 3 years pharmacotherapy, revealed that bosentan would be associated with higher costs of $14956.40 and $5786.41 (adjusted to the year 2017 value) when used as an alternative to ambrisentan or sitaxentan, respectively. Moreover, the budget impact analysis reported by Barbieri et al. 
[19] demonstrated that the use of ambrisentan instead of bosentan for eligible patients might result in savings of about $1.1 million (adjusted to the year 2017 value) over a 3-year time horizon in Italy.

## 3.4.3. Bosentan versus Sildenafil

Two studies [13, 14] provided economic evaluation data for bosentan versus sildenafil in patients with PAH in the USA and Canada. Garin et al. [13] built a Markov-type model to evaluate 1 year of treatment; the economic analysis and sensitivity analysis indicated that treatment with bosentan resulted in the same gain in QALYs as sildenafil, but at a higher cost. In another study [14], a cost-utility analysis suggested that sildenafil was less costly (FC II $121171.50 versus $336604.81; FC III $150057.17 versus $342153.27, adjusted to the year 2017 value) and more effective (FC II 4.663 versus 3.904 QALYs; FC III 3.284 versus 2.960 QALYs) than bosentan in PAH patients with either FC II or FC III disease. These results therefore indicate that initiating treatment with sildenafil is likely the most economical option.

## 3.4.4. Bosentan versus Conventional, Supportive, or Palliative Therapy

Four studies [14, 16–18] conducted in Canada, Australia, the UK, and China compared bosentan with conventional, supportive, or palliative therapy. Wlodarczyk et al. [16] estimated an average cost of $204237.00 (adjusted to the year 2017 value) per patient for bosentan and $15918.99 (adjusted to the year 2017 value) for conventional therapy alone in Australia; bosentan was associated with an incremental life expectancy of 3.87 years over palliative therapy, with an ICER of $48660.98 per life-year (LY) gained over a 15-year time horizon. Stevenson et al.
[17] indicated that bosentan was likely to be a more cost-effective first-line therapy than palliative care for UK patients with iPAH or PAH-CTD in FC III over a lifetime horizon, being less costly (iPAH $216808.07 versus $328448.05 per patient; PAH-CTD $100314.18 versus $152089.25 per patient, adjusted to the year 2017 value) with better outcomes in QALYs (iPAH 3.32 versus 2.95; PAH-CTD 1.36 versus 1.21). The study by Fan et al. [18] showed that the utility value, representing health-related quality of life, was 7.23 QALYs with bosentan therapy and 1.04 QALYs with palliative therapy. Bosentan was thus associated with an incremental gain of 6.19 QALYs over palliative therapy. The estimated cost per patient was $143837.35 for patients treated with bosentan and $18610.09 for those given palliative therapy, a cost increase of $125227.26 per patient. The incremental cost-utility ratio of bosentan relative to palliative therapy was $20230.58 per QALY, which was less than one half of the gross domestic product (GDP) in China. In comparison, in the study by Coyle et al. [14], bosentan did not show a conclusive cost-effectiveness benefit. That study, conducted in Canada, showed that the ICER of bosentan versus supportive care in patients with FC II and FC III disease was $303291.55 or $633344.58, respectively, higher than the willingness-to-pay threshold of $165700.08.

Cost-effectiveness analyses of bosentan versus palliative therapy thus suggested that bosentan was a potentially cost-effective intervention in Australia, the UK, and China, which is not consistent with the results of Highland et al. [12] in the USA.

## 4. Discussion

### 4.1. Summary of Evidence

PAH is a chronic, progressive, devastating disease with no cure, which may impose a significant medical and financial burden on patients' families. The aim of this systematic review of published studies was to evaluate the costs and cost-effectiveness of bosentan for PAH.
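All of the incremental ratios quoted in this review follow the same arithmetic: the difference in costs between two strategies divided by the difference in health outcomes. As a minimal illustrative sketch (not part of any of the original analyses; the helper name is our own), the figures reported by Fan et al. [18] reproduce the stated cost-utility ratio:

```python
def icer(cost_new: float, cost_ref: float,
         effect_new: float, effect_ref: float) -> float:
    """Incremental cost-effectiveness (or cost-utility) ratio:
    extra cost per extra unit of effect (here, $ per QALY)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Fan et al. [18]: bosentan ($143837.35, 7.23 QALYs) versus palliative
# therapy ($18610.09, 1.04 QALYs), costs adjusted to 2017 US$.
ratio = icer(143837.35, 18610.09, 7.23, 1.04)
print(round(ratio, 2))  # ≈ 20230.58 $ per QALY, the reported ICUR
```

A ratio is then compared against a country-specific willingness-to-pay threshold to judge cost-effectiveness, which is why the same drug can be "cost-effective" in one setting and not in another.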
We used the thresholds stated in the included studies if applicable. Otherwise, we searched the literature to identify appropriate and accepted thresholds used in the relevant countries to determine whether the ICERs of bosentan were below such criteria, and in turn to evaluate whether bosentan appeared to provide good value for money for PAH.

In our analysis, two Markov model-based cost-effectiveness analyses of bosentan, epoprostenol, and treprostinil in treating PAH were published in the USA [12, 13]. Both studies indicated that bosentan was more cost-effective than epoprostenol, while the results for bosentan versus treprostinil were not consistent. The differences between bosentan and treprostinil might be ascribed to different methodological approaches, such as the model structure, time horizon, measurement of costs, and health utilities.

According to the four model results, in PAH patients for whom an ERA was the preferred agent, ambrisentan might be the drug of choice because of its economic advantages and improved safety profile [13, 14, 16, 19].

Two studies investigated the cost-effectiveness of bosentan versus sildenafil, and sildenafil was found to be more cost-effective, especially for adult patients with FC II and III PAH [13, 14].

Among the four studies [14, 16–18] that compared bosentan with conventional, supportive, or palliative therapy, three [16–18] concluded that bosentan seemed to be more favorable. In comparison, Coyle et al. [14] indicated that long-term, head-to-head research must be conducted to evaluate the cost-effectiveness of bosentan and supportive care before making any recommendations in Canada. That study differed from the current analysis in that it was not designed to allow a direct comparison of the cost-effectiveness of PAH treatments relative to each other, but only relative to supportive care.

### 4.2. Quality of Evidence

With regard to the quality of reporting of these economic evaluations, despite the fact that guidelines for conducting health economic evaluations have been widely available for many years, we observed that the quality of reporting was still insufficient in several articles. We hope that the availability of the CHEERS statement will lead to improvements in the reporting, and hence the quality, of economic evaluations of PAH.

In terms of characterizing heterogeneity, subgroups of patients' diverse baseline characteristics and other variables could potentially contribute to variation in the interpretation of our review results. In this review, only one study [17] performed a subgroup analysis of iPAH and PAH-CTD. In fact, subgroup analyses are important because factors such as disease severity, gender, and body mass index (BMI) may also affect the prognosis of PAH and exert a direct effect on the budget impact of a new therapy [20, 21].

In addition, it is noteworthy that several articles included in this study received grants from pharmaceutical companies, which might introduce potential bias into cost-effectiveness evaluations. Although pharmaceutical industry-funded research could result in biases in cost-effectiveness analyses, no guidance currently exists on how to evaluate this bias.

### 4.3. Key Drivers of Cost-Effectiveness

In line with prior studies, some key drivers of cost-effectiveness were identified in our review. First, the choice of comparator is highly important. The cost-effectiveness of a drug therapy can differ according to the selected comparator. For example, bosentan was shown to be dominant relative to epoprostenol, while its cost-effectiveness was less favorable when sildenafil or ambrisentan was used as the comparator.
Therefore, one of the most important structural choices of a cost-effectiveness analysis is the choice of comparator.

Furthermore, the included analyses were largely country-specific, since healthcare systems and reimbursement policies differ between countries and can therefore have a significant influence on the results and final conclusions of economic evaluations. Future assessments should use approaches such as alternative ways of specifying multilevel models for analyzing cost-effectiveness data and identification of a range of appropriate covariates to handle assumptions and uncertainties in economic evaluation results, which would improve the generalizability and transferability of studies across settings [22].

### 4.4. Strengths and Limitations

To the best of our knowledge, this is the first systematic review of published studies to examine the cost-saving or cost-effective properties of bosentan for PAH patients. Contrary to previous systematic and narrative reviews, which were outdated or restricted to a specific comparator, this is the most comprehensive review, incorporating economic evaluations over an extended period of time, with quality assessed using a validated instrument.

Although this review was conducted using explicit, systematic methods selected to minimize bias, several limitations that affect the conclusions should be taken into consideration when interpreting the results.

First, given the disparity in the methods used across existing economic evaluations, it is extremely difficult to synthesize the studies into a coherent whole. Studies would have to be adjusted to achieve standardized results, but this is rarely achievable because of the diverse nature of the elements considered, including different types of models, perspectives, time horizons, and healthcare systems. Such differences are likely to have important impacts on model inputs such as costs and health utilities.
Therefore, we summarized the evidence qualitatively, and the results should be interpreted with caution.

Second, the trial populations used in the pharmacoeconomic models may not represent the entire PAH patient population in the real-world setting. The prevalence of mortalities and comorbidities seems to be relatively low in most studies. For example, Fan et al. developed their Markov model based on an Australian and New Zealand patient population to analyze the annual mortality rates in Chinese PAH patients, which might have led to an inaccurate extrapolation of results, given the study populations.

Third, some relevant studies may have been overlooked in our review, especially those not published in English or Chinese. Similarly, we did not formally assess potential publication bias that may have occurred due to the lack of inclusion of unpublished studies (e.g., industry-sponsored evaluations), which may have had unfavorable findings.

## 5. Conclusions

Evidence produced by economic evaluations in general, and in the PAH field in particular, has the potential to inform clinical and reimbursement decision-making. Based on the available evidence, we conclude that the administration of bosentan for PAH appears to be a more cost-effective alternative compared with epoprostenol and conventional or palliative therapy. There was unanimous agreement that bosentan was not a cost-effective front-line therapy compared with sildenafil and other endothelin receptor antagonists. Future research investigating ways to improve the quality of reporting of economic evaluations is therefore warranted.

---
# Cost Effectiveness of Bosentan for Pulmonary Arterial Hypertension: A Systematic Review

**Authors:** Ruxu You; Xinyu Qian; Weijing Tang; Tian Xie; Fang Zeng; Jun Chen; Yu Zhang; Jinyu Liu

**Journal:** Canadian Respiratory Journal (2018)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2018/1015239
---

## Abstract

Objectives. Although many studies have reported on the cost-effectiveness of bosentan for treating pulmonary arterial hypertension (PAH), a systematic review of economic evaluations of bosentan is currently lacking. Objective evaluation of the current pharmacoeconomic evidence can assist decision makers in determining the appropriate place in therapy of a new medication. Methods. Systematic literature searches were conducted in English-language databases (MEDLINE, EMBASE, EconLit, and the Cochrane Library) and Chinese-language databases (China National Knowledge Infrastructure, WanFang Data, and Chongqing VIP) to identify studies assessing the cost-effectiveness of bosentan for PAH treatment. Results. A total of 8 published studies were selected for inclusion. Among them were two studies comparing bosentan with epoprostenol and treprostinil; both indicated that bosentan was more cost-effective than epoprostenol, while the results for bosentan versus treprostinil were not consistent. Four studies compared bosentan with other endothelin receptor antagonists and indicated that ambrisentan might be the drug of choice for its economic advantages and improved safety profile. Only two economic evaluations provided data comparing bosentan with sildenafil, and the results favored the use of sildenafil in PAH patients. Four studies compared bosentan with conventional, supportive, or palliative therapy, and whether bosentan was cost-effective remained uncertain. Conclusions. Bosentan may represent a more cost-effective option compared with epoprostenol and conventional or palliative therapy. There was unanimous agreement that bosentan was not a cost-effective front-line therapy compared with sildenafil and other endothelin receptor antagonists. However, high-quality cost-effectiveness analyses that utilize long-term follow-up data and have no conflicts of interest are still needed.

---

## Body

## 1. Introduction

Pulmonary arterial hypertension (PAH) is a relatively rare but life-threatening disease characterized by elevated arterial blood pressure in the pulmonary circulation that, when left untreated, results in right ventricular failure and death. The diagnosis is based on pressure measurements obtained by right heart catheterization and is defined as a mean pulmonary artery pressure of at least 25 mmHg, a pulmonary artery wedge pressure of not more than 15 mmHg, and a pulmonary vascular resistance (PVR) of at least 3 Wood units [1]. The pathological changes of PAH include lesions in distal pulmonary arteries, medial hypertrophy, intimal proliferative and fibrotic changes, and adventitial thickening with perivascular inflammatory infiltrates. Vasoconstriction, endothelial dysfunction, dysregulated smooth muscle cell growth, inflammation, and thrombosis are contributory mechanisms of disease progression [2]. A modified New York Heart Association (NYHA) functional classification system was adopted by the World Health Organization (WHO) in 1998 to facilitate evaluation of patients with PAH. Patients may have functional class (FC) I through IV, with increasing numbers reflecting increased severity.

Although PAH affects males and females of all ethnicities and ages, the disease is more common in women aged between 20 and 40 years [3]. The prevalence of PAH has been reported to be between 15 and 50 cases per million population [4].
Currently, there is no cure for PAH, but overall median survival has improved dramatically over the past years (from 2.8 to 7 years in the aforementioned American registry) [5, 6], presumably due to a combination of significant advances in treatment strategies and patient-support strategies.

Throughout the past 20 years, numerous specific pharmacological agents have been approved for the treatment of PAH, including prostacyclin pathway agonists (intravenous prostacyclin, synthetic analogs of prostacyclin, and nonprostanoid prostacyclin receptor agonists), endothelin receptor antagonists (ERAs), phosphodiesterase type-5 inhibitors (PDE-5Is), and the first soluble guanylate cyclase (sGC) stimulator (riociguat) [7]. As more novel therapies for PAH enter the market, it is necessary to evaluate their impact on both economic and long-term health outcomes. Considering the limited availability of healthcare resources in the management of PAH, health technology assessment is increasingly important to determine whether treatments represent good value for money, as PAH is not only associated with morbidity, mortality, and overall reduced quality of life but also leads to increased healthcare expenditure [8].

Bosentan is a dual endothelin receptor antagonist and the first oral agent available in China for the treatment of PAH. Since the development of bosentan, the number of papers and articles focused on its efficacy, short- and long-term costs, and cost-effectiveness has increased massively, providing scientific evidence for a deeper understanding of the therapy. Despite the potential benefit of this targeted agent in the treatment of PAH, its application is discussed controversially due to its high price. Hence, it is necessary to assess the economic impact of the use of this agent in PAH.

The objective of this article is, therefore, to review and assess the economic evidence of treatment with the targeted agent bosentan in PAH.
The review was also conducted to provide insight into key drivers of cost-effectiveness ratios and to help healthcare decision-makers, patients, and health systems leaders make well-informed decisions.

## 2. Methods

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines by Moher et al. were followed for the review and reporting procedures [9].

### 2.1. Eligibility Criteria

To be included in this systematic review, articles had to meet the following criteria: (1) they were identified as a full economic evaluation, examined costs and their consequences, and reported incremental cost-effectiveness ratios (ICERs) or incremental cost-utility ratios (ICURs); (2) they included a bosentan intervention, whether as monotherapy or combination therapy; and (3) they were available in complete full-text format. Articles were excluded if they were systematic reviews, expert opinions, comments (commentaries), methodological articles, or conference abstracts and proceedings.

### 2.2. Literature Search

We conducted a systematic literature search to identify all relevant studies estimating the cost-effectiveness of PAH therapies published between 1 January 2000 and 30 June 2017. The following databases were searched: MEDLINE (PubMed), EMBASE (Ovid), EconLit, and the Cochrane Library for English-language studies; and China National Knowledge Infrastructure (CNKI), Wanfang Data, and Chongqing VIP (CQVIP) for Chinese-language studies. The literature search algorithm is detailed in Appendix S1.

### 2.3. Study Selection

The titles and abstracts were screened for eligibility by two independent authors. Full-text copies of all potentially relevant articles were obtained and reviewed to determine whether they met the prespecified inclusion criteria. Disagreements were resolved by consensus through discussion.

### 2.4. Data Collection

Data on study and patient characteristics as well as relevant outcomes were extracted using a standardized data extraction form, including general information on the article (e.g., authors and publication year), characteristics of the study (e.g., design and sample size), type of economic evaluation, study objective, description of the intervention and comparators, measure of benefit, cost data and their sources, methods for dealing with uncertainty, and cost and outcome results.

### 2.5. Quality Assessment

The quality of reporting of all included studies was appraised using the 24-item Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Each item in the CHEERS checklist was scored as having been met in full ("1"), not at all ("0"), or not applicable (NA). Quality assessment was performed by two authors, and the remaining authors resolved conflicts through discussion and consensus. Studies with a score higher than 75% were categorized as good, studies in the range 50%–74% as moderate, and studies with scores lower than 50% as low [10].

### 2.6. Data Synthesis

A narrative synthesis was used to summarize and evaluate the aims, methods, settings, and results of the studies reviewed. When possible, information was compared across studies regarding the modeling technique, the cost perspective, the measures of benefit used, and incremental cost-effectiveness ratios. Cost/charge data are presented in US$ for the common price year 2017 using the "CCEMG-EPPI-Centre Cost Converter" Version 1.5 [11], a web-based tool that can be used to adjust an estimate of cost expressed in one currency and price year to a target currency and/or price year.

## 3. Results

### 3.1. Studies Identified

A total of 163 potential publications were identified with the search strategy, including 119 English-language and 44 Chinese-language studies, among which 18 were duplicates and 131 were excluded after screening and analysis of titles and abstracts for not matching the eligibility criteria. A total of 8 articles were retrieved and analyzed (Figure 1).

Figure 1: Flowchart of literature search. CNKI: China National Knowledge Infrastructure database; CQVIP: Chongqing VIP database; PAH: pulmonary arterial hypertension.

### 3.2. Description of Identified Studies

The 8 included studies are detailed in Table 1. Two studies [12, 13] were conducted for the USA, and two for Canada [14, 15].
The remaining studies were conducted for Australia [16], the UK [17], China [18], and Italy [19]. Of the 8 included studies, five used a Markov model [12–14, 17, 18], two used an Excel model [16, 19], and one used a cost-minimization analysis [15]. Five studies were conducted from the perspective of healthcare payers, of which four were performed from the perspective of public payers (e.g., the Canadian healthcare system or a national health system) [14, 15, 17, 19], while one study used a third-party payer perspective [16]. In three studies [12, 13, 18] the perspective was not stated. The time horizons used in the Excel and Markov models were highly variable, ranging from one year to a lifetime. Four studies [12, 13, 15, 19] used shorter time horizons of one or three years, while the remaining studies [14, 16–18] chose longer modeling horizons of 15 years or a lifetime.

Table 1: General characteristics of the included studies.

| References | Year published, country | Perspective | Model type | Target population | Treatment | Comparator | Cost components | Time horizon | Discount rate (%) | Source of effectiveness and safety data |
|---|---|---|---|---|---|---|---|---|---|---|
| Highland et al. [12] | 2003, USA | Unclear | Markov model | Patients with PAH | Bosentan | Epoprostenol, treprostinil | Drug, diluent, per diem, hospitalization, home health, Hickman catheter, liver function | One year | NA | Three studies |
| Garin et al. [13] | 2009, USA | Unclear | Markov model | Patients with FC III and IV PAH | Bosentan | Epoprostenol, treprostinil, iloprost, sitaxentan, ambrisentan, sildenafil | Drug, per diem, pain medications, hospitalization/clinic visit, intravenous line infections, laboratory tests | One year | NA | Two RCTs |
| Coyle et al. [14] | 2016, Canada | Healthcare system | Markov model | Patients with FC II and III PAH | Bosentan | Ambrisentan, sildenafil, tadalafil, supportive care | Drugs; monitoring/therapeutic procedures (liver function tests, pregnancy test, echocardiograms, renal function, blood work); hospital/ER/clinic visits (general practitioner, specialist, and nurse visits, hospitalizations, emergency room visits, therapeutic procedures); supportive care drugs | Lifetime | 5 | A network meta-analysis |
| Dranitsaris et al. [15] | 2009, Canada | Canadian healthcare system | Cost-minimization analysis (CMA) | Patients with FC II and III PAH | Bosentan | Ambrisentan, sitaxentan, sildenafil | Drug acquisition; medical consultations and visits; laboratory and diagnostic procedures; functional studies; other healthcare-related resources; alternative pharmacotherapy | 3 years | 3 | Nine placebo-controlled trials |
| Wlodarczyk et al. [16] | 2006, Australia | Healthcare payer | Excel model | Patients with iPAH | Bosentan | Conventional therapy | Exercise test, lung function, chest x-ray, echocardiogram, electrocardiogram, blood tests, specialist, total medical | 15 years | 5 | Two pivotal clinical trials and their long-term open-label extensions |
| Stevenson et al. [17] | 2009, UK | National Health Service | Markov model | Patients with iPAH or PAH-CTD of FC III | Bosentan | Palliative therapy | Drug acquisition, home delivery, palliative care | Lifetime | 3.5 | Two RCTs |
| Fan et al. [18] | 2016, China | Unclear | Markov model | Patients with PAH | Bosentan | Palliative therapy | Drugs, monitoring/therapeutic procedures | Lifetime | 3.5 | Patient registration and follow-up data for a charity project |
| Barbieri et al. [19] | 2014, Italy | National Health System | Excel model | Patients with FC II and III PAH | Bosentan | Ambrisentan | Drug acquisition cost; direct medical costs (visits to professionals, laboratory tests, concomitant medications, hospitalizations) | 3 years | Unclear | Two separate double-blind studies |

Note. PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; RCT: randomized controlled trial; CTD: connective tissue disease; CMA: cost-minimization analysis.

The endothelin receptor antagonist in all included studies was bosentan, and the most frequent comparators were prostanoids [12, 13], ambrisentan [13–15, 19], sildenafil [13, 14], and conventional, supportive, or palliative therapy [14, 16–18]. The majority of studies reported results as ICERs. Two studies were sponsored by Actelion Pharmaceuticals [16, 17], two were funded by GlaxoSmithKline [15, 19], one received funds from Actelion Pharmaceuticals, Encysive Pharmaceuticals, CoTherix, Gilead Sciences, United Therapeutics, and Pfizer [13], and one was funded by the Canadian Agency for Drugs and Technologies in Health (CADTH) [14]. Two studies [12, 18] did not disclose their source of funding.

### 3.3. Quality Assessment

Based on the reporting quality assessment against the CHEERS statement, six studies were classified as good quality [13–17, 19] and two as moderate [12, 18]. Table 2 presents, for each study, which items of the CHEERS checklist were reported sufficiently, partially, or not at all. Two studies [12, 18] failed to report the source of funding, and conflicts of interest were not stated in five studies [12, 15–18]. Additionally, no study stated the setting and location, an item the checklist requires when reporting the background and objectives of economic evaluations. Moreover, the study perspective was not stated by Highland et al. [12], Garin et al. [13], or Fan et al. [18]. The choice of discount rate was not reported in three studies [12, 13, 19]. Only one study [17] performed a subgroup analysis to assess the impact of bosentan in iPAH and PAH-CTD.
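The CHEERS scoring rule described in the methods (percentage of applicable items met: above 75% good, 50%–74% moderate, below 50% low, with NA items excluded from the denominator) can be sketched in a few lines. The function and the example scores below are illustrative only; they are not the review's actual data extraction:

```python
def cheers_category(item_scores: list) -> str:
    """Classify reporting quality from the 24 CHEERS item scores.

    Each score is 1 (met in full), 0 (not met), or None (not applicable).
    NA items are excluded from the denominator before computing the
    percentage, mirroring the review's scoring rule.
    """
    applicable = [s for s in item_scores if s is not None]
    pct = 100.0 * sum(applicable) / len(applicable)
    if pct > 75:
        return "good"
    elif pct >= 50:
        return "moderate"
    return "low"

# Illustrative example: 20 of 24 applicable items met -> ~83% -> "good"
scores = [1] * 20 + [0] * 4
print(cheers_category(scores))  # -> good
```

A study scoring exactly at a band boundary (e.g., 75%) is treated here as the lower band; the review's bands leave that edge case unspecified.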
No study [12–19] discussed the generalizability of its results, even though all reported study findings and limitations.

Table 2: Quality of the economic evaluations (as assessed by the CHEERS statement).

| Item No. | Section/item | Highland et al. [12] | Garin et al. [13] | Coyle et al. [14] | Dranitsaris et al. [15] | Wlodarczyk et al. [16] | Stevenson et al. [17] | Fan et al. [18] | Barbieri et al. [19] |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Title | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | Abstract | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 3 | Background and objectives | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 4 | Target population and subgroups | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 5 | Setting and location | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | Study perspective | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 |
| 7 | Comparators | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 8 | Time horizon | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 9 | Discount rate | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 |
| 10 | Choice of health outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 11 | Measurement of effectiveness | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 12 | Measurement and valuation of preference-based outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 13 | Estimating resources and costs | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 14 | Currency, price date, and conversion | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 15 | Choice of model | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 16 | Assumptions | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 17 | Analytical methods | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 18 | Study parameters | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 19 | Incremental costs and outcomes | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 20 | Characterizing uncertainty | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 21 | Characterizing heterogeneity | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 22 | Study findings, limitations, generalizability, and current knowledge | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 23 | Source of funding | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 |
| 24 | Conflicts of interest | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
| — | Overall quality | Moderate | Good | Good | Good | Good | Good | Moderate | Good |

Note. "1": meets the quality assessment criterion; "0": does not fully conform to the quality assessment criterion; CHEERS: Consolidated Health Economic Evaluation Reporting Standards.

### 3.4. Cost-Effectiveness Results of the Studies

The cost-effectiveness results of the included studies are summarized in Table 3.

Table 3: Overview of economic evaluation outcomes of the included studies.
| References | Comparison | Incremental effectiveness | Incremental costs (original currency; mean) | Incremental costs (2017 US$; mean) | ICER (2017 US$ per QALY) | ICER threshold (per QALY) | Sensitivity or uncertainty analysis |
|---|---|---|---|---|---|---|---|
| Highland et al. [12] | (1) Bosentan vs. epoprostenol | 11 QALYs per 100 patients | $3631900 per 100 patients/yr | $4641721.88 per 100 patients/yr | Dominating | NA | Sensitivity analyses: results robust. |
| | (2) Bosentan vs. treprostinil | 11 QALYs per 100 patients | $4873800 per 100 patients/yr | $6228922.62 per 100 patients/yr | Dominating | NA | |
| Garin et al. [13] | (1) Bosentan vs. epoprostenol | 5.77 QALYs per 100 patients | $408213 per 100 patients/yr | $452508.19 per 100 patients/yr | Dominating | NA | Sensitivity analyses had minimal impact on these results. |
| | (2) Bosentan vs. treprostinil | 5.92 QALYs per 100 patients | $434684 per 100 patients/yr | $481851.56 per 100 patients/yr | $81393.84 | $50000 | |
| | (3) Bosentan vs. iloprost | 3.09 QALYs per 100 patients | $3466486 per 100 patients/yr | $3842634.40 per 100 patients/yr | Dominating | NA | |
| | (4) Bosentan vs. sitaxentan | 0.16 QALYs per 100 patients | $474 per 100 patients/yr | $525.43 per 100 patients/yr | $3283.94 | $50000 | |
| | (5) Bosentan vs. ambrisentan | 0 QALYs per 100 patients | $0 per 100 patients/yr | $0 per 100 patients/yr | $0 | NA | |
| | (6) Bosentan vs. sildenafil | 0 QALYs per 100 patients | $3153535 per 100 patients/yr | $3495725.08 per 100 patients/yr | Dominated | NA | |
| Coyle et al. [14] | (1) Bosentan vs. ambrisentan 5 mg, FC II | −0.73 QALYs per person (3.904 vs. 4.634) | Can$29095 per person (Can$406282 vs. Can$377187) | $24105.22 per person ($336604.81 vs. $312499.59) | Dominated | NA | Extensive deterministic and probabilistic sensitivity analyses: results robust. |
| | (1) Bosentan vs. ambrisentan 5 mg, FC III | −0.22 QALYs per person (2.960 vs. 3.180) | Can$61406 per person (Can$412979 vs. Can$351573) | $50874.90 per person ($342153.27 vs. $291278.38) | Dominated | NA | |
| | (2) Bosentan vs. ambrisentan 10 mg, FC II | −0.313 QALYs per person (3.904 vs. 4.217) | Can$28759 per person (Can$406282 vs. Can$377523) | $23826.84 per person ($336604.81 vs. $312777.96) | Dominated | NA | |
| | (2) Bosentan vs. ambrisentan 10 mg, FC III | −0.083 QALYs per person (2.960 vs. 3.043) | Can$36095 per person (Can$412979 vs. Can$376884) | $29904.70 per person ($342153.27 vs. $312248.55) | Dominated | NA | |
| | (3) Bosentan vs. sildenafil, FC II | −0.7593 QALYs per person (3.904 vs. 4.663) | Can$260028 per person (Can$406282 vs. Can$146254) | $215433.31 per person ($336604.81 vs. $121171.50) | Dominated | NA | |
| | (3) Bosentan vs. sildenafil, FC III | −0.324 QALYs per person (2.960 vs. 3.284) | Can$231860 per person (Can$412979 vs. Can$181119) | $192096.11 per person ($342153.27 vs. $150057.17) | Dominated | NA | |
| | (4) Bosentan vs. tadalafil, FC II | −0.098 QALYs per person (3.904 vs. 4.002) | Can$253037 per person (Can$406282 vs. Can$153245) | $209641.26 per person ($336604.81 vs. $126963.55) | Dominated | NA | |
| | (4) Bosentan vs. tadalafil, FC III | −0.053 QALYs per person (2.960 vs. 3.013) | Can$212395 per person (Can$412979 vs. Can$200584) | $175969.35 per person ($342153.27 vs. $166183.93) | Dominated | NA | |
| | (5) Bosentan vs. supportive care, FC II | 0.686 QALYs per person (3.904 vs. 3.128) | Can$251126 per person (Can$406282 vs. Can$155156) | $208058.00 per person ($336604.81 vs. $128815.82) | $303291.55 | $165700.08 (Can$200000) | |
| | (5) Bosentan vs. supportive care, FC III | 0.273 QALYs per person (2.960 vs. 2.687) | Can$208694 per person (Can$412979 vs. Can$204285) | $172903.07 per person ($342153.27 vs. $169250.20) | $633344.58 | $165700.08 (Can$200000) | |
| Dranitsaris et al. [15] | (1) Bosentan vs. ambrisentan | NA | Can$16302 per patient (Can$164745 vs. Can$148443) | $14956.40 per patient ($151146.60 vs. $136190.20) | NA | NA | One-way sensitivity analysis: results sensitive to sildenafil dose, ambrisentan daily drug cost, and bosentan daily drug cost. |
| | (2) Bosentan vs. sitaxentan | NA | Can$6307 per patient (Can$164745 vs. Can$158444) | $5786.41 per patient ($151146.60 vs. $145365.70) | NA | NA | |
| | (3) Bosentan vs. sildenafil | NA | Can$116394 per patient (Can$164745 vs. Can$48351) | $106786.59 per patient ($151146.60 vs. $44360.01) | NA | NA | |
| Wlodarczyk et al. [16] | Bosentan vs. conventional care, at 5 years | 1.39 years of life expectancy | A$116929 per patient | $101787.70 per patient | $73228.56 (per LY) | $41928.72 (A$60000) | One-way sensitivity analysis: removing the PBS continuation rules from the model, halving the annual mortality rate in patients on conventional therapy, and changing mortality and hospitalization RR affected the results. |
| | Bosentan vs. conventional care, at 10 years | 2.93 years of life expectancy | A$181808 per patient | $158265.43 per patient | $54015.51 (per LY) | $41928.72 (A$60000) | |
| | Bosentan vs. conventional care, at 15 years | 3.87 years of life expectancy | A$216331 per patient | $188318.00 per patient | $48660.98 (per LY) | $41928.72 (A$60000) | |
| Stevenson et al. [17] | Bosentan vs. palliative therapy, iPAH | 0.37 QALYs per patient (3.32 vs. 2.95) | £69000 per patient (£134000 vs. £203000) | $111639.98 per patient ($216808.07 vs. $328448.05) | Dominating | NA | Results were similar in both the deterministic and probabilistic analyses. |
| | Bosentan vs. palliative therapy, PAH-CTD | 0.15 QALYs per patient (1.36 vs. 1.21) | £32000 per patient (£62000 vs. £94000) | $51775.06 per patient ($100314.18 vs. $152089.25) | Dominating | NA | |
| Fan et al. [18] | Bosentan vs. palliative therapy | 6.19 QALYs per person (7.23 vs. 1.04) | ¥439046.77 per patient (¥504293.75 vs. ¥65246.98) | $125227.26 per patient ($143837.35 vs. $18610.09) | $20230.58 | $39815.46 (¥139593) | Sensitivity analyses: results robust. |
| Barbieri et al. [19] | Bosentan vs. ambrisentan | NA | €1112145 (€87594291 vs. €86482146) | $1184990.49 ($93331717.06 vs. $92146726.56) | NA | NA | The sensitivity analysis corroborated the base-case findings. |

Note. "Dominating" denotes bosentan producing more QALYs at a lower cost, whereas "dominated" denotes bosentan producing fewer QALYs at a higher cost. Values in parentheses are bosentan vs. comparator. ICER: incremental cost-effectiveness ratio; QALY: quality-adjusted life-year; LY: life-year; yr: year; PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; CTD: connective tissue disease.

#### 3.4.1. Bosentan versus Prostanoids

Two studies [12, 13] conducted in the USA provided economic evaluation data for bosentan versus prostanoids (epoprostenol, treprostinil, and iloprost). Highland et al. [12] developed a Markov model in 2003 to compare the cost-effectiveness of bosentan, epoprostenol, and treprostinil in treating PAH. The study reported that bosentan was more cost-effective than either epoprostenol or treprostinil, with lower costs (a saving of $46417.21 or $62289.22 per patient, respectively; adjusted to 2017 values) and a greater gain in quality-adjusted life-years (QALYs; 0.11 more QALYs per patient). Garin et al. [13] updated the Markov model of Highland et al. [12] in 2009 and again found bosentan to be dominant (lower costs and greater QALYs) relative to epoprostenol and iloprost. Additionally, in contrast to the findings of Highland et al. [12], Garin et al.
[13] found that treprostinil, when used as an alternative to bosentan, resulted in average annual savings of $4818.51 per patient (adjusted to 2017 values). The corresponding ICER of bosentan relative to treprostinil, $81393.84 per QALY gained (adjusted to 2017 values), exceeded the cost-effectiveness threshold of $50000 per QALY in the USA.

#### 3.4.2. Bosentan versus Other Endothelin Receptor Antagonists

Four studies [13–15, 19] compared bosentan with other available endothelin receptor antagonists, including ambrisentan and sitaxentan. Two studies [13, 14] used cost-effectiveness analysis, while the other two used cost-minimization analysis [15] and budget impact analysis [19], respectively. Coyle et al. [14] estimated that, as first-line medications, both 5 mg and 10 mg ambrisentan provided more QALYs at lower cost than bosentan for patients with either FC II or FC III PAH. Garin et al. [13] found that the cost-effectiveness of bosentan was similar to that of ambrisentan. However, sitaxentan, used as an alternative to bosentan, yielded an annual cost saving of $525.43 per 100 patients (adjusted to 2017 values), and the ICER of bosentan relative to sitaxentan was $3283.94 per QALY (adjusted to 2017 values), within the threshold of acceptability in the USA. The study by Dranitsaris et al. [15], which built a population-based CMA model to evaluate three years of pharmacotherapy, revealed that bosentan would be associated with higher costs of $14956.40 and $5786.41 (adjusted to 2017 values) when used as an alternative to ambrisentan or sitaxentan, respectively. Moreover, the budget impact analysis reported by Barbieri et al. [19] demonstrated that the use of ambrisentan instead of bosentan for eligible patients might result in savings of about $1.1 million (adjusted to 2017 values) over a 3-year time horizon in Italy.

#### 3.4.3. Bosentan versus Sildenafil

Two studies [13, 14] provided economic evaluation data for bosentan versus sildenafil in patients with PAH in the USA and Canada. Garin et al. [13] built a Markov-type model to evaluate one year of treatment; the economic and sensitivity analyses indicated that treatment with bosentan resulted in the same gain in QALYs as sildenafil, but at a higher cost. In another study [14], a cost-utility analysis suggested that sildenafil was less costly (FC II: $121171.50 versus $336604.81; FC III: $150057.17 versus $342153.27; adjusted to 2017 values) and more effective (FC II: 4.663 versus 3.904 QALYs; FC III: 3.284 versus 2.960 QALYs) than bosentan in PAH patients with either FC II or FC III disease. These results suggest that initiating treatment with sildenafil is likely the more economical option.

#### 3.4.4. Bosentan versus Conventional, Supportive, or Palliative Therapy

Four studies [14, 16–18] conducted in Canada, Australia, the UK, and China compared bosentan with conventional, supportive, or palliative therapy. Wlodarczyk et al. [16] estimated an average cost of $204237.00 (adjusted to 2017 values) per patient for bosentan and $15918.99 for conventional therapy alone in Australia; bosentan was associated with an incremental life expectancy of 3.87 years over conventional therapy, with an ICER of $48660.98 per life-year (LY) gained over a 15-year time horizon. Stevenson et al. [17] indicated that, over a lifetime horizon, bosentan was likely to be a cost-effective first-line therapy for UK patients with iPAH or PAH-CTD of FC III compared with palliative care, with lower costs (iPAH: $216808.07 versus $328448.05 per patient; PAH-CTD: $100314.18 versus $152089.25 per patient; adjusted to 2017 values) and better outcomes in QALYs (iPAH: 3.32 versus 2.95; PAH-CTD: 1.36 versus 1.21). The study by Fan et al.
[18] showed that the health-related quality of life, expressed as a utility value, was 7.23 QALYs with bosentan therapy and 1.04 QALYs with palliative therapy, an incremental gain of 6.19 QALYs for bosentan. The estimated cost per patient was $143837.35 for patients treated with bosentan and $18610.09 for those given palliative therapy, a cost increase of $125227.26 per patient. The incremental cost-utility ratio of bosentan relative to palliative therapy was $20230.58 per QALY, below the Chinese willingness-to-pay threshold of $39815.46 per QALY (¥139593), which is based on China's gross domestic product (GDP).

In comparison, in the study by Coyle et al. [14], bosentan did not show a conclusive cost-effectiveness advantage. That Canadian study found that the ICER of bosentan versus supportive care was $303291.55 for patients with FC II disease and $633344.58 for those with FC III disease, both higher than the willingness-to-pay threshold of $165700.08.

Cost-effectiveness analyses of bosentan versus palliative therapy thus suggested that bosentan was a potentially cost-effective intervention in Australia, the UK, and China, which is not consistent with the results of Highland et al. [12] in the USA.
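The dominance labels and ICER arithmetic used throughout Table 3 ("dominating" = more QALYs at lower cost; "dominated" = fewer QALYs at higher cost; otherwise ICER = incremental cost divided by incremental QALYs, judged against a willingness-to-pay threshold) can be sketched as follows. The helper function is illustrative only; the example figures are the Fan et al. [18] values reported above (2017 US$):

```python
def icer_verdict(d_cost, d_qaly, threshold=None):
    """Classify a treatment vs. its comparator from incremental cost and
    incremental QALYs, following the dominance note of Table 3."""
    if d_cost < 0 and d_qaly > 0:
        return "dominating", None      # cheaper and more effective
    if d_cost > 0 and d_qaly < 0:
        return "dominated", None       # costlier and less effective
    if d_qaly == 0:
        return "equal effectiveness", None
    icer = d_cost / d_qaly             # cost per additional QALY gained
    if threshold is None:
        return "ICER only", icer
    verdict = "cost-effective" if icer <= threshold else "not cost-effective"
    return verdict, icer

# Fan et al. [18]: +$125227.26 and +6.19 QALYs vs. palliative therapy,
# judged against the Chinese threshold of $39815.46 per QALY.
label, icer = icer_verdict(125227.26, 6.19, 39815.46)
print(label)  # -> cost-effective (the ICER works out to ~$20230.58/QALY)
```

Dividing the incremental cost by the incremental QALYs reproduces the $20230.58 per QALY reported for Fan et al. [18], which falls below the stated threshold.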
conventional care At 5years incremental effectiveness: 1.39 life expectancy At 5years incremental costs: A$116929 for each patient At 5years incremental costs: $101787.70 for each patient $73228.56 $41928.72 (A$60000) One-way sensitivity analysis: removing the PBS continuation rules from the model, halving of the annual mortality rate in patients treated with conventional therapy, and changing mortality and hospitalization RR affected the results. At 10years incremental effectiveness: 2.93 life expectancy At 10years incremental costs: A$181808 for each patient At 10years incremental costs: $158265.43 for each patient $54015.51 $41928.72 (A$60000) At 15years incremental effectiveness: 3.87 life expectancy At 15years incremental costs: A$216331 for each patient At 15years incremental costs: $188318.00 for each patient $48660.98 $41928.72 (A$60000) Stevenson et al. [17] Bosentan vs. palliative therapy Patients with iPAHIncremental effectiveness: 0.37 QALYs per patient (treatment, 3.32 QALYs; comparator, 2.95 QALYs) Patients with iPAH Incremental costs: £69000 per patient (treatment, £134000; comparator, £203000) Patients with iPAHIncremental costs: $111639.98 per patient (treatment, $216808.07; comparator, $328448.05) Dominating NA The results were similar in both the deterministic and probabilistic analyses. Patients with PAH-CTDIncremental effectiveness: 0.15 QALYs per patient (treatment, 1.36 QALYs; comparator, 1.21 QALYs) Patients with PAH-CTDIncremental costs: £32000 per patient (treatment, £62000; comparator, £94000) Patients with PAH-CTD Incremental costs: $51775.06 per patient (treatment, $100314.18; comparator, $152089.25) Dominating NA Fan et al. [18] Bosentan vs. 
palliative therapy Incremental effectiveness: 6.19 QALYs per person (treatment, 7.23 QALYs; comparator, 1.04 QALYs) Incremental costs: ¥439046.77 per patient (treatment, ¥504293.75; comparator, ¥65246.98) Incremental costs: $125227.26 per patient (treatment, $143837.35; comparator, $18610.09) $20230.58 $39815.46 (¥139593) Sensitivity analyses: results robust. Barbieri et al. [19] Bosentan vs. ambrisentan NA Incremental costs: €1112145 (treatment, €87594291; comparator, €86482146) Incremental costs: $1184990.49 (treatment, $93331717.06; comparator, $92146726.56) NA NA The sensitivity analysis corroborated the base case findings. Note. “Dominating” denotes bosentan treatment producing more QALYs at a lower cost, whereas “dominated” denotes bosentan producing less QALYs at a higher cost. ICER: incremental cost-effectiveness ratio; QALY: quality-adjusted life-year; yr: year; PAH: pulmonary arterial hypertension; FC: functional class; NA: not applicable; CTD: connective tissue disease. ### 3.4.1. Bosentan versus Prostanoids Two studies [12, 13] conducted in the USA provided economic evaluation data for bosentan versus prostanoids (epoprostenol, treprostinil, and iloprost). Highland et al. [12] developed a Markov model to compare the cost-effectiveness of bosentan, epoprostenol, and treprostinil in treating PAH in 2003. These studies reported that bosentan was more cost-effective than either epoprostenol or treprostinil, with lower costs (a cost saving of $46417.21 or $62289.22, respectively. Adjusted to the year 2017 value) and a greater gain in quality-adjusted life-years (QALYs; 0.11 more QALYs gained) per patient. Garin et al. [13] improved the research by Highland et al. [12] in the updated Markov model in 2009. Bosentan was found to be dominant (lower costs and greater QALY) relative to other medications epoprostenol and iloprost. Additionally, in contrast to the findings by Highland et al. [12], Garin et al. 
[13] found that treatment with treprostinil resulted in an average annual saving of $4818.51 per patient (adjusted to the year 2017 value) when used as an alternative to bosentan; the corresponding ICER of bosentan was $81393.84 per QALY gained (adjusted to the year 2017 value), greater than the cost-effectiveness threshold of $50000 per QALY in the USA.

### 3.4.2. Bosentan versus Other Endothelin Receptor Antagonists

Four studies [13–15, 19] compared bosentan with other available endothelin receptor antagonists, namely, ambrisentan and sitaxentan. Two studies [13, 14] used cost-effectiveness analysis, while the other two used cost-minimization analysis [15] and budget impact analysis [19], respectively. Coyle et al. [14] estimated that, as first-line medications, both 5 mg and 10 mg ambrisentan provided more QALYs at lower costs than bosentan for patients with either FC II or FC III PAH. Garin et al. [13] found that the cost-effectiveness of bosentan was similar to that of ambrisentan. Sitaxentan as an alternative to bosentan, however, yielded an annual cost saving of $525.43 per 100 patients (adjusted to the year 2017 value), and the corresponding ICER of bosentan was $3283.94 per QALY (adjusted to the year 2017 value), which is within the threshold of acceptability in the USA. The study of Dranitsaris et al. [15], which built a population-based CMA model to evaluate 3 years of pharmacotherapy, revealed that bosentan would be associated with higher costs of $14956.40 and $5786.41 per patient (adjusted to the year 2017 value) when used as an alternative to ambrisentan or sitaxentan, respectively. Moreover, the budget impact analysis reported by Barbieri et al. [19] demonstrated that the use of ambrisentan instead of bosentan in eligible patients might result in savings of about $1.1 million (adjusted to the year 2017 value) over a 3-year time horizon in Italy.

### 3.4.3. Bosentan versus Sildenafil

Two studies [13, 14] provided economic evaluation data for bosentan versus sildenafil in patients with PAH in the USA and Canada. Garin et al. [13] built a Markov-type model to evaluate 1-year treatment; the economic and sensitivity analyses indicated that treatment with bosentan resulted in the same gain in QALYs as sildenafil, but at a higher cost. In another study [14], a cost-utility analysis suggested that sildenafil was less costly (FC II, $121171.50 versus $336604.81; FC III, $150057.17 versus $342153.27; adjusted to the year 2017 value) and more effective (FC II, 4.663 QALYs versus 3.904 QALYs; FC III, 3.284 QALYs versus 2.960 QALYs) than bosentan in PAH patients with either FC II or FC III disease. The results therefore show that initiating treatment with sildenafil is likely the most economical option.

### 3.4.4. Bosentan versus Conventional, Supportive, or Palliative Therapy

Four studies [14, 16–18] conducted in Canada, Australia, the UK, and China compared bosentan with conventional, supportive, or palliative therapy. Wlodarczyk et al. [16] estimated an average cost of $204237.00 (adjusted to the year 2017 value) per patient for bosentan and $15918.99 (adjusted to the year 2017 value) for conventional therapy alone in Australia; bosentan was associated with an incremental life expectancy of 3.87 years over conventional therapy, with an ICER of $48660.98 per life-year (LY) gained over a 15-year time horizon. Stevenson et al. [17] indicated that, over a lifetime horizon, bosentan was likely to be a cost-effective first-line therapy relative to palliative care for UK patients with FC III iPAH or PAH-CTD, with lower costs (iPAH, $216808.07 versus $328448.05 per patient; PAH-CTD, $100314.18 versus $152089.25 per patient; adjusted to the year 2017 value) and better QALY outcomes (iPAH, 3.32 versus 2.95; PAH-CTD, 1.36 versus 1.21). The study by Fan et al.
[18] showed that the utility value, which represented health-related quality of life, was 7.23 QALYs with bosentan therapy and 1.04 QALYs with palliative therapy. Bosentan was thus associated with an incremental gain of 6.19 QALYs over palliative therapy. The estimated cost per patient was $143837.35 for patients treated with bosentan and $18610.09 for those given palliative therapy, a cost increase of $125227.26 per patient. The incremental cost-utility ratio of bosentan relative to palliative therapy was $20230.58 per QALY, which was less than one-half of the gross domestic product (GDP) in China.

In comparison, in the study of Coyle et al. [14], bosentan did not show a conclusive cost-effectiveness result. That Canadian study showed that the ICER of bosentan versus supportive care was $303291.55 in patients with FC II disease and $633344.58 in patients with FC III disease, both higher than the willingness-to-pay threshold of $165700.08. Overall, cost-effectiveness analyses of bosentan versus palliative therapy suggested that bosentan was a potentially cost-effective intervention in Australia, the UK, and China, which is not consistent with the results of Highland et al. [12] in the USA.

## 4. Discussion

### 4.1. Summary of Evidence

PAH is a chronic, progressive, devastating disease with no cure, which may place a significant medical and financial burden on patients' families. The aim of this systematic review of published studies was to evaluate the costs and cost-effectiveness of bosentan for PAH.
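The decision rule behind the "dominating"/"dominated" verdicts and the threshold comparisons tabulated above can be sketched in a few lines. This is an illustrative helper of our own (the `appraise` function and its inputs are hypothetical, not taken from any included study); it simply encodes the standard ICER arithmetic.

```python
def appraise(delta_cost_usd, delta_qaly, threshold_usd_per_qaly=None):
    """Classify a treatment-vs-comparator pair from its incremental values.

    delta_cost_usd: treatment cost minus comparator cost (2017 US$)
    delta_qaly:     treatment QALYs minus comparator QALYs
    """
    if delta_qaly > 0 and delta_cost_usd < 0:
        return "dominating"  # more QALYs at a lower cost
    if delta_qaly < 0 and delta_cost_usd > 0:
        return "dominated"   # fewer QALYs at a higher cost
    if delta_qaly == 0:
        return "dominated" if delta_cost_usd > 0 else "equivalent"
    icer = delta_cost_usd / delta_qaly  # incremental cost-effectiveness ratio
    if threshold_usd_per_qaly is None:
        return f"ICER = ${icer:.2f}/QALY"
    verdict = "cost-effective" if icer <= threshold_usd_per_qaly else "not cost-effective"
    return f"ICER = ${icer:.2f}/QALY ({verdict})"

# Fan et al. [18]: bosentan vs. palliative therapy (per patient, 2017 US$)
print(appraise(143837.35 - 18610.09, 7.23 - 1.04, threshold_usd_per_qaly=39815.46))
# -> ICER = $20230.58/QALY (cost-effective)

# Coyle et al. [14], FC II: bosentan vs. ambrisentan 5 mg
print(appraise(336604.81 - 312499.59, 3.904 - 4.634))  # -> dominated
```

The same rule, applied with each country's threshold, reproduces the verdicts in the summary table.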
We used the thresholds stated in the included studies where applicable. Otherwise, we searched the literature to identify appropriate and accepted thresholds used in the relevant countries to determine whether the ICERs of bosentan were below such criteria, and in turn to evaluate whether bosentan appeared to provide good value for money in PAH. In our analysis, two Markov model-based cost-effectiveness analyses of bosentan, epoprostenol, and treprostinil in treating PAH were published in the USA [12, 13]. Both studies indicated that bosentan was more cost-effective than epoprostenol, while their results for bosentan versus treprostinil were not consistent. The differences regarding treprostinil might be ascribed to different methodological approaches, such as the model structure, the time horizon, and the measurement of costs and health utilities. According to the four model results, in PAH patients for whom an ERA was the preferred agent, ambrisentan might be the drug of choice because of its economic advantages and improved safety profile [13, 14, 16, 19]. Two studies investigated the cost-effectiveness of bosentan versus sildenafil, and sildenafil was found to be more cost-effective, especially for adult patients with FC II and III PAH [13, 14]. Among the four studies [14, 16–18] that compared bosentan with conventional, supportive, or palliative therapy, three [16–18] concluded that bosentan seemed more favorable. In comparison, Coyle et al. [14] indicated that long-term, head-to-head research must be conducted to evaluate the cost-effectiveness of bosentan and supportive care before making any recommendations in Canada. That study differed from the current analysis in that it was not designed to allow a direct comparison of the cost-effectiveness of PAH treatments relative to each other, but only relative to supportive care.

### 4.2. Quality of Evidence

With regard to the quality of reporting of these economic evaluations, despite the fact that guidelines for conducting health economic evaluations have been widely available for many years, we observed that the quality of reporting was still insufficient in several articles. We hope that the availability of the CHEERS statement will lead to improvements in reporting and hence in the quality of economic evaluations in PAH. In terms of characterizing heterogeneity, patients' diverse baseline characteristics and other variables could potentially contribute to variation in the interpretation of our review results. In this review, only one study [17] performed a subgroup analysis of iPAH and PAH-CTD. In fact, subgroup analyses are important because factors such as disease severity, gender, and body mass index (BMI) may also affect the prognosis of PAH and exert a direct effect on the budget impact of a new therapy [20, 21]. In addition, it is noteworthy that several articles included in this study received grants from pharmaceutical companies, which might introduce bias into the cost-effectiveness evaluations. Although pharmaceutical industry-funded research could result in biased cost-effectiveness analyses, no guidance currently exists on how to evaluate this bias.

### 4.3. Key Drivers of Cost-Effectiveness

In line with prior studies, some key drivers of cost-effectiveness were found in our review. First, the choice of comparator is highly important: the cost-effectiveness of a drug therapy can differ according to the selected comparator. For example, bosentan was shown to be dominant relative to epoprostenol, while its cost-effectiveness was less favorable when sildenafil or ambrisentan was the comparator. Therefore, one of the most important structural choices of a cost-effectiveness analysis is the comparator. Second, the included analyses were largely country-specific, since healthcare systems and reimbursement policies differ between countries and therefore have a significant influence on the results and final conclusions of economic evaluations. Future work should use approaches such as alternative ways of specifying multilevel models for analyzing cost-effectiveness data and identification of a range of appropriate covariates to handle assumptions and uncertainties in economic evaluation results, which would improve the generalizability and transferability of studies across settings [22].

### 4.4. Strengths and Limitations

To the best of our knowledge, this is the first systematic review of published studies to examine the cost-saving or cost-effectiveness properties of bosentan for PAH patients. Unlike previous systematic and narrative reviews, which were outdated or restricted to a specific comparator, this is the most comprehensive review, incorporating economic evaluations over an extended period of time, with quality assessed using a validated instrument. Although this review was conducted using explicit, systematic methods selected to minimize bias, several limitations that affect the conclusions should be taken into consideration when interpreting the results. First, given the disparity in the methods used across existing economic evaluations, it is extremely difficult to synthesize the studies into a coherent whole. Studies would have to be adjusted to achieve standardized results, but this is rarely achievable because of the diverse nature of the elements considered, including different types of models, perspectives, time horizons, and healthcare systems. Such differences are likely to have important impacts on model inputs such as costs and health utilities.
Therefore, we summarized the evidence qualitatively, and the results should be interpreted with caution. Second, the trial populations used in the pharmacoeconomic models may not represent the entire real-world PAH population. The prevalence of mortality and comorbidities seems relatively low in most studies. For example, Fan et al. developed their Markov model on the basis of an Australian and New Zealand patient population to analyze annual mortality rates in Chinese PAH patients, which might have led to an inaccurate extrapolation of results, given the study populations. Third, some relevant studies may have been overlooked in our review, especially those not published in English or Chinese. Similarly, we did not formally assess the potential publication bias that may have arisen from the exclusion of unpublished studies (e.g., industry-sponsored evaluations), which may have had unfavorable findings.

## 5. Conclusions

Evidence produced by economic evaluations in general, and in the PAH field in particular, has the potential to inform clinical and reimbursement decision-making. Based on the available evidence, we conclude that the administration of bosentan for PAH appears to be a more cost-effective alternative compared with epoprostenol and conventional or palliative therapy. There was unanimous agreement that bosentan was not a cost-effective front-line therapy compared with sildenafil and other endothelin receptor antagonists. Future research investigating ways to improve the quality of reporting of economic evaluations is therefore warranted.

---

*Source: 1015239-2018-11-18.xml*
2018
# Role of MicroRNAs in Renin-Angiotensin-Aldosterone System-Mediated Cardiovascular Inflammation and Remodeling

**Authors:** Maricica Pacurari; Paul B. Tchounwou
**Journal:** International Journal of Inflammation (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101527

---

## Abstract

MicroRNAs are endogenous regulators of gene expression that act either by inhibiting translation or by promoting degradation of their target mRNAs. Recent studies indicate that microRNAs play a role in cardiovascular disease and in renin-angiotensin-aldosterone system- (RAAS-) mediated cardiovascular inflammation, either as mediators or as targets of RAAS pharmacological inhibitors. The exact roles of microRNAs in RAAS-mediated cardiovascular inflammation and remodeling are still at an early stage of investigation. However, a few microRNAs have been shown to play a role in RAAS signaling, particularly miR-155, miR-146a/b, miR-132/122, and miR-483-3p. Identifying specific microRNAs and their targets, and elucidating the microRNA-regulated mechanisms associated with RAAS-mediated cardiovascular inflammation and remodeling, might lead to the development of novel pharmacological strategies against RAAS-mediated vascular pathologies. This paper reviews the role of microRNAs in the inflammatory factors mediating cardiovascular inflammation and in RAAS genes, as well as the effect of RAAS pharmacological inhibition on microRNAs and on the resolution of RAAS-mediated cardiovascular inflammation and remodeling. This paper also discusses advances in microRNA-based therapeutic approaches that may be important in targeting RAAS signaling.

---

## Body

## 1. Introduction

The role of microRNAs in the RAAS is at an early stage of investigation; however, a few microRNAs have been shown to be implicated in RAAS-mediated hypertension and cardiovascular diseases [1].
Blocking the RAAS is a primary approach for the treatment of hypertension, cardiovascular inflammation, and cardiac hypertrophy [2]. The discovery of microRNAs in 1993 in the nematode Caenorhabditis elegans opened a new research avenue and provided novel tools to understand aspects of gene regulation that previously could not be explained. Since then, more than 2,518 microRNAs have been identified and listed in current databases [3]. Angiotensin II (Ang II) is the main active effector of the RAAS, with profound signaling effects on the cardiac and vascular systems. Ang II impacts the cardiovascular system particularly by regulating the proliferation and migration of vascular smooth muscle cells (VSMC), thereby affecting cardiovascular remodeling. Ang II signaling is mediated via the Ang II type 1 receptor (AT1R), and both Ang II and AT1R are highly expressed in VSMC in several cardiovascular diseases (CVD). In addition to Ang II, tumor necrosis factor alpha (TNFalpha) plays an important role in the development of cardiovascular inflammation, sometimes in tandem with Ang II. MicroRNAs regulate many important biological functions, and abnormal microRNA levels are involved in cardiovascular and other pathologies. In this review, we attempt to summarize the microRNAs that have been shown to play a role in RAAS signaling, cardiovascular inflammation/remodeling, and related CVD.

## 2. MicroRNA Biogenesis and Stability

The main function of a microRNA is to bind to the 3′ UTR of its target gene and suppress its expression. MicroRNAs are conserved small noncoding RNAs of approximately 22 nucleotides in length, processed from double-stranded precursors. Gene regulation via microRNAs presents some level of complexity, given that a microRNA can reside within a coding or a noncoding gene and can be independently expressed or can form a cluster sharing the same transcriptional regulation [4].
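The 3′ UTR targeting described above can be illustrated with a short sketch. This is a minimal, illustrative script assuming canonical 7-mer seed pairing (miRNA nucleotides 2–8); the miR-155-5p mature sequence is as listed in miRBase, while the UTR fragment is invented for the example.

```python
def seed_site(mirna: str, utr: str) -> int:
    """Return the index of a canonical 7-mer seed match in a 3' UTR, or -1.

    The seed is nucleotides 2-8 of the miRNA (5'->3'); a target site is the
    reverse complement of that seed appearing in the UTR (RNA alphabet).
    """
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                # positions 2-8 (0-based slice)
    site = "".join(comp[b] for b in reversed(seed))  # reverse complement of the seed
    return utr.find(site)

# hsa-miR-155-5p mature sequence (miRBase); the UTR fragment below is hypothetical.
mir155 = "UUAAUGCUAAUCGUGAUAGGGGU"
utr = "AAGCAUUAGCCUUGAAUCCGAAUU"
print(seed_site(mir155, utr))  # -> 1 (a seed match near the 5' end of this UTR)
```

Real target prediction additionally weighs site accessibility, conservation, and pairing outside the seed, but the seed match above is the core of the lookup.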
Furthermore, the complexity of microRNA signaling is extended by the finding that microRNAs are multifunctional: one microRNA can bind to multiple targets, and more than one microRNA can bind to the same 3′ UTR [5]. MicroRNA biogenesis is a complex and important step in microRNA activity. Biogenesis of microRNAs is under temporal and spatial control, involving an intricate coordination of proteins, transcription factors, cofactors, and RNA [6]. In addition to regulation by the Drosha and Dicer proteins, additional modification processes such as editing, methylation, uridylation, adenylation, and even RNA decay are emerging as key factors in the regulation of microRNA biogenesis [7]. MicroRNA abundance depends on the presence of Argonaute proteins: it has been reported that loss of Ago2 resulted in loss of microRNAs, whereas reexpression of Argonaute proteins led to increased expression of precursor microRNAs [8]. However, the mechanisms that regulate microRNA turnover are not fully understood, nor perhaps fully identified. Of all the properties of microRNAs, stability is a major one that makes them powerful tools in cell biology. MicroRNAs are stable in many biological fluids, including circulating blood, urine, and breast milk [9]. Moreover, microRNAs can be found encapsulated in vesicles, but there are also microRNAs that are not encapsulated but bound to other circulating macromolecules; these account for the majority (~80%) of circulating microRNAs [10]. Due to their stability, many microRNAs are considered potential biomarkers of several diseases, including cardiovascular diseases.

## 3. MicroRNA and RAAS Effectors

Recent estimates suggest that one-third of all genes are regulated by microRNAs. In mouse primary cultured VSMC, overexpression of miR-155 inhibited Ang II-induced cell proliferation and viability by decreasing AT1R mRNA and protein [11].
Numerous studies have shown that miR-155 plays an important role in mediating inflammatory and immune responses and hematopoiesis [12]. However, miR-155 is also highly expressed in numerous types of cancer, and thus it seems that miR-155 may indeed regulate diverse biological functions [12]. Alexy and coworkers examined the formation of miR-155-encapsulating microvesicles (MP) by endothelial cells (EC) following TNFalpha treatment. In the presence of TNFalpha, EC released higher levels of miR-155/MP but markedly decreased levels of miR-126 and miR-21/MP. The TNFalpha-induced miR-MP exerted an antiapoptotic effect, whereas the low-level miR-MPs were proapoptotic. These results also suggested a role for microRNAs in cell-to-cell communication [13]. MiR-155 plays a key role in mediating cardiac injury, cardiac remodeling, and inflammation in the hypertensive or pressure-overloaded heart by regulating AT1R, eNOS, and inflammatory cytokines. In aortic adventitial fibroblasts, miR-155 regulates AT1R [14]. Overexpression of miR-155 decreased the expression of AT1R, prevented Ang II-induced ERK1/2 activation, and increased the expression of α-smooth muscle actin (α-SMA) [14]. Moreover, miR-155 targets endothelial nitric oxide synthase (eNOS), thus directly regulating endothelium-dependent vasorelaxation [15, 16]. Patients with nephrolithiasis exhibited high levels of miR-155 in blood and urine [17]. Urinary miR-155 levels correlated negatively with IL-6, IL-1β, and TNF-α and positively with RANTES [17]. Another level of intricacy between the RAAS, microRNAs, exercise, and hypertension was explored by Sun et al. [15]. In this study, exercise attenuated aortic remodeling and improved endothelium-mediated vasorelaxation in SHR rats. Exercise increased miR-27a and miR-155 and decreased miR-143. Exercise also reduced Ang II levels; increased Ang (1–7) levels, ACE2, AT2R, and Mas receptors; and suppressed ACE (a target of miR-27a) and AT1R (a target of miR-155).
This study provided insight into a possible mechanism by which exercise improves RAAS function in the aorta and might explain the beneficial effect of exercise on the cardiovascular system [15]. Ang II plays an important role in vascular remodeling by increasing the expression of TGFbeta, Col1A1, and alpha-smooth muscle actin (α-SMA). Pan et al. examined the effect of Ang II on miR-29b expression in the kidney of spontaneously hypertensive rats (SHRs). Ang II decreased the expression of miR-29b in the renal cortex of SHRs and in treated NRK-52E cells. In NRK-52E cells, miR-29b targets TGFbeta, α-SMA, Col1A1, and Col3A1, and overexpression of miR-29b abolished the Ang II-induced genes [18]. In HEK293N cells overexpressing AT1R, Ang II increased miR-132 and miR-212 via an AT1R/Gαq/ERK1/2-dependent axis. In primary cardiac fibroblasts, Ang II induced the expression of miR-132 and miR-212 in the heart, arterial wall, and kidney, but Ang II had no effect on these microRNAs in primary myocytes [19]. In hypertensive rats, Ang II induced the expression of miR-132 and miR-212. Moreover, patients taking AT1R blockers (losartan, candesartan, irbesartan, and telmisartan) exhibited decreased levels of miR-132 and miR-122 [20]. Both miR-132 and miR-212 are highly conserved miRNAs, closely clustered and regulated by the cAMP response element binding protein (CREB), which is an Ang II target gene. In most tissues, the level of miR-132 is much higher than that of miR-212; the exact role of this difference is not known, but it has been proposed that miR-132 may have a regulatory effect on miR-212 [21]. Overexpression of miR-132/212 in fibroblasts resulted in differential expression of 24 genes, of which 7 (AGTR1, AC, PKC, EGR1, JAK2, cJUN, and SOD2) are involved in Ang II signaling. Functionally, overexpression of miR-132/212 increases fibroblast size and the expression level of Ang II.
Among the modulated genes, DYRK2 and MAP3K3 were found to be downregulated; both are known to be involved in endothelial-to-mesenchymal transition [22]. These results suggested that miR-132/212 regulates many genes of the Ang II signaling pathway [19] (Table 1).

Table 1: MicroRNAs affected by RAAS effectors.

| Effector | MicroRNA | Target gene | Reference |
| --- | --- | --- | --- |
| Angiotensin II | miR-155 | AT1R, eNOS, α-SMA, NF-κB, AP-1 | [11–14, 16, 48] |
| | ↓ miR-29b | TGFbeta, Col1A, α-SMA | [18, 24, 33–35] |
| | ↓ miR-483-3p | AGT, ACE-1, ACE-2, AT2R | [29] |
| | ↓ miR-129-3p | FAK, MMP-2, MMP-9 | [23] |
| | ↑ miR-132/212 | AT1R, MSK, Gαβ/ERK1/2 | [19, 21] |
| | ↓ miR-34 | ANP, β-MHC | [46, 49] |
| | miR-766 | Cyp11B2 | [25] |
| | miR-16 | Ang II, CCND1, CCND2, CCNE | [47] |

Note: ↓: decreased expression level; ↑: increased expression level.

In patients with renal carcinoma, miR-129-3p and miR-129-5p were significantly attenuated compared to normal biopsy specimens. Moreover, ectopic expression of miR-129-3p inhibited cell migration and invasiveness, and renal carcinoma cells treated with miR-129-3p exhibited decreased levels of metastasis genes, including SOX4, phosphorylated focal adhesion kinase (FAK), and MMP-2/MMP-9 [23]. Recent studies have shown a role for Ang II in epithelial-mesenchymal transition (EMT), and a microRNA role in this process was observed by Pan et al. [18] in spontaneously hypertensive rats (SHRs) and age-matched Wistar-Kyoto (WKY) rats. MiR-29b in the renal cortex was lower in SHR than in WKY rats, and treatment of NRK-52E renal tubular epithelial cells with Ang II decreased miR-29b and increased the expression of TGFbeta, α-smooth muscle actin (α-SMA), and collagen I (Col I). MiR-29b is emerging as a microRNA associated with EMT [24] (Table 1). Li et al. [24] showed that TGFbeta downregulated miR-29b, whereas overexpression of miR-29b blunted TGFbeta-induced EMT via AKT2. Inhibition of miR-29b resulted in the expression of EMT markers. Aldosterone synthase (the Cyp11B2 gene) is a target of Ang II and thus a target of Ang II-regulated microRNAs.
The Cyp11B2 gene is a target of miR-766 in H295R human adrenocortical cells [25]. Maharjan et al. [25] showed that miR-766 binds to the Cyp11B2 gene and reduces Cyp11B2 mRNA and protein levels. The findings of this study are intriguing: microRNAs are generally viewed as regulators of protein expression, yet this study suggests that a microRNA also affects the mRNA of its target.

## 4. MicroRNA and RAAS Inhibitors

The effect of RAAS inhibition on microRNAs was investigated by Deiuliis et al. in patients with atherosclerosis plaque progression [26]. Patients were given aliskiren for 12 weeks, peripheral blood mononuclear cells were collected, and microRNA arrays were performed. Aliskiren-treated patients had significantly downregulated miR-106b-5p, miR-27a-3p, and miR-18b-5p compared to placebo-treated patients. The microRNA levels correlated positively with the thoracic and abdominal aortic wall in patients treated with aliskiren. In a different clinical setting, in patients with acute stroke, plasma miR-106b-5p was found to be highly elevated compared to healthy subjects [27]. Although the function of miR-106b-5p is not yet known, these findings suggest that miR-106b-5p may play a role in hemodynamics. MiR-27a-3p has been shown to regulate the EGFR/AKT1/mTOR axis, thereby decreasing cell viability and increasing apoptosis, whereas overexpression of EGFR, AKT, or mTOR reverses the miR-27a-3p-induced loss of cell viability [28]. To identify angiotensin II (Ang II) regulated microRNAs, Kemp et al. performed genome-wide microarray analysis in vascular smooth muscle cells treated with Ang II or losartan [29]. A high number of microRNAs (468) were regulated by Ang II and losartan. Only 32 microRNAs were regulated by Ang II/AT2R, whereas 52 miRNAs were regulated via AT1R, and 18 microRNAs were commonly regulated via AT1R and AT2R. Of all the microRNAs, miR-483-3p expression was significantly downregulated in response to chronic activation of AT1R, and the AT1R antagonist candesartan significantly increased miR-483-3p. Kemp et al.
[29] also shed some light on Ang II feed-forward regulation of the RAAS effectors AGT, ACE-1, ACE-2, and AT2R via miR-483-3p. In the presence of Ang II, miR-483-3p is repressed, whereas the RAAS effectors, which carry miR-483-3p binding sites in their 3′ UTRs, are highly expressed [29]. A recent study of patients with coronary artery disease (CAD) receiving ARBs, ACEIs, and statins for 12 months provided evidence of Toll-like receptor 4 (TLR-4) regulated microRNAs. Four microRNAs, miR-31, miR-181a, miR-16, and miR-145, were downregulated in CAD patients compared to non-CAD patients. Treatment with the combination of the ARB telmisartan and atorvastatin, or the ACEI enalapril and atorvastatin, increased the TLR-4-responsive microRNAs and decreased TLR-4 protein levels. ARB treatment induced a greater change in the four microRNAs than ACEI treatment [30]. Another microRNA, miR-146a/b, was found at high levels in the blood of CAD patients, and its expression correlated positively with IRAK, TRAF, and TLR4 mRNA or protein [31]. After 12 months of treatment with atorvastatin and telmisartan or atorvastatin and enalapril, miR-146a/b, IRAK, and TLR4 mRNA or protein decreased in the blood of CAD patients. Correlation analysis revealed that miR-146a and TLR4 were independent predictors of cardiac events [31] (Table 2).

Table 2: MicroRNAs affected by RAAS inhibitors.
| Inhibitor | MicroRNA | Target gene | Reference |
| --- | --- | --- | --- |
| Aliskiren | ↓ miR-106b-5p | EGFR/AKT/mTOR, ACE | [27] |
| | ↓ miR-27a-3p | EGFR/AKT/mTOR, ACE | [26–28, 38] |
| | ↓ miR-18b-5p | EGFR, ACE | [29] |
| | ↓ miR-155 | AT1R | |
| Candesartan | ↑ miR-483-3p | AGT, ACE-1, ACE-2, AT2R | [26, 38] |
| | ↓ miR-132/122 | Ang II | [38] |
| | ↓ miR-29b | Col1A, Col3A1 | [26, 38] |
| | ↓ miR-212 | AT2R | |
| Telmisartan | ↑ miR-31 | | [30] |
| | ↑ miR-181a | TNFalpha | [30] |
| | ↑ miR-16 | VEGF | [30] |
| | ↑ miR-143/145 | KLF4, KLF6, ACE-2 | [30] |
| | ↓ miR-146a/b | TRAF6, KLF4, TLR4 | [31] |
| Atorvastatin | ↓ miR-146a/b | TRAF6, KLF4, TLR4 | [31] |
| | ↓ miR-221/222 | p27, p57 | [50] |
| Enalapril | ↑ miR-31 | | [30] |
| | ↑ miR-181a | TNFalpha | [30] |
| | ↑ miR-145 | KLF4, KLF6, ACE-2 | [30] |
| | ↑ miR-16 | VEGF, CCND1, CCND2, CCNE | [30, 47] |
| Captopril | ↑ miR-16 | VEGF | [30, 47] |
| | ↑ miR-19b | βMHC | [47, 51] |
| | ↑ miR-20b | | [47] |
| | ↑ miR-93 | | [47] |
| | ↑ miR-106b | | [47] |
| | ↑ miR-223 | | [47] |
| | ↑ miR-423-5p | | [47] |

Note: ↓: decreased expression level; ↑: increased expression level.

## 5. MicroRNA in Cardiovascular Disease

Cardiovascular disease (CVD) remains the major cause of death worldwide, and identifying new molecular factors with roles in the development of CVD may offer novel diagnostic markers for cardiovascular events. In patients with atypical coronary artery disease, a signature of five microRNAs (miR-487a, miR-502, miR-208, miR-215, and miR-29b) was found to be altered and may thus provide potential novel diagnostic biomarkers [32]. Molecular targets of several of the five microRNAs were found to be mediators of local inflammation; for example, miR-215 targets catenin-beta interacting protein 1 in TGFbeta-stimulated rat mesangial cells, whereas miR-29b plays an important role in modulating myocardial injury and idiopathic fibrosis [33–35]. The miR-29 family regulates extracellular matrix proteins and thus also influences remodeling. The potential therapeutic applicability of miR-29 has been tested experimentally in the setting of induced pulmonary fibrosis: in bleomycin-induced pulmonary fibrosis, treatment with miR-29 reversed fibrosis by decreasing collagen (Col1A1 and Col3A1) synthesis.
Moreover, tissue analysis revealed the presence of intravenously injected miR-29b not only in the lungs but also in cardiac muscle and spleen [35]. In a different cardiovascular pathology, in patients with failing heart, ischemic cardiomyopathy, or aortic stenosis, miR-320 was found to be highly expressed compared to control patients [36]. Functional analysis of miR-320 via ectopic expression in cultured cardiomyocytes indicated that miR-320 regulates cell death and apoptosis genes [37]. MicroRNA analysis in the blood and cerebrospinal fluid (CSF) of patients who had suffered a stroke showed a differential profile between the two tissues, with some microRNAs absent in one tissue but present in the other. In the CSF, 183 microRNAs were detected, of which let-7c and miR-221-3p were upregulated and correlated with stroke. Analysis of blood showed a higher number of detected miRNAs, 287 in total, of which miR-151a-3p and miR-140-5p were upregulated and miR-18b-5p was downregulated and correlated with stroke [26]. Also, patients with atherosclerosis receiving aliskiren for 12 weeks had decreased blood levels of miR-18b-5p, miR-106b-5p, and miR-27a-3p [38]. Although stroke and atherosclerosis are both cardiovascular diseases involving blood clot formation, some microRNAs might be disease specific; for example, miR-18b-5p is decreased in the blood of stroke patients but not in patients with atherosclerosis [26, 38] (Table 2). Recent studies support a role for microRNAs in cardiac hypertrophy [22]. For example, inhibition of miR-1, miR-23a, and miR-133 increased cardiomyocyte hypertrophy, whereas miR-22 and miR-30a regulate cardiac hypertrophy in mice [39–43]. MicroRNA signaling is complex; one microRNA can target multiple genes. MiR-34 targets cell cycle genes and cardiac autophagy [44]. In addition to microRNAs modulating cardiomyocytes, Ang II is also a regulator of cardiomyocyte hypertrophy [45].
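The fluid-specific detection profiles described above reduce to simple set arithmetic: intersect the per-fluid detection sets to find shared microRNAs, and take set differences to find fluid-restricted ones. The sketch below reuses a handful of the microRNA names from the text purely as placeholders, not the actual detection lists from the cited studies.

```python
# Hypothetical per-fluid detection sets (placeholders, not study data).
csf = {"let-7c", "miR-221-3p", "miR-151a-3p"}
blood = {"let-7c", "miR-151a-3p", "miR-140-5p", "miR-18b-5p"}

shared = csf & blood       # detected in both fluids
csf_only = csf - blood     # detected in CSF but absent from blood
blood_only = blood - csf   # detected in blood but absent from CSF

print(sorted(shared))      # -> ['let-7c', 'miR-151a-3p']
```

The same intersection/difference pattern applies when comparing disease groups (e.g., stroke versus atherosclerosis) to flag candidate disease-specific microRNAs.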
With regard to this relationship, Huang et al. [45] showed that Ang II-induced myocardial hypertrophy was antagonized by miR-34, whereas inhibition of miR-34 promoted Ang II signaling via ANP and β-MHC [46]. Another microRNA regulating cardiomyocyte hypertrophy is miR-16 [16, 47]. Huang et al. [47] showed that overexpressing miR-16 in cardiomyocytes decreases Ang II, and that overexpressing miR-16 resulted in decreased expression of cyclins D1, D2, and E in the myocardium of mice. As summarized in Figure 1, the existing experimental evidence indicates that microRNA and RAAS signaling are intertwined: the RAAS effector Ang II coregulates its own level via miR-132 and miR-212, which also target Ang II signaling via AT1R. RAAS inhibitors mostly target microRNAs by suppressing their expression, thus alleviating cardiovascular inflammation and remodeling.

Figure 1: Dependent and independent RAAS-regulated microRNA signaling in cardiovascular inflammation/remodeling and hypertension. Ang II regulates its own level by stimulating miR-132 and miR-212 and regulates its downstream signaling by suppressing miR-483-3p, miR-129-3p, miR-29b, and miR-34, thereby increasing the expression of AT1R, AT2R, ACE1, ACE2, Col1A, and TGFbeta. Several microRNAs regulate RAAS signaling independently of Ang II by regulating inflammation and remodeling: miR-146a/b, miR-181a, miR-155, miR-129-3p, and miR-29b. RAAS inhibitors differentially regulate microRNAs: telmisartan, atorvastatin, aliskiren, and candesartan inhibit miR-146a/b, miR-132, miR-212, miR-155, miR-129-3b, and miR-29b. Enalapril stimulates the expression of miR-181a, which targets TNFα, thereby regulating inflammation and remodeling. ↑: increased level; ↓: decreased level; ⊥: inhibition; →: stimulation. AT1R: angiotensin II type 1 receptor; ACE: angiotensin converting enzyme; AGT: angiotensinogen; TLR4: toll-like receptor 4; TRAF6: TNF receptor associated factor 6.

## 6.
Conclusion

Considering that millions of people worldwide are affected by hypertension, and given the role played by the RAAS in cardiovascular inflammation and remodeling, determining the role of microRNAs in regulating RAAS signaling may represent a new strategy for developing novel therapeutics, as well as new treatment combinations, for patients suffering from high blood pressure and other cardiovascular diseases. Although scientific evidence on the role of microRNAs in RAAS signaling is scarce, the few published studies on circulating microRNAs in patients with coronary artery disease do indicate that some of these circulating microRNAs may serve as biomarkers for therapeutic approaches targeting the RAAS and cardiovascular diseases.

---
*Source: 101527-2015-05-06.xml*
--- ## Abstract MicroRNAs are endogenous regulators of gene expression either by inhibiting translation or protein degradation. Recent studies indicate that microRNAs play a role in cardiovascular disease and renin-angiotensin-aldosterone system- (RAAS-) mediated cardiovascular inflammation, either as mediators or being targeted by RAAS pharmacological inhibitors. The exact role(s) of microRNAs in RAAS-mediated cardiovascular inflammation and remodeling is/are still in early stage of investigation. However, few microRNAs have been shown to play a role in RAAS signaling, particularly miR-155, miR-146a/b, miR-132/122, and miR-483-3p. Identification of specific microRNAs and their targets and elucidating microRNA-regulated mechanisms associated RAS-mediated cardiovascular inflammation and remodeling might lead to the development of novel pharmacological strategies to target RAAS-mediated vascular pathologies. This paper reviews microRNAs role in inflammatory factors mediating cardiovascular inflammation and RAAS genes and the effect of RAAS pharmacological inhibition on microRNAs and the resolution of RAAS-mediated cardiovascular inflammation and remodeling. Also, this paper discusses the advances on microRNAs-based therapeutic approaches that may be important in targeting RAAS signaling. --- ## Body ## 1. Introduction The role of microRNAs in RAAS system is at early stages of investigations; however, few microRNAs have been shown to be implicated in the RAAS mediated hypertension cardiovascular diseases [1]. Blocking RAAS is a primary approach for the treatment of hypertension, cardiovascular inflammation, and cardiac hypertrophy [2]. The discovery of microRNAs in 1993 in nematodeCaenorhabditis elegans has led to a new research avenue and provided novel and innovative tools to understand gene regulation that sometimes could not be explained. Since then, more than 2,518 microRNAs have been identified and listed in current databases [3]. 
Angiotensin II (Ang II) is the main active effector of the RAAS with profound signaling effects on the cardiac and vascular systems. Ang II impacts the cardiovascular system particularly regulating the proliferation and migration of vascular smooth muscle cells (VSMC) therefore affecting cardiovascular remodeling. Ang II signaling is mediated via Ang II type I receptor (ATIR), and both the Ang II and ATRI are highly expressed in the VSMC of some of cardiovascular disease (CVD). In addition to Ang II, tumor necrosis factor alpha (TNFalpha) plays an important role in the development of cardiovascular inflammation, sometimes in tandem with Ang II. MicroRNAs regulate many important biological functions and abnormal levels of microRNAs are involved in cardiovascular and other pathologies.In this review, we attempt to provide information of microRNAs that have been shown to play a role in the RAAS signaling and cardiovascular inflammation/remodeling and related CVD. ## 2. MicroRNA Biogenesis and Stability The main function of microRNA is to bind to 3′ UTR of its target gene and suppress its expression. MicroRNAs are conserved small noncoding double-stranded strands of RNA of approximately 22 nucleotides in length. Gene regulation via microRNAs presents some level of complexity given that microRNA can be part of a coding and noncoding gene and can be independently expressed or can form a cluster sharing same transcriptional regulation [4]. Furthermore, the complexity of microRNAs signaling is extended by the finding that microRNAs are multifunctional as such one microRNA can bind to multiple targets, and more than one microRNA can bind to the same 3′ UTR [5].MicroRNAs biogenesis is a complex and important step in microRNA activity. Biogenesis of microRNAs is under temporal and spatial control, involving an intricate coordination of proteins, transcription factors, cofactors, and RNA [6]. 
In addition to microRNAs regulation by Drosha and Dicer proteins, additional levels of modification processes such as editing, methylation, uridylation, adenylation, or even RNA decay are emerging as key factors in regulation of microRNA biogenesis [7]. MicroRNAs abundance is dependent on the presence of Argonaute proteins. It has been previously reported that a loss of Ago2 resulted in loss of microRNA and the reexpression of Argonaute proteins led to increased expression of precursor microRNAs [8]. However, the mechanisms that regulate microRNAs turnover are not fully understood neither perhaps fully identified. Of all aspects of microRNAs, stability is one major property that makes microRNAs powerful tools in cell biology. MicroRNAs are stable in many biological fluids including circulating blood, urine, and breast milk [9]. Moreover, microRNAs can be found encapsulated in vesicles but also there are microRNAs that are not nonencapsulated but bound to other circulating macromolecules and account for majority (~80%) of circulating microRNAs [10]. Due to their stability, many microRNAs are considered potential biomarkers of several diseases, including cardiovascular diseases. ## 3. MicroRNA and RAAS Effectors Recent estimates suggest that one-third of all genes are regulated by microRNAs. In mouse primary cultured VSMC, overexpression of miR-155 inhibited Ang II-induced cell proliferation and viability via decreasing ATIR mRNA and protein [11]. Numerous studies showed that miR-155 plays an important role mediating inflammatory and immune responses and hematopoiesis [12]. However, miR-155 is also highly expressed in numerous types of cancer, and thus it seems that miR-155 may indeed regulate diverse biological functions [12]. Alexy and coworkers examined the formation of miR-155 encapsulated microvesicles (MP) by endothelial cells (EC) following TNFalpha treatment. 
In the presence of TNFalpha, EC released a higher level of miR-155/MP but tremendously decreased the level of miR-126 and miR-21/MP. The TNFalpha-induced miR-MP exerted antiapoptotic effect, whereas the low miR-MPs were proapoptotic. These results suggested also a role of microRNAs in cell to cell communication signaling pathway [13]. MiR-155 plays a key role in mediating cardiac injury, cardiac remodeling, and inflammation in hypertensive or pressure overload heart via regulating AT1R, eNOS, and inflammatory cytokines. In aortic adventitial fibroblast, miR-155 regulates AT1R [14]. Overexpression of miR-155 decreased the expression of AT1R and prevented Ang II-induced ERK1/2 activation and increased the expression of α-smooth muscle actin (α-SMA) [14]. Moreover, miR-155 targets endothelial nitric oxide synthase (eNOS), thus directly regulating endothelium-dependent vasorelation [15, 16]. Patients with nephrolithiasis exhibited high levels of miR-155 in blood and urine [17]. Urine MiR-155 level negatively correlated with IL-6, IL-1β, IL-6, and TNF-α and positively with RANTES [17]. Another level of intricacy between RAAS, microRNA, exercise, and hypertension was explored by Sun et al. [15]. In this study, exercise attenuated aortic remodeling and improved endothelium-mediated vasorelaxation in SHR rats. Exercise increased miR-27a and miR-155 and decreased miR-143. Exercise also reduced Ang II level, increased Ang (1–7) levels, ACE2, AT2R, and Mas receptors, and suppressed ACE a target of miR-27a and AT1R which is a target of miR-155. This study provided an insight into the possible mechanism by which exercise improves RAAS in aorta and might explain the beneficial effect of exercise on cardiovascular system [15].Ang II plays an important role in vascular remodeling by increasing the expression of TGFbeta, Col1A1, and alpha-smooth muscle actin (α-SMA). Pan et al. 
examined the effect of Ang II on miR-29b expression in the kidney of spontaneously hypertensive rats (SHRs). Ang II decreased the expression of miR-29b in the renal cortex of SHRs and in NRK-52E treated cells. In NRK-52E cells, miR-29b targets TGFbeta and α-SMA, and Col1A1, Col3A1, and overexpression of miR-29b abolished Ang II-induced genes [18]. In HEK293N cells overexpressing AT1R, Ang II increased miR-132 and miR-212 via AT1R/Gαq/ERK1/2-dependent axis. In primary cardiac fibroblast, Ang II induced the expression of miR-132 and miR-212 in the heart, arteries wall, and kidney but no Ang II effect on these microRNAs in primary myocytes [19]. In hypertensive rats, Ang II induced the expression of miR-132 or miR-212. Moreover, patients taking AT1R blockers (losartan, candesartan, irbesartan, and telmisartan) exhibited decreased levels of miR-132 and miR-122 [20]. Both miR-132 and miR-212 are highly conserved miRNAs, closely clustered and regulated by cAMP response element binding protein (CREB), which is Ang II target gene. In most tissues, the level of miR-132 is much higher than that of miR-212, and the exact role of such difference is not known; however, it is proposed that miR-132 may indeed have a regulatory effect on miR-212 [21]. Overexpression of miR-132/212 in fibroblasts resulted in differential expression of 24 genes of which 7 genes (AGTR1, AC, PKC, EGR1, JAK2, cJUN, and SOD2) are involved in Ang II signaling. Functionally, overexpression of miR-132/212 induces increased fibroblast size and increased expression level of Ang II. Among the modulated genes, DYRK2 and MAP3K3 were found to be downregulated and known to be involved in endothelial to mesenchymal transition [22]. These results suggested that miR-132/212 regulates many genes of Ang II signaling pathway [19] (Table 1).Table 1 MicroRNAs affected by RAAS effectors. 
Effector MicroRNA target gene Reference Angiotensin II miR-155 ATR1, eNOS,α-SMA, NF-κB, AP-1 [11–14, 16, 48] ↓ miR-29-b TGFbeta, Col 1A,α-SMA [18, 24, 33–35] ↓ miR-483-3p AGT, ACE-1, ACE-2, AT2R [29] ↓ miR-129-3p FAK, MMP-2, MMP-9 [23] ↑miR-132/212 AT1R, MSK, Gαβ/ERK1/2 [19, 21] ↓ miR-34 ANP,β-MHC [46, 49] miR-766 Cyp11B2 [25] miR-16 Ang II, CCDN1, CCDN2, CCDNE [47] Note: ↓: decreased expression level; ↑: increased expression level.In patients with renal carcinoma, miR-129-3p and miR-129-5p were significantly attenuated compared to normal biopsy specimens. Moreover, ectopic expression of miR-129-3p inhibited cell migration and invasiveness, whereas renal carcinoma cells treated with miR-129-3p resulted in decreased level of metastasis genes including SOX4, phosphorylated focal adhesion kinase (FAK), and MMP-2/MMP-9 [23].Recent studies have shown Ang II role in epithelial-mesenchymal transition (EMT), and microRNA role in such process was observed by Pan et al. [18] in spontaneously hypertensive rats (SHRs) and age-matched Wistar-Kyoto (WKY) rats. MiR-29b in renal cortex was lower in SHR than in WKY rats, and treatment of NRK-52E renal tubular epithelial cells with Ang II decreased miR-29b and increased expression of TGFbeta, α-smooth muscle actin (α-SMA), and collagen I (Col I). Mir-29b is emerging as microRNA associated with EMT [24] (Table 1). Li et al. [24] showed that TGFbeta downregulated miR-29b, whereas overexpression of miR-29b blunted TGFbeta-induced EMT via AKT2. Inhibition of miR-29b resulted in the expression of EMT markers.Aldosterone synthase (Cyp11B2 gene) is a target of Ang II and thus a target of Ang II regulated microRNAs. Cyp11B2 gene is a target of miR-766 in human adrenocortical cells H295R [25]. Maharjan et al. [25] showed that miR-766 binds to Cyp11B2 gene and reduces Cyp11B2 mRNA and protein level. 
The findings of this study are intriguing: microRNAs are generally thought to regulate protein expression, yet this study suggests that a microRNA can also affect the mRNA of its target.

## 4. MicroRNA and RAAS Inhibitors

The effect of RAAS inhibition on microRNAs was investigated by Deiuliis et al. in patients with atherosclerotic plaque progression [26]. Patients were given aliskiren for 12 weeks, peripheral blood mononuclear cells were collected, and microRNA arrays were performed. Aliskiren-treated patients had significantly downregulated miR-106b-5p, miR-27a-3p, and miR-18b-5p compared to placebo-treated patients. The levels of these microRNAs positively correlated with the thoracic and abdominal aortic wall in patients treated with aliskiren. In a different clinical setting, in patients with acute stroke, plasma miR-106b-5p was found to be highly elevated compared to healthy subjects [27]. Although the function of miR-106b-5p is not yet known, these findings suggest that it may play a role in hemodynamics. MiR-27a-3p has been shown to regulate the EGFR/AKT1/mTOR axis, thereby decreasing cell viability and increasing apoptosis, whereas overexpression of EGFR, AKT, or mTOR attenuates the miR-27a-3p-induced loss of cell viability [28]. To identify angiotensin II (Ang II)-regulated microRNAs, Kemp et al. performed genome-wide microarray analysis in vascular smooth muscle cells treated with Ang II or losartan [29]. A high number of microRNAs (468) were regulated by Ang II and losartan. Only 32 microRNAs were regulated by Ang II/AT2R, whereas 52 were regulated via AT1R and 18 were commonly regulated via AT1R and AT2R. Of all these microRNAs, miR-483-3p expression was significantly downregulated in response to chronic activation of AT1R, and the AT1R antagonist candesartan significantly increased miR-483-3p. Kemp et al. [29] also shed some light on Ang II feed-forward regulation of the RAAS effectors AGT, ACE-1, ACE-2, and AT2R via miR-483-3p.
In the presence of Ang II, miR-483-3p is suppressed, whereas the RAAS effectors are highly expressed via the miR-483-3p binding sites present in the 3′UTRs of the RAAS effectors [29]. A recent study of patients with coronary artery disease (CAD) receiving ARBs, ACEIs, and statins for 12 months provided evidence of Toll-like receptor 4 (TLR-4)-regulated microRNAs. Four microRNAs, miR-31, miR-181a, miR-16, and miR-145, were downregulated in CAD patients compared to non-CAD patients. The treatment combination of the ARB telmisartan with atorvastatin, or the ACEI enalapril with atorvastatin, increased the TLR-4-responsive microRNAs and decreased TLR-4 protein levels; ARB treatment induced a greater change in the four microRNAs than ACEI treatment [30]. Another microRNA, miR-146a/b, was found at high levels in the blood of CAD patients, and its expression positively correlated with IRAK, TRAF, and TLR4 mRNA or protein [31]. After 12 months of treatment with atorvastatin and telmisartan or atorvastatin and enalapril, miR-146a/b and IRAK and TLR4 mRNA or protein decreased in the blood of CAD patients. Correlation analysis revealed that miR-146a and TLR4 were independent predictors of cardiac events [31] (Table 2).

Table 2. MicroRNAs affected by RAAS inhibitors.
| Inhibitor | MicroRNA | Target genes | Reference |
|---|---|---|---|
| Aliskiren | ↓ miR-106b-5p | EGFR/AKT/mTOR, ACE | [27] |
| | ↓ miR-27a-3p | EGFR/AKT/mTOR, ACE | [26–28, 38] |
| | ↓ miR-18b-5p | EGFR, ACE | [29] |
| | ↓ miR-155 | AT1R | |
| Candesartan | ↑ miR-483-3p | AGT, ACE-1, ACE-2, AT2R | [26, 38] |
| | ↓ miR-132/122 | Ang II | [38] |
| | ↓ miR-29b | Col1A, Col3A1 | [26, 38] |
| | ↓ miR-212 | AT2R | |
| Telmisartan | ↑ miR-31 | | [30] |
| | ↑ miR-181a | TNFalpha | [30] |
| | ↑ miR-16 | VEGF | [30] |
| | ↑ miR-143/145 | KLF4, KLF6, ACE-2 | [30] |
| | ↓ miR-146a/b | TRAF6, KLF4, TLR4 | [31] |
| Atorvastatin | ↓ miR-146a/b | TRAF6, KLF4, TLR4 | [31] |
| | ↓ miR-221/222 | p27, p57 | [50] |
| Enalapril | ↑ miR-31 | | [30] |
| | ↑ miR-181a | TNFalpha | [30] |
| | ↑ miR-145 | KLF4, KLF6, ACE-2 | [30] |
| | ↑ miR-16 | VEGF, CCND1, CCND2, CCNE | [30, 47] |
| Captopril | ↑ miR-16 | VEGF | [30, 47] |
| | ↑ miR-19b | βMHC | [47, 51] |
| | ↑ miR-20b | | [47] |
| | ↑ miR-93 | | [47] |
| | ↑ miR-106b | | [47] |
| | ↑ miR-223 | | [47] |
| | ↑ miR-423-5p | | [47] |

Note: ↓: decreased expression level; ↑: increased expression level.

## 5. MicroRNA in Cardiovascular Disease

Cardiovascular disease (CVD) remains the leading cause of death worldwide, and identifying new molecular factors involved in the development of CVD may offer novel diagnostic markers for cardiovascular events. In patients with atypical coronary artery disease, a signature of five microRNAs, miR-487a, miR-502, miR-208, miR-215, and miR-29b, was found to be altered and may thus be considered a source of potential novel diagnostic biomarkers [32]. Molecular targets of several of these five microRNAs are mediators of local inflammation: miR-215 targets catenin-beta interacting protein 1 in TGFbeta-stimulated rat mesangial cells, whereas miR-29b plays an important role in modulating myocardial injury and idiopathic fibrosis [33–35]. The miR-29 family regulates extracellular matrix proteins and thus also influences remodeling. The potential therapeutic applicability of miR-29 has been tested experimentally in induced pulmonary fibrosis: in bleomycin-induced pulmonary fibrosis, treatment with miR-29 reversed fibrosis by decreasing collagen (Col1A1 and Col3A1) synthesis.
Moreover, tissue analysis revealed the presence of intravenously injected miR-29b not only in the lungs but also in the cardiac muscle and spleen [35]. In a different cardiovascular pathology, in patients with failing heart, ischemic cardiomyopathy, or aortic stenosis, miR-320 was found to be highly expressed compared to control patients [36]. Functional analysis of miR-320 via ectopic expression in cultured cardiomyocytes indicated that miR-320 regulates cell death and apoptosis genes [37]. MicroRNA analysis in the blood and cerebrospinal fluid (CSF) of patients who had suffered a stroke showed differential profiles in the two tissues; some microRNAs were absent in one tissue but present in the other. In the CSF, 183 microRNAs were detected, of which let-7c and miR-221-3p were upregulated and correlated with stroke. In the blood, a higher number of microRNAs was detected, 287 in total, of which miR-151a-3p and miR-140-5p were upregulated and miR-18b-5p was downregulated and correlated with stroke [26]. Also, patients with atherosclerosis receiving aliskiren for 12 weeks had decreased blood levels of miR-18b-5p, miR-106b-5p, and miR-27a-3p [38]. Although stroke and atherosclerosis both involve blood clot formation, some microRNAs may be disease specific; for example, miR-18b-5p is decreased in the blood of stroke patients but not in patients with atherosclerosis [26, 38] (Table 2). Recent studies support a role for microRNAs in cardiac hypertrophy [22]. For example, inhibition of miR-1, miR-23a, and miR-133 increased cardiomyocyte hypertrophy, whereas miR-22 and miR-30a regulate cardiac hypertrophy in mice [39–43]. MicroRNA signaling is complex; one microRNA can target multiple genes. MiR-34, for instance, targets cell cycle genes and cardiac autophagy [44]. In addition to microRNAs modulating cardiomyocytes, Ang II is also a regulator of cardiomyocyte hypertrophy [45].
With regard to this relationship, Huang et al. [45] have shown that Ang II-induced myocardial hypertrophy is antagonized by miR-34, whereas inhibition of miR-34 promotes Ang II signaling via ANP and β-MHC [46]. Another microRNA regulating cardiomyocyte hypertrophy is miR-16 [16, 47]. Huang et al. [47] showed that overexpressing miR-16 in cardiomyocytes decreases Ang II, and that overexpression of miR-16 results in decreased expression of cyclins D1, D2, and E in the myocardium of mice. As shown in Figure 1, the existing experimental evidence indicates that microRNA and RAAS signaling are intricately linked: the RAAS effector Ang II coregulates its own level via miR-132 and miR-212, which also target Ang II signaling via AT1R. RAAS inhibitors mostly target microRNAs by suppressing their expression, thus alleviating cardiovascular inflammation and remodeling.

Figure 1. Dependent and independent RAAS-regulated microRNA signaling in cardiovascular inflammation/remodeling and hypertension. Ang II regulates its own level by stimulating miR-132 and miR-212 and its downstream signaling by suppressing miR-483-3p, miR-129-3p, miR-29b, and miR-34, thereby increasing the expression of AT1R, AT2R, ACE1, ACE2, Col1A, and TGFbeta. Several microRNAs regulate RAAS signaling independently of Ang II by regulating inflammation and remodeling: miR-146a/b, miR-181a, miR-155, miR-129-3p, and miR-29b. RAAS inhibitors differentially regulate microRNAs: telmisartan, atorvastatin, aliskiren, and candesartan inhibit miR-146a/b, miR-132, miR-212, miR-155, miR-129-3p, and miR-29b. Enalapril stimulates the expression of miR-181a, which targets TNFα, thereby regulating inflammation and remodeling. ↑: increased level; ↓: decreased level; ⊥: inhibition; →: stimulation. AT1R: angiotensin II type 1 receptor; ACE: angiotensin converting enzyme; AGT: angiotensinogen; TLR4: toll-like receptor 4; TRAF6: TNF receptor associated factor 6.

## 6. Conclusion

Considering that millions of people worldwide are affected by hypertension, and given the role played by RAAS in cardiovascular inflammation and remodeling, determining the role of microRNAs in regulating RAAS signaling may represent a new strategy for the development of novel therapeutics, as well as new treatment combinations, for patients suffering from high blood pressure and other cardiovascular diseases. Although scientific evidence on the role of microRNAs in RAAS signaling is scarce, the few published studies on circulating microRNAs in patients with coronary artery disease do indicate that some of these circulating microRNAs may serve as biomarkers for therapeutic approaches targeting RAAS and cardiovascular diseases.

---

*Source: 101527-2015-05-06.xml*
2015
# Advances in Multiferroic Nanomaterials Assembled with Clusters

**Authors:** Shifeng Zhao

**Journal:** Journal of Nanomaterials (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/101528

---

## Abstract

As an entirely new class of multifunctional materials, multiferroics have attracted a great deal of attention. With the rapid development of micro- and nano-electro-mechanical systems (MEMS and NEMS), the demand for new kinds of micro- and nanodevices and functionalities has aroused extensive research activity in the area of multiferroics. As ideal building blocks for assembling nanostructures, clusters exhibit particular physical properties related to cluster size at the nanoscale, which is efficient for controlling the multiferroic properties of nanomaterials. This review focuses on our recent advances in multiferroic nanomaterials assembled with clusters. In particular, single-phase multiferroic films and compound heterostructured multiferroic films assembled with clusters are described in detail. This technique presents a new and efficient method to produce nanostructured multiferroic materials for their potential application in NEMS devices.

---

## Body

## 1. Introduction

Multiferroics have attracted increasing attention due to the simultaneous coexistence of ferromagnetic, ferroelectric, or ferroelastic ordering [1, 2]. Many researchers have focused on the magnetoelectric effect, driven by the prospect of controlling polarization by a magnetic field and magnetization by an electric field [3], which opens up an entirely new perspective for magnetic/ferroelectric data storage media, spin-based devices (spintronics), magnetocapacitive devices, magnetic sensors, nonvolatile memories, random access memory, and so forth [4–8]. Since its discovery a century ago, ferroelectricity has been linked to the ancient phenomenon of magnetism.
Attempts to combine the dipole and spin orders in one system started in the 1960s with Cr2O3 single crystals [9, 10], and other single-phase multiferroics, including boracites (Ni3B7O13I, Cr3B7O13Cl) [10], fluorides (BaMF4, M = Mn, Fe, Co, Ni) [11, 12], magnetite Fe3O4 [13], (Y/Yb)MnO3 [14], and BiFeO3 [15], were identified in the following decades. However, such a combination in these multiferroics has proven unexpectedly difficult. Moreover, by growing composite films combining piezoelectric and magnetostrictive materials, a strong magnetoelectric coupling effect can be achieved through the product property. Much work has been done to prepare composite films by combining perovskite ferroelectric oxides (e.g., Pb(Zr0.52Ti0.48)O3 (PZT), BaTiO3) with ferromagnetic oxides (e.g., CoFe2O4, La0.67Sr0.33MnO3) [16–21]; however, due to the low magnetostriction of these ferromagnetic oxides and of Ni metal, the reported magnetoelectric effects in these composite films are generally not strong. As is well known, rare earth iron alloys (R-Fe, R = rare earth element) possess giant magnetostriction, an order of magnitude greater than that of the ferromagnetic oxides [22]. Previous investigations have shown that the magnetoelectric effect in bulk laminates consisting of an R-Fe alloy and a ferroelectric oxide (e.g., Tb0.30Dy0.70Fe2 (Terfenol-D)/PZT), with a magnetoelectric coupling coefficient αE of ~4680 mV/cm·Oe, is much larger than that of all-oxide laminates (e.g., CoFe2O4/PZT, with αE of ~60 mV/cm·Oe) [23–25]. However, for applications in micro-electro-mechanical system (MEMS) devices such as microtransducers, microactuators, and microsensors [26], well-defined microstructures together with tunable properties are necessary for the multiferroics.
Therefore, for the laminated composite film (i.e., a thin-film heterostructure), it can be expected that the magnetoelectric effect would be enhanced significantly if an R-Fe alloy is used as the magnetostrictive layer. Although giant magnetostrictive films have been prepared by some conventional film preparation methods [27–30], the phase-formation temperature of R-Fe alloys is very high (the substrate is generally heated above 500°C), so serious oxygen diffusion from the PZT oxide into the Tb-Fe alloy is unavoidable. As a result, both the magnetostriction in the Tb-Fe alloy and the piezoelectricity in the PZT are seriously suppressed. Moreover, the serious oxygen diffusion also generates a new interface layer, which further decreases the magnetoelectric coupling efficiency significantly [31]. Therefore, a low-temperature, low-energy process is necessary during deposition. Fortunately, we have developed an effective preparation method, namely, low energy cluster beam deposition (LECBD), to prepare nanostructured magnetic, giant magnetostrictive, single-phase multiferroic, and well-defined microstructured multiferroic heterostructured films [32–38]. This paper aims to review the breakthroughs in multiferroic nanostructures assembled with clusters.

## 2. Low Energy Cluster Beam Deposition

As a very important building block of nanomaterials, nanoclusters are aggregates of atoms or molecules of nanometric size, containing from 10 to 10⁶ constituent particles [39–41]. All beam experiments on clusters require a cluster source, in which the clusters are produced. A variety of sources are available, including the arc cluster ion source [42], laser vaporization cluster source [43, 44], gas aggregation source [45, 46], seeded supersonic nozzle source [47], ion sputtering source [48, 49], and liquid metal ion source [50, 51]. Based on these, we developed the low energy cluster beam deposition method.
A magnetron-sputtering-gas-aggregation (MSGA) cluster source is used to produce the cluster beam that assembles the multiferroic films. The growth and deposition process of the clusters by gas aggregation is shown in Figure 1. A direct current (DC) pulse power supply was used as the sputtering power. As the sputtering gas, one stream of argon with 99.9% purity was introduced through a ring structure close to the surface of the target; another stream of argon was fed as a buffer gas through a gas inlet near the magnetron discharge head. The cluster condensation and growth region was cooled with liquid nitrogen. A highly oriented cluster beam with a divergence angle of less than one degree was formed by differential pumping controlled by the skimmers. During deposition, the average velocity of each cluster perpendicular to the substrate is quite low (corresponding to a kinetic energy of less than 50 meV/atom), and the velocity parallel to the substrate corresponds to several meV/atom; both are much lower than the molecular binding energy. The clusters are therefore deposited on the substrate in a soft-landing manner and accumulate randomly without coalescing with each other.

Figure 1. The sketched diagram of the growth and deposition process of clusters.

Based on this technique, various cluster-assembled nanostructured films, such as metal and oxide films, have been prepared, which show peculiar properties different from those of films prepared by common methods [52–55]. Since the size, mass, and assembling manner of the clusters can be precisely tuned by changing the working gas flow, controlling the length of the condensation region, changing the buffer gas, and so forth, it is possible to control the microstructures and properties of the cluster-assembled nanostructured films, which makes this technique an ideal candidate for the fabrication of single-phase or heterostructured films.

## 3. Magnetic Films Assembled with Clusters

### 3.1. Giant Magnetostrictive R-Fe Films

The cubic Laves phase R-Fe (R = rare earth element such as Tb, Dy, Sm) compounds are well known as giant magnetostrictive materials at room temperature, which can be widely used in actuators, transducers, dampers, and so forth [56, 57]. To meet the demands of the rapidly developing nano-electro-mechanical systems (NEMS), much work has been done, and various methods, such as ion plating [27], ion beam sputtering [28], flash evaporation [29], magnetron sputtering [58], and molecular beam epitaxy [59, 60], have been developed for the preparation of R-Fe films. However, compared with the bulk materials, the saturation magnetostriction of current R-Fe films is much lower while the magnetic driving field is still higher [58], which limits their further use. Fortunately, based on the LECBD technique, a well-defined Tb-Fe nanostructured film has been obtained that exhibits excellent magnetostriction, much higher than that of common Tb-Fe films [34]. Since the size, mass, and assembling manner of the R-Fe clusters can be precisely tuned by changing the working gas flow, controlling the length of the condensation region, changing the buffer gas, and so forth, it is possible to control the microstructures and magnetic properties of the cluster-assembled R-Fe nanostructured films, which makes them ideal candidates for the fabrication of magnetic NEMS devices. A DC magnetron-sputtering-gas-aggregation (MSGA) cluster source was used to produce the Tb-Fe cluster beam, which was deposited on a Si(100) substrate at room temperature to form the nanofilm. In order to tune the cluster size, the length of the condensing growth region L was set to 80 mm, 95 mm, and 110 mm, respectively. Figure 2 presents the SEM images and size distributions of typical Tb-Fe films prepared under different lengths of the condensing growth region L.
Two facts can be clearly observed: (i) all films are assembled from spherical nanoparticles, which are distributed uniformly and monodispersely in the film; (ii) with increasing condensing growth region length, the size of the nanoparticles increases, lying in the ranges of 31~36 nm for L = 110 mm, 28~33 nm for L = 95 mm, and 23~28 nm for L = 80 mm. Meanwhile, we note that the particle size distribution is almost lognormal. The formation of such a nanoparticle film is attributed to the unique LECBD preparation process, and the length of the cluster growth region significantly influences the size of the Tb-Fe clusters in the films. In fact, the growth of the clusters, as well as their size distribution, is mainly determined by the cluster residence time and its distribution [61, 62]. With increasing length of the condensing growth region, the cluster residence time in that region increases, and thus the collisions among metal ions, Tb-Fe vapor, carrier gas, and free clusters become more frequent, leading to larger clusters.

Figure 2. SEM images of typical as-deposited Tb-Fe films and the graphs of population versus size distribution of the nanoparticles at different growth region lengths: (a) 110 mm, (b) 95 mm, and (c) 80 mm ([36]).

It has been confirmed that all of the present nanoparticle-assembled Tb-Fe films exhibit higher magnetostriction compared to common non-nanostructured films prepared by other methods [63–65], and we observe that the magnetostrictive behavior and piezomagnetic coefficient vary markedly with the average size of the nanoparticles. Figure 3 gives the magnetostriction and piezomagnetic coefficient as functions of the magnetic field for films with various particle sizes.
With increasing particle size, the saturation magnetostriction λs and the saturation magnetic field Hm change: λs ~ 816 × 10⁻⁶ and Hm ~ 6.0 kOe for d = 25 nm, λs ~ 1029 × 10⁻⁶ and Hm ~ 7.0 kOe for d = 30 nm, and λs ~ 746 × 10⁻⁶ and Hm ~ 5.0 kOe for d = 35 nm. Obviously, the film with d = 30 nm has the highest saturation magnetostriction and piezomagnetic coefficient. However, this is not the case at low magnetic field, where the film with d = 35 nm possesses the higher magnetostriction and piezomagnetic coefficient.

Figure 3. The dependence of the magnetostriction and piezomagnetic coefficient of the Tb-Fe nanostructured films on the magnetic field for various particle sizes.

This suggests that the dependence of the magnetostriction and piezomagnetic coefficient on the particle size can be attributed to differences in the magnetization characteristics of these films. Figure 4 presents the field-dependent magnetization at room temperature for the films with particle sizes of 25 nm, 30 nm, and 35 nm. It shows that the degree of magnetization anisotropy of the films is significantly affected by the particle size: both the in-plane and out-of-plane saturation magnetization, as well as the coercivity, change with the particle size. For the film with a particle size of 30 nm, the degree of magnetic anisotropy is maximal, the difference between the in-plane and out-of-plane saturation magnetization is largest, and the easy axis is out-of-plane. The particle size dependence of the magnetic anisotropy of the present films should thus be correlated with the exchange coupling effects between the nanoparticles in the film [66]. The film with d = 30 nm may show the highest degree of magnetic anisotropy because the exchange coupling distance is twice the domain wall width (RFe ~ 15 nm) for magnetic nanoparticles [67]. Therefore, the film with d = 30 nm has higher magnetic anisotropy than the other films.
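As an illustrative aside (not part of the original analysis), the reported λs and Hm values can be tabulated in a few lines of Python. The crude ratio λs/Hm used below is only a rough average-slope proxy for the piezomagnetic coefficient, not the measured quantity of the cited work, but it reproduces the qualitative picture: the 30 nm film saturates highest, while the 35 nm film responds best at low field.

```python
# Illustrative sketch: tabulate the reported saturation magnetostriction
# (lambda_s, dimensionless) and saturation field (H_m, in kOe) for each
# cluster size, and compare the crude average slope lambda_s / H_m as a
# rough proxy for the piezomagnetic coefficient.
films = {
    25: {"lambda_s": 816e-6, "H_m_kOe": 6.0},
    30: {"lambda_s": 1029e-6, "H_m_kOe": 7.0},
    35: {"lambda_s": 746e-6, "H_m_kOe": 5.0},
}

for d, f in films.items():
    slope = f["lambda_s"] / f["H_m_kOe"]  # average slope, per kOe
    print(f"d = {d} nm: lambda_s = {f['lambda_s']:.0e}, "
          f"H_m = {f['H_m_kOe']} kOe, lambda_s/H_m = {slope * 1e6:.0f}e-6 per kOe")

# The 30 nm film has the highest saturation magnetostriction, but the 35 nm
# film has the steepest average slope, consistent with its better low-field
# response described in the text.
best_saturation = max(films, key=lambda d: films[d]["lambda_s"])
best_low_field = max(films, key=lambda d: films[d]["lambda_s"] / films[d]["H_m_kOe"])
```

Running the sketch confirms that "highest saturation" and "best low-field response" single out different films, which is why both d = 30 nm and d = 35 nm are of interest depending on the operating field.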
It therefore requires a far higher magnetic field to rotate the spins into the applied field direction. As a result, the magnetostrictive coefficient of this film is lower than those of the other films at low magnetic field, but its saturation magnetostriction is the highest, which can be ascribed to the higher energy of the anisotropic exchange interaction [68].

Figure 4. Magnetic hysteresis loops of the Tb-Fe nanostructured films assembled from clusters (a) 35 nm, (b) 30 nm, and (c) 25 nm in diameter.

### 3.2. Enhanced Ferromagnetism of BiFeO3 Films Assembled with Clusters

BiFeO3 (BFO) is one of the most outstanding single-phase lead-free multiferroics due to its high ferroelectric Curie (TFE ~ 1103 K) [69] and Néel (TN ~ 643 K) temperatures [70]. However, in BFO materials, antiferromagnetism and a superimposed incommensurate cycloid spin structure with a periodicity of 62 nm along the [110]h axis cancel the macroscopic magnetization at room temperature, which restricts its applications [71]. Some investigations show that weak ferromagnetism is observed in materials of limited dimensions, such as nanowires and nanoparticles, due to partial destruction of the spiral periodicity [72–74], which demonstrates a possible way to enhance ferromagnetism in single-phase multiferroics. Thus, as size-controllable building blocks of nanomaterials, clusters become candidates for assembling multiferroics. Therefore, using the LECBD technique, we have prepared well-defined BiFeO3 nanostructured films assembled from zero-dimensional clusters, and the films were then annealed at 600°C. As expected, the ferromagnetism of the as-prepared BiFeO3 films is enhanced [38]. Figure 5 gives the morphologies of the cluster-assembled BiFeO3 nanostructured films before annealing.
It can be seen that the films are assembled from clusters, which are nearly spherical and densely packed to form uniformly continuous films, while each individual cluster is still clearly distinguishable. The population versus size graphs reveal that the average size of the nanoparticles is ~22 nm for the as-deposited films and ~25.5 nm for the annealed films; the increase in cluster size derives from improved crystallization during the annealing process.

Figure 5. (a) Typical SEM image of as-deposited BFO nanostructured films and (b) the films after annealing; the inset is the graph of population versus size distribution of the clusters.

Figure 6 presents the XRD patterns of the typical as-deposited and annealed nanostructured films assembled from clusters. Both films are polycrystalline, and all of the observed diffraction peaks can be indexed to a perovskite structure. Whereas the as-deposited BFO nanostructured films, like common films prepared by other methods, belong to the rhombohedral structure with space group R-3c (161), the annealed BFO films transform to a coexistence of tetragonal and orthorhombic symmetry, as the (104) and (110) diffraction peaks are not obviously split; this can be observed in the expanded view of the XRD pattern around 2θ = 32.6° in the inset of Figure 6. At the same time, the lattice constant of the cluster-assembled BiFeO3 films is a = 5.491 Å, smaller than those of films prepared by other methods. This suggests that a crystal distortion exists in the cluster-assembled BFO films, giving rise to a transition from the rhombohedral structure to a tetragonal one [75–77], which means the crystal structure changes from a high-symmetry state to a lower-symmetry state compared to bulk BFO materials.
Thus the crystal distortion is due to the size effect of the clusters with their small characteristic size, which partially destroys the long-range cycloid spin structure with a periodicity of 62 nm in the rhombohedral R-3c (161) structure. It is this crystal distortion of the as-prepared films that brings about the enhancement in magnetization.

Figure 6. The XRD patterns before and after annealing; the inset is the expanded view of the location of the diffraction peak around 2θ = 32.6°.

Figure 7 shows the magnetic hysteresis loops of the cluster-assembled BFO nanostructured films measured at 5 K and 300 K. Obvious ferromagnetism is observed for the cluster-assembled BFO nanostructured films not only at 5 K but also at room temperature. In particular, the saturation magnetization of the BFO films at room temperature reaches 108 emu/cc, comparable with the 125 emu/cc of the present films at 5 K. More importantly, a large magnetization of 81 emu/cc is obtained at a magnetic field of 3000 Oe at room temperature, a larger response than in common films prepared by other methods [78–80].

Figure 7. The magnetization dependence on the magnetic field of the typical cluster-assembled BFO nanostructured films (a) at a low temperature of 5 K and (b) at 300 K. The inset is the expanded view of the magnetic hysteresis.

Such enhanced room-temperature ferromagnetism is attributed to the fact that the average size of the BFO clusters is much smaller than the 62 nm period of the long-range cycloid order along the [110]h axis, so that the periodicity of the spin cycloid is broken [81]. Antiferromagnetic materials can be regarded as the combination of one sublattice with spins along one direction and another with spins along the opposite direction. If no spin canting is considered, the spins of these two sublattices compensate each other, so that the net magnetization inside the material is zero [82].
However, the long-range antiferromagnetic order is frequently interrupted at the cluster surfaces, which produces uncompensated surface spins. For the cluster-assembled BFO films with an average cluster size of 25.5 nm, the uncompensated surface spins become very significant due to the very large surface-to-volume ratio of the clusters, and they enhance the contribution to the nanoparticles' overall magnetization. Besides, structural distortion and the change of lattice parameter due to the size effect in the cluster-assembled nanostructured films [83] lead to the release of the latent magnetization locked within the cycloid. The ferromagnetism of the nanostructured films is thereby significantly enhanced.
Since the size, mass, and the assembling manner of the R-Fe clusters can be precisely tuned by changing the working gas flow, controlling the length of condensation region, changing the buffer gas, and so forth, it is possible to control the microstructures and magnetic properties of the cluster-assembled R-Fe nanostructured films, which makes it an ideal candidate for the fabrication of the magnetic NEMS devices.A DC-magnetron-sputtering-gas-aggregation (MSGA) cluster source was used to produce the Tb-Fe cluster beam, which was finally deposited on the Si(100) substrate at room temperature to form the nanofilm. In order to tune the cluster size, the length of the condensing growth regionL was set as 80 mm, 95 mm, and 110 mm, respectively. Figure 2 presents the SEM images and size distributions of the typical Tb-Fe films prepared under different length of the condensing growth region L. One can clearly observe two facts: (i) all films are assembled by the spherical nanoparticles, which are distributed uniformly and monodispersely in the film; (ii) with the increase of the condensing growth region length, the size of the nanoparticle increases, being in the ranges of 31~36 nm for L = 110 mm, 28~33 nm for L = 95 mm, and 23~28 nm for L = 80 mm, respectively. Meanwhile, we note that the particle size distribution was almost lognormal. The formation of such nanoparticle film is attributed to the unique LECBD preparation process. And the length of the cluster growth region significantly influences the size of the Tb-Fe clusters for the films. In fact, the growth of the clusters as well as their size distribution is mainly determined by the cluster residence time and its distribution [61, 62]. 
With increasing length of the condensing growth region, the cluster residence time in that region increases, and thus the collisions among metal ions, Tb-Fe vapor, carrier gas, and free clusters become more sufficient, leading to larger clusters.

Figure 2 SEM images of the typical as-deposited Tb-Fe films and the graphs of population versus size distribution of the nanoparticles at different growth region lengths: (a) 110 mm, (b) 95 mm, and (c) 80 mm ([36]).

It has been confirmed that all present nanoparticle-assembled Tb-Fe films exhibit higher magnetostriction compared to the common nonnanostructured films prepared by other methods [63–65]. We also observe that the magnetostrictive behavior and piezomagnetic coefficient vary evidently with the average nanoparticle size. Figure 3 gives the dependence of the magnetostriction and piezomagnetic coefficient on the magnetic field for the films with various particle sizes. With increasing particle size, the saturation magnetostriction λs and the saturation magnetic field Hm change: for example, λs ~ 816 × 10⁻⁶ and Hm ~ 6.0 kOe for d = 25 nm, λs ~ 1029 × 10⁻⁶ and Hm ~ 7.0 kOe for d = 30 nm, and λs ~ 746 × 10⁻⁶ and Hm ~ 5.0 kOe for d = 35 nm. Obviously, the film with d = 30 nm has the highest saturation magnetostriction and piezomagnetic coefficient. However, this is not the case at low magnetic field: there, the film with d = 35 nm possesses the higher magnetostriction and piezomagnetic coefficient.

Figure 3 The dependence of magnetostriction and piezomagnetic coefficient on the magnetic field for the Tb-Fe nanostructured films with various particle sizes.

This suggests that the dependence of the magnetostriction and piezomagnetic coefficient on the particle size can be attributed to the difference in magnetization characteristics of these films. Figure 4 presents the field-dependent magnetization at room temperature for the films with particle sizes of 25 nm, 30 nm, and 35 nm.
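The piezomagnetic coefficient plotted in Figure 3 is the field derivative of the magnetostriction, q = dλ/dH, and can be estimated from sampled λ(H) data by finite differences. The sketch below uses an illustrative saturating tanh curve with λs = 816 × 10⁻⁶ (loosely mimicking the d = 25 nm film), not the measured data.

```python
import math

def piezomagnetic_coefficient(H, lam):
    """Central finite-difference estimate of q = dλ/dH from sampled
    magnetostriction data (H in Oe, λ dimensionless); units 1/Oe."""
    q = []
    for i in range(1, len(H) - 1):
        q.append((lam[i + 1] - lam[i - 1]) / (H[i + 1] - H[i - 1]))
    return q

# Illustrative λ(H): a saturating curve reaching λs = 816e-6, loosely
# mimicking the d = 25 nm film; H0 is a hypothetical knee field.
lam_s, H0 = 816e-6, 2000.0
H = [i * 100.0 for i in range(0, 71)]            # 0 ... 7000 Oe
lam = [lam_s * math.tanh(h / H0) for h in H]

q = piezomagnetic_coefficient(H, lam)
print(f"max q ~ {max(q):.2e} per Oe, near zero field")
```

For such a saturating curve, q peaks at low field and falls toward zero as λ saturates, which is the qualitative behavior the review describes when comparing the films at low and high field.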
It is shown that the degree of magnetization anisotropy of the films is significantly affected by the particle size: both the in-plane and out-of-plane saturation magnetizations change, as well as the coercivity, with variation of the particle size. For the film with a particle size of 30 nm, the degree of magnetic anisotropy is the largest, the difference between in-plane and out-of-plane saturation magnetization is the maximum, and the easy axis is out-of-plane. The particle size dependence of the magnetic anisotropy for the present films should be correlated with the exchange coupling effects between the nanoparticles in the film [66]. The film with d = 30 nm may show the highest degree of magnetic anisotropy because the exchange coupling distance is twice the domain wall width (RFe ~ 15 nm) for magnetic nanoparticles [67]. Since the film with d = 30 nm has higher magnetic anisotropy than the other films, a far higher magnetic field is needed to rotate the spins into the applied field direction. Therefore, the magnetostrictive coefficient of this film is lower than that of the other films at low magnetic field, but its saturation magnetostriction is the highest, which can be ascribed to the higher energy of the anisotropic exchange interaction [68].

Figure 4 Magnetic hysteresis loops for the Tb-Fe nanostructured films assembled from clusters (a) with 35 nm, (b) with 30 nm, and (c) with 25 nm in diameter.

### 3.2. Enhanced Ferromagnetism of BiFeO3 Films Assembled with Clusters

BiFeO3 (BFO) is one of the most outstanding single-phase lead-free multiferroics due to its high ferroelectric Curie (TFE ~ 1103 K) [69] and Néel (TN ~ 643 K) temperatures [70]. However, for BFO materials, antiferromagnetism and a superimposed incommensurate cycloid spin structure with a periodicity of 62 nm along the [110]h axis cancel the macroscopic magnetization at room temperature, which restricts its applications [71].
Some investigations show that weak ferromagnetism is observed in limited-dimension materials such as nanowires and nanoparticles due to the partial destruction of the spiral periodicity [72–74], which demonstrates a possible way to enhance ferromagnetism in single-phase multiferroics. Thus, as building blocks of nanomaterials with controllable size, clusters become candidates for assembling multiferroics. Therefore, using the LECBD technique, we prepared well-defined BiFeO3 nanostructured films assembled from zero-dimensional clusters, and the films were then annealed at 600°C. As expected, the ferromagnetism of the as-prepared BiFeO3 films is enhanced [38].

Figure 5 gives the morphologies of the cluster-assembled BiFeO3 nanostructured films before annealing. It can be seen that the films are assembled from clusters, which are nearly spherical and densely packed to form uniformly continuous films, whereas each individual cluster is still clearly distinguishable. The population versus size distribution reveals that the average size of the nanoparticles is ~22 nm for the as-deposited films and ~25.5 nm for the annealed films; the increase in cluster size is attributed to improved crystallization during the annealing process.

Figure 5 (a) Typical SEM image of as-deposited BFO nanostructured films and (b) the films after annealing; the inset is the graph of population versus size distribution of the clusters.

Figure 6 presents the XRD patterns of the typical as-deposited and annealed nanostructured films assembled from clusters. They show that both films are polycrystalline and all of the observed diffraction peaks can be indexed to a perovskite structure.
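Such indexing rests on Bragg's law, λ = 2d sin θ. As an illustration, the sketch below computes the interplanar spacing of the reflection near 2θ = 32.6° discussed next, assuming Cu Kα radiation (λ = 1.5406 Å), which the text does not explicitly state.

```python
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (assumed Cu Kα source)

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Interplanar spacing from Bragg's law: lambda = 2 d sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Peak near 2θ = 32.6°, where the (104)/(110) reflections of BFO overlap
d = d_spacing(32.6)
print(f"d ~ {d:.3f} angstrom")
```

The resulting spacing of roughly 2.74 Å is consistent with the strongest perovskite reflections of BFO, which is why the splitting (or lack of it) of this peak is diagnostic of the rhombohedral versus tetragonal/orthorhombic symmetry.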
The as-deposited BFO nanostructured films, like common films prepared by other methods, belong to the rhombohedral structure with space group R3c (No. 161), whereas the annealed BFO films transform to a coexistence of tetragonal and orthorhombic symmetry, since the (104) and (110) diffraction peaks are not obviously split, as observed in the expanded view of the XRD pattern around 2θ = 32.6° in the inset of Figure 6. At the same time, the lattice constant of the cluster-assembled BiFeO3 films is a = 5.491 Å, smaller than those of films prepared by other methods. This suggests that there exists a crystal distortion in the cluster-assembled BFO films, giving rise to a transition from the rhombohedral structure to a tetragonal one [75–77], which means the crystal structure changes from a high symmetry state to a low symmetry state compared to bulk BFO materials. The crystal distortion is due to the size effect of the clusters with their small characteristic size, which partially destroys the long-range cycloid spin structure with a periodicity of 62 nm in the rhombohedral R3c structure. It is this crystal distortion of the as-prepared films that brings about the enhancement in magnetization.

Figure 6 The XRD patterns before annealing and after annealing; the inset is the expanded view of the diffraction peak around 2θ = 32.6°.

Figure 7 shows the magnetic hysteresis loops for the cluster-assembled BFO nanostructured films measured at 5 K and 300 K. As can be seen, obvious ferromagnetism is observed for the cluster-assembled BFO nanostructured films not only at 5 K but also at room temperature. In particular, the saturation magnetization of the BFO films at room temperature reaches 108 emu/cc, which is comparable with that of the same films at 5 K, 125 emu/cc.
More importantly, a large magnetization of 81 emu/cc is obtained at a magnetic field of 3000 Oe at room temperature, which is a larger response than that of common films prepared by other methods [78–80].

Figure 7 The magnetization dependence on the magnetic field of the typical cluster-assembled BFO nanostructured films (a) at a low temperature of 5 K and (b) at 300 K; the inset is the expanded view of the magnetic hysteresis.

Such enhanced room temperature ferromagnetism is attributed to the fact that the average size of the BFO clusters is much smaller than the 62 nm period of the long-range cycloid order along the [110]h axis, so that the periodicity of the spin cycloid is broken [81]. Antiferromagnetic materials can be considered as the combination of one sublattice with spins along one direction and another with spins along the opposite direction. If no spin canting is considered, the spins of these two sublattices compensate each other, so the net magnetization inside the material is zero [82]. However, the long-range antiferromagnetic order is frequently interrupted at the cluster surfaces, which forms uncompensated surface spins. For the cluster-assembled BFO films with an average size of 25.5 nm, the uncompensated surface spins become very significant due to the very large surface-to-volume ratio of the clusters, and these uncompensated spins enhance the contribution to the nanoparticle's overall magnetization. Besides, the structural distortion and change of lattice parameter due to the size effect in the cluster-assembled nanostructured films [83] lead to the release of the latent magnetization locked within the cycloid. As a result, the ferromagnetism of the nanostructured films is significantly enhanced.

## 4. Multiferroic Film Heterostructure Assembled with Clusters

It is well known that composite films combining piezoelectric and magnetostrictive materials can achieve a stronger magnetoelectric effect than single-phase materials [84] through the magnetic-mechanical-electric coupling product interaction via stress mediation. Some composite films combining perovskite ferroelectric oxides (e.g., Pb(Zr0.52Ti0.48)O3 (PZT), BaTiO3) with ferromagnetic oxides (e.g., CoFe2O4, La0.67Sr0.33MnO3) [18, 21, 85, 86] did not acquire a strong magnetoelectric effect due to the low magnetostriction of the ferromagnetic oxides. Since we had prepared giant magnetostrictive R-Fe films using low energy cluster beam deposition, it is possible to prepare a well-defined microstructured thin-film multiferroic heterostructure consisting of R-Fe alloy and ferroelectric oxide. With the ferroelectric oxide as the substrate, the degree of interfacial reaction or diffusion between the Tb-Fe alloy and the ferroelectric oxide is greatly suppressed owing to the low temperature and energy during the LECBD process. Thus a well-defined microstructure of the thin-film heterostructure as well as a strong magnetoelectric effect can be obtained.

### 4.1. Tb-Fe/PZT Thin-Film Heterostructure

The Tb-Fe nanocluster beam was deposited onto the surface of the PZT film through the open holes of a mask by the LECBD process. After deposition, without taking off the mask, a Pt electrode layer was deposited on the Tb-Fe dots via pulsed laser deposition. Figure 8 presents the surface SEM image of the Tb-Fe layer in the heterostructure. It shows that the Tb-Fe layer is compactly assembled from regular spherical nanoclusters, which are distributed uniformly and adjacent to each other. The structure of the thin-film heterostructure is sketched in Inset (a) of Figure 8, and Inset (b) of Figure 8 shows the cross-sectional SEM image of the thin-film heterostructure.
One observes that the interface between the Tb-Fe and PZT layers is clear and no transition layer is observed, which benefits from the LECBD process. During this process, the phase formation of the Tb-Fe nanoclusters (or nanoparticles) is achieved in the condensation chamber at high temperature, while the deposition of the Tb-Fe nanocluster beam onto the substrate is achieved in another high vacuum chamber at low energy and low temperature (e.g., room temperature). The two processes are independent of each other, so it is easy to understand that there is no reaction between the Tb-Fe and PZT layers.

Figure 8 The surface SEM image of the Tb-Fe layer in the thin-film heterostructure. Inset (a) is a sketch of the heterostructure, and Inset (b) is the typical cross-sectional SEM image of the heterostructure.

The absence of interfacial damage between the Tb-Fe and PZT layers makes good ferroelectric and ferromagnetic properties feasible. Figure 9 gives the polarization versus electric field hysteresis loops and magnetic hysteresis loops for the Tb-Fe/PZT thin-film heterostructure. Well-defined ferroelectric loops are observed. The saturation polarization and remanent polarization of the Tb-Fe/PZT thin-film heterostructure show a very slight decrease compared with the pure PZT film. Such a slight decrease in the ferroelectric properties of the heterostructure should be attributed to the increase of oxygen vacancy concentration in the PZT layer, which hinders the mobility of domain walls to a certain degree and further leads to the decrease in polarization [87]. The inset of Figure 9(a) shows that the leakage current density in the heterostructure is quite low, only ~1.5 × 10⁻⁴ A/cm² even under a high electric field of 30 MV/m.
Despite this, we found that the leakage current density in the heterostructure was still higher than that of the pure PZT film, which indicates an increase of the free carrier density in the PZT layer of the heterostructure [88].

Figure 9 (a) Polarization versus electric field hysteresis (P-E) loops for the thin-film heterostructure; the inset is the variation of leakage current density with the applied electric field. (b) The field-dependent magnetization (M-H) curves for the thin-film heterostructure.

Besides, the heterostructure exhibits well-defined magnetic hysteresis loops. Both the in-plane and out-of-plane coercive fields are the same, only Hc ~ 60 Oe, much lower than that of the bulk Tb-Fe alloy, while the in-plane and out-of-plane saturation magnetizations are ~38 emu/cm³ and ~47 emu/cm³, respectively. We notice that the magnetization character of the heterostructure is almost comparable to that of the pure nanostructured Tb-Fe film prepared by the LECBD process. Since the magnetoelectric effect in a two-phase composite mainly originates from the interfacial stress transfer between the magnetostrictive and ferroelectric phases, high magnetostriction and good ferroelectricity are beneficial to the magnetoelectric coupling, so a strong magnetoelectric coupling can be expected in the thin-film heterostructure.

Figure 10 plots the magnetic bias Hbias dependence of the induced voltage increment ΔVME at a given ac magnetic field frequency f = 1.0 kHz. The thin-film heterostructure exhibits strong magnetoelectric coupling: the calculated maximum increment of the magnetoelectric voltage coefficient is as high as ~140 mV/cm·Oe, larger than that of the reported all-oxide ferroelectric-ferromagnetic composite films [16–18]. The strong magnetoelectric effect in the Tb-Fe/PZT thin-film heterostructure evidently benefits from the unique LECBD process.
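For orientation, the magnetoelectric voltage coefficient is conventionally the induced voltage per unit ferroelectric thickness per unit ac magnetic field, αE = ΔV/(t·δH). The sketch below converts a voltage increment into mV/cm·Oe; the thickness and ac drive amplitude are hypothetical placeholders (the review does not list them), chosen so the result lands near the reported ~140 mV/cm·Oe.

```python
def me_voltage_coefficient(delta_v_volts, thickness_cm, h_ac_oe):
    """Magnetoelectric voltage coefficient alpha_E = dV / (t * dH),
    returned in mV/(cm*Oe)."""
    return (delta_v_volts * 1e3) / (thickness_cm * h_ac_oe)

# Hypothetical measurement values (NOT taken from the review):
delta_v = 14e-6        # induced voltage increment, 14 uV
thickness = 1e-4       # ferroelectric layer thickness, 1 um expressed in cm
h_ac = 1.0             # ac magnetic field amplitude, 1 Oe

alpha = me_voltage_coefficient(delta_v, thickness, h_ac)
print(f"alpha_E ~ {alpha:.0f} mV/(cm*Oe)")
```

Because the coefficient is normalized by the ferroelectric thickness, microvolt-level outputs from micrometer-thick layers can still correspond to coefficients of order 10²; mV/cm·Oe, which is how values from different laminates are compared.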
Based on this process, not only can the interfacial reaction be avoided to the maximum degree, but also both the ferroelectric and magnetostrictive properties of PZT and Tb-Fe are well maintained.

Figure 10 The Hbias dependence of the induced magnetoelectric voltage increment ΔVME at a given ac magnetic field frequency f = 1.0 kHz for the thin-film heterostructure. The inset is the Hbias dependence of the piezomagnetic coefficient for the pure Tb-Fe nanostructured film prepared by the LECBD process.

The inset of Figure 10 shows the Hbias dependence of the piezomagnetic coefficient q (= δλ/δHbias) for the pure Tb-Fe nanostructured film prepared by the LECBD process. Both ΔVME in the heterostructure and q in the Tb-Fe film show a similar trend with Hbias, which indicates that the magnetoelectric coupling in the heterostructure is dominated by the magnetic-mechanical-electric transformation through stress-mediated transfer.

### 4.2. Sm-Fe/PVDF Thin-Film Heterostructure

For Si-based magnetoelectric composite films, the coupling efficiency between the ferroelectric and ferromagnetic phases is suppressed by the stress clamping effect of the hard substrate. Therefore a flexible polyvinylidene fluoride (PVDF) film may be used instead of the hard substrate owing to its small Young's modulus, so that the magnetoelectric coupling between the ferroelectric and ferromagnetic phases is almost unaffected. Moreover, the piezoelectric voltage constant (g31) of PVDF film is an order of magnitude higher than that of ordinary PZT film, which allows it to generate a larger voltage output under a small stress. This indicates that PVDF film is suitable to act as the piezoelectric phase in a magnetoelectric thin-film heterostructure.

The flexible PVDF/Sm-Fe heterostructural film was prepared by depositing a Sm-Fe nanocluster beam onto the PVDF film at room temperature using the LECBD technique.
Though the PVDF polymer substrate is very easily damaged, such damage is avoided during the LECBD process owing to its quite low energy and temperature. Figure 11 shows the cross-sectional SEM image of the Sm-Fe/PVDF film. It can clearly be observed that the interface between the PVDF film and the Sm-Fe layer is clean and no evident transition layer appears, indicating that the PVDF film is not damaged during the LECBD process. The well-defined heterostructure makes it possible to generate a strong magnetoelectric effect.

Figure 11 The cross-sectional SEM image of the Sm-Fe/PVDF thin-film heterostructure.

The Sm-Fe film exhibits a strong negative magnetostrictive effect with a saturation value of ~750 × 10⁻⁶ at a magnetic field of ~7.0 kOe, as shown in Figure 12. The inset of Figure 12 shows that the Sm-Fe/PVDF film exhibits distinct magnetic anisotropy with an in-plane magnetic easy axis, which makes the magnetoelectric coupling in the Sm-Fe/PVDF film more efficient under an in-plane magnetic field.

Figure 12 Magnetostriction λ dependence of the Sm-Fe film on magnetic field H. The inset shows the magnetic hysteresis loops for the Sm-Fe/PVDF thin-film heterostructure measured at room temperature.

For the thin-film heterostructure, a large magnetoelectric voltage output can be obtained. Figure 13 gives the magnetoelectric voltage output increment ΔVME as a function of Hbias for the Sm-Fe/PVDF film. The film exhibits a large voltage output under an external magnetic bias: the ΔVME value increases with increasing Hbias, reaches a maximum of ΔVME ~ 210 μV at Hbias = 2.3 kOe, and then drops.
Compared with previous investigations, the magnetoelectric voltage output of the present Sm-Fe/PVDF film is remarkably large, almost two orders of magnitude higher than that of a typical all-oxide PZT/CoFe2O4/PZT film deposited on a hard wafer [89].

Figure 13 Induced voltage increment ΔVME as a function of magnetic bias Hbias.

Therefore, by using the flexible PVDF polymer film as the substrate, the substrate clamping effect on the magnetoelectric coupling of the heterostructural film is completely eliminated. The heterostructural film exhibits a large magnetoelectric voltage output, which is mainly attributed to the large piezoelectric voltage constant of the piezoelectric PVDF layer, the high magnetic anisotropy with an in-plane magnetic easy axis, and the giant negative magnetostriction of the ferromagnetic Sm-Fe layer.

## 5. Conclusions

In conclusion, single-phase multiferroic films and compound heterostructured multiferroic films assembled from clusters were prepared by low energy cluster beam deposition.
The multiferroic properties of the thin films can be controlled or improved by tuning the size of the clusters, and the structure of the thin-film heterostructure is not destroyed owing to the low temperature and energy during the LECBD process. The LECBD technique provides an ideal avenue for preparing multiferroic nanostructures and facilitates their application in NEMS devices.

---

*Source: 101528-2015-03-19.xml*
# Advances in Multiferroic Nanomaterials Assembled with Clusters

**Authors:** Shifeng Zhao

**Journal:** Journal of Nanomaterials (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/101528
---

## Abstract

As an entirely new class of multifunctional materials, multiferroics have attracted a great deal of attention. With the rapidly developing micro- and nano-electro-mechanical systems (MEMS/NEMS), new kinds of micro- and nanodevices and functionalities have aroused extensive research activity in the area of multiferroics. As an ideal building block for assembling nanostructures, a cluster exhibits particular physical properties related to its size at the nanoscale, which is efficient in controlling the multiferroic properties of nanomaterials. This review focuses on our recent advances in multiferroic nanomaterials assembled with clusters. In particular, single-phase multiferroic films and compound heterostructured multiferroic films assembled with clusters are introduced in detail. This technique presents a new and efficient method to produce nanostructured multiferroic materials for their potential application in NEMS devices.

---

## Body

## 1. Introduction

Multiferroics have attracted increasing attention due to the simultaneous coexistence of ferromagnetic, ferroelectric, or ferroelastic ordering [1, 2]. Many researchers have focused on the magnetoelectric effect, driven by the prospect of controlling polarization by a magnetic field and magnetization by an electric field [3], which opens up an entirely new perspective on magnetic/ferroelectric data storage media, spin-based devices (spintronics), magnetocapacitive devices, magnetic sensors, nonvolatile memories, random access memory, and so forth [4–8]. Since its discovery a century ago, ferroelectricity has been linked to the ancient phenomenon of magnetism.
Attempts to combine the dipole and spin orders in one system started in the 1960s with Cr2O3 single crystals [9, 10], and other single-phase multiferroics, including boracites (Ni3B7O13I, Cr3B7O13Cl) [10], fluorides (BaMF4, M = Mn, Fe, Co, Ni) [11, 12], magnetite Fe3O4 [13], (Y/Yb)MnO3 [14], and BiFeO3 [15], were identified in the following decades. However, such a combination in these multiferroics has proven unexpectedly difficult.

Moreover, by growing composite films combining piezoelectric and magnetostrictive materials, a strong magnetoelectric coupling effect can be achieved through the product property. Much work has been done to prepare composite films by combining perovskite ferroelectric oxides (e.g., Pb(Zr0.52Ti0.48)O3 (PZT), BaTiO3) with ferromagnetic oxides (e.g., CoFe2O4, La0.67Sr0.33MnO3) [16–21]; however, due to the low magnetostriction of these ferromagnetic oxides and Ni metal, the reported magnetoelectric effects in these composite films are generally not strong.

As is well known, rare earth iron alloys (R-Fe, R = rare earth element) possess giant magnetostriction, an order of magnitude greater than that of the ferromagnetic oxides [22]. Previous investigations have shown that the magnetoelectric effect in a bulk laminate consisting of R-Fe alloy and ferroelectric oxide (e.g., Tb0.30Dy0.70Fe2 (Terfenol-D)/PZT), with a magnetoelectric coupling coefficient αE of ~4680 mV/cm·Oe, is much larger than that of the all-oxide laminates (e.g., CoFe2O4/PZT, whose αE is ~60 mV/cm·Oe) [23–25]. However, for applications in micro-electro-mechanical system (MEMS) devices such as microtransducers, microactuators, and microsensors [26], well-defined microstructures together with tunable properties are necessary for multiferroics.
Therefore, for a laminated composite film (i.e., a thin-film heterostructure), it could be expected that the magnetoelectric effect would be enhanced significantly if an R-Fe alloy is used as the magnetostrictive layer. Though giant magnetostrictive films have been prepared by some conventional film preparation means [27–30], since the phase-formation temperature of R-Fe alloys is very high (the substrate is generally heated above 500°C), serious oxygen diffusion from the PZT oxide to the Tb-Fe alloy is unavoidable. As a result, both the magnetostriction of the Tb-Fe alloy and the piezoelectricity of the PZT are seriously suppressed. Moreover, the serious oxygen diffusion would also generate a new interface layer, which further significantly decreases the magnetoelectric coupling efficiency [31]. Therefore, a process with low temperature and low energy is necessary during deposition. Fortunately, we have developed an effective preparation method, namely, low energy cluster beam deposition (LECBD), to prepare nanostructured magnetic, giant magnetostrictive, single phase multiferroic, and well-defined microstructured multiferroic heterostructured films [32–38]. This paper aims to review the breakthroughs in multiferroic nanostructures assembled with clusters. ## 2. Low Energy Cluster Beam Deposition As a very important building block of nanomaterials, nanoclusters are aggregates of atoms or molecules of nanometric size, containing a number of constituent particles ranging from 10 to 10^6 [39–41]. All beam experiments on clusters require a cluster source, in which the clusters are produced. A variety of sources are available, including the arc cluster ion source [42], laser vaporization cluster source [43, 44], gas aggregation source [45, 46], seeded supersonic nozzle source [47], ion sputtering source [48, 49], and liquid metal ion source [50, 51]. Based on these, we developed the low energy cluster beam deposition method.
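The "low energy" in LECBD refers to the deposition kinetic energy per atom, which must stay far below typical molecular binding energies (~eV scale) for clusters to soft-land without fragmenting. A back-of-envelope sketch of this conversion; the velocities and the Fe atomic mass used here are illustrative assumptions, not measured values from this work:

```python
# Soft-landing sanity check: kinetic energy per atom E = (1/2) m v^2,
# expressed in meV/atom. Mass and velocities are illustrative choices.
AMU = 1.66054e-27   # atomic mass unit, kg
EV = 1.60218e-19    # electron volt, J

def energy_per_atom_mev(v_ms: float, mass_amu: float = 56.0) -> float:
    """meV per atom for an atom of the given mass moving at v_ms (m/s);
    56 amu (Fe-like) is an assumed representative mass."""
    return 0.5 * mass_amu * AMU * v_ms ** 2 / EV * 1e3

for v in (100.0, 300.0, 500.0):
    print(f"v = {v:5.0f} m/s  ->  {energy_per_atom_mev(v):6.1f} meV/atom")
```

At a few hundred m/s the energy per atom stays in the tens of meV, consistent with the soft-landing regime described below.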
A magnetron-sputtering-gas-aggregation (MSGA) cluster source is used to produce the cluster beam for assembling multiferroic films. The growth and deposition process of clusters by gas aggregation is shown in Figure 1. A direct current (DC) pulse power supply was used as the sputtering power. As the sputtering gas, one stream of argon gas with 99.9% purity was introduced through a ring structure close to the surface of the target. Another stream of argon was fed as a buffer gas through a gas inlet near the magnetron discharge head. The cluster condensation and growth region was cooled with liquid nitrogen. A highly oriented cluster beam with a divergence angle of less than one degree was formed by differential pumping controlled by the skimmers. During deposition, the average velocity of each cluster perpendicular to the substrate is quite low (corresponding to a kinetic energy of less than 50 meV/atom), and the velocity parallel to the substrate corresponds to several meV/atom; both are much lower than the molecular binding energy. The clusters are deposited on the substrate in a soft-landing manner and accumulate randomly without coalescing with each other.Figure 1 The sketched diagram of the growth and deposition process of clusters.Based on this technique, various cluster-assembled nanostructured films, such as metal and oxide films, have been prepared, which show peculiar properties different from those of films prepared by common methods [52–55]. Since the size, mass, and assembling manner of the clusters can be precisely tuned by changing the working gas flow, controlling the length of the condensation region, changing the buffer gas, and so forth, it is possible to control the microstructures and properties of the cluster-assembled nanostructured films, which makes this technique an ideal candidate for the fabrication of single phase or heterostructured films. ## 3. Magnetic Films Assembled with Clusters ### 3.1.
Giant Magnetostrictive R-Fe Films The cubic Laves phase R-Fe (R = rare earth element such as Tb, Dy, Sm) compounds are well known as giant magnetostrictive materials at room temperature, which can be widely used in actuators, transducers, dampers, and so forth [56, 57]. With the demands of the rapidly developing nano-electro-mechanical systems (NEMS), much work has been done, and various methods such as ion plating [27], ion beam sputtering [28], flash evaporation [29], magnetron sputtering [58], and molecular beam epitaxy [59, 60] have been developed for the preparation of R-Fe films. However, compared with the bulk materials, the saturation magnetostriction of current R-Fe films is much lower while the magnetic driving field is still higher [58], which limits their further use. Fortunately, based on the LECBD technique, a well-defined Tb-Fe nanostructured film has been obtained, which exhibits excellent magnetostriction, much higher than that of common Tb-Fe films [34]. Since the size, mass, and assembling manner of the R-Fe clusters can be precisely tuned by changing the working gas flow, controlling the length of the condensation region, changing the buffer gas, and so forth, it is possible to control the microstructures and magnetic properties of the cluster-assembled R-Fe nanostructured films, which makes this approach an ideal candidate for the fabrication of magnetic NEMS devices.A DC magnetron-sputtering-gas-aggregation (MSGA) cluster source was used to produce the Tb-Fe cluster beam, which was deposited on a Si(100) substrate at room temperature to form the nanofilm. In order to tune the cluster size, the length of the condensing growth region L was set to 80 mm, 95 mm, and 110 mm, respectively. Figure 2 presents the SEM images and size distributions of typical Tb-Fe films prepared under different lengths of the condensing growth region L.
One can clearly observe two facts: (i) all films are assembled from spherical nanoparticles, which are distributed uniformly and monodispersely in the film; (ii) with increasing condensing growth region length, the size of the nanoparticles increases, lying in the ranges of 31~36 nm for L = 110 mm, 28~33 nm for L = 95 mm, and 23~28 nm for L = 80 mm, respectively. Meanwhile, we note that the particle size distribution is almost lognormal. The formation of such nanoparticle films is attributed to the unique LECBD preparation process, and the length of the cluster growth region significantly influences the size of the Tb-Fe clusters in the films. In fact, the growth of the clusters as well as their size distribution is mainly determined by the cluster residence time and its distribution [61, 62]. With increasing length of the condensing growth region, the cluster residence time in that region increases, and thus the collisions among metal ions, Tb-Fe vapor, carrier gas, and free clusters become more sufficient, leading to larger clusters.Figure 2 SEM images of the typical as-deposited Tb-Fe films and the graph of population versus size distribution of the nanoparticles at different growth region lengths: (a) 110 mm, (b) 95 mm, and (c) 80 mm ([36]).It has been confirmed that all the present nanoparticle-assembled Tb-Fe films exhibit higher magnetostriction compared to common nonnanostructured films prepared by other methods [63–65]. We also observe that the magnetostrictive behavior and piezomagnetic coefficient evidently vary with the average size of the nanoparticles. Figure 3 gives the dependence of magnetostriction and piezomagnetic coefficient on the magnetic field for the films with various particle sizes.
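The near-lognormal particle size distributions noted above are characteristic of residence-time-controlled cluster growth. A minimal sketch of how such a distribution can be generated and checked numerically; the median of ~30 nm and the geometric spread are assumptions chosen to loosely mimic the 28~33 nm range, not values fitted to the measured histograms:

```python
import math
import random

random.seed(42)

# Assumed illustrative parameters: median diameter ~30 nm, narrow
# geometric standard deviation ~1.1 (not fitted to the paper's data).
mu = math.log(30.0)      # log of the median diameter (nm)
sigma = math.log(1.1)    # log of the geometric standard deviation

diameters = [random.lognormvariate(mu, sigma) for _ in range(10_000)]

# For a lognormal, ln(d) is normally distributed: the sample mean of
# ln(d) recovers mu and its sample std recovers sigma.
logs = [math.log(d) for d in diameters]
mean_log = sum(logs) / len(logs)
std_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / len(logs))

print(f"median ~ {math.exp(mean_log):.1f} nm, geo. std ~ {math.exp(std_log):.2f}")
```

Fitting a straight line to the measured histogram on a log-diameter axis is the usual way to confirm the lognormal form.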
With increasing particle size, the saturation magnetostriction λs and the saturation magnetic field Hm change; for example, λs ~ 816 × 10−6 and Hm ~ 6.0 kOe for d = 25 nm, λs ~ 1029 × 10−6 and Hm ~ 7.0 kOe for d = 30 nm, and λs ~ 746 × 10−6 and Hm ~ 5.0 kOe for d = 35 nm. Obviously, the film with d = 30 nm has the highest saturation magnetostriction and piezomagnetic coefficient. However, this is not the case at low magnetic field: the film with d = 35 nm possesses higher magnetostriction and piezomagnetic coefficient at low magnetic field.Figure 3 The dependence of magnetostriction and piezomagnetic coefficient for the Tb-Fe nanostructured films on various particle sizes.This suggests that the dependence of the magnetostriction and piezomagnetic coefficient on the particle size can be attributed to differences in the magnetization characteristics of these films. Figure 4 presents the field-dependent magnetization at room temperature for the films with particle sizes of 25 nm, 30 nm, and 35 nm. It is shown that the degree of magnetization anisotropy of a film is significantly affected by the particle size. Both the in-plane and out-of-plane saturation magnetization change, as well as the coercivity, with variation of the particle size. For the film with a particle size of 30 nm, the degree of magnetic anisotropy is a maximum and the difference between in-plane and out-of-plane saturation magnetization is largest; the easy axis is out-of-plane. The particle size dependence of the magnetic anisotropy for the present films should be correlated with the exchange coupling effects between the nanoparticles in the film [66]. The film with d = 30 nm may show the highest degree of magnetic anisotropy because the exchange coupling distance is twice the domain wall width (RFe ~ 15 nm) for magnetic nanoparticles [67]. Therefore, the film with d = 30 nm has higher magnetic anisotropy than the other films.
A far higher magnetic field is needed to rotate the spins into the applied field direction. Therefore, the magnetostrictive coefficient of this film is lower than that of the other films at low magnetic field, but its saturation magnetostriction is the highest, which could be ascribed to the higher energy of the anisotropic exchange interaction [68].Figure 4 Magnetic hysteresis loops for the Tb-Fe nanostructured films assembled from clusters (a) 35 nm, (b) 30 nm, and (c) 25 nm in diameter. (a) (b) (c) ### 3.2. Enhanced Ferromagnetism of BiFeO3 Films Assembled with Clusters BiFeO3 (BFO) is one of the most outstanding single-phase lead-free multiferroics due to its high ferroelectric Curie (TFE ~ 1103 K) [69] and Neel (TN ~ 643 K) temperatures [70]. However, in BFO materials, the antiferromagnetism and a superimposed incommensurate cycloid spin structure with a periodicity of 62 nm along the [110] axis cancel the macroscopic magnetization at room temperature, which restricts its applications [71]. Some investigations show that weak ferromagnetism is observed in limited-dimension materials such as nanowires and nanoparticles due to partial destruction of the spiral periodicity [72–74], which suggests a possible way to enhance ferromagnetism in single-phase multiferroics. Thus, as a size-controllable building block of nanomaterials, clusters become a candidate for assembling multiferroics. Therefore, using the LECBD technique, we have prepared well-defined BiFeO3 nanostructured films assembled from zero-dimensional clusters, after which the films were annealed at 600°C. As expected, the ferromagnetism of the as-prepared BiFeO3 films is enhanced [38].Figure 5 gives the morphologies of the cluster-assembled BiFeO3 nanostructured films before annealing.
It can be seen that the films are assembled from clusters, which are nearly spherical and densely packed to form uniformly continuous films, while each individual cluster is still clearly distinguishable. The population versus size graphs reveal that the average size of the nanoparticles is ~22 nm for the as-deposited films and ~25.5 nm for the annealed films; the increase in cluster size derives from the improvement of crystallization during the annealing process.Figure 5 (a) Typical SEM image of as-deposited BFO nanostructured films, (b) the films after annealing; the inset is the graph of population versus size distribution of the clusters.Figure 6 presents the XRD patterns of the typical as-deposited and annealed nanostructured films assembled from clusters. They show that both films are polycrystalline and all of the observed diffraction peaks can be indexed to a perovskite structure. While the as-deposited BFO nanostructured films, like common films prepared by other methods, belong to the rhombohedral structure with space group R-3c (161), the annealed BFO films transform to a coexistence of tetragonal and orthorhombic symmetry, as the (104) and (110) diffraction peaks are not obviously split, which is observed in the expanded view of the XRD pattern around 2θ = 32.6° in the inset of Figure 6. At the same time, the lattice constant of the cluster-assembled BiFeO3 films is a = 5.491 Å, smaller than those of films prepared by other methods. This suggests that there exists a crystal distortion in the cluster-assembled BFO films, giving rise to a transition from the rhombohedral structure to a tetragonal one [75–77], which means the crystal structure changes from a high symmetry state to a lower symmetry state compared to bulk BFO materials.
Thus the crystal distortion is due to the size effect of the clusters with their smaller characteristic size, which partially destroys the long-range cycloid spin structure with a periodicity of 62 nm in the rhombohedral structure with space group R-3c (161). It is this crystal distortion of the as-prepared films that brings about the enhancement in magnetization.Figure 6 The XRD patterns before and after annealing; the inset is the expanded view of the diffraction peak around 2θ = 32.6°.Figure 7 shows the magnetic hysteresis loops for the cluster-assembled BFO nanostructured films measured at 5 K and 300 K. As can be seen, obvious ferromagnetism is observed for the cluster-assembled BFO nanostructured films not only at 5 K but also at room temperature. In particular, the saturation magnetization of the BFO films at room temperature reaches 108 emu/cc, which is comparable with that of the same films at 5 K, 125 emu/cc. More importantly, a large magnetization of 81 emu/cc is obtained at a magnetic field of 3000 Oe at room temperature, a larger response than that of common films prepared by other methods [78–80].Figure 7 The magnetization dependence on the magnetic field of the typical cluster-assembled BFO nanostructured films (a) at a low temperature of 5 K and (b) at 300 K. The inset is the expanded view of the magnetic hysteresis. (a) (b)Such enhanced room temperature ferromagnetism is attributed to the fact that the average size of the BFO clusters is much less than the 62 nm long-range cycloid period along the [110] axis, so that the periodicity of the spin cycloid is broken [81]. Antiferromagnetic materials can be considered as the combination of one sublattice with spins along one direction and another with spins along the opposite direction. If no spin canting is considered, the spins of these two sublattices compensate each other so that the net magnetization inside the material is zero [82].
However, the long-range antiferromagnetic order is frequently interrupted at the cluster surfaces, which forms uncompensated surface spins. For the cluster-assembled BFO films with an average size of 25.5 nm, the uncompensated surface spins become very significant due to the very large surface-to-volume ratio of the clusters. The uncompensated spins at the surface enhance the contribution to the nanoparticle's overall magnetization. Besides, structural distortion and the change of lattice parameter due to the size effect in the cluster-assembled nanostructured films [83] lead to the release of the latent magnetization locked within the cycloid. Thus the ferromagnetism of the nanostructured films is significantly enhanced. ## 4.
Multiferroic Film Heterostructure Assembled with Clusters It is well known that composite films combining piezoelectric and magnetostrictive materials can yield a stronger magnetoelectric effect than single phase materials [84] through the magnetic-mechanical-electric coupling product interaction via stress mediation. Some composite films combining perovskite ferroelectric oxides (e.g., Pb(Zr0.52Ti0.48)O3 (PZT), BaTiO3) with ferromagnetic oxides (e.g., CoFe2O4, La0.67Sr0.33MnO3) [18, 21, 85, 86] did not achieve a strong magnetoelectric effect due to the low magnetostriction of the ferromagnetic oxides. Since we had prepared giant magnetostrictive R-Fe films using low energy cluster beam deposition, it became possible to prepare a well-defined microstructured thin-film multiferroic heterostructure consisting of an R-Fe alloy and a ferroelectric oxide. When the substrate is a ferroelectric oxide, the degree of interfacial reaction or diffusion between the Tb-Fe alloy and the ferroelectric oxide is greatly suppressed due to the low temperature and energy of the LECBD process. Thus a well-defined thin-film heterostructure microstructure as well as a strong magnetoelectric effect can be obtained. ### 4.1. Tb-Fe/PZT Thin-Film Heterostructure A Tb-Fe nanocluster beam was deposited onto the surface of the PZT film through the open holes of a mask by the LECBD process. After deposition, without removing the mask, a Pt electrode layer was deposited on the Tb-Fe dots via pulsed laser deposition. Figure 8 presents the surface SEM image of the Tb-Fe layer in the heterostructure. It shows that the Tb-Fe layer is compactly assembled from regular spherical nanoclusters, which are distributed uniformly and adjacent to each other. The structure of the thin-film heterostructure is sketched in Inset (a) of Figure 8. Inset (b) of Figure 8 shows the cross-sectional SEM image of the thin-film heterostructure.
One observes that the interface between the Tb-Fe and PZT layers is clean and no transition layer is observed, a benefit of the LECBD process. During this process, the phase formation of the Tb-Fe nanoclusters (or nanoparticles) is achieved in the high-temperature condensation chamber, while the deposition of the Tb-Fe nanocluster beam onto the substrate is achieved in another high-vacuum chamber at low energy and low temperature (e.g., room temperature). The two processes are independent of each other, so it is easy to understand that no reaction occurs between the Tb-Fe and PZT layers.Figure 8 The surface SEM image of the Tb-Fe layer in the thin-film heterostructure. Inset (a) is a sketch of the heterostructure, and Inset (b) is the typical cross-sectional SEM image of the heterostructure.The absence of degradation between the Tb-Fe and PZT layers makes good ferroelectric and ferromagnetic properties feasible. Figure 9 gives the polarization versus electric field hysteresis loops and magnetic hysteresis loops for the Tb-Fe/PZT thin-film heterostructure. Well-defined ferroelectric loops are observed. The saturation polarization and remanent polarization of the Tb-Fe/PZT thin-film heterostructure show a very slight decrease compared with the pure PZT film. Such a slight decrease in the ferroelectric properties of the heterostructure should be attributed to the increase of the oxygen vacancy concentration in the PZT layer, which hinders the mobility of domain walls to a certain degree and further leads to the decrease in polarization [87]. The inset of Figure 9(a) shows that the leakage current density in the heterostructure is quite low, being only ~1.5 × 10−4 A/cm2 even under a high electric field of 30 MV/m.
In spite of this, we found that the leakage current density in the heterostructure was still higher than that of the pure PZT film, which indicates an increase of the free carrier density in the PZT layer of the heterostructure [88].Figure 9 (a) Polarization versus electric field hysteresis (P-E) loops for the thin-film heterostructure. The inset is the variation of leakage current density with the applied electric field. (b) The field-dependent magnetization (M-H) curves for the thin-film heterostructure. (a) (b)Besides, the heterostructure exhibits well-defined magnetic hysteresis loops. Both the in-plane and out-of-plane coercive fields are the same, only Hc ~ 60 Oe, much lower than that of the bulk Tb-Fe alloy, while the in-plane and out-of-plane saturation magnetizations are ~38 emu/cm3 and ~47 emu/cm3, respectively. We notice that the magnetization characteristics of the heterostructure are almost comparable to those of the pure nanostructured Tb-Fe film prepared by the LECBD process. Since the magnetoelectric effect in a two-phase composite mainly originates from the interfacial stress transfer between the magnetostrictive and ferroelectric phases, high magnetostriction and good ferroelectric properties are beneficial to the magnetoelectric coupling; a strong magnetoelectric coupling could thus be obtained in the thin-film heterostructure.Figure 10 plots the magnetic bias Hbias dependence of the induced voltage increment ΔVME at a given ac magnetic field frequency f = 1.0 kHz. It shows that the thin-film heterostructure exhibits strong magnetoelectric coupling. The calculated maximum increment of the magnetoelectric voltage coefficient is as high as ~140 mV/cm·Oe, larger than that of the reported all-oxide ferroelectric-ferromagnetic composite films [16–18]. Such a strong magnetoelectric effect in the Tb-Fe/PZT thin-film heterostructure evidently benefits from the unique LECBD process.
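The magnetoelectric voltage coefficient quoted in units of mV/cm·Oe is conventionally defined as αE = V/(t·Hac), the induced voltage normalized by the ferroelectric layer thickness and the ac drive field amplitude. A minimal sketch of this unit bookkeeping; the voltage, thickness, and field values below are illustrative assumptions, not the paper's raw measurement data:

```python
# Magnetoelectric voltage coefficient: alpha_E = V / (t * H_ac),
# reported in mV/(cm*Oe). All input numbers are illustrative only.
def alpha_e(v_volts: float, t_cm: float, h_ac_oe: float) -> float:
    """Return alpha_E in mV/(cm*Oe) for an induced voltage v_volts
    across a layer of thickness t_cm under an ac field of h_ac_oe."""
    return (v_volts * 1e3) / (t_cm * h_ac_oe)

# e.g. an assumed 14 uV induced across an assumed 1-um (1e-4 cm)
# ferroelectric layer at H_ac = 1 Oe:
print(f"{alpha_e(14e-6, 1e-4, 1.0):.0f} mV/cm*Oe")
```

The point of the normalization is that coefficients from films of very different thicknesses become directly comparable.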
With this process, not only is the interfacial reaction avoided to the greatest possible degree, but the ferroelectric and magnetostrictive properties of the PZT and Tb-Fe layers are also well maintained.

Figure 10: The Hbias dependence of the induced magnetoelectric voltage increment ΔVME at a fixed ac magnetic field frequency f = 1.0 kHz for the thin-film heterostructure. The inset shows the Hbias dependence of the piezomagnetic coefficient for the pure Tb-Fe nanostructured film prepared by the LECBD process.

The inset of Figure 10 shows the Hbias dependence of the piezomagnetic coefficient q (= δλ/δHbias) for the pure Tb-Fe nanostructured film prepared by the LECBD process. ΔVME in the heterostructure and q in the Tb-Fe film follow a similar trend with Hbias, which indicates that the magnetoelectric coupling in the heterostructure is dominated by magnetic-mechanical-electric conversion via stress-mediated transfer.

### 4.2. Sm-Fe/PVDF Thin-Film Heterostructure

For Si-based magnetoelectric composite films, the coupling efficiency between the ferroelectric and ferromagnetic phases is depressed by the stress clamping effect of the hard substrate. A flexible polyvinylidene fluoride (PVDF) film may therefore be used instead of a hard substrate because of its small Young's modulus, so that the magnetoelectric coupling between the ferroelectric and ferromagnetic phases is almost unaffected. Moreover, the piezoelectric voltage constant (g31) of PVDF film is an order of magnitude higher than that of an ordinary PZT film, which allows it to generate a larger voltage output under a small stress. PVDF film is therefore well suited to act as the piezoelectric phase in a magnetoelectric thin-film heterostructure. The flexible PVDF/Sm-Fe heterostructural film was prepared by depositing a Sm-Fe nanocluster beam onto the PVDF film at room temperature using the LECBD technique.
Although the PVDF polymer substrate is easily damaged, such damage is avoided during the LECBD process because of its quite low energy and low temperature. Figure 11 shows the cross-sectional SEM image of the Sm-Fe/PVDF film. The interface between the PVDF film and the Sm-Fe layer is sharp and no evident transition layer appears, indicating that the PVDF film is not damaged during the LECBD process. The well-defined heterostructure makes it possible to generate a strong magnetoelectric effect.

Figure 11: The cross-sectional SEM image of the Sm-Fe/PVDF thin-film heterostructure.

The Sm-Fe film exhibits a strong negative magnetostrictive effect with a saturation value of ~750 × 10−6 at a magnetic field of ~7.0 kOe, as shown in Figure 12. The inset of Figure 12 shows that the Sm-Fe/PVDF film exhibits distinct magnetic anisotropy with an in-plane magnetic easy axis, which makes the magnetoelectric coupling in the Sm-Fe/PVDF film more efficient under an in-plane magnetic field.

Figure 12: Magnetostriction λ of the Sm-Fe film as a function of magnetic field H. The inset shows the magnetic hysteresis loops of the Sm-Fe/PVDF thin-film heterostructure measured at room temperature.

A large magnetoelectric voltage output can be obtained from the thin-film heterostructure. Figure 13 plots the magnetoelectric voltage output increment ΔVME as a function of Hbias for the Sm-Fe/PVDF film. The film exhibits a large voltage output under an external magnetic bias: ΔVME increases with increasing Hbias, reaches a maximum of ~210 μV at Hbias = 2.3 kOe, and then drops.
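The piezomagnetic coefficient q = δλ/δHbias used for the Tb-Fe film (inset of Figure 10) can equally be extracted from λ(H) data like that in Figure 12 by numerical differentiation. A minimal sketch with made-up sample points (the text only reports the saturation value, ~750 × 10−6 near 7 kOe, with a negative sign of magnetostriction):

```python
# Illustrative lambda(H) samples for a negative-magnetostriction film;
# the values are invented, shaped only to saturate near ~-750e-6.
H   = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]                 # kOe
lam = [-1e-6 * v for v in (0, 150, 350, 520, 630, 700, 735, 750)]

def piezomagnetic_q(H, lam):
    """q = d(lambda)/dH estimated by central differences at the
    interior points; returns a list of (H, q) pairs (q in 1/kOe)."""
    return [(H[i], (lam[i + 1] - lam[i - 1]) / (H[i + 1] - H[i - 1]))
            for i in range(1, len(H) - 1)]

q = piezomagnetic_q(H, lam)

# The bias field where |q| peaks is where the stress-mediated
# magnetoelectric coupling is expected to be strongest.
H_peak = max(q, key=lambda p: abs(p[1]))[0]
print(H_peak)  # -> 2.0
```

This is why ΔVME and q follow a similar trend with Hbias: both peak where the magnetostriction curve is steepest, not where it saturates.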
Compared with previous investigations, the magnetoelectric voltage output of the present Sm-Fe/PVDF film is remarkably large, almost two orders of magnitude higher than that of a typical all-oxide PZT/CoFe2O4/PZT film deposited on a hard wafer [89].

Figure 13: Induced voltage increment ΔVME as a function of magnetic bias Hbias.

Therefore, by using the flexible PVDF polymer film as the substrate, the substrate clamping effect on the magnetoelectric coupling of the heterostructural film is completely eliminated. The heterostructural film exhibits a large magnetoelectric voltage output, which is mainly attributed to the large piezoelectric voltage constant of the piezoelectric PVDF layer, the high magnetic anisotropy with an in-plane magnetic easy axis, and the giant negative magnetostriction of the ferromagnetic Sm-Fe layer.

## 5. Conclusions

In conclusion, single-phase multiferroic films and compound heterostructured multiferroic films assembled from clusters were prepared by low-energy cluster beam deposition.
The multiferroic properties of the thin films can be controlled or improved by tuning the cluster size, and the structure of the thin-film heterostructure is not destroyed thanks to the low temperature and low energy of the LECBD process. The LECBD technique thus provides an ideal avenue for preparing multiferroic nanostructures and facilitates their application in NEMS devices.

---
*Source: 101528-2015-03-19.xml*
2015
# Contribution of Amaranthus cruentus and Solanum macrocarpon Leaves Flour to Nutrient Intake and Effect on Nutritional Status of Rural School Children in Volta Region, Ghana

**Authors:** Godfred Egbi; Mary Glover-Amengor; Margaret M. Tohouenou; Francis Zotor
**Journal:** Journal of Nutrition and Metabolism (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1015280

---

## Abstract

Background. Plant-based foods are staple diets and the main micronutrient sources of most rural Ghanaian households. The objective of this study was to determine the effect of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour on the micronutrient intake and nutritional status of rural Ghanaian school children. Method. This study was a randomized controlled trial consisting of baseline data collection and a three-month nutrition intervention feeding program. Two groups of 53 children, aged 4–9 years, involved in the Ghana School Feeding Program took part in the study. An experimental group consumed stews and soup containing Amaranthus cruentus and Solanum macrocarpon leaves flour (ACSMLVF); the control group consumed stews and soup without ACSMLVF. Haemoglobin and serum vitamin A concentrations were determined, dietary and anthropometric data were collected and analysed, and participants were screened for malaria parasitaemia and hookworm. Results. Anaemia was present in 41.5% and 37.3% of the intervention and control groups, respectively, at baseline, and in 28.3% and 53.3%, respectively, at the end of the study; this difference was significant (p=0.024). Serum vitamin A concentration was low in 66.0% and 64.7% of the intervention and control groups, respectively, at baseline and in 20.8% and 23.4% at the end of the study. The mean iron, zinc, and provitamin A (beta-carotene) intakes of the intervention group were 14.2 ± 7.1 mg, 5.7 ± 2.1 mg, and 214.5 ± 22.6 μg, respectively, at baseline.
Those of the control group were 13.7 ± 6.1 mg, 5.4 ± 2.1 mg, and 210.6 ± 20.1 μg, respectively. At the end of the study, the mean intakes of iron, zinc, and beta-carotene for the intervention group were 24.1 ± 10.9 mg, 13.8 ± 8.2 mg, and 694.2 ± 33.1 μg, respectively; those for the control group were 14.8 ± 6.2 mg, 5.9 ± 2.3 mg, and 418.4 ± 34.7 μg, respectively. Conclusion. Consumption of ACSMLVF stews and soup increased iron, zinc, and beta-carotene intakes, and anaemia prevalence was lower in the intervention group at the end of the study.

---

## Body

## 1. Introduction

Micronutrient deficiencies are a major public health problem among vulnerable groups such as infants, children, and pregnant and lactating women in the developing world [1]. For good health, a balanced diet consisting of starchy foods as well as protein- and micronutrient-rich foods is essential [2], since such a diet is negatively associated with the risk of chronic diseases [3]. Vegetables and fruits are rich sources of minerals, vitamins, and phytonutrients in sub-Saharan Africa [4, 5]. Plant-based foods are staple diets and the main source of micronutrients for most rural Ghanaian households. Vegetables and fruits are abundant and widely consumed during the rainy season, but their availability, accessibility, affordability, and consumption become a challenge during the dry season in poor households in Ghana and other West African countries. Consequently, poor household members are unlikely to meet their Recommended Dietary Allowances [6] for micronutrients in the dry season. Because of their high water activity, green leafy vegetables are perishable. Heat-processing methods (sun or solar drying and mechanical drying) reduce their moisture content, which makes it feasible to process them into flour so that they can be preserved for use in the dry season.
Analysis of the vitamin composition of the dry leaves of Amaranthus cruentus (known locally as Aleefu) and Solanum macrocarpon (known locally as Gboma) indicates appreciable levels of beta-carotene (63.0 mg/100 g and 64.0 mg/100 g) and ascorbic acid (1.5 mg/100 g and 2.4 mg/100 g), respectively [7, 8]. In Ghana, the leaves of these plants are used in soup and stew preparations, much as spinach is used in other parts of the world. This study sought to determine the contribution that consumption of stews and soups made from Amaranthus cruentus and Solanum macrocarpon leaves flour makes to total nutrient intake, and its effect on the nutritional status of rural Ghanaian school children.

## 2. Materials and Methods

### 2.1. Study Design

The study was a pretest and posttest design consisting of baseline data collection and a three-month nutrition intervention feeding program, with an intervention group and a control group. The children (of a community basic school complex) were randomized into the two groups by simple random sampling. The intervention group received school lunch stews and soups prepared with Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF); the control group had no ACSMLVF in their stews and soups. Feeding took place every weekday for a period of three months.

### 2.2. Study Area

The study was carried out in the Adaklu District of the Volta Region, whose capital town is Adaklu-Waya [9]. The district shares boundaries with Ho Municipality to the west, Agotime-Ziope District to the south, Akatsi District to the north, and Ketu North District to the east, and has an area of 709 km2 [9]. The 2010 population and housing census [9] put the population of Adaklu District at 36,391 (49.0% male). The majority of the people (78%) engaged in peasant farming. The food staples grown were maize and cassava; other crops included cowpea, groundnut, tomatoes, garden eggs, pepper, and okra.
Sheep, goat, and poultry rearing occurred to a small extent at the household level.

### 2.3. Study Population

The study population consisted of pupils 4–9 years of age attending the Adaklu-Kodzobi basic school. The two groups were similar in age, physiological makeup, and Recommended Dietary Allowances for micronutrients [6]. At the time of the study, the subjects were also participating in the Ghana School Feeding Program (GSFP), under which school lunch is provided.

### 2.4. Sample Selection Criteria

A child qualified to enroll in the study if he or she was aged four to nine years; attended school regularly; had written parental consent and gave assent to participate; was enrolled in the Ghana Government School Feeding Program; and had no self- or guardian-reported history of allergy to vegetables or vegetable flours.

### 2.5. Sample Size Determination and Sampling

This was a pilot study. A sample size of 90 participants was estimated from a 0.6 g/dl mean change in haemoglobin concentration with a standard deviation of 1.2 (unpublished data), assuming a standardized effect size of 0.4, a power of 80%, and a significance level of 0.05. Allowing for an expected drop-out rate of 18%, the sample size was increased to 53 participants per group. A sampling frame was constructed of all the children who met the study criteria, and the children were randomized by simple random sampling.

### 2.6. Chemical Analysis

The various stews and soups were analysed for their moisture, ash, protein, fat, iron, and zinc contents using standard protocols [10]. Beta-carotene content of the stews and soups was determined by the HPLC procedure of Rodriguez-Amaya and Kimura [11]. Determinations were made in triplicate, with the means taken as true values.

### 2.7. Vegetable Flour Preparation and Feeding of Participants

Fresh Solanum macrocarpon and Amaranthus cruentus leaves were purchased from urban market gardeners in Accra.
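The sample-size reasoning of Section 2.5 is not spelled out formula-by-formula in the paper. For comparison only, the textbook normal-approximation calculation for detecting a standardized pre/post change, plus one consistent reading of the dropout adjustment, can be sketched as follows (an illustration under stated assumptions, not the authors' actual computation):

```python
from math import ceil
from statistics import NormalDist  # stdlib, Python >= 3.8

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Textbook normal-approximation sample size for detecting a
    standardized change of `effect_size` within a group (two-sided)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.4))  # -> 50, close to the study's 45 per group

# One reading of the dropout adjustment: 45 per group (90 total)
# inflated by the expected 18% drop-out rate gives the reported 53.
print(round(45 * 1.18))  # -> 53
```

The small gap between 50 and 45 suggests the authors used a slightly different effect size or rounding; the stated inputs (d = 0.4, 80% power, alpha = 0.05) do not exactly reproduce 45 with the standard formula.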
The fresh leaves were soaked in 1% sodium chloride solution, rinsed under running tap water, and then dried in a mechanical air oven at 45°C for 10 hours. The dried leaves were ground separately into fine flour with a blender (Philips HR 2113, Netherlands), and the two flours were mixed into a uniform composite blend containing equal proportions (1:1 wt/wt) of each vegetable with a cake mixer (Philips Viva Mixer HR 1565, Netherlands). Two hundred and thirty grams of the Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF) was packaged in an airtight plastic (polythene) bag and stored in dark cardboard. Bean stew, tomato stew, or groundnut soup was prepared separately with tomato paste (200 g), ground pepper (15 g), onion paste (35 g), smoked anchovy powder (100 g), and iodized salt (40 g), with or without the ACSMLVF composite flour. Groundnut oil (400 g) was used to prepare the bean and tomato stews, and groundnut paste (400 g) was used to prepare the groundnut soup. For the intervention group, 230 g of ACSMLVF was added to the stew or soup; the food for the control group contained no ACSMLVF. Each participant in the intervention group was given 50 g of tomato stew, 100 g of bean stew, or 95 g of groundnut soup fortified with ACSMLVF: the fortified tomato stew was served three times a week at lunch break, and the fortified bean stew and groundnut soup were each served once a week. Each participant in the control group was given the same quantities of the tomato and bean stews and groundnut soup, at the same frequencies, without ACSMLVF. The stews and soup were eaten with 230 g of boiled plain rice (twice a week), Ga-kenkey (twice a week), or banku (once a week). Ga-kenkey and banku are fermented, cooked corn-dough meals.
To prevent trading of the served meals, participants in the intervention and control groups were identified by green and yellow buttons, respectively, on their breast pockets. Each group was served at a different location on the school premises and was supervised by teachers and research assistants to maintain compliance. The study was carried out from mid-September to mid-December, after the major rainy season.

### 2.8. Data and Biological Sample Collection and Analyses

Semistructured questionnaires were used to collect data on the characteristics of the participants. Dietary data were captured through a food frequency questionnaire, 24-hour recall, and direct weighing of the foods consumed. At every meal time, leftover food from each participant was weighed and subtracted from the amount served to obtain each participant's actual food intake. Food measures such as cups, spoons, ladles, and balls were provided to help respondents estimate the portion sizes of the foods consumed; portion sizes were then estimated and recorded. Prices of purchased foods were also recorded. To improve the accuracy of weight estimates for purchased foods, samples of those foods were weighed with an electronic scale (Soehnle Plateau 56377, LEIFHEIT AG, Nassau, Germany). The total amounts of specific foods consumed were computed manually. The amounts of the various nutrients (protein, fat, iron, zinc, vitamin C, folate, and vitamin B12) were calculated from 100 g portion sizes using Microsoft Office Excel 2007 and food composition tables [12, 13]. The weight of each participant was measured in triplicate to the nearest 0.1 kg with a Seca digital weighing scale (Seca 803), according to the WHO 2006 protocol [14]; the average of the triplicate measurements was taken as the actual weight.
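The intake computation described in this section (per-100 g food composition values scaled by the grams actually eaten, i.e. amount served minus weighed leftovers) can be sketched as follows. The per-100 g values below are hypothetical stand-ins, not actual entries from the food composition tables [12, 13]:

```python
# Hypothetical per-100 g nutrient values standing in for the food
# composition tables [12, 13]; they are NOT the tables' real entries.
FOOD_TABLE = {
    "boiled rice": {"iron_mg": 0.2, "zinc_mg": 0.5, "protein_g": 2.7},
    "bean stew":   {"iron_mg": 2.5, "zinc_mg": 1.1, "protein_g": 7.8},
}

def intake(consumed_g):
    """Total nutrient intake for one meal: per-100 g table values
    scaled by the grams actually eaten (served minus leftovers)."""
    totals = {}
    for food, grams in consumed_g.items():
        for nutrient, per_100g in FOOD_TABLE[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + per_100g * grams / 100.0
    return totals

# One lunch: 230 g boiled rice served with 100 g bean stew, no leftovers.
print(intake({"boiled rice": 230, "bean stew": 100}))
```

Summing such per-meal totals over a day, and comparing against the RDIs, is the manual computation the study describes doing in Excel.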
The height of each participant was likewise measured in triplicate to the nearest 0.1 cm with a stadiometer in the standing position, in accordance with standard procedures [14], and the average of the three readings was recorded as the true value. Weight and height were measured at baseline and at the end of the study. A phlebotomist from the Parasitology Department of the Noguchi Memorial Institute for Medical Research (NMIMR), University of Ghana, Legon, collected 2 ml of venous blood by venepuncture from every participant early in the morning before breakfast. Blood samples were collected into serum-separating tubes coated with gel (a clot activator) and transported on ice packs to the Nutrition Department's laboratory, NMIMR. At the laboratory, each blood sample was centrifuged at 2,000 rpm for 15 min, and duplicate serum aliquots were prepared in Eppendorf tubes and stored at −80°C until analysis. The venous blood samples were also used immediately in the field to determine haemoglobin concentrations with a haemoglobinometer (Hb 201+, HemoCue AB, Angelholm, Sweden); the average of two readings was taken as the actual haemoglobin concentration of each participant. The serum sample from each participant was used to determine his or her serum vitamin A concentration by HPLC, according to the established protocol of NMIMR [15]. The standard Giemsa staining technique [16] was used to screen prepared blood-film slides for malaria parasite infection. Stool samples were collected and screened for hookworm by the Kato–Katz technique [17].

### 2.9. Data Analysis

Nutrient intakes were calculated using the Ghana Food Composition Table, the Ring database, and the West African FAO database. All measured variables were checked for normality; haemoglobin, serum retinol, weight, and height values were normally distributed.
Anaemia was defined as haemoglobin (Hb) concentration < 10.9 g/dl, mild—Hb 10.0–10.9 g/dl, moderate—Hb 7.0–9.9 g/dl, and severe—Hb < 7.0 g/dl for children below the age of five years [18]. For children 5–9 years of age, it was defined as mild anaemia—Hb 11.0–11.4 g/dl, moderate—Hb 8.0–10.9 g/dl, and severe—Hb < 8.0 g/dl–<80 [18]. Summary values were presented as means plus or minus standard deviations and percentages. Student’s t-test was used to compare mean values of control and intervention groups for any significant difference. Within-group individual differences were determined between baseline and end of the study periods using the paired t-test. Binary logistic regression was used to establish association of anaemia with other factors. One-way analysis of variance (ANOVA) was used to compare the mean nutrient composition of the various stews and soups. Significance was set at p≤0.05. The amount of nutrients consumed by the study participants was compared to the Recommended Daily Intakes (RDIs) of the various nutrients. ## 2.1. Study Design The study was a pretest and posttest design. It consisted of baseline data collection and a three-month nutrition intervention feeding program. The study consisted of an intervention and a control group. The children (of a community basic school complex) were randomized by simple random sampling into the two groups. The intervention group received school lunch stews and soups prepared withAmaranthus cruentus and Solanum macrocarpon flour (ACSMVLF). The control group had no ACSMVLF in their stews and soups. The feeding was done on every weekday and lasted for a period of three months. ## 2.2. Study Area The study was carried out in the Adaklu District of the Volta Region. The capital town is Adaklu-Waya [9]. The district shares boundary at the west with Ho Municipality, south with Agotime-Ziope District, north with Akatsi District, and east with Ketu North District. The district has an area of 709 km square [9]. 
A population and housing census carried out in 2010 [9] showed that the population of Adaklu District was 36,391 (49.0% male) [9]. The majority of the people (78%) engaged in peasant farming. The food staples grown were maize and cassava. Other crops included cowpea, groundnut, tomatoes, garden eggs, pepper, and okra. To a small extent at the household levels were sheep, goat, and poultry rearing. ## 2.3. Study Population The study population consisted of pupils 4–9 years of age attending the Adaklu-Kodzobi basic school. The two groups were similar in age, physiological makeup, and the Recommended Dietary Allowances (RDIs) for micronutrients [6]. At the time of the study, the subjects were also participating in the Ghana School Feeding Program (GSFP) where school lunch is provided. ## 2.4. Sample Selection Criteria A child qualified to enroll in the study if he or she was aged four to nine years; attending school regularly; if parents gave written consent and their children provided assent to participate; enrolled in the Ghana Government School Feeding Program; and had no history of allergy to consumption of vegetables or vegetable flours as self-reported (or guardian). ## 2.5. Sample Size Determination and Sampling This was a pilot study based on a sample size of 90 participants estimated with 0.6 g/dl mean change in haemoglobin concentration with a standard deviation of 1.2 (unpublished) and assumed a standardized effect size of 0.4 and a power of 80% with a significance level of 0.05. Assuming an expected drop-out rate of 18%, the sample size was increased, to give 53 participants per group. A sample frame was constructed of all the children who met the study criteria. The children were randomized by simple random sampling. ## 2.6. Chemical Analysis The various stews and soups were analysed for their moisture, ash, protein, fat, iron, and zinc contents using standard protocols [10]. 
Beta-carotene content of the stews and soups was determined by the HPLC procedure of Rodriguez-Amaya and Kimura [11]. Triplicate determinations were made using the means as true values. ## 2.7. Vegetable Flour Preparation and Feeding of Participants FreshSolanum macrocarpon and Amaranthus cruentus leaves were purchased from urban market gardeners in Accra. The fresh leaves were soaked in 1% sodium chloride solution, rinsed with running tap water, and later dried in a mechanical air oven at 45°C for 10 hours. The dried leaves were ground separately into fine flour using a blender (Philips HR 2113, Netherlands). Both flours were mixed into a uniform blend of composite flour containing equal proportions (1 : 1 wt/wt) of each kind of vegetable with a cake mixer (Philips Viva Mixer HR 1565, Netherlands). Two hundred and thirty grams of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF) was packaged in an airtight plastic (polythene) bag and stored in dark cardboard. Bean stew, tomato stew, or groundnut soup was prepared separately with tomato paste (200 g), ground pepper (15 g), onion paste (35 g), smoked anchovies powder (100 g), and iodized salt (40 g) with or without the composite flour of the (ACSMLF). Groundnut oil (400 g) was used to prepare either the beans or tomato stew. Groundnut paste (400 g) was used to prepare groundnut soup. An amount of 230 g of ACSMLVF was added to the stew or soup for the intervention group. The food for the control group did not contain any ACSMLVF. Each participant in the intervention group was given 50 g of tomato and 100 g of bean stews and 95 g of groundnut soup fortified with ACSMLVF. The ACSMLVF-fortified tomato stew was served thrice a week at lunch break. The ACSMLVF-fortified bean stew was served once a week, and the ACSMLVF-fortified groundnut soup was also served once a week. 
Each participant in the control group was given the same quantities of the tomato and bean stews and groundnut soup, at the same frequency, but without ACSMLVF. The stews and soup were eaten with 230 g of boiled plain rice (twice a week), Ga-kenkey (twice a week), or banku (once a week). Ga-kenkey and banku are fermented, cooked corn dough meals. To prevent trading of the served meals, participants from the intervention and control groups were identified with green and yellow buttons, respectively, on their breast pockets. Each group was served food at a different location on the school premises, supervised by teachers and research assistants to maintain compliance. The study was carried out from mid-September to mid-December, after the major rainy season.

## 2.8. Data and Biological Sample Collection and Analyses

Semistructured questionnaires were used to collect data on the characteristics of the participants. Dietary data were captured through a food frequency questionnaire, 24-hour recall, and direct weighing of foods consumed. At every meal time, leftover food from each participant was weighed and subtracted from the amount served to establish the actual food intake of each participant. Food measures such as cups, spoons, ladles, and balls were provided to assist respondents in assessing the portion sizes of the food consumed. Portion sizes of the foods consumed were then estimated and recorded. Prices of purchased foods were also recorded. To enhance the accuracy of weight estimation, samples of purchased foods were weighed with an electronic scale (Soehnle Plateau 56377, LEIFHEIT AG, Nassau, Germany). The total amounts of specific foods consumed were computed manually.
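The intake arithmetic described above, subtracting weighed leftovers from the amount served and then scaling per-100 g food-composition values by the weight actually eaten, can be sketched as follows. The dictionary keys and composition numbers here are illustrative placeholders (the iron value echoes the control tomato stew in Table 3), not the study's calculation sheet:

```python
# Per-100 g composition values; keys and numbers are illustrative placeholders.
COMPOSITION = {
    "tomato_stew": {"iron_mg": 3.0, "zinc_mg": 2.4, "protein_g": 4.6},
}

def amount_eaten(served_g: float, leftover_g: float) -> float:
    """Actual intake is the amount served minus the weighed leftovers."""
    return max(served_g - leftover_g, 0.0)

def nutrient_intake(food: str, eaten_g: float) -> dict:
    """Scale per-100 g composition values by the weight actually consumed."""
    per100 = COMPOSITION[food]
    return {k: v * eaten_g / 100.0 for k, v in per100.items()}

eaten = amount_eaten(served_g=50.0, leftover_g=10.0)  # 40 g consumed
print(nutrient_intake("tomato_stew", eaten))  # iron: 3.0 * 40 / 100 = 1.2 mg
```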
The amounts of the various nutrients (protein, fat, iron, zinc, vitamin C, folate, and vitamin B12) were calculated per 100 g portion with the help of Microsoft Office Excel 2007 and the food composition tables [12, 13]. The weight of each participant was measured in triplicate to the nearest 0.1 kg with a Seca digital weighing scale (Seca scale 803), according to the WHO 2006 protocol [14]; the average of the triplicate measurements was taken as the actual weight. The height of each participant was measured in triplicate to the nearest 0.1 cm with a stadiometer in a standing position, in accordance with standard procedures [14]; the average of the three readings was recorded as the true value. The weight and height measurements were done at baseline and at the end of the study. A phlebotomist from the Parasitology Department of the Noguchi Memorial Institute for Medical Research (NMIMR), University of Ghana, Legon, collected 2 ml of venous blood by venepuncture from every participant early in the morning before breakfast. Blood samples were collected into serum-separating tubes coated with gel (a clot activator) and transported on ice packs to the Nutrition Department’s laboratory, NMIMR. At the laboratory, each blood sample was centrifuged at 2,000 rpm for 15 min, and duplicate serum aliquots were prepared in Eppendorf tubes and stored at −80°C until analysed. Venous blood was used immediately in the field to determine haemoglobin concentrations with a haemoglobinometer (Hb 201+, HemoCue AB, Ängelholm, Sweden); the average of two readings was taken as the actual haemoglobin concentration of each participant. The serum sample from each participant was used to determine his or her serum vitamin A concentration by an HPLC technique, according to the established protocol of NMIMR [15].
The standard Giemsa staining technique [16] was used to screen prepared blood film slides for malaria parasite infection in the participants. Stool samples were collected and screened for the presence of hookworm by the Kato–Katz technique [17].

## 2.9. Data Analysis

The amounts of nutrients consumed were calculated using the Ghana Food Composition Table, the Ring database, and the West African FAO database. All measured variables were checked for normality. Haemoglobin, serum retinol, weight, and height values were normally distributed. For children below the age of five years, anaemia was defined as a haemoglobin (Hb) concentration below 11.0 g/dl and graded as mild (Hb 10.0–10.9 g/dl), moderate (Hb 7.0–9.9 g/dl), or severe (Hb < 7.0 g/dl) [18]. For children 5–9 years of age, anaemia was graded as mild (Hb 11.0–11.4 g/dl), moderate (Hb 8.0–10.9 g/dl), or severe (Hb < 8.0 g/dl) [18]. Summary values were presented as means ± standard deviations and as percentages. Student’s t-test was used to compare the mean values of the control and intervention groups for any significant difference. Within-group differences between baseline and the end of the study were determined using the paired t-test. Binary logistic regression was used to establish the association of anaemia with other factors. One-way analysis of variance (ANOVA) was used to compare the mean nutrient composition of the various stews and soups. Significance was set at p ≤ 0.05. The amounts of nutrients consumed by the study participants were compared to the Recommended Daily Intakes (RDIs) of the various nutrients.

## 3. Ethical Approval

The Institutional Review Board (IRB) of the Noguchi Memorial Institute for Medical Research, College of Health Sciences, University of Ghana, Legon, gave ethical approval (NMIMR-IRB CPN001/14-15) for the study. Participants and guardians gave written or thumbprint consent to take part in the study.

## 4. Results

### 4.1. Background Characteristics

Fifty-three children were recruited for each group at baseline, but only 51 children recruited for the control group provided biological samples; two children declined for religious reasons. The study participants had similar background characteristics (Table 1). There was no significant difference between the two groups in gender, mean age, or possession of a backyard garden to provide vegetables for the household. Just under half (47.1%) were male. Almost all (99.0%) of the parents (or guardians) were in the 20–60-year age range. Most (94.2%) of the parents (guardians) earned an income of ≤ GHȻ500 ($178.60) per month (Table 1). Only 4.8% of the parents (guardians) had no formal education. The majority (68.3%) of the study participants’ households held food taboos of cultural significance, for example, an aversion to eating snails and reptiles. The level of stunting (HAZ score < −2 standard deviations) at baseline was 13.2% in the intervention group and 17.6% in the control group (Table 1). Wasting (WAZ score < −2 standard deviations) was present in 11.3% of the participants in the intervention group and 15.7% of the control group. Thinness (BMI-for-age z score < −2 standard deviations) at baseline was seen in 5.7% and 3.9% of the intervention and control groups, respectively. The level of anaemia in the children at baseline was 39.4% overall (Table 1): 41.5% in the intervention group and 37.3% in the control group. The level of moderate anaemia was 9.4% in the intervention group and 3.9% in the control group. The overall prevalence of low vitamin A concentration (<20 μg/dl) [19] was 65.4%; more specifically, it was mild (15–20 μg/dl) in 26.9%, moderate (10–14.9 μg/dl) in 21.2%, and severe (<10 μg/dl) in 17.3%. The prevalence of malaria parasitaemia was 35.6% at baseline. No participant had hookworm infection.
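The age-dependent anaemia grading defined in Section 2.9 can be written as a small classification rule. This is a sketch assuming the band edges stated there; how ties exactly at the cut-points are handled is an assumption:

```python
def anaemia_grade(hb_g_dl: float, age_years: float) -> str:
    """Grade anaemia from haemoglobin (g/dl) using the age-specific bands
    given in Section 2.9 (children under 5 vs. 5-9 years)."""
    if age_years < 5:
        severe, moderate, mild = 7.0, 10.0, 11.0
    else:  # 5-9 years
        severe, moderate, mild = 8.0, 11.0, 11.5
    if hb_g_dl < severe:
        return "severe"
    if hb_g_dl < moderate:
        return "moderate"
    if hb_g_dl < mild:
        return "mild"
    return "none"

print(anaemia_grade(10.5, 4))  # mild (4-year-old, 10.0-10.9 band)
print(anaemia_grade(10.5, 7))  # moderate (5-9 years, 8.0-10.9 band)
```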
Almost every study participant (98.1% in the intervention group and 100.0% in the control group) reported eating three times every day (Table 1). Only 13.2% and 9.8% of participants in the intervention group and the control group, respectively, had a high dietary diversity score, eating diets made from the six food groups (cereals; legumes, nuts, and oils; fruits and vegetables; roots and tubers; meat, poultry, and fish; and fats and oils) available in Ghana. Many of the participants, 71.7% and 76.5% from the intervention and control groups, respectively, had a medium dietary diversity score, eating meals made from four or five food groups.

Table 1. Background characteristics of participants by the study group.

| Characteristics | Intervention (n = 53) | Control (n = 51) | p value |
| --- | --- | --- | --- |
| Mean age (years) | 7.3 ± 1.7 | 6.7 ± 1.8 | 0.081 |
| **Sex (%)** | | | |
| Males | 41.5 | 51.9 | 0.242 |
| Females | 58.5 | 47.1 | 0.176 |
| **Anthropometry** | | | |
| Weight (kg) | 23.0 ± 4.9 | 21.6 ± 3.9 | 0.088 |
| Height (cm) | 117.1 ± 11.3 | 120.6 ± 11.3 | 0.110 |
| WAZ score | −0.731 ± 0.906 | −0.485 ± 0.989 | 0.189 |
| HAZ score | −0.723 ± 1.149 | −0.592 ± 1.17 | 0.567 |
| WHZ score | −0.121 ± 1.362 | −0.095 ± 1.41 | 0.398 |
| BMI-for-age z score | −0.388 ± 0.899 | −0.105 ± 0.908 | 0.113 |
| **Malnutrition (%)** | | | |
| All categories | 34.0 | 41.2 | 0.051 |
| Stunting | 13.2 | 17.6 | 0.530 |
| Wasting | 11.3 | 15.7 | 0.084 |
| Thinness | 5.7 | 3.9 | 0.381 |
| Overweight | 3.8 | 2.0 | 0.681 |
| Obesity | 0.0 | 2.0 | 0.068 |
| **Anaemia (%)** | | | |
| All categories | 41.5 | 37.3 | 0.055 |
| Mild | 32.1 | 33.3 | 0.998 |
| Moderate | 9.4 | 3.9 | 0.051 |
| **Low vitamin A level (%)** | | | |
| All categories | 66.0 | 64.7 | 0.087 |
| Mildly low | 26.4 | 27.5 | 0.995 |
| Moderately low | 24.5 | 17.7 | 0.053 |
| Severely low | 15.1 | 19.6 | 0.087 |
| **Infection status (%)** | | | |
| Malaria parasitaemia | 34.0 | 37.3 | 0.726 |
| **Guardian’s age (years, %)** | | | |
| 20–60 | 100.0 | 98.0 | 0.474 |
| ≥61 | 0.0 | 2.0 | 0.197 |
| **Daily food consumption pattern (%)** | | | |
| Twice | 1.9 | 0.0 | 0.199 |
| ≥ Thrice | 98.1 | 100.0 | 0.344 |
| **Parental monthly income (%)** | | | |
| ≤ $178 (GHȻ499) | 96.2 | 92.2 | 0.371 |
| ≥ $178 (GHȻ500) | 3.8 | 7.8 | 0.237 |
| **Food taboos (%)** | | | |
| Yes | 72.5 | 64.2 | 0.362 |
| No | 27.5 | 35.8 | 0.334 |
| **Parental education (%)** | | | |
| Formal education | 96.2 | 94.1 | 0.712 |
| No formal education | 3.8 | 5.9 | 0.891 |
| **Household backyard garden (%)** | | | |
| Yes | 22.0 | 15.7 | 0.371 |
| No | 78.0 | 84.3 | 0.398 |
| **Dietary diversity score (%)** | | | |
| Low (≤3 food groups) | 15.0 | 13.7 | — |
| Medium (4–5 food groups) | 71.7 | 76.5 | — |
| High (≥6 food groups) | 13.2 | 9.8 | — |

p values obtained by the independent t-test (otherwise the chi-square test) are significant at p < 0.05.

The results indicate that study participants whose parents earned the minimum income (1.0–499.0 cedis per month, equivalent to 1.0–178.6 USD) were about twice as likely to be anaemic (OR: 1.95; CI: 0.22–0.86; p = 0.039) as those whose parents earned at least 1000 cedis a month (Table 2). Participants with low serum retinol concentrations were 1.7 times more likely to have anaemia than those with normal serum retinol levels (OR: 1.68; CI: 0.10–0.99; p = 0.049) (Table 2). The other factors included in the binary logistic model (parental education status, parental marital status, and the nutritional, infection, and dietary intake status of participants) were associated with anaemia but not significantly (Table 2).

Table 2. Factors associated with anaemia in the study participants at baseline.

| Factor | Odds ratio | 95% CI | p value |
| --- | --- | --- | --- |
| **Parental education status** | | | |
| At least secondary education | Reference (1) | | |
| Basic education | 2.37 | 0.27–20.52 | 0.433 |
| No education | 1.57 | 0.47–5.28 | 0.468 |
| **Parental occupation** | | | |
| Formal employment | Reference (1) | | |
| Informal employment | 1.027 | 0.360–2.932 | 0.960 |
| **Parental marital status** | | | |
| Single | Reference (1) | | |
| Married | 2.50 | 0.16–3.02 | 0.511 |
| **Parental monthly income** | | | |
| ≥1000 cedis | Reference (1) | | |
| 500–999 cedis | 1.79 | 0.019–2.40 | 0.210 |
| 1–499 cedis | 1.95 | 0.221–0.86 | 0.039 |
| **Participant’s nutritional status** | | | |
| Normal retinol level (≥20 μg/dl) | Reference (1) | | |
| Low retinol level (<20 μg/dl) | 1.68 | 0.10–0.99 | 0.049 |
| Normal (HAZ > −2 standard deviations) | Reference (1) | | |
| Stunted (HAZ ≤ −2 standard deviations) | 1.59 | 0.13–2.64 | 0.491 |
| **Participant’s infection status** | | | |
| Malaria parasitaemia absent | Reference (1) | | |
| Malaria parasitaemia present | 1.64 | 0.27–9.89 | 0.069 |
| **Dietary intake** | | | |
| Adequate intake (≥DRI) | Reference (1) | | |
| Inadequate iron intake (<DRI) | 2.32 | 0.48–3.63 | 0.052 |
| Inadequate fat intake (<DRI) | 1.03 | 0.99–1.07 | 0.127 |
| Inadequate protein intake (<DRI) | 1.23 | 0.13–8.67 | 0.431 |
| **Gender** | | | |
| Boys | Reference (1) | | |
| Girls | 2.58 | 0.94–7.08 | 0.920 |
| Age | 0.82 | | 0.738 |

p values are significant at p < 0.05.
Binary logistic analysis was performed (Cox & Snell R² = 0.186; Nagelkerke R² = 0.248).

### 4.2. Nutrient Intake

The results from weighed food records showed that the stews and soup fed to the intervention group had a much higher content of protein, fat, ash, iron, and zinc than those of the control group (Table 3). There was no significant difference in protein intake between the study groups at baseline. The mean protein intakes of the intervention and control groups at the end of the study were 38.1 ± 9.1 g and 32.6 ± 7.5 g, respectively.

Table 3. Composition of the stews and soups consumed by study participants (nutrient composition per 100 g as consumed).

| Stew/soup | Group | Moisture (%) | Ash (mg) | Protein (g) | Fat (g) | Iron (mg) | Zinc (mg) | β-Carotene (mg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tomato stew | Control | 70.7 ± 0.2 | 30.1 ± 1.5 | 4.6 ± 0.5 | 9.9 ± 0.5 | 3.0 ± 0.2 | 2.4 ± 0.5 | 2.8 ± 0.6 |
| Tomato + MGLVP stew | Intervention | 68.5 ± 0.4 | 32.3 ± 2.4 | 7.7 ± 0.1 | 10.1 ± 0.7 | 9.7 ± 0.1 | 7.8 ± 0.1 | 6.2 ± 0.5 |
| Bean stew | Control | 63.7 ± 1 | 15.5 ± 1.3 | 8.5 ± 1.7 | 10.0 ± 1.1 | 8.15 ± 1.1 | 4.3 ± 0.1 | 1.1 ± 0.3 |
| Bean + MGLVP stew | Intervention | 62.3 ± 2 | 17.2 ± 1.4 | 15.9 ± 1.4 | 12.1 ± 0.9 | 14.5 ± 1.3 | 8.1 ± 0.7 | 5.9 ± 0.7 |
| Groundnut soup | Control | 77.5 ± 1.2 | 21.3 ± 2.7 | 5.7 ± 0.2 | 15.1 ± 1.0 | 2.8 ± 0.2 | 1.1 ± 0.2 | 1.7 ± 0.3 |
| Groundnut + MGLVP soup | Intervention | 72.2 ± 1.0 | 25.2 ± 1.9 | 10.9 ± 0.6 | 16.4 ± 0.9 | 6.9 ± 0.1 | 5.8 ± 0.2 | 5.1 ± 1.1 |

p values (except that for fat), obtained by one-way ANOVA, are significant between the control and corresponding intervention meals at p < 0.05. Data presented are from laboratory analysis.

The mean iron, zinc, and provitamin A (beta-carotene) intakes of the intervention group were 14.2 ± 7.1 mg, 5.7 ± 2.1 mg, and 214.5 ± 22.6 μg, respectively, at baseline. Those of the control group were 13.7 ± 6.1 mg, 5.4 ± 2.1 mg, and 210.6 ± 20.1 μg, respectively (Table 4). At the end of the study, the mean intakes of iron, zinc, and beta-carotene for the intervention group were 24.1 ± 10.9 mg, 13.8 ± 8.2 mg, and 694.2 ± 33.1 μg, respectively.
The values for the control group were 14.8 ± 6.2 mg, 5.9 ± 2.3 mg, and 418.4 ± 34.7 μg, respectively. The stews and soup made with Amaranthus cruentus and Solanum macrocarpon leaf flour (ACSMLVF) contributed an average of 580.0 μg of beta-carotene to each participant in the intervention group. The amount was estimated from the quantities of the various stews and soups provided (Table 3). The control group also derived an estimated average of 220.0 μg of beta-carotene from their stews and soups. The percentages of participants in the intervention group who met their DRI for iron, zinc, and provitamin A were 83.0%, 89.4%, and 94.3%, respectively, at the end of the study. The corresponding values in the control group were 61.7%, 53.2%, and 60.9%. The changes in mean intakes of iron, zinc, and beta-carotene were 9.9 mg, 8.1 mg, and 479.7 μg for the intervention group and 1.1 mg, 0.5 mg, and 207.8 μg for the control group, respectively. There were no significant differences in the changes in mean intakes of vitamin C, folic acid, and vitamin B12 from baseline to the end of the study in either group (p > 0.05). None of the children met their DRI for folic acid at baseline or at the end of the study (Table 4).

Table 4. Nutrient intake of study participants in comparison with Dietary Reference Intakes (DRIs) at baseline and end of the study.
| Nutrient | Time point | Intervention intake (mean ± SD) | Intervention met DRI, n (%) | Control intake (mean ± SD) | Control met DRI, n (%) | p value (between groups) |
| --- | --- | --- | --- | --- | --- | --- |
| Protein (g) | Baseline | 32.7 ± 8.6 | 36 (67.9) | 32.2 ± 9.5 | 33 (64.7) | 0.793 |
| | End line | 38.1 ± 9.1 | 45 (84.9) | 32.6 ± 9.2 | 31 (66.0) | 0.004∗ |
| | p value | 0.001∗ | 0.003∗ | 0.954 | 0.875 | |
| Iron (mg) | Baseline | 14.2 ± 7.1 | 32 (60.4) | 13.7 ± 6.1 | 32 (62.7) | 0.680 |
| | End line | 24.1 ± 10.9 | 48 (90.6) | 14.8 ± 6.2 | 39 (83.0) | 0.001∗ |
| | p value | 0.001∗ | 0.004∗ | 0.433 | 0.006∗ | |
| Beta-carotene (μg) | Baseline | 215 ± 23 | 23 (43.4) | 211 ± 20 | 21 (41.2) | 0.065 |
| | End line | 694 ± 33 | 50 (94.3) | 418 ± 35 | 28 (60.9) | 0.001∗ |
| | p value | 0.001∗ | 0.001∗ | 0.001∗ | 0.011∗ | |
| Zinc (mg) | Baseline | 5.7 ± 2.1 | 22 (41.5) | 5.4 ± 2.1 | 26 (51.0) | 0.538 |
| | End line | 13.8 ± 8.2 | 45 (89.4) | 5.9 ± 2.3 | 25 (53.2) | 0.001∗ |
| | p value | 0.001∗ | 0.03∗ | 0.185 | 0.231 | |
| Vitamin C (mg) | Baseline | 23.8 ± 3.8 | 11 (20.8) | 23.4 ± 4.7 | 10 (19.6) | 0.673 |
| | End line | 24.2 ± 3.9 | 12 (22.6) | 24.0 ± 4.7 | 9 (19.1) | 0.287 |
| | p value | 0.138 | 0.205 | 0.778 | 0.865 | |
| Folic acid (mg) | Baseline | 30.5 ± 17.0 | 0 (0) | 30.4 ± 12.2 | 0 (0) | 0.971 |
| | End line | 30.7 ± 16.9 | 0 (0) | 30.8 ± 12.3 | 0 (0) | 0.816 |
| | p value | 0.844 | — | 0.839 | — | |

∗p values within groups (across rows) and between groups (rightmost column) are significant at p < 0.05 by the paired and independent t-tests, respectively, otherwise by the chi-square test. The number of study participants at baseline was intervention = 53 and control = 51, and at end line intervention = 53 and control = 47. "Met DRI" is the number (n) and percentage (%) of participants who met the various Recommended Nutrient Intakes. Nutrient intake was assessed by a combination of 24-hour recall and direct food weighing (foods served at lunch break). (DRI source: National Research Council, 1989, Recommended Dietary Allowances, 10th edition, Washington, DC: The National Academies Press, https://doi.org/10.17226/1349.)

### 4.3. Nutritional and Infection Status of Study Participants

At the end of the study, 39.0% of the participants were still anaemic (Table 5). The level of anaemia was 28.3% in the intervention group and 53.3% in the control group (p = 0.024).
The level of mild anaemia was 24.5% and 46.8% in the intervention and control groups, respectively (p = 0.022). The overall prevalence of low vitamin A concentration among the participants at the end of the study was 21.2% (Table 5). No subjects in the intervention group had a moderately or severely low vitamin A concentration at the end of the study; however, these were present in 4.3% and 2.1% of subjects in the control group, respectively (Table 5). At the end of the study, no significant differences were observed between the two groups in the prevalence of stunting, wasting, and thinness, or of overweight and obesity (p > 0.05). There was no significant difference in the prevalence of malaria parasitaemia between the two groups at the end of the study. No hookworm infestation was observed in the participants.

Table 5. Nutritional and infection status of study participants at the end of the study.

| Outcome variable | Intervention (n = 53) | Control (n = 47) | p value |
| --- | --- | --- | --- |
| **Malnutrition (%)** | | | |
| All categories | 32.1 | 38.3 | 0.051 |
| Stunting | 11.3 | 14.9 | 0.063 |
| Wasting | 9.4 | 14.9 | 0.052 |
| Thinness | 5.7 | 4.2 | 0.567 |
| Overweight | 5.7 | 2.1 | 0.074 |
| Obesity | 0.0 | 2.1 | 0.067 |
| **Anaemia (%)** | | | |
| All categories | 28.3 | 53.3 | 0.024 |
| Mild | 24.5 | 46.8 | 0.022 |
| Moderate | 3.8 | 4.3 | 0.791 |
| **Low vitamin A (%)** | | | |
| All categories | 20.8 | 23.4 | 0.059 |
| Mildly low | 20.8 | 17.0 | 0.055 |
| Moderately low | 0.0 | 4.3 | 0.051 |
| Severely low | 0.0 | 2.1 | 0.073 |
| **Infection status (%)** | | | |
| Malaria parasitaemia | 39.6 | 40.4 | 0.935 |

Using Pearson’s chi-square test for categorical variables, statistical significance is set at p < 0.05. The number of study participants at end line was intervention group = 53 and control group = 47.

### 4.4. Dietary Diversity Score of Study Participants

The dietary diversity score (based on baseline data and data collected during the intervention) was calculated from 24-hour dietary recall and food frequency data based on nine food groups [20]. The mean dietary diversity score was 4.3.
The majority of the participants in both study groups had a medium dietary diversity score (consumed food from 4 to 5 food groups). Only 13.2% and 9.8% of the participants from the intervention and control groups, respectively, had a high diversity score (≥6 food groups). The dietary diversity score for children in the intervention group did not differ significantly from the control group (p=0.829). Findings from the dietary diversity tertile showed that the majority of the participants from both groups consumed starchy staples, other vegetables, dark green leafy vegetables, and fish. Fruits and dairy products were scarcely eaten by the study participants (Table 6).Table 6 Food groups consumed by ≥50% of participants by the dietary diversity tertile at baseline and during the intervention study. TertileFood groupsLowest dietary diversity (≤3 food groups)Cereals∗, meat+, poultry+ and fish∗, vegetablesMedium dietary diversity (4–5 food groups)Cereals, meat+, poultry+ and fish∗, vegetables∗ and fruits+, roots and tubers, legumes∗, nuts and seedsHigh dietary diversity (≥6 food groups)Cereals, roots and tubers, meat+, poultry+ and fish∗, vegetables∗ and fruits+, fats+ and oils+, legumes∗, nuts and seeds∗Most consumed food items in a food group. +Most scarcely consumed food items in a food group. ## 4.1. Background Characteristics Fifty-three children were recruited for each group at baseline, but only 51 children recruited for the control group provided biological samples. Two children declined for religious reasons. The study participants had similar background characteristics (Table1). There was no significant difference between the two groups in terms of gender, mean age, and possession of a backyard garden to provide vegetables for the households. Just under half (47.1%) were male. Almost all (99.0%) of the parents (or guardians) were in the 20–60-year age range. 
Most (94.2%) of the parents (guardians) were earning an income of ≤ GHȻ500 ($178.60) per month (Table 1). Only 4.8% of the parents (guardians) did not have formal education. The majority (68.3%) of the study participants’ households held food taboos of cultural significance; for example, an aversion to eating snails and reptiles. The level of stunting (HAZ score < −2 standard deviation) in the intervention group was 13.2% and 17.6% in the control group at baseline (Table 1). Wasting was present in 11.3% of the participants in the intervention group (WAZ score < −2 standard deviation), and 15.7% of the control wasted. Thinness (BMIAZ score < −2 standard deviation) at baseline was seen in 5.7% and 3.9% in the intervention and control groups, respectively. The level of anaemia in the children at baseline was 39.4% (Table 1). It was 41.5% and 37.3% in the intervention group and the control, respectively. The level of moderate anaemia was 9.4% in the intervention and 3.9% in the control group. The overall presence of low vitamin A concentration (<20 μg/dl) [19] was 65.4%; more specifically, it was mild—15–20 μg/dl (26.9%), moderate—10–14.9 μg/dl (21.2%), and severe—<10 μg/dl (17.3%). The presence of malaria parasitaemia was 35.6% at baseline. No participant had hookworm infection. Almost every study participant (98.1% in the intervention group and 100.0% in the control group) reported eating three times every day (Table 1). Only 13.2% and 9.8% of participants in the intervention group and the control, respectively, had a high dietary diversity score, eating diets made from the six food groups (cereals; legumes, nuts, and oils; fruits and vegetables; roots and tubers; meat, poultry, and fish; and fats and oils) available in Ghana. Many of the participants, 71.7% and 76.5% from the intervention and the control groups, had medium dietary diversity score, eating meals made from four or five food groups.Table 1 Background characteristics of participants by the study group. 
CharacteristicsIntervention (n = 53)Control (n = 51)p valueMean age (years)7.3 ± 1.76.7 ± 1.80.081SexMales41.551.90.242Females58.547.10.176AnthropometryWeight (Kg)23.0 ± 4.921.6 ± 3.90.088Height (cm)117.1 ± 11.3120.6 ± 11.30.110WAZ-score−0.731 ± 0.906−0.485 ± .9890.189HAZ-score−0.723 ± 1.149−0.592 ± 1.170.567WHZ-score−0.121 ± 1.362−0.095 ± 1.410.398BMI-for-agez score−0.388 ± 0.899−0.105 ± 0.9080.113MalnutritionAll category34.041.20.051Stunting (%)13.217.60.530Wasting (%)11.315.70.084Thinness (%)5.73.90.381Overweight (%)3.82.00.681Obesity (%)0.02.00.068AnaemiaAll category41.537.30.055Mild32.133.30.998Moderate9.43.90.051Low vitamin A levelAll category66.064.70.087Mildly low26.427.50.995Moderately low24.517.70.053Severely low15.119.60.087Infection statusMalaria parasitaemia (%)34.037.30.726Guardian’s age (years)20–60100.098.00.474≥610.02.00.197Daily food consumption patternTwice1.90.00.199≥thrice98.1100.00.344Parental monthly income $ (GH¢)≤$178 (GHȻ499)96.292.20.371≥$178 (GHȻ500)3.87.80.237Food taboosYes72.564.20.362No27.535.80.334Parental educationFormal education96.294.10.712No formal education3.85.90.891Household backyard gardenYes2215.70.371No7884.30.398Dietary diversity scoreLow (≤3 food groups)15.013.7Medium (4–5 food groups)71.776.5High (≥6 food groups)13.29.8p values obtained by the independent t-test, otherwise chi-square test, are significant at p<0.05.The results indicate that study participants whose parents earned the minimum income (1.0–499.0 cedis per month, equivalent to 1.0–178.6 USD) were two times more likely to be anaemic (OR: 1.95; CI: 0.22–0.86;p=0.039) compared to those whose parents earned at least 1000 cedis a month (Table 2). The participants with low serum retinol concentrations were 1.7 times more likely to have anaemia than those who had normal serum retinol levels (OR: 1.68; CI: 0.10–0.99; p=0.049) (Table 2). 
Other factors included in the binary logistic model, parental education status, parental marital status, and nutritional, infection, and dietary intake status of participants, were associated with anaemia but not significant (Table 2).Table 2 Factors associated with anaemia in the study participants at baseline. FactorOdds ratio95% CIp valueParental education statusAt least secondary educationReference [1]Basic education2.370.27–20.520.433No education1.570.47–5.280.468Parental occupationFormal employmentReference [1]Informal employment1.0270.360–2.9320.960Parental marital statusSingleReference [1]Married2.500.16–3.020.511Parental monthly incomeMonthly income (≥1000 cedis)Reference [1]Monthly income (500–999  cedis)1.790.019–2.400.210Monthly income (1–499 cedis)1.950.221–0.860.039Participant’s nutritional statusNormal retinol level (≥20μg/dl)Reference [1]Low retinol level (<20μg/dl)1.680.10–0.990.049Normal (HAZ > −2 standard deviationsReference [1]Stunted (HAZ ≤ −2 standard deviations1.590.13–2.640.491Participant’s infection statusMalaria parasitaemia absentReference [1]Malaria parasitaemia present1.640.27–9.890.069Dietary intakeAdequate intake (≥DRI)Reference [1]Inadequate iron intake (<DRI)2.320.48–3.630.052Inadequate fat intake (<DRI)1.030.99–1.070.127Inadequate protein intake (<DRI)1.230.13–8.670.431GenderBoysReference [1]Girls2.580.94–7.080.920Age0.820.738p values are significant at p<0.05. Binary logistic analysis was performed (Cox & Snell R Square = 0.186; Nagelkerke R Square = 0.248). ## 4.2. Nutrient Intake The results from weighed food records showed that stews and soup fed to the intervention group had a much higher content of protein, fat, ash, iron, and zinc compared to those of the control group (Table3). There was no significant difference in protein intake of the study groups at baseline. 
The mean protein intakes of the intervention and control groups at the end of the study were 38.1 ± 9.1 g and 32.6 ± 7.5 g, respectively.Table 3 Composition of the stews and soups consumed by study participants. Nutrient composition of stews and soups/100 g as consumedStew/soupGroupMoisture (%)Ash (mg)Protein (g)Fat (g)Iron (mg)Zinc (mg)β-Carotene (mg)Tomato stewControl70.7 ± 0.230.1 ± 1.54.6 ± 0.59.9 ± 0.53.0 ± 0.22.4 ± 0.52.8 ± 0.6Tomato + MGLVP stewIntervention68.5 ± 0.432.3 ± 2.47.7 ± 0.110.1 ± 0.79.7 ± 0.17.8 ± 0.16.2 ± 0.5Bean stewControl63.7 ± 115.5 ± 1.38.5 ± 1.710.0 ± 1.18.15 ± 1.14.3 ± 0.11.1 ± 0.3Bean + MGLVP stewIntervention62.3 ± 217.2 ± 1.415.9 ± 1.412.1 ± 0.914.5 ± 1.38.1 ± 0.75.9 ± 0.7Ground nut soupControl77.5 ± 1.221.3 ± 2.75.7 ± 0.215.1 ± 1.02.8 ± 0.21.1 ± 0.21.7 ± 0.3Ground nut + MGVLP soupIntervention72.2 ± 1.025.2 ± 1.910.9 ± 0.616.4 ± 0.96.9 ± 0.15.8 ± 0.25.1 ± 1.1p values (except that for fat) obtained by one-way ANOVA are significant between control and corresponding intervention meals at p<0.05. Data presented are from laboratory analysis.The mean iron, zinc, and provitamin A (beta-carotene) intakes of the intervention group were 14.2 ± 7.1 mg, 5.7 ± 2.1 mg, and 214.5 ± 22.6μg, respectively, at baseline. Those of the control were 13.7 13.7 ± 6.1 mg, 5.4 ± 2.1 mg, and 210.6 ± 20.1 μg, respectively (Table 4). At the end of the study, the mean intake of iron, zinc, and beta-carotene for the intervention group were 24.1 ± 10.9 mg, 13.8 ± 8.2 mg, and 694.2 ± 33.1 μg, respectively. The values for the control group were 14.8 ± 6.2 mg, 5.9 ± 2.3 mg, and 418.4 ± 34.7 μg, respectively. The stews and soup made from Amaranthus cruentus and Solanum macrocarpon leaf flour (ACSMLVF) contributed on the average of 580.0 μg of beta-carotene to each participant in the intervention group. The amount was estimated based on the amount of various stews and soups provided (Table 3). 
The control also derived an estimated average of 220.0 μg of beta-carotene from their stews and soups. The percentage of participants in the intervention group who met their DRI for iron, zinc, and provitamin A were 83.0%, 89.4%, and 94.3%, respectively, at the end of the study. The values in the control group were 61.7%, 53.2% and 60.9%, respectively. The change in mean intakes of iron, zinc, and beta-carotene for the intervention group was 9.9 mg, 8.1 mg, and 479.7 μg, respectively, and for the control group, 1.1 mg, 0.5 mg, and 207.8 μg, respectively. There were no significant differences in the changes in mean intakes of vitamin C, folic acid, and vitamin B12 of study participants from baseline to the end of the study for both groups (p>0.05). None of the children met their DRI for folic acid at baseline and at the end of the study (Table 4).Table 4 Nutrient intake of study participants in comparison with Dietary Reference Intakes (DRIs) at baseline and end of the study. NutrientStudy groupp valueInterventionControlIntake (mean ± SD)Met DRIIntake (mean ± SD)Met DRIProtein (g)Baseline32.7 ± 8.636 (67.9)32.2 ± 9.533 (64.7)0.793End line38.1 ± 9.145 (84.9)32.6 ± 9.231 (66.0)0.004∗p value0.001∗0.003∗0.9540.875Iron (mg)Baseline14.2 ± 7.132 (60.4)13.7 ± 6.132 (62.7)0.680End line24.1 ± 10.948 (90.6)14.8 ± 6.239 (83.0)0.001∗p value0.001∗0.004∗0.4330.006∗Beta-carotene (μg)Baseline215 ± 2323 (43.4)211 ± 2021 (41.2)0.065End line694 ± 3350 (94.3)418 ± 3528 (60.9)0.001∗p value0.001∗0.001∗0.001∗0.011∗Zinc (mg)Baseline5.7 ± 2.122 (41.5)5.4 ± 2.126 (51.0)0.538End line13.8 ± 8.245 (89.4)5.9 ± 2.325 (53.2)0.001∗p value0.001∗0.03∗0.1850.231Vitamin C (mg)Baseline23.8 ± 3.811 (20.8)23.4 ± 4.710 (19.6)0.673End line24.2 ± 3.912 (22.6)24.±4.79 (19.1)0.287p value0.1380.2050.7780.865Folic acid (mg)Baseline30.5 ± 17.00 (0)30.4 ± 12.20 (0)0.971End line30.7 ± 16.90 (0)30.8 ± 12.30 (0)0.816p value0.844—0.839—∗p values within (across rows) and between (in most right column) study groups are 
significant at p<0.05 by the paired and independent t-tests, respectively, otherwise by the chi-square test. The number of study participants at baseline was intervention = 53 and control = 51 and at end line was intervention = 53 and control = 47. Met DRI is the number (n) and percentage (%) of participants that met the various Recommended Nutrient Intakes. Nutrient intake was assessed by combination of 24-hour recall and direct food weighing (foods served at lunch break) (DRI source: National Research Council 1989, Recommended Dietary Allowances: 10th edition, Washington, DC: The National Academies Press (https://doi.org/10.17226/1349)). ## 4.3. Nutritional and Infection Status of Study Participants At the end of the study, 39.0% of the participants were still anaemic (Table5). The level of anaemia was 28.3% in the intervention group and 53.3% in the control (p=0.024). The level of mild anaemia was 24.5% and 46.8% in the intervention and the control groups, respectively (p=0.022). The overall prevalence of low vitamin A concentration among the participants at the end of the study was 21.2% (Table 5). No subjects in the intervention group had a moderately or severely low vitamin A concentration in the vitamin A level at the end of the study. However, this was present in 4.3% and 2.1%, respectively, of subjects in the control group (Table 5). At the end of the study, no significant difference was observed in the prevalence of stunting, wasting, and thinness as well as overweight and obesity between the two groups (p>0.05). There was no significant difference in the prevalence of malaria parasitaemia between the two groups at the end of the study. No hookworm infestation was observed in the participants.Table 5 Nutritional and infection status of study participants at the end of the study. 
Outcome variableIntervention (n = 53)Control (n = 47)p valueMalnutritionAll category32.138.30.051Stunting (%)11.314.90.063Wasting (%)9.414.90.052Thinness (%)5.74.20.567Overweight (%)5.72.10.074Obesity (%)0.02.10.067AnaemiaAll category (%)28.353.30.024Mild (%)24.546.80.022Moderate (%)3.84.30.791Low vitamin AAll category (%)20.823.40.059Mildly low20.817.00.055Moderately low (%)0.04.30.051Severely low (%)0.02.10.073Infection statusMalaria parasitaemia39.640.40.935Using Pearson’s chi-square test for categorical variables, statistical significance is set atp<0.05. The number of study participants at end line was intervention group = 53 and control group = 47. ## 4.4. Dietary Diversity Score of Study Participants The dietary diversity score (based on baseline data and data collected during the intervention) was calculated from 24-hour dietary recall and food frequency data based on nine food groups [20]. The mean dietary diversity score was 4.3. The majority of the participants in both study groups had a medium dietary diversity score (consumed food from 4 to 5 food groups). Only 13.2% and 9.8% of the participants from the intervention and control groups, respectively, had a high diversity score (≥6 food groups). The dietary diversity score for children in the intervention group did not differ significantly from the control group (p=0.829). Findings from the dietary diversity tertile showed that the majority of the participants from both groups consumed starchy staples, other vegetables, dark green leafy vegetables, and fish. Fruits and dairy products were scarcely eaten by the study participants (Table 6).Table 6 Food groups consumed by ≥50% of participants by the dietary diversity tertile at baseline and during the intervention study. 
| Tertile | Food groups |
| --- | --- |
| Lowest dietary diversity (≤3 food groups) | Cereals∗, meat+, poultry+ and fish∗, vegetables |
| Medium dietary diversity (4–5 food groups) | Cereals, meat+, poultry+ and fish∗, vegetables∗ and fruits+, roots and tubers, legumes∗, nuts and seeds |
| High dietary diversity (≥6 food groups) | Cereals, roots and tubers, meat+, poultry+ and fish∗, vegetables∗ and fruits+, fats+ and oils+, legumes∗, nuts and seeds |

∗Most consumed food items in a food group. +Most scarcely consumed food items in a food group.

## 5. Discussion

The study established various forms of malnutrition among the participants, including anaemia, low vitamin A, wasting, stunting, thinness, overweight, and obesity. There was a higher prevalence of low vitamin A concentration in the intervention group than in the control group, though it was not statistically significant. The high prevalence of low vitamin A at baseline may be attributed to inadequate intake of foods rich in vitamin A, malaria, and other infections or inflammations that the study did not investigate. Studies have shown that infections and inflammations affect retinol and carotenoid metabolism and biomarkers of vitamin A status [21, 22]. In view of those research findings, the prevalence of low levels of serum retinol (vitamin A deficiency) may be overestimated in tropical countries such as Ghana. The findings show that 20.8% of the participants in the intervention group, who consumed Amaranthus cruentus and Solanum macrocarpon leaf flour (ACSMLVF), had mildly low vitamin A levels at the end of the study, but none had the moderately low levels observed in the controls. The decrease in the prevalence of low serum retinol observed in the control group during the intervention may be attributed to a likely decline in general infections other than malaria (which the present study did not investigate) that might have been intense during the period preceding the intervention.
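The dietary diversity tertiles used in Section 4.4 and Table 6 (nine food groups; ≤3 low, 4–5 medium, ≥6 high) can be sketched as a simple classifier; the food-group names below are illustrative, not the exact nine-group list of [20]:

```python
# Sketch (not the authors' code): a 9-group dietary diversity score
# and the tertile bands used in the study. Group names are placeholders.
FOOD_GROUPS = [
    "starchy staples", "dark green leafy vegetables", "vitamin A-rich fruits",
    "other fruits and vegetables", "organ meat", "meat and fish",
    "eggs", "legumes, nuts and seeds", "milk and milk products",
]

def diversity_score(consumed: set[str]) -> int:
    """Count how many of the nine food groups appear in a 24-hour recall."""
    return sum(group in consumed for group in FOOD_GROUPS)

def tertile(score: int) -> str:
    """Tertiles as defined in the study: low <=3, medium 4-5, high >=6."""
    if score <= 3:
        return "low"
    return "medium" if score <= 5 else "high"
```

For example, a recall containing only starchy staples, fish, legumes, and leafy vegetables scores 4 and falls in the medium tertile, matching the majority pattern reported above.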
The food frequency data show that both study groups had scarce access to some fruits, vegetables, fats, and vegetable oils eaten away from school, probably during breakfast and dinner. Those might be the source of provitamin A foods available to both groups. The control diets (stews and soup) to some extent also provided some level of beta-carotene (on average, 220 μg) in addition to some amount (198 μg) captured from 24-hour recall for the control group. It is the total intake of beta-carotene at the end of the study that led to the significant difference observed within the controls. The intervention group had on average 114 μg beta-carotene based on 24-hour recall and an extra 580 μg beta-carotene from the intervention diets (stews and soup). The significant difference between the two groups could be attributed to ACSMLVF.

The findings have also demonstrated that consumption of ACSMLVF resulted in a significant reduction in the prevalence of anaemia. The findings suggest that ACSMLVF may be a simple but innovative food that can be used for food fortification (or modification) in order to protect against anaemia in school children during school feeding programs when fresh green vegetables are out of season. A similar study carried out among school children using fresh carotene-rich vegetables was found to increase haemoglobin concentration and reduce the prevalence of anaemia [23]. ACSMLVF is also a good source of beta-carotene. It may have helped improve the haemoglobin concentration and increased the vitamin A stores of subjects in the intervention group [24]. Our findings lend support to the suggestion of Maramag et al. [23] that beta-carotene may have compartmentalized effects on iron metabolism by enhancing incorporation of iron into haemoglobin.

The causes of anaemia are multifactorial. We suggest, similarly to Idohou-Dossou et al.
[25], that the residual anaemia in the intervention group could be due to deficiencies of other nutrients (such as folic acid and ascorbic acid) in addition to iron and vitamin A, antinutritional factors (polyphenols and phytic acid), and infection (malaria in the case of the present study). As ACSMLVF is a good source of micronutrients (iron, zinc, and beta-carotene), it may have the potential to control nutritional anaemia in relation to deficiencies of iron, zinc, and vitamin A. Malaria infection is known to promote anaemia. It is possible, as suggested by previous studies [25], that malaria and other infections could partially explain the prevailing anaemia condition of the intervention group at the end of the study. The prevalence of malaria infection increased (but nonsignificantly) in the participants at the end of the study. That could probably be due to noncompliance with the use of treated mosquito bed nets even though the investigators provided each participant with a bed net. Binka and Akweongo [26] have emphasized the effective use and the potential of long-lasting insecticide nets to kill or prevent mosquitoes from biting individuals. It is the use of such nets, not merely owning them, that protects against malaria. Even though the association between malaria and anaemia was not statistically significant in this study, participants with this infection were 1.6 times more prone to anaemia than those without it. Another study [27] conducted among children in the same region as the current study in Ghana found that those with malaria infection had a higher risk of being anaemic than those without the infection. Undernutrition and overnutrition were prevalent among the participants at the start and end of the study, but without any statistically significant difference.

Undernutrition is a challenge for the study participants, as it is for other Ghanaian children in poor settings [28–30].
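The "1.6 times" figure above comes from the binary logistic regression described in Section 2.9; for a single binary exposure it reduces to the cross-product odds ratio of a 2×2 table. A sketch with hypothetical counts (not the study's data), chosen only so the result lands near the reported value:

```python
# Sketch: odds ratio of anaemia for malaria-positive vs malaria-negative
# participants. Counts below are HYPOTHETICAL, for illustration only.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a, b = anaemic / not anaemic with malaria; c, d = the same without malaria."""
    return (a / b) / (c / d)

# Hypothetical 2x2 counts giving an odds ratio of 1.6:
or_malaria = odds_ratio(20, 20, 15, 24)
```

The same cross-product construction applies to the 1.7-fold anaemia odds reported later for participants with low serum retinol.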
Ghana is in West Africa, a region that has made little progress over the past two decades in reducing the prevalence of stunting in children [31]. Nevertheless, two countries in the region (Ghana and Liberia) have been making an effort to reduce the prevalence of underweight to half the prevailing regional figure [32]. Based on the results of the present study, ACSMLVF consumption is not an appropriate strategy for controlling malnutrition within three months. It is suggested that its effectiveness in minimizing or controlling malnutrition needs to be investigated further beyond three months. This was outside the scope of this study.

The addition of ACSMLVF effectively improved the protein, iron, zinc, and beta-carotene content of the stews and soup provided for the intervention group. This led to a significant increase in the dietary intake of these nutrients by the intervention group compared to the controls. The study findings confirm that leafy vegetable flours [33] may be rich sources of micronutrients (iron, zinc, and beta-carotene) just like their fresh forms [5, 8, 34]. The regular consumption of ACSMLVF could allow an individual or a population to meet the Recommended Daily Intake (RDI) of these nutrients. The findings show that improvement in the iron, zinc, and beta-carotene content of the stews and soup by the addition of ACSMLVF resulted in a significant increase in the number of participants in the intervention group who met their RDI for iron, zinc, and beta-carotene. An important area of concern to be considered in further research is the bioavailability of these nutrients; this was not investigated in this study. The bioavailability of iron and zinc would to some extent depend on the presence of antinutritional factors (polyphenols and phytic acid) in the ACSMLVF. There was no significant difference in the dietary intake of water-soluble vitamins (ascorbic acid and folic acid) within and between the study groups.
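Counting participants who met an RDI, as reported above, is a threshold comparison over the calculated intakes. A sketch with illustrative values (the placeholder RDI is not the study's age-specific figure):

```python
# Sketch: percentage of participants whose calculated intake meets the RDI
# for a nutrient. Intakes and the RDI value here are ILLUSTRATIVE only.
IRON_RDI_MG = 10.0  # placeholder; the study used age-specific RDI values

def pct_meeting_rdi(intakes_mg: list[float], rdi_mg: float) -> float:
    """Fraction (as %) of intakes at or above the recommended daily intake."""
    met = sum(intake >= rdi_mg for intake in intakes_mg)
    return 100.0 * met / len(intakes_mg)
```

For example, with daily iron intakes of 12.0, 8.0, 15.0, and 9.0 mg against a 10.0 mg threshold, half the group meets the RDI.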
It is possible that much of the water-soluble vitamin content was lost during powder and food preparation. It is suggested that ACSMLVF consumption be supplemented with other rich sources of water-soluble vitamins in order to prevent their deficiencies.

The major sources of food for the study participants as indicated by the baseline results were starchy staples (cereals, roots, and tubers), legumes, and fish. Fruits, fats, oils, and dairy products were limited in their normal diets. A previous study [35] conducted among Ghanaian school children also established that fruits, fats, oils, and dairy products were scarcely consumed by school children. Most of the participants in both the intervention (71.7%) and the control groups (76.5%) had a medium dietary diversity score. The findings indicate that only a small fraction of the participants (9.8–13.2%) consumed highly diversified diets. These children were able to consume diets made from the six food groups available in Ghana. The findings also indicate that diets of all participants were predominantly made of plant staples (cereal, root, tuber, and legumes). According to Zimmermann et al. [36], populations whose habitual diet is plant based may be at high risk of iron deficiency. The reason for this is that plant iron (nonhaeme iron) is poorly bioavailable for absorption and utilization. However, iron from meat or meat products (haeme iron) is readily bioavailable and absorbable. Antinutrients such as phytic acid and polyphenols are known to bind to the nonhaeme iron and inhibit its availability. The dietary data reveal that the participants consumed little meat, poultry, and poultry products.

Monthly income and serum retinol status are economic and nutritional factors that are independently and significantly associated with anaemia among the study participants. The parental monthly income might dictate the participants' intake of foods rich in iron, vitamin A, beta-carotene, zinc, vitamin C, and folic acid.
Rich food sources of these micronutrients are known to prevent nutritional anaemia [36]. A low serum retinol status was significantly associated with anaemia in the participants. Low serum retinol status is an indicator of vitamin A deficiency; participants with low serum retinol concentration were 1.7 times more likely to be anaemic compared to those with normal serum retinol. The findings give support to existing knowledge that the causes of anaemia are multifactorial [36–39]. The results of this study show that anaemia is associated with nutritional factors, infection, and socioeconomic factors, which should be considered for further study.

## 6. Conclusions

The addition of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF) to school meal stews and soup improved the content and intake of iron, zinc, and beta-carotene of the study participants. The consumption of ACSMLVF allowed at least 85% of the study participants to meet their Recommended Daily Intake of iron, zinc, and beta-carotene. Consumption of ACSMLVF-fortified stews and soup reduced the prevalence of anaemia.

### 6.1. Strengths and Limitations of the Study

This study is multidimensional in nature as it captured demographic, biochemical, anthropometric, dietary, and parasitological data of the participants. The data collected established the household characteristics, nutritional and infection status, and dietary intakes of the participants at baseline. This is a pilot study limited to children aged 4–9 years who were participating in the Ghana School Feeding Program. For this reason, the findings cannot be extended to children outside this age range. The findings cannot be generalized to out-of-school children and pupils who do not participate in the Ghana School Feeding Program.
Another limitation of the study is the inability to measure markers of inflammation such as C-reactive protein to correct for the influence of inflammation or infections on serum retinol concentrations of participants.

---
# Contribution of Amaranthus cruentus and Solanum macrocarpon Leaves Flour to Nutrient Intake and Effect on Nutritional Status of Rural School Children in Volta Region, Ghana

**Authors:** Godfred Egbi; Mary Glover-Amengor; Margaret M. Tohouenou; Francis Zotor

**Journal:** Journal of Nutrition and Metabolism (2020)

**Category:** Agricultural Sciences

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1015280
--- ## Abstract Background. Plant-based foods are staple diets and main micronutrient sources of most rural Ghanaian households. The objective of this study was to determine the effect of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour on micronutrient intake and nutritional status of rural Ghanaian school children. Method. This study was a randomized controlled trial that consisted of baseline data collection and a three-month nutrition intervention feeding program. Two groups of 53 children, age 4–9 years, involved in the Ghana School Feeding Program took part in the study. An experimental group consumed Amaranthus cruentus and Solanum macrocarpon leaves flour (ACSMLVF) stews and soup. The control group consumed stews and soup without ACSMLVF. Haemoglobin and serum vitamin A concentrations were determined. Dietary and anthropometric data were collected and analysed. Participants were screened for malaria parasitaemia and hookworm. Results. Anaemia was present in 41.5% and 37.3%, respectively, of the intervention and control groups at baseline. It was present in 28.3% and 53.3%, respectively, at the end of the study. This was significantly different (p=0.024). There was a low vitamin A concentration in 66.0% and 64.7% at baseline and 20.8% and 23.4% at the end of the study in the intervention and control groups, respectively. The mean iron, zinc, and provitamin A (beta-carotene) intakes of the intervention group were 14.2 ± 7.1 mg, 5.7 ± 2.1 mg, and 214.5 ± 22.6 μg, respectively, at baseline. Those of the control were 13.7 ± 6.1 mg, 5.4 ± 2.1 mg, and 210.6 ± 20.1 μg, respectively. At the end of the study, the mean intake of iron, zinc, and beta-carotene for the intervention group was 24.1 ± 10.9 mg, 13.8 ± 8.2 mg, and 694.2 ± 33.1 μg, respectively. The intake of these micronutrients for the control at the end of the study was 14.8 ± 6.2 mg, 5.9 ± 2.3 mg, and 418.4 ± 34.7 μg, respectively. Conclusion. 
Consumption of ACSMLVF stews and soup increased iron, zinc, and beta-carotene intakes. Anaemia prevalence was lower in the intervention group at the end of the study.

---

## Body

## 1. Introduction

Micronutrient deficiencies are a major public health problem amongst vulnerable groups such as infants, children, and pregnant and lactating women in the developing world [1]. For good health, a balanced diet consisting of starchy foods as well as protein and micronutrient-rich foods is essential [2], since such a diet is negatively associated with the risk of chronic diseases [3].

Vegetables and fruits are rich sources of minerals, vitamins, and phytonutrients in sub-Saharan Africa [4, 5]. Plant-based foods are staple diets and the main source of micronutrients for most rural Ghanaian households. Vegetables and fruits are abundant and largely consumed during the rainy season. The availability, accessibility, affordability, and consumption of vegetables become a challenge during the dry season in poor households in Ghana and other West African countries. Consequently, poor household members are unlikely to meet their Recommended Dietary Allowances [6] for micronutrients in the dry season.

Due to high water activity, green leafy vegetables are perishable. Heat-processing methods (sun or solar and mechanical drying) reduce their moisture content, which makes it feasible to process them into flour, so they can be preserved for use in the dry season. Analysis of the vitamin composition of the dry leaves of Amaranthus cruentus (known locally as Aleefu) and Solanum macrocarpon (known locally as Gboma) indicates appreciable levels of beta-carotene (63.0 mg/100 g and 64.0 mg/100 g) and ascorbic acid (1.5 mg/100 g and 2.4 mg/100 g), respectively [7, 8]. In Ghana, the leaves of these plants are used in soup and stew preparations just as spinach is used in other parts of the world.
This study sought to determine the contribution that consumption of stews and soups made from Amaranthus cruentus and Solanum macrocarpon leaves flour would make to total nutrient intake, and the effect on nutritional status, of rural Ghanaian school children.

## 2. Materials and Methods

### 2.1. Study Design

The study was a pretest and posttest design. It consisted of baseline data collection and a three-month nutrition intervention feeding program. The study consisted of an intervention and a control group. The children (of a community basic school complex) were randomized by simple random sampling into the two groups. The intervention group received school lunch stews and soups prepared with Amaranthus cruentus and Solanum macrocarpon flour (ACSMLVF). The control group had no ACSMLVF in their stews and soups. The feeding was done on every weekday and lasted for a period of three months.

### 2.2. Study Area

The study was carried out in the Adaklu District of the Volta Region. The capital town is Adaklu-Waya [9]. The district shares boundaries with Ho Municipality to the west, Agotime-Ziope District to the south, Akatsi District to the north, and Ketu North District to the east. The district has an area of 709 square kilometres [9]. A population and housing census carried out in 2010 [9] showed that the population of Adaklu District was 36,391 (49.0% male) [9]. The majority of the people (78%) engaged in peasant farming. The food staples grown were maize and cassava. Other crops included cowpea, groundnut, tomatoes, garden eggs, pepper, and okra. Sheep, goat, and poultry rearing took place to a small extent at the household level.

### 2.3. Study Population

The study population consisted of pupils 4–9 years of age attending the Adaklu-Kodzobi basic school. The two groups were similar in age, physiological makeup, and the Recommended Dietary Allowances (RDAs) for micronutrients [6].
At the time of the study, the subjects were also participating in the Ghana School Feeding Program (GSFP), where school lunch is provided.

### 2.4. Sample Selection Criteria

A child qualified to enroll in the study if he or she was aged four to nine years; attending school regularly; if parents gave written consent and their children provided assent to participate; enrolled in the Ghana Government School Feeding Program; and had no history of allergy to consumption of vegetables or vegetable flours as self-reported (or reported by a guardian).

### 2.5. Sample Size Determination and Sampling

This was a pilot study based on a sample size of 90 participants, estimated from a 0.6 g/dl mean change in haemoglobin concentration with a standard deviation of 1.2 (unpublished), assuming a standardized effect size of 0.4, a power of 80%, and a significance level of 0.05. Assuming an expected drop-out rate of 18%, the sample size was increased to give 53 participants per group. A sampling frame was constructed of all the children who met the study criteria. The children were randomized by simple random sampling.

### 2.6. Chemical Analysis

The various stews and soups were analysed for their moisture, ash, protein, fat, iron, and zinc contents using standard protocols [10]. Beta-carotene content of the stews and soups was determined by the HPLC procedure of Rodriguez-Amaya and Kimura [11]. Triplicate determinations were made, with the means taken as true values.

### 2.7. Vegetable Flour Preparation and Feeding of Participants

Fresh Solanum macrocarpon and Amaranthus cruentus leaves were purchased from urban market gardeners in Accra. The fresh leaves were soaked in 1% sodium chloride solution, rinsed with running tap water, and later dried in a mechanical air oven at 45°C for 10 hours. The dried leaves were ground separately into fine flour using a blender (Philips HR 2113, Netherlands).
Both flours were mixed into a uniform blend of composite flour containing equal proportions (1 : 1 wt/wt) of each kind of vegetable with a cake mixer (Philips Viva Mixer HR 1565, Netherlands). Two hundred and thirty grams of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF) was packaged in an airtight plastic (polythene) bag and stored in dark cardboard. Bean stew, tomato stew, or groundnut soup was prepared separately with tomato paste (200 g), ground pepper (15 g), onion paste (35 g), smoked anchovies powder (100 g), and iodized salt (40 g), with or without the composite flour (ACSMLVF). Groundnut oil (400 g) was used to prepare either the bean or tomato stew. Groundnut paste (400 g) was used to prepare groundnut soup. An amount of 230 g of ACSMLVF was added to the stew or soup for the intervention group. The food for the control group did not contain any ACSMLVF. Each participant in the intervention group was given 50 g of tomato stew, 100 g of bean stew, and 95 g of groundnut soup fortified with ACSMLVF. The ACSMLVF-fortified tomato stew was served thrice a week at lunch break. The ACSMLVF-fortified bean stew was served once a week, and the ACSMLVF-fortified groundnut soup was also served once a week. Each participant in the control group was given the same quantity and frequency of the tomato and bean stews and groundnut soup without ACSMLVF. The stews and soup were eaten with 230 g of boiled plain rice (twice a week), Ga-kenkey (twice a week), and banku once a week. Ga-kenkey and banku are fermented and cooked corn dough meals. To prevent trading of the served meals, participants from the intervention and the control groups were identified with green and yellow buttons, respectively, on their breast pockets. Each group was served with food at a different location in the school premises, and they were supervised by teachers and research assistants to maintain compliance.
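From the serving schedule above, the weekly amount of ACSMLVF-fortified stew and soup received by each intervention child can be totalled; a minimal sketch:

```python
# Sketch: weekly fortified food per intervention child, from the stated
# schedule (tomato stew 50 g x 3/wk, bean stew 100 g x 1/wk,
# groundnut soup 95 g x 1/wk).
SERVINGS_PER_WEEK = {
    "tomato stew": (50, 3),     # (grams per serving, servings per week)
    "bean stew": (100, 1),
    "groundnut soup": (95, 1),
}

weekly_grams = sum(grams * times for grams, times in SERVINGS_PER_WEEK.values())
# 50*3 + 100*1 + 95*1 = 345 g of fortified stew/soup per child per week
```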
The study was carried out from mid-September to mid-December, after the major rainy season.

### 2.8. Data and Biological Sample Collection and Analyses

Semistructured questionnaires were used to collect data on characteristics of the participants. Dietary data were captured through a food frequency questionnaire, 24-hour recall, and direct weighing of foods consumed. At every meal time, leftover foods for individual participants were weighed and subtracted from the amount of meal served to account for the actual food intake of each participant.

Food measures such as cups, spoons, ladles, and balls were provided to assist the respondent in assessing portion sizes of the food consumed. Portion sizes of the foods consumed were then estimated and recorded. Prices of purchased foods were also recorded. To enhance the accuracy of estimation of the weight of purchased foods, samples of purchased foods were weighed with an electronic scale (Soehnle Plateau 56377, LEIFHEIT AG Nassau, Germany) to obtain the weight of the foods. The total amounts of specific food consumed were computed manually. The amounts of the various nutrients (protein, fat, iron, zinc, vitamin C, folate, and vitamin B12) were calculated based on 100-gram portion sizes with the help of Microsoft Office Excel 2007 and the food composition table [12, 13]. The weights of individual participants were measured to the nearest 0.1 kg in triplicate with the Seca digital weighing scale (Seca scale 803), according to a WHO 2006 protocol [14]. The actual weight of a participant was the average of triplicate measurements to the nearest 0.1 kg. The height of each participant (in triplicate) to the nearest 0.1 cm was taken with a stadiometer in a standing position in accordance with standard procedures [14]. The average of three readings was recorded as the true value.
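The per-100 g nutrient bookkeeping described above (weighed portions multiplied by food-composition values) can be sketched as follows; the composition figures are placeholders, not entries from the Ghana Food Composition Table:

```python
# Sketch of per-100 g intake calculation from weighed portions.
# Composition values are PLACEHOLDERS for illustration only.
COMPOSITION_PER_100G = {
    "boiled rice": {"iron_mg": 0.2, "zinc_mg": 0.5},
    "tomato stew": {"iron_mg": 1.1, "zinc_mg": 0.4},
}

def nutrient_intake(portions_g: dict[str, float], nutrient: str) -> float:
    """Sum a nutrient over weighed portions: (grams eaten / 100) * per-100 g value."""
    return sum(
        grams / 100.0 * COMPOSITION_PER_100G[food][nutrient]
        for food, grams in portions_g.items()
    )
```

For a meal of 230 g rice and 50 g tomato stew, the iron intake under these placeholder values is 2.3 × 0.2 + 0.5 × 1.1 ≈ 1.01 mg.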
The weight and height measurements were done at baseline and the end of the study.

A phlebotomist from the Parasitology Department of the Noguchi Memorial Institute for Medical Research (NMIMR), University of Ghana, Legon, collected 2 ml of venous blood by venepuncture from every participant early in the morning before breakfast. Blood samples were collected into serum separating tubes coated with gel (a clot activator). Blood samples collected were transported on ice packs to the Nutrition Department’s laboratory, NMIMR. At the laboratory, each blood sample was centrifuged at 2,000 rpm for 15 min, and duplicate serum aliquots were prepared into Eppendorf tubes and stored at −80°C until analysed. The venous blood samples collected were used immediately in the field to determine haemoglobin concentrations by a haemoglobinometer (Hb 201+) (HemoCue AB, Angelholm, Sweden). The average of two readings was taken as the actual haemoglobin concentration of each participant. The serum sample from each participant was used to determine his or her serum vitamin A concentration by HPLC technique, according to the established protocol of NMIMR [15]. The standard Giemsa staining technique [16] was used to screen for malaria parasite infection in the participants on prepared blood film slides. Stool samples were collected and used to screen for the presence of hookworm by the Kato–Katz technique [17].

### 2.9. Data Analysis

The amount of intake of nutrients was calculated using the Ghana Food Composition Table, Ring database, and West African FAO database. All measured variables were checked for normality. Haemoglobin, serum retinol, weight, and height values were normally distributed. Anaemia was defined as haemoglobin (Hb) concentration < 10.9 g/dl, mild—Hb 10.0–10.9 g/dl, moderate—Hb 7.0–9.9 g/dl, and severe—Hb < 7.0 g/dl for children below the age of five years [18].
For children 5–9 years of age, it was defined as mild anaemia—Hb 11.0–11.4 g/dl, moderate—Hb 8.0–10.9 g/dl, and severe—Hb < 8.0 g/dl [18]. Summary values were presented as means plus or minus standard deviations and percentages. Student’s t-test was used to compare mean values of the control and intervention groups for any significant difference. Within-group individual differences were determined between baseline and end of the study periods using the paired t-test. Binary logistic regression was used to establish the association of anaemia with other factors. One-way analysis of variance (ANOVA) was used to compare the mean nutrient composition of the various stews and soups. Significance was set at p≤0.05. The amount of nutrients consumed by the study participants was compared to the Recommended Daily Intakes (RDIs) of the various nutrients.
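The haemoglobin cutoffs quoted above can be written as a small grading function (a sketch; boundary handling at exactly 10.9 and 11.4 g/dl follows the quoted ranges):

```python
# Sketch of the study's anaemia grading by haemoglobin (g/dl) and age.
def anaemia_grade(hb_g_dl: float, age_years: int) -> str:
    """Grade anaemia using the age-specific cutoffs stated in Section 2.9."""
    if age_years < 5:
        if hb_g_dl < 7.0:
            return "severe"
        if hb_g_dl < 10.0:
            return "moderate"
        if hb_g_dl <= 10.9:
            return "mild"
        return "none"
    # children 5-9 years of age
    if hb_g_dl < 8.0:
        return "severe"
    if hb_g_dl < 11.0:
        return "moderate"
    if hb_g_dl <= 11.4:
        return "mild"
    return "none"
```

For example, a six-year-old with Hb 9.0 g/dl grades as moderate, while the same reading in a four-year-old would be mild under the younger-age bands.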
The food staples grown were maize and cassava. Other crops included cowpea, groundnut, tomatoes, garden eggs, pepper, and okra. To a small extent at the household levels were sheep, goat, and poultry rearing. ## 2.3. Study Population The study population consisted of pupils 4–9 years of age attending the Adaklu-Kodzobi basic school. The two groups were similar in age, physiological makeup, and the Recommended Dietary Allowances (RDIs) for micronutrients [6]. At the time of the study, the subjects were also participating in the Ghana School Feeding Program (GSFP) where school lunch is provided. ## 2.4. Sample Selection Criteria A child qualified to enroll in the study if he or she was aged four to nine years; attending school regularly; if parents gave written consent and their children provided assent to participate; enrolled in the Ghana Government School Feeding Program; and had no history of allergy to consumption of vegetables or vegetable flours as self-reported (or guardian). ## 2.5. Sample Size Determination and Sampling This was a pilot study based on a sample size of 90 participants estimated with 0.6 g/dl mean change in haemoglobin concentration with a standard deviation of 1.2 (unpublished) and assumed a standardized effect size of 0.4 and a power of 80% with a significance level of 0.05. Assuming an expected drop-out rate of 18%, the sample size was increased, to give 53 participants per group. A sample frame was constructed of all the children who met the study criteria. The children were randomized by simple random sampling. ## 2.6. Chemical Analysis The various stews and soups were analysed for their moisture, ash, protein, fat, iron, and zinc contents using standard protocols [10]. Beta-carotene content of the stews and soups was determined by the HPLC procedure of Rodriguez-Amaya and Kimura [11]. Triplicate determinations were made using the means as true values. ## 2.7. 
Vegetable Flour Preparation and Feeding of Participants FreshSolanum macrocarpon and Amaranthus cruentus leaves were purchased from urban market gardeners in Accra. The fresh leaves were soaked in 1% sodium chloride solution, rinsed with running tap water, and later dried in a mechanical air oven at 45°C for 10 hours. The dried leaves were ground separately into fine flour using a blender (Philips HR 2113, Netherlands). Both flours were mixed into a uniform blend of composite flour containing equal proportions (1 : 1 wt/wt) of each kind of vegetable with a cake mixer (Philips Viva Mixer HR 1565, Netherlands). Two hundred and thirty grams of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF) was packaged in an airtight plastic (polythene) bag and stored in dark cardboard. Bean stew, tomato stew, or groundnut soup was prepared separately with tomato paste (200 g), ground pepper (15 g), onion paste (35 g), smoked anchovies powder (100 g), and iodized salt (40 g) with or without the composite flour of the (ACSMLF). Groundnut oil (400 g) was used to prepare either the beans or tomato stew. Groundnut paste (400 g) was used to prepare groundnut soup. An amount of 230 g of ACSMLVF was added to the stew or soup for the intervention group. The food for the control group did not contain any ACSMLVF. Each participant in the intervention group was given 50 g of tomato and 100 g of bean stews and 95 g of groundnut soup fortified with ACSMLVF. The ACSMLVF-fortified tomato stew was served thrice a week at lunch break. The ACSMLVF-fortified bean stew was served once a week, and the ACSMLVF-fortified groundnut soup was also served once a week. Each participant in the control group was given the same quantity and frequency of the tomato and bean stews and groundnut soup without ACSMLVF. The stews and soup were eaten with 230 g of boiled plain rice (twice a week), Ga-kenkey (twice a week), and banku once in a week. 
Ga-kenkey and banku are fermented, cooked corn dough meals. To prevent trading of the served meals, participants from the intervention and the control groups were identified with green and yellow buttons, respectively, on their breast pockets. Each group was served with food at a different location on the school premises, and they were supervised by teachers and research assistants to maintain compliance. The study was carried out from mid-September to mid-December, after the major rainy season.

## 2.8. Data and Biological Sample Collection and Analyses

Semistructured questionnaires were used to collect data on characteristics of the participants. Dietary data were captured through a food frequency questionnaire, 24-hour recall, and direct weighing of foods consumed. At every meal time, leftover foods for individual participants were weighed and subtracted from the amount of meal served to account for the actual food intake of each participant. Food measures such as cups, spoons, ladles, and balls were provided to assist the respondents in assessing portion sizes of the food consumed. Portion sizes of the foods consumed were then estimated and recorded. Prices of purchased foods were also recorded. To enhance the accuracy of estimation of the weight of purchased foods, samples of purchased foods were weighed with an electronic scale (Soehnle Plateau 56377, LEIFHEIT AG Nassau, Germany). The total amounts of specific foods consumed were computed manually. The amounts of the various nutrients (protein, fat, iron, zinc, vitamin C, folate, and vitamin B12) were calculated based on 100-gram portion sizes with the help of Microsoft Office Excel 2007 and the food composition table [12, 13]. The weights of individual participants were measured to the nearest 0.1 kg in triplicate with a Seca digital weighing scale (Seca scale 803), according to the WHO 2006 protocol [14].
The actual weight of a participant was the average of the triplicate measurements to the nearest 0.1 kg. The height of each participant was taken (in triplicate) to the nearest 0.1 cm with a stadiometer in a standing position, in accordance with standard procedures [14]. The average of the three readings was recorded as the true value. The weight and height measurements were done at baseline and at the end of the study. A phlebotomist from the Parasitology Department of the Noguchi Memorial Institute for Medical Research (NMIMR), University of Ghana, Legon, collected 2 ml of venous blood by venepuncture from every participant early in the morning before breakfast. Blood samples were collected into serum-separating tubes coated with gel (a clot activator) and transported on ice packs to the Nutrition Department's laboratory, NMIMR. At the laboratory, each blood sample was centrifuged at 2,000 rpm for 15 min, and duplicate serum aliquots were prepared in Eppendorf tubes and stored at −80°C until analysed. The venous blood samples were also used immediately in the field to determine haemoglobin concentrations with a haemoglobinometer (Hb 201+; HemoCue AB, Angelholm, Sweden). The average of two readings was taken as the actual haemoglobin concentration of each participant. The serum sample from each participant was used to determine his or her serum vitamin A concentration by an HPLC technique, according to the established protocol of NMIMR [15]. The standard Giemsa staining technique [16] was used to screen prepared blood film slides for malaria parasite infection in the participants. Stool samples were collected and used to screen for the presence of hookworm by the Kato–Katz technique [17].

## 2.9. Data Analysis

The amounts of nutrients consumed were calculated using the Ghana Food Composition Table, the Ring database, and the West African FAO database. All measured variables were checked for normality.
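The anaemia severity cut-offs from [18] applied in this analysis can be expressed as a small rule-based classifier. This is an illustrative sketch, not the authors' code; the function name and the treatment of values falling exactly between the published ranges are assumptions:

```python
def classify_anaemia(hb_g_dl: float, age_years: float) -> str:
    """Grade anaemia severity from a haemoglobin concentration (g/dl).

    Cut-offs follow reference [18] as quoted in the text; treating the
    published ranges as half-open intervals is an assumption.
    """
    if age_years < 5:
        # Under-fives: severe < 7.0, moderate 7.0-9.9, mild 10.0-10.9 g/dl
        if hb_g_dl < 7.0:
            return "severe"
        if hb_g_dl < 10.0:
            return "moderate"
        if hb_g_dl < 11.0:
            return "mild"
        return "none"
    # Children 5-9 years: severe < 8.0, moderate 8.0-10.9, mild 11.0-11.4 g/dl
    if hb_g_dl < 8.0:
        return "severe"
    if hb_g_dl < 11.0:
        return "moderate"
    if hb_g_dl < 11.5:
        return "mild"
    return "none"
```

Under these cut-offs, for example, a 7-year-old with Hb 10.5 g/dl would be graded as moderately anaemic, while a 4-year-old with the same reading would be graded as mildly anaemic.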
Haemoglobin, serum retinol, weight, and height values were normally distributed. For children below the age of five years, anaemia was defined as a haemoglobin (Hb) concentration < 11.0 g/dl (mild: Hb 10.0–10.9 g/dl; moderate: Hb 7.0–9.9 g/dl; severe: Hb < 7.0 g/dl) [18]. For children 5–9 years of age, it was defined as mild anaemia: Hb 11.0–11.4 g/dl; moderate: Hb 8.0–10.9 g/dl; and severe: Hb < 8.0 g/dl [18]. Summary values were presented as means ± standard deviations and as percentages. Student's t-test was used to compare the mean values of the control and intervention groups for significant differences. Within-group differences between baseline and the end of the study were determined using the paired t-test. Binary logistic regression was used to establish the association of anaemia with other factors. One-way analysis of variance (ANOVA) was used to compare the mean nutrient composition of the various stews and soups. Significance was set at p ≤ 0.05. The amounts of nutrients consumed by the study participants were compared with the Recommended Daily Intakes (RDIs) of the various nutrients.

## 3. Ethical Approval

The Institutional Review Board (IRB) of the Noguchi Memorial Institute for Medical Research, College of Health Sciences, University of Ghana, Legon, gave ethical approval (NMIMR-IRB CPN001/14-15) for the study. Participants and guardians gave written or thumbprint consent to take part in the study.

## 4. Results

### 4.1. Background Characteristics

Fifty-three children were recruited for each group at baseline, but only 51 children recruited for the control group provided biological samples; two children declined for religious reasons. The study participants had similar background characteristics (Table 1). There was no significant difference between the two groups in terms of gender, mean age, or possession of a backyard garden to provide vegetables for the household. Just under half (47.1%) were male.
Almost all (99.0%) of the parents or guardians were in the 20–60-year age range. Most (94.2%) of the parents (guardians) were earning an income of ≤ GHȻ500 ($178.60) per month (Table 1). Only 4.8% of the parents (guardians) had no formal education. The majority (68.3%) of the study participants' households held food taboos of cultural significance, for example, an aversion to eating snails and reptiles. At baseline, the level of stunting (HAZ score < −2 standard deviations) was 13.2% in the intervention group and 17.6% in the control group (Table 1). Wasting (WAZ score < −2 standard deviations) was present in 11.3% of the participants in the intervention group and 15.7% of the control group. Thinness (BMI-for-age z score < −2 standard deviations) at baseline was seen in 5.7% and 3.9% of the intervention and control groups, respectively. The prevalence of anaemia in the children at baseline was 39.4% (Table 1): 41.5% in the intervention group and 37.3% in the control group. The level of moderate anaemia was 9.4% in the intervention group and 3.9% in the control group. The overall prevalence of low vitamin A concentration (< 20 μg/dl) [19] was 65.4%; more specifically, mildly low (15–20 μg/dl) in 26.9%, moderately low (10–14.9 μg/dl) in 21.2%, and severely low (< 10 μg/dl) in 17.3%. The prevalence of malaria parasitaemia was 35.6% at baseline. No participant had hookworm infection. Almost every study participant (98.1% in the intervention group and 100.0% in the control group) reported eating three times every day (Table 1). Only 13.2% and 9.8% of participants in the intervention and control groups, respectively, had a high dietary diversity score, eating diets drawn from the six food groups (cereals; legumes, nuts, and oils; fruits and vegetables; roots and tubers; meat, poultry, and fish; and fats and oils) available in Ghana.
Many of the participants, 71.7% and 76.5% from the intervention and control groups, respectively, had a medium dietary diversity score, eating meals made from four or five food groups.

Table 1: Background characteristics of participants by study group.

| Characteristics | Intervention (n = 53) | Control (n = 51) | p value |
| --- | --- | --- | --- |
| Mean age (years) | 7.3 ± 1.7 | 6.7 ± 1.8 | 0.081 |
| **Sex** | | | |
| Males | 41.5 | 51.9 | 0.242 |
| Females | 58.5 | 47.1 | 0.176 |
| **Anthropometry** | | | |
| Weight (kg) | 23.0 ± 4.9 | 21.6 ± 3.9 | 0.088 |
| Height (cm) | 117.1 ± 11.3 | 120.6 ± 11.3 | 0.110 |
| WAZ score | −0.731 ± 0.906 | −0.485 ± 0.989 | 0.189 |
| HAZ score | −0.723 ± 1.149 | −0.592 ± 1.17 | 0.567 |
| WHZ score | −0.121 ± 1.362 | −0.095 ± 1.41 | 0.398 |
| BMI-for-age z score | −0.388 ± 0.899 | −0.105 ± 0.908 | 0.113 |
| **Malnutrition** | | | |
| All category | 34.0 | 41.2 | 0.051 |
| Stunting (%) | 13.2 | 17.6 | 0.530 |
| Wasting (%) | 11.3 | 15.7 | 0.084 |
| Thinness (%) | 5.7 | 3.9 | 0.381 |
| Overweight (%) | 3.8 | 2.0 | 0.681 |
| Obesity (%) | 0.0 | 2.0 | 0.068 |
| **Anaemia** | | | |
| All category | 41.5 | 37.3 | 0.055 |
| Mild | 32.1 | 33.3 | 0.998 |
| Moderate | 9.4 | 3.9 | 0.051 |
| **Low vitamin A level** | | | |
| All category | 66.0 | 64.7 | 0.087 |
| Mildly low | 26.4 | 27.5 | 0.995 |
| Moderately low | 24.5 | 17.7 | 0.053 |
| Severely low | 15.1 | 19.6 | 0.087 |
| **Infection status** | | | |
| Malaria parasitaemia (%) | 34.0 | 37.3 | 0.726 |
| **Guardian's age (years)** | | | |
| 20–60 | 100.0 | 98.0 | 0.474 |
| ≥61 | 0.0 | 2.0 | 0.197 |
| **Daily food consumption pattern** | | | |
| Twice | 1.9 | 0.0 | 0.199 |
| ≥ thrice | 98.1 | 100.0 | 0.344 |
| **Parental monthly income, $ (GH¢)** | | | |
| ≤ $178 (GHȻ499) | 96.2 | 92.2 | 0.371 |
| ≥ $178 (GHȻ500) | 3.8 | 7.8 | 0.237 |
| **Food taboos** | | | |
| Yes | 72.5 | 64.2 | 0.362 |
| No | 27.5 | 35.8 | 0.334 |
| **Parental education** | | | |
| Formal education | 96.2 | 94.1 | 0.712 |
| No formal education | 3.8 | 5.9 | 0.891 |
| **Household backyard garden** | | | |
| Yes | 22.0 | 15.7 | 0.371 |
| No | 78.0 | 84.3 | 0.398 |
| **Dietary diversity score** | | | |
| Low (≤3 food groups) | 15.0 | 13.7 | |
| Medium (4–5 food groups) | 71.7 | 76.5 | |
| High (≥6 food groups) | 13.2 | 9.8 | |

p values obtained by the independent t-test, otherwise by the chi-square test, are significant at p < 0.05.

The results indicate that study participants whose parents earned the minimum income (1.0–499.0 cedis per month, equivalent to 1.0–178.6 USD) were about two times more likely to be anaemic (OR: 1.95; CI: 0.22–0.86; p=0.039) than those whose parents earned at least 1000 cedis a month (Table 2).
The participants with low serum retinol concentrations were 1.7 times more likely to be anaemic than those with normal serum retinol levels (OR: 1.68; CI: 0.10–0.99; p=0.049) (Table 2). The other factors included in the binary logistic model (parental education status, parental marital status, and the nutritional, infection, and dietary intake status of participants) were associated with anaemia, but the associations were not statistically significant (Table 2).

Table 2: Factors associated with anaemia in the study participants at baseline.

| Factor | Odds ratio | 95% CI | p value |
| --- | --- | --- | --- |
| **Parental education status** | | | |
| At least secondary education | 1 (reference) | | |
| Basic education | 2.37 | 0.27–20.52 | 0.433 |
| No education | 1.57 | 0.47–5.28 | 0.468 |
| **Parental occupation** | | | |
| Formal employment | 1 (reference) | | |
| Informal employment | 1.027 | 0.360–2.932 | 0.960 |
| **Parental marital status** | | | |
| Single | 1 (reference) | | |
| Married | 2.50 | 0.16–3.02 | 0.511 |
| **Parental monthly income** | | | |
| ≥1000 cedis | 1 (reference) | | |
| 500–999 cedis | 1.79 | 0.019–2.40 | 0.210 |
| 1–499 cedis | 1.95 | 0.221–0.86 | 0.039 |
| **Participant's nutritional status** | | | |
| Normal retinol level (≥20 μg/dl) | 1 (reference) | | |
| Low retinol level (<20 μg/dl) | 1.68 | 0.10–0.99 | 0.049 |
| Normal (HAZ > −2 standard deviations) | 1 (reference) | | |
| Stunted (HAZ ≤ −2 standard deviations) | 1.59 | 0.13–2.64 | 0.491 |
| **Participant's infection status** | | | |
| Malaria parasitaemia absent | 1 (reference) | | |
| Malaria parasitaemia present | 1.64 | 0.27–9.89 | 0.069 |
| **Dietary intake** | | | |
| Adequate intake (≥DRI) | 1 (reference) | | |
| Inadequate iron intake (<DRI) | 2.32 | 0.48–3.63 | 0.052 |
| Inadequate fat intake (<DRI) | 1.03 | 0.99–1.07 | 0.127 |
| Inadequate protein intake (<DRI) | 1.23 | 0.13–8.67 | 0.431 |
| **Gender** | | | |
| Boys | 1 (reference) | | |
| Girls | 2.58 | 0.94–7.08 | 0.920 |
| Age | 0.82 | | 0.738 |

p values are significant at p < 0.05. Binary logistic analysis was performed (Cox & Snell R² = 0.186; Nagelkerke R² = 0.248).

### 4.2. Nutrient Intake

The results from weighed food records showed that the stews and soup fed to the intervention group had a much higher content of protein, fat, ash, iron, and zinc than those of the control group (Table 3).
There was no significant difference in the protein intake of the study groups at baseline. The mean protein intakes of the intervention and control groups at the end of the study were 38.1 ± 9.1 g and 32.6 ± 7.5 g, respectively.

Table 3: Composition of the stews and soups consumed by study participants (per 100 g as consumed).

| Stew/soup | Group | Moisture (%) | Ash (mg) | Protein (g) | Fat (g) | Iron (mg) | Zinc (mg) | β-Carotene (mg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tomato stew | Control | 70.7 ± 0.2 | 30.1 ± 1.5 | 4.6 ± 0.5 | 9.9 ± 0.5 | 3.0 ± 0.2 | 2.4 ± 0.5 | 2.8 ± 0.6 |
| Tomato + MGLVP stew | Intervention | 68.5 ± 0.4 | 32.3 ± 2.4 | 7.7 ± 0.1 | 10.1 ± 0.7 | 9.7 ± 0.1 | 7.8 ± 0.1 | 6.2 ± 0.5 |
| Bean stew | Control | 63.7 ± 1 | 15.5 ± 1.3 | 8.5 ± 1.7 | 10.0 ± 1.1 | 8.15 ± 1.1 | 4.3 ± 0.1 | 1.1 ± 0.3 |
| Bean + MGLVP stew | Intervention | 62.3 ± 2 | 17.2 ± 1.4 | 15.9 ± 1.4 | 12.1 ± 0.9 | 14.5 ± 1.3 | 8.1 ± 0.7 | 5.9 ± 0.7 |
| Groundnut soup | Control | 77.5 ± 1.2 | 21.3 ± 2.7 | 5.7 ± 0.2 | 15.1 ± 1.0 | 2.8 ± 0.2 | 1.1 ± 0.2 | 1.7 ± 0.3 |
| Groundnut + MGLVP soup | Intervention | 72.2 ± 1.0 | 25.2 ± 1.9 | 10.9 ± 0.6 | 16.4 ± 0.9 | 6.9 ± 0.1 | 5.8 ± 0.2 | 5.1 ± 1.1 |

p values (except that for fat), obtained by one-way ANOVA, are significant between control and corresponding intervention meals at p < 0.05. Data presented are from laboratory analysis.

The mean iron, zinc, and provitamin A (beta-carotene) intakes of the intervention group at baseline were 14.2 ± 7.1 mg, 5.7 ± 2.1 mg, and 214.5 ± 22.6 μg, respectively. Those of the control group were 13.7 ± 6.1 mg, 5.4 ± 2.1 mg, and 210.6 ± 20.1 μg, respectively (Table 4). At the end of the study, the mean intakes of iron, zinc, and beta-carotene in the intervention group were 24.1 ± 10.9 mg, 13.8 ± 8.2 mg, and 694.2 ± 33.1 μg, respectively. The values for the control group were 14.8 ± 6.2 mg, 5.9 ± 2.3 mg, and 418.4 ± 34.7 μg, respectively. The stews and soup made with Amaranthus cruentus and Solanum macrocarpon leaf flour (ACSMLVF) contributed an average of 580.0 μg of beta-carotene to each participant in the intervention group.
The amount was estimated from the amounts of the various stews and soups provided (Table 3). The control group also derived an estimated average of 220.0 μg of beta-carotene from their stews and soups. The percentages of participants in the intervention group who met their DRI for iron, zinc, and provitamin A were 83.0%, 89.4%, and 94.3%, respectively, at the end of the study. The corresponding values in the control group were 61.7%, 53.2%, and 60.9%. The changes in the mean intakes of iron, zinc, and beta-carotene were 9.9 mg, 8.1 mg, and 479.7 μg, respectively, for the intervention group and 1.1 mg, 0.5 mg, and 207.8 μg, respectively, for the control group. There were no significant differences in the changes in the mean intakes of vitamin C, folic acid, and vitamin B12 from baseline to the end of the study in either group (p > 0.05). None of the children met their DRI for folic acid, either at baseline or at the end of the study (Table 4).

Table 4: Nutrient intake of study participants in comparison with Dietary Reference Intakes (DRIs) at baseline and the end of the study.
| Nutrient | Intervention: intake (mean ± SD) | Intervention: met DRI, n (%) | Control: intake (mean ± SD) | Control: met DRI, n (%) | p value |
| --- | --- | --- | --- | --- | --- |
| **Protein (g)** | | | | | |
| Baseline | 32.7 ± 8.6 | 36 (67.9) | 32.2 ± 9.5 | 33 (64.7) | 0.793 |
| End line | 38.1 ± 9.1 | 45 (84.9) | 32.6 ± 9.2 | 31 (66.0) | 0.004∗ |
| p value | 0.001∗ | 0.003∗ | 0.954 | 0.875 | |
| **Iron (mg)** | | | | | |
| Baseline | 14.2 ± 7.1 | 32 (60.4) | 13.7 ± 6.1 | 32 (62.7) | 0.680 |
| End line | 24.1 ± 10.9 | 48 (90.6) | 14.8 ± 6.2 | 39 (83.0) | 0.001∗ |
| p value | 0.001∗ | 0.004∗ | 0.433 | 0.006∗ | |
| **Beta-carotene (μg)** | | | | | |
| Baseline | 215 ± 23 | 23 (43.4) | 211 ± 20 | 21 (41.2) | 0.065 |
| End line | 694 ± 33 | 50 (94.3) | 418 ± 35 | 28 (60.9) | 0.001∗ |
| p value | 0.001∗ | 0.001∗ | 0.001∗ | 0.011∗ | |
| **Zinc (mg)** | | | | | |
| Baseline | 5.7 ± 2.1 | 22 (41.5) | 5.4 ± 2.1 | 26 (51.0) | 0.538 |
| End line | 13.8 ± 8.2 | 45 (89.4) | 5.9 ± 2.3 | 25 (53.2) | 0.001∗ |
| p value | 0.001∗ | 0.03∗ | 0.185 | 0.231 | |
| **Vitamin C (mg)** | | | | | |
| Baseline | 23.8 ± 3.8 | 11 (20.8) | 23.4 ± 4.7 | 10 (19.6) | 0.673 |
| End line | 24.2 ± 3.9 | 12 (22.6) | 24.0 ± 4.7 | 9 (19.1) | 0.287 |
| p value | 0.138 | 0.205 | 0.778 | 0.865 | |
| **Folic acid (mg)** | | | | | |
| Baseline | 30.5 ± 17.0 | 0 (0) | 30.4 ± 12.2 | 0 (0) | 0.971 |
| End line | 30.7 ± 16.9 | 0 (0) | 30.8 ± 12.3 | 0 (0) | 0.816 |
| p value | 0.844 | — | 0.839 | — | |

∗p values within (across rows) and between (rightmost column) study groups are significant at p < 0.05 by the paired and independent t-tests, respectively; otherwise by the chi-square test. The number of study participants was intervention = 53 and control = 51 at baseline and intervention = 53 and control = 47 at end line. "Met DRI" is the number (n) and percentage (%) of participants that met the various recommended nutrient intakes. Nutrient intake was assessed by a combination of 24-hour recall and direct food weighing (foods served at lunch break). (DRI source: National Research Council, 1989, Recommended Dietary Allowances, 10th edition, Washington, DC: The National Academies Press, https://doi.org/10.17226/1349.)

### 4.3. Nutritional and Infection Status of Study Participants

At the end of the study, 39.0% of the participants were still anaemic (Table 5). The level of anaemia was 28.3% in the intervention group and 53.3% in the control group (p=0.024).
The level of mild anaemia was 24.5% and 46.8% in the intervention and control groups, respectively (p=0.022). The overall prevalence of low vitamin A concentration among the participants at the end of the study was 21.2% (Table 5). No subject in the intervention group had a moderately or severely low vitamin A concentration at the end of the study; however, these were present in 4.3% and 2.1%, respectively, of subjects in the control group (Table 5). At the end of the study, no significant difference was observed between the two groups in the prevalence of stunting, wasting, and thinness, or of overweight and obesity (p > 0.05). There was no significant difference in the prevalence of malaria parasitaemia between the two groups at the end of the study. No hookworm infestation was observed in the participants.

Table 5: Nutritional and infection status of study participants at the end of the study.

| Outcome variable | Intervention (n = 53) | Control (n = 47) | p value |
| --- | --- | --- | --- |
| **Malnutrition** | | | |
| All category | 32.1 | 38.3 | 0.051 |
| Stunting (%) | 11.3 | 14.9 | 0.063 |
| Wasting (%) | 9.4 | 14.9 | 0.052 |
| Thinness (%) | 5.7 | 4.2 | 0.567 |
| Overweight (%) | 5.7 | 2.1 | 0.074 |
| Obesity (%) | 0.0 | 2.1 | 0.067 |
| **Anaemia** | | | |
| All category (%) | 28.3 | 53.3 | 0.024 |
| Mild (%) | 24.5 | 46.8 | 0.022 |
| Moderate (%) | 3.8 | 4.3 | 0.791 |
| **Low vitamin A** | | | |
| All category (%) | 20.8 | 23.4 | 0.059 |
| Mildly low | 20.8 | 17.0 | 0.055 |
| Moderately low (%) | 0.0 | 4.3 | 0.051 |
| Severely low (%) | 0.0 | 2.1 | 0.073 |
| **Infection status** | | | |
| Malaria parasitaemia | 39.6 | 40.4 | 0.935 |

Using Pearson's chi-square test for categorical variables, statistical significance is set at p < 0.05. The number of study participants at end line was intervention group = 53 and control group = 47.

### 4.4. Dietary Diversity Score of Study Participants

The dietary diversity score (based on baseline data and data collected during the intervention) was calculated from 24-hour dietary recall and food frequency data, based on nine food groups [20]. The mean dietary diversity score was 4.3.
The majority of the participants in both study groups had a medium dietary diversity score (consumed food from 4 to 5 food groups). Only 13.2% and 9.8% of the participants from the intervention and control groups, respectively, had a high diversity score (≥6 food groups). The dietary diversity score for children in the intervention group did not differ significantly from that in the control group (p=0.829). Findings from the dietary diversity tertiles showed that the majority of participants from both groups consumed starchy staples, other vegetables, dark green leafy vegetables, and fish. Fruits and dairy products were scarcely eaten by the study participants (Table 6).

Table 6: Food groups consumed by ≥50% of participants by dietary diversity tertile at baseline and during the intervention study.

| Tertile | Food groups |
| --- | --- |
| Lowest dietary diversity (≤3 food groups) | Cereals∗, meat+, poultry+ and fish∗, vegetables |
| Medium dietary diversity (4–5 food groups) | Cereals, meat+, poultry+ and fish∗, vegetables∗ and fruits+, roots and tubers, legumes∗, nuts and seeds |
| High dietary diversity (≥6 food groups) | Cereals, roots and tubers, meat+, poultry+ and fish∗, vegetables∗ and fruits+, fats+ and oils+, legumes∗, nuts and seeds |

∗Most consumed food items in a food group. +Most scarcely consumed food items in a food group.
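The dietary diversity scoring described above reduces to counting the distinct food groups a participant consumed (out of the nine groups used [20]) and binning the count into tertiles (low ≤ 3, medium 4–5, high ≥ 6). A minimal sketch; the function and group names are illustrative, not the authors' code:

```python
def dietary_diversity(food_groups: set) -> tuple:
    """Return (score, tertile): the score is the number of distinct food
    groups consumed, and the tertile cut-offs follow the text
    (low <= 3, medium 4-5, high >= 6)."""
    score = len(food_groups)
    if score <= 3:
        tertile = "low"
    elif score <= 5:
        tertile = "medium"
    else:
        tertile = "high"
    return score, tertile

# A typical participant diet reported in the study: cereals, fish,
# dark green leafy vegetables, and legumes -> score 4, "medium" tertile
print(dietary_diversity({"cereals", "fish", "dark green leafy vegetables", "legumes"}))
```

The reported mean score of 4.3 therefore places the average participant in the medium tertile, consistent with the distribution in Table 1.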
| Tertile | Food groups |
| --- | --- |
| Lowest dietary diversity (≤3 food groups) | Cereals∗, meat+, poultry+ and fish∗, vegetables |
| Medium dietary diversity (4–5 food groups) | Cereals, meat+, poultry+ and fish∗, vegetables∗ and fruits+, roots and tubers, legumes∗, nuts and seeds |
| High dietary diversity (≥6 food groups) | Cereals, roots and tubers, meat+, poultry+ and fish∗, vegetables∗ and fruits+, fats+ and oils+, legumes∗, nuts and seeds |

∗Most consumed food items in a food group. +Most scarcely consumed food items in a food group. ## 5. Discussion The study established various forms of malnutrition among the participants, including anaemia, low vitamin A, wasting, stunting, thinness, overweight, and obesity. There was a higher prevalence of low vitamin A concentration in the intervention group than in the control group, though the difference was not statistically significant. The high prevalence of low vitamin A at baseline may be attributed to inadequate intake of foods rich in vitamin A, malaria, and other infections or inflammations that the study did not investigate. Studies have shown that infections and inflammations affect retinol and carotenoid metabolism and biomarkers of vitamin A status [21, 22]. In view of those research findings, the prevalence of low levels of serum retinol (vitamin A deficiency) may be overestimated in tropical countries such as Ghana. The findings show that 20.8% of the participants in the intervention group, who consumed Amaranthus cruentus and Solanum macrocarpon leaf flour (ACSMLVF), had mildly, but not moderately, low vitamin A levels at the end of the study, whereas moderately low levels were observed in the controls. The decreased prevalence of low serum retinol observed in the control group during the intervention may be attributed to a likely decline in general infections other than malaria (which the present study did not investigate) that might have been intense during the period preceding the intervention.
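As a rough illustration of the scoring rule described above, the nine-group dietary diversity score and the tertile cut-offs used in Table 6 can be sketched as follows. The group names and function names are hypothetical, not taken from the study instrument.

```python
# Sketch of a nine-food-group dietary diversity score (DDS): one point
# per food group consumed in the recall period, then the tertile
# cut-offs used above: <=3 low, 4-5 medium, >=6 high.
# Group labels below are illustrative, not the study's exact coding.

FOOD_GROUPS = [
    "cereals", "roots_tubers", "legumes_nuts_seeds", "dairy",
    "meat_poultry_fish", "eggs", "dark_green_leafy_vegetables",
    "other_vegetables_fruits", "fats_oils",
]

def diversity_score(consumed):
    """Count how many of the nine groups appear in the recall data."""
    return sum(1 for group in FOOD_GROUPS if group in consumed)

def tertile(score):
    """Map a score to the tertiles used in Table 6."""
    if score <= 3:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

# A hypothetical 24-hour recall covering four groups.
recall = {"cereals", "meat_poultry_fish", "other_vegetables_fruits",
          "dark_green_leafy_vegetables"}
print(diversity_score(recall), tertile(diversity_score(recall)))  # 4 medium
```

A child reporting four groups, as in this example, falls in the medium tertile, which is where the majority of both study groups were found.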
The food frequency data show that both study groups had scarce access to some fruits, vegetables, fats, and vegetable oils eaten away from school, probably during breakfast and dinner. Those might be the sources of provitamin A foods available to both groups. The control diets (stews and soup) to some extent also provided some level of beta-carotene (on average, 220 μg) in addition to the amount (198 μg) captured from the 24-hour recall for the control group. It is the total intake of beta-carotene at the end of the study that led to the significant difference observed within the controls. The intervention group had on average 114 μg beta-carotene based on the 24-hour recall and an extra 580 μg beta-carotene from the intervention diets (stews and soup). The significant difference between the two groups could be attributed to ACSMLVF. The findings have also demonstrated that consumption of ACSMLVF results in a significant reduction in the prevalence of anaemia. The findings suggest that ACSMLVF may be a simple but innovative food that can be used for food fortification (or modification) in order to protect against anaemia in school children during school feeding programs when fresh green vegetables are out of season. A similar study carried out among school children using fresh carotene-rich vegetables found that they increased haemoglobin concentration and reduced the prevalence of anaemia [23]. ACSMLVF is also a good source of beta-carotene. It may have helped improve the haemoglobin concentration and increase the vitamin A stores of subjects in the intervention group [24]. Our findings lend support to the suggestion of Maramag et al. [23] that beta-carotene may have compartmentalized effects on iron metabolism by enhancing incorporation of iron into haemoglobin. The causes of anaemia are multifactorial. We suggest, similarly to Idohou-Dossou et al.
[25], that the residual anaemia in the intervention group could be due to deficiencies of other nutrients (such as folic acid and ascorbic acid) in addition to iron and vitamin A, antinutritional factors (polyphenols and phytic acid), and infection (malaria in the case of the present study). As ACSMLVF is a good source of micronutrients (iron, zinc, and beta-carotene), it may have the potential to control nutritional anaemia related to deficiencies of iron, zinc, and vitamin A. Malaria infection is known to promote anaemia. It is possible, as suggested by previous studies [25], that malaria and other infections could partially explain the prevailing anaemia in the intervention group at the end of the study. The prevalence of malaria infection increased (though nonsignificantly) in the participants at the end of the study. This could probably be due to noncompliance with the use of treated mosquito bed nets, even though the investigators provided each participant with a bed net. Binka and Akweongo [26] have emphasized the effective use and the potential of long-lasting insecticide nets to kill mosquitoes or prevent them from biting individuals; it is the use of such nets, not merely owning them, that protects against malaria. Even though the association between malaria and anaemia was not statistically significant in this study, participants with this infection were 1.6 times more prone to anaemia than those without it. Another study [27] conducted among children in the same region of Ghana as the current study found that those with malaria infection had a higher risk of being anaemic than those without the infection. Undernutrition and overnutrition were prevalent among the participants at the start and end of the study, but without any statistical difference. Undernutrition is a challenge for the study participants, as it is for other Ghanaian children in poor settings [28–30].
Ghana is in West Africa, a region that has made little progress in the past two decades in reducing the prevalence of stunting in children [31]. Nevertheless, two countries in the region (Ghana and Liberia) have been making an effort to reduce the prevalence of underweight to half the prevailing regional figure [32]. Based on the results of the present study, ACSMLVF consumption is not an appropriate strategy for controlling malnutrition within three months. It is suggested that its effectiveness in minimizing or controlling malnutrition be investigated further beyond three months; this was outside the scope of this study. The addition of ACSMLVF effectively improved the protein, iron, zinc, and beta-carotene content of the stews and soup provided for the intervention group. This led to a significant increase in the dietary intake of these nutrients by the intervention group compared to the controls. The study findings confirm that leafy vegetable flours [33] may be rich sources of micronutrients (iron, zinc, and beta-carotene), just like their fresh forms [5, 8, 34]. The regular consumption of ACSMLVF could allow an individual or a population to meet the Recommended Daily Intake (RDI) of these nutrients. The findings show that improvement in the iron, zinc, and beta-carotene content of the stews and soup by the addition of ACSMLVF resulted in a significant increase in the number of participants in the intervention group who met their RDI for iron, zinc, and beta-carotene. An important area of concern for further research is the bioavailability of these nutrients, which was not investigated in this study. The bioavailability of iron and zinc would to some extent depend on the presence of antinutritional factors (polyphenols and phytic acid) in the ACSMLVF. There was no significant difference in the dietary intake of water-soluble vitamins (ascorbic acid and folic acid) within and between the study groups.
It is possible that much of these water-soluble vitamins was lost during powder and food preparation. It is suggested that ACSMLVF consumption be supplemented with other rich sources of water-soluble vitamins in order to prevent their deficiencies. The major sources of food for the study participants, as indicated by the baseline results, were starchy staples (cereals, roots, and tubers), legumes, and fish. Fruits, fats, oils, and dairy products were limited in their normal diets. A previous study [35] conducted among Ghanaian school children also established that fruits, fats, oils, and dairy products were scarcely consumed by school children. Most of the participants in both the intervention (71.7%) and the control groups (76.5%) had a medium dietary diversity score. The findings indicate that only a small fraction of the participants (9.8–13.2%) consumed highly diversified diets. These children were able to consume diets made from the six food groups available in Ghana. The findings also indicate that the diets of all participants were predominantly made of plant staples (cereals, roots, tubers, and legumes). According to Zimmermann et al. [36], populations whose habitual diet is plant based may be at high risk of iron deficiency. The reason for this is that plant iron (nonhaeme iron) is poorly bioavailable for absorption and utilization, whereas iron from meat or meat products (haeme iron) is readily bioavailable and absorbable. Antinutrients such as phytic acid and polyphenols are known to bind nonhaeme iron and inhibit its availability. The dietary data reveal that the participants consumed little meat, poultry, and poultry products. Monthly income and serum retinol status are economic and nutritional factors that are independently and significantly associated with anaemia among the study participants. Parental monthly income might dictate the participants’ intake of foods rich in iron, vitamin A, beta-carotene, zinc, vitamin C, and folic acid.
Rich food sources of these micronutrients are known to prevent nutritional anaemia [36]. A low serum retinol status was significantly associated with anaemia in the participants. Low serum retinol status is an indicator of vitamin A deficiency; participants with low serum retinol concentration were 1.7 times more likely to be anaemic than those with normal serum retinol. The findings support existing knowledge that the causes of anaemia are multifactorial [36–39]. The results of this study show that anaemia is associated with nutritional factors, infection, and socioeconomic factors, which should be considered for further study. ## 6. Conclusions The addition of Amaranthus cruentus and Solanum macrocarpon leafy vegetable flour (ACSMLVF) to school meal stews and soup improved the content and intake of iron, zinc, and beta-carotene of the study participants. The consumption of ACSMLVF allowed at least 85% of the study participants to meet their Recommended Daily Intake of iron, zinc, and beta-carotene. Consumption of ACSMLVF-fortified stews and soup reduced the prevalence of anaemia. ### 6.1. Strengths and Limitations of the Study This study is multidimensional in nature, as it captured demographic, biochemical, anthropometric, dietary, and parasitological data of the participants. The data collected established the household characteristics, nutritional and infection status, and dietary intakes of the participants at baseline. This is a pilot study limited to children aged 4–9 years who were participating in the Ghana School Feeding Program. For this reason, the findings cannot be extended to children outside this age range. The findings cannot be generalized to out-of-school children and pupils who do not participate in the Ghana School Feeding Program.
Another limitation of the study is the inability to measure markers of inflammation, such as C-reactive protein, to correct for the influence of inflammation or infections on serum retinol concentrations of participants. --- *Source: 1015280-2020-06-02.xml*
2020
# A High Performance Load Balance Strategy for Real-Time Multicore Systems **Authors:** Keng-Mao Cho; Chun-Wei Tsai; Yi-Shiuan Chiu; Chu-Sing Yang **Journal:** The Scientific World Journal (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101529 --- ## Abstract Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation load and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and the task deadline. Experiment results show that the proposed algorithm can greatly reduce energy consumption (by up to 54.2%) and the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. --- ## Body ## 1. Introduction To promote convenience in people’s lives, being “smart” has become a new requirement for various products [1], which in turn has driven the development of embedded systems. Embedded systems are now widely used in our daily lives, for example in digital appliances, network devices, portable devices, and diversified information products [2–8]. Various applications are employed in these devices, and multimedia applications are especially prevalent [9–11]. In order to support the plethora of applications, particularly multimedia-related signal processing, superior performance of embedded systems is required. Along with the increasing demand, system energy consumption is also increasing.
As a matter of fact, the advancement of battery technologies has been slower than the advancement of computing speed and the consequent growth in processor energy consumption. Due to these reasons, and to enhance the performance of modern embedded systems [12–14], the system needs to (1) provide more computation power and (2) reduce power consumption while maintaining performance. To enhance the performance of an embedded system, a multicore architecture is one possible solution, as it allows the system to process numerous jobs simultaneously by parallel computation. Keeping every processor core of the system at high utilization is an important issue in achieving high performance. In order to maximize the parallel computing of a multicore system, load balance becomes an issue that needs to be considered when scheduling. Round robin is one of the simplest methods to dispatch tasks in a multicore system [15], where tasks are dispatched to processor cores in a rotated order. Shortest queue first [16] is another method that is often used. In this method, tasks are assigned to the processor core with the shortest waiting queue. To find the shortest queue, the number of tasks or the total computation time of tasks on the processor core can be used to represent the queue length. The latter is also called shortest response time first and requires a priori knowledge about task service times. Additionally, the utilization of a processor is usually considered as the criterion in load balancing. To generate the most balanced load, tasks should be assigned to the processor core with the lowest utilization [17]. In addition to the performance of the embedded system, energy consumption is also an important issue. Over the last decade, manufacturers have been competing to improve the performance of processors by raising the clock frequency. At the same technology level and with the same manufacturing processes, higher operating frequencies of a CMOS-based processor require a higher supply voltage.
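The simple dispatch policies discussed above, round robin and shortest queue first (in its shortest-response-time-first variant), can be sketched as follows. The task and queue representations are hypothetical illustrations, not the paper's implementation.

```python
# Hedged sketch of two basic multicore dispatch policies described above.
# Tasks are (name, service_time) pairs; each core has a waiting queue.
from itertools import cycle

def round_robin(tasks, n_cores):
    """Dispatch tasks to cores in rotated order."""
    queues = [[] for _ in range(n_cores)]
    for task, core in zip(tasks, cycle(range(n_cores))):
        queues[core].append(task)
    return queues

def shortest_response_time_first(tasks, n_cores):
    """Dispatch each task to the core with the least pending work,
    measured as the sum of queued service times (needs a priori
    knowledge of task service times, as noted above)."""
    queues = [[] for _ in range(n_cores)]
    for name, service_time in tasks:
        core = min(range(n_cores),
                   key=lambda c: sum(t for _, t in queues[c]))
        queues[core].append((name, service_time))
    return queues

tasks = [("t1", 5), ("t2", 1), ("t3", 1), ("t4", 2)]
print(shortest_response_time_first(tasks, 2))
# t1 fills core 0; t2, t3, t4 go to core 1 while its backlog stays shorter
```

Note how the service-time-aware policy keeps the total pending work balanced (5 vs. 4 time units here), whereas round robin would have put t1 and t3 on the same core regardless of their lengths.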
The dynamic power consumption (P_dynamic) of a CMOS-based processor is related to the operating frequency (f) and supply voltage (V_dd) as P_dynamic ∝ V_dd² × f. Thus, a higher operating frequency results not only in higher performance but also in higher power consumption. Because battery-powered devices carry limited energy, research on power saving has received increasing attention, and dynamic voltage and frequency scaling (DVFS) techniques are often applied to extend the battery life of portable devices. DVFS reduces the supply voltage and operating frequency of processors simultaneously to save energy when performance demand is low. Much as the brain does in the human body, the processors of a system consume the majority of its energy. Consequently, multicore architectures can benefit greatly from DVFS technology. In early multicore systems, all processor cores shared the same clock [18]. Under this architecture, DVFS can still be applied to save energy, but there are more limitations, and the tradeoff between performance and energy consumption becomes more difficult. To support more flexible power management for multicore systems, the voltage and frequency island (VFI) technique [19, 20] has been developed, in which processor cores are partitioned into groups, with processor cores belonging to the same group sharing one supply voltage and operating at the same frequency [21]. ### 1.1. Motivation In the past, most studies regarding scheduling on a multicore system [22–25] have not been designed for real-time systems. For some urgent tasks, raising the priority level of these tasks cannot satisfy the urgency completely. In this case, not only a priority but also a deadline will be used to express the character of such a task. Tasks with deadlines are called real-time tasks. Nowadays, some studies have focused on scheduling for real-time multicore systems [26, 27].
However, these kinds of algorithms usually view guaranteeing hard deadlines as their main purpose, and therefore limitations arise. Also, these algorithms need more a priori knowledge of tasks. When implementing them in a real system, they must satisfy some requirements, such as a fixed application, training, and specific information about the application. However, most portable devices execute nonspecific applications, which are usually not hard-real-time tasks. For example, users may download numerous applications onto their smart phones, where most are soft-real-time tasks and normal tasks. Unfortunately, it is difficult to know which applications will be executed on devices before users actually use them. This is why the design of algorithms for specific applications is not suitable, and why the requirement of additional a priori knowledge is difficult to implement efficiently. Thus, to solve these problems and to consider the tradeoff between performance and energy consumption, this paper applies a solution to the problems of scheduling and power saving in a real-time system for a multicore platform. The proposed algorithms reduce the number of missed deadlines and simultaneously consider dynamic power, static power, load balance, and practicability. ### 1.2. Contribution The contributions of this paper are as follows. (1) A power and deadline-aware multicore scheduling algorithm is proposed. It is composed of two parts: a power-aware scheduling algorithm and a deadline-aware load dispatch algorithm. The proposed algorithm is simple and easy to implement and overcomes the problems related to many existing power-saving algorithms that are difficult to implement and not suitable for diverse applications. (2) In the frequency-scaling part of the power-aware scheduling algorithm, we propose a DVFS-based algorithm called ED3VFS.
This algorithm uses task deadlines to determine when to scale the operating frequency and is able to adjust parameters dynamically to suit different task sets. Experimental results show that ED3VFS is very effective and flexible. (3) This paper also proposes a deadline-aware load dispatch algorithm, called the two-level deadline-aware hybrid load balancer. The proposed load dispatch algorithm includes two levels: the concept of load imbalance in the first level and a novel load balance strategy, distribution of task deadlines, in the second level. We also combined the other load balance strategies in the second level and let the proposed load dispatch algorithm deal with real-time tasks and normal tasks simultaneously. (4) We implemented the proposed load dispatch algorithm in Linux, ported the MicroC/OS-II real-time operating system kernel to a PACDSP on a PAC Duo platform, and implemented the proposed power-aware scheduling algorithm in the real-time kernel. Experimental results show that the proposed algorithms work well in a real environment. ### 1.3. Organization The remainder of this paper is organized as follows. Section 2 gives a brief introduction to work related to scheduling in a multicore system. Section 3 discusses and defines the problems we aim to solve in this paper, as well as limitations and assumptions. Section 4 describes the proposed power and deadline-aware multicore scheduling algorithm. A performance evaluation of the proposed algorithm is presented in Section 5, with conclusions offered in Section 6. ## 2. Related Work ### 2.1. DVFS-Based Power Saving Technologies There are two strategies for using DVFS techniques to reduce energy consumption. The first strategy is scaling voltage and frequency at task slack time. When a processor serves a task, the operating frequency is multiplied by the ratio between the worst-case execution time (WCET) and the deadline of the task [28] to reduce power consumption, as shown in Figure 1. Shin et al.
[29] combined offline and online components to satisfy the time constraints and reduce energy consumption. The offline component finds the lowest possible processor speed that satisfies all the time constraints, while the online component varies the operating speed dynamically to save more power. Since the task execution time may change slightly when executed, Salehi et al. [30] used an adaptive frequency update interval to follow sudden workload changes. The history data is used to predict the next workload and then, according to the prediction error, adjust the frequency update interval.

*Figure 1: Scaling voltage and frequency at task slack time. (a) Without DVFS and (b) with DVFS.*

The second strategy is scaling voltage and frequency when accessing external peripherals. References [31, 32] pointed out that the operating speed of memory and peripherals is much lower than that of processors. For tasks that are memory-bound or I/O-bound, the operating frequency of a processor can be decreased to save power while waiting for the external peripherals to finish their jobs, as shown in Figure 2. Liang et al. [33] proposed an approximation equation called the memory access rate-critical speed equation (MAR-CSE) and then defined and used the memory access rate (MAR) to predict its critical speed.

*Figure 2: Scaling voltage and frequency when accessing an external peripheral. (a) Without DVFS and (b) with DVFS.*

### 2.2. Scheduling on Real-Time Multicore Systems

Because the classical approaches need a priori knowledge of the application to achieve the target, especially when real-time guarantees are provided, Lombardi et al. [34] developed a precedence constraint posting-based offline scheduler for uncertain task durations. This method uses the average duration of a task to replace the probability distribution and calculates an approximate completion time from this cheaper-to-obtain information. Kim et al.
[35] presented two pipeline time balancing schemes, namely, workload-aware task scheduling (WATS) and applied database size control (ADSC). Because the execution time of each pipeline stage changes along with the input data, differing execution times across pipeline stages reduce the performance of a system. To achieve higher performance, the pipeline times of all stages must be in a balanced state. The basic idea of the pipeline time balancing schemes is to monitor and modify the parameter value of the function in each pipeline stage, thereby allowing the execution time of each pipeline stage to be close to the same average value. Jia et al. [36] presented a novel static mapping technique that maps a real-time application onto a multiprocessor system and optimizes processor usage efficiency. The proposed mapping approach is composed of two algorithms: task scheduling and cluster assignment. In task scheduling, the tasks are scheduled onto a set of virtual processors. Tasks assigned to the same virtual processor share as much data as possible, while data shared among virtual processors is minimized. The goal of cluster assignment is to assign virtual processors to real processors so that the overall memory access cost is minimized. In addition to balancing the utilization of each processor core, how to handle the communications among tasks with performance requirements and precedence constraints is another challenge in scheduling on real-time multicore systems. Hsiu et al. [37] considered the problem of scheduling real-time tasks with precedence constraints in multilayer bus systems while minimizing the communication cost. They solved this problem via a dynamic-programming approach. First, they proposed a polynomial-time optimal algorithm for a restricted case, where one multilayer bus and unit execution and communication times are considered.
The result was then extended into a pseudopolynomial-time optimal algorithm that considers multiple multilayer buses. To consider transition overhead and to design for applications with loops, Shao et al. [38] proposed a real-time loop scheduling algorithm called dynamic voltage loop scheduling (DVLS). In DVLS, the authors repeatedly regroup a loop based on rotation scheduling and decrease the energy consumed by DVS within a timing constraint. In addition to the abovementioned studies, there are many research directions and issues regarding real-time multicore systems. For real-time applications, it is common to estimate the worst-case performance early in the design process without an actual hardware implementation. It is a challenge to obtain an upper bound on the worst-case response time considering practical issues such as multitask applications with different task periods, precedence relations, and variable execution times. To address this, Yang et al. [39] proposed an analysis technique based on mixed integer linear programming to estimate the worst-case performance of each task in a nonpreemptive multitask application on a multiprocessor system. Seo et al. [26] tackled the problem of reducing power consumption in a periodic real-time system using DVS on a multicore processor. The processor was assumed to have the limitation that all cores must run at the same performance level, and so, to reduce the dynamic power, they proposed a dynamic repartitioning algorithm. The algorithm dynamically balances the task loads of multiple cores to optimize power consumption during execution. Further, they proposed a dynamic core scaling algorithm, which adjusts the number of active cores to reduce leakage power consumption under low load conditions. Cui and Maskell [40] proposed a look-up table-based event-driven thermal estimation method, in which fast event-driven thermal estimation is based upon a thermal map that is updated only when a high-level event occurs.
They developed a predictive future thermal map and proposed several predictive task allocation policies based on it. Differing from utilization-based policies, they used thermal-aware policies to reduce the peak and average temperatures of a system. Han et al. [27] presented synchronization-aware energy management schemes for a set of periodic real-time tasks that access shared resources. The mapping approach allocates tasks accessing the same resources to the same core to effectively reduce synchronization overhead. They also proposed a set of synchronization-aware slack management policies that can appropriately reclaim, preserve, release, and steal slack at runtime to slow down the execution of tasks and save more energy. Chen et al. [41] explored the online real-time task scheduling problem in heterogeneous multicore systems and considered tasks with precedence constraints and nonpreemptive task execution. In their assumption, the processor and the coprocessor have a master-slave relationship: each task is first executed on the processor and then dispatched to the coprocessor. During online operation, each task is tested by admission control, which ensures schedulability. Since the coprocessor is nonpreemptive, to deal with the problem of a task having too large a blocking time, the authors inserted preemption points to configure the task blocking time and context switch overhead in the coprocessor. ### 2.3. Summary To extend the system lifetime of energy-limited devices, one possible approach is to use DVFS-based technology [28, 29, 31, 32] to save energy. Because the requirements change while a real-time system is in use, other studies [27, 42, 43] have combined DVFS technology with a real-time scheduler to meet the time constraints while reducing energy consumption.
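As a rough numeric illustration of the slack-time DVFS idea from Section 2.1 (scale the frequency by the WCET-to-deadline ratio), the following sketch compares energy at full and scaled speed. It is not the paper's algorithm; it further assumes, for simplicity, that supply voltage scales linearly with frequency, so dynamic power grows as f³ and energy per task as f².

```python
# Illustrative slack-time DVFS arithmetic (hypothetical values).
# Assumptions: f' = f_max * WCET / deadline, V_dd scales linearly
# with f, so P_dynamic ∝ V_dd^2 * f ∝ f^3 and, since runtime ∝ 1/f,
# energy per task ∝ f^2.

def scaled_frequency(f_max_hz, wcet_s, deadline_s):
    """Lowest frequency that still finishes a WCET workload by its deadline."""
    return f_max_hz * (wcet_s / deadline_s)

def relative_energy(f_ratio):
    """Energy relative to running at full speed: E ~ f^2 under the
    linear voltage-scaling assumption above."""
    return f_ratio ** 2

# A task with WCET 0.5 s and a 1.0 s deadline on a 1 GHz core can run
# at half speed, and under these assumptions uses a quarter of the energy.
f = scaled_frequency(1.0e9, wcet_s=0.5, deadline_s=1.0)
print(f, relative_energy(f / 1.0e9))  # 500000000.0 0.25
```

This is why the first strategy in Section 2.1 exploits slack: the task still meets its deadline, but the quadratic-in-voltage power term drops sharply at the lower operating point.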
Now, under the environment of multicore architectures, researchers have proposed multicore schedulers, which can meet real-time constraints and consume lower energy. Along with the development of technology, the algorithms have become much more complex and have increasing restrictions when multiple issues need to be considered simultaneously. As a consequence, these algorithms become difficult to implement and work in real environments. Thus, this paper relaxed some limitations to allow the proposed algorithm to be easier to implement and work well in real environments with simultaneous consideration to the real-time, power, and load balance issues. ## 2.1. DVFS-Based Power Saving Technologies There are two strategies for using DVFS techniques to reduce energy consumption. The first strategy is scaling voltage and frequency at task slack time. When a processor serves a task, the operating frequency is multiplied by the rate between the worst-case execution time (WCET) and the deadline of the task [28] to reduce power consumption, as shown in Figure 1. Shin et al. [29] combined offline and online components to satisfy the time constraints and reduce energy consumption. The offline component finds the lowest possible processor speed that satisfies all the time constraints, while the online component varies the operating speed dynamically to save more power. Since the task execution time may be changed slightly when executed, Salehi et al. [30] used an adaptive frequency update interval to follow sudden workload changes. The history data is used to predict the next workload and then, according to the prediction error, adjust the frequency update interval.Scaling voltage and frequency at task slack time. (a) Without DVFS and (b) with DVFS. (a) (b)The second strategy is scaling voltage and frequency when accessing external peripherals. References [31, 32] pointed out that the operating speed of memory and peripherals is much lower than that of the processors. 
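The slack-time scaling rule of the first strategy can be sketched numerically. This is a minimal illustration, not code from [28]; the linear voltage-frequency relation and all numbers are our assumptions:

```python
# Minimal sketch of slack-time DVFS (strategy 1): run a task at a
# frequency scaled by WCET/deadline so it still finishes by its deadline.
# All names and values here are illustrative assumptions.

def slack_scaled_frequency(f_max: float, wcet: float, deadline: float) -> float:
    """Scale f_max by the ratio of WCET to deadline (never above f_max)."""
    return f_max * min(1.0, wcet / deadline)

def dynamic_energy(c: float, v: float, f: float, t: float) -> float:
    """Dynamic energy = P * t with P = c * V^2 * f."""
    return c * v**2 * f * t

f_max, v_max, c = 1.0, 1.0, 1.0      # normalized units
wcet, deadline = 0.5, 1.0            # WCET is half the deadline

f = slack_scaled_frequency(f_max, wcet, deadline)
v = v_max * (f / f_max)              # assumed linear V-f relation
exec_time = wcet * (f_max / f)       # a slower clock stretches execution

e_without_dvfs = dynamic_energy(c, v_max, f_max, wcet)
e_with_dvfs = dynamic_energy(c, v, f, exec_time)
print(f, e_without_dvfs, e_with_dvfs)
```

Because scaling the frequency also allows a lower voltage, energy falls roughly quadratically in the slowdown factor even though the task runs longer, which is why slack-rich tasks are attractive targets for DVFS.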
For tasks that are memory-bound or I/O-bound, the operating frequency of a processor can be decreased to save power while waiting for the external peripherals to finish their jobs, as shown in Figure 2. Liang et al. [33] proposed an approximation equation called the memory access rate-critical speed equation (MAR-CSE) and then defined and used the memory access rate (MAR) to predict the critical speed.

Figure 2: Scaling voltage and frequency when accessing external peripherals. (a) Without DVFS and (b) with DVFS.

### 2.2. Scheduling on Real-Time Multicore Systems

Because classical approaches need a priori knowledge of the application, especially when real-time guarantees must be provided, Lombardi et al. [34] developed a precedence-constraint-posting-based offline scheduler for uncertain task durations. This method uses the average duration of a task in place of the probability distribution and calculates an approximate completion time from this cheaper-to-obtain information. Kim et al. [35] presented two pipeline time balancing schemes, namely, workload-aware task scheduling (WATS) and applied database size control (ADSC). Because the execution time of each pipeline stage changes with the input data, unequal stage execution times reduce the performance of a system; to achieve higher performance, the pipeline times of the stages must be kept in balance. The basic idea of the pipeline time balancing schemes is to monitor and modify the parameter value of the function in each pipeline stage, thereby keeping the execution times of the stages close to the same average value. Jia et al. [36] presented a novel static mapping technique that maps a real-time application onto a multiprocessor system and optimizes processor usage efficiency. The proposed mapping approach is composed of two algorithms: task scheduling and cluster assignment.
In task scheduling, the tasks are scheduled onto a set of virtual processors. Tasks assigned to the same virtual processor share as much data as possible, while data shared among virtual processors is minimized. The goal of cluster assignment is to assign virtual processors to real processors so that the overall memory access cost is minimized.

In addition to balancing the utilization of each processor core, handling communications among tasks with performance requirements and precedence constraints is another challenge in scheduling on real-time multicore systems. Hsiu et al. [37] considered the problem of scheduling real-time tasks with precedence constraints in multilayer bus systems while minimizing the communication cost. They solved this problem via a dynamic-programming approach. First, they proposed a polynomial-time optimal algorithm for a restricted case, where a single multilayer bus with unit execution and communication times is considered. The result was then extended to a pseudopolynomial-time optimal algorithm that considers multiple multilayer buses. To account for transition overhead and to support applications with loops, Shao et al. [38] proposed a real-time loop scheduling algorithm called dynamic voltage loop scheduling (DVLS). In DVLS, the authors repeatedly regroup a loop based on rotation scheduling and decrease the energy consumed by DVS within a timing constraint.

Beyond the abovementioned studies, there are many research directions and issues regarding real-time multicore systems. For real-time applications, it is common to estimate the worst-case performance early in the design process, without an actual hardware implementation. It is a challenge to obtain an upper bound on the worst-case response time under practical conditions such as multitask applications with different task periods, precedence relations, and variable execution times. To address this, Yang et al.
[39] proposed an analysis technique based on mixed integer linear programming to estimate the worst-case performance of each task in a nonpreemptive multitask application on a multiprocessor system. Seo et al. [26] tackled the problem of reducing power consumption in a periodic real-time system using DVS on a multicore processor. The processor was assumed to have the limitation that all cores must run at the same performance level; to reduce dynamic power under this constraint, they proposed a dynamic repartitioning algorithm, which balances the task loads of multiple cores at runtime to optimize power consumption. Further, they proposed a dynamic core scaling algorithm, which adjusts the number of active cores to reduce leakage power consumption under low-load conditions.

Cui and Maskell [40] proposed a look-up-table-based, event-driven thermal estimation method. Fast event-driven thermal estimation is based on a thermal map, which is updated only when a high-level event occurs. They developed a predictive future thermal map and proposed several predictive task allocation policies based on it. Differing from the utilization-based policy, they used thermal-aware policies to reduce the peak and average temperatures of a system. Han et al. [27] presented synchronization-aware energy management schemes for a set of periodic real-time tasks that access shared resources. The mapping approach allocates tasks accessing the same resources to the same core to effectively reduce synchronization overhead. They also proposed a set of synchronization-aware slack management policies that can appropriately reclaim, preserve, release, and steal slack at runtime to slow down the execution of tasks and save more energy. Chen et al. [41] explored the online real-time task scheduling problem in heterogeneous multicore systems, considering tasks with precedence constraints and nonpreemptive task execution.
In their assumption, the processor and the coprocessor have a master-slave relationship: each task is first executed on the processor and then dispatched to the coprocessor. During online operation, each task is tested by admission control, which ensures schedulability. Since the coprocessor is nonpreemptive, to deal with tasks having too large a blocking time, the authors inserted preemption points to control the task blocking time and context switch overhead in the coprocessor.

### 2.3. Summary

To extend the system lifetime of energy-limited devices, one possible way is to use DVFS-based technology [28, 29, 31, 32] to save energy. Because requirements change while a real-time system is in use, other studies [27, 42, 43] have combined DVFS technology with a real-time scheduler to meet the time constraint while reducing energy consumption. Now, in multicore architectures, researchers have proposed multicore schedulers that can meet real-time constraints while consuming less energy. Along with the development of technology, these algorithms have become much more complex and carry increasing restrictions when multiple issues must be considered simultaneously. As a consequence, they become difficult to implement and operate in real environments. Thus, this paper relaxes some limitations to make the proposed algorithm easier to implement and able to work well in real environments, with simultaneous consideration of real-time, power, and load-balance issues.

## 3. Problem Definition

The DVFS-based power-aware scheduling problem is defined as finding a schedule that satisfies all the constraints of a system while consuming less energy to execute tasks. Unlike traditional scheduling on a single-core system, scheduling on a multicore system must decide not only the execution order of tasks but also which tasks should be executed on which processor core.
A good load dispatch can improve performance and reduce energy consumption, so load dispatch is a very important issue in multicore scheduling. In this paper, we divide the power-aware multicore scheduling problem into load dispatch and power-aware scheduling and propose different algorithms to solve them individually. Additionally, the key points of using the DVFS technique to reduce the energy consumption of a system are deciding when the operating voltage and frequency should be adjusted and selecting the operating state. To relax the limitations and make the proposed algorithm easier to implement and more lightweight, deadline misses are allowed in this research. The problem considered in this paper is defined as follows and illustrated in Figure 3.

Figure 3: System model.

System Model. There is one master processor unit and n slave processor cores in the system; each processor core has its own operating system and can scale its operating voltage and frequency independently. When tasks are released, the master processor unit exchanges status information with each slave processor core by IPC and dispatches each task to a suitable slave processor core individually; the slave processor cores then schedule the tasks dispatched to them. Under this architecture, the proposed algorithm applies to either homogeneous or heterogeneous multicore systems, and each processor core can manage itself. In this work, the platform used contains an ARM core as the master processor unit and two DSPs.

Input. A task set T = {T_real, T_normal}, where T_real is the set of real-time tasks and T_normal is the set of normal tasks. Each real-time task can be represented as (Release_time, Priority, Related_Deadline), and it can be a periodic task or an aperiodic task.
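The input tuples can be captured in a small data type. This is a hypothetical sketch (the class and field names are ours, taken from the tuples), with normal tasks carrying only a release time and priority:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """A task in T = {T_real, T_normal}; normal tasks have no deadline."""
    release_time: int
    priority: int
    related_deadline: Optional[int] = None  # only real-time tasks set this

    @property
    def is_realtime(self) -> bool:
        return self.related_deadline is not None

    @property
    def absolute_deadline(self) -> Optional[int]:
        # Real-time tasks are ordered by absolute deadline =
        # release time + related deadline.
        if self.related_deadline is None:
            return None
        return self.release_time + self.related_deadline

rt = Task(release_time=0, priority=2, related_deadline=10)
normal = Task(release_time=3, priority=1)
print(rt.is_realtime, rt.absolute_deadline, normal.is_realtime)
```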
In a dynamic environment, it is difficult to get all the information about tasks, and the overhead of running an optimization method every time a task is released is too heavy. Therefore, we schedule real-time tasks without considering their execution times. Although hard deadlines are not guaranteed, methods are proposed to decrease the probability of missing a deadline. Normal tasks are represented only by (Release_time, Priority). Generally, the execution order of real-time tasks is based on their absolute deadlines; the Priority of a real-time task is used only when two or more absolute deadlines are identical. The details of the scheduling algorithm are described in Section 4.

Output. A set of feasible schedules, S = {s_1, s_2, …, s_n}, where s_i is the scheduling result of the ith slave processor core, together with the operating voltage and frequency scaling produced by the proposed scaling algorithm.

Objective. To minimize the total energy consumption E_total, the objective function is expressed in (1):

(1) Minimize(E_total),
(2) E_total = Σ_i E_i,
(3) E_i = Σ_j E_ij,

where E_i is the energy consumption of the ith slave processor core and E_ij is the energy consumption of the jth task on the ith slave processor core, as expressed in (4) and (5):

(4) E_ij = Σ_k (P_ijk × t_ijk),
(5) P_ijk = c × V_ijk² × f_ijk,

where P_ijk is the power of the jth task at the kth time slice on the ith slave processor core, as in (5), and t_ijk is the duration of the kth time slice for the jth task on the ith slave processor core. Since the operating mode will not always be the same for a given task under the proposed algorithm, the energy in each time slice must be calculated individually and then summed.
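The per-slice summation in (2)–(5) can be computed directly. A small sketch with made-up numbers (the nested structure and values are illustrative assumptions):

```python
# Energy model sketch: E_total = sum_i E_i, E_i = sum_j E_ij,
# E_ij = sum_k P_ijk * t_ijk with P_ijk = c * V_ijk^2 * f_ijk.
# slices[i][j] lists the (V, f, t) triples for task j on slave core i.

def slice_power(c: float, v: float, f: float) -> float:
    return c * v * v * f                 # P = c * V^2 * f, eq. (5)

def total_energy(c: float, slices) -> float:
    return sum(
        slice_power(c, v, f) * t         # P_ijk * t_ijk, eq. (4)
        for core in slices
        for task in core
        for (v, f, t) in task
    )

c = 1.0
slices = [
    [[(1.0, 1.0, 2.0)], [(0.5, 0.5, 4.0)]],  # core 0: two tasks
    [[(0.8, 0.8, 3.0)]],                     # core 1: one task
]
print(total_energy(c, slices))
```

Because a task may pass through several operating modes, each (V, f, t) slice is priced separately before summing, exactly as the text requires.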
In (5), c is the load capacitance, V_ijk is the operating voltage, and f_ijk is the operating frequency of the jth task at the kth time slice on the ith slave processor core.

In this paper, tasks are allowed to finish after their deadlines. The processing speed is increased when a deadline is missed so that the task can finish faster. Thus, the performance constraint can be expressed as

(6) Minimize(Σ_i M_i),

where M_i indicates whether the deadline of the ith task is missed and is defined as

(7) M_i = 1 if ft_i > d_i; M_i = 0 otherwise,

where ft_i is the finish time and d_i is the absolute deadline of the ith task.

## 4. Power and Deadline-Aware Multicore Scheduling

In this section, an efficient multicore scheduling algorithm, called power and deadline-aware multicore scheduling, is presented. It integrates three parts (modules): (1) mixed-earliest deadline first (MEDF) [42]; (2) enhanced deadline-driven dynamic voltage and frequency scaling (ED3VFS); and (3) the two-level deadline-aware hybrid load balancer (TLDHLB). Among them, MEDF is used to schedule the tasks that have been dispatched to a processor core. ED3VFS is an enhanced version of D3VFS [42] and is used to scale the operating mode on each slave processor core. Finally, TLDHLB is used for task dispatch and is composed of two levels: the first level is a load-imbalance strategy, while the second level is load balance. For example, when a new task that needs to be served by a DSP arrives, TLDHLB dispatches it to a DSP with consideration of load balance. After the DSP receives the task, MEDF schedules the new task. At the same time, ED3VFS is executed periodically on the DSP to reduce energy consumption.

### 4.1. Mixed-Earliest Deadline First

The original scheduling algorithm used in the MicroC/OS-II kernel is a fundamental priority-based scheduling algorithm.
To support real-time tasks and normal tasks at the same time, mixed-earliest deadline first is selected to replace the original scheduling algorithm. MEDF combines EDF and fixed-priority scheduling. For real-time tasks, it uses EDF to schedule the tasks. When two or more real-time tasks have identical deadlines, or for normal tasks, MEDF uses fixed-priority scheduling to decide the execution order. Moreover, MEDF always selects real-time tasks first when real-time tasks and normal tasks are in the ready queue simultaneously.

To cooperate with TLDHLB in saving static power, we modified MEDF to turn off a processor core when the highest-priority ready task is the idle task, that is, when there is no real-time task or normal task left. This means that when a processor core finishes all its tasks, it turns itself off to save power.

### 4.2. Enhanced Deadline-Driven Dynamic Voltage and Frequency Scaling

D3VFS scales the operating mode dynamically according to the activity status of the system. α and β are two parameters used in D3VFS, and both are set to 10. D3VFS scales the operating mode when the system remains busy for α time units or when no deadlines are missed for β time units. Inspired by observations of D3VFS, we present a better strategy for setting the parameters α and β to enhance the performance of the scheduling system. First, in D3VFS the related deadlines of tasks are not required to be longer than 10 (a fixed threshold); when the shortest related deadline is shorter than this threshold, α and β become negative, which should be amended. Second, the purposes of α and β are not exactly the same, so their impacts differ. The bigger the value of α, the longer the processor stays at a lower speed, because the system increases the operating frequency at a slower rate. This leads to more power savings at the cost of computing capacity. On the other hand, β is used for decreasing the operating frequency.
A bigger β allows the system to stay at high speed for longer, so performance is better but more power is consumed. Third, more than one task runs in a system simultaneously, and in a real environment these tasks may not always be the same. The best setting differs from task set to task set, so giving different settings to different task sets is more flexible.

To solve the aforementioned problems, this paper proposes a different concept that improves the performance of power-saving algorithms and conforms to real situations while the system is working. Pseudocode 1 shows the pseudocode of ED3VFS. The basic idea of ED3VFS is that the settings of α and β are free to change along with different task sets, and the two parameters are set to two different values. Following this idea, we ran a series of experiments to find a better setting and to verify that the new setting is superior to the original D3VFS setting. The experimental settings comprise two main groups, described as follows.

(i) The first group is just like the original D3VFS: the settings of α and β are the same. There are four settings in this group: α = β = D_sr × 0.2, 0.4, 0.6, and 0.8, where D_sr is the shortest related deadline.

(ii) The second group, used in ED3VFS, also features four settings. In this group, α and β are set to different values: α = D_sr − β, with β = D_sr × 0.8, 0.6, 0.4, and 0.2. As noted above, the effects of α and β are opposite, so we gave them opposite values.

Table 1 lists the overall settings of α and β in these experiment series.
Table 1: Overall settings of α and β.

| Setting | Group 1 (α = β): α | Group 1 (α = β): β | Group 2 (α = D_sr − β): α | Group 2 (α = D_sr − β): β |
|---|---|---|---|---|
| 1 | 0.2 D_sr | 0.2 D_sr | 0.2 D_sr | 0.8 D_sr |
| 2 | 0.4 D_sr | 0.4 D_sr | 0.4 D_sr | 0.6 D_sr |
| 3 | 0.6 D_sr | 0.6 D_sr | 0.6 D_sr | 0.4 D_sr |
| 4 | 0.8 D_sr | 0.8 D_sr | 0.8 D_sr | 0.2 D_sr |

Pseudocode 1: Pseudocode of enhanced deadline-driven dynamic voltage and frequency scaling (ED3VFS).

(1) Initially, set the power mode of the DSP to the default level f_d.
(2) On every timer interrupt:
(3)   If (there is any real-time task)
(4)     Set α = D_sr × 0.2.
(5)     Set β = D_sr − α.
(6)     If (a deadline is missed)
(7)       Extend the deadline of the task that missed its deadline by d_e ticks.
(8)       Raise the power mode.
(9)     Else if (utilization of the DSP > S_r for α ticks)
(10)      Raise the power mode.
(11)    Else if (no deadline has been missed for β ticks)
(12)      Lower the power mode.
(13)    Else
(14)      Set the power mode of the DSP to the default level f_d.
(15)  End If

Figure 4 shows the energy comparison between the two different settings, namely, the original D3VFS and ED3VFS. The vertical axis shows energy consumption, while the horizontal axis shows the setting of α, where α = D_sr × 0.2, 0.4, 0.6, and 0.8. In this research, the first experimental energy result is used to normalize all the other results so that the differences are easy to see. The results show that the energy consumption of ED3VFS is lower than that of the original D3VFS in most cases. Figure 5 shows the performance comparison between the original D3VFS and ED3VFS. We used the number of missed deadlines as the criterion to compare their performance; for a real-time system, fewer missed deadlines are better. The results show that ED3VFS outperforms the original D3VFS except when α = D_sr × 0.8. Although the proposed power-saving algorithm does not guarantee hard deadlines, we still try to prevent missed deadlines from happening.
According to the experimental results shown in Figure 5, there are two settings we can choose, α = D_sr × 0.2 and α = D_sr × 0.4, with β = D_sr − α; no deadline was missed in either case. Since the energy consumption at α = D_sr × 0.2 is less than at α = D_sr × 0.4, we chose α = D_sr × 0.2 and β = D_sr − α as the final setting of ED3VFS. With this setting, both the energy consumption and the performance of ED3VFS are superior to the original D3VFS. If energy consumption matters more than performance in a given system, setting α = D_sr × 0.6 and β = D_sr − α is also a good choice: more energy can be saved while a certain level of performance is maintained.

Figure 4: Energy comparison between the original D3VFS and ED3VFS.

Figure 5: Performance comparison between the original D3VFS and ED3VFS.

### 4.3. Two-Level Deadline-Aware Hybrid Load Balancer

For systems limited by battery power, letting all processor cores work continuously in active mode is not a good idea, as much energy will be consumed. In a multicore system, balancing the workload among the cores can reduce the completion times of all tasks; processor cores can thus stay in sleep mode longer and save more energy.

In this paper, a novel task dispatch algorithm, called the two-level deadline-aware hybrid load balancer (TLDHLB), is presented. The first level is a load-imbalance strategy used for saving static power, inspired by [44]. The basic idea of the first level is to dispatch tasks to the processor cores already working in active mode and to turn off a processor core when all its tasks are finished. For example, suppose there are one MPU and two DSPs in the system. Initially, the system turns off both DSPs until there are tasks that need to be processed by a DSP, as shown in Figure 6(a). When task1 is released, the MPU checks the state of the DSPs; if no DSP is working in active mode, it turns on DSP1 and dispatches task1 to DSP1.
According to ED3VFS, DSP1 initially works at the default speed, normally the lowest speed, as shown in Figure 6(b). Figure 6(c) shows that if task_n is released at time_n while DSP1 is working at full speed, DSP2 is turned on and task_n is dispatched to it. Figure 6(d) shows that, after DSP1 has finished all tasks assigned to it, it turns itself off via MEDF.

Figure 6: Example of load imbalance. (a) Initial state. (b) Task1 released and DSP1 turned on. (c) Task_n released while DSP1 works at full speed, and DSP2 turned on. (d) DSP1 has finished all tasks dispatched to it and is turned off.

The second level is used for load balance. When two or more DSPs are working in active mode, the load balance strategies of the second level are used to dispatch newly released tasks. Unlike traditional systems that contain no real-time tasks, in our assumption a system contains real-time tasks and normal tasks simultaneously. Traditional load balance strategies were not designed for real-time systems, so we propose a new dispatch criterion to handle the dispatch of real-time tasks. We also combined other criteria to allow our load balance algorithm to process real-time tasks and normal tasks simultaneously and to improve robustness.

The novel strategy uses the distribution of task deadlines as the criterion for load balance. According to our observation, the more uniform the distribution of task deadlines is, the lower the probability of missing a deadline will be. Figure 7 is a simple example that supports this observation. There are two DSPs and four tasks; Figure 7(a) shows a dispatch result in which the distribution of task deadlines is uneven. In this example, task1 finishes at its deadline, d_1, so there is not enough time to execute task2 and a deadline miss occurs. A similar situation occurs on DSP2 for task4.
Figure 7(b) shows a different dispatch result with a more uniform distribution of task deadlines. In this situation, the time slot between two deadlines is longer, which means there is more time to execute the next task when a task finishes, so the probability of a deadline miss is lower. The problem now is how to express the distribution of task deadlines.

Figure 7: Example of real-time task dispatch. (a) The distribution of task deadlines is uneven. (b) The distribution of task deadlines is uniform.

In this paper, the variance of task deadlines is used as the feature of the deadline distribution. Since variance expresses how far a set of numbers is spread out, it can be used to express the density of a data distribution, which is what we need; a smaller variance of task deadlines implies that the time slot between two task deadlines is shorter. Equation (8) is the formula of variance, where N is the number of data points, x_i is the ith data point, and x̄ is the mean:

(8) Var(X) = (1/N) Σ_{i=1}^{N} (x_i − x̄)².

Besides the distribution of task deadlines, three other strategies are combined to dispatch normal tasks. These strategies can also be used to dispatch real-time tasks when the uniformity of the deadline distributions is equal. The first strategy is the execution order of the task: the dispatcher dispatches a task to the DSP that gives it the earlier execution order. For example, assume there are two DSPs with two tasks on each. When a new task is released, if its execution order would be second on DSP1 and third on DSP2, the task is dispatched to DSP1. The second strategy is the number of tasks: the dispatcher dispatches a task to the DSP that has the fewest tasks. When the dispatcher cannot make the decision via the three second-level strategies mentioned above, the last strategy is used.
The last strategy is very simple: it chooses the active DSP with the minimum serial number. The pseudocode of the two-level deadline-aware hybrid load balancer is shown in Pseudocode 2, where DSP_all is the set of all DSPs, DSP_off is the set of DSPs not in active mode, DSP_t is the target DSP to which the task will be dispatched, DSP_on is the set of DSPs in active mode, DSP_fs is the set of DSPs working at full speed, getStatus() obtains the status information of the DSPs, minSerial() returns the DSP with the minimum serial number, Num() returns the number of input data, maxUniformity() returns the DSP whose deadline-distribution uniformity is the maximum, minOrder() returns the DSP that can give the newly released task the highest priority, and minTaskNum() returns the DSP with the fewest tasks.

Pseudocode 2: Pseudocode of the two-level deadline-aware hybrid load balancer.

(1) Initially, turn off all DSPs: DSP_off = DSP_all.
(2) When a task that needs to be processed by a DSP is released:
(3)   getStatus(DSP_all)
(4)   % First level: load imbalance %
(5)   If (DSP_off == DSP_all)
(6)     DSP_t = minSerial(DSP_off)
(7)     Turn on minSerial(DSP_off).
(8)   Else if (DSP_fs == DSP_on)
(9)     If (DSP_on == DSP_all)
(10)      go to line 21
(11)    Else
(12)      DSP_t = minSerial(DSP_off)
(13)      Turn on minSerial(DSP_off).
(14)  Else if (Num(DSP_on − DSP_fs) == 1)
(15)    DSP_t = DSP_on − DSP_fs
(16)  Else
(17)    go to line 21
(18)  Dispatch the task to DSP_t.
(19) End
(20) % Second level: load balance %
(21) If (Num(maxUniformity(DSP_on)) == 1)
(22)   DSP_t = maxUniformity(DSP_on)
(23) Else if (Num(minOrder(DSP_on)) == 1)
(24)   DSP_t = minOrder(DSP_on)
(25) Else if (Num(minTaskNum(DSP_on)) == 1)
(26)   DSP_t = minTaskNum(DSP_on)
(27) Else
(28)   DSP_t = minSerial(DSP_on)
(29) Dispatch the task to DSP_t.
(30) End

What is worth noticing is that, when calculating the uniformity of the deadline distribution, we should take the newly released task into account, because we want to find a DSP whose deadline distribution remains uniform after the new task is inserted. Furthermore, maxUniformity(), minOrder(), and minTaskNum() may each return more than one DSP when two or more DSPs have the same status. In that case, TLDHLB uses the next strategy to dispatch the task, which is why we combined four criteria into a hybrid strategy.
This means that when a processor core finishes all tasks, it will turn itself off to save power. ## 4.2. Enhanced Deadline-Driven Dynamic Voltage and Frequency Scaling The D3VFS scales operating mode dynamically by the active status of a system. α and β are two parameters used in D3VFS and are set as 10. D3VFS will scale operating modes while the system continues to be busy for α time units or when no deadlines are missed for β time units. Inspired by observations of D3VFS, we present a better strategy to set parameters α and β to enhance the performance of the scheduling system. First, in D3VFS, the related deadlines of tasks are not needed to be longer than 10 or a fixed threshold. When the shortest related deadline is shorter than this value of 10 (threshold), α and β become negative, which should be amended. Second, the purports of α and β are not exactly the same, and so their impacts are different. The bigger value we set to α, which is the longer time that the processor stays in lower speed, because the system will increase the operating frequency in a slower rate. This leads to more power savings with worse computing capacity. On the other hand, β is used for decreasing the operating frequency. A bigger β will allow the system to stay in high speed for longer, so that the performance will be better, but more power is consumed. Third, there is more than one task working in a system simultaneously. And in real environment, these tasks may not always be the same. The best setting for each task is different, and so giving different settings for different task sets is more flexible.To solve these aforementioned problems, this paper proposes a different concept to improve the performance of power saving algorithms and conforms to real situations while the system is working. Pseudocode1 shows the pseudocode of ED3VFS. 
The basic idea of ED3VFS is that the settings of α and β are free to change with different task sets and that the two parameters may take different values. Following this idea, we ran a series of experiments to find a better setting and to verify that the new setting is superior to the original D3VFS setting. The experimental settings comprise two main groups, described as follows. (i) The first group mirrors the original D3VFS, with α and β equal. It contains four settings: α = β = D_sr × 0.2, 0.4, 0.6, and 0.8, where D_sr is the shortest relative deadline. (ii) The second group, used in ED3VFS, also contains four settings, with α and β set to different values: β = D_sr × 0.8, 0.6, 0.4, and 0.2, with α = D_sr − β. As noted above, the effects of α and β are opposite, so we gave them opposite values. Table 1 lists the overall settings of α and β in these experiment series.

Table 1: Overall settings of α and β.

| Setting | Group 1 (α = β): α | β | Group 2 (α = D_sr − β): α | β |
|---|---|---|---|---|
| 1 | 0.2 D_sr | 0.2 D_sr | 0.2 D_sr | 0.8 D_sr |
| 2 | 0.4 D_sr | 0.4 D_sr | 0.4 D_sr | 0.6 D_sr |
| 3 | 0.6 D_sr | 0.6 D_sr | 0.6 D_sr | 0.4 D_sr |
| 4 | 0.8 D_sr | 0.8 D_sr | 0.8 D_sr | 0.2 D_sr |

Pseudocode 1: Enhanced deadline-driven dynamic voltage and frequency scaling (ED3VFS).
(1) Initially, set the power mode of the DSP to the default level f_d.
(2) On every timer interrupt
(3)  If (there is any real-time task)
(4)   Set α = D_sr × 0.2.
(5)   Set β = D_sr − α.
(6)   if (a deadline is missed)
(7)    Extend the deadline of the task that missed its deadline by d_e ticks.
(8)    Raise the power mode.
(9)   else if (utilization of the DSP > S_r for α ticks)
(10)    Raise the power mode.
(11)   else if (no deadline has been missed for β ticks)
(12)    Lower the power mode.
(13)   else
(14)    Set the power mode of the DSP to the default level f_d.
(15)  End If

Figure 4 shows the energy comparison between the two settings, the original D3VFS and ED3VFS. The vertical axis shows energy consumption and the horizontal axis shows the setting of α, where α = D_sr × 0.2, 0.4, 0.6, and 0.8. The first experimental energy result is used to normalize all of the other results so that the differences are easy to see. The results show that the energy consumption of ED3VFS is lower than that of the original D3VFS in most cases. Figure 5 shows the performance comparison between the original D3VFS and ED3VFS, using the number of missed deadlines as the criterion; for a real-time system, fewer missed deadlines are better. The results show that ED3VFS outperforms the original D3VFS except when α = D_sr × 0.8. Although the proposed power-saving algorithm does not guarantee hard deadlines, we still try to prevent deadline misses. According to the experimental results in Figure 5, two settings produced no missed deadlines: α = D_sr × 0.2 and α = D_sr × 0.4, both with β = D_sr − α. Since α = D_sr × 0.2 consumes less energy than α = D_sr × 0.4, we chose α = D_sr × 0.2 and β = D_sr − α as the final setting of ED3VFS. With this setting, both the energy consumption and the performance of ED3VFS are superior to the original D3VFS. If energy consumption matters more than performance in a given system, α = D_sr × 0.6 with β = D_sr − α is also a good choice: more energy is saved while a certain level of performance is maintained. Figure 4: Energy comparison between original D3VFS and ED3VFS. Figure 5: Performance comparison between original D3VFS and ED3VFS. ## 4.3.
Two-Level Deadline-Aware Hybrid Load Balancer For systems limited by battery power, letting all processor cores work continuously in active mode is not a good idea, as much energy is wasted. In a multicore system, balancing the workload between cores can shorten the completion times of all tasks, so processor cores can stay in sleep mode longer and save more energy. In this paper, a novel task dispatch algorithm, the two-level deadline-aware hybrid load balancer (TLDHLB), is presented. The first level is a load-imbalance strategy used for saving static power, inspired by [44]. Its basic idea is to dispatch tasks only to the processor cores already in active mode and to turn cores off once all their tasks are finished. For example, suppose there are one MPU and two DSPs in the system. Initially, the system keeps both DSPs off until some task needs to be processed by a DSP, as shown in Figure 6(a). When task1 is released, the MPU checks the state of the DSPs; if no DSP is in active mode, it turns on DSP1 and dispatches task1 to it. Following ED3VFS, DSP1 starts at the default speed, normally the lowest, as shown in Figure 6(b). Figure 6(c) shows that if taskn is released at time n while DSP1 is working at full speed, DSP2 is turned on and taskn is dispatched to it. Figure 6(d) shows that after DSP1 finishes all tasks assigned to it, it turns itself off via MEDF. Figure 6: Example of load imbalance. (a) Initial state. (b) Task1 released and DSP1 turned on. (c) Taskn released while DSP1 works at full speed, and DSP2 turned on. (d) DSP1 finished all tasks dispatched to it and is turned off. The second level handles load balance: when two or more DSPs are in active mode, the second-level strategies are used to dispatch newly released tasks.
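The first-level (load-imbalance) rule above can be sketched as a small C routine. This mirrors the first level of the dispatch pseudocode; the fixed-size array, the `struct dsp` layout, and the function names are our illustrative assumptions, not the paper's implementation.

```c
#include <assert.h>

/* Sketch of the first-level dispatch rule: keep DSPs off until
 * needed, and wake the lowest-numbered sleeping DSP only when every
 * active DSP already runs at full speed. */

#define NUM_DSP 2

enum dsp_state { DSP_OFF, DSP_ACTIVE };

struct dsp {
    enum dsp_state state;
    int full_speed;   /* 1 if running at the highest frequency */
};

/* Returns the index of the DSP the new task is dispatched to,
 * or -1 if the second-level (load-balance) strategy must decide. */
static int first_level_dispatch(struct dsp d[NUM_DSP])
{
    int i, active = 0, not_full = -1, not_full_cnt = 0;

    for (i = 0; i < NUM_DSP; i++) {
        if (d[i].state == DSP_ACTIVE) {
            active++;
            if (!d[i].full_speed) { not_full_cnt++; not_full = i; }
        }
    }
    if (active == 0 || not_full_cnt == 0) {
        /* All DSPs are off, or all active ones are saturated:
         * turn on the sleeping DSP with the smallest serial number. */
        for (i = 0; i < NUM_DSP; i++) {
            if (d[i].state == DSP_OFF) {
                d[i].state = DSP_ACTIVE;
                return i;
            }
        }
        return -1;  /* every DSP active and saturated: second level */
    }
    if (not_full_cnt == 1)
        return not_full;   /* exactly one non-saturated DSP */
    return -1;             /* ambiguous: defer to second level */
}
```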
Unlike traditional systems without real-time tasks, the system in our assumption contains real-time tasks and normal tasks simultaneously. Traditional load balance strategies were not designed for real-time systems, so we propose a new dispatch criterion for real-time tasks and combine it with other criteria so that our load balancer can handle real-time and normal tasks simultaneously with improved robustness. The novel strategy uses the distribution of task deadlines as the criterion for load balance. According to our observation, the more uniform the distribution of task deadlines, the lower the probability of missing a deadline. Figure 7 is a simple example supporting this observation. There are two DSPs and four tasks. Figure 7(a) shows a dispatch result in which the distribution of task deadlines is uneven: task1 finishes exactly at its deadline d1, so there is not enough time to execute task2 and a deadline miss occurs; a similar situation occurs for task4 on DSP2. Figure 7(b) shows a different dispatch result with a more uniform distribution of task deadlines. Here the time slot between two deadlines is longer, which means there is more time to execute the next task once a task finishes, so the probability of a deadline miss is lower. The remaining problem is how to express the distribution of task deadlines. Figure 7: Example of real-time task dispatch. (a) Distribution of task deadlines is uneven. (b) Distribution of task deadlines is uniform. In this paper, the variance of task deadlines is used as the feature of deadline distribution. Since variance expresses how far a set of numbers is spread out, it can express the density of a data distribution, which is what we need. A smaller variance of task deadlines implies that the time slot between two task deadlines is shorter.
Equation (8) is the formula of variance, where N is the number of data points, x_i is the ith data point, and x̄ is the data mean:

$$\mathrm{Var}(X) = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2. \tag{8}$$

Besides the distribution of task deadlines, three further strategies are combined to dispatch normal tasks; they can also be used for real-time tasks when the uniformity of the deadline distributions is equal. The first strategy is the execution order of the task: the dispatcher sends a task to the DSP that gives it higher priority. For example, assume there are two DSPs with two tasks on each; if a newly released task would run second on DSP1 but third on DSP2, it is dispatched to DSP1. The second strategy is the number of tasks: the dispatcher sends a task to the DSP with the fewest tasks. When the dispatcher cannot decide via the three second-level strategies above, the last strategy is used: simply choose the active DSP with the smallest serial number.
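As a rough illustration, equation (8) can be evaluated per DSP over the deadlines of its queued tasks plus the newly released one. This is a hypothetical C helper of our own, not the paper's implementation; how the variance value is mapped to the "uniformity" ranking is the paper's design choice, so only the formula itself is shown.

```c
#include <assert.h>

/* Equation (8): population variance of a set of task deadlines.
 * A smaller variance means the deadlines are bunched together,
 * i.e. shorter slots between consecutive deadlines. Illustrative. */
static double deadline_variance(const double *d, int n)
{
    double mean = 0.0, var = 0.0;
    int i;

    for (i = 0; i < n; i++) mean += d[i];
    mean /= n;
    for (i = 0; i < n; i++) var += (d[i] - mean) * (d[i] - mean);
    return var / n;
}
```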
The pseudocode of the two-level deadline-aware hybrid load balancer is shown in Pseudocode 2, where DSP_all is the set of all DSPs, DSP_off is the set of DSPs not in active mode, DSP_t is the target DSP the task will be dispatched to, DSP_on is the set of DSPs in active mode, DSP_fs is the set of DSPs working at full speed, function getStatus() obtains the status information of the DSPs, minSerial() returns the DSP with the smallest serial number, Num() returns the number of input data, maxUniformity() returns the DSP whose deadline distribution is most uniform, minOrder() returns the DSP that can give the newly released task the highest priority, and minTaskNum() returns the DSP with the fewest tasks.

Pseudocode 2: Two-level deadline-aware hybrid load balancer (TLDHLB).
(1) Initially, turn off all DSPs: DSP_off = DSP_all.
(2) When a task that must be processed by a DSP is released
(3)  getStatus(DSP_all)
(4)  % First level, for load imbalance %
(5)  If (DSP_off == DSP_all)
(6)   DSP_t = minSerial(DSP_off)
(7)   Turn on minSerial(DSP_off).
(8)  Else if (DSP_fs == DSP_on)
(9)   if (DSP_on == DSP_all)
(10)    go to line 21
(11)   else
(12)    DSP_t = minSerial(DSP_off)
(13)    Turn on minSerial(DSP_off).
(14)  Else if (Num(DSP_on − DSP_fs) == 1)
(15)   DSP_t = DSP_on − DSP_fs
(16)  Else
(17)   go to line 21
(18)  Dispatch the task to DSP_t.
(19) End
(20) % Second level, for load balance %
(21) If (Num(maxUniformity(DSP_on)) == 1)
(22)  DSP_t = maxUniformity(DSP_on)
(23) Else if (Num(minOrder(DSP_on)) == 1)
(24)  DSP_t = minOrder(DSP_on)
(25) Else if (Num(minTaskNum(DSP_on)) == 1)
(26)  DSP_t = minTaskNum(DSP_on)
(27) Else
(28)  DSP_t = minSerial(DSP_on)
(29) Dispatch the task to DSP_t.
(30) End

It is worth noticing that, when calculating the uniformity of the deadline distribution, the newly released task must be taken into account, because we want a DSP whose deadline distribution remains uniform after inserting the new task. Furthermore, maxUniformity(), minOrder(), and minTaskNum() may each return more than one DSP when two or more DSPs have the same status. In that case, TLDHLB falls back to the next strategy, which is why we combined four criteria into a hybrid strategy. ## 5. Experiments In this section, we describe the experimental environment and the parameter settings, followed by the experimental results and analyses. ### 5.1. Experimental Environment The PAC Duo platform was used for the experiments; it includes an ARM926 processor and two PACDSPs. The operating system kernel running on the ARM is Linux 2.6.27. We ported the MicroC/OS-II kernel (version 2.5) to the PACDSPs and implemented the proposed power-aware scheduling algorithm in the MicroC/OS-II kernel and the proposed load dispatch algorithm on the ARM. Figure 8 shows the experimental system architecture, and Table 2 shows the operating frequencies used in the experiments with the corresponding voltages. In the experiments, a digital multimeter (FLUKE 8846A) was used to measure the voltage and current of the PACDSPs, from which energy consumption was calculated. Table 2: Operating voltage and frequency of the PACDSP.
| Power mode (operating mode) | Voltage (V) | Frequency (MHz) |
|---|---|---|
| 7 | 1.0 | 204 |
| 6 | 1.0 | 136 |
| 5 | 0.9 | 102 |
| 4 | 0.9 | 68 |
| 3 | 0.9 | 51 |
| 2 | 0.8 | 34 |
| 1 | 0.8 | 24 |

Figure 8: System architecture. ### 5.2. Experimental Settings In the experiments, we used matrix multiplication, π calculation, quick sort, JPEG decoding, and histogram equalization as the workload. Besides the proposed algorithms, we implemented two load balance algorithms and three frequency scaling strategies for comparison. Table 3 shows the algorithm usage in the experiments. (i) Seven sets of settings were used. The first set combines the proposed load dispatch algorithm and the proposed power-aware scheduling algorithms. Worthy of note is that the DSPs on our experimental platform cannot be turned off and on again, so we scaled the operating frequency to the lowest frequency to represent a turned-off DSP and assumed its energy consumption was zero until the proposed algorithms turned it on again. (ii) The second to fourth sets used the same load balance algorithm, which uses the number of tasks as the dispatch criterion. The frequency scaling strategies in the second and third sets were two static settings: the highest operating frequency and the lowest operating frequency, respectively. The fourth set used Linux-ondemand [45], a dynamic frequency scaling algorithm used in the Linux kernel that scales the operating frequency according to processor utilization. (iii) The fifth to seventh sets used processor utilization as the dispatch criterion, with the highest frequency, the lowest frequency, and Linux-ondemand, respectively, as the frequency scaling strategies. Except for the first set, all settings use the original MicroC/OS-II scheduler. Table 3: Usage of algorithms in the experiments.
| Set | Load dispatch | Frequency scaling |
|---|---|---|
| 1 PDAMS | TLDHLB | ED3VFS + MEDF |
| 2 NT (HF) | Number of tasks | Always highest frequency |
| 3 NT (LF) | Number of tasks | Always lowest frequency |
| 4 NT (Ondemand) | Number of tasks | Linux-ondemand |
| 5 Utilization (HF) | Utilization | Always highest frequency |
| 6 Utilization (LF) | Utilization | Always lowest frequency |
| 7 Utilization (Ondemand) | Utilization | Linux-ondemand |

For each task, we used a multiple of its average execution time as its deadline, from one to five times; there are thus five deadline settings for each set of settings in the experiments. ### 5.3. Experimental Results #### 5.3.1. Comparison of Energy Consumption Figure 9 shows the comparison of energy consumption. The vertical axis shows energy consumption and the horizontal axis shows the task deadline setting. The results show that the energy consumption of the proposed algorithms is lower than that of the other algorithms in almost every case; compared with the other algorithms, the proposed algorithm reduces energy consumption by up to 54.2%. The experimental results also show that using the number of tasks as the load balance criterion while always working at the lowest operating frequency reduces energy consumption the most. This consequence is obvious and predictable, but the computing capacity under that condition is unsatisfying. Although the proposed algorithm considers saving static power, due to hardware constraints, static power consumption could barely be measured independently; as a result, it was not taken into account in the experiments. The overhead of scaling voltage and frequency has not been considered either. These will be addressed in future work. Figure 9: Comparison of energy consumption. #### 5.3.2. Comparison of Performance Besides energy consumption, performance is a very important criterion for evaluating an algorithm, and the tradeoff between energy and performance is a difficult issue.
Unlike in traditional systems, the overall finish time cannot fully represent performance in a real-time system. For a real-time task, there is no difference between finishing well before the deadline and finishing exactly at the deadline: as long as a task finishes by its deadline, its effect is the same. Therefore, the number of missed deadlines is a better criterion for the performance of a real-time system. Figure 10 offers the performance comparisons. The vertical axis shows the number of missed deadlines and the horizontal axis shows the task deadline setting. The results show that the performance of the proposed algorithms is the best: except when the deadline of each task equals one times its average execution time, no deadline is missed with the proposed algorithms. Since there are only two slave processor cores in our experimental platform, when there are more than three real-time tasks whose deadlines equal their average execution times, missed deadlines cannot be avoided. It is worth noting that even when the DSPs always run at the fastest frequency, a less appropriate dispatch method and scheduling algorithm can produce more missed deadlines. The proposed algorithm tries to decrease the probability of missing deadlines in both load dispatch and scheduling, which is why it can use less energy yet achieve higher performance. Figure 10: Comparison of performance. Although always using the lowest operating frequency reduces energy consumption the most, its performance is unacceptable: the number of missed deadlines is much higher than with the other algorithms, regardless of which load balance algorithm is used. The experimental results show that the proposed algorithms find a good balance between energy consumption and performance.
By considering task deadlines, the distribution of task deadlines becomes more uniform on each slave processor core, which reduces the probability of missed deadlines. A lower probability of missing deadlines not only represents higher performance but also reduces energy consumption, because the system works at lower speed for longer under ED3VFS. Moreover, the concept of load imbalance lets the proposed algorithms reduce energy consumption further: unlike dynamic voltage and frequency scaling, which reduces only dynamic power, load imbalance also reduces static power. The number of tasks and processor utilization are the most popular task dispatch criteria in real systems, which is why we chose them for comparison. The aim of this paper is to develop not only a novel and effective load balance algorithm but one that can be applied in a real environment. Although some state-of-the-art algorithms may outperform the proposed algorithm, their assumptions are very difficult to satisfy, making them hard to use in a real environment. Another shared assumption is that none of the algorithms used in this paper require worst-case execution times. Nowadays, internet-connected portable devices such as smartphones and tablet PCs are very popular; when a user downloads an application, the system has no way to obtain its worst-case execution time immediately. Although not using worst-case execution times means the proposed algorithms cannot guarantee hard deadlines, they still try to avoid missing deadlines and are more flexible.
We implemented the proposed algorithm on a real platform, and the experimental results show that the proposed algorithms work well and are superior to the others in general performance.
When a user downloads an application from the Internet, there is no way for the system to obtain the worst-case execution time of this application immediately. Although not using the worst-case execution time of the tasks makes the proposed algorithms unable to guarantee hard deadlines, the proposed algorithm still tries to avoid missing deadlines and remains more flexible. We implemented the proposed algorithm on a real platform, and the experimental results show that the proposed algorithms work well and are superior to the others in general performance. ## 6. Conclusion This paper presented a solution to the problems of load dispatch and power saving in a real-time system on a multicore platform, called power and deadline-aware multicore scheduling. The proposed algorithm simultaneously considers dynamic power, static power, and load balance. To reduce the dynamic power, we implemented MEDF and fine-tuned the parameters of D3VFS to save more power and improve performance. The concept of load imbalance was introduced to save static power. Instead of dispatching the workload to every processor core equally, the proposed algorithm powers only a subset of the processor cores and puts the other, unnecessary cores into sleep mode or turns them off. Finally, the deadline is used as a novel strategy for load balance between the processor cores in active mode. Combining load imbalance and load balance, this paper proposed a two-level task dispatch algorithm called the two-level deadline-aware hybrid load balancer. To verify that the proposed algorithms are useful, we implemented them on a multicore platform, PACDuo. We also implemented some load balance algorithms and frequency scaling algorithms for comparison. Experimental results show that, compared to six combinations of load balance algorithms and frequency scaling algorithms, the proposed algorithms can reduce energy consumption by up to 54.2%, and the performance of the proposed algorithms is superior to the others. 
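The two-level idea summarized above can be sketched roughly as follows. This is our own simplification with hypothetical names and parameters, not the paper's exact PDAMS algorithm: the first level (load imbalance) activates only as many cores as the total load requires so the rest may sleep, and the second level (load balance) deals tasks out in deadline order so each active core receives a uniform spread of deadlines.

```python
import math

def dispatch(tasks, total_cores, capacity_per_core):
    """Two-level dispatch sketch: load imbalance, then deadline-aware balance.

    Level 1 (load imbalance): activate only enough cores for the total load,
    leaving the remaining cores free to sleep and save static power.
    Level 2 (deadline-aware balance): sort tasks by deadline and deal them
    round-robin, so each active core gets a uniform spread of deadlines.
    """
    total_load = sum(load for _, load in tasks)
    active = min(total_cores, max(1, math.ceil(total_load / capacity_per_core)))
    queues = [[] for _ in range(active)]
    for i, (deadline, load) in enumerate(sorted(tasks)):
        queues[i % active].append((deadline, load))
    return queues

# (deadline, load) pairs; 4 cores available, each with capacity 1.0
tasks = [(10, 0.4), (20, 0.4), (30, 0.4), (40, 0.4)]
queues = dispatch(tasks, total_cores=4, capacity_per_core=1.0)
print(len(queues))  # 2: only two cores are activated; the other two may sleep
```

Dealing the deadline-sorted tasks round-robin is one simple way to make the deadline distribution on each active core uniform, which is the property the second level aims for.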
However, much work still needs to be completed in the future. Some areas for future study include (1) adding theoretical analysis to support the proposed algorithm, (2) modelling the energy consumption in more detail, (3) considering the demands of hard real-time and task migration while keeping the algorithm light-weight, and (4) introducing the concept of heuristic algorithms to improve the proposed algorithms. --- *Source: 101529-2014-04-14.xml*
# A High Performance Load Balance Strategy for Real-Time Multicore Systems

**Authors:** Keng-Mao Cho; Chun-Wei Tsai; Yi-Shiuan Chiu; Chu-Sing Yang

**Journal:** The Scientific World Journal (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101529
--- ## Abstract Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor, the task deadline. Experiment results show that the proposed algorithm can greatly reduce energy consumption, by up to 54.2%, and the number of deadlines missed, as compared to the other scheduling algorithms outlined in this paper. --- ## Body ## 1. Introduction To promote convenience in people’s lives, “smart” has become a new requirement for various products [1], which in turn has driven the development of embedded systems. Embedded systems are now widely used in our daily life, such as in digital appliances, network devices, portable devices, and diversified information products [2–8]. Various applications are employed in these devices, and multimedia applications are especially prevalent [9–11]. In order to support the plethora of applications, particularly multimedia-related signal processing, superior performance of embedded systems is required. Along with the increasing demand, the system energy consumption is also increasing. As a matter of fact, the advancement in battery technologies has been slower than the advancement of computing speed and the consequent processor energy consumption. Due to these reasons and to enhance the performance of modern embedded systems [12–14], the system needs to (1) provide more computation power and (2) reduce power consumption while maintaining performance. To enhance the performance of the embedded system, a multicore architecture is one of the possible solutions, as it allows the system to process numerous jobs simultaneously by parallel computation. 
Keeping every processor core of the system at high utilization is an important issue for achieving high performance. In order to maximize the parallel computing of a multicore system, load balance becomes an issue that needs to be considered when scheduling. Round robin is one of the simple methods to dispatch tasks in a multicore system [15], where tasks are dispatched to processor cores in a rotated order. Shortest queue first [16] is another method that is often used. In this method, tasks are assigned to the processor core with the shortest waiting queue. To find the shortest queue, the number of tasks or the total computation time of tasks on the processor core can be used to represent the queue length. The latter is also called shortest response time first, and requires a priori knowledge about task service times. Additionally, the utilization of a processor is usually considered as the criterion in load balance. To generate the maximum balanced load, tasks should be assigned to the processor core with the lowest utilization [17]. In addition to the performance of the embedded system, energy consumption is also an important issue. Over the last decade, manufacturers have been competing to improve the performance of processors by raising the clock frequency. Under the same technology level and manufacturing processes, the higher operating frequencies of a CMOS-based processor require a higher supply voltage. The dynamic power consumption (P_dynamic) of a CMOS-based processor is related to the operating frequency (f) and supply voltage (V_dd) as P_dynamic ∝ V_dd² × f. Thus, a higher operating frequency results not only in higher performance but also in higher power consumption. Due to the fact that devices which use batteries carry limited energy, research on power saving has received increasing attention, where dynamic voltage and frequency scaling (DVFS) techniques are often applied to extend the battery life of portable devices. 
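The relation P_dynamic ∝ V_dd² × f implies a useful back-of-the-envelope fact: for a fixed amount of work, lowering the frequency alone does not reduce dynamic *energy* (the task simply runs longer), but lowering the voltage along with it does. A small sketch of this arithmetic, with purely illustrative values and an assumed proportionality constant k:

```python
def dynamic_energy(v_dd, freq, cycles, k=1.0):
    """Energy for a fixed amount of work under P_dynamic = k * V_dd^2 * f.

    Power scales with V_dd^2 * f, but the execution time for a fixed
    cycle count scales with 1/f, so energy ~ k * V_dd^2 * cycles.
    """
    power = k * v_dd ** 2 * freq  # dynamic power
    time = cycles / freq          # time to finish the work
    return power * time           # energy = power * time

full = dynamic_energy(v_dd=1.2, freq=1e9, cycles=1e9)
half = dynamic_energy(v_dd=0.6, freq=0.5e9, cycles=1e9)
print(half / full)  # ~0.25: energy falls with V_dd^2, independent of f
```

This is why DVFS lowers voltage and frequency *together*: the voltage reduction is what actually saves energy, and the frequency must drop with it to keep the circuit operating correctly.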
DVFS reduces the supply voltage and operating frequency of processors simultaneously to save energy when performance demand is low. Just as the human brain consumes a lot of energy, the processors of a system consume the majority of the energy too. Consequently, multicore architectures can benefit greatly from DVFS technology. In early multicore systems, all processor cores shared the same clock [18]. Under this architecture, DVFS can still be applied to save energy, but there are more limitations. The tradeoff between performance and energy consumption becomes more difficult. To support more flexible power management for multicore systems, the voltage and frequency island (VFI) technique [19, 20] has been developed, where processor cores are partitioned into groups, with processor cores belonging to the same group sharing one supply voltage and having the same processing frequency [21]. ### 1.1. Motivation In the past, most studies regarding scheduling on a multicore system [22–25] have not been designed for real-time systems. For some urgent tasks, raising the priority level of these tasks cannot satisfy the urgency completely. In this case, not only a priority but also a deadline will be used to express the character of this task. Tasks with deadlines are called real-time tasks. Nowadays, some studies have focused on scheduling for real-time multicore systems [26, 27]. However, these kinds of algorithms usually view guaranteeing the hard deadline as their main purpose, and therefore limitations arise. Also, these algorithms need more a priori knowledge of tasks. When implementing them into a real system, they must satisfy some requirements, such as fixed application, training, and specific information about the application. However, most portable devices execute nonspecific applications, which are usually not hard-real-time tasks. For example, users may download numerous applications onto their smart phones, where most are soft-real-time tasks and normal tasks. 
Unfortunately, it is difficult to know which applications will be executed on devices before users actually use them. This is why the design of algorithms for specific applications is not suitable, and the requirement of additional a priori knowledge is difficult to implement efficiently. Thus, to solve these problems and to consider the tradeoff between performance and energy consumption, this paper presents a solution to the problems of scheduling and power saving in a real-time system for a multicore platform. The proposed algorithms decrease the number of missed deadlines and simultaneously consider dynamic power, static power, load balance, and practicability. ### 1.2. Contribution The contributions of this paper are as follows. (1) A power and deadline-aware multicore scheduling algorithm is proposed. It is composed of two parts: a power-aware scheduling algorithm and a deadline-aware load dispatch algorithm. The proposed algorithm is simple and easy to implement and overcomes the problems related to many existing power-saving algorithms that are difficult to implement and not suitable for diverse applications. (2) In the frequency-scaling part of the power-aware scheduling algorithm, we propose a DVFS-based algorithm called ED3VFS. This algorithm uses task deadlines to determine when to scale the operating frequency and is able to adjust parameters dynamically to suit different task sets. Experimental results show that ED3VFS is very effective and flexible. (3) This paper also proposes a deadline-aware load dispatch algorithm, called the two-level deadline-aware hybrid load balancer. The proposed load dispatch algorithm includes two levels: the concept of load imbalance in the first level and a novel load balance strategy, distribution of task deadlines, in the second level. We also combined the other load balance strategies in the second level and let the proposed load dispatch algorithm deal with real-time tasks and normal tasks simultaneously. 
(4) We implemented the proposed load dispatch algorithm in Linux and ported the MicroC/OS-II real-time operating system kernel to a PACDSP on a PAC Duo platform and implemented the proposed power-aware scheduling algorithm in the real-time kernel. Experimental results show that the proposed algorithms work well in a real environment. ### 1.3. Organization The remainder of this paper is organized as follows. Section 2 gives a brief introduction to work related to scheduling in a multicore system. Section 3 discusses and defines the problems we aim to solve in this paper, as well as limitations and assumptions. Section 4 describes the proposed power and deadline-aware multicore scheduling algorithm. A performance evaluation of the proposed algorithm is presented in Section 5, with conclusion offered in Section 6. ## 2. Related Work ### 2.1. DVFS-Based Power Saving Technologies There are two strategies for using DVFS techniques to reduce energy consumption. The first strategy is scaling voltage and frequency at task slack time. When a processor serves a task, the operating frequency is multiplied by the ratio between the worst-case execution time (WCET) and the deadline of the task [28] to reduce power consumption, as shown in Figure 1. Shin et al. [29] combined offline and online components to satisfy the time constraints and reduce energy consumption. The offline component finds the lowest possible processor speed that satisfies all the time constraints, while the online component varies the operating speed dynamically to save more power. Since the task execution time may be changed slightly when executed, Salehi et al. [30] used an adaptive frequency update interval to follow sudden workload changes. The history data is used to predict the next workload and then, according to the prediction error, adjust the frequency update interval. Scaling voltage and frequency at task slack time. (a) Without DVFS and (b) with DVFS. 
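The first strategy sets the operating frequency in proportion to the WCET/deadline ratio: if a task's worst-case execution time is well under its deadline, the processor can run proportionally slower and still finish in time. A minimal sketch of that scaling rule (our own illustration of the idea described in [28], with hypothetical names and units):

```python
def slack_time_frequency(f_max, wcet, deadline):
    """Scale frequency by the WCET/deadline ratio (slack-time DVFS).

    A task whose WCET is well under its deadline can run proportionally
    slower and still finish in time; the result is capped at f_max.
    """
    ratio = wcet / deadline
    return f_max * min(1.0, ratio)  # never exceed f_max

# A task with a 4 ms WCET and a 10 ms deadline only needs 40% of f_max.
print(slack_time_frequency(f_max=1000, wcet=4, deadline=10))  # 400.0 (MHz)
```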
The second strategy is scaling voltage and frequency when accessing external peripherals. References [31, 32] pointed out that the operating speed of memory and peripherals is much lower than that of the processors. For tasks that are memory-bounded or I/O-bounded, the operating frequency of a processor can be decreased to save power while waiting for the external peripherals to finish their jobs, as shown in Figure 2. Liang et al. [33] proposed an approximation equation called the memory access rate-critical speed equation (MAR-CSE) and then defined and used the memory access rate (MAR) to predict its critical speed. Scaling voltage and frequency when accessing external peripherals. (a) Without DVFS and (b) with DVFS. ### 2.2. Scheduling on Real-Time Multicore Systems Because the classical approaches need a priori knowledge of the application to achieve the target, especially when real-time guarantees are provided, Lombardi et al. [34] developed a precedence constraint posting-based offline scheduler for uncertain task durations. This method uses the average duration of a task to replace the probability distribution and calculates an approximate completion time by this cheaper-to-obtain information. Kim et al. [35] presented two pipeline time balancing schemes, namely, workload-aware task scheduling (WATS) and applied database size control (ADSC). Because the execution time of each pipeline stage will change along with the input data, different execution times in each pipeline stage reduce the performance of a system. To achieve higher performance, the pipeline time of each pipeline stage must be in a balanced state. The basic idea of the pipeline time balance schemes is monitoring and modifying the parameter value of the function in each pipeline stage, thereby allowing the execution time of each pipeline stage to be close to the same average value. Jia et al. 
[36] presented a novel static mapping technique that maps a real-time application onto a multiprocessor system, which optimizes processor usage efficiency. The proposed mapping approach is composed of two algorithms: task scheduling and cluster assignment. In task scheduling, the tasks are scheduled into a set of virtual processors. Tasks that are assigned to the same virtual processors share the maximized data, while data shared among virtual processors is minimized. The goal of cluster assignment is to assign virtual processors to real processors so that the overall memory access cost is minimized. In addition to balancing the utilization of each processor core, how to tackle the communications among tasks with performance requirements and precedence constraints is another challenge in the scheduling on real-time multicore systems. Hsiu et al. [37] considered the problem of scheduling real-time tasks with precedence constraints in multilayer bus systems and minimized the communication cost. They solved this problem via a dynamic-programming approach. First, they proposed a polynomial-time optimal algorithm for a restricted case, where one multilayer bus and unit execution time and communication time are considered. The result was then extended as a pseudopolynomial-time optimal algorithm to consider multiple multilayer buses. To consider transition overhead and design for applications with loops, Shao et al. [38] proposed a real-time loop scheduling algorithm called dynamic voltage loop scheduling (DVLS). In DVLS, the authors succeeded in repeatedly regrouping a loop based on rotation scheduling and decreased the energy consumed by DVS within a timing constraint. In addition to the abovementioned studies, there are many research directions and issues regarding real-time multicore systems. For real-time applications, it is common to estimate the worst case performance early in the design process without actual hardware implementation. 
It is a challenge to obtain the upper bound on the worst case response time considering practical issues such as multitask applications with different task periods, precedence relations, and variable execution times. Yet, Yang et al. [39] proposed an analysis technique based on mixed integer linear programming to estimate the worst case performance of each task in a nonpreemptive multitask application on a multiprocessor system. Seo et al. [26] tackled the problem of reducing power consumption in a periodic real-time system using DVS on a multicore processor. The processor was assumed to have the limitation that all cores must run at the same performance level. Thus, to reduce the dynamic power, they proposed a dynamic repartitioning algorithm. The algorithm dynamically balances the task loads of multiple cores to optimize power consumption during execution. Further, they proposed a dynamic core scaling algorithm, which adjusts the number of active cores to reduce leakage power consumption under low load conditions. Cui and Maskell [40] proposed a look-up table-based event-driven thermal estimation method. Fast event-driven thermal estimation is based upon a thermal map, which is updated only when a high level event occurs. They developed a predictive future thermal map and proposed several predictive task allocation policies based on it. Differing from the utilization-based policy, they used the thermal-aware policies to reduce the peak temperature and average temperature of a system. Han et al. [27] presented synchronization-aware energy management schemes for a set of periodic real-time tasks that access shared resources. The mapping approach allocates tasks accessing the same resources to the same core to effectively reduce synchronization overhead. They also proposed a set of synchronization-aware slack management policies that can appropriately reclaim, preserve, release, and steal slack at runtime to slow down the execution of tasks and save more energy. 
Chen et al. [41] explored the online real-time task scheduling problem in heterogeneous multicore systems and considered tasks with precedence constraints and nonpreemptive task execution. In their assumption, the processor and the coprocessor have a master-slave relationship. Each task will first be executed on the processor and then dispatched to the coprocessor. During online operation, each task is tested by admission control, which ensures the schedulability. Since the coprocessor is nonpreemptive, to deal with the problem of a task having too large a blocking time, the authors inserted the preemptive points to configure the task blocking time and context switch overhead in the coprocessor. ### 2.3. Summary To extend the system lifetime for energy-limited devices, one of the possible ways is to use DVFS-based technology [28, 29, 31, 32] to save energy. Because the requirements will change when the real-time system is being used, other studies [27, 42, 43] have combined DVFS technology and a real-time scheduler to meet the time constraint while reducing energy consumption. Now, under the environment of multicore architectures, researchers have proposed multicore schedulers, which can meet real-time constraints and consume lower energy. Along with the development of technology, the algorithms have become much more complex and have increasing restrictions when multiple issues need to be considered simultaneously. As a consequence, these algorithms become difficult to implement and work in real environments. Thus, this paper relaxed some limitations to allow the proposed algorithm to be easier to implement and work well in real environments with simultaneous consideration to the real-time, power, and load balance issues. ## 2.1. DVFS-Based Power Saving Technologies There are two strategies for using DVFS techniques to reduce energy consumption. The first strategy is scaling voltage and frequency at task slack time. 
When a processor serves a task, the operating frequency is multiplied by the rate between the worst-case execution time (WCET) and the deadline of the task [28] to reduce power consumption, as shown in Figure 1. Shin et al. [29] combined offline and online components to satisfy the time constraints and reduce energy consumption. The offline component finds the lowest possible processor speed that satisfies all the time constraints, while the online component varies the operating speed dynamically to save more power. Since the task execution time may be changed slightly when executed, Salehi et al. [30] used an adaptive frequency update interval to follow sudden workload changes. The history data is used to predict the next workload and then, according to the prediction error, adjust the frequency update interval.Scaling voltage and frequency at task slack time. (a) Without DVFS and (b) with DVFS. (a) (b)The second strategy is scaling voltage and frequency when accessing external peripherals. References [31, 32] pointed out that the operating speed of memory and peripherals is much lower than that of the processors. For tasks that are memory-bounded or I/O-bounded, the operating frequency of a processor can be decreased to save power while waiting for the external peripherals to finish their jobs, as shown in Figure 2. Liang et al. [33] proposed an approximation equation called the memory access rate-critical speed equation (MAR-CSE) and then defined and used the memory access rate (MAR) to predict its critical speed.Scaling voltage and frequency when access external peripheral. (a) Without DVFS and (b) with DVFS. (a) (b) ## 2.2. Scheduling on Real-Time Multicore Systems Because the classical approaches need a priori knowledge of the application to achieve the target, especially when real-time guarantees are provided, Lombardi et al. [34] developed a precedence constraint posting-based offline scheduler for uncertain task durations. 
This method uses the average duration of a task to replace the probability distribution and calculates an approximate completion time by this cheaper-to-obtain information. Kim et al. [35] presented two pipeline time balancing schemes, namely, workload-aware task scheduling (WATS) and applied database size control (ADSC). Because the execution time of each pipeline stage will change along with the input data, different execution times in each pipeline stage reduce the performance of a system. To achieve higher performance, the pipeline time of each pipeline stage must be in a balanced state. The basic idea of the pipeline time balance schemes is monitoring and modifying the parameter value of the function in each pipeline stage, thereby allowing the execution time of each pipeline stage to be close to the same average value. Jia et al. [36] presented a novel static mapping technique that maps a real-time application onto a multiprocessor system, which optimizes processor usage efficiency. The proposed mapping approach is composed of two algorithms: task scheduling and cluster assignment. In task scheduling, the tasks are scheduled into a set of virtual processors. Tasks that are assigned to the same virtual processors share the maximized data, while data shared among virtual processors is minimized. The goal of cluster assignment is to assign virtual processors to real processors so that the overall memory access cost is minimized.In addition to balancing the utilization of each processor core, how to tackle the communications among tasks with performance requirements and precedence constraints is another challenge in the scheduling on real-time multicore systems. Hsiu et al. [37] considered the problem of scheduling real-time tasks with precedence constraints in multilayer bus systems and minimized the communication cost. They solved this problem via a dynamic-programming approach. 
First, they proposed a polynomial-time optimal algorithm for a restricted case, where one multilayer bus and the unit execution time and communication time are considered. The result was then extended as a pseudopolynomial-time optimal algorithm to consider multiple multilayer buses. To consider transition overhead and design for applications with loops, Shao et al. [38] proposed a real-time loop scheduling-algorithm called dynamic voltage loop scheduling (DVLS). In DVLS, the authors succeeded in repeatedly regrouping a loop based on rotation scheduling and decreased the energy consumed by DVS within a timing constraint.In addition to the abovementioned studies, there are many research directions and issues regarding real-time multicore systems. For real-time applications, it is common to estimate the worst case performance early in the design process without actual hardware implementation. It is a challenge to obtain the upper bound on the worst case response time considering practical issues such as multitask applications with different task periods, precedence relations, and variable execution times. Yet, Yang et al. [39] proposed an analysis technique based on mixed integer linear programming to estimate the worst case performance of each task in a nonpreemptive multitask application on a multiprocessor system. Seo et al. [26] tackled the problem of reducing power consumption in a periodic real-time system using DVS on a multicore processor. The processor was assumed to have the limitation that all cores must run at the same performance level. And so to reduce the dynamic power, they proposed a dynamic repartitioning algorithm. The algorithm dynamically balances the task loads of multiple cores to optimize power consumption during execution. 
Further, they proposed a dynamic core scaling algorithm, which adjusts the number of active cores to reduce leakage power consumption under low load conditions.Cui and Maskell [40] proposed a look-up table-based event-driven thermal estimation method. Fast event driven thermal estimation is based upon a thermal map, which is updated only when a high level event occurs. They developed a predictive future thermal map and proposed several predictive task allocation policies based on it. Differing from the utilization-based policy, they used the thermal-aware policies to reduce the peak temperature and average temperature of a system. Han et al. [27] presented synchronization-aware energy management schemes for a set of periodic real-time tasks that accesses shared resources. The mapping approach allocates tasks accessing the same resources to the same core to effectively reduce synchronization overhead. They also proposed a set of synchronization-aware slack management policies that can appropriately reclaim, preserve, release, and steal slack at runtime to slow down the execution of tasks and save more energy. Chen et al. [41] explored the online real-time task scheduling problem in heterogeneous multicore systems and considered tasks with precedence constraints and nonpreemptive task execution. In their assumption, the processor and the coprocessor have a master-slave relationship. Each task will first be executed on the processor and then dispatched to the coprocessor. During online operation, each task is tested by admission control, which ensures the schedulability. Since the coprocessor is nonpreemptive, to deal with the problem of a task having too large a blocking time, the authors inserted the preemptive points to configure the task blocking time and context switch overhead in the coprocessor. ## 2.3. Summary To extend the system lifetime for energy-limited devices, one of the possible ways is to use DVFS-based technology [28, 29, 31, 32] to save energy. 
Because requirements change while a real-time system is in use, other studies [27, 42, 43] have combined DVFS technology with a real-time scheduler to meet timing constraints while reducing energy consumption. In multicore architectures, researchers have since proposed multicore schedulers that meet real-time constraints while consuming less energy. As the technology has developed, these algorithms have become much more complex and carry increasing restrictions when multiple issues must be considered simultaneously. As a consequence, they are difficult to implement and to operate in real environments. This paper therefore relaxes some limitations so that the proposed algorithm is easier to implement and works well in real environments, while simultaneously considering real-time, power, and load-balance issues.

## 3. Problem Definition

The DVFS-based power-aware scheduling problem is defined as finding a schedule that satisfies all the constraints of a system while consuming less energy to execute tasks. Unlike traditional scheduling on a single-core system, scheduling on a multicore system must decide not only the execution order of tasks but also which tasks should be executed on which processor core. A good load dispatch can improve performance and reduce energy consumption, so load dispatch is a very important issue in multicore scheduling. In this paper, we divide the power-aware multicore scheduling problem into load dispatch and power-aware scheduling and propose different algorithms to solve each. Additionally, the key points of using the DVFS technique to reduce the energy consumption of a system are deciding when the operating voltage and frequency should be adjusted and selecting the operating state.
To relax these limitations and make the proposed algorithm easier to implement and more lightweight, deadline misses are allowed in this research. The problem considered in this paper is defined as follows and is illustrated in Figure 3.

Figure 3: System model.

System Model. There is one master processor unit and n slave processor cores in the system; each processor core runs its own operating system and can scale its operating voltage and frequency independently. When tasks are released, the master processor unit exchanges status information with each slave processor core via IPC and dispatches each task to a suitable slave processor core; the slave processor cores then schedule the tasks dispatched to them individually. Under this architecture, the proposed algorithm applies to either homogeneous or heterogeneous multicore systems, and each processor core can manage itself. In this work, the platform used contains an ARM core as the master processor unit and two DSPs.

Input. A task set T = {T_real, T_normal}, where T_real is the set of real-time tasks and T_normal is the set of normal tasks. Each real-time task is represented as (Release_time, Priority, Related_Deadline) and can be periodic or aperiodic. In a dynamic environment it is difficult to obtain full information about tasks, and the overhead of invoking an optimization method on every task release is too heavy. Therefore, we schedule real-time tasks without considering their execution times. Although hard deadlines are not guaranteed, methods are proposed to decrease the probability of missing a deadline. Normal tasks are represented only by (Release_time, Priority). In general, the execution order of real-time tasks is based on their absolute deadlines; the Priority of a real-time task is used only when two or more absolute deadlines are identical.
The details of the scheduling algorithm are described in Section 4.

Output. A set of feasible schedules, S = {s_1, s_2, ..., s_n}, where s_i is the scheduling result of the ith slave processor core together with the operating-voltage and frequency scaling produced by the proposed scaling algorithm.

Objective. To minimize the total energy consumption E_total, the objective function is expressed as

(1) Minimize(E_total),

(2) E_total = Σ_i E_i,

(3) E_i = Σ_j E_ij,

where E_i is the energy consumption of the ith slave processor core and E_ij is the energy consumption of the jth task on the ith slave processor core, given by

(4) E_ij = Σ_k (P_ijk × t_ijk),

(5) P_ijk = c × V_ijk² × f_ijk,

where P_ijk is the power drawn by the jth task during the kth time slice on the ith slave processor core, as expressed in (5), and t_ijk is the duration of the kth time slice of the jth task on the ith slave processor core. Since the operating mode does not always stay the same for a given task under the proposed algorithm, the energy of each time slice must be calculated individually and then summed. In (5), c is the load capacitance, V_ijk is the operating voltage, and f_ijk is the operating frequency of the jth task at the kth time slice on the ith slave processor core.

Tasks are allowed to finish after their deadlines in this paper. The processing speed is increased when a deadline is missed so that the task can be finished faster. Thus, the performance constraint can be expressed as

(6) Minimize(Σ_i M_i),

where M_i indicates whether the deadline of the ith task is missed and is defined as

(7) M_i = 1 if ft_i > d_i, and M_i = 0 otherwise,

where ft_i is the finish time and d_i is the absolute deadline of the ith task.

## 4. Power and Deadline-Aware Multicore Scheduling

In this section, an efficient multicore scheduling algorithm, called power and deadline-aware multicore scheduling, is presented. It integrates three parts (modules): (1) mixed-earliest deadline first (MEDF) [42]; (2) enhanced deadline-driven dynamic voltage and frequency scaling (ED3VFS); and (3) the two-level deadline-aware hybrid load balancer (TLDHLB). Among them, MEDF schedules the tasks that have been dispatched to a processor core. ED3VFS is an enhanced version of D3VFS [42] and scales the operating mode of each slave processor core. Finally, TLDHLB performs task dispatch and is composed of two levels: the first level is a load-imbalance strategy, and the second level is load balance. For example, when a new task that needs to be served by a DSP arrives, TLDHLB dispatches it to a DSP with consideration of load balance. After the DSP receives the task, MEDF schedules it. At the same time, ED3VFS runs on the DSP periodically to reduce energy consumption.

### 4.1. Mixed-Earliest Deadline First

The original scheduling algorithm in the MicroC/OS-II kernel is a basic priority-based scheduler. To support real-time tasks and normal tasks at the same time, mixed-earliest deadline first replaces the original scheduling algorithm. MEDF combines EDF and fixed-priority scheduling. For real-time tasks, it uses EDF. When two or more real-time tasks have identical deadlines, or for normal tasks, MEDF uses fixed-priority scheduling to decide the execution order.
Moreover, MEDF always selects real-time tasks first when real-time tasks and normal tasks are in the ready queue simultaneously.

To cooperate with TLDHLB in saving static power, we modified MEDF to turn off a processor core when its highest-priority ready task is the idle task, that is, when no real-time or normal task is waiting. This means that when a processor core finishes all of its tasks, it turns itself off to save power.

### 4.2. Enhanced Deadline-Driven Dynamic Voltage and Frequency Scaling

D3VFS scales the operating mode dynamically according to the activity of the system. α and β are two parameters used in D3VFS, both set to 10: D3VFS raises the operating mode when the system remains busy for α time units and lowers it when no deadline has been missed for β time units. Observations of D3VFS inspired a better strategy for setting α and β that enhances the performance of the scheduling system. First, in D3VFS, the relative deadlines of tasks are not required to be longer than 10 or any fixed threshold. When the shortest relative deadline is shorter than this value of 10 (the threshold), α and β become negative, which must be amended. Second, the purposes of α and β are not exactly the same, so their impacts differ. The larger α is, the longer the processor stays at a lower speed, because the system raises the operating frequency at a slower rate; this saves more power at the cost of computing capacity. On the other hand, β governs decreasing the operating frequency: a larger β lets the system stay at high speed for longer, so performance is better but more power is consumed. Third, more than one task runs in a system simultaneously, and in a real environment these tasks are not always the same.
The best setting for each task set is different, so allowing different settings for different task sets is more flexible.

To solve the aforementioned problems, this paper proposes a different concept that improves the performance of power-saving algorithms and conforms to the real situations a working system encounters. Pseudocode 1 shows the pseudocode of ED3VFS. The basic idea of ED3VFS is that the settings of α and β are free to change along with different task sets, and the two parameters may take two different values. Following this idea, we ran a series of experiments to find a better setting and to verify that the new setting is superior to the original D3VFS setting. The experimental settings comprise two main groups, described as follows.

(i) The first group is like the original D3VFS: α and β are equal. There are four settings in this group, α = β = D_sr × 0.2, 0.4, 0.6, and 0.8, where D_sr is the shortest relative deadline.

(ii) The second group, used in ED3VFS, also features four settings. In this group, α and β take different values: α = D_sr − β with β = D_sr × 0.8, 0.6, 0.4, and 0.2. As noted above, the effects of α and β are opposite, so we gave them opposite values.

Table 1 lists the overall settings of α and β in these experiment series.

Table 1: Overall settings of α and β.

| Setting | Group 1 (α = β): α | Group 1: β | Group 2 (α = D_sr − β): α | Group 2: β |
|---|---|---|---|---|
| 1 | 0.2 D_sr | 0.2 D_sr | 0.2 D_sr | 0.8 D_sr |
| 2 | 0.4 D_sr | 0.4 D_sr | 0.4 D_sr | 0.6 D_sr |
| 3 | 0.6 D_sr | 0.6 D_sr | 0.6 D_sr | 0.4 D_sr |
| 4 | 0.8 D_sr | 0.8 D_sr | 0.8 D_sr | 0.2 D_sr |

Pseudocode 1: Enhanced deadline-driven dynamic voltage and frequency scaling (ED3VFS).

```
(1)  Initially, set the power mode of the DSP to the default level f_d.
(2)  On every timer interrupt:
(3)    If (there is any real-time task)
(4)      Set α = D_sr × 0.2.
(5)      Set β = D_sr − α.
(6)      If (a deadline is missed)
(7)        Extend the deadline of the task that missed it by d_e ticks.
(8)        Raise the power mode.
(9)      Else if (utilization of the DSP > S_r for α ticks)
(10)       Raise the power mode.
(11)     Else if (no deadline has been missed for β ticks)
(12)       Lower the power mode.
(13)     Else
(14)       Set the power mode of the DSP to the default level f_d.
(15)   End If
```

Figure 4 shows the energy comparison between the two settings, namely the original D3VFS and ED3VFS. The vertical axis shows energy consumption, and the horizontal axis shows the setting of α, where α = D_sr × 0.2, 0.4, 0.6, and 0.8. The first energy measurement is used to normalize all other results, making the differences easier to compare. The results show that the energy consumption of ED3VFS is lower than that of the original D3VFS in most cases. Figure 5 shows the performance comparison between the original D3VFS and ED3VFS, using the number of missed deadlines as the criterion; for a real-time system, fewer missed deadlines is better. The results show that ED3VFS outperforms the original D3VFS except when α = D_sr × 0.8. Although the proposed power-saving algorithm does not guarantee hard deadlines, we still try to prevent deadline misses. According to the experimental results in Figure 5, two settings are candidates, α = D_sr × 0.2 and α = D_sr × 0.4 with β = D_sr − α; no deadline was missed in either case. Since the energy consumption of α = D_sr × 0.2 is less than that of α = D_sr × 0.4, we chose α = D_sr × 0.2 and β = D_sr − α as the final setting of ED3VFS. With this setting, both the energy consumption and the performance of ED3VFS are superior to the original D3VFS. If energy consumption matters more than performance in a given system, α = D_sr × 0.6 with β = D_sr − α is also a good choice.
In that case, more energy can be saved while a certain level of performance is maintained.

Figure 4: Energy comparison between the original D3VFS and ED3VFS.

Figure 5: Performance comparison between the original D3VFS and ED3VFS.

### 4.3. Two-Level Deadline-Aware Hybrid Load Balancer

For systems limited by battery power, letting all processor cores run continuously in active mode is not a good idea, as much energy is wasted. In a multicore system, balancing the workload among the cores can reduce the completion times of all tasks; processor cores can then stay in sleep mode for longer and save more energy.

This paper presents a novel task dispatch algorithm called the two-level deadline-aware hybrid load balancer (TLDHLB). The first level is a load-imbalance strategy used to save static power, inspired by [44]. Its basic idea is to dispatch tasks to the processor cores already working in active mode and to turn processor cores off when all their tasks are finished. For example, suppose there are one MPU and two DSPs in the system. Initially, the system turns off both DSPs until there are tasks that need to be processed by a DSP, as shown in Figure 6(a). When task1 is released, the MPU checks the state of the DSPs; if no DSP is in active mode, it turns on DSP1 and dispatches task1 to it. Under ED3VFS, DSP1 starts at the default speed, normally the lowest, as shown in Figure 6(b). Figure 6(c) shows that if taskn is released at time_n while DSP1 is working at full speed, DSP2 is turned on and taskn is dispatched to it. Figure 6(d) shows that after DSP1 finishes all the tasks assigned to it, MEDF turns it off.

Figure 6: Example of load imbalance. (a) Initial state. (b) Task1 released; DSP1 turned on. (c) Taskn released while DSP1 works at full speed; DSP2 turned on. (d) DSP1 finished all tasks dispatched to it; DSP1 turned off.
The second level handles load balance. When two or more DSPs are working in active mode, the load-balance strategies of the second level are used to dispatch newly released tasks. Unlike traditional systems that contain no real-time tasks, our assumed system holds real-time tasks and normal tasks simultaneously. Traditional load-balance strategies were not designed for real-time systems, so we propose a new dispatch criterion for real-time tasks and combine it with other criteria so that our load-balance algorithm can handle real-time and normal tasks simultaneously with improved robustness.

The novel strategy uses the distribution of task deadlines as the criterion for load balance. According to our observation, the more uniform the distribution of task deadlines, the lower the probability of missing a deadline. Figure 7 is a simple example that supports this observation. There are two DSPs and four tasks. Figure 7(a) shows a dispatch result whose distribution of task deadlines is uneven: task1 finishes exactly at its deadline d_1, so there is not enough time to execute task2 and a deadline miss occurs; a similar situation occurs on DSP2 for task4. Figure 7(b) shows a different dispatch result with a more uniform distribution of task deadlines. Here the time slot between two deadlines is longer, which means there is more time to execute the next task once a task finishes, so the probability of a deadline miss is lower. The question, then, is how to express the distribution of task deadlines.

Figure 7: Example of real-time task dispatch. (a) Distribution of task deadlines is uneven. (b) Distribution of task deadlines is uniform.

In this paper, the variance of task deadlines is used as the feature of the deadline distribution.
Since variance expresses how far a set of numbers is spread out, it can express the density of the data distribution, which is what we need: a smaller variance of task deadlines implies that the time slots between deadlines are shorter. Equation (8) is the formula of the variance, where N is the number of data points, x_i is the ith data point, and x̄ is the mean:

(8) Var(X) = (1/N) Σ_{i=1}^{N} (x_i − x̄)².

Besides the distribution of task deadlines, three other strategies are combined to dispatch normal tasks; they can also be used for real-time tasks when the uniformity of the deadline distributions is equal. The first strategy is the execution order of the task: the dispatcher sends a task to the DSP that gives it the earlier position in the queue. For example, assume there are two DSPs with two tasks on each. When a new task is released, if it would run second on DSP1 but third on DSP2, it is dispatched to DSP1. The second strategy is the number of tasks: the dispatcher sends a task to the DSP that currently holds the fewest tasks. When the dispatcher cannot decide via the three strategies above, the last strategy is used; it simply chooses the active DSP with the smallest serial number.
The pseudocode of two levels deadline-aware hybrid load balancer is shown in Pseudocode 2, where DS P all is the set of all DSPs, DS P off ⁡ is the set of DSPs that are not staying in active mode, DS P t is the target that the task will be dispatched to, DS P on ⁡ is the set of DSPs that are staying in active mode, DS P fs is the set of DSPs that are working at full speed, function g e t S t a t u s ( ) is used to obtain the status information of the DSPs, function m i n S e r i a l ( ) returns the DSP whose serial number is the minimum, function N u m ( ) returns the number of input data, function m a x U n i f o r m i t y ( ) returns the DSP whose uniformity of deadline distribution is the maximum, function m i n O r d e r ( ) returns the DSP that can provide the highest priority to new released task and function, m i n T a s k N u m ( ) returns the DSP whose number of tasks is the fewest.Pseudocode 2:Pseudocode of two levels deadline-aware hybrid load balancer. (1) Initially, turning off all of DSPs,DSP off ⁡ = DSP all. (2) When a task that needs be processed by DSP is released (3)  g e t S t a t u s ( DSP all ) (4) % First level for load unbalance% (5)  If (DSP off ⁡ = = DSP all) (6) DSP t = m i n S e r i a l ( DSP off ⁡ ) (7) Turn on m i n S e r i a l ( DSP off ⁡ ). (8)  Else if (DSP fs = = DSP on ⁡) (9)  if (DSP on ⁡ = = DSP all) (10) go to line 21 (11)   else (12) DSP t = m i n S e r i a l ( DSP off ⁡ ) (13) Turn on m i n S e r i a l ( DSP off ⁡ ). (14) Else if (N u m ( DSP on ⁡ - DSP fs ) = = 1) (15) DSP t = DSP on ⁡ - DSP fs (16) Else (17) go to line 21 (18) Dispatch the task to DSP t. 
(19) End (20) % Second level for load balance% (21) If (N u m ( m a x U n i f o r m i t y ( DS P on ⁡ ) ) = = 1) (22) DSP t = m a x U n i f o r m i t y ( DSP on ⁡ ) (23) Else if (N u m ( m i n O r d e r ( DSP on ⁡ ) ) = = 1) (24) DSP t = m i n O r d e r ( DSP on ⁡ ) (25) Else if (N u m ( m i n T a s k N u m ( DSP on ⁡ ) ) = = 1) (26) DSP t = m i n T a s k N u m ( DSP on ⁡ ) (27) Else (28) DSP t = m i n S e r i a l ( DSP on ⁡ ) (29) Dispatch the task to DSP t. (30) EndWhat is worth noticing is that when calculating the uniformity of deadline distribution, we should take the newly released task into account because we want to find a DSP whose deadline distribution is still uniform after inserting the newly released task. Furthermore,m a x U n i f o r m i t y ( ), m i n O r d e r ( ), and m i n T a s k N u m ( ) may return more than one DSP, when there are two or more DSPs with the same status. In that case, TLDHLB will use another strategy to dispatch the task and is why we combined four criteria to become a hybrid strategy. ## 4.1. Mixed-Earliest Deadline First The original scheduling algorithm used in the MicroC/OS-II kernel is a fundamental priority-based scheduling algorithm. To support real-time tasks and normal tasks in the same time, mixed-earliest deadline first is selected to replace the original scheduling algorithm. MEDF combined EDF and fixed-priority scheduling. For real-time tasks, it uses EDF to schedule the tasks. When there are two or more deadlines of real-time tasks that are identical or for normal tasks, MEDF uses fixed-priority scheduling to decide the execution order. Moreover, MEDF will always select real-time tasks first when there are real-time tasks and normal tasks in the ready queue simultaneously.To cooperate with TLDHLB to save static power, we modified MEDF to let it turn off the processor cores while the ready task, which has the highest priority, is idle while there is no real-time task or other normal tasks. 
This means that when a processor core finishes all tasks, it will turn itself off to save power. ## 4.2. Enhanced Deadline-Driven Dynamic Voltage and Frequency Scaling The D3VFS scales operating mode dynamically by the active status of a system. α and β are two parameters used in D3VFS and are set as 10. D3VFS will scale operating modes while the system continues to be busy for α time units or when no deadlines are missed for β time units. Inspired by observations of D3VFS, we present a better strategy to set parameters α and β to enhance the performance of the scheduling system. First, in D3VFS, the related deadlines of tasks are not needed to be longer than 10 or a fixed threshold. When the shortest related deadline is shorter than this value of 10 (threshold), α and β become negative, which should be amended. Second, the purports of α and β are not exactly the same, and so their impacts are different. The bigger value we set to α, which is the longer time that the processor stays in lower speed, because the system will increase the operating frequency in a slower rate. This leads to more power savings with worse computing capacity. On the other hand, β is used for decreasing the operating frequency. A bigger β will allow the system to stay in high speed for longer, so that the performance will be better, but more power is consumed. Third, there is more than one task working in a system simultaneously. And in real environment, these tasks may not always be the same. The best setting for each task is different, and so giving different settings for different task sets is more flexible.To solve these aforementioned problems, this paper proposes a different concept to improve the performance of power saving algorithms and conforms to real situations while the system is working. Pseudocode1 shows the pseudocode of ED3VFS. 
The basic idea of ED3VFS is that the settings of α and β are free to change along with different task sets and these two parameters will be set to two different values. According to the basic idea of ED3VFS, we ran a series of experiments to find a better setting and to verify that the new setting is superior to the original D3VFS setting. The experimental settings include two main groups and are described as follow.(i) The first group is just like the original D3VFS, and the settings of α and β are the same. There are four settings in this group, and the four settings are α = β = D sr ⁡ × 0.2 , 0.4 , 0.6, and 0.8, where D sr ⁡ is the shortest related deadline. (ii) The second group used in ED3VFS also features four settings in. In this group, α and β will be set into different values. α = D sr ⁡ - β and β = D sr ⁡ × 0.8 , 0.6 , 0.4, and 0.2. Just as in the above, the effects of α and β are opposite, so we gave them opposite values. Table 1 lists the overall settings of α and β in these experiment series.Table 1 Overall settings ofα and β. Settings Group 1 (α = β) Group 2 (α = D sr ⁡ - β) α β α β 1 0.2D sr ⁡ 0.2D sr ⁡ 0.2D sr ⁡ 0.8D sr ⁡ 2 0.4D sr ⁡ 0.4D sr ⁡ 0.4D sr ⁡ 0.6D sr ⁡ 3 0.6D sr ⁡ 0.6D sr ⁡ 0.6D sr ⁡ 0.4D sr ⁡ 4 0.8D sr ⁡ 0.8D sr ⁡ 0.8D sr ⁡ 0.2D sr ⁡Pseudocode 1:Pseudocode of enhanced deadline-driven dynamic voltage and frequency scaling (ED 3VFS). (1) Initially, setting the power mode of DSP to default levelf d. (2) Every timer interrupt occur (3)  If (There is any real-time task.) (4)   Settingα = D sr ⁡ × 0.2. (5)   Settingβ = D sr ⁡ - α. (6)   if (Deadline missed.) (7)Extending deadline of the task that missed deadline for d e ticks. (8)Rising power mode. (9)    else if (Utilization of DSP> S r for α ticks.) (10)Rising power mode. (11)   else if (There is no deadline missed forβ ticks.) (12)Falling power mode. (13)   else (14)Setting the power mode of DSP to default level f d. 
(15)  End IfFigure4 shows the energy comparison between the two different settings, namely, the original D3VFS and ED3VFS. The vertical axis shows energy consumption while the horizontal axis shows the setting of α, where α = D sr ⁡ × 0.2 , 0.4 , 0.6, and 0.8. In this research, the first experimental result of energy is used to normalize all of the other results and make them easy to see the differences in comparison. The results show that the energy consumption of ED3VFS is lower than the original D3VFS in most cases. Figure 5 shows the performance comparison between original D3VFS and ED3VFS. We used the number of deadlines missed as the criterion to compare their performance. For a real-time system, a lower number of missed deadlines are better. The results show that ED3VFS is better than the original D3VFS in performance, except for the case when α = D sr ⁡ × 0.8. Although the proposed power-saving algorithm does not guarantee the hard deadline, we still try to stop missed deadlines from happening. According to the experimental results shown in Figure 5, there are two settings we can choose, α = D sr ⁡ × 0.2 and 0.4 when β = D sr ⁡ - α. There was no deadline missed in these two cases. Since the energy consumption of α = D sr ⁡ × 0.2 is less than α = D sr ⁡ × 0.4, we chose α = D sr ⁡ × 0.2 and β = D sr ⁡ - α as the final setting of ED3VFS. In this setting, both energy consumption and performance of ED3VFS are superior to the original D3VFS. Actually, if energy consumption is more important than performance in a given system, setting α = D sr ⁡ × 0.6 and β = D sr ⁡ - α is also a good choice. In that case, more energy can be saved and a certain level of performance maintained.Figure 4 Energy comparison between original D3VFS and ED3VFS.Figure 5 Performance comparison between original D3VFS and ED3VFS. ## 4.3. 
Two-Level Deadline-Aware Hybrid Load Balancer For systems limited by battery power, letting all processor cores work continuously in active mode is not a good idea, as much energy will be consumed. In a multicore system, balancing the workload between each core can reduce the completion times of all tasks. Processor cores thus can turn to sleep mode for longer times and save more energy.In this paper, a novel task dispatch algorithm, called two-level deadline-aware hybrid load balancer (TLDHLB), is presented. The first level is the load imbalance strategy used for saving static power, and it was inspired by [44]. The basic idea of the first level of task dispatch is dispatching tasks to the processor cores working in active mode and turning off the processor cores when all tasks are finished. For example, suppose there are one MPU and two DSPs in the system. Initially, the system will turn off two DSPs until there are tasks needing to be processed by DSP, as shown in Figure 6(a). When task1 was released, MPU will check the state of the DSPs. If there is no DSP working in active mode, then turn on DSP1 and dispatch task1 to DSP1. According to ED3VFS, DSP1 will work at the default speed, normally, and at the lowest speed in the beginning, as shown in Figure 6(b). Figure 6(c) shows that if taskn is released at tim e n while DSP1 works at full speed, then turn on DSP2 and dispatch taskn to DSP2. Figure 6(d) shows that, after DSP1 finished all tasks assigned to it, it will turn itself off by MEDF.Example of load unbalance. (a) Initial state, (b) Task1 released and turning on DSP1. (c) Taskn released while DSP1 worked in full speed and turning on DSP2. (d) DSP1 finished all of tasks that dispatched to it and turning off DSP1. (a) (b) (c) (d)The second level is used for load balance. When there are two or more DSPs working in active mode, the load balance strategies in the second level will be used to dispatch tasks that are newly released. 
Unlike traditional systems that do not contain real-time tasks, there are simultaneously real-time tasks and normal tasks in a system in our assumption. Traditional load balance strategies were not designed for real-time systems, so we propose a new dispatch criterion to process the problem of dispatch in real-time tasks. We also combined other criteria to allow our load balance algorithm to process real-time tasks and normal tasks simultaneously and improve robustness.The novel strategy uses the distribution of task deadlines as the criterion for load balance. According to our observation, the more uniform the distribution of task deadlines is, the lower the missing-deadline probability will be. Figure7 is a simple example that supports our observation. There are two DSPs and four tasks; Figure 7(a) shows that one of the dispatch results’ distribution of task deadlines is uneven. In this example, task1 is finished at its deadline, d 1, so that there is not enough time to execute task2 and a deadline missed occurs. A similar situation occurred in DSP2 for task4. Figure 7(b) shows that a different dispatch result features a more uniform distribution of task deadlines. In this situation, the time slot between two deadlines is longer, which means that there is more time to execute the next task when a task is finished, leading to the probability of a deadline missed being lower. Now, the problem is how to express the distribution of task deadlines.Example of real-time tasks dispatch. (a) Distribution of task deadlines is uneven. (b) Distribution of task deadlines is uniform. (a) (b)In this paper, the variance of task deadlines was used as the feature of deadline distribution. Since variance expresses how far a set of numbers is spread out, variance can be used to express the density of data distribution, which is what we need. A smaller variance of task deadlines implies that the time slot between two task deadlines is shorter. 
Equation (8) is the formula of variance, where N is the number of data, x i expresses ith data, and x - is the data mean: (8) Var ⁡ ( X ) = 1 N ∑ i = 1 N ‍ ( x i - x - ) 2 .Except for the distribution of task deadlines, the other three strategies were combined to dispatch normal tasks. These three strategies can also be used to dispatch real-time tasks when the uniformity of deadline distributions is equal. The first strategy is the execution order of the task. This means that the dispatcher will dispatch a task to the DSP that provides higher priority. For example, assume there are two DSPs and two tasks on each DSP. When a new task is released, if the execution order of the task is second in DSP1 and third in DSP2, this task will be dispatched to DSP1. The second strategy is the number of tasks. The dispatcher will dispatch a task to the DSP that has the fewest number of tasks on it. When the dispatcher cannot make the decision via the three strategies in the second level mentioned above, then the last strategy will be used. The last strategy is very simple, it simply chooses the DSP whose serial number is the minimum and is working in active mode. 
The pseudocode of the two-level deadline-aware hybrid load balancer is shown in Pseudocode 2, where DSP_all is the set of all DSPs, DSP_off is the set of DSPs that are not in active mode, DSP_t is the target DSP that the task will be dispatched to, DSP_on is the set of DSPs in active mode, DSP_fs is the set of DSPs working at full speed, getStatus() obtains the status information of the DSPs, minSerial() returns the DSP with the minimum serial number, Num() returns the number of input data, maxUniformity() returns the DSP whose deadline distribution has the maximum uniformity, minOrder() returns the DSP that can provide the highest priority to the newly released task, and minTaskNum() returns the DSP with the fewest tasks.

Pseudocode 2: Two-level deadline-aware hybrid load balancer.

```text
(1)  Initially, turn off all DSPs: DSP_off = DSP_all
(2)  When a task that needs to be processed by a DSP is released
(3)    getStatus(DSP_all)
(4)    % First level, for load imbalance %
(5)    If (DSP_off == DSP_all)
(6)      DSP_t = minSerial(DSP_off)
(7)      Turn on minSerial(DSP_off)
(8)    Else if (DSP_fs == DSP_on)
(9)      If (DSP_on == DSP_all)
(10)       go to line 21
(11)     Else
(12)       DSP_t = minSerial(DSP_off)
(13)       Turn on minSerial(DSP_off)
(14)   Else if (Num(DSP_on - DSP_fs) == 1)
(15)     DSP_t = DSP_on - DSP_fs
(16)   Else
(17)     go to line 21
(18)   Dispatch the task to DSP_t
(19) End
(20) % Second level, for load balance %
(21) If (Num(maxUniformity(DSP_on)) == 1)
(22)   DSP_t = maxUniformity(DSP_on)
(23) Else if (Num(minOrder(DSP_on)) == 1)
(24)   DSP_t = minOrder(DSP_on)
(25) Else if (Num(minTaskNum(DSP_on)) == 1)
(26)   DSP_t = minTaskNum(DSP_on)
(27) Else
(28)   DSP_t = minSerial(DSP_on)
(29) Dispatch the task to DSP_t
(30) End
```

It is worth noticing that when calculating the uniformity of the deadline distribution, the newly released task should be taken into account, because we want to find a DSP whose deadline distribution is still uniform after inserting the newly released task. Furthermore, maxUniformity(), minOrder(), and minTaskNum() may each return more than one DSP when two or more DSPs have the same status. In that case, TLDHLB uses another strategy to dispatch the task, which is why we combined four criteria into a hybrid strategy.

## 5. Experiments

In this section, we describe the experimental environment and the setting of parameters. Experimental results and analyses are then shown.

### 5.1. Experimental Environment

In this study the PAC Duo platform, which includes an ARM926 processor and two PACDSPs, was used for the experiments. The operating system kernel running on the ARM is Linux 2.6.27. We ported the MicroC/OS-II kernel (version 2.5) to the PACDSPs and implemented the proposed power-aware scheduling algorithm on the MicroC/OS-II kernel and the proposed load dispatch algorithm on the ARM. Figure 8 shows the experimental system architecture, while Table 2 shows the operating frequencies used in the experiments and the corresponding voltages. In the experiments, we used a digital multimeter (FLUKE 8846A) to measure the voltage and current of the PACDSPs; these data were used to calculate energy consumption.

Table 2: Operating voltage and frequency of the PACDSP.
| Power mode (operating mode) | Voltage (V) | Frequency (MHz) |
|---|---|---|
| 7 | 1.0 | 204 |
| 6 | 1.0 | 136 |
| 5 | 0.9 | 102 |
| 4 | 0.9 | 68 |
| 3 | 0.9 | 51 |
| 2 | 0.8 | 34 |
| 1 | 0.8 | 24 |

Figure 8: System architecture.

### 5.2. Experimental Settings

In the experiments, we used matrix multiplication, π calculation, quick sort, JPEG decoding, and histogram equalization as the workload. In addition to the proposed algorithms, we also implemented two load balance algorithms and three frequency scaling strategies for comparison. Table 3 shows the algorithm usage in the experiments.

(i) Seven sets of settings were used. The first set combines the proposed load dispatch algorithm and the proposed power-aware scheduling algorithm. It is worth noting that the DSPs on our experimental platform cannot be turned off and then turned on again, so we scaled the operating frequency to the lowest frequency to represent a turned-off DSP and assumed its energy consumption was zero until the proposed algorithms turned it on again.

(ii) The second to fourth sets used the same load balance algorithm: the load balancer used the number of tasks as the criterion to dispatch tasks. The frequency scaling strategies used in the second and third sets were two static settings, the highest operating frequency and the lowest operating frequency, respectively. The fourth set used Linux-ondemand [45] as the frequency scaling strategy. Linux-ondemand is a dynamic frequency scaling algorithm used in the Linux kernel; it dynamically scales the operating frequency according to the utilization of the processor.

(iii) The fifth to seventh sets used processor utilization as the criterion to dispatch tasks. The frequency scaling strategies in these three sets were, in order, the highest frequency, the lowest frequency, and Linux-ondemand. Except for the first set, all settings use the original MicroC/OS-II scheduler to schedule tasks.

Table 3: Usage of algorithms for the experiments.
| Set | Name | Load dispatch | Frequency scaling |
|---|---|---|---|
| 1 | PDAMS | TLDHLB | ED3VFS + MEDF |
| 2 | NT (HF) | The number of tasks | Always at the highest frequency |
| 3 | NT (LF) | The number of tasks | Always at the lowest frequency |
| 4 | NT (Ondemand) | The number of tasks | Linux-ondemand |
| 5 | Utilization (HF) | Utilization | Always at the highest frequency |
| 6 | Utilization (LF) | Utilization | Always at the lowest frequency |
| 7 | Utilization (Ondemand) | Utilization | Linux-ondemand |

For each task, we used a multiple of the task's average execution time as its deadline, ranging from one to five times; thus there are five task-deadline settings for each set of settings in the experiments.

### 5.3. Experimental Results

#### 5.3.1. Comparison of Energy Consumption

Figure 9 shows the comparison of energy consumption. The vertical axis shows the energy consumption, while the horizontal axis shows the task-deadline setting. The results show that the energy consumption of the proposed algorithms is lower than that of the other algorithms in almost every case. Compared with the other algorithms, the proposed algorithm reduces energy consumption by up to 54.2%. The experimental results also show that using the number of tasks as the load balance criterion and always working at the lowest operating frequency reduces energy consumption the most. This consequence is obvious and predictable, but the computing capacity under this condition is not satisfying. Although the proposed algorithm considers saving static power, due to hardware constraints the static power consumption could barely be measured independently. As a result, static power consumption was not taken into account in the experiments. The overhead of scaling voltage and frequency has not been considered either; these aspects will be addressed in our future work.

Figure 9: Comparison of energy consumption.

#### 5.3.2. Comparison of Performance

Besides energy consumption, performance is a very important criterion for evaluating the effect of an algorithm. How to deal with the tradeoff between energy and performance is a difficult issue.
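As rough context for this tradeoff, the relative dynamic power of the operating modes in Table 2 can be estimated with the conventional CMOS approximation P_dyn ∝ V²·f. This is an illustrative back-of-the-envelope model, not a measurement from our platform; the switched capacitance is assumed constant across modes.

```python
# Relative dynamic power of the PACDSP modes in Table 2, using the
# conventional CMOS approximation P_dyn ~ C * V^2 * f with C constant.
modes = {  # mode: (voltage in V, frequency in MHz)
    7: (1.0, 204), 6: (1.0, 136), 5: (0.9, 102), 4: (0.9, 68),
    3: (0.9, 51), 2: (0.8, 34), 1: (0.8, 24),
}

def relative_dynamic_power(mode):
    """Dynamic power of `mode` relative to the fastest mode (mode 7)."""
    v, f = modes[mode]
    v7, f7 = modes[7]
    return (v * v * f) / (v7 * v7 * f7)
```

Under this approximation, mode 1 runs at about 12% of mode 7's frequency but draws only about 7.5% of its dynamic power, which is why running slower saves energy per unit time; the scheduling question is whether the slack before deadlines permits it.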
Unlike in traditional systems, the overall finish time cannot fully represent performance in a real-time system. For a real-time task, there is no difference between finishing the task very quickly and finishing it just at its deadline: before the deadline, no matter how much time is required to finish it, the effect of the real-time task is the same. Therefore, the number of missed deadlines is a better criterion for the performance of a real-time system. Figure 10 offers the performance comparisons. The vertical axis shows the number of missed deadlines, while the horizontal axis shows the task-deadline setting. The results show that the performance of the proposed algorithms is the best: except when the deadline of each task is set to one times its average execution time, no deadline is missed when using the proposed algorithms. Since there are only two slave processor cores in our experimental platform, when there are more than three real-time tasks whose deadlines are just one times their average execution time, missing deadlines cannot be avoided. It is worth noting that even when the DSPs always run at the fastest frequency, a less appropriate dispatch method and scheduling algorithm may produce more missed deadlines. The proposed algorithm tries to decrease the probability of missing deadlines not only in load dispatch but also in scheduling; that is why it can use less energy and still achieve higher performance.

Figure 10: Comparison of performance.

Although always using the lowest operating frequency reduces energy consumption the most, the resulting performance is not acceptable: the number of missed deadlines is much higher than with the other algorithms, regardless of which load balance algorithm is used. The experimental results show that the proposed algorithms find a good balance point between energy consumption and performance.
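As a minimal illustration of this metric, the following sketch counts deadline misses for hypothetical task tuples run back-to-back on a single core in earliest-deadline-first order; it is not the scheduler used in the experiments.

```python
def count_missed_deadlines(tasks):
    """Count deadline misses for tasks run back-to-back on one core in
    earliest-deadline-first order. Each task is (execution_time, deadline),
    and all tasks are assumed released at time 0."""
    missed = 0
    t = 0
    for exec_time, deadline in sorted(tasks, key=lambda task: task[1]):
        t += exec_time          # the task finishes at time t
        if t > deadline:
            missed += 1
    return missed
```

For example, two tasks of length 5 with deadlines 5 and 8 mirror the situation of Figure 7(a): the first finishes exactly at its deadline, leaving too little time for the second, so one deadline is missed.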
By considering the deadlines of tasks, the distribution of task deadlines on each slave processor core becomes more uniform, which reduces the probability of missed deadlines. A lower probability of missing deadlines not only represents higher performance but also reduces energy consumption, because the system spends a longer time working at lower speed when using ED3VFS. Moreover, the concept of load imbalance lets the proposed algorithms save further energy: unlike dynamic voltage and frequency scaling, which only reduces dynamic power, load imbalance also reduces static power.

The number of tasks and processor utilization are the most popular criteria for dispatching tasks in real systems, which is why we chose them for comparison. The aim of this paper is to develop not only a novel and effective load balance algorithm but also one that can be applied in a real environment. Although the performance of some state-of-the-art algorithms may be better than that of the proposed algorithm, their assumptions are very difficult to satisfy, making such algorithms hard to apply in a real environment. Another reason is the similarity of assumptions: none of the algorithms used in this paper requires the worst-case execution time. Nowadays, portable devices with internet connectivity, such as smartphones and tablet PCs, are very popular; when a user downloads an application from the internet, there is no way for the system to obtain that application's worst-case execution time immediately. Although not using worst-case execution times means the proposed algorithms cannot guarantee hard deadlines, they still try to avoid missing deadlines and are more flexible.
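The static-power argument can be illustrated with a toy energy model in which all constants are hypothetical: when per-core static power is comparable to full-speed dynamic power, consolidating the work onto one core and turning the other off beats balancing the same work across two slow cores, while with a small static component the balanced configuration wins instead, which is why the tradeoff depends on the platform.

```python
def energy(active_cores, freq_fraction, work=1.0, p_dyn_max=1.0, p_static=1.0):
    """Toy energy model (illustrative constants only): `work` units of work
    split evenly over `active_cores` cores running at `freq_fraction` of full
    speed. Dynamic power scales ~ f^3 (voltage scales with frequency); static
    power is a fixed cost per powered-on core. Returns total energy."""
    time = (work / active_cores) / freq_fraction     # execution time
    p_dyn = p_dyn_max * freq_fraction ** 3           # per-core dynamic power
    return active_cores * (p_dyn + p_static) * time

balanced   = energy(active_cores=2, freq_fraction=0.5)  # both cores, half speed
imbalanced = energy(active_cores=1, freq_fraction=1.0)  # one core on, one off
# With p_static = 1.0 (static comparable to full-speed dynamic power), the
# consolidated configuration uses less total energy because the second core's
# static power is eliminated; with a small p_static, balancing wins instead.
```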
We implemented the proposed algorithm in a real platform, and the experimental results show that the proposed algorithms work well and are superior to the others in general performance.
## 6. Conclusion

This paper presented a solution to the problems of load dispatch and power saving in a real-time system on a multicore platform, called power and deadline-aware multicore scheduling. The proposed algorithm simultaneously considers dynamic power, static power, and load balance. To reduce dynamic power, we implemented MEDF and fine-tuned the parameters of D3VFS to save more power and improve performance. The concept of load imbalance was introduced to save static power: instead of dispatching the workload equally to every processor core, the proposed algorithm powers on only some of the processor cores and puts the unnecessary cores into sleep mode or turns them off. Finally, the deadline is used as a novel criterion for load balance among the processor cores in active mode. Combining load imbalance and load balance, this paper proposed a two-level task dispatch algorithm called the two-level deadline-aware hybrid load balancer.

To verify that the proposed algorithms are useful, we implemented them on a multicore platform, PACDuo. We also implemented several load balance algorithms and frequency scaling algorithms for comparison. Experimental results show that, compared to six combinations of load balance algorithms and frequency scaling algorithms, the proposed algorithms can reduce energy consumption by up to 54.2%, and the performance of the proposed algorithms is superior to the others.
However, much work still needs to be completed in the future. Some areas for future study include (1) adding theoretical analysis to support the proposed algorithm, (2) modelling the energy consumption in more detail, (3) considering the demands of hard real-time tasks and task migration while keeping the algorithm lightweight, and (4) introducing the concept of heuristic algorithms to improve the proposed algorithms.

---

*Source: 101529-2014-04-14.xml*
2014
# Inhibiting PP2Acα Promotes the Malignant Phenotype of Gastric Cancer Cells through the ATM/METTL3 Axis **Authors:** Zhaoxiang Cheng; Shan Gao; Xiaojie Liang; Chao Lian; Jianquan Chen; Chao Fang **Journal:** BioMed Research International (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1015293 --- ## Abstract This article is aimed at exploring the relationship between the phosphatase 2A catalytic subunit Cα (PP2Acα, encoded by PPP2CA) and methyltransferase-like 3 (METTL3) in the malignant progression of gastric cancer (GC). Through analyzing the bioinformatics database and clinical tissue immunohistochemistry results, we found that abnormal PP2Acα and METTL3 levels were closely related to the malignant progression of GC. To explore the internal connection between PP2Acα and METTL3 in the progression of GC, we carried out cellular and molecular experiments and finally proved that PP2Acα inhibition can upregulate METTL3 levels by activating ATM activity, thereby promoting the malignant progression of GC. --- ## Body ## 1. Introduction The number of new gastric cancer (GC) cases exceeds 950,000 annually, with 78,300 deaths each year [1]. It is the fifth most common cancer in the world and the third leading cause of death due to cancer, accounting for 7% of the total number of cancer cases and 9% of cancer-related deaths [2]. Although the current D2 radical surgery for GC is quite mature and chemotherapy regimens are improving, the 5-year survival rate for advanced GC is only 20% [3]. The low survival rate of GC is closely related to the malignant phenotype of GC cells, which is characterized by proliferation, invasion, and distant migration. In recent years, targeted drugs that inhibit the malignant phenotype of GC cells, such as trastuzumab, have had a definite effect on HER-2-positive GC and can significantly prolong the survival of patients [4]. 
A phase II clinical study showed that rituximab can improve the prognosis of GC patients with high EGFR expression [5]. Ramucirumab, a monoclonal antibody that targets VEGFR2, has a good antitumor effect, and its combination with paclitaxel for second-line treatment of advanced GC has been approved in some countries [6]. These cases suggest that molecular targeted drugs can improve the prognosis of GC patients. Therefore, further exploration of the molecular mechanisms underlying the malignant phenotype of GC cells has important clinical significance for advancing GC therapy. In this study, two of the most common in vivo modifications, phosphorylation and methylation, were used as the entry point to study the molecular mechanism of GC.

Phosphorylation is an important biological event that regulates protein activity and stability, and it is of great significance for maintaining cells' physiological activities [7]. The steady-state phosphorylation of proteins in the body is mainly regulated by various phosphatases and kinases. Protein phosphatase 2A (PP2A), one of the main serine-threonine phosphatases in mammalian cells, maintains cell homeostasis by counteracting most kinase-driven intracellular signaling pathways [8]. The PP2A catalytic subunit (mainly PP2Acα) is the core of PP2A, and PP2Acα dysfunction plays an important role in the occurrence and metastasis of some tumors. For example, immunohistochemistry and bioinformatics analysis showed that low expression of PPP2CA is closely related to colon cancer progression and poor prognosis; miR-650 promotes the malignant phenotype of undifferentiated thyroid cancer cells by inhibiting the expression of PPP2CA; and upregulating the expression of PPP2CA can reverse the epithelial-mesenchymal transition, proliferation, and distant metastasis of prostate cancer.
However, there have been no reports on PP2Acα in GC.

In addition to protein phosphorylation, RNA methylation is indispensable for the maintenance of cellular life activities, and abnormal RNA methylation can lead to many diseases. N6-methyladenosine (m6A) modification is the most important and conserved RNA modification in cells [9]. Methyltransferase-like 3 (METTL3), the core of the m6A methyltransferase complex, plays an important role in the progression of various malignant tumors. For instance, upregulation of METTL3 promotes the proliferation of bladder cancer by accelerating the maturation of pri-miR221/222, promotes breast cancer progression by targeting Bcl-2, promotes the chemo- and radioresistance of pancreatic cancer cells, and promotes the proliferation of colon cancer cells by inhibiting SOCS2. METTL3 has also been shown in recent years to play an important role in the occurrence and development of GC, but the regulatory factors acting on METTL3 in GC remain largely unexplored [10–14].

Previous studies have found that PP2A can inhibit the MYC gene and the AKT, KRAS, and NF-κB proteins [15–18], as well as other oncogenes and tumor signaling pathways, thereby exerting a tumor suppressor effect, whereas METTL3 tends to upregulate these oncogenes and signaling pathways [19, 20]. In this study, immunohistochemistry on gastric cancer tissue samples from 17 patients showed that, in gastric cancer tissues, the positive rate of PP2Acα was 2/17 (11.8%) and that of METTL3 was 14/17 (82.4%). Thus, the roles of PP2Acα and METTL3 in malignant tumors may be very different, but there has been no research on the correlation between them in GC. Therefore, this study focused on exploring the interaction between PP2Acα and METTL3 in the progression of GC.

## 2. Materials and Methods

### 2.1.
Immunohistochemistry

GC pathological tissue wax blocks were provided by the Department of Pathology, the Affiliated Jiangning Hospital of Nanjing Medical University. Ten wax blocks of pathological stage III-IV GC were screened out, and immunohistochemical sections were made from these tissue blocks. Anti-PP2Acα antibody (Santa Cruz Biotechnology, Dallas, TX, USA) and anti-METTL3 antibody (Proteintech Group, Chicago, IL, USA), diluted 1 : 300, were used for immunostaining. Based on the percentage of positive cells, 2 pathologists blinded to the clinical information independently evaluated the PP2Acα and METTL3 staining intensities. The results were classified as follows: negative (−): 0 points, <25%; weakly positive (+): 1 point, ≥25%, <50%; moderately positive (++): 2 points, ≥50%, <75%; and strongly positive (+++): 3 points, ≥75%.

### 2.2. Cell Culture

Two human GC cell lines were used: MGC803 and BGC823 (Beyotime Biotechnology, Shanghai, China). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM; HyClone, Inc., Logan, Utah, USA) supplemented with 10% fetal bovine serum (FBS; Gibco, California, USA) and 1% antibiotics (penicillin/streptomycin; Gibco, California, USA) and grown in a 5% CO2 incubator at 37°C.

### 2.3. Lentivirus-Mediated shRNA-Transfected Cells

According to the multiplicity of infection values of BGC823 and MGC803 (100 for both), GC cells were transfected with lentivirus-mediated sh-PPP2CA and sh-NC (Genomeditech Co., Ltd., Shanghai, China). The cells were harvested 48 hours after transfection, and puromycin was then used to select cells stably expressing the constructs.

The shRNA sequences used were sh-PPP2CA-1, 5′-GATCCGTGGAACTTGACGATACTCTAACTCGAGTTAGAGTATCGTCAAGTTCCATTTTTT-3′; sh-PPP2CA-2, 5′-GATCCGCAGATCTTCTGTCTACATGGTTCAAGAGACCATGTAGACAGAAGATCTGCTTTTTTG-3′; and sh-PPP2CA-3, 5′-GATCCGGCAAATCACCAGATACAAATTTCAAGAGAATTGTATCTGGTGATTTGCCTTTTTTG-3′.

### 2.4.
Clone Formation Experiment

Stably transfected gastric cancer cells were inoculated into a 6-well plate at a density of 1000 cells per well and cultured for 2-3 weeks. The medium was then discarded, the cells were washed twice with PBS, and the cell clusters that had formed were fixed with 4% paraformaldehyde for 15 minutes and stained with crystal violet for 20 minutes. Images of the cell clusters were captured, and the clusters were counted with ImageJ 1.8.0 software. All experiments were carried out in triplicate and repeated at least three times.

### 2.5. Cell Counting Kit-8 Experiment

Transfected GC cells were seeded into 96-well culture plates at a density of 1000 cells per well; then, at 24, 48, 72, and 96 hours of culture, the cells were incubated with 10 μL of Cell Counting Kit-8 (CCK8) reagent (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) at 37°C for 2 h. The absorbance of each well was then measured in an automatic microplate reader (450 nm spectrophotometry).

### 2.6. 5-Ethynyl-2′-deoxyuridine Incorporation Test

Stably transfected GC cells were seeded into a 6-well plate at a density of 1.5 × 10⁵ cells per well for overnight culture; then, the cells were incubated with 10 μM 5-ethynyl-2′-deoxyuridine (EdU) (Beyotime Biotechnology, Shanghai, China) in an incubator for 2 hours. The cells were then fixed, washed, permeabilized, and stained. Finally, the cells were observed with an inverted fluorescence microscope, and images were captured.

### 2.7. Scratch Test

GC cell suspensions were prepared, and each group was diluted to 5 × 10⁵ cells/mL. Then, 70 μL of cell suspension was added to each well of a double-well chamber (ibidi GmbH, Martinsried, Germany). The chamber was removed with sterile forceps after the cells adhered to the wall, and 2 mL of complete culture medium was added.
At the time points of 0, 12, and 24 hours, the cells were observed, and images were captured using an inverted microscope equipped with a camera.

### 2.8. Transwell Experiment

A 24-well gel-coated invasion chamber (pore size, 8 μm; Costar, Corning, Inc., Corning, NY, USA) was used for the cell invasion assay. Stably transfected gastric cancer cells were harvested and suspended in FBS-free DMEM at a density of 1 × 10⁵ cells/mL. Next, 200 μL of cell suspension was added to the upper chamber, while 500 μL of DMEM containing 10% FBS was added to the bottom chamber. After culturing in a cell incubator for 24 hours, the nonmigrated cells in the upper chamber were removed with a cotton swab, and the cells that had invaded to the bottom of the filter were fixed in 4% paraformaldehyde at room temperature for 5 minutes. After the upper chamber was washed for 1 minute, the cells were stained with crystal violet and counted at ×100 magnification in 5 randomly selected fields of view under a phase-contrast microscope.

### 2.9. RNA Extraction and Quantitative Polymerase Chain Reaction

A spin-column RNA extraction kit (Beyotime Biotechnology, Shanghai, China) was used to isolate total RNA from the cultured GC cells; then, the HiScript® RT Kit (Vazyme Biotech Co., Ltd., Nanjing, Jiangsu, China) was used according to the manual, and 1 μg of total RNA was reverse transcribed into cDNA. The RNA concentration was measured using a NanoDrop™ spectrophotometer (Thermo Fisher Scientific, Inc., Waltham, MA, USA).
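Relative mRNA levels from RT-qPCR of this kind (e.g., METTL3 or PPP2CA normalized to the GAPDH reference gene) are conventionally computed with the 2^(−ΔΔCt) method. The paper does not state its exact quantification scheme, so the short Python sketch below, using hypothetical Ct values, is only an illustration of that standard calculation.

```python
def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene vs. a control sample by the 2^-ddCt method.

    Ct values are cycle thresholds; the reference gene (e.g. GAPDH) corrects
    for input amount. All numbers here are hypothetical, for illustration only.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt in treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2.0 ** (-dd_ct)                              # fold change vs. control

# Hypothetical METTL3 Ct values after PPP2CA knockdown:
# ddCt = (24 - 18) - (26 - 18) = -2, i.e. a 4-fold upregulation.
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
print(fold)
```

A fold change above 1 indicates upregulation relative to the control; equal ΔCt values in both samples give a fold change of exactly 1.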
Quantitative polymerase chain reaction (qPCR) was performed on the ABI StepOnePlus™ real-time PCR system (Thermo Fisher Scientific, Inc., Waltham, MA, USA) with ChamQ™ SYBR (Vazyme Biotech Co., Ltd., Nanjing, Jiangsu, China).

The primer sequences used were GAPDH, 5′-GGAGCGAGATCCCTCCAAAAT-3′ (forward), 5′-GGCTGTTGTCATACTTCTCATGG-3′ (reverse); BATF2, 5′-CACCAGCAGCACGAGTCTC-3′ (forward), 5′-TGTGCGAGGCAAACAGGAG-3′ (reverse); HDGF, 5′-CTCTTCCCTTACGAGGAATCCA-3′ (forward), 5′-CCTTGACAGTAGGGTTGTTCTC-3′ (reverse); METTL3, 5′-TTGTCTCCAACCTTCCGTAGT-3′ (forward), 5′-CCAGATCAGAGAGGTGGTGTAG-3′ (reverse); and PPP2CA, 5′-CAAAAGAATCCAACGTGCAAGAG-3′ (forward), 5′-CGTTCACGGTAACGAACCTT-3′ (reverse).

### 2.10. Western Blot Analysis

Lysis buffer containing a protease inhibitor cocktail and 1% phosphatase inhibitor cocktail was added to the gastric cancer cells, which were lysed on ice for 10 minutes to extract the proteins. The protein concentration of each group was quantified with a protein quantification kit (BCA method; Beyotime Biotechnology, Shanghai, China). Before the western blot experiment, protein lysates were mixed proportionally with 5× SDS-PAGE protein loading buffer, boiled for 10 minutes, and stored at −20°C. Protein extracts (30-50 μg) were separated by precast gel electrophoresis. After electrophoresis, the proteins were transferred to a PVDF membrane (Beyotime Biotechnology, Shanghai, China), which was then blocked with 5% skimmed milk powder at room temperature for 1 hour. After the blocking solution was lightly rinsed off with TBST, the membrane was incubated with diluted primary antibody overnight at 4°C and, the next day, with the corresponding diluted secondary antibody at room temperature for 1 h. Finally, an appropriate amount of developer solution was added in a dark room, and the blots were exposed.
The primary antibodies used in this study were PP2A-Cα/β (1 : 500; Santa Cruz Biotechnology, Dallas, TX, USA); METTL3 (1 : 1000; Proteintech Group, Chicago, IL, USA); GAPDH (1 : 1000; Proteintech Group, Chicago, IL, USA); and β-actin (1 : 1000; Proteintech Group, Chicago, IL, USA).

### 2.11. Nude Mouse Tumor Formation Experiment

This animal experiment was approved by the Experimental Animal Center of Nanjing Medical University (approval number IACUC-2103060). To establish the gastric cancer cell xenograft model, nude mice (BALB/c; 4 weeks old) were obtained from the animal center of Nanjing Medical University. Before injection, the mice were reared in a specific pathogen-free environment for 1 week. A total of 5 × 10⁶ stably transfected GC cells in 150 μL of phosphate-buffered saline were injected into the flank of each nude mouse (n=5 per group). After 3 weeks, the nude mice were euthanized for the measurement of tumor volumes and weights.

### 2.12. Statistical Analysis

T-tests were used to analyze the statistical differences between normally distributed data, and P<0.05 was considered statistically significant. SPSS 13.0 software (IBM Inc., Armonk, New York, USA) was used for statistical analysis.
## 3.
Results

### 3.1. PP2Acα and METTL3 Are Both Abnormally Expressed in Gastric Cancer Tissue and Related to Gastric Cancer Prognosis

To study the roles of PP2Acα and METTL3 in the progression of GC, we performed immunohistochemistry on 10 pairs of GC tissue and adjacent normal gastric mucosal tissue. Immunohistochemistry scoring was performed according to the standard described in Materials and Methods, and the results showed that the level of PP2Acα in GC tissue was significantly lower than that in adjacent normal gastric mucosal tissue (P<0.001; Figure 1(a)), while the level of METTL3 was significantly increased in GC tissue (P<0.0001; Figure 1(b)). The Cancer Genome Atlas (TCGA) database (https://www.cancer.gov) was used to compare the expression levels of the PPP2CA and METTL3 genes in GC tissue and adjacent normal gastric mucosal tissue. PPP2CA expression was significantly decreased in GC tissue (32 tumor samples vs. 32 normal samples; P=3.76e−02; Figure 1(c)), while METTL3 expression was significantly increased in GC tissue (415 tumor samples vs. 35 normal samples; P=7.32e−06; Figure 1(d)). Prognostic analysis through the Kaplan-Meier plotter website (http://kmplot.com/analysis/) showed that the prognoses of the high-PPP2CA-expression and low-METTL3-expression groups were significantly better than those of the respective control groups (P=1.2e−09 and P=4.4e−05, respectively; Figures 1(e) and 1(f)). These results suggested that PP2Acα levels were significantly decreased and METTL3 levels significantly increased in GC tissue, and that both the PPP2CA and METTL3 genes were closely related to GC prognosis. The distinct roles of PPP2CA and METTL3 in the progression and prognosis of GC required further exploration.

Figure 1 PPP2CA and METTL3 expression levels in GC and relationships with GC prognosis.
(a, b) Representative pictures of PP2Acα and METTL3 levels in GC tissue and adjacent normal gastric mucosal tissue (magnification, ×40 and ×100; scale bars = 200 μm). (c, d) PPP2CA and METTL3 mRNA levels in The Cancer Genome Atlas database (P=3.76e−02 and P=7.32e−06, respectively; P<0.05 is statistically significant). (e, f) In the Kaplan-Meier plotter database, the prognostic values of PPP2CA and METTL3 for overall survival in GC patients were statistically significant (P<0.05). The picture can be downloaded at http://kmplot.com/analysis/index.php?p=service&cancer=ovar. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001. GC: gastric cancer. (a)(b)(c)(d)(e)(f)

### 3.2. Inhibition of PP2Acα Results in Higher METTL3 Protein Levels

To explore the relationship between PP2Acα and METTL3, we used lentivirus-mediated shRNA to knock down the PPP2CA gene in BGC823 and MGC803 cells. After 48 hours of transfection, the control, shRNA1, and shRNA3 groups with strong fluorescence (Figure 2(a)) were selected with puromycin, and GC cells with stably low levels of PP2Acα were obtained. Successful knockdown at the mRNA and protein levels was verified (Figures 2(b)–2(e)), and METTL3 protein levels were significantly increased (Figures 2(d) and 2(e)). The mechanism of this inverse relationship deserved further exploration.

Figure 2 Effect of PP2Acα inhibition on METTL3 levels. (a) The PPP2CA gene was knocked down in gastric cancer cells (BGC823 and MGC803) by lentivirus. (b, c) After knockdown of PPP2CA, RT-qPCR was used to detect the expression of the METTL3 gene in the BGC823 and MGC803 cells. (d, e) Western blot analysis was used to detect METTL3 levels in the BGC823 and MGC803 cells after PP2Acα inhibition. ∗P<0.05. (a)(b)(c)(d)(e)

### 3.3. Inhibition of PP2Acα Upregulates METTL3 through p-ATM

Based on the above results, we first used a protein-protein interaction database to analyze the relationship between PP2Acα and METTL3.
We found that PP2Acα and METTL3 are linked via the intermediate proteins WTAP and HSP90AA1 (Figure 3(a)). However, this did not clarify which protein was upstream and which was downstream, so we needed another molecular mechanism linking PP2Acα and METTL3. PP2Acα plays a key role in maintaining phosphorylation homeostasis. We therefore used the PhosphoSite website (https://www.phosphosite.org) to search the amino acid sequence of METTL3 for phosphorylation sites and found that METTL3 is rich in phosphorylation sites (Figure 3(b)), suggesting that the levels or functions of METTL3 may be regulated by PP2Acα. Following this conjecture, a PubMed search showed that phosphate groups can be added to METTL3 by the ataxia-telangiectasia mutated (ATM) kinase, and that the phosphate groups on the serine or threonine residues of this kinase can be removed by PP2A [21, 22]. In addition, PP2Acα inhibition can upregulate p-ATM levels, which enhances ATM kinase activity [23]. In sum, we proposed that PP2Acα inhibition could upregulate METTL3 levels by enhancing the kinase activity of ATM. To test this hypothesis, we added KU55933, an ATM kinase inhibitor, to the medium of the GC cells. After 48 hours, we extracted the proteins of the GC cells for western blot analysis. The results showed that the p-ATM and METTL3 levels were both decreased compared to before treatment, while the total ATM level did not change significantly (Figure 3(c)). In conclusion, PP2Acα inhibition was found to upregulate METTL3 levels by stimulating the kinase activity of ATM in GC cells. Previous studies have shown that high METTL3 levels are closely related to the malignant progression of GC [12, 13]. Therefore, we used phenotypic experiments to assess the malignant phenotype of GC cells after inhibiting PP2Acα, before and after adding KU55933.

Figure 3 Effect of PP2Acα inhibition on METTL3 levels.
(a) The protein interaction network between PP2Acα and METTL3 was queried through the STRING website. (b) The PhosphoSite website was used to query the phosphorylation modifications of the amino acid sequence of METTL3. (c) Western blot analysis was used to detect changes in METTL3, ATM, and p-ATM levels before and after adding KU55933. (a)(b)(c)

### 3.4. Inhibition of PP2Acα Promotes the Malignant Phenotype of GC Cells In Vitro

To explore the effect of PP2Acα inhibition on GC cells, various phenotypic experiments were carried out in vitro. The CCK8 experiment showed that on the 4th and 5th days, the proliferation of the experimental group was significantly greater than that of the control group (P<0.05; Figures 4(a) and 4(b)). Similarly, in the clone formation experiment (Figures 4(c) and 4(d)) and the EdU cell proliferation experiment (Figures 4(e) and 4(f)), significantly greater proliferation was found in the sh1 and sh3 groups than in the control group (P<0.05). GC cells were plated in the ibidi chamber at the same density (5 × 10⁵ cells/mL), and the healing of scratches at 0, 12, and 24 hours after adhesion was observed. The scratch healing of the shRNA1 and shRNA3 groups was significantly faster than that of the control group at 12 hours (Figures 4(g) and 4(h)). The Transwell invasion experiment showed that after 24 hours of culture, the invasion ability of the experimental group was significantly stronger than that of the control group (P<0.01; Figures 4(i) and 4(j)). These phenotypic experiments demonstrated that PP2Acα inhibition promotes the proliferation, migration, and invasion of GC cells.

Figure 4 Effect of PP2Acα inhibition on the proliferation, migration, and invasion of gastric cancer cells in vitro. Cell Counting Kit-8 (a, b), clone formation (c, d), and EdU (e, f) tests were used to detect the effect of PPP2CA knockdown on the proliferation of BGC823 and MGC803 cells.
(g, h) The scratch test was used to detect the effect of PPP2CA knockdown on the migration of BGC823 and MGC803 cells. (i, j) The Transwell experiment was used to detect the effect of PPP2CA knockdown on the invasion of BGC823 and MGC803 cells. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001, and ∗∗∗∗P<0.0001. EdU: 5-ethynyl-2′-deoxyuridine. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j)

### 3.5. Inhibition of PP2Acα Promotes GC Cell Proliferation In Vivo

Stable knockdown cells (MGC803/LV-shPPP2CA and MKN28/LV-shPPP2CA) and their corresponding control cells were subcutaneously inoculated into the axillae of 4-week-old male nude mice (n=5 per group). After 4 weeks, all mice were euthanized, and the tumors were isolated and removed. The results showed that PP2Acα inhibition significantly promoted the tumorigenicity of GC cells in the sh1 and sh3 groups in vivo. Compared with the control group, tumor volume increased significantly in the experimental group (P<0.01; Figures 5(a) and 5(b)). These data further confirmed that knockdown of the PPP2CA gene could promote GC cell proliferation in animal models.

Figure 5 Effect of PP2Acα inhibition on the proliferation of gastric cancer cells in vivo. (a, b) BGC823 and MGC803 cells transfected with LV-sh-PPP2CA-1,3 or LV-sh-NC were subcutaneously injected into nude mice (n=5 per group); 4 weeks after injection, the tumors were removed, and tumor volumes were measured. LV: lentivirus; NC: normal control. (a)(b)

### 3.6. Inhibition of ATM Kinase Activity Can Reverse the Malignant Progression of Gastric Cancer Cells That Is Promoted by Inhibiting PP2Acα

After KU55933 was used to inhibit ATM activity in the GC cells, the morphological changes of the BGC-823 and MGC-803 cells were observed, and the proliferation and migration abilities of the GC cells were detected. We found that apoptosis of the GC cells in each BGC-823 group was increased, and the epithelial-mesenchymal transition characteristics were weakened (Figure 6(a)).
That is, the GC cells became round, and the looseness between cells decreased. The CCK-8 and EdU cell proliferation assay results showed that the proliferation abilities of the BGC-823 and MGC-803 cells were significantly inhibited (Figures 6(b)–6(d)). The scratch test results showed that the migration ability of the GC cells was significantly inhibited (Figures 6(e) and 6(f)). These results suggest that the inhibition of ATM activity can reverse the enhancement of the malignant phenotype of GC cells induced by inhibiting PP2Acα. It can be concluded that PP2Acα inhibition upregulates METTL3 levels by stimulating the kinase activity of ATM, thereby promoting the malignant phenotype of GC cells.

Figure 6 Effect of the inhibition of ATM activity on the malignant phenotype of gastric cancer cells. (a) After ATM activity was inhibited, the morphological changes of BGC-823 and MGC-803 cells were microscopically observed (magnification, ×10). (b, c) After ATM activity was inhibited, cell proliferation of the BGC-823 and MGC-803 groups was detected using the Cell Counting Kit-8. (d) After ATM activity was inhibited, cell proliferation of the BGC-823 and MGC-803 groups was microscopically observed following the EdU cell proliferation test. (e, f) The migration of BGC-823 and MGC-803 cells was detected using the scratch test. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001, and ∗∗∗∗P<0.0001. EdU: 5-ethynyl-2′-deoxyuridine. (a)(b)(c)(d)(e)(f)
According to the above Materials and Methods, immunohistochemistry scoring was performed based on the scoring standard, and the results showed that the level of PP2Acα in GC tissue was significantly lower than that in normal gastric mucosal tissue adjacent to the cancer (P<0.001; Figure 1(a)), while the level of METTL3 was significantly increased in GC tissue (P<0.0001; Figure 1(b)). The Cancer Genome Atlas (TCGA) database (https://www.cancer.gov) was used to compare the expression levels of the PPP2CA and METTL3 genes in GC tissue and normal gastric mucosal tissue adjacent to the cancer. PPP2CA expression was significantly decreased in GC tissue (32 tumor samples vs. 32 normal samples; P=3.76e−02; Figure 1(c)), while METTL3 expression was increased significantly in GC tissue (415 tumor samples vs. 35 normal samples; P=7.32e−06; Figure 1(d)). The results of prognostic analysis through the Kaplan-Meier plotter website (http://kmplot.com/analysis/) showed that the prognoses of the high PPP2CA expression and low METTL3 expression groups were significantly better than those of the respective control groups (P=1.2e−09 and P=4.4e−05, respectively; Figures 1(e) and 1(f)). The above results suggested that PP2Acα levels were significantly decreased and METTL3 levels were significantly increased in GC tissue. In addition, both the PPP2CA and METTL3 genes were closely related to GC prognosis. The different roles of PPP2CA and METTL3 in the progression and prognosis of GC require further exploration.Figure 1 PPP2CA and METTL3 expression levels in GC and relationships with GC prognosis. (a, b) Representative pictures of PP2Acα and METTL3 levels in GC tissue and normal gastric mucosal tissue adjacent to the cancer (magnification, ×40 and ×100; scalebars=200μm). (c, d) PPP2CA and METTL3 mRNA levels in The Cancer Genome Atlas database (P=3.76e−02 and P=7.32e−06, respectively; P<0.05 is statistically significant). 
(e, f) In the Kaplan-Meier plotter database, the prognostic values of PPP2CA and METTL3 for overall survival in GC patients were statistically significant (P<0.05). The picture can be downloaded at http://kmplot.com/analysis/index.php?p=service&cancer=ovar. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001. GC: gastric cancer. (a)(b)(c)(d)(e)(f) ## 3.2. Inhibition of PP2Acα Results in Higher METTL3 Protein Levels To explore the relationship between PP2Acα and METTL3, we used lentivirus-mediated shRNA to knock down the PPP2CA gene in BGC823 and MGC803 cells. After 48 hours of transfection, the control, shRNA1, and shRNA3 groups with strong fluorescent expression (Figure 2(a)) were screened for puromycin, and GC cells with stable and low levels of PP2Acα were obtained. The successful knockdown at the mRNA and protein levels was verified (Figures 2(b)–2(e)), and METTL3 protein levels were significantly increased (Figures 2(d) and 2(e)). The mechanism of this wane-and-wax relationship deserves further exploration.Figure 2 Effect of PP2Acα inhibition on METTL3 levels. (a) The PPP2CA gene was knocked down in gastric cancer cells (BGC823 and MGC803) by lentivirus. (b, c) After knocking down PPP2CA, RT-qPCR was used to detect the expression of the METTL3 gene in the BGC823 and MGC803 cells. (d, e) Western blot analysis was used to detect METTL3 levels in the BGC823 and MGC803 cells after PP2Acα inhibition ∗P<0.05. (a)(b)(c)(d)(e) ## 3.3. Inhibition of PP2Acα Upregulates METTL3 through p-ATM According to the above results, we first used the Protein-Protein Interactions website to analyze the relationship between PP2Acα and METTL3. We found that PP2Acα and METTL3 establish links via the middle proteins WTAP and HSP90AA1 (Figure 3(a)). However, we could not clarify which one was the upstream or downstream protein. Therefore, we needed to find another molecular mechanism to identify links between PP2Acα and METTL3. PP2Acα plays the key role in maintaining phosphorylation homeostasis. 
We thus used the PhosphoSite website (https://www.phosphosite.org) to search the amino acid sequence of METTL3 for phosphorylation sites, and we found that METTL3 was rich in phosphorylation sites (Figure 3(b)), suggesting that the levels or functions of METTL3 may be regulated by PP2Acα. Based on this conjecture, through the PubMed database, we found that phosphate groups can be added to METTL3 by the ataxia-telangiectasia mutated (ATM) kinase, and phosphate groups added onto serine or threonine residues by this kinase can be removed by PP2A [21, 22]. In addition, PP2Acα inhibition can upregulate p-ATM levels, which can enhance ATM kinase activity [23]. In sum, we proposed that PP2Acα inhibition could upregulate METTL3 levels by enhancing the kinase activity of ATM. To test this speculation, we added KU55933, an ATM kinase inhibitor, to the medium of the GC cells. After 48 hours, we extracted the proteins of the GC cells to conduct a western blot assay. The experimental results showed that p-ATM and METTL3 levels both decreased compared to before treatment, while the total ATM level did not change significantly (Figure 3(c)). In conclusion, PP2Acα inhibition was found to upregulate METTL3 levels by stimulating the kinase activity of ATM in GC cells. Some experiments have shown that high METTL3 levels are closely related to the malignant progression of GC [12, 13]. Therefore, we used phenotypic experiments to detect the malignant phenotype of GC cells after inhibiting PP2Acα, before and after adding KU55933.

Figure 3: Effect of PP2Acα inhibition on METTL3 levels. (a) The protein interaction network between PP2Acα and METTL3 was queried through the STRING website. (b) The PhosphoSite website was used to query the phosphorylated modification of the amino acid sequence of METTL3. (c) Western blot analysis was used to detect changes in METTL3, ATM, and p-ATM levels before and after adding KU55933.

## 3.4. Inhibition of PP2Acα Promotes the Malignant Phenotype of GC Cells In Vitro

To explore the effect of PP2Acα inhibition on GC cells, various phenotypic experiments were carried out in vitro. The results of the CCK8 experiment showed that on the 4th and 5th days, the proliferation of the experimental group was significantly greater than that of the control group (P<0.05; Figures 4(a) and 4(b)). Similarly, in the clone formation experiment (Figures 4(c) and 4(d)) and the EdU cell proliferation detection experiment (Figures 4(e) and 4(f)), significantly greater proliferation ability was found in the sh1 and sh3 groups compared to the control group (P<0.05). GC cells were seeded in the ibidi chamber at the same density (5×10⁵ cells/mL), and the healing of scratches at 0, 12, and 24 hours after adhesion was observed. The scratches in the shRNA1 and shRNA3 groups healed significantly faster than those in the control group at 12 hours (Figures 4(g) and 4(h)). The Transwell invasion experiment showed that after 24 hours of culture, the invasion ability of the experimental group was significantly stronger than that of the control group (P<0.01; Figures 4(i) and 4(j)). The above phenotypic experiments proved that PP2Acα inhibition could promote the proliferation, migration, and invasion of GC cells.

Figure 4: Effect of PP2Acα inhibition on the proliferation, migration, and invasion of gastric cancer cells in vitro. Cell Counting Kit-8 (a, b), clone formation (c, d), and EdU (e, f) tests were used to detect the effect of PPP2CA knockdown on the proliferation of BGC823 and MGC803 cells. (g, h) The scratch test was used to detect the effect of PPP2CA knockdown on the migration of BGC823 and MGC803 cells. (i, j) The Transwell experiment was used to detect the effect of PPP2CA knockdown on the invasion of BGC823 and MGC803 cells. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001, and ∗∗∗∗P<0.0001. EdU: 5-ethynyl-2′-deoxyuridine.

## 3.5. Inhibition of PP2Acα Promotes GC Cell Proliferation In Vivo

Stably knocked-down cells (MGC803/LV-shPPP2CA and MKN28/LV-shPPP2CA) and their corresponding control cells were subcutaneously inoculated into the axillae of 4-week-old male nude mice (n=5 per group). After 4 weeks, all mice were euthanized, and the tumors were isolated and removed. The results showed that PP2Acα inhibition significantly promoted the tumorigenicity of GC cells in the sh1 and sh3 groups in vivo. Compared with the control group, the tumor volume increased significantly in the experimental group (P<0.01; Figures 5(a) and 5(b)). These data further confirmed that knockdown of the PPP2CA gene could promote GC cell proliferation in animal models.

Figure 5: Effect of PP2Acα inhibition on the proliferation of gastric cancer cells in vivo. (a, b) BGC823 and MGC803 cells transfected with LV-sh-PPP2CA-1,3 or LV-sh-NC were subcutaneously injected into nude mice (n=5 per group), and then, 4 weeks after injection, the tumors were removed, and tumor volumes were measured. LV: lentivirus; NC: normal control.

## 3.6. Inhibition of ATM Kinase Activity Can Reverse the Malignant Progression of Gastric Cancer Cells That Is Promoted by Inhibiting PP2Acα

After KU55933 was used to inhibit the ATM activity of the GC cells, the morphological changes of the BGC-823 and MGC-803 cells were observed, and the proliferation and migration abilities of the GC cells were detected. We found that the apoptosis of the GC cells in each BGC-823 group was increased, and the epithelial-mesenchymal transition characteristics were weakened (Figure 6(a)). That is, the GC cells became round, and the looseness between cells decreased. The CCK-8 and EdU cell proliferation assay results showed that the proliferation abilities of the BGC-823 and MGC-803 cells were significantly inhibited (Figures 6(b)–6(d)). The scratch test results showed that the migration ability of the GC cells was significantly inhibited (Figures 6(e) and 6(f)).
These results suggest that the inhibition of ATM activity can reverse the enhancement of the malignant phenotype of GC cells that is induced by inhibiting PP2Acα. It can be concluded that PP2Acα inhibition upregulates METTL3 levels by stimulating the kinase activity of ATM, thereby promoting the malignant phenotype of GC cells.

Figure 6: Effect of the inhibition of ATM activity on the malignant phenotype of gastric cancer cells. (a) After ATM activity was inhibited, the morphological changes of BGC-823 and MGC-803 cells were microscopically observed (magnification, ×10). (b, c) After ATM activity was inhibited, the cell proliferation of the BGC-823 and MGC-803 groups was detected by using the Cell Counting Kit-8. (d) After ATM activity was inhibited, the cell proliferation of the BGC-823 and MGC-803 groups was microscopically observed following the EdU cell proliferation test. (e, f) The migration of BGC-823 and MGC-803 cells was detected by using the scratch test. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001, and ∗∗∗∗P<0.0001. EdU: 5-ethynyl-2′-deoxyuridine.

## 4. Discussion

The poor prognosis associated with advanced GC has become a major public health problem [24–26]. The radical resection of GC has been well-developed since the 1980s, chemotherapy regimens have improved in recent years [24, 27, 28], and advanced intervention methods, such as arterial interventional embolization for distant metastases and intraperitoneal hyperthermic perfusion therapy, have emerged. However, the prognoses of patients with advanced GC have not met expectations. Nevertheless, in recent years, it has been discovered that molecular targeted drugs can significantly prolong the survival of patients with malignant tumors [29, 30], which is promising for the treatment of GC.
Combined with the ongoing breakthroughs in targeted tumor therapy research [31], the molecular mechanism of GC is worthy of in-depth study in order to improve the therapeutic targets for GC and lay the foundation for better diagnosis and treatment of GC in the future.

To explore the molecular mechanism of advanced GC, this study used two of the most extensive protein and RNA modifications in the body as entry points. PP2A and m6A are important components of phosphorylation homeostasis maintenance and RNA methylation modification, respectively. Of the two, PP2A has been favored by researchers due to the complexity of its trimeric structure, especially the regulatory subunit B, whose substrate specificity and functional diversity enrich the functions of the PP2A holoenzyme [8, 32]. However, the implementation of PP2A's functional diversity is inseparable from its core enzyme, which is composed of the structural subunit A and catalytic subunit C [33]. As an important part of the PP2A core enzyme, PP2Acα is highly conserved, and PP2Acα dysfunction often leads to the loss of PP2A holoenzyme activity, causing a variety of disorders of life activities in the body and, in turn, inducing various diseases [34]. So far, relevant basic research on GC has not involved PP2Acα/PPP2CA. The TCGA and Kaplan-Meier plotter databases show that low PPP2CA expression is related to the poor prognosis of GC. Therefore, studying the expression imbalance of PPP2CA is crucial for the in-depth exploration of the pathogenesis of GC. m6A has become a hot research topic in recent years due to its dynamic and reversible methylation modification characteristics, and it has been found to play increasingly important roles in various diseases [9, 35–38]. METTL3 is the core of m6A modification, and changes in the levels or methylation function of METTL3 have been found in the progression of many diseases, so abnormalities in METTL3 levels and function are often important research targets.
Research on METTL3 in GC is no exception, and most results have shown that increased METTL3 levels promote the progression of GC [11, 39–41].

This study found that PP2Acα inhibition significantly upregulated METTL3 protein levels. However, through the PubMed database, we did not find a correlation study to explain the connection between PP2Acα and METTL3. Using the STRING database to query the interaction network between PP2Acα and METTL3, we did not find a superior or subordinate regulatory relationship between them. Through the PhosphoSite website, we found that METTL3 has a large number of phosphorylation sites in its amino acid sequence, suggesting that METTL3 can be affected by kinase phosphorylation modification, and phosphorylation modification is often accompanied by changes in protein levels and functions.

A previous study found that ATM kinase phosphorylates the serine 43 (S43) site in the amino acid sequence of METTL3 to upregulate the level and function of METTL3. Activated METTL3 localizes to DNA double-strand breaks (DSBs). The DSB-related RNA is methylated, and then the m6A recognition protein YTHDC1 recognizes this methylation and recruits the RAD51 and BRCA1 proteins to perform homologous recombination repair on the damage to maintain a stable genome. Therefore, cells with low METTL3 levels lack effective homologous recombination repair, which increases the instability of the genome and leads to cell death. Tumor cells with high METTL3 levels are more likely to respond to DSBs, stabilize their own genome, and maintain their malignant phenotype and drug resistance [21]. In particular, the activity of ATM, as the upstream kinase of METTL3, can be regulated by PP2Acα.
Experiments have shown that PP2Acα inhibition leads to the upregulation of autophosphorylation at the ATM Ser1981 site, which activates ATM [23].

In summary, it is speculated that in GC cells, inhibiting PP2Acα can upregulate the activity of ATM, leading to the phosphorylation of the S43 position of the METTL3 amino acid sequence, which may activate the METTL3 methylation function and upregulate the protein level of METTL3, ultimately enhancing the malignant phenotype of GC cells. To verify this speculation, we extracted GC cell proteins to perform western blot experiments. The results showed that PP2Acα inhibition led to increased p-ATM (Ser1981) and METTL3 levels, while the total ATM protein level did not change significantly. These results verify that the inhibition of PP2Acα upregulates ATM kinase activity in GC cells, accompanied by high METTL3 levels.

Subsequently, we used KU55933 to inhibit the activity of ATM in each group of GC cells in rescue experiments. The western blot analysis results showed that after inhibiting ATM activity, the METTL3 protein levels decreased significantly. This result clarified the regulatory relationship between p-ATM and METTL3. In addition, by using cell phenotyping experiments to compare the malignant phenotype of GC cells before and after adding KU55933, we found that PP2Acα inhibition promoted the malignant phenotype of GC cells, but this could be reversed by adding KU55933.

It can be concluded that PP2Acα inhibition promotes increased METTL3 levels by upregulating ATM activity, ultimately enhancing the malignant phenotype of GC cells. PP2Acα sits upstream in this signaling axis, and its expression imbalance is the root cause of the axis's activation. Consistent with this, upregulating PPP2CA expression has been reported to inhibit the malignant phenotype of tumor cells in cancers such as colon cancer, thyroid cancer, and prostate cancer.
Targeted therapy of PP2Acα may help to control the malignant progression of gastric cancer.

Of course, this study has limitations. Our understanding of the regulation of downstream targets by METTL3 still needs to be supplemented by follow-up studies. However, it is undeniable that this study has enriched the molecular mechanism research related to GC and laid the foundation for follow-up basic research and the clinical diagnosis and treatment of GC.

--- *Source: 1015293-2021-08-25.xml*
# Inhibiting PP2Acα Promotes the Malignant Phenotype of Gastric Cancer Cells through the ATM/METTL3 Axis

**Authors:** Zhaoxiang Cheng; Shan Gao; Xiaojie Liang; Chao Lian; Jianquan Chen; Chao Fang

**Journal:** BioMed Research International (2021)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1015293
---

## Abstract

This article is aimed at exploring the relationship between the phosphatase 2A catalytic subunit Cα (PP2Acα, encoded by PPP2CA) and methyltransferase-like 3 (METTL3) in the malignant progression of gastric cancer (GC). Through analyzing bioinformatics databases and clinical tissue immunohistochemistry results, we found that abnormal PP2Acα and METTL3 levels were closely related to the malignant progression of GC. To explore the internal connection between PP2Acα and METTL3 in the progression of GC, we carried out cellular and molecular experiments and finally proved that PP2Acα inhibition can upregulate METTL3 levels by activating ATM, thereby promoting the malignant progression of GC.

---

## Body

## 1. Introduction

The number of new gastric cancer (GC) cases exceeds 950,000 annually, with 78,300 deaths each year [1]. It is the fifth most common cancer in the world and the third leading cause of death due to cancer, accounting for 7% of the total number of cancer cases and 9% of cancer-related deaths [2]. Although the current D2 radical surgery for GC is quite mature and chemotherapy regimens are improving, the 5-year survival rate for advanced GC is only 20% [3]. The low survival rate of GC is closely related to the malignant phenotype of GC cells, which is characterized by proliferation, invasion, and distant migration. In recent years, targeted drugs that inhibit the malignant phenotype of GC cells, such as trastuzumab, have had a definite effect on HER-2-positive GC and can significantly prolong the survival of patients [4]. A phase II clinical study showed that rituximab can improve the prognosis of GC patients with high EGFR expression [5]. Ramucirumab, a monoclonal antibody that targets VEGFR2, has a good antitumor effect, and its combination with paclitaxel for second-line treatment of advanced GC has been approved in some countries [6].
These cases suggest that molecular targeted drugs have the ability to improve the prognosis of GC patients. Therefore, further exploration of the molecular mechanism of the malignant phenotype of GC cells has important clinical significance for promoting GC therapy. In this study, two of the most common in vivo modifications (phosphorylation and methylation) were used as the entry point to study the molecular mechanism of GC.

Phosphorylation is an important biological event that regulates protein activity and stability, and it is of great significance for maintaining cells' physiological activities [7]. The steady-state phosphorylation of body proteins is mainly regulated by various phosphatases and kinases. Protein phosphatase 2A (PP2A), which is one of the main serine-threonine phosphatases in mammalian cells, maintains cell homeostasis by counteracting most kinase-driven intracellular signaling pathways [8]. The PP2A catalytic subunit (which mainly refers to PP2Acα) is the core basis of PP2A, and PP2Acα dysfunction plays an important role in the occurrence and metastasis of some tumors. For example, immunohistochemistry and bioinformatics analysis have shown that low expression of PPP2CA is closely related to colon cancer progression and poor prognosis; miR-650 promotes the malignant phenotype of undifferentiated thyroid cancer cells by inhibiting the expression of PPP2CA; and upregulating the expression of PPP2CA can reverse the epithelial-mesenchymal transition, proliferation, and distant metastasis of prostate cancer. However, there have been no reports of PP2Acα in GC.

In addition to protein phosphorylation, RNA methylation is indispensable for the maintenance of cell life activities, and the abnormal function of RNA methylation can lead to the occurrence of many diseases. N6-methyladenosine (m6A) modification is the most important and conserved RNA modification in cells [9].
Methyltransferase-like 3 (METTL3), the core of the m6A-related methyltransferase complex, plays an important role in the progression of various malignant tumors. For instance, upregulation of METTL3 promotes the proliferation of bladder cancer by accelerating the maturation of pri-miR221/222, promotes breast cancer progression by targeting Bcl-2, promotes the chemoresistance and radioresistance of pancreatic cancer cells, and promotes the proliferation of colon cancer cells by inhibiting SOCS2. METTL3 has also been shown to play an important role in the occurrence and development of GC in recent years, but there is a lack of exploration of the regulatory factors of METTL3 in GC [10–14].

Previous studies have found that PP2A can inhibit the MYC gene and the AKT, KRAS, and NF-κB proteins [15–18], as well as other oncogenes and tumor signaling pathways, thereby exerting a tumor suppressor effect. METTL3, by contrast, tends to upregulate these oncogenes and signaling pathways [19, 20]. In this study, immunohistochemistry on gastric cancer tissue samples from 17 patients showed that the positive rate of PP2Acα in gastric cancer tissue was 2/17 (11.8%), while the positive rate of METTL3 was 14/17 (82.4%). As a result, the roles of PP2Acα and METTL3 in malignant tumors may be very different, but there has been no research on the correlations between them when it comes to GC. Therefore, this study focused on exploring the interaction between PP2Acα and METTL3 in the progression of GC.

## 2. Materials and Methods

### 2.1. Immunohistochemistry

GC pathological tissue wax blocks were provided by the Department of Pathology, the Affiliated Jiangning Hospital of Nanjing Medical University. Ten wax blocks with GC of pathological stage III-IV were screened out, and immunohistochemical sections were made from these tissue wax blocks.
Anti-PP2Acα antibody (Santa Cruz Biotechnology, Dallas, TX, USA) and anti-METTL3 antibody (Proteintech Group, Chicago, IL, USA) diluted to 1 : 300 were used for immunostaining. Based on the percentage of positive cells, 2 pathologists who were blinded to the clinical information independently evaluated the PP2Acα and METTL3 staining intensities. The results were divided into the following categories: negative (-): 0 points, <25%; weakly positive (+): 1 point, ≥25% and <50%; moderately positive (++): 2 points, ≥50% and <75%; and strongly positive (+++): 3 points, ≥75%.

### 2.2. Cell Culture

Two human GC cell lines were used: MGC803 and BGC823 (Beyotime Biotechnology, Shanghai, China). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM, HyClone, Inc., Logan, Utah, USA) supplemented with 10% fetal bovine serum (FBS, Gibco, California, USA) and 1% antibiotics (penicillin/streptomycin, Gibco, California, USA). Cells were grown in a 5% CO2 incubator at 37°C.

### 2.3. Lentivirus-Mediated shRNA-Transfected Cells

According to the multiplicity of infection values of BGC823 and MGC803 (100 for both), GC cells were transfected with lentivirus-mediated sh-PPP2CA and sh-NC (Genomeditech Co., Ltd., Shanghai, China). The cells were harvested 48 hours after transfection. Then, puromycin was used to select positively expressing cells.

The shRNA sequences used were sh-PPP2CA-1, 5′-GATCCGTGGAACTTGACGATACTCTAACTCGAGTTAGAGTATCGTCAAGTTCCATTTTTT-3′; sh-PPP2CA-2, 5′-GATCCGCAGATCTTCTGTCTACATGGTTCAAGAGACCATGTAGACAGAAGATCTGCTTTTTTG-3′; and sh-PPP2CA-3, 5′-GATCCGGCAAATCACCAGATACAAATTTCAAGAGAATTGTATCTGGTGATTTGCCTTTTTTG-3′.

### 2.4. Clone Formation Experiment

Stably transfected gastric cancer cells were inoculated into a 6-well plate at a density of 1000 cells per well and cultured for 2-3 weeks. The medium was discarded, the cells were washed twice with PBS, and then the cell clusters formed were fixed with 4% paraformaldehyde for 15 minutes and finally stained with crystal violet for 20 minutes.
Images of the cell clusters were taken, and the number of cell clusters was counted with ImageJ 1.8.0 software. All experiments were carried out in triplicate and repeated at least three times.

### 2.5. Cell Counting Kit-8 Experiment

Transfected GC cells were seeded into 96-well culture plates at a density of 1000 cells per well; then, at 24, 48, 72, and 96 hours of culture, the cells were incubated with 10 μL of Cell Counting Kit-8 (CCK8) reagent (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) at 37°C for 2 h. The absorbance of each well was then measured with a microplate reader (450 nm spectrophotometry).

### 2.6. 5-Ethynyl-2′-deoxyuridine Incorporation Test

Stably transfected GC cells were seeded into a 6-well plate at a density of 1.5×10⁵ cells per well for overnight culture; then, the cells were incubated with 10 μM 5-ethynyl-2′-deoxyuridine (EdU) (Beyotime Biotechnology, Shanghai, China) in an incubator for 2 hours. The cells were then fixed, washed, permeabilized, and stained. Finally, the cells were observed with an inverted fluorescence microscope, and images were captured.

### 2.7. Scratch Test

The GC cell suspensions were prepared, and each group's cell suspension was diluted to 5×10⁵ cells/mL. The cell suspension was added to a double-well chamber (ibidi GmbH, Martinsried, Germany), with 70 μL added to each well of the chamber. The chamber was removed with sterile forceps after the cells adhered to the wall, and 2 mL of complete culture medium was added. At the time points of 0, 12, and 24 hours, the cells were observed, and images were captured by using an inverted microscope equipped with a camera.

### 2.8. Transwell Experiment

A 24-well spreading gel invasion chamber (pore size, 8 μm; Costar, Corning, Inc., Corning, NY, USA) was used for the cell invasion assay, and stably transfected gastric cancer cells were harvested and suspended in FBS-free DMEM at a density of 1×10⁵ cells/mL.
Next, 200 μL of cell suspension was added to the upper chamber, while 500 μL of DMEM containing 10% FBS was added to the bottom chamber. After culturing in a cell incubator for 24 hours, the nonmigrated cells in the upper chamber were removed with a cotton swab, and the cells that had invaded to the bottom of the filter were fixed in 4% paraformaldehyde at room temperature for 5 minutes. After washing the upper chamber for 1 minute, the cells were stained with crystal violet and counted at a magnification of ×100 in 5 randomly selected fields of view under a phase-contrast microscope.

### 2.9. RNA Extraction and Quantitative Polymerase Chain Reaction

A spin column RNA extraction kit (Beyotime Biotechnology, Shanghai, China) was used to isolate total RNA from the cultured GC cells; then, the HiScript® RT Kit (Vazyme Biotech Co., Ltd., Nanjing, Jiangsu, China) was used according to the manual, and 1 μg of total RNA was reverse transcribed into cDNA. The RNA concentration was measured by using a NanoDrop™ spectrophotometer (Thermo Fisher Scientific, Inc., Waltham, MA, USA). Quantitative polymerase chain reaction (qPCR) was performed on the ABI StepOnePlus™ real-time (RT) PCR system (Thermo Fisher Scientific, Inc., Waltham, MA, USA) with ChamQ™ SYBR (Vazyme Biotech Co., Ltd., Nanjing, Jiangsu, China).

The primer sequences used were GAPDH, 5′-GGAGCGAGATCCCTCCAAAAT-3′ (forward), 5′-GGCTGTTGTCATACTTCTCATGG-3′ (reverse); BATF2, 5′-CACCAGCAGCACGAGTCTC-3′ (forward), 5′-TGTGCGAGGCAAACAGGAG-3′ (reverse); HDGF, 5′-CTCTTCCCTTACGAGGAATCCA-3′ (forward), 5′-CCTTGACAGTAGGGTTGTTCTC-3′ (reverse); METTL3, 5′-TTGTCTCCAACCTTCCGTAGT-3′ (forward), 5′-CCAGATCAGAGAGGTGGTGTAG-3′ (reverse); and PPP2CA, 5′-CAAAAGAATCCAACGTGCAAGAG-3′ (forward), 5′-CGTTCACGGTAACGAACCTT-3′ (reverse).

### 2.10. Western Blot Analysis

Lysis buffer containing a protease inhibitor mixture and a 1% phosphatase inhibitor mixture was added to the gastric cancer cells, which were lysed on ice for 10 minutes to extract the proteins.
Protein concentration of each group was quantitatively detected by protein quantitative kit (BCA method, Beyotime Biotechnology, Shanghai, China). Before the WB experiment, protein lysates were added into the 5× SDS-PAGE protein loading buffer in proportion, boiled for 10 minutes, and stored in the refrigerator at -20°C. Protein extract (30-50μg) was extracted for precast gel electrophoresis. After electrophoresis, the PVDF membrane (Beyotime Biotechnology, Shanghai, China) was transferred and then sealed with 5% skimmed milk powder at room temperature for 1 hour. After slight rinsing of the blocking solution with TBST, the diluted primary antibody was incubated overnight at 4°C, and the next day with the corresponding diluted secondary antibody was incubated at room temperature for 1 h. Finally, an appropriate amount of developer solution was added in a dark room before exposure strips were performed. The primary antibodies used in this study were PP2A-Cα/β (1 : 500; Santa Cruz Biotechnology, Dallas, TX, USA); METTL3 (1 : 1000; Proteintech Group, Chicago, IL, USA); GAPDH (1 : 1000; Proteintech Group, Chicago, IL, USA); and β-actin (1 : 1000; Proteintech Group, Chicago, IL, USA). ### 2.11. Nude Mouse Tumor Formation Experiment This animal experiment ethics is approved by Experimental Animal Center of Nanjing Medical University, approval number: IACUC-2103060. In order to establish the xenograft model of gastric cancer cells, nude mice were obtained from the animal center of Nanjing Medical University (BALA/c; 4 weeks old). Before injection, the mice were reared in a specific pathogen-free environment for 1 week. A total of5×106 stably transfected GC cells in 150 μL of phosphate-buffered saline were injected into the flank of nude mice (n=5 per group). After 3 weeks, we euthanized nude mice for the measurement of tumor volume and tumor weights. ### 2.12. 
Statistical Analysis T-tests were used to analyze the statistical differences between normally distributed data, and P<0.05 was considered statistically significant. SPSS 13.0 software (IBM Inc., Armonk, New York, USA) was used for statistical analysis. ## 2.1. Immunohistochemistry GC pathological tissue wax blocks were provided by the Department of Pathology, the Affiliated Jiangning Hospital of Nanjing Medical University. Ten wax blocks with GC of pathological stage III-IV were screened out, and immunohistochemical sections were made from these tissue wax blocks. Anti-PP2Acα antibody (Santa Cruz Biotechnology, Dallas, TX, USA) and anti-METTL3 antibody (Proteintech Group, Chicago, IL, USA) diluted to 1 : 300 were used for immunostaining. Based on the percentage of positive cells, 2 pathologists who did not know the clinical information independently evaluated the PP2Acα and METTL3 staining intensities. The results were divided into the following categories: negative (-): 0 points, <25%; weakly positive (+): 1 point, ≥25, <50%; moderately positive (++): 2 points, ≥50, <75%; and strongly positive (+++): 3 points, ≥75%. ## 2.2. Cell Culture Two human GC cell lines were used: MGC803 and BGC823 (Beyotime Biotechnology, Shanghai, China). The cells were cultured in Dulbecco’s modified Eagle’s medium (DMEM, HyClone, Inc., Logan, Utah, USA) added with 10% fetal bovine serum (FBS, Gibco, California, USA) and 1% antibiotics (penicillin/streptomycin, Gibco, California, USA). Cells were grown in a 5% CO2 incubator at 37°C. ## 2.3. Lentivirus-Mediated shRNA-Transfected Cells According to the multiplicity of infection values of BGC823 and MGC803 (100 for both), GC cells were transfected with lentivirus-mediated sh-PPP2CA and sh-NC (Genomeditech Co., Ltd., Shanghai, China). The cells were harvested 48 hours after transfection. 
Then, puromycin was used to screen out positively expressing cells.The shRNA sequences used were sh-PPP2CA-1, 5′-GATCCGTGGAACTTGACGATACTCTAACTCGAGTTAGAGTATCGTCAAGTTCCATTTTTT-3′; sh-PPP2CA-2, 5′-GATCCGCAGATCTTCTGTCTACATGGTTCAAGAGACCATGTAGACAGAAGATCTGCTTTTTTG-3′; and sh-PPP2CA-3, 5′-GATCCGGCAAATCACCAGATACAAATTTCAAGAGAATTGTATCTGGTGATTTGCCTTTTTTG-3′. ## 2.4. Clone Formation Experiment Stably transfected gastric cancer cells were inoculated into a 6-well plate at a density of 1000 cells per well and cultured for 2-3 weeks. The medium was discarded and washed twice with PBS, and then, the cell clusters formed were fixed with 4% paraformaldehyde for 15 minutes and finally stained with crystal violet for 20 minutes. The image of cell clusters was taken, and the number of cell clusters was counted with ImageJ 1.8.0 software. All experiments were carried out in triplicate and repeated at least three times. ## 2.5. Cell Counting Kit-8 Experiment Transfected GC cells were seeded into 96-well culture plates at a density of 1000 cells per well, and then at 24, 48, 72, and 96 hours of culture, cells were incubated with a 10μL Cell Counting Kit-8 (CCK8) reagent (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) at 37°C for 2 h. The absorbance of each well was then measured in an automatic enzyme label meter (450 nm spectrophotometry). ## 2.6. 5-Ethynyl-2′-deoxyuridine Incorporation Test Stably transfected GC cells were seeded into a 6-well plate at a density of1.5×105 cells per well for overnight culture; then, the cells were incubated with 10 μM 5-ethynyl-2′-deoxyuridine (EdU) (Beyotime Biotechnology, Shanghai, China) in an incubator for 2 hours. The cells were then fixed, washed, permeabilized, and stained. Finally, the cells were observed with an inverted fluorescence microscope, and images were captured. ## 2.7. Scratch Test The GC cell suspension was configured, and each cell suspension group was diluted to5×105 cells/mL. 
The above cell suspension was added to a double-well chamber (ibidi GmbH, Martinsried, Germany). Then, 70 μL of cell suspension was added to each well of the chamber. The chamber was removed with sterile forceps after the cells adhered to the wall, and 2 mL of complete culture base was added. At the time points of 0, 12, and 24 hours, the cells were observed, and images were captured by using an inverted microscope equipped with a camera. ## 2.8. Transwell Experiment A 24-well spreading gel invasion chamber (pore size, 8μm; Costar, Corning, Inc., Corning, NY, USA) was used for cell invasion assay, and stably transfected gastric cancer cells were harvested and suspended in FBS-free DMEM medium at a density of 1×105 cells/mL. Next, 200 μL of cell suspension was added to the upper chamber, while 500 μL of DMEM containing 10% FBS was added to the bottom chamber. After culturing in a cell incubator for 24 hours, the nonmigrated cells in the upper chamber were removed with a cotton swab, and the cells invaded at the bottom of the filter were fixed in 4% paraformaldehyde at room temperature for 5 minutes. After washing the upper chamber for 1 minute, the cells were stained with crystal violet and counted at a magnification of × 100 in 5 randomly selected fields of view under a phase-contrast microscope ## 2.9. RNA Extraction and Quantitative Polymerase Chain Reaction A spin column RNA extraction kit (Beyotime Biotechnology, Shanghai, China) was used to isolate total RNA from the cultured GC cells; then, the HiScript® RT Kit (Vazyme Biotech Co., Ltd., Nanjing, Jiangsu, China) was used according to the manual, and 1μg of total RNA was reverse transcribed into cDNA. The RNA concentration was measured by using the NanoDrop™ spectrophotometer (Thermo Fisher Scientific, Inc., Waltham, MA, USA). 
Quantitative polymerase chain reaction (qPCR) was performed on the ABI StepOnePlus™ real-time (RT) PCR system (Thermo Fisher Scientific, Inc., Waltham, MA, USA) with ChamQ™ SYBR (Vazyme Biotech Co., Ltd., Nanjing, Jiangsu, China). The primer sequences used were GAPDH, 5′-GGAGCGAGATCCCTCCAAAAT-3′ (forward), 5′-GGCTGTTGTCATACTTCTCATGG-3′ (reverse); BATF2, 5′-CACCAGCAGCACGAGTCTC-3′ (forward), 5′-TGTGCGAGGCAAACAGGAG-3′ (reverse); HDGF, 5′-CTCTTCCCTTACGAGGAATCCA-3′ (forward), 5′-CCTTGACAGTAGGGTTGTTCTC-3′ (reverse); METTL3, 5′-TTGTCTCCAACCTTCCGTAGT-3′ (forward), 5′-CCAGATCAGAGAGGTGGTGTAG-3′ (reverse); and PPP2CA, 5′-CAAAAGAATCCAACGTGCAAGAG-3′ (forward), 5′-CGTTCACGGTAACGAACCTT-3′ (reverse). ## 2.10. Western Blot Analysis Lysis buffer containing a protease inhibitor mixture and a 1% phosphatase inhibitor mixture was added to the gastric cancer cells, which were lysed on ice for 10 minutes to extract proteins. The protein concentration of each group was quantified with a BCA protein assay kit (Beyotime Biotechnology, Shanghai, China). Before the WB experiment, protein lysates were mixed proportionally with 5× SDS-PAGE protein loading buffer, boiled for 10 minutes, and stored at -20°C. Protein extract (30–50 μg) was loaded for precast gel electrophoresis. After electrophoresis, proteins were transferred to a PVDF membrane (Beyotime Biotechnology, Shanghai, China), which was then blocked with 5% skimmed milk powder at room temperature for 1 hour. After a brief rinse with TBST to remove the blocking solution, the membrane was incubated with diluted primary antibody overnight at 4°C and, the next day, with the corresponding diluted secondary antibody at room temperature for 1 h. Finally, an appropriate amount of developer solution was added in a dark room before the blots were exposed.
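Relative mRNA levels from the qPCR assay in Section 2.9 are conventionally derived with the 2^(−ΔΔCt) method, normalized to the GAPDH reference gene included in the primer list above. A minimal sketch; all Ct values here are hypothetical, for illustration only:

```python
# Relative expression by the 2^(-ΔΔCt) method, with GAPDH as the reference
# gene as in the primer list above. All Ct values are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target (treated vs. control), normalized to reference."""
    d_ct_treated = ct_target - ct_ref            # ΔCt in treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # ΔCt in control sample
    dd_ct = d_ct_treated - d_ct_control          # ΔΔCt
    return 2 ** (-dd_ct)

# e.g., METTL3 Ct 24.0 / GAPDH Ct 18.0 in knockdown cells
# vs. METTL3 Ct 25.0 / GAPDH Ct 18.0 in control cells
print(relative_expression(24.0, 18.0, 25.0, 18.0))  # 2.0 → twofold increase
```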
The primary antibodies used in this study were PP2A-Cα/β (1 : 500; Santa Cruz Biotechnology, Dallas, TX, USA); METTL3 (1 : 1000; Proteintech Group, Chicago, IL, USA); GAPDH (1 : 1000; Proteintech Group, Chicago, IL, USA); and β-actin (1 : 1000; Proteintech Group, Chicago, IL, USA). ## 2.11. Nude Mouse Tumor Formation Experiment This animal experiment was approved by the Experimental Animal Center of Nanjing Medical University (approval number IACUC-2103060). To establish the gastric cancer xenograft model, nude mice (BALB/c; 4 weeks old) were obtained from the animal center of Nanjing Medical University. Before injection, the mice were reared in a specific pathogen-free environment for 1 week. A total of 5×10⁶ stably transfected GC cells in 150 μL of phosphate-buffered saline were injected into the flank of nude mice (n=5 per group). After 3 weeks, the nude mice were euthanized for the measurement of tumor volumes and weights. ## 2.12. Statistical Analysis Two-sample t-tests were used to analyze differences between normally distributed data, and P<0.05 was considered statistically significant. SPSS 13.0 software (IBM Inc., Armonk, New York, USA) was used for statistical analysis. ## 3. Results ### 3.1. PP2Acα and METTL3 Are Both Abnormally Expressed in Gastric Cancer Tissue and Related to Gastric Cancer Prognosis To study the role of PP2Acα and METTL3 in the progression of GC, we performed immunohistochemistry on 10 pairs of GC tissue and normal gastric mucosal tissue adjacent to the cancer. Immunohistochemical staining was scored according to the standard described in Materials and Methods, and the results showed that the level of PP2Acα in GC tissue was significantly lower than that in normal gastric mucosal tissue adjacent to the cancer (P<0.001; Figure 1(a)), while the level of METTL3 was significantly increased in GC tissue (P<0.0001; Figure 1(b)).
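Group comparisons such as the IHC scores above were evaluated with t-tests as described in Section 2.12. A stdlib-only sketch of Welch's two-sample t statistic; the score arrays are hypothetical, and the P value would come from a t distribution in software such as SPSS, as used in the study:

```python
# Welch's two-sample t statistic (stdlib only), illustrating the group
# comparisons of Section 2.12. Score arrays are hypothetical IHC-style scores.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Return (t statistic, Welch-Satterthwaite degrees of freedom)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

tumor = [2, 3, 2, 4, 3]    # hypothetical tumor-tissue scores
normal = [8, 9, 7, 10, 8]  # hypothetical adjacent-normal scores
t, df = welch_t(tumor, normal)
print(round(t, 2), round(df, 1))  # t ≈ -8.85, df ≈ 7.3
```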
The Cancer Genome Atlas (TCGA) database (https://www.cancer.gov) was used to compare the expression levels of the PPP2CA and METTL3 genes in GC tissue and normal gastric mucosal tissue adjacent to the cancer. PPP2CA expression was significantly decreased in GC tissue (32 tumor samples vs. 32 normal samples; P=3.76e−02; Figure 1(c)), while METTL3 expression was significantly increased in GC tissue (415 tumor samples vs. 35 normal samples; P=7.32e−06; Figure 1(d)). Prognostic analysis through the Kaplan-Meier plotter website (http://kmplot.com/analysis/) showed that the prognoses of the high PPP2CA expression and low METTL3 expression groups were significantly better than those of the respective control groups (P=1.2e−09 and P=4.4e−05, respectively; Figures 1(e) and 1(f)). The above results suggested that PP2Acα levels were significantly decreased and METTL3 levels were significantly increased in GC tissue. In addition, both the PPP2CA and METTL3 genes were closely related to GC prognosis. The different roles of PPP2CA and METTL3 in the progression and prognosis of GC require further exploration. Figure 1 PPP2CA and METTL3 expression levels in GC and relationships with GC prognosis. (a, b) Representative pictures of PP2Acα and METTL3 levels in GC tissue and normal gastric mucosal tissue adjacent to the cancer (magnification, ×40 and ×100; scale bars = 200 μm). (c, d) PPP2CA and METTL3 mRNA levels in The Cancer Genome Atlas database (P=3.76e−02 and P=7.32e−06, respectively; P<0.05 is statistically significant). (e, f) In the Kaplan-Meier plotter database, the prognostic values of PPP2CA and METTL3 for overall survival in GC patients were statistically significant (P<0.05). The picture can be downloaded at http://kmplot.com/analysis/index.php?p=service&cancer=ovar. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001. GC: gastric cancer. (a)(b)(c)(d)(e)(f) ### 3.2.
Inhibition of PP2Acα Results in Higher METTL3 Protein Levels To explore the relationship between PP2Acα and METTL3, we used lentivirus-mediated shRNA to knock down the PPP2CA gene in BGC823 and MGC803 cells. After 48 hours of transfection, the control, shRNA1, and shRNA3 groups with strong fluorescent expression (Figure 2(a)) were selected with puromycin, and GC cells with stable, low levels of PP2Acα were obtained. Successful knockdown at the mRNA and protein levels was verified (Figures 2(b)–2(e)), and METTL3 protein levels were significantly increased (Figures 2(d) and 2(e)). The mechanism of this inverse relationship deserves further exploration. Figure 2 Effect of PP2Acα inhibition on METTL3 levels. (a) The PPP2CA gene was knocked down in gastric cancer cells (BGC823 and MGC803) by lentivirus. (b, c) After knocking down PPP2CA, RT-qPCR was used to detect the expression of the METTL3 gene in the BGC823 and MGC803 cells. (d, e) Western blot analysis was used to detect METTL3 levels in the BGC823 and MGC803 cells after PP2Acα inhibition. ∗P<0.05. (a)(b)(c)(d)(e) ### 3.3. Inhibition of PP2Acα Upregulates METTL3 through p-ATM According to the above results, we first used the STRING protein-protein interaction website to analyze the relationship between PP2Acα and METTL3. We found that PP2Acα and METTL3 are linked via the intermediary proteins WTAP and HSP90AA1 (Figure 3(a)). However, we could not clarify which was the upstream or downstream protein. Therefore, we needed to find another molecular mechanism linking PP2Acα and METTL3. PP2Acα plays a key role in maintaining phosphorylation homeostasis. We thus used the PhosphoSite website (https://www.phosphosite.org) to search the amino acid sequence of METTL3 for phosphorylation sites, and we found that METTL3 is rich in phosphorylation sites (Figure 3(b)), suggesting that the levels or functions of METTL3 may be regulated by PP2Acα.
Based on this conjecture, through the PubMed database, we found that phosphate groups can be added to METTL3 by the ataxia-telangiectasia mutated (ATM) kinase, and that phosphate groups added onto serine or threonine residues by this kinase can be removed by PP2A [21, 22]. In addition, PP2Acα inhibition can upregulate p-ATM levels, which enhances ATM kinase activity [23]. In sum, we proposed that PP2Acα inhibition could upregulate METTL3 levels by enhancing the kinase activity of ATM. To test this speculation, we added KU55933, an ATM kinase inhibitor, to the medium of the GC cells. After 48 hours, we extracted the proteins of the GC cells for western blot analysis. The results showed that the p-ATM and METTL3 levels were both decreased compared with pretreatment levels, while the total ATM level did not change significantly (Figure 3(c)). In conclusion, PP2Acα inhibition was found to upregulate METTL3 levels by stimulating the kinase activity of ATM in GC cells. Previous experiments have shown that high METTL3 levels are closely related to the malignant progression of GC [12, 13]. Therefore, we used phenotypic experiments to assess the malignant phenotype of PP2Acα-inhibited GC cells before and after adding KU55933. Figure 3 Effect of PP2Acα inhibition on METTL3 levels. (a) The protein interaction network between PP2Acα and METTL3 was queried through the STRING website. (b) The PhosphoSite website was used to query the phosphorylation modifications of the amino acid sequence of METTL3. (c) Western blot analysis was used to detect changes in METTL3, ATM, and p-ATM levels before and after adding KU55933. (a)(b)(c) ### 3.4. Inhibition of PP2Acα Promotes the Malignant Phenotype of GC Cells In Vitro To explore the effect of PP2Acα inhibition on GC cells, various phenotypic experiments were carried out in vitro.
The results of the CCK8 experiment showed that on the 4th and 5th days, the proliferation of the experimental group was significantly greater than that of the control group (P<0.05; Figures 4(a) and 4(b)). Similarly, in the clone formation experiment (Figures 4(c) and 4(d)) and the EdU cell proliferation experiment (Figures 4(e) and 4(f)), significantly greater proliferation ability was found in the sh1 and sh3 groups compared to the control group (P<0.05). GC cells were seeded in the ibidi chamber at the same density (5×10⁵ cells/mL), and scratch healing was observed at 0, 12, and 24 hours after adhesion. Healing in the shRNA1 and shRNA3 groups was significantly faster than in the control group at 12 hours (Figures 4(g) and 4(h)). The Transwell invasion experiment showed that after 24 hours of culture, the invasion ability of the experimental group was significantly stronger than that of the control group (P<0.01; Figures 4(i) and 4(j)). These phenotypic experiments proved that PP2Acα inhibition could promote the proliferation, migration, and invasion of GC cells. Figure 4 Effect of PP2Acα inhibition on the proliferation, migration, and invasion of gastric cancer cells in vitro. Cell Counting Kit-8 (a, b), clone formation (c, d), and EdU (e, f) tests were used to detect the effect of PPP2CA knockdown on the proliferation of BGC823 and MGC803 cells. (g, h) The scratch test was used to detect the effect of PPP2CA knockdown on the migration of BGC823 and MGC803 cells. (i, j) The Transwell experiment was used to detect the effect of PPP2CA knockdown on the invasion of BGC823 and MGC803 cells. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001, and ∗∗∗∗P<0.0001. EdU: 5-ethynyl-2′-deoxyuridine. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j) ### 3.5.
Inhibition of PP2Acα Promotes GC Cell Proliferation In Vivo Stable knockdown cells (BGC823/LV-shPPP2CA and MGC803/LV-shPPP2CA) and their corresponding control cells were subcutaneously inoculated into the axillae of 4-week-old male nude mice (n=5 per group). After 4 weeks, all mice were euthanized, and the tumors were isolated and removed. The results showed that PP2Acα inhibition significantly promoted the tumorigenicity of GC cells in the sh1 and sh3 groups in vivo. Compared with the control group, tumor volume increased significantly in the experimental group (P<0.01; Figures 5(a) and 5(b)). These data further confirmed that knockdown of the PPP2CA gene could promote GC cell proliferation in animal models. Figure 5 Effect of PP2Acα inhibition on the proliferation of gastric cancer cells in vivo. (a, b) BGC823 and MGC803 cells transfected with LV-sh-PPP2CA-1,3 or LV-sh-NC were subcutaneously injected into nude mice (n=5 per group); 4 weeks after injection, the tumors were removed, and tumor volumes were measured. LV: lentivirus; NC: normal control. (a)(b) ### 3.6. Inhibition of ATM Kinase Activity Can Reverse the Malignant Progression of Gastric Cancer Cells That Is Promoted by Inhibiting PP2Acα After KU55933 was used to inhibit ATM activity in the GC cells, the morphological changes of the BGC-823 and MGC-803 cells were observed, and the proliferation and migration abilities of the GC cells were detected. We found that apoptosis of the GC cells in each BGC-823 group was increased and that the epithelial-mesenchymal transition characteristics were weakened (Figure 6(a)); that is, the GC cells became rounded, and the looseness between cells decreased. The CCK-8 and EdU cell proliferation assay results showed that the proliferation abilities of the BGC-823 and MGC-803 cells were significantly inhibited (Figures 6(b)–6(d)). The scratch test results showed that the migration ability of the GC cells was significantly inhibited (Figures 6(e) and 6(f)).
These results suggest that the inhibition of ATM activity can reverse the enhancement of the malignant phenotype of GC cells that is induced by inhibiting PP2Acα. It can be concluded that PP2Acα inhibition upregulates METTL3 levels by stimulating the kinase activity of ATM, thereby promoting the malignant phenotype of GC cells. Figure 6 Effect of the inhibition of ATM activity on the malignant phenotype of gastric cancer cells. (a) After ATM activity was inhibited, the morphological changes of BGC-823 and MGC-803 cells were microscopically observed (magnification, ×10). (b, c) After ATM activity was inhibited, the cell proliferation of the BGC-823 and MGC-803 groups was detected by using the Cell Counting Kit-8. (d) After ATM activity was inhibited, the cell proliferation of the BGC-823 and MGC-803 groups was microscopically observed following the EdU cell proliferation test. (e, f) The migration of BGC-823 and MGC-803 cells was detected by using the scratch test. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001, and ∗∗∗∗P<0.0001. EdU: 5-ethynyl-2′-deoxyuridine. (a)(b)(c)(d)(e)(f) ## 4. Discussion The poor prognosis associated with advanced GC has become a major public health problem [24–26]. The radical resection of GC has been well-developed since the 1980s, chemotherapy regimens have improved in recent years [24, 27, 28], and advanced intervention methods, such as arterial interventional embolization for distant metastases and intraperitoneal hyperthermic perfusion therapy, have emerged. However, the prognoses of patients with advanced GC have not reached public expectations. Nevertheless, in recent years, it has been discovered that molecular targeted drugs can significantly prolong the survival of patients with malignant tumors [29, 30], which is promising when it comes to curing GC.
Combined with the ongoing breakthroughs in research on targeted tumor therapy [31], the molecular mechanisms of GC are worthy of in-depth research in order to improve therapeutic targets for GC and lay the foundation for better diagnosis and treatment of GC in the future. To explore the molecular mechanism of advanced GC, this study took the body's most extensive protein-modification systems as entry points: PP2A and m6A are important components of phosphorylation homeostasis and RNA methylation modification, respectively. Of the two, PP2A has been favored by researchers because of the complexity of its trimeric structure, especially the regulatory subunit B, whose substrate specificity and functional diversity enrich the functions of the PP2A holoenzyme [8, 32]. However, PP2A's functional diversity is inseparable from its core enzyme, which is composed of the structural subunit A and catalytic subunit C [33]. As an important part of the PP2A core enzyme, PP2Acα is highly conserved, and PP2Acα dysfunction often leads to the loss of PP2A holoenzyme activity, disrupting a variety of vital processes in the body and in turn inducing various diseases [34]. So far, basic research on GC has not involved PP2Acα/PPP2CA. The TCGA and Kaplan-Meier plotter databases show that low PPP2CA expression is related to the poor prognosis of GC. Therefore, studying the expression imbalance of PPP2CA is crucial for in-depth exploration of the pathogenesis of GC. m6A has become a hot research topic in recent years because of its dynamic and reversible methylation characteristics, and it has been found to play increasingly important roles in various diseases [9, 35–38]. METTL3 is the core of m6A modification, and changes in METTL3 levels or methylation function have been found in the progression of many diseases, so abnormalities in METTL3 levels and function are frequent research focuses.
Research on METTL3 in GC is no exception, and most results have shown that increased METTL3 levels promote the progression of GC [11, 39–41]. This study found that PP2Acα inhibition significantly upregulated METTL3 protein levels. However, through the PubMed database, we found no published study explaining the connection between PP2Acα and METTL3. Using the STRING database to query the interaction network between PP2Acα and METTL3, we did not find a superior or subordinate regulatory relationship between them. Through the PhosphoSite website, we found that METTL3 has a large number of phosphorylation sites in its amino acid sequence, suggesting that METTL3 can be affected by kinase phosphorylation, and phosphorylation is often accompanied by changes in protein levels and functions. A previous study found that ATM kinase phosphorylates the serine 43 (S43) site in the amino acid sequence of METTL3 to upregulate the level and function of METTL3. Activated METTL3 localizes to DNA double-strand breaks (DSBs), where DSB-associated RNA is methylated; the m6A reader protein YTHDC1 then recognizes this methylation and recruits the RAD51 and BRCA1 proteins to repair the damage by homologous recombination, maintaining a stable genome. Therefore, cells with low METTL3 levels lack effective homologous recombination repair, which increases genomic instability and leads to cell death. Tumor cells with high METTL3 levels are more likely to respond to DSBs, stabilize their own genome, and maintain their malignant phenotype and drug resistance [21]. In particular, the activity of ATM, the upstream kinase of METTL3, can be regulated by PP2Acα.
Experiments have shown that PP2Acα inhibition upregulates autophosphorylation at the ATM Ser1981 site, thereby activating ATM [23]. In summary, it is speculated that in GC cells, inhibiting PP2Acα can upregulate the activity of ATM, leading to phosphorylation of the S43 position of the METTL3 amino acid sequence, which may activate the METTL3 methylation function and upregulate the protein level of METTL3, ultimately enhancing the malignant phenotype of GC cells. To verify this speculation, we extracted GC cell proteins for western blot experiments. The results showed that PP2Acα inhibition led to increased p-ATM (Ser1981) and METTL3 levels, while the total ATM protein level did not change significantly. These results verify that inhibition of PP2Acα upregulates ATM kinase activity in GC cells and is accompanied by high METTL3 levels. Subsequently, we used KU55933 to inhibit ATM activity in each group of GC cells in rescue experiments. Western blot analysis showed that after inhibiting ATM activity, METTL3 protein levels decreased significantly. This result clarified the regulatory relationship between p-ATM and METTL3. In addition, through cell phenotyping experiments comparing the malignant phenotype of GC cells before and after adding KU55933, we found that PP2Acα inhibition promoted the malignant phenotype of GC cells but that this could be reversed by adding KU55933. It can be concluded that PP2Acα inhibition promotes increased METTL3 levels by upregulating ATM activity and ultimately enhances the malignant phenotype of GC cells. PP2Acα lies upstream in this signaling axis, and its expression imbalance is the root cause of the axis's activation. Consistent with this, upregulating PPP2CA expression inhibits the malignant phenotype of tumor cells in cancers such as colon, thyroid, and prostate cancer.
Targeted therapy of PP2Acα may help to control the malignant progression of gastric cancer. Of course, this study has limitations. Our understanding of the regulation of downstream targets by METTL3 still needs to be supplemented by follow-up studies. However, it is undeniable that this study has enriched the molecular mechanism research related to GC and laid a foundation for follow-up basic research and for the clinical diagnosis and treatment of GC. --- *Source: 1015293-2021-08-25.xml*
2021
# Protective Mechanisms of Nootropic Herb Shankhpushpi (Convolvulus pluricaulis) against Dementia: Network Pharmacology and Computational Approach **Authors:** Md. Abdul Hannan; Armin Sultana; Md. Hasanur Rahman; Abdullah Al Mamun Sohag; Raju Dash; Md Jamal Uddin; Muhammad Jahangir Hossen; Il Soo Moon **Journal:** Evidence-Based Complementary and Alternative Medicine (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1015310 --- ## Abstract Convolvulus pluricaulis (CP), a Medhya Rasayana (nootropic) herb, is a major ingredient in Ayurvedic and Traditional Chinese formulae indicated for neurological conditions, namely, dementia, anxiety, depression, insanity, and epilepsy. Experimental evidence suggests various neuroactive potentials of CP such as memory-enhancing, neuroprotective, and antiepileptic. However, precise mechanisms underlying the neuropharmacological effects of CP remain unclear. The study, therefore, aimed at deciphering the molecular basis of neuroprotective effects of CP phytochemicals against the pathology of dementia disorders such as Alzheimer’s (AD) and Parkinson’s (PD) disease. The study exploited bioinformatics tools and resources, such as Cytoscape, DAVID (Database for annotation, visualization, and integrated discovery), NetworkAnalyst, and KEGG (Kyoto Encyclopedia of Genes and Genomes) database to investigate the interaction between CP compounds and molecular targets. An in silico analysis was also employed to screen druglike compounds and validate some selective interactions. ADME (absorption, distribution, metabolism, and excretion) analysis predicted a total of five druglike phytochemicals from CP constituents, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin. 
In network analysis, these compounds were found to interact with molecular targets such as prostaglandin G/H synthase 1 and 2 (PTGS1 and PTGS2), endothelial nitric oxide synthase (NOS3), insulin receptor (INSR), heme oxygenase 1 (HMOX1), acetylcholinesterase (ACHE), peroxisome proliferator-activated receptor-gamma (PPARG), and monoamine oxidase A and B (MAOA and MAOB) that are associated with neuronal growth, survival, and activity. Docking simulation further confirmed the interaction patterns and binding affinities of the selected CP compounds with those molecular targets. Notably, scopoletin showed the highest binding affinity with PTGS1, NOS3, ACHE, MAOA, and TRKB, quercetin with PTGS2, 4-hydroxycinnamic acid with INSR, PPARG, and MAOB, and ayapanin with HMOX1. The findings indicate that scopoletin, kaempferol, quercetin, 4-hydroxycinnamic acid, and ayapanin are the main active constituents of CP which might account for its memory enhancement and neuroprotective effects and that target proteins such as PTGS1, PTGS2, NOS3, PPARG, ACHE, MAOA, MAOB, INSR, HMOX1, and TRKB could be druggable targets against dementia. --- ## Body ## 1. Introduction Dementia is a leading cause of disability and dependency among the elderly. Dementia patients may have difficulty remembering, thinking critically, behaving normally, and even performing normal daily activities. Neurodegenerative diseases (NDD) such as Alzheimer's (AD) and Parkinson's (PD) disease account for 60–80% of all dementia cases. The pathobiology of NDD is still unclear; however, pathogenic events such as oxidative stress, inflammation, apoptosis, and mitochondrial dysfunction play a critical role in the onset and progression of NDD [1]. Targeting cellular pathways that are associated with these pathological phenomena constitutes a prospective therapeutic strategy in the management of NDD. Having complex pathobiology, NDD can be adequately treated through a multitarget/multidrug therapeutic protocol [2]. 
With diverse phytochemical profiles, medicinal herbs are natural multidrug formulations and are utilized in many traditional therapies with no or minimal side effects [2]. Convolvulus pluricaulis Choisy (synonym Convolvulus prostratus Forssk., family Convolvulaceae) is a perennial herb native to the Indian subcontinent. Commonly termed Shankhpushpi in Ayurveda, C. pluricaulis (CP) has been indicated for various human ailments, including those affecting the central nervous system, namely, anxiety, depression, epilepsy, and dementia [3, 4]. The pharmacological attributes underlying the health benefits of CP include anti-inflammatory, antioxidant, and immunomodulatory properties [5]. CP contains several bioactive phytochemicals, namely, flavonoids (kaempferol and quercetin), coumarins (scopoletin and ayapanin), a phenolic acid (hydroxycinnamic acid), and a phytosterol (β-sitosterol) that are related to its pharmacological effects [6]. A growing body of preclinical evidence has emerged supporting the ethnopharmacological uses of CP for neurological problems [7]. In healthy rats, CP extract can promote memory capacity by modulating synaptic plasticity in the hippocampus [8]. The nootropic effect of CP was also confirmed by other studies [9, 10]. In various experimental models, CP can protect against neuronal injury and ameliorate memory deficits [11–15]. CP treatment prevented protein and mRNA expression of tau and amyloid precursor protein (APP) in scopolamine-treated rat brain [16]. In a Drosophila model of AD, CP can rescue neurons from tau-induced neurotoxicity by attenuating oxidative stress and restoring the depleted AChE activity [17]. Scopoletin, a coumarin of CP, attenuated oxidative stress-mediated loss of dopaminergic neurons and increased the efficacy of dopamine in a PD model [18]. Scopoletin also ameliorated amnesia in scopolamine-treated animals [19]. 
In a rat model of cerebral ischemia-reperfusion injury, CP improved brain pathology through an antioxidant mechanism [20]. A polyherbal formulation containing CP can improve streptozotocin-induced memory deficits in rats by downregulating the mRNA expression of mitochondria-targeted cytochromes [21]. CP also improved the disease outcomes of diabetes, which is often complicated by cognitive deficits [22]. In addition, CP improved anxiety, depression, and epileptic seizure [9, 23–27]. CP can also help withstand stress conditions in experimental animals [28, 29]. The neuropharmacological effects highlighted above are mostly cumulative effects of CP phytochemicals. The existing literature, however, can hardly explain the precise mechanisms that underlie the neuroactive functions of CP. Understanding the underlying molecular mechanisms through an experimental approach requires intensive endeavors. Alternatively, network pharmacology is a promising bioinformatics tool that can predict the active phytochemicals and the molecular targets that are associated with the pharmacological actions of plant extracts [30, 31]. Results obtained from network pharmacology can guide more precise in vivo research. In this study, a network pharmacology and docking approach was used to explore the pharmacological mechanisms of CP phytochemicals against dementia disorders. The present study also provides evidence that helps understand the mechanisms underlying the reputed memory-enhancing capacity of CP and offers valuable insights to advance future research and encourage the use of CP and its metabolites in the management of dementia disorders. ## 2. Materials and Methods ### 2.1. Retrieval of Compounds' Information CP compounds were collected from the Traditional Chinese Medicine Systems Pharmacology (TCMSP) database [32]. We also verified compounds' information through the PubMed database. The chemical information of CP compounds was obtained from the PubChem and ChEMBL databases. ### 2.2. 
Compound Screening Drug-likeness of CP compounds was predicted by QikProp (Schrödinger Release 2019–3: QikProp, Schrödinger, LLC, New York, NY, 2019). The screening was carried out based on #stars (0–5), which indicates the number of properties that fall outside the 95% range of similar values for known drugs. A compound with fewer stars is more druglike than one with more stars. ### 2.3. Target Retrieval Target information for each compound was retrieved from the TCMSP database [32]. The protein data, namely, standard protein name, gene ID, and organism, were verified through UniProt (http://www.uniprot.org/) [33]. ### 2.4. Network Construction First, individual lists of AD-, PD-, and dementia-related genes were retrieved from the DisGeNET database v6.0 [34]. Disease-associated targets were defined as those common to the compounds' targets and these gene lists. The overlapping targets among the lists of targets related to CP compounds, AD, PD, and dementia were obtained with the Venny 2.1.0 online software (https://bioinfogp.cnb.csic.es/tools/venny/index.html). An interaction network among compounds, targets, and diseases was established in Cytoscape v3.8.2 [35]. Nodes in the network represent molecules (compounds and targets), and edges represent compound-target interactions. ### 2.5. Gene Ontology (GO) Analysis Functional enrichment analysis of Gene Ontology (GO) terms for biological process, molecular function, and cellular component was carried out using the DAVID 6.8 Gene Functional Classification Tool [36] (https://david.ncifcrf.gov/home.jsp). GO terms with a P value of <0.01 were considered significant. Target proteins were categorized by the Panther classification system [37] (http://pantherdb.org/). ### 2.6. Network Pathway Analysis A protein-protein interaction network was constructed with NetworkAnalyst [38] (https://www.networkanalyst.ca/). 
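Enrichment tools such as DAVID rank each GO term with a hypergeometric (one-sided Fisher) tail test against a background gene set. A minimal standard-library sketch of that computation, using hypothetical gene counts rather than the study's actual background:

```python
from math import comb

def hypergeom_enrichment_p(N: int, K: int, n: int, x: int) -> float:
    """One-sided P value that at least x of the n target genes fall in a GO
    term annotating K of the N background genes (hypergeometric upper tail)."""
    return sum(
        comb(K, k) * comb(N - K, n - k) for k in range(x, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical example: 45 overlapping targets drawn from a 20000-gene
# background, 8 of which are annotated to a term covering 150 genes.
p = hypergeom_enrichment_p(N=20000, K=150, n=45, x=8)
assert p < 0.01  # would pass the significance cutoff used in this study
```

The expected overlap here is only 45 × 150 / 20000 ≈ 0.34 genes, so observing 8 yields a vanishingly small P value; real tools then adjust these raw values for multiple testing.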
An interactive network connecting molecular targets and associated cellular pathways was also constructed with NetworkAnalyst. Signaling and disease pathways highlighting the targets of CP compounds were retrieved through the KEGG pathway mapper [39] (https://www.genome.jp/kegg/tool/map_pathway2.html). ### 2.7. Molecular Docking and Binding Energy Analysis #### 2.7.1. Preparation of Ligand For virtual screening, the 2D structures of five compounds in SDF format were retrieved from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/) and then processed with the ligand preparation tool in Schrödinger 2017–1 using the OPLS-3 force field [40]. Before minimization, the ionization state of each compound was fixed at pH 7.0 ± 2.0 with the Epik 2.2 tool [41, 42]. During this process, a maximum of 32 possible stereoisomers was generated for every compound, from which only the lowest-energy conformer was retained for subsequent analysis. #### 2.7.2. Prediction of Molecular Docking between Active Compound and Target Protein The target proteins, downloaded from the Protein Data Bank (https://www.rcsb.org/, Supplementary Table S1), were prepared and refined with the Protein Preparation Wizard (Schrödinger 2017–1), in which bond orders, charges, and hydrogens were assigned to the crystal structure. In addition, the protein structure was optimized at neutral pH by removing all nonessential water molecules. A grid box was generated automatically for Glide XP docking. Ligands and receptors were then docked with the ligand docking module in Maestro. #### 2.7.3. Prime MM-GBSA Analysis Binding free energy calculation is commonly applied to determine the total energy of binding of a ligand to a protein [43]. The protein-ligand pose viewer file was used. 
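The Prime MM-GBSA bookkeeping can be sketched in a few lines: each species (complex, protein, ligand) gets a total energy G = E_MM + G_SGB + G_NP, and the binding free energy is the difference. The energy values below are hypothetical placeholders in kcal/mol, not outputs of the Schrödinger suite:

```python
from dataclasses import dataclass

@dataclass
class GBSAEnergy:
    """Per-species MM-GBSA terms (kcal/mol): molecular mechanics energy
    (E_MM), polar solvation (G_SGB), and nonpolar solvation (G_NP)."""
    e_mm: float
    g_sgb: float
    g_np: float

    @property
    def total(self) -> float:
        # G = E_MM + G_SGB + G_NP
        return self.e_mm + self.g_sgb + self.g_np

def delta_g_bind(complex_: GBSAEnergy, protein: GBSAEnergy,
                 ligand: GBSAEnergy) -> float:
    """ΔG_bind = G_complex − (G_protein + G_ligand); more negative is stronger."""
    return complex_.total - (protein.total + ligand.total)

# Hypothetical energies for illustration only (not from the study):
dg = delta_g_bind(
    GBSAEnergy(-5230.0, -1180.0, 95.0),   # complex
    GBSAEnergy(-5150.0, -1160.0, 98.0),   # free protein
    GBSAEnergy(-40.0, -15.0, 3.0),        # free ligand
)
# → -51.0 kcal/mol
```

The sign convention matters: a favorable complex has a lower (more negative) total energy than its separated parts, so ΔG_bind comes out negative.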
In MM-GBSA (molecular mechanics with generalized Born and surface area solvation) analysis, the binding free energy was calculated using the OPLS_3 force field for the molecular mechanics energies (E_MM), the SGB solvation model for the polar solvation term (G_SGB), and van der Waals interactions together with the nonpolar solvent-accessible surface area for the nonpolar solvation term (G_NP) [44]. The dielectric solvent model VSGB 2.0 was used to predict the directionality of hydrogen bond and π-stacking interactions [43]. A more negative binding score denotes stronger binding. #### 2.7.4. The Total Binding Free Energy

ΔG_bind = G_complex − (G_protein + G_ligand), where G = E_MM + G_SGB + G_NP. (1)

The flowchart of the integrated network pharmacology and in silico approach employed in this study is illustrated in Figure 1.

Figure 1 An outline of the network pharmacology-based deciphering of the neuropharmacological mechanism of C. pluricaulis compounds.

## 3. Results ### 3.1. ADME Screening Twelve phytochemicals belonging to CP were retrieved from the TCMSP database. ADME screening yielded 11 compounds with a #stars score ≤5 (Supplementary Table S2). Of these, six compounds lacking biological targets were omitted. Finally, five were chosen for further bioinformatic analysis, as displayed in Table 1. Most of the compounds are considered druglike and are more likely to be orally available, as they largely obeyed Lipinski's rule of five [45] (mol_MW < 500, QPlogPo/w < 5, donorHB ≤ 5, accptHB ≤ 10) and Jorgensen's rule of three [46] (QPlogS > −5.7, QP PCaco > 22 nm/s, # Primary Metabolites < 7), respectively. Moreover, all compounds fall within the recommended range (−3.0 to 1.2) of the predicted brain/blood partition coefficient (QPlogBB) (Supplementary Table S2).

Table 1 Druglike compounds of C. pluricaulis as screened by the QikProp ADME prediction tool.

| Compound name | Chemical nature | #stars^a | Rule of five^b | Rule of three^c |
|---|---|---|---|---|
| Scopoletin | Coumarin | 0 | 0 | 0 |
| Hydroxycinnamic acid | Carboxylic acid | 0 | 0 | 0 |
| Kaempferol | Flavonoid | 0 | 0 | 0 |
| Quercetin | Flavonoid | 0 | 0 | 1 |
| Ayapanin | Coumarin | 1 | 0 | 0 |

^a #Stars indicates the number of property or descriptor values that fall outside the 95% range of similar values for known drugs (ranging from 0–5). A large number of stars suggests that a molecule is less druglike than molecules with few stars. The following properties and descriptors are included in the determination of #stars: MW, donorHB, accptHB, QPlogPw, QPlogPo/w, QPlogS, QPLogKhsa, QPlogBB, and #metabol. 
^b Rule of five indicates the number of violations of Lipinski's rule of five. The rules are mol_MW < 500, QPlogPo/w < 5, donorHB ≤ 5, and accptHB ≤ 10. Compounds that satisfy these rules are considered druglike (maximum is 4). ^c Rule of three indicates the number of violations of Jorgensen's rule of three. The three rules are QPlogS > −5.7, QP PCaco > 22 nm/s, and # Primary Metabolites < 7. Compounds with fewer (and preferably no) violations of these rules are more likely to be orally available (maximum is 3). ### 3.2. Target Fishing A total of 174 possible targets of the five compounds were obtained from the TCMSP database (Supplementary Table S3) and validated by a literature scan in the PubMed database. Of these, 117, 109, and 51 targets were found to be associated with AD, PD, and dementia, respectively, after comparison with the DisGeNET database (Supplementary Table S4). ### 3.3. Network Building The compound-target-disease (C-T-D) network established in Cytoscape helps explain the multitarget effects of CP, which is used to treat brain disorders associated with cognitive deficits. The C-T-D network represents the interaction of CP compounds with the targets linked to AD, PD, and dementia (Figure 2). Focusing on the degree of connectivity, we assume that quercetin (degree, 144) and kaempferol (degree, 58) could potentially contribute to the management of cognitive disorders. Of the targets, PTGS1 and PTGS2 (each with degree 5) had the highest degree of connectivity with the compounds, followed by NOS3, INSR, NR1I3, NR1I2, HMOX1, ACHE, PPARG, MAOA, and MAOB (each with degree ≥3), suggesting these gene products as prospective drug targets for CP compounds in dementia management. The protein-protein interaction network illustrates the target proteins, some of which are direct targets of CP compounds, while others are interacting proteins (Supplementary Figure S1). Figure 2 Network analysis. 
(a) Overlapping target genes among CP compounds, AD, PD, and dementia. (b) The compound-target-disease (C-T-D) network shows the interactions among CP compounds, targets, and dementia disorders. Hexagonal nodes represent CP compounds, whereas oval nodes represent their targets. Node size is proportional to degree. The nodes of the first tier represent the targets with a higher degree of interaction with the compounds. ### 3.4. GO Analysis GO analysis was carried out only with the disease-associated genes (a total of 45) that are common to AD, PD, and dementia, as retrieved with the Venny 2.1.0 online software (Figure 3). The top 15 highly enriched GO terms under biological process (BP), molecular function (MF), and cellular component (CC) (P < 0.05; P values adjusted using the Benjamini–Hochberg procedure) are shown in Figure 4(a). The top biological processes, including inflammatory response, response to drug, and aging, have been linked to the pathophysiology of the disease, suggesting that CP and its metabolites may interfere with AD progression by modulating these biological processes. Moreover, the functional classification of target proteins reflects their diversity of biological functions (Figure 4(b)).

Figure 3 Venn diagram. Overlapping target genes among CP compounds, AD, PD, and dementia.

Figure 4 Bioinformatics analysis of overlapping target genes. (a) Gene Ontology (GO) analysis: the top 15 GO terms for biological process, molecular function, and cellular component are displayed, where the x-axis represents GO terms for the target genes and the y-axis shows target counts. The number at the tip of each bar represents the corresponding target count. Cutoff: P < 0.001 and FDR < 0.001. (b) The Panther classification system categorized target proteins into nine classes. The figures next to each group in the pie chart indicate the number and percentage of proteins in the given functional class. ### 3.5. 
Analysis of Cellular Pathways and Targets Involved in the Pathobiology of Dementia Disorders An interactive network illustrates the top cellular pathways involving targets of CP compounds (Figure 5). Cellular pathways were grouped into various modular systems according to KEGG pathway annotation.

Figure 5 Integrated target-pathway network, a comprehensive network that visualizes the interactions of the CP compounds' targets with cellular pathways, which were categorized into seven modular systems (differentiated by color) using KEGG pathway annotation. Potential druggable targets are marked with small pink circles.

Among the signaling pathways that were enriched (adjusted P value <0.05) in the "signal transduction" module (Figure 5), the most highly enriched pathway was PI3K/Akt signaling, followed by MAPK signaling, which is critically implicated in neuronal maturation and survival. The PI3K/Akt pathway retrieved from the KEGG pathway database illustrates a total of 12 targets of the CP compounds (Figure 6). The upstream signaling receptor of the PI3K/Akt pathway is TrkB, which, upon binding its natural ligand brain-derived neurotrophic factor (BDNF), conveys neurotrophin signals to several downstream effectors such as Bcl-2 and Bax. Based on this information, docking analysis was used to further verify whether the CP compounds could interact with TrkB.

Figure 6 The PI3K-Akt pathway is a top enriched signaling pathway. CP targets are highlighted in red.

Among the endocrine system-related pathways, insulin receptor signaling was the most overrepresented. The insulin receptor (INSR) was highly connected to CP compounds, and this interaction was further verified by molecular docking. Several signaling pathways related to inflammation, including the TNF, HIF-1, and NF-κB pathways, were enriched (Figure 5). 
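The degree-of-connectivity ranking used throughout these networks is simply an edge count per node. A minimal sketch over a hypothetical compound-target edge list (the node names come from the study, but the edges shown are illustrative, not the full TCMSP-derived network):

```python
from collections import Counter

# Hypothetical compound-target edges for illustration; the real network
# was built in Cytoscape from TCMSP/DisGeNET data.
edges = [
    ("quercetin", "PTGS2"), ("quercetin", "PTGS1"), ("quercetin", "ACHE"),
    ("kaempferol", "PTGS2"), ("kaempferol", "NOS3"),
    ("scopoletin", "PTGS1"), ("ayapanin", "PTGS2"),
]

degree = Counter()
for compound, target in edges:
    degree[compound] += 1  # compound degree: number of targets it hits
    degree[target] += 1    # target degree: number of compounds hitting it

# Rank targets by how many compounds connect to them
targets = {t for _, t in edges}
ranked = sorted(targets, key=lambda t: -degree[t])
```

In this toy edge list, PTGS2 (three compounds) would sit in the network's first tier, mirroring how PTGS1/PTGS2 top the real C-T-D network.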
Since cyclooxygenases such as COX-1 and COX-2 (PTGS1 and PTGS2), which catalyze the production of inflammatory mediators, were targeted by CP compounds with the highest degree of connectivity (Figure 3), their interactions were further verified by docking simulation. In addition, nervous system-related pathways such as the neurotrophin signaling pathway, cholinergic synapse, dopaminergic synapse, serotonergic synapse, and long-term potentiation were enriched (Figure 5). Any abnormality in these pathways disrupts brain function, leading to the onset of NDD and related pathology. Notably, acetylcholinesterase (ACHE) has clinical significance in cholinergic deficits, and therefore its binding and interaction with CP compounds were further verified with docking analysis. A number of immune system-related pathways, namely, Toll-like receptor, T cell and B cell receptor, chemokine, and NOD-like receptor signaling pathways, were also highlighted in the network (Figure 5). An AD pathway (Figure 7) was retrieved from the KEGG pathway database, illustrating a total of 13 proteins, including those involved in amyloidogenesis (for example, APP and PSEN), cellular survival and growth (for example, INSR, Akt, and Erk1/2), and inflammation (for example, iNOS, COX2, IKK, TNF, IL-1, and IL-6), which are potential targets of CP compounds, as identified by network pharmacology. Considering the appearance of INSR and COX2 in network pharmacology and in AD pathobiology, their interactions with the selected CP compounds were further verified by docking simulation. In addition, monoamine oxidases (MAOA and MAOB) are potential targets for both AD and PD, and thus their interactions with the selected CP compounds were also further verified.

Figure 7 KEGG pathway of Alzheimer's disease. Targets of CP compounds are marked with an asterisk (∗). Of these, β-secretase and GSK-3β are potential druggable targets for AD therapy. 
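The adjusted P values used as enrichment cutoffs above follow the Benjamini–Hochberg step-up procedure. A minimal sketch of that adjustment (not the DAVID/NetworkAnalyst implementation), run on hypothetical raw P values:

```python
def benjamini_hochberg(pvals: list[float]) -> list[float]:
    """Return BH-adjusted P values (FDR), in the input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest P value down, enforcing monotonicity:
    # adjusted p at rank r is min over ranks >= r of p * n / rank
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical raw P values for six pathways
adj = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06])
# → [0.006, 0.024, 0.0504, 0.0504, 0.0504, 0.06]
```

Note how the raw values 0.039 and 0.041 would individually pass a 0.05 cutoff but their adjusted values (0.0504) do not, which is exactly why the study's "adjusted P value <0.05" criterion is stricter than a raw threshold.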
### 3.6. In Silico Analysis We employed molecular docking analysis to validate the interaction patterns and the efficiency of CP phytochemicals with some of the vital target proteins that showed a higher degree of connectivity in network pharmacology. Accordingly, we selected PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX1, ACHE, PPARG, MAOA, and MAOB for further analysis. Additionally, we included TrkB in the docking analysis, since several downstream effectors of TrkB receptor signaling, including PI3K, AKT1, BAX, and BCL2, showed a higher degree of connectivity in the network (Figure 2), and TrkB is a key receptor for neuronal growth and survival. In protein-ligand docking, a docking score below zero indicates binding affinity of the ligand toward the receptor. However, molecular docking typically uses approximate scoring functions to calculate binding energies, which do not always correlate with experimental values [47, 48]. We therefore used MM-GBSA binding energy calculation, which uses an implicit continuum solvent approximation, to compute the free energy of binding of each complex [49]. A total of five compounds, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, were docked to the proteins corresponding to 12 target genes (PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX1, ACHE, PPARG, MAOA, MAOB, and TRKB), and the resulting docked complexes were further subjected to MM-GBSA analysis. As shown in Figure 8, the quercetin-PTGS2 complex showed the most favorable binding energy for PTGS2, at −46.27 kcal/mol, while for NOS3, scopoletin showed the maximum binding affinity, forming a stable complex with a binding energy of −34.98 kcal/mol. Interestingly, scopoletin also showed the maximum binding energy in complexes with PTGS1, NR1I3, NR1I2, ACHE, MAOA, and TRKB, with binding energies of −36.28, −56.01, −39.13, −43.13, −51.18, and −34.67 kcal/mol, respectively. 
On the other hand, while bound to INSR, MAOB, and PPARG, 4-hydroxycinnamic acid showed maximum binding energies of −21.46, −34.044, and −41.04 kcal/mol, respectively. In HMOX1, ayapanin showed higher binding energy than other compounds. The details of molecular interactions of top hits from docking analysis are shown in Figure 8.Figure 8 Molecular docking analysis of target proteins and compounds. Heatmap representing the binding energy revealed from MM-GBSA analysis (a). Two-dimensional molecular interaction for protein-ligand complex for TRKB-Scopoletin (b), PTGS2-Quercetin (c), NOS3-Scopoletin (d), PTGS1-Scopoletin (e), INSR-4-hydroxycinnamic acid (f), NR1I3-Scopoletin (g), NR1I2-Scopoletin (h), HMOX1-Ayapanin (i), AChE-Scopoletin (j), PPARG-4-Hydroxycinnamic acid (k), MAOA-Scopoletin (l), and MAOB-4-Hydroxycinnamic acid (m). (a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l)(m) ## 3.1. ADME Screening Twelve phytochemicals belonging to CP were retrieved from the TCMSP database. ADME screening offered 11 compounds having a #stars score ≤5 (Supplementary TableS2). Of these, six compounds lacking biological targets were omitted. Finally, five were chosen for further bioinformatic analysis, as displayed in Table 1. Most of the compounds are considered druglike and are more likely to be available orally as they maximally obeyed Lipinski’s rule of five [45] (mol_MW < 500, QPlogPo/w < 5, donorHB°≤ 5, accptHB ≤ 10) and Jorgensen’s rule of three [46] (QPlogS > −5.7, QP PCaco > 22 nm/s, # Primary Metabolites < 7), respectively. Moreover, all compounds fall within the recommended range (−3.0 to 1.2) of predicted brain/blood partition coefficient (QPlogBB) (Supplementary Table S2).Table 1 Druglike compounds ofC. pluricaulis as screened by QikProp ADME prediction tool. 
| Compound name | Chemical nature | #starsa | Rule of fiveb | Rule of threec |
| --- | --- | --- | --- | --- |
| Scopoletin | Coumarin | 0 | 0 | 0 |
| Hydroxycinnamic acid | Carboxylic acid | 0 | 0 | 0 |
| Kaempferol | Flavonoid | 0 | 0 | 0 |
| Quercetin | Flavonoid | 0 | 0 | 1 |
| Ayapanin | Coumarin | 1 | 0 | 0 |

a#Stars indicates the number of property or descriptor values that fall outside the 95% range of similar values for known drugs (ranging from 0 to 5). A large number of stars suggests that a molecule is less druglike than molecules with few stars. The following properties and descriptors are included in the determination of #stars: MW, donorHB, accptHB, QPlogPw, QPlogPo/w, QPlogS, QPLogKhsa, QPlogBB, and #metabol. bRule of five indicates the number of violations of Lipinski’s rule of five [3]. The rules are: mol_MW < 500, QPlogPo/w < 5, donorHB ≤ 5, accptHB ≤ 10. Compounds that satisfy these rules are considered druglike (maximum is 4). cRule of three indicates the number of violations of Jorgensen’s rule of three. The three rules are QPlogS > −5.7, QP PCaco > 22 nm/s, # Primary Metabolites < 7. Compounds with fewer (and preferably no) violations of these rules are more likely to be orally available (maximum is 3). ## 3.2. Target Fishing A total of 174 possible targets of the five compounds were obtained from the TCMSP database (Supplementary Table S3) and validated using a literature scan in the PubMed database. Of these, a total of 117, 109, and 51 targets were found to be associated with AD, PD, and dementia, respectively, after comparison with the DisGeNET database (Supplementary Table S4). ## 3.3. Network Building The compound-target-disease (C-T-D) network established through Cytoscape could explain the multitarget effects of CP, which is used to treat brain disorders associated with cognitive deficits. The C-T-D network represents the interaction of CP compounds with the targets that are linked with AD, PD, and dementia (Figure 2).
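The target-overlap steps behind the C-T-D network (compound targets intersected with AD, PD, and dementia gene sets, as done here with Venny) reduce to plain set algebra. Below is a minimal sketch using placeholder gene symbols, not the actual TCMSP/DisGeNET lists:

```python
# Toy sketch of the Venny-style overlap: intersecting a compound-target
# set with disease gene sets. Gene lists are illustrative placeholders.
cp_targets = {"PTGS2", "PTGS1", "NOS3", "ACHE", "INSR", "MAOA", "APP"}
ad_genes = {"PTGS2", "ACHE", "INSR", "APP", "PSEN1"}
pd_genes = {"PTGS2", "MAOA", "NOS3", "APP"}
dementia_genes = {"PTGS2", "ACHE", "APP"}

# Targets shared by the compounds and one disease
ad_overlap = cp_targets & ad_genes

# Genes common to all three diseases (analogous to the "45 common genes" step)
common = cp_targets & ad_genes & pd_genes & dementia_genes
print(sorted(common))  # ['APP', 'PTGS2']
```

In the actual workflow, the same intersections were computed on the full DisGeNET gene lists; the set operators shown here are the whole of the underlying logic.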
Focusing on the degree of connectivity, we assume that quercetin (degree 144) and kaempferol (degree 58) could potentially contribute to the management of cognitive disorders. Of the targets, PTGS1 and PTGS2 (each with degree 5) had the highest degree of connectivity with the compounds, followed by NOS3, INSR, NR1I3, NR1I2, HMOX1, ACHE, PPARG, MAOA, and MAOB (each with degree ≥3), suggesting these gene products as prospective drug targets of CP compounds in dementia management. The protein-protein interaction network illustrates the target proteins, some of which are direct targets of CP compounds while others are interacting proteins (Supplementary Figure S1).Figure 2 Network analysis. (a) Overlapping target genes among CP compounds, AD, PD, and dementia. (b) The compound-target-disease (C-T-D) network shows the interaction among CP compounds, targets, and dementia disorders. Hexagonal nodes represent CP compounds, whereas oval nodes represent their targets. Node size is proportional to degree. The first-tier nodes represent the targets with a higher degree of interaction with the compounds. ## 3.4. GO Analysis GO analysis was carried out only with the disease-associated genes (a total of 45) that are common to AD, PD, and dementia, as retrieved using the Venny 2.1.0 online software (Figure 3). The top 15 highly enriched GO terms under biological process (BP), molecular function (MF), and cellular component (CC) (P<0.05; P values were adjusted using the Benjamini–Hochberg procedure) are shown in Figure 4(a). The top biological processes, including inflammatory response, response to drug, and aging, have been linked to the pathophysiology of the disease, suggesting that CP and its metabolites may interfere with AD progression via modulating these biological processes. Moreover, the functional classification of target proteins suggests their diversity in biological functions (Figure 4(b)).Figure 3 Venn diagram.
Overlapping target genes among CP compounds, AD, PD, and dementia.Figure 4 Bioinformatics analysis of overlapping target genes. (a) Gene ontology (GO) analysis: the top 15 GO terms for biological processes, molecular function, and cellular components are displayed, where the x-axis represents GO terms for the target genes and the y-axis shows target counts. The number at the tip of each bar represents the corresponding target count. Cutoff: P<0.001 and FDR < 0.001. (b) Panther classification categorized target proteins into nine classes. The figures next to each group in the pie chart indicate the number and percentage of proteins in the given functional class. ## 3.5. Analysis of Cellular Pathways and Targets Involved in the Pathobiology of Dementia Disorders An interactive network illustrates the top cellular pathways involving targets of CP compounds (Figure 5). Cellular pathways were grouped into various modular systems according to KEGG pathway annotation.Figure 5 Integrated target-pathway network, a comprehensive network that visualizes the interactions of CP compounds’ targets with cellular pathways, which were categorized into seven modular systems (differentiated by color) using KEGG pathway annotation. Potential druggable targets are marked with small pink circles.Among the signaling pathways that were enriched (adjusted P value <0.05) in the “signal transduction” module (Figure 5), the most highly enriched pathway was PI3K/Akt signaling, followed by MAPK signaling, which is critically implicated in neuronal maturation and survival. The PI3K/Akt pathway retrieved from the KEGG pathway database illustrates a total of 12 targets that were targeted by the CP compounds (Figure 6). TrkB, the upstream signaling receptor of the PI3K/Akt pathway, binds its natural ligand, brain-derived neurotrophic factor (BDNF), and conveys neurotrophin signals to several downstream effectors such as Bcl-2 and Bax.
Based on this information, it was further verified by docking analysis whether the CP compounds could interact with TrkB.Figure 6 The PI3K-Akt pathway is a top enriched signaling pathway. CP targets are highlighted in red.Among the endocrine system-related pathways, insulin receptor signaling was the top overrepresented pathway. Insulin receptors (INSR) were highly connected with CP compounds, and their interaction was further verified by molecular docking. Several signaling pathways related to inflammation, including the TNF, HIF-1, and NF-κB pathways, were enriched (Figure 5). Since cyclooxygenases such as COX-1 and COX-2 (PTGS1 and PTGS2), which catalyze the production of inflammatory mediators, were targeted by CP compounds with the highest degree of connectivity (Figure 3), their interaction was further verified by docking simulation.In addition, nervous system-related pathways such as the neurotrophin signaling pathway, cholinergic synapse, dopaminergic synapse, serotonergic synapse, and long-term potentiation were enriched (Figure 5). Any abnormality in these pathways disrupts brain function, leading to the onset of NDD and related pathology. Notably, acetylcholinesterase (ACHE) has clinical significance in cholinergic deficits, and therefore its binding and interaction with CP compounds were further verified with docking analysis. A number of immune system-related pathways, namely, the Toll-like receptor, T cell and B cell receptor, chemokine, and NOD-like receptor signaling pathways, were also highlighted in the network (Figure 5).An AD pathway (Figure 7) was retrieved from the KEGG pathway database, illustrating a total of 13 proteins, including those involved in amyloidogenesis (for example, APP and PSEN), cellular survival and growth (for example, INSR, Akt, and Erk1/2), and inflammation (for example, iNOS, COX2, IKK, TNF, IL-1, and IL-6), which are potential targets of CP compounds as appeared in network pharmacology.
Considering the appearance of INSR and COX2 in network pharmacology and in AD pathobiology, their interactions with the selected CP compounds were further verified by docking simulation. In addition, monoamine oxidases (MAOA and MAOB) are potential targets for both AD and PD, and thus their interactions with the selected CP compounds were also further verified.Figure 7 KEGG pathway of Alzheimer’s disease. Targets of CP compounds are marked with an asterisk (∗). Of these, β-secretase and GSK-3β are the potential druggable targets for AD therapy. ## 3.6. In Silico Analysis We employed molecular docking analysis to validate the interaction patterns and the efficiency of CP phytochemicals with some of the vital target proteins that showed a higher degree of connectivity in network pharmacology. Accordingly, we selected PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX, ACHE, PPARG, MAOA, and MAOB for further analysis. Additionally, we included TrkB in the docking analysis since several downstream effectors of TrkB receptor signaling, including PI3K, AKT1, BAX, and BCL2, showed a higher degree of connectivity in the network (Figure 2), and TrkB is a potential receptor for neuronal growth and survival.In protein-ligand docking analysis, a docking score below zero indicates binding affinity of the ligand toward the receptor. However, molecular docking typically uses approximate scoring functions to estimate binding energies, which do not always correlate with experimental values [47, 48]. We therefore used MM-GBSA binding energy calculations, which use an implicit continuum solvent approximation, to compute the free energy of binding of each complex [49].
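Once per-complex MM-GBSA energies are tabulated, finding the strongest binder is a simple sort over ΔG values (more negative means stronger predicted binding). The sketch below ranks targets by scopoletin's MM-GBSA energies as reported in Figure 8; it is an illustrative post-processing step, not part of the Schrödinger workflow itself:

```python
# Scopoletin's MM-GBSA binding energies (kcal/mol) as reported in Figure 8;
# more negative means stronger predicted binding.
scopoletin_dg = {
    "NOS3": -34.98, "PTGS1": -36.28, "NR1I3": -56.01, "NR1I2": -39.13,
    "ACHE": -43.13, "MAOA": -51.18, "TRKB": -34.67,
}

# Rank targets from strongest (most negative dG) to weakest predicted binding.
ranked = sorted(scopoletin_dg, key=scopoletin_dg.get)
print(ranked[0])  # strongest predicted scopoletin binder: NR1I3
```

The same one-line sort, applied per target across all five compounds, reproduces the "best compound per target" assignments discussed below.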
A total of five compounds, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, were subjected to molecular docking against the proteins of 12 target genes (PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX, ACHE, PPARG, MAOA, MAOB, and TRKB), and the resulting docked complexes were further subjected to MM-GBSA analysis. As shown in Figure 8, the quercetin-PTGS2 complex showed the most favorable binding energy of −46.27 kcal/mol, while for NOS3, scopoletin showed the strongest binding affinity, forming a stable complex with a binding energy of −34.98 kcal/mol. Interestingly, scopoletin also formed the most favorable complexes with PTGS1, NR1I3, NR1I2, ACHE, MAOA, and TRKB, with binding energies of −36.28, −56.01, −39.13, −43.13, −51.18, and −34.67 kcal/mol, respectively. On the other hand, when bound to INSR, MAOB, and PPARG, 4-hydroxycinnamic acid showed the most favorable binding energies of −21.46, −34.044, and −41.04 kcal/mol, respectively. For HMOX1, ayapanin showed a more favorable binding energy than the other compounds. The details of the molecular interactions of the top hits from the docking analysis are shown in Figure 8.Figure 8 Molecular docking analysis of target proteins and compounds. Heatmap representing the binding energies revealed from MM-GBSA analysis (a). Two-dimensional molecular interactions for the protein-ligand complexes TRKB-Scopoletin (b), PTGS2-Quercetin (c), NOS3-Scopoletin (d), PTGS1-Scopoletin (e), INSR-4-hydroxycinnamic acid (f), NR1I3-Scopoletin (g), NR1I2-Scopoletin (h), HMOX1-Ayapanin (i), AChE-Scopoletin (j), PPARG-4-Hydroxycinnamic acid (k), MAOA-Scopoletin (l), and MAOB-4-Hydroxycinnamic acid (m). ## 4. Discussion Traditional knowledge and experimental evidence suggest that C. pluricaulis, alone or in combination, can enhance memory and protect against cognitive impairment [3, 4, 6, 50]. However, the underlying mechanisms supporting these claims remain largely unexplored.
The present study, therefore, employed an integrated network pharmacology and in silico approach to provide in-depth insight into the neuropharmacological effects of CP phytochemicals and their protective potential against dementia. Virtual ADME screening identified a total of five active compounds from CP, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, showing drug-likeness and blood-brain barrier permeability. Growing evidence suggests neurorestorative and memory-protective potentials of these compounds. Quercetin, a natural polyphenol found in many plants, fruits, and vegetables, has been found effective in protecting neurons from various injuries and ameliorating cognitive deficits [51]. Quercetin can ameliorate Alzheimer’s disease pathology (such as β-amyloidosis, tauopathy, and astrogliosis and microgliosis in the hippocampus and the amygdala) and recover cognitive deficits in triple-transgenic Alzheimer’s disease model mice [52, 53]. Another study has shown that quercetin can ameliorate hippocampus-dependent learning and memory deficits in mice fed a high-fat diet by attenuating oxidative stress through activation of the antioxidant signaling system [54]. The flavonoid antioxidant kaempferol, also widely available in fruits and vegetables, shows neuroprotective effects and memory-promoting potential in experimental models of AD, PD, and other neurological diseases [55, 56]. Kaempferol can attenuate Aβ25-35-induced apoptosis of PC-12 cells via the ER/ERK/MAPK signaling pathway [57]. Other compounds, including scopoletin and 4-hydroxycinnamic acid, were also shown to be protective against neuronal damage and effective in ameliorating memory deficits [19, 58, 59]. 4-Hydroxycinnamic acid (p-coumaric acid) promotes hippocampal neurogenesis, improves cognitive functions, and reduces anxiety in post-ischemic stroke rats by activating the BDNF/TrkB/AKT signaling pathway [60].
Scopoletin shows neuroprotective effects by inhibiting MAO, Aβ aggregation, and lipid peroxidation [61]. Another study shows that scopoletin can attenuate intracerebral hemorrhage-induced brain injury and improve neurological performance in rats [62].The C-T-D network illustrates that the selected CP metabolites were linked to the target proteins of dementia-associated cellular pathways. GO analysis revealed several enriched biological processes, such as inflammatory response, response to drug, and aging, that are implicated in the pathobiology of NDD. Network pathway analysis also shows that CP metabolites target several markers of the top enriched pathways. PI3K/Akt signaling is at the top of the enriched pathways associated with the development, survival, and activity of neurons. This pathway has multiple downstream effector targets, including those associated with cell survival (Bcl-2, Bax, IKK, NF-κB, and p53). Bcl-2 is a prosurvival protein, whereas Bax is a proapoptotic protein. IKK, NF-κB, and p53 are involved in the inflammatory response [63, 64]. Other signaling pathways, particularly the MAPK pathway, in association with PI3K/Akt signaling, take part in the regulation of cell growth and survival.Several nervous system-associated pathways, namely, the neurotrophin signaling pathway, long-term potentiation, and the cholinergic, dopaminergic, and serotonergic synapses, were enriched, indicating that CP compounds may exert neuropharmacological effects by modulating these neuronal pathways. The neurotrophin signaling pathway maintains the growth, maintenance, and survival of neurons. In the aging or degenerating brain, there is inadequate neurotrophic support, causing neuronal death [65]. Neurotrophin mimetics, in particular BDNF mimetics, could therefore have clinical importance in the management of NDD [66]. Downstream of neurotrophin signaling is the PI3K/Akt pathway, which was highly enriched in this study, and CP compounds were found to target the genes involved.
As a BDNF mimetic, 7,8-dihydroxyflavone, a TrkB agonist, has shown neurotrophic activities [67] and has been found effective in ameliorating motor and cognitive deficits [68]. Docking analysis further indicates that scopoletin exhibited the highest binding affinity to TrkB, the receptor of the neurotrophin signaling pathway, and may act as a BDNF mimetic, taking part in neuronal growth and survival by modulating classical neurotrophin/PI3K/Akt signaling.In AD pathobiology, there is a cholinergic deficit due to dysfunction of the cholinergic synapse. Although symptomatic, acetylcholinesterase (AChE) inhibitors such as donepezil, rivastigmine, and galantamine are currently in use to compensate for memory deficits due to cholinergic dysfunction [69]. Molecular docking predicted that, except for kaempferol and quercetin, the other three compounds may interrupt AChE activity. The current data suggest that these CP compounds could be a promising alternative to existing AChE inhibitors for AD patients.Among the endocrine pathways, the dominant pathway is the insulin signaling pathway, which plays an essential role in ensuring neuronal survival and homeostasis and promoting synaptic plasticity, thereby supporting learning and memory function [70, 71]. Evidence shows that insulin signaling is impaired in degenerating brains [71]. Targeting impaired insulin signaling, therefore, constitutes a viable strategy against NDD. In docking analysis, 4-hydroxycinnamic acid showed the highest binding affinity with the insulin receptor (INSR), although in network pharmacology quercetin and kaempferol interact with this target.There was an enrichment of inflammation-related pathways, including the TNF, HIF-1, and NF-κB pathways, suggesting that anti-inflammatory effects mediated by CP compounds could play a pivotal role in preventing the inflammatory cascade during the pathobiological progression of NDD.
Cyclooxygenase enzymes, namely, COX-1 (PTGS1) and COX-2 (PTGS2), catalyze the biosynthesis of inflammatory mediators such as prostaglandins and thromboxane. In the brain, COX-2 is activated by excitatory synaptic activity in neurons and by inflammation in the glia. The COX-1/COX-2 pathway has pathogenic relevance in preclinical stages of Alzheimer’s disease development [72]. Pathological activation of COX-2 disrupts hippocampal synaptic function, leading to cognitive deficits [72]. Cyclooxygenase inhibitors, such as nonsteroidal anti-inflammatory drugs (NSAIDs), may have preventive effects against dementia [73]. Several COX-2 inhibitors such as celecoxib [74] and indomethacin [75] have shown promise in the management of AD. Docking results demonstrate that all CP compounds, including scopoletin and quercetin, exhibited substantial binding affinity to COX-2 and COX-1, suggesting their potential application in the development of antineuroinflammatory agents. Previous in silico reports on the interaction of COX-2 with quercetin and kaempferol also support our data [76].In addition to the above cellular pathways, CP compounds target some other pathways, namely, autophagy, mitophagy, apoptosis, necroptosis, and some specific molecular markers of the AD and PD pathways. Endothelial nitric oxide synthase, or eNOS (NOS3), is known for its outstanding role in regulating cerebral blood flow and is associated with synaptic plasticity such as long-term potentiation [77]. eNOS attenuates ischemic damage by regulating BDNF expression [78]. Nitric oxide produced by eNOS protects neurons from Tau pathology [79]. Another study reports that pharmacological activation of PI3K-eNOS signaling can ameliorate cognitive deficits in streptozotocin-induced rats [80]. Pharmacological interruption of eNOS activity results in an increase in inflammatory mediators, such as iNOS, in rat ischemic brains [81]. eNOS is, thereby, protective against inflammation and other pathologic stimuli.
Statins such as atorvastatin and simvastatin may contribute to the amelioration of brain tissue injury in the ischemic brain by activating eNOS [82]. Together, this evidence suggests that CP compounds that target eNOS may have pharmacological significance against NDD pathobiology.Other important targets are monoamine oxidases (MAOs), which catalyze the oxidative deamination of monoamines and contribute to the metabolism of dopamine, a neurotransmitter of dopaminergic neurons. Drugs that inhibit MAO, particularly MAOB, such as selegiline and rasagiline, are currently in clinical use in patients with PD [83–85]. Docking findings demonstrate that CP compounds, particularly 4-hydroxycinnamic acid and scopoletin, showed higher binding affinity to MAOs, suggesting their prospects as MAO inhibitors in PD management.Heme oxygenase-1, or HO-1 (HMOX1), is a stress-sensitive enzyme that catalyzes the breakdown of heme into iron, carbon monoxide, and biliverdin/bilirubin and is involved in the pathobiology of AD and other brain disorders. Astroglial induction of HMOX1 by β-amyloid and cytokines leads to mitochondrial iron sequestration and may thereby contribute to pathological iron deposition and bioenergy failure [86]. Pharmacological intervention in glial HO-1 activity may provide neuroprotection in AD by limiting iron-mediated neurotoxicity [86]. All CP compounds except kaempferol exhibit higher binding affinity to HO-1 and, thereby, may be neuroprotective through regulating HO-1 activity.Peroxisome proliferator-activated receptor-gamma, or PPARγ (PPARG), a ligand-activated nuclear transcription factor, regulates the expression of multiple genes that encode proteins involved in the regulation of lipid metabolism, improvement of insulin sensitivity, and inhibition of inflammation [87]. PPARγ agonists counteract oxidative stress and neuroinflammation and promote Aβ clearance [70, 88].
PPARγ agonists such as fenofibrate, icariin, and naringenin are known to be neuroprotective, supporting neuronal development and synaptic plasticity and ameliorating cognitive deficits [70, 89, 90]. In docking analysis, 4-hydroxycinnamic acid and scopoletin showed the highest binding affinity to PPARγ, suggesting that these compounds can ameliorate cognitive deficits through activating PPARγ signaling. ## 5. Conclusion The in silico analysis predicts that the CP metabolites, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, are the major bioactive leads that showed interaction with various molecular targets and cellular pathways crucial to neuronal growth, survival, and activity. The signaling pathways that CP compounds primarily target include the PI3K/Akt signaling pathway, the neurotrophin signaling pathway, and the insulin signaling pathway. In addition, top targets of CP compounds, including PTGS1, PTGS2, NOS3, INSR, HMOX1, ACHE, PPARG, MAOA, MAOB, and TRKB, may be potential druggable targets for future drug design to address dementia disorders. Together with the previous reports, the combined network pharmacology and in silico observations form a scientific basis that supports the ethnomedical application of CP for memory enhancement and against aging-related/pathological cognitive deficits. However, further investigation of the memory-enhancing and neuroprotective effects of CP and its metabolites is essential to extrapolate the findings from preclinical and in silico models to clinical subjects. --- *Source: 1015310-2022-10-03.xml*
# Protective Mechanisms of Nootropic Herb Shankhpushpi (Convolvulus pluricaulis) against Dementia: Network Pharmacology and Computational Approach

**Authors:** Md. Abdul Hannan; Armin Sultana; Md. Hasanur Rahman; Abdullah Al Mamun Sohag; Raju Dash; Md Jamal Uddin; Muhammad Jahangir Hossen; Il Soo Moon

**Journal:** Evidence-Based Complementary and Alternative Medicine (2022)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1015310
--- ## Abstract Convolvulus pluricaulis (CP), a Medhya Rasayana (nootropic) herb, is a major ingredient in Ayurvedic and Traditional Chinese formulae indicated for neurological conditions, namely, dementia, anxiety, depression, insanity, and epilepsy. Experimental evidence suggests various neuroactive potentials of CP, such as memory-enhancing, neuroprotective, and antiepileptic effects. However, the precise mechanisms underlying the neuropharmacological effects of CP remain unclear. The study, therefore, aimed at deciphering the molecular basis of the neuroprotective effects of CP phytochemicals against the pathology of dementia disorders such as Alzheimer’s (AD) and Parkinson’s (PD) disease. The study exploited bioinformatics tools and resources, such as Cytoscape, DAVID (Database for Annotation, Visualization, and Integrated Discovery), NetworkAnalyst, and the KEGG (Kyoto Encyclopedia of Genes and Genomes) database, to investigate the interaction between CP compounds and molecular targets. An in silico analysis was also employed to screen druglike compounds and validate some selective interactions. ADME (absorption, distribution, metabolism, and excretion) analysis predicted a total of five druglike phytochemicals from the CP constituents, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin. In network analysis, these compounds were found to interact with molecular targets such as prostaglandin G/H synthase 1 and 2 (PTGS1 and PTGS2), endothelial nitric oxide synthase (NOS3), insulin receptor (INSR), heme oxygenase 1 (HMOX1), acetylcholinesterase (ACHE), peroxisome proliferator-activated receptor-gamma (PPARG), and monoamine oxidase A and B (MAOA and MAOB) that are associated with neuronal growth, survival, and activity. Docking simulation further confirmed the interaction patterns and binding affinities of selected CP compounds with those molecular targets.
Notably, scopoletin showed the highest binding affinity with PTGS1, NOS3, PPARG, ACHE, MAOA, MAOB, and TRKB; quercetin with PTGS2; 4-hydroxycinnamic acid with INSR; and ayapanin with HMOX1. The findings indicate that scopoletin, kaempferol, quercetin, 4-hydroxycinnamic acid, and ayapanin are the main active constituents of CP, which might account for its memory-enhancement and neuroprotective effects, and that target proteins such as PTGS1, PTGS2, NOS3, PPARG, ACHE, MAOA, MAOB, INSR, HMOX1, and TRKB could be druggable targets against dementia. --- ## Body ## 1. Introduction Dementia is a leading cause of disability and dependency among the elderly. Dementia patients may have difficulty remembering, thinking critically, behaving normally, and even performing normal daily activities. Neurodegenerative diseases (NDD) such as Alzheimer’s (AD) and Parkinson’s (PD) disease account for 60–80% of all dementia cases. The pathobiology of NDD is still unclear; however, pathogenic events such as oxidative stress, inflammation, apoptosis, and mitochondrial dysfunction play a critical role in the onset and progression of NDD [1]. Targeting cellular pathways that are associated with these pathological phenomena constitutes a prospective therapeutic strategy in the management of NDD. Given its complex pathobiology, NDD can be adequately treated through a multitarget/multidrug therapeutic protocol [2]. With diverse phytochemical profiles, medicinal herbs are native multidrug formulations and are utilized in many traditional therapies with no or minimal side effects [2].Convolvulus pluricaulis Choisy (synonym Convolvulus prostratus Forssk.; family Convolvulaceae) is a perennial herb native to the Indian subcontinent. Commonly termed Shankhpushpi in Ayurveda, C. pluricaulis (CP) has been indicated for various human ailments, including those affecting the central nervous system, namely, anxiety, depression, epilepsy, and dementia [3, 4].
The pharmacological attributes underlying the health benefits of CP include anti-inflammatory, antioxidant, and immunomodulatory properties [5]. CP has been endowed with several potential phytochemicals, namely, flavonoids (kaempferol and quercetin), coumarins (scopoletin and ayapanin), phenolic acids (hydroxycinnamic acid), and phytosterols (β-sitosterol), that are related to its pharmacological effects [6].A growing body of preclinical evidence has emerged supporting the ethnopharmacological uses of CP for neurological problems [7]. In healthy rats, CP extract can promote memory capacity by modulating synaptic plasticity in the hippocampus [8]. The nootropic effect of CP was also confirmed by other studies [9, 10]. In various experimental models, CP can protect against neuronal injury and ameliorate memory deficits [11–15]. CP treatment prevented the protein and mRNA expression of tau and amyloid precursor protein (APP) in scopolamine-induced rat brain [16]. In a Drosophila model of AD, CP can rescue neurons from tau-induced neurotoxicity by attenuating oxidative stress and restoring the depleted AChE activity [17]. Scopoletin, a coumarin of CP, attenuated oxidative stress-mediated loss of dopaminergic neurons and increased the efficacy of dopamine in a PD model [18]. Scopoletin also ameliorated amnesia in scopolamine-induced animals [19]. In a rat model of cerebral ischemia-reperfusion injury, CP improved brain pathology through an antioxidant mechanism [20]. A polyherbal formulation containing CP can improve streptozotocin-induced memory deficits in rats by downregulating the mRNA expression of mitochondria-targeted cytochromes [21]. CP also improved the disease outcomes of diabetes, which is often complicated by cognitive deficits [22]. In addition, CP improved anxiety, depression, and epileptic seizure [9, 23–27].
CP can also help withstand stress conditions in experimental animals [28, 29].The neuropharmacological effects highlighted above are mostly cumulative effects of CP phytochemicals. The existing literature, however, can hardly explain the precise mechanisms that underlie the neuroactive functions of CP. Understanding the underlying molecular mechanisms through an experimental approach requires intensive endeavors. Alternatively, network pharmacology is a promising bioinformatics tool that can predict the active phytochemicals and the molecular targets that are associated with the pharmacological actions of plant extracts [30, 31]. The results obtained from network pharmacology could guide more precise in vivo research. In this study, a network pharmacology and docking approach was used to explore the pharmacological mechanisms of CP phytochemicals against dementia disorders. The present study also provides evidence that helps explain the mechanisms underlying the reputed memory-enhancing capacity of CP and offers valuable insights to advance future research and encourage the use of CP and its metabolites in the management of dementia disorders. ## 2. Materials and Methods ### 2.1. Retrieval of Compounds’ Information CP compounds were collected from the Traditional Chinese Medicine Systems Pharmacology (TCMSP) database [32]. We also verified the compounds’ information through the PubMed database. The chemical information of CP compounds was obtained from the PubChem and ChEMBL databases. ### 2.2. Compound Screening The drug-likeness of CP compounds was predicted by QikProp (Schrödinger Release 2019–3: QikProp, Schrödinger, LLC, New York, NY, 2019). The screening was carried out based on #stars (0–5), which indicates the number of properties that fall outside the 95% range of similar values for known drugs. A compound with fewer stars is more druglike than one with many stars. ### 2.3.
Target Retrieval Target information for each compound was retrieved from the TCMSP database [32]. The protein data, namely, standard protein name, gene ID, and organism, were verified through UniProt (http://www.uniprot.org/) [33]. ### 2.4. Network Construction First, the individual lists of AD-, PD-, and dementia-related genes were retrieved from the DisGeNET database v6.0 [34]. Targets associated with AD, PD, and dementia are those that were common to the compounds’ targets. The overlapping targets among the lists of targets related to CP compounds, AD, PD, and dementia were obtained with the Venny 2.1.0 online software (https://bioinfogp.cnb.csic.es/tools/venny/index.html). An interaction network among compounds, targets, and diseases was established with Cytoscape v3.8.2 [35]. The nodes and edges in the network represent molecules (compounds and targets) and intermolecular interactions (compound-target interactions), respectively. ### 2.5. Gene Ontology (GO) Analysis Functional enrichment analysis of Gene Ontology (GO) terms for biological process, molecular function, and cellular components was carried out using the DAVID 6.8 Gene Functional Classification Tool [36] (https://david.ncifcrf.gov/home.jsp). GO terms with a P value of <0.01 were considered significant. Target proteins were categorized by the Panther classification system [37] (http://pantherdb.org/). ### 2.6. Network Pathway Analysis A protein-protein interaction network was constructed by NetworkAnalyst [38] (https://www.networkanalyst.ca/). An interactive network connecting molecular targets and associated cellular pathways was also constructed by NetworkAnalyst. Signaling and disease pathways highlighting the targets of CP compounds were retrieved through the KEGG pathway mapper [39] (https://www.genome.jp/kegg/tool/map_pathway2.html). ### 2.7. Molecular Docking and Binding Energy Analysis #### 2.7.1.
Preparation of Ligand

For virtual screening, the 2D structures of five compounds in SDF format were retrieved from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/) and then prepared with the ligand preparation workflow in Schrödinger 2017–1 using the OPLS-3 force field [40]. Before minimization, the ionization state of each compound was fixed at pH 7.0 ± 2.0 with the Epik 2.2 tool [41, 42]. During this process, a maximum of 32 possible stereoisomers was generated per compound, from which only the lowest-energy conformer was retained for subsequent analysis.

#### 2.7.2. Prediction of Molecular Docking between Active Compound and Target Protein

The target proteins were downloaded from the Protein Data Bank (https://www.rcsb.org/, Supplementary Table S1) and were prepared and refined with the Protein Preparation Wizard (Schrödinger 2017–1), in which bond orders, charges, and proper hydrogens were assigned to the crystal structure. The protein structure was also optimized at neutral pH, and all nonessential water molecules were removed. A grid box was generated automatically for Glide XP docking. Ligands and receptors were then docked with the ligand docking module in Maestro.

#### 2.7.3. Prime MM-GBSA Analysis

Binding free energy calculation is commonly applied to determine the total energy change upon binding of a ligand to a protein [43]. The protein-ligand pose viewer file was used as input. In the MM-GBSA (molecular mechanics with generalized Born and surface area solvation) analysis, the binding free energy was calculated with the OPLS_3 force field for the molecular mechanics energy (EMM); the SGB solvation model for the polar solvation term (GSGB); and van der Waals interactions together with the nonpolar solvent-accessible surface area for the nonpolar solvation term (GNP) [44]. The dielectric solvent model VSGB 2.0 was used to predict the directionality of hydrogen-bond and π-stacking interactions [43].
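Conceptually, the Prime MM-GBSA decomposition described above reduces to a simple sum and difference of energy terms. A minimal sketch, with purely illustrative component values (in kcal/mol) that are not the study's actual outputs:

```python
# Sketch of the MM-GBSA binding free energy calculation (see Section 2.7.4).
# All numeric component values below are hypothetical, for illustration only.

def mmgbsa_energy(e_mm: float, g_sgb: float, g_np: float) -> float:
    """Total energy of one species: G = EMM + GSGB + GNP."""
    return e_mm + g_sgb + g_np

def delta_g_bind(g_complex: float, g_protein: float, g_ligand: float) -> float:
    """Binding free energy: dG_bind = G_complex - (G_protein + G_ligand)."""
    return g_complex - (g_protein + g_ligand)

g_complex = mmgbsa_energy(-1200.0, -350.0, 45.0)  # hypothetical complex energy
g_protein = mmgbsa_energy(-1100.0, -330.0, 48.0)  # hypothetical apo-protein energy
g_ligand = mmgbsa_energy(-60.0, -15.0, 6.0)       # hypothetical free-ligand energy

# A more negative result indicates stronger predicted binding.
print(delta_g_bind(g_complex, g_protein, g_ligand))  # prints -54.0
```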
A more negative binding score denotes stronger binding.

#### 2.7.4. The Total Binding Free Energy

(1) ΔGbind = Gcomplex − (Gprotein + Gligand), where G = EMM + GSGB + GNP.

The flowchart of the integrated network pharmacology and in silico approach employed in this study is illustrated in Figure 1.

Figure 1: An outline of the network pharmacology-based deciphering of the neuropharmacological mechanisms of C. pluricaulis compounds.
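The Venny-style target-overlap step described in Section 2.4 amounts to set intersections. A minimal sketch with hypothetical gene lists (not the actual TCMSP or DisGeNET data):

```python
# Overlap of compound targets with disease gene lists, as done with Venny.
# All gene sets below are illustrative placeholders, not the study's data.

cp_targets = {"PTGS1", "PTGS2", "ACHE", "MAOA", "MAOB", "INSR", "ESR1"}
ad_genes = {"PTGS2", "ACHE", "INSR", "APP", "PSEN1"}
pd_genes = {"MAOA", "MAOB", "PTGS2", "SNCA"}
dementia_genes = {"ACHE", "PTGS2", "MAOB", "APOE"}

# Targets shared between the compounds and one disease gene list.
ad_targets = cp_targets & ad_genes
# Targets common to the compounds and all three disease gene lists.
common_all = cp_targets & ad_genes & pd_genes & dementia_genes

print(sorted(ad_targets))   # prints ['ACHE', 'INSR', 'PTGS2']
print(sorted(common_all))   # prints ['PTGS2']
```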
## 3. Results

### 3.1. ADME Screening

Twelve phytochemicals belonging to CP were retrieved from the TCMSP database. ADME screening yielded 11 compounds with a #stars score ≤5 (Supplementary Table S2). Of these, six compounds lacking biological targets were omitted. Finally, five were chosen for further bioinformatic analysis, as displayed in Table 1. Most of the compounds are considered druglike and are likely to be orally available, as they largely obeyed Lipinski's rule of five [45] (mol_MW < 500, QPlogPo/w < 5, donorHB ≤ 5, accptHB ≤ 10) and Jorgensen's rule of three [46] (QPlogS > −5.7, QPPCaco > 22 nm/s, # Primary Metabolites < 7), respectively. Moreover, all compounds fall within the recommended range (−3.0 to 1.2) of the predicted brain/blood partition coefficient (QPlogBB) (Supplementary Table S2).

Table 1: Druglike compounds of C. pluricaulis as screened by the QikProp ADME prediction tool (structures omitted).

| Compound name | Chemical nature | #stars^a | Rule of five^b | Rule of three^c |
| --- | --- | --- | --- | --- |
| Scopoletin | Coumarin | 0 | 0 | 0 |
| Hydroxycinnamic acid | Carboxylic acid | 0 | 0 | 0 |
| Kaempferol | Flavonoid | 0 | 0 | 0 |
| Quercetin | Flavonoid | 0 | 0 | 1 |
| Ayapanin | Coumarin | 1 | 0 | 0 |

^a #stars indicates the number of property or descriptor values that fall outside the 95% range of similar values for known drugs (ranging from 0 to 5). A large number of stars suggests that a molecule is less druglike than molecules with few stars. The following properties and descriptors are included in the determination of #stars: MW, donorHB, accptHB, QPlogPw, QPlogPo/w, QPlogS, QPlogKhsa, QPlogBB, and #metabol.
^b Rule of five indicates the number of violations of Lipinski's rule of five [3]. The rules are: mol_MW < 500, QPlogPo/w < 5, donorHB ≤ 5, accptHB ≤ 10. Compounds that satisfy these rules are considered druglike (maximum is 4). ^c Rule of three indicates the number of violations of Jorgensen's rule of three. The three rules are: QPlogS > −5.7, QPPCaco > 22 nm/s, # Primary Metabolites < 7. Compounds with fewer (and preferably no) violations of these rules are more likely to be orally available (maximum is 3).

### 3.2. Target Fishing

A total of 174 possible targets of the five compounds were obtained from the TCMSP database (Supplementary Table S3) and validated by a literature scan in the PubMed database. Of these, 117, 109, and 51 targets were found to be associated with AD, PD, and dementia, respectively, after comparison with the DisGeNET database (Supplementary Table S4).

### 3.3. Network Building

The compound-target-disease (C-T-D) network established with Cytoscape could explain the multitarget effects of CP, which is used to treat brain disorders associated with cognitive deficits. The C-T-D network represents the interactions of CP compounds with the targets that are linked with AD, PD, and dementia (Figure 2). Focusing on the degree of connectivity, we assume that quercetin (degree 144) and kaempferol (degree 58) could potentially contribute to the management of cognitive disorders. Of the targets, PTGS1 and PTGS2 (each with degree 5) had the highest degree of connectivity with the compounds, followed by NOS3, INSR, NR1I3, NR1I2, HMOX1, ACHE, PPARG, MAOA, and MAOB (each with degree ≥3), suggesting these gene products as prospective drug targets of CP compounds in dementia management. The protein-protein interaction network illustrates the target proteins, some of which are direct targets of CP compounds while others are interacting proteins (Supplementary Figure S1).

Figure 2: Network analysis.
(a) Overlapping target genes among CP compounds, AD, PD, and dementia. (b) The compound-target-disease (C-T-D) network shows the interactions among CP compounds, targets, and dementia disorders. Hexagonal nodes represent CP compounds, whereas oval nodes represent their targets. Node size is proportional to degree. The nodes of the first tier represent the targets with a higher degree of interaction with the compounds.

### 3.4. GO Analysis

GO analysis was carried out only with the disease-associated genes (a total of 45) common to AD, PD, and dementia, as retrieved with the Venny 2.1.0 online software (Figure 3). The top 15 highly enriched GO terms under biological process (BP), molecular function (MF), and cellular component (CC) (P < 0.05; P values were adjusted using the Benjamini–Hochberg procedure) are shown in Figure 4(a). The top biological processes, including inflammatory response, response to drug, and aging, have been linked to the pathophysiology of the disease, suggesting that CP and its metabolites may interfere with AD progression by modulating these biological processes. Moreover, the functional classification of the target proteins reflects their diversity in biological function (Figure 4(b)).

Figure 3: Venn diagram. Overlapping target genes among CP compounds, AD, PD, and dementia.

Figure 4: Bioinformatics analysis of overlapping target genes. (a) Gene Ontology (GO) analysis: the top 15 GO terms for biological process, molecular function, and cellular component are displayed, where the x-axis represents GO terms for the target genes and the y-axis shows target counts. The number at the tip of each bar represents the corresponding target count. Cutoff: P < 0.001 and FDR < 0.001. (b) The Panther classification categorized the target proteins into nine classes. The figures next to each group in the pie chart indicate the number and percentage of proteins in the given functional class.

### 3.5.
Analysis of Cellular Pathways and Targets Involved in the Pathobiology of Dementia Disorders

An interactive network illustrates the top cellular pathways involving targets of CP compounds (Figure 5). Cellular pathways were grouped into various modular systems according to KEGG pathway annotation.

Figure 5: Integrated target-pathway network, a comprehensive network that visualizes the interactions of the targets of CP compounds with cellular pathways, which were categorized into seven modular systems (differentiated by color) using KEGG pathway annotation. Potential druggable targets are marked with small pink circles.

Among the signaling pathways that were enriched (adjusted P value <0.05) in the "signal transduction" module (Figure 5), the most highly enriched pathway was PI3K/Akt signaling, followed by MAPK signaling, which is critically implicated in neuronal maturation and survival. The PI3K/Akt pathway retrieved from the KEGG pathway database illustrates a total of 12 targets that were targeted by the CP compounds (Figure 6). The upstream signaling receptor of the PI3K/Akt pathway is TrkB, which, when bound by its natural ligand, brain-derived neurotrophic factor (BDNF), conveys neurotrophin signals to several downstream effectors such as Bcl-2 and Bax. Based on this information, whether the CP compounds could interact with TrkB was further verified by docking analysis.

Figure 6: The PI3K-Akt pathway is a top enriched signaling pathway. CP targets are highlighted in red.

Among the endocrine system-related pathways, insulin receptor signaling was the top overrepresented pathway. The insulin receptor (INSR) was highly connected to CP compounds, and their interaction was further verified by molecular docking. Several signaling pathways related to inflammation, including the TNF, HIF-1, and NF-κB pathways, were enriched (Figure 5).
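The over-representation statistic behind enrichment P values of this kind is typically a hypergeometric upper-tail test; the actual DAVID/NetworkAnalyst computation and multiple-testing correction are more involved. A minimal sketch with illustrative counts:

```python
# Hypergeometric upper-tail test for pathway over-representation (sketch).
# Gene counts below are illustrative, not the study's actual numbers.
from math import comb

def hypergeom_pval(N: int, K: int, n: int, k: int) -> float:
    """P(X >= k) when drawing n genes from N total, K of which are in the pathway."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# e.g. 45 overlapping targets drawn from a ~20000-gene background,
# a 350-gene pathway, and 12 of the 45 targets landing in that pathway:
p = hypergeom_pval(20000, 350, 45, 12)
print(f"{p:.3g}")  # a very small P value -> the pathway is enriched
```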
Since the cyclooxygenases COX-1 and COX-2 (PTGS1 and PTGS2), which catalyze the production of inflammatory mediators, were targeted by CP compounds with the highest degree of connectivity (Figure 3), their interactions were further verified by docking simulation. In addition, nervous system-related pathways such as the neurotrophin signaling pathway, cholinergic synapse, dopaminergic synapse, serotonergic synapse, and long-term potentiation were enriched (Figure 5). Any abnormality in these pathways disrupts brain function, leading to the onset of NDD and related pathology. Notably, acetylcholinesterase (ACHE) has clinical significance in cholinergic deficits, and therefore its binding and interaction with CP compounds were further verified by docking analysis. A number of immune system-related pathways, namely the Toll-like receptor, T cell and B cell receptor, chemokine, and NOD-like receptor signaling pathways, were also highlighted in the network (Figure 5).

An AD pathway (Figure 7) was retrieved from the KEGG pathway database, illustrating a total of 13 proteins, including those involved in amyloidogenesis (for example, APP and PSEN), cellular survival and growth (for example, INSR, Akt, and Erk1/2), and inflammation (for example, iNOS, COX2, IKK, TNF, IL-1, and IL-6), which are potential targets of CP compounds as identified by network pharmacology. Considering the appearance of INSR and COX2 in both the network pharmacology analysis and AD pathobiology, their interactions with the selected CP compounds were further verified by docking simulation. In addition, the monoamine oxidases (MAOA and MAOB) are potential targets for both AD and PD, and thus their interactions with the selected CP compounds were also further verified.

Figure 7: KEGG pathway of Alzheimer's disease. Targets of CP compounds are marked with an asterisk (∗). Of these, β-secretase and GSK-3β are potential druggable targets for AD therapy.
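The degree-based ranking used to prioritize targets in the C-T-D network amounts to counting edges per target. A minimal sketch; the edge list is a small hypothetical excerpt, not the full Cytoscape network:

```python
# Degree-based target ranking for a compound-target network (sketch).
# The edge list is a hypothetical excerpt for illustration only.
from collections import Counter

edges = [  # (compound, target) interactions
    ("quercetin", "PTGS2"), ("quercetin", "PTGS1"), ("quercetin", "ACHE"),
    ("kaempferol", "PTGS2"), ("kaempferol", "MAOA"),
    ("scopoletin", "PTGS2"), ("scopoletin", "ACHE"),
]

# Count how many compounds connect to each target (the target's degree).
target_degree = Counter(t for _, t in edges)

# The most-connected targets are the candidate drug targets.
for target, degree in target_degree.most_common():
    print(target, degree)
```

In this excerpt PTGS2 has degree 3 and ranks first, mirroring how PTGS1/PTGS2 surfaced as top targets in the full network.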
### 3.6. In Silico Analysis

We employed molecular docking analysis to validate the interaction patterns and efficiency of CP phytochemicals with some of the vital target proteins that showed a high degree of connectivity in the network pharmacology analysis. Accordingly, we selected PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX, ACHE, PPARG, MAOA, and MAOB for further analysis. Additionally, we included TrkB in the docking analysis, since several downstream effectors of TrkB receptor signaling, including PI3K, AKT1, BAX, and BCL2, showed a high degree of connectivity in the network (Figure 2), and TrkB is a key receptor for neuronal growth and survival.

In protein-ligand docking analysis, a docking score below zero indicates binding affinity of the ligand toward the receptor. However, molecular docking usually relies on approximate scoring functions to calculate binding energies, which do not correlate well with experimental values [47, 48]. We therefore used MM-GBSA binding energy calculation, which uses an implicit continuum solvent approximation, to compute the free energy of binding of each complex [49]. A total of five compounds, namely scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, were docked to the proteins corresponding to 12 target genes (PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX, ACHE, PPARG, MAOA, MAOB, and TRKB), and the resulting docked complexes were further subjected to MM-GBSA analysis. As shown in Figure 8, the quercetin-PTGS2 complex exhibited the strongest binding, with a binding energy of −46.27 kcal/mol, while for NOS3, scopoletin showed the greatest binding affinity and formed a stable complex with a binding energy of −34.98 kcal/mol. Interestingly, scopoletin also formed the most stable complexes with PTGS1, NR1I3, NR1I2, ACHE, MAOA, and TRKB, with binding energies of −36.28, −56.01, −39.13, −43.13, −51.18, and −34.67 kcal/mol, respectively.
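Reading off the top hit per target from an MM-GBSA score matrix of this kind amounts to taking the most negative ΔG per target. A sketch with two values quoted from the text and two hypothetical fillers:

```python
# Selecting the best-binding compound per target from MM-GBSA scores (sketch).
# Two scores are quoted from the text; those marked "filler" are hypothetical.

scores = {  # (target, compound) -> MM-GBSA dG_bind in kcal/mol
    ("PTGS2", "quercetin"): -46.27,
    ("PTGS2", "scopoletin"): -36.5,   # hypothetical filler value
    ("NR1I3", "scopoletin"): -56.01,
    ("NR1I3", "quercetin"): -40.0,    # hypothetical filler value
}

# Keep, for each target, the compound with the most negative binding energy.
best: dict[str, tuple[str, float]] = {}
for (target, compound), dg in scores.items():
    if target not in best or dg < best[target][1]:
        best[target] = (compound, dg)

print(best["PTGS2"])  # prints ('quercetin', -46.27)
print(best["NR1I3"])  # prints ('scopoletin', -56.01)
```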
When bound to INSR, MAOB, and PPARG, 4-hydroxycinnamic acid showed the strongest binding, with energies of −21.46, −34.044, and −41.04 kcal/mol, respectively. For HMOX1, ayapanin showed stronger binding than the other compounds. The details of the molecular interactions of the top hits from the docking analysis are shown in Figure 8.

Figure 8: Molecular docking analysis of target proteins and compounds. (a) Heatmap representing the binding energies from the MM-GBSA analysis. Two-dimensional molecular interactions of the protein-ligand complexes: (b) TRKB-scopoletin, (c) PTGS2-quercetin, (d) NOS3-scopoletin, (e) PTGS1-scopoletin, (f) INSR-4-hydroxycinnamic acid, (g) NR1I3-scopoletin, (h) NR1I2-scopoletin, (i) HMOX1-ayapanin, (j) AChE-scopoletin, (k) PPARG-4-hydroxycinnamic acid, (l) MAOA-scopoletin, and (m) MAOB-4-hydroxycinnamic acid.
A total of five compounds, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, were subjected to molecular docking against the corresponding proteins of 12 target genes (PTGS2, NOS3, PTGS1, INSR, NR1I3, NR1I2, HMOX, ACHE, PPARG, MAOA, MAOB, and TRKB), and the resulting docked complexes were further subjected to MM-GBSA analysis. As shown in Figure 8, the quercetin-PTGS2 complex showed the most favorable binding energy of −46.27 kcal/mol, while for NOS3, scopoletin showed the highest binding affinity and formed a stable complex with a binding energy of −34.98 kcal/mol. Interestingly, scopoletin also formed the most favorable complexes with PTGS1, NR1I3, NR1I2, ACHE, MAOA, and TRKB, with binding energies of −36.28, −56.01, −39.13, −43.13, −51.18, and −34.67 kcal/mol, respectively. On the other hand, when bound to INSR, MAOB, and PPARG, 4-hydroxycinnamic acid showed the most favorable binding energies of −21.46, −34.044, and −41.04 kcal/mol, respectively. For HMOX1, ayapanin showed a more favorable binding energy than the other compounds. The details of the molecular interactions of the top hits from the docking analysis are shown in Figure 8.

Figure 8 Molecular docking analysis of target proteins and compounds. Heatmap representing the binding energies revealed by MM-GBSA analysis (a). Two-dimensional molecular interactions for the protein-ligand complexes TRKB-scopoletin (b), PTGS2-quercetin (c), NOS3-scopoletin (d), PTGS1-scopoletin (e), INSR-4-hydroxycinnamic acid (f), NR1I3-scopoletin (g), NR1I2-scopoletin (h), HMOX1-ayapanin (i), AChE-scopoletin (j), PPARG-4-hydroxycinnamic acid (k), MAOA-scopoletin (l), and MAOB-4-hydroxycinnamic acid (m).

## 4. Discussion

Traditional knowledge and experimental evidence suggest that C. pluricaulis, alone or in combination, can enhance memory and protect against cognitive impairment [3, 4, 6, 50]. However, the underlying mechanisms supporting these claims remain largely unexplored.
The present study, therefore, employed integrated network pharmacology and in silico approaches to provide in-depth insight into the neuropharmacological effects of CP phytochemicals and their protective potential against dementia. Virtual ADME screening identified a total of five active compounds from CP, namely, scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin, showing drug-likeness and blood-brain barrier permeability. Growing evidence suggests neurorestorative and memory-protective potential for these compounds. Quercetin, a natural polyphenol found in many plants, fruits, and vegetables, has been found effective in protecting neurons from various injuries and ameliorating cognitive deficits [51]. Quercetin can ameliorate Alzheimer’s disease pathology (such as β-amyloidosis, tauopathy, astrogliosis, and microgliosis in the hippocampus and the amygdala) and recover cognitive deficits in triple transgenic Alzheimer’s disease model mice [52, 53]. Another study has shown that quercetin can ameliorate hippocampus-dependent learning and memory deficits in mice fed a high-fat diet by attenuating oxidative stress through activation of the antioxidant signaling system [54]. The flavonoid antioxidant kaempferol, likewise abundant in fruits and vegetables, shows neuroprotective effects and memory-promoting potential in experimental models of AD, PD, and other neurological diseases [55, 56]. Kaempferol can attenuate Aβ25-35-induced apoptosis of PC-12 cells via the ER/ERK/MAPK signaling pathway [57]. Other compounds, including scopoletin and 4-hydroxycinnamic acid, have also been shown to protect against neuronal damage and to ameliorate memory deficits [19, 58, 59]. 4-Hydroxycinnamic acid (p-coumaric acid) promotes hippocampal neurogenesis, improves cognitive functions, and reduces anxiety in post-ischemic stroke rats by activating the BDNF/TrkB/AKT signaling pathway [60].
Scopoletin shows neuroprotective effects by inhibiting MAO, Aβ aggregation, and lipid peroxidation [61]. Another study shows that scopoletin can attenuate intracerebral hemorrhage-induced brain injury and improve neurological performance in rats [62].

The C-T-D network illustrates that the selected CP metabolites were linked to the target proteins of dementia-associated cellular pathways. GO analysis revealed several enriched biological processes, such as inflammatory response, response to drug, and aging, that are implicated in the pathobiology of NDD. Network pathway analysis also shows that CP metabolites target several markers of the top enriched pathways. PI3K/Akt signaling is at the top of the enriched pathways and is associated with the development, survival, and activity of neurons. This pathway has multiple downstream effector targets, including those associated with cell survival (Bcl-2, Bax, IKK, NF-κB, and p53). Bcl-2 is a prosurvival protein, whereas Bax is a proapoptotic protein. IKK, NF-κB, and p53 are involved in the inflammatory response [63, 64]. Other signaling pathways, particularly the MAPK pathway, participate with PI3K/Akt signaling in regulating the growth and survival of cells.

Several pathways associated with the nervous system, namely, the neurotrophin signaling pathway, long-term potentiation, and the cholinergic, dopaminergic, and serotonergic synapses, were enriched, indicating that CP compounds may exert neuropharmacological effects by modulating these neuronal pathways. The neurotrophin signaling pathway supports the growth, maintenance, and survival of neurons. In the aging or degenerating brain, there is inadequate neurotrophic support, causing neuronal death [65]. Neurotrophin mimetics, in particular BDNF mimetics, could therefore have clinical importance in the management of NDD [66]. Downstream of neurotrophin signaling is the PI3K/Akt pathway, which was highly enriched in this study, and CP compounds were found to target the genes involved.
As a BDNF mimetic, 7,8-dihydroxyflavone, a TrkB agonist, has shown neurotrophic activities [67] and has been found effective in ameliorating motor and cognitive deficits [68]. Docking analysis further indicates that scopoletin exhibited the highest binding affinity to TrkB, the receptor of the neurotrophin signaling pathway, and may act as a BDNF mimetic, taking part in neuronal growth and survival by modulating the classical neurotrophin/PI3K/Akt signaling.

In AD pathobiology, there is a cholinergic deficit due to dysfunction of the cholinergic synapse. Although only symptomatic treatments, acetylcholinesterase (AChE) inhibitors such as donepezil, rivastigmine, and galantamine are currently used to compensate for memory deficits due to cholinergic dysfunction [69]. Molecular docking predicted that, except for kaempferol and quercetin, the other three compounds may inhibit AChE activity. The current data suggest that these CP compounds could be a promising alternative to existing AChE inhibitors for AD patients.

Among the endocrine pathways, the dominant pathway is the insulin signaling pathway, which plays an essential role in ensuring neuronal survival and homeostasis and in promoting synaptic plasticity, thereby supporting learning and memory function [70, 71]. Evidence shows that insulin signaling is impaired in degenerating brains [71]. Targeting impaired insulin signaling therefore constitutes a viable strategy against NDD. In the docking analysis, 4-hydroxycinnamic acid showed the highest binding affinity to the insulin receptor (INSR), although in the network pharmacology analysis quercetin and kaempferol also interacted with this target.

There was an enrichment of inflammation-related pathways, including the TNF, HIF-1, and NF-κB pathways, suggesting that anti-inflammatory effects mediated by CP compounds could play a pivotal role in preventing the inflammatory cascade during the pathobiological progression of NDD.
Cyclooxygenase enzymes, namely, COX-1 (PTGS1) and COX-2 (PTGS2), catalyze the biosynthesis of inflammatory mediators such as prostaglandins and thromboxane. In the brain, COX-2 is activated by excitatory synaptic activity in neurons and by inflammation in the glia. The COX-1/COX-2 pathway has pathogenic relevance in preclinical stages of Alzheimer’s disease development [72]. Pathological activation of COX-2 disrupts hippocampal synaptic function, leading to cognitive deficits [72]. Cyclooxygenase inhibitors, such as nonsteroidal anti-inflammatory drugs (NSAIDs), may have preventive effects against dementia [73]. Several cyclooxygenase inhibitors, such as celecoxib [74] and indomethacin [75], have shown promise in the management of AD. Docking results demonstrate that all CP compounds, including scopoletin and quercetin, exhibited substantial binding affinity to COX-2 and COX-1, suggesting their potential application in the development of antineuroinflammatory agents. Previous in silico reports on the interaction of COX-2 with quercetin and kaempferol also support our data [76].

In addition to the above cellular pathways, CP compounds target some other pathways, namely, autophagy, mitophagy, apoptosis, and necroptosis, as well as some specific molecular markers of the AD and PD pathways. Endothelial nitric oxide synthase, or eNOS (NOS3), is known for its role in regulating cerebral blood flow and is associated with synaptic plasticity such as long-term potentiation [77]. eNOS attenuates ischemic damage by regulating BDNF expression [78]. Nitric oxide produced by eNOS protects neurons from Tau pathology [79]. Another study reports that pharmacological activation of PI3K-eNOS signaling can ameliorate cognitive deficits in streptozotocin-induced rats [80]. Pharmacological interruption of eNOS activity results in an increase in inflammatory mediators, such as iNOS, in rat ischemic brains [81]. eNOS is thereby protective against inflammation and other pathologic stimuli.
Statins such as atorvastatin and simvastatin may contribute to the amelioration of brain tissue injury in the ischemic brain by activating eNOS [82]. Together, this evidence suggests that CP compounds targeting eNOS may have pharmacological significance against NDD pathobiology.

Other important targets are the monoamine oxidases (MAOs), which catalyze the oxidative deamination of monoamines and contribute to the metabolism of dopamine, the neurotransmitter of dopaminergic neurons. Drugs that inhibit MAO, particularly MAOB, such as selegiline and rasagiline, are currently in clinical use in patients with PD [83–85]. Docking findings demonstrate that CP compounds, particularly 4-hydroxycinnamic acid and scopoletin, showed high binding affinity to the MAOs, suggesting their prospects as MAO inhibitors for use in PD management.

Heme oxygenase-1, or HO-1 (HMOX1), is a stress-sensitive enzyme that catalyzes the breakdown of heme into iron, carbon monoxide, and biliverdin/bilirubin and is involved in the pathobiology of AD and other brain disorders. Astroglial induction of HMOX1 by β-amyloid and cytokines leads to mitochondrial iron sequestration and may thereby contribute to pathological iron deposition and bioenergetic failure [86]. Pharmacological intervention in glial HO-1 activity may provide neuroprotection in AD by limiting iron-mediated neurotoxicity [86]. All CP compounds except kaempferol exhibited high binding affinity to HO-1 and may thereby be neuroprotective through regulation of HO-1 activity.

Peroxisome proliferator-activated receptor-gamma, or PPARγ (PPARG), a ligand-activated nuclear transcription factor, regulates the expression of multiple genes encoding proteins involved in the regulation of lipid metabolism, the improvement of insulin sensitivity, and the inhibition of inflammation [87]. PPARγ agonists counteract oxidative stress and neuroinflammation and promote Aβ clearance [70, 88].
PPARγ agonists such as fenofibrate, icariin, and naringenin are known to be neuroprotective, supporting neuronal development and synaptic plasticity and ameliorating cognitive deficits [70, 89, 90]. In the docking analysis, 4-hydroxycinnamic acid and scopoletin showed the highest binding affinity to PPARγ, suggesting that these compounds may ameliorate cognitive deficits through activation of PPARγ signaling.

## 5. Conclusion

The in silico analysis predicts that the CP metabolites scopoletin, 4-hydroxycinnamic acid, kaempferol, quercetin, and ayapanin are the major bioactive leads, showing interaction with various molecular targets and cellular pathways crucial to neuronal growth, survival, and activity. The signaling pathways that CP compounds primarily target include the PI3K/Akt, neurotrophin, and insulin signaling pathways. In addition, the top targets of CP compounds, including PTGS1, PTGS2, NOS3, INSR, HMOX1, ACHE, PPARG, MAOA, MAOB, and TRKB, may be druggable targets for future drug design against dementia disorders. Together with previous reports, the combined network pharmacology and in silico observations form a scientific basis that supports the ethnomedical application of CP for memory enhancement and against aging-related and pathological cognitive deficits. However, further investigation of the memory-enhancing and neuroprotective effects of CP and its metabolites is essential to extrapolate the findings from preclinical and in silico models to clinical subjects.

---

*Source: 1015310-2022-10-03.xml*
2022
# Effects of Cryotherapy on the Maxillary Antrostomy Patency in a Rabbit Model of Chronic Rhinosinusitis

**Authors:** Anamaria Gocea; Marian Taulescu; Veronica Trombitas; Silviu Albu
**Journal:** BioMed Research International (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101534

---

## Abstract

Many failures of endoscopic sinus surgery are acknowledged to be related to scarring and narrowing of the maxillary antrostomy. We assessed the effect of low-pressure spray cryotherapy in preventing maxillary antrostomy stenosis in a chronic rhinosinusitis (CRS) rabbit model. A controlled, randomized, double-blind study was conducted on 22 New Zealand rabbits. After inducing unilateral rhinogenic CRS, a maxillary antrostomy was performed, and spray cryotherapy was employed in 12 randomly selected rabbits, while saline solution was applied to the control group (n=10). The antrostomy dimensions and the histological scores were assessed 4 weeks postoperatively. The diameter of the cryotreated antrostomy at 4 weeks was significantly larger than that in the control group. At 4 weeks, the maxillary antrostomy area in the study group was significantly larger than the mean area in the control group (103.92 ± 30.39 mm² versus 61.62 ± 28.35 mm², P=0.002). Submucosal fibrous tissue and leukocytic infiltration in saline-treated ostia were more prominent than those in cryotreated ostia, with no significant differences between the two groups regarding the histological scores. Intraoperative low-pressure spray cryotherapy increases the patency of the maxillary antrostomy at 4 weeks postoperatively with no important local side effects.

---

## Body

## 1. Introduction

Endoscopic sinus surgery (ESS) is highly effective in curing medically resistant maxillary chronic rhinosinusitis (CRS).
Nevertheless, between 2 and 18% of cases require revision surgery [1–3] due to the following causes of failure: middle turbinate lateralization, mucosal adhesions, vicious scarring, and ostium narrowing. It is estimated that up to 25% of maxillary ESS failures are related to ostium stenosis [1–4]. In order to enhance success rates, various methods have been proposed: mucosal-sparing surgical techniques [5], postoperative endoscopic debridements [6], perioperative drug-infused dressings [7–13], bioabsorbable drug-coated stents [14–18], mucoadhesive drug-eluting polymers [19, 20], and the use of oral and topical corticosteroids [21].

Cryotherapy, as a surgical tool, has long been used in oncology, ophthalmology, and gastroenterology [22, 23]. Unlike the cryoprobe (−86°C), spray cryotherapy with liquid nitrogen (−196°C) is a noncontact method of tissue ablation that can be used to quickly treat larger areas, providing more uniform treatment. A study of the human airway [24], conducted on surgically resected specimens, established the safety and feasibility of cryotherapy. Recent reports of successful noncontact low-pressure spray cryotherapy [25, 26] to modify the wound response in granulation-induced glottic and subglottic stenosis prompted us to investigate its effect on mucosal healing after ESS.

Our objective was to study the outcome of low-pressure spray cryotherapy applied to a surgically created maxillary antrostomy in an experimental CRS rabbit model. Our hypothesis was that cryotherapy is able to reduce stenosis of the antrostomy during the postoperative period. Since ESS is the treatment of choice in CRS, we chose to examine the effect of cryotherapy on chronically inflamed mucosa.

## 2. Materials and Methods

This study was approved by the University Committee on Animal Care & Use, and the animals were treated according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals.
We designed a prospective, controlled, randomized, double-blind, parallel-group animal study on a total of 24 New Zealand white rabbits of both sexes, with body weights ranging from 2.5 to 3.2 kg.

### 2.1. Animal Model and Surgical Technique

Initially, unilateral rhinogenic CRS was induced according to the method described by Liang et al. [27]. Briefly, rabbits were anesthetized with ketamine (50 mg/kg i.m.) and xylazine (4 mg/kg i.m.). Under endoscopic control, 1 μg phorbol 12-myristate 13-acetate (PMA, Sigma-Aldrich, St. Louis, MO, USA) was injected into the lateral nasal wall on one side, near the endoturbinates (similar to the middle turbinates in humans) [16]. The sides to be injected were randomly generated by a computer program. Afterward, a 3 × 5 × 30 mm piece of Merocel (Medtronic Xomed) was inserted into the nasal cavity. The Merocel was large enough to ensure ostial occlusion, and it was removed 15 days later. As demonstrated by Liang et al., this model is able to induce persistent inflammation lasting for more than 12 weeks, meeting the current definition of CRS [27].

At the end of the three-month period, we performed a maxillary antrostomy on the infected sinus. The operative technique was standardized and performed as described in the literature [20, 27–29]. After induction of anesthesia as previously described, a T-shaped incision was made over the lateral nasal dorsum through the skin and periosteum. Both maxillary sinuses were inspected. The superior wall of the infected sinus was completely opened, rendering the natural ostium visible. The ostium was circumferentially widened with a cutting bur, creating a through-and-through wound; the mucosa samples removed from around the maxillary ostium were sent for histological examination, and digital photographs of the antrostomies were taken. Before wounding, rabbits were randomly assigned to one of the two treatment groups.
In Group 1 (12 rabbits), cryotherapy was employed (two cycles of 4-second cryospray with complete thaw of the treated area between applications), while animals in Group 2 (12 rabbits) were sprayed with saline solution in the same manner and for the same duration. Spray cryotherapy was performed with the CryoPro Cryotherapy System (Williams Medical Supplies Ltd), a device that provides a uniform and broad distribution of liquid nitrogen at −196°C. Precise handling of the device allowed spraying the liquid nitrogen for 4 seconds on the maxillary antrostomy circumference; we then waited 35 seconds for the sprayed area to thaw, after which we applied the second 4-second cycle. We carefully avoided overspraying the liquid nitrogen onto the maxillary sinus mucosa or the nasal fossa (through the ostium), in order to prevent any interference with the study results and to avoid damaging surrounding tissues. The periosteum, subcutaneous tissue, and skin were closed with 4–0 Vicryl suture. Rabbits received antibiotics (ceftriaxone 30 mg/kg i.m.) for 10 days and were closely monitored.

### 2.2. Antrostomy Dimensions

At the end of the 4th postoperative week, the rabbits were sacrificed by intravenous administration of 500 mg phenobarbital. Immediately after death, we reopened the midline incision to permit access to the maxillary sinus. A blinded observer inspected the sinuses and extracted mucosa samples for histological analysis; local aspects were documented by digital photography. The pictures were taken by a professional photographer from the same camera angle and at the same focal distance, minimizing as much as possible the bias of differently angled pictures in the digital area measurement. All pictures were loaded into the Graphisoft ArchiCAD 13 program, and the antrostomy area was objectively measured in an automated manner (Figure 1).
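The area-based patency scoring used in this study amounts to a percent change relative to the value at surgery. A minimal sketch in Python (the helper name is ours; the group-mean areas from Table 2 are used purely for illustration, and since the study averaged per-animal changes, its 47.53% and 20.06% figures differ from these means-of-means):

```python
def patency_change(initial_area_mm2: float, final_area_mm2: float) -> float:
    """Percent change of the antrostomy area relative to its value at surgery.

    Positive values indicate enlargement, negative values narrowing.
    """
    return (final_area_mm2 - initial_area_mm2) / initial_area_mm2 * 100.0

# Illustration with the group-mean areas from Table 2 (mm^2):
print(round(patency_change(72.86, 103.92), 2))  # cryotherapy group: 42.63
print(round(patency_change(74.79, 61.62), 2))   # control group: -17.61
```

Because the score is a ratio per ostium, it is insensitive to moderate differences in absolute antrostomy size between animals, which is why each ostium serves as its own baseline.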
The patency of an ostium was scored by comparing its area value at 4 weeks with its area value at the date of the surgical procedure.

Figure 1 Measuring the antrostomy area.

### 2.3. Histological Analysis

Mucosa specimens obtained around each ostium were immediately fixed in 10% phosphate-buffered formalin solution for 24 hours, embedded in paraffin wax, cut into 5–7 μm sections, and stained with hematoxylin and eosin (H&E). Masson’s trichrome (M&T) staining was also performed to evaluate collagen fibers. The slides were analyzed with an Olympus BX51 microscope fitted with an Olympus SP 350 digital camera. “Cell B” basic imaging software (Olympus) was used for semiautomatic counting of the inflammatory parameters. Morphological evidence of epithelial damage, such as cilia disappearance, disruption of the epithelium, inflammatory cell infiltrates, fibrosis, and edema, was looked for under light microscopy and assessed according to the scale presented in Table 1. The pathologist was blinded, unaware whether the samples were harvested at the time of surgery or 4 weeks later, or whether they came from rabbits in the control or the study group.

Table 1 Grading of histological parameters evaluated during the study.
| Parameter | Grade | Description |
| --- | --- | --- |
| Mononuclear cell infiltrate | Grade 0 | Normal aspect (0–10 cells/field, 40x) |
| | Grade 1 | Discrete inflammation (10–30 cells/field, 40x) |
| | Grade 2 | Moderate inflammation (30–50 cells/field, 40x) |
| | Grade 3 | Severe inflammation (>50 cells/field, 40x) |
| Fibrosis | Grade 0 | Normal (3–4 subepithelial layers of collagen fibers) |
| | Grade 1 | Subepithelial fibrosis |
| | Grade 2 | Subepithelial and interglandular fibrosis |
| | Grade 3 | Diffuse fibrosis with compression atrophy of adjacent structures (glands and capillaries) |
| Edema | Grade 0 | No edema |
| | Grade 1 | Focal subepithelial edema with collagen fiber dislocation |
| | Grade 2 | Diffuse subepithelial edema with collagen fiber dislocation |
| | Grade 3 | Diffuse edema (subepithelial and interglandular) |
| Cilia | Grade 0 | Normal aspect (height: 4.5–5 µm) |
| | Grade 1 | Shortened cilia |
| | Grade 2 | Dotted cilia disappearance (epithelial areas without cilia) |
| | Grade 3 | Lack of cilia on the epithelium of inflammatory areas |
| Epithelial hyperplasia | Grade 0 | Absent |
| | Grade 1 | Present |

### 2.4. Statistical Methods

The statistical analysis was performed with SPSS 20.0 (SPSS, Inc., Chicago, IL, USA). Data are expressed as mean ± standard deviation (SD). P values < 0.05 were considered significant. We used Student’s t-test for normally distributed data, while the nonparametric Mann-Whitney U test was applied to data that did not follow a normal distribution. The Wilcoxon rank test was used to assess differences in treatment outcomes because the number of animals was limited and the data were not normally distributed. Correlation analysis was performed using Spearman statistics.

## 3. Results

Twenty-two rabbits (12 animals in Group 1 and 10 in Group 2) survived until the end of the study; one animal died because of an anesthetic accident, and another died during the 3-month chronic sinusitis induction period. The latter had eaten very poorly for a few days, probably due to a dental abscess, which is rather frequent in rabbits. Unilateral mucopurulent nasal discharge was observed in all 22 rabbits at the end of the 3-month maxillary sinusitis induction period. At surgery, all ostium-occluded maxillary sinuses were filled with purulent secretions and hypertrophic mucosa. However, the contralateral sinus was free of disease in all animals.

### 3.1. Chronic Rhinosinusitis Model

Microscopic analysis of mucosal specimens collected at surgery demonstrated thickening of the sinus mucosa, epithelial hyperplasia, mucous metaplasia, moderate to severe subepithelial fibrosis, and glandular atrophy. A prominent leukocytic infiltration into the lamina propria and epithelium was observed, with predominant mononuclear cell infiltrates (lymphocytes, macrophages, and plasma cells) and lymphoid follicle hyperplasia (see Figure 2). A moderate infiltrate with heterophils and eosinophils was also observed. No nasal polyps were found either macroscopically or microscopically in this study.
Grade 1 (discrete inflammation, 10–30 mononuclear cells/field 40x) was found in 2 rabbits (9.09%), grade 2 (moderate inflammation) in 7 rabbits (31.81%), and the majority (13 rabbits, 59.1%) presented with grade 3 (severe inflammation, >50 mononuclear cells/field 40x) at the time of surgery.

Figure 2 Mucosa around the ostia at the moment of surgical intervention (light microscopy). Maxillary sinus biopsy showing thickening of the sinus mucosa, lymphoid follicle hyperplasia, cilia degeneration (black arrowhead), diffuse and severe infiltrate of mononuclear cells (black arrow), heterophils, and scattered eosinophils. H&E stain, bar = 20 μm.

### 3.2. Antrostomy Dimensions

All wounds healed without infection, and all antrostomies remained open at 4 weeks. In the fourth postoperative week, there were no cryotherapy-induced changes to the periantrostomy sinus mucosa, so we consider that the blinding was accurate. Table 2 shows the mean diameter and area values of the maxillary ostia in both animal groups. Initial diameters and areas of the antrostomies were comparable in both groups. At 4 weeks, the Group 1 ostia were statistically significantly wider than the saline-treated ostia (see Figure 3). At 4 weeks, the antrostomy area in the study group had enlarged by a mean of 47.53%, while in the control group it had narrowed by 20.06% (statistically significant difference, P<0.05). Cryotherapy was able to significantly enlarge the antrostomy as compared with the control group (P<0.05 for both area and diameter, Wilcoxon sum rank test). A direct correlation was found between the histology scores (mean epithelial height, mononuclear cell infiltrate, fibrosis, and edema) and the mean ostial diameter for each of the two treatments (r=0.548, P<0.05, Spearman correlation).

Table 2 Mean values of antrostomy diameters and areas for both animal groups.
| | Group 1 (N=12), mean ± SD | Group 2 (N=10), mean ± SD | Statistics | P value |
|---|---|---|---|---|
| Initial diameter (mm) | 8.14 ± 2.31 | 8 ± 1.82 | Mann-Whitney U test | 0.005 |
| Final diameter (mm) | 9.73 ± 2.92 | 7.2 ± 2.61 | | |
| Initial area (mm²) | 72.86 ± 23.62 | 74.79 ± 25.43 | Mann-Whitney U test | 0.002 |
| Final area (mm²) | 103.92 ± 30.39 | 61.62 ± 28.35 | | |

Group 1: cryotherapy. Group 2: control. SD: standard deviation.

Figure 3: Antrostomy area distribution in the two groups.

### 3.3. Histological Analysis

The histology scores at the end of the study are displayed in Table 3. Postoperative 4-week specimens stained with Masson’s trichrome showed that even though the fibrosis score was higher in the saline-treated ostia than in the cryotreated ostia (Figure 4), the difference did not attain statistical significance (see Table 3). The mean epithelial height decreased at 4 weeks to 36.02 ± 12.04 μm, although it did not reach normal values. In the cryotreated group, it decreased in a statistically significant way (P<0.05) from 51 μm to 33 μm, while in the control group the mean epithelial height showed only a slight variation (about 3 μm). There was an increase in edema in both groups, but the difference was not statistically significant (see Table 3). Fibrosis decreased in the study group after 4 weeks, from a mean value of 2.25 to 1.85, although the difference was not statistically significant (P>0.05; see Table 3). We found a better organization of collagen fibers in the study group versus the control group. There was no significant difference between the control and study groups with respect to loss of cilia. More areas with normally ciliated epithelium were found in the cryotreated group, even though ciliary function was not assessed. Cellular atypia was not observed in any of the samples.
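As an arithmetic cross-check on the patency figures above: the percent change of the group-mean areas in Table 2 works out to roughly +42.6% (study) and −17.6% (control), which differs from the reported +47.53% and −20.06%, presumably because the paper averaged each animal's individual percent change rather than comparing group means. A minimal sketch of the computation:

```python
def pct_change(initial, final):
    """Percent change of an antrostomy measurement relative to its baseline."""
    return (final - initial) / initial * 100.0

# Group-mean antrostomy areas from Table 2 (mm^2).
study_change = pct_change(72.86, 103.92)     # cryotherapy group means
control_change = pct_change(74.79, 61.62)    # control group means

print(round(study_change, 2), round(control_change, 2))  # → 42.63 -17.61
```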
Light microscopic findings of the mucosal specimens obtained from the cryotreated group showed epithelial hyperplasia (Figure 5) after 4 weeks in four cases (33.3%); all the remaining specimens presented a normal architecture of the epithelial layer, with some inflammatory cells in the submucosal planes.

Table 3. Histology scores.

| Parameter | Group 1 (N=12), mean ± SD | Group 2 (N=10), mean ± SD | Statistics | P value |
|---|---|---|---|---|
| Mean epithelial height | 38.02 ± 12.96 | 33.62 ± 11.52 | Independent t-test | 0.415 (95% CI: −6.62437 to 15.41171) |
| Edema | 1.83 ± 1.267 | 1.80 ± 1.033 | Chi-square | 0.505 (χ² = 2.338) |
| Fibrosis | 1.75 ± 1.422 | 1.60 ± 1.174 | Chi-square | 0.417 (χ² = 2.842) |
| Cilia | 1.36 ± 1.206 | 1.00 ± 1.054 | Chi-square | 0.774 (χ² = 1.113) |
| Mononuclear cell infiltrate | 1.42 ± 0.996 | 1.40 ± 1.075 | Mann-Whitney | 0.945 (U = 59) |

Group 1: cryotherapy. Group 2: control. SD: standard deviation.

Figure 4: Light microscopic findings of Masson’s trichrome (M&T)-stained mucosa around the ostia at 4 weeks: in cryotreated mucosa, subepithelial collagen fiber depositions stained blue (a), bar = 20 μm, were less prominent than in saline-treated mucosa (b), bar = 20 μm.

Figure 5: Histological examination of the mucosa around the ostia at 4 weeks in the cryotreated group, revealing epithelial hyperplasia, scattered leukocytes, and moderate subepithelial fibrosis (white arrow). H&E stain, bar = 20 μm.

## 4. Discussion

Our histopathological results showed severe and moderate degrees of inflammation in 20 rabbits at the time of surgery, reinforcing the study’s clinical significance: endoscopic sinus surgery is indicated for chronic severe disease persisting for months, sometimes years, in spite of repeated appropriate antibiotic and steroid therapy. The sinus mucosa in the rhinosinusitis-induced rabbits was characterized by eosinophil-dominant inflammation, similar to the results reported by Johnston et al.
[23]. Thus, we succeeded in reproducing an important experimental model of chronic rhinosinusitis without nasal polyps. Maintaining the patency of the maxillary antrostomy is one of the goals of successful endoscopic sinus surgery. Four weeks after the surgical intervention, the cryotreated maxillary antrostomies were significantly wider than the saline-treated ostia.

The anteroinferior end of the middle turbinate and the lateral nasal wall delineate an isthmus, which is prone to synechia formation and ostium stenosis [1–3] during the stage of postoperative edema. Endoscopic debridements have been recommended in order to prevent adhesions following ESS. This painful procedure enables observation and management of scarring but may also slow down wound healing [19]. Therefore, as mentioned above, various devices have been proposed to promote middle meatus and ostium patency. In this study, we hypothesized that cryotherapy may hinder ostium stenosis.

Experimental animal models are good tools for understanding the pathogenesis and envisaging treatment in CRS [27–30]. Likewise, the maxillary sinus of rabbits is considered to be a good experimental model for regenerative studies following sinus surgery [27–33]. Several models [27–34] of sinusitis induction have been described in the literature; the majority of them are for acute disease. We chose Liang’s method because it is simple and feasible to perform, and it avoids the need to manipulate pathogenic bacteria. Since ESS is the method of choice in caring for CRS, we thought this model would best replicate the clinical setting of performing a middle meatal antrostomy during surgery. In other experimental models, the mean diameter of the antrostomy was 4 mm [19, 20, 29].
We chose the 8 mm diameter since, as opposed to the previous models in the literature, we operated on infected sinuses, and in this setting a wide antrostomy is recommended by most authors [5].

The mechanism of action of cryotherapy includes disruption of cell membranes by the formation of intracellular ice crystals during the freeze cycle, and vasoconstriction, endothelial damage, thrombosis, and tissue ischemia during the thaw cycle [22, 23]. Basic and clinical research has demonstrated that cryotherapy induces the production of collagen type III along with the development of a more organized collagen architecture [24]. Wound healing is a significant determinant of successful outcomes in endoscopic sinus surgery. In the human airways, histological evaluations showed better wound healing following cryotherapy, including improved collagen organization and decreased keratinization [24, 25]. Moreover, the supporting connective matrix was left intact after cryotherapy application, and long-term pathology findings revealed a complete lack of scarring or stricture [26]. Therefore, patients who exhibit persistent mucosal disease after ESS, which leads to repetitive antibiotic treatment and even surgical revisions, might benefit from intraoperative cryotherapy application, since it modulates mucosal healing and decreases granulation tissue formation.

Our study demonstrated that cryotherapy significantly improved antrostomy patency. At 4 weeks, the ostium diameter was significantly bigger in the cryotreated group. Further, the mean area in the cryotreated group had increased by 47.53% (reflecting, by default, the cryoablation effect as well), while in the control group it had narrowed by 20.06% following the normal process of mucosal healing by granulation tissue. No mucosal toxic effects were observed in Group 1.
In contrast, histology demonstrated that submucosal fibrous tissue and leukocytic infiltration in the control group ostia were more prominent than in the cryotreated ostia. In addition, a better organization of collagen fibers and more areas with normally ciliated epithelium were found in the cryotherapy group. An improved ciliated epithelium maintains normal drainage of the sinuses, which is based on mucociliary transport function, regardless of maxillary antrostomy patency.

There are several limitations to our study. Firstly, we did not split the study rabbits into several groups receiving different amounts of cryotherapy for different numbers of cycles, which would have allowed us to find the most appropriate dosage. Although the cilia appeared to be intact in the cryotreated group, their function was not appropriately assessed. Moreover, electron microscopy analysis of the treated cilia and measurement of cellular markers of inflammation and wound repair might have made our conclusions about the use of cryotherapy more accurate. Another limitation is that we did not measure the antrostomy diameter immediately after the application of cryotherapy or assess it at different time intervals during the study. Previous work by Proctor et al. [29] with the noninfected rabbit model of ostium patency used a 3-week end point for analysis. On the other hand, Chen et al. [20] assessed the neo-ostium diameter at 2, 3, and 4 weeks postoperatively in their study of hyaluronan hydrogel dressings in the rabbit maxillary sinus. In the latter study, the control at 3 weeks simulated postoperative debridement in human ESS [20]. Since cryotherapy does not require debridement, we set a 4-week end point for our investigation. Another major limitation is the restricted observation period of our study. However, we were guided by the abovementioned papers on experimental sinusitis in rabbits, choosing a similar observation period [27–34].
Moreover, it is well known that small animals heal more quickly than humans, and the ability to maintain a patent antrostomy for at least 4 weeks would equate to a useful clinical effect in humans lasting about 10 months (since one month of a rabbit’s life corresponds to about 10 months of a human’s [31–34]). An important limitation of our study is also the absence of specific histological examinations (Alcian blue staining to detect hyaluronic acid, Movat’s staining to detect elastin fibers, and picrosirius-polarization staining to detect collagen fibers [25]) performed at defined time intervals (1, 2, 3, 4, 6, 8, and 12 weeks). Because of this drawback, we were not able to reveal the mechanism underlying the enlarged antrostomy in the cryotherapy group. Based on the experimental data in the larynx [25], we can only suppose that in an inflammatory sinus setting, cryotherapy induces local necrosis and the wound healing is characterized by a better organized connective tissue structure.

## 5. Conclusion

In conclusion, intraoperative low-pressure spray cryotherapy increases the patency of the maxillary antrostomy at 4 weeks postoperatively with no important local side effects. Further studies are needed in order to determine the optimal dosage of cryotherapy. Moreover, in order to demonstrate the effectiveness and the safety of cryotherapy in maxillary antrostomy, longer-term observation and careful histological examination might be needed. Based on solid experimental evidence, further clinical trials of the use of spray cryotherapy in ESS might be attempted, such as enhancing middle meatus antrostomy and frontal recess patency rates and decreasing dacryocystorhinostomy failure rates.

---

*Source: 101534-2013-10-28.xml*
# Effects of Cryotherapy on the Maxillary Antrostomy Patency in a Rabbit Model of Chronic Rhinosinusitis

**Authors:** Anamaria Gocea; Marian Taulescu; Veronica Trombitas; Silviu Albu

**Journal:** BioMed Research International (2013)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2013/101534
--- ## Abstract It is acknowledged that many causes of failures in endoscopic sinus surgery are related to scarring and narrowing of the maxillary antrostomy. We assessed the effect of low-pressure spray cryotherapy in preventing the maxillary antrostomy stenosis in a chronic rhinosinusitis (CRS) rabbit model. A controlled, randomized, double-blind study was conducted on 22 New Zealand rabbits. After inducing unilateral rhinogenic CRS, a maxillary antrostomy was performed and spray cryotherapy was employed on randomly selected 12 rabbits, while saline solution was applied to the control group (n=10). The antrostomy dimensions and the histological scores were assessed 4 weeks postoperatively. The diameter of cryotreated antrostomy was significantly larger at 4 weeks than that in the control group. At 4 weeks, the maxillary antrostomy area in the study group was significantly larger than the mean area in the control group (103.92 ± 30.39 mm2 versus 61.62 ± 28.35 mm2, P=0.002). Submucosal fibrous tissues and leukocytic infiltration in saline-treated ostia were more prominent than those in cryotreated ostia with no significant differences between the two groups regarding the histological scores. Intraoperative low-pressure spray cryotherapy increases the patency of the maxillary antrostomy at 4 weeks postoperatively with no important local side effects. --- ## Body ## 1. Introduction Endoscopic sinus surgery (ESS) is highly effective in curing medically resistant maxillary chronic rhinosinusitis (CRS). Nevertheless, between 2 and 18% of cases require revision surgery [1–3] due to the following failure reasons: middle turbinate lateralization, mucosal adhesions, vicious scarring, and ostium narrowing. It is accounted that up to 25% of maxillary ESS failures are related to ostium stenosis [1–4]. 
In order to enhance success rates, various methods were forecasted: mucosal sparing surgical techniques [5], postoperative endoscopic debridements [6], perioperative drug-infused dressings [7–13], bioabsorbable drug-coated stents [14–18], mucoadhesive drug-eluting polymers [19, 20], and the use of oral and topical corticosteroids [21].Cryotherapy, as a surgical tool, has been extensively used until nowadays in oncology, ophthalmology, and gastroenterology [22, 23]. Unlike the cryoprobe (−86°C), spray cryotherapy with liquid nitrogen (−196°C) is a noncontact method of tissue ablation that can be used to quickly treat larger areas, providing more uniform treatment. A study in the airway of humans [24] was conducted using surgically resected specimens that determined cryotherapy’s safety and feasibility. The recent reports of successful noncontact low-pressure spray cryotherapy [25, 26] to modify the wound response in granulation-induced glottic and subglottic stenosis have prompted us to investigate its effect on mucosal healing after ESS.Our objective was to study the outcomes of the use of low-pressure spray cryotherapy on the surgically created maxillary antrostomy in an experimental CRS rabbit model. Our hypothesis was that cryotherapy is able to reduce stenosis of the antrostomy during the postoperative period. Since ESS is the treatment of choice in CRS cases, we chose to examine the effect of cryotherapy on long-term inflamed mucosa. ## 2. Materials and Methods This study was approved by the University Committee on Animal Care & Use and the animals were treated according to the National Institute of Health Guide for the Care & Use of Laboratory Animals. We designed a prospective, controlled, randomized, double-blind, and parallel-group animal study on a total of 24 New Zealand white rabbits of both genders with body weights ranging from 2.5 to 3.2 kg. ### 2.1. 
Animal Model and Surgical Technique Initially, unilateral rhinogenic CRS according to the method described by Liang et al. [27] was induced. Briefly, rabbits were anesthetized with ketamine (50 mg/kg i.m.) and xylazine (4 mg/kg i.m.). Under endoscopic control, 1 μg phorbol 12-myristate 13-acetate (PMA, Sigma-Aldrich, St. Louis, MO, USA) was injected into unilateral nasal lateral wall near the endoturbinates (similar to middle turbinates in humans) [16]. The specific sides to be injected were randomly generated by a computer program.Afterward, a 3 × 5 × 30 mm piece of Merocel (Medtronic Xomed) was inserted into the nasal cavity. The Merocel was big enough to ensure ostial occlusion and it was removed 15 days later. As demonstrated by Liang et al., this model is able to induce persistent inflammation lasting for more than 12 weeks, meeting the current definition of CRS [27].At the end of the three-month period, we performed a maxillary antrostomy on the infected sinus. The operative technique was standardized and performed as described in the literature [20, 27–29]. After induction of anesthesia as previously described, a T-shaped incision was made over the lateral nasal dorsum through the skin and periosteum. Both maxillary sinuses were inspected. The superior wall of the infected sinus was completely opened rendering the natural ostium visible. The ostium was circumferentially widened by a cutting bur, creating a through-and-through wound; we sent the mucosa samples removed around the maxillary ostium for histological examination and we took digital photographs of the antrostomies.Before wounding, rabbits were randomly assigned to one of the two treatment groups. In Group 1 (12 rabbits), cryotherapy was employed (two cycles of 4-second cryospray with complete thaw of the treated area between applications), while animals within Group 2 (12 rabbits) were sprayed with saline solution at the same dosage and period. 
Spray cryotherapy was performed with the CryoPro Cryotherapy System (Williams Medical Supplies Ltd), a device that provides a uniform and broad distribution of liquid nitrogen −196°. Precise handling of the device allowed spraying the liquid nitrogen for 4 seconds on the maxillary antrostomy circumference, and then we waited 35 seconds for thawing the sprayed area; afterwards, we applied the last cycle 4 seconds. We carefully avoided overspraying the liquid nitrogen on the maxillary sinus mucosa or the nasal fossa (through the ostium) in order to prevent any interference with the study results and not to damage surrounding tissues. The periosteum, subcutaneous tissue, and skin were closed with 4–0 Vicryl suture. Rabbits received antibiotics (30 mg/kg i.m ceftriaxone) for 10 days and were closely monitored. ### 2.2. Antrostomy Dimensions At the end of the 4th postoperative week, the rabbits were sacrificed by intravenous administration of 500 mg phenobarbital. Immediately after their death, we reopened the midline incision to permit access to the maxillary sinus. A blinded observer inspected the sinuses and extracted mucosa samples for histological analysis; local aspects were documented by digital photography. The pictures were taken by a professional photographer from the same camera angle respecting the same focal distance, minimalising as much as possible the bias of different angled pictures for the digital area measurement. All pictures were loaded on the Graphisoft ArchiCAD 13 program and the antrostomy area was objectively measured in an automated manner (Figure1). The patency of an ostium was scored by comparing its area value at 4 weeks with its area value at the date of the surgical procedure.Figure 1 Measuring the antrostomy area. ### 2.3. 
Histological Analysis Mucosa specimens obtained around each ostium were immediately fixed in 10% phosphate-buffered formalin solution for 24 hours, embedded in paraffin wax, cut into 5–7μm sections, and stained with hematoxylin and eosin (H&E). Masson’s trichrome (M&T) staining was also done for evaluating collagen fibers. The slides were analyzed with an Olympus BX51 microscope with an Olympus SP 350 digital camera.“Cell B” basic imaging software (Olympus) was used for semiautomatic counting of the inflammatory parameters. Morphological evidence of epithelial damage such as cilia disappearance, disruption of epithelium or inflammatory cell infiltrates, fibrosis, and edema was looked for under light microscopy and assessed according to the scale represented in Table1. The pathologist was blinded, unaware if the samples were harvested at the time of surgery or 4 weeks later or if they came from rabbits in the control or the study group.Table 1 Grading of histological parameters evaluated during the study. 
Parameter Grade Description Mononuclear cell infiltrate Grade 0 Normal aspect (between 0 and 10 cells/field 40x) Grade 1 Discrete inflammation (between 10 and 30 cells/field 40x) Grade 2 Moderate inflammation (between 30 and 50 cells/field 40x) Grade 3 Severe inflammation (>50 cells/field 40x) Fibrosis Grade 0 Normal (between 3 and 4 subepithelial layers of collagen fibers) Grade 1 Subepithelial fibrosis Grade 2 Subepithelial and interglandular fibrosis Grade 3 Diffuse fibrosis with compression atrophy of adjacent structures (glands and capillaries) Edema Grade 0 No edema Grade 1 Focal subepithelial edema with collagen fibers dislocation Grade 2 Diffuse subepithelial edema with collagen fibers dislocation Grade 3 Diffuse edema (subepithelial and interglandular) Cilia Grade 0 Normal aspect (height: 4.5–5 µm) Grade 1 Shortened cilia Grade 2 Dotted cilia disappearance (epithelial areas without cilia) Grade 3 Lack of cilia on the epithelium of the inflammatory areas Epithelial hyperplasia Grade 0 Absent Grade 1 Present ### 2.4. Statistical Methods The statistical analysis was performed by means of SPSS 20.0 (SPSS, Inc., Chicago, IL, USA). Data were expressed as mean ± standard deviation (SD).P values < 0.05 were considered significant. We used the Student’s t-test for normally distributed data, while the nonparametric test Mann-Whitney U test was applied for numbers that did not follow a normal distribution. Wilcoxon rank test assessed differences in treatment outcomes because the number of animals was limited and the data were not normally distributed. Correlation analysis was performed using Spearman statistics. ## 2.1. Animal Model and Surgical Technique Initially, unilateral rhinogenic CRS according to the method described by Liang et al. [27] was induced. Briefly, rabbits were anesthetized with ketamine (50 mg/kg i.m.) and xylazine (4 mg/kg i.m.). Under endoscopic control, 1 μg phorbol 12-myristate 13-acetate (PMA, Sigma-Aldrich, St. 
Louis, MO, USA) was injected into unilateral nasal lateral wall near the endoturbinates (similar to middle turbinates in humans) [16]. The specific sides to be injected were randomly generated by a computer program.Afterward, a 3 × 5 × 30 mm piece of Merocel (Medtronic Xomed) was inserted into the nasal cavity. The Merocel was big enough to ensure ostial occlusion and it was removed 15 days later. As demonstrated by Liang et al., this model is able to induce persistent inflammation lasting for more than 12 weeks, meeting the current definition of CRS [27].At the end of the three-month period, we performed a maxillary antrostomy on the infected sinus. The operative technique was standardized and performed as described in the literature [20, 27–29]. After induction of anesthesia as previously described, a T-shaped incision was made over the lateral nasal dorsum through the skin and periosteum. Both maxillary sinuses were inspected. The superior wall of the infected sinus was completely opened rendering the natural ostium visible. The ostium was circumferentially widened by a cutting bur, creating a through-and-through wound; we sent the mucosa samples removed around the maxillary ostium for histological examination and we took digital photographs of the antrostomies.Before wounding, rabbits were randomly assigned to one of the two treatment groups. In Group 1 (12 rabbits), cryotherapy was employed (two cycles of 4-second cryospray with complete thaw of the treated area between applications), while animals within Group 2 (12 rabbits) were sprayed with saline solution at the same dosage and period. Spray cryotherapy was performed with the CryoPro Cryotherapy System (Williams Medical Supplies Ltd), a device that provides a uniform and broad distribution of liquid nitrogen −196°. 
Precise handling of the device allowed spraying the liquid nitrogen for 4 seconds on the maxillary antrostomy circumference, and then we waited 35 seconds for thawing the sprayed area; afterwards, we applied the last cycle 4 seconds. We carefully avoided overspraying the liquid nitrogen on the maxillary sinus mucosa or the nasal fossa (through the ostium) in order to prevent any interference with the study results and not to damage surrounding tissues. The periosteum, subcutaneous tissue, and skin were closed with 4–0 Vicryl suture. Rabbits received antibiotics (30 mg/kg i.m ceftriaxone) for 10 days and were closely monitored. ## 2.2. Antrostomy Dimensions At the end of the 4th postoperative week, the rabbits were sacrificed by intravenous administration of 500 mg phenobarbital. Immediately after their death, we reopened the midline incision to permit access to the maxillary sinus. A blinded observer inspected the sinuses and extracted mucosa samples for histological analysis; local aspects were documented by digital photography. The pictures were taken by a professional photographer from the same camera angle respecting the same focal distance, minimalising as much as possible the bias of different angled pictures for the digital area measurement. All pictures were loaded on the Graphisoft ArchiCAD 13 program and the antrostomy area was objectively measured in an automated manner (Figure1). The patency of an ostium was scored by comparing its area value at 4 weeks with its area value at the date of the surgical procedure.Figure 1 Measuring the antrostomy area. ## 2.3. Histological Analysis Mucosa specimens obtained around each ostium were immediately fixed in 10% phosphate-buffered formalin solution for 24 hours, embedded in paraffin wax, cut into 5–7μm sections, and stained with hematoxylin and eosin (H&E). Masson’s trichrome (M&T) staining was also done for evaluating collagen fibers. 
The slides were analyzed with an Olympus BX51 microscope with an Olympus SP 350 digital camera.“Cell B” basic imaging software (Olympus) was used for semiautomatic counting of the inflammatory parameters. Morphological evidence of epithelial damage such as cilia disappearance, disruption of epithelium or inflammatory cell infiltrates, fibrosis, and edema was looked for under light microscopy and assessed according to the scale represented in Table1. The pathologist was blinded, unaware if the samples were harvested at the time of surgery or 4 weeks later or if they came from rabbits in the control or the study group.Table 1 Grading of histological parameters evaluated during the study. Parameter Grade Description Mononuclear cell infiltrate Grade 0 Normal aspect (between 0 and 10 cells/field 40x) Grade 1 Discrete inflammation (between 10 and 30 cells/field 40x) Grade 2 Moderate inflammation (between 30 and 50 cells/field 40x) Grade 3 Severe inflammation (>50 cells/field 40x) Fibrosis Grade 0 Normal (between 3 and 4 subepithelial layers of collagen fibers) Grade 1 Subepithelial fibrosis Grade 2 Subepithelial and interglandular fibrosis Grade 3 Diffuse fibrosis with compression atrophy of adjacent structures (glands and capillaries) Edema Grade 0 No edema Grade 1 Focal subepithelial edema with collagen fibers dislocation Grade 2 Diffuse subepithelial edema with collagen fibers dislocation Grade 3 Diffuse edema (subepithelial and interglandular) Cilia Grade 0 Normal aspect (height: 4.5–5 µm) Grade 1 Shortened cilia Grade 2 Dotted cilia disappearance (epithelial areas without cilia) Grade 3 Lack of cilia on the epithelium of the inflammatory areas Epithelial hyperplasia Grade 0 Absent Grade 1 Present ## 2.4. Statistical Methods The statistical analysis was performed by means of SPSS 20.0 (SPSS, Inc., Chicago, IL, USA). Data were expressed as mean ± standard deviation (SD).P values < 0.05 were considered significant. 
We used Student’s t-test for normally distributed data, while the nonparametric Mann-Whitney U test was applied for data that did not follow a normal distribution. The Wilcoxon rank test assessed differences in treatment outcomes because the number of animals was limited and the data were not normally distributed. Correlation analysis was performed using Spearman statistics.

## 3. Results

Twenty-two rabbits (12 animals in Group 1 and 10 in Group 2) survived until the end of the study; one animal died because of an anesthetic accident and another one died during the 3-month period of chronic sinusitis induction; the latter ate very poorly for a few days, probably due to a dental abscess, which is rather frequent in rabbits. Unilateral mucopurulent nasal discharge was observed in all 22 rabbits at the end of the 3-month maxillary sinusitis induction time. At surgery, all the ostium-occluded maxillary sinuses were filled with purulent secretions and hypertrophic mucosa. However, the contralateral sinus was free of disease in all animals.

### 3.1. Chronic Rhinosinusitis Model

Microscopic analysis of mucosal specimens collected at surgery demonstrated thickening of the sinus mucosa, epithelial hyperplasia, mucous metaplasia, moderate to severe subepithelial fibrosis, and glandular atrophy. A prominent leukocytic infiltration into the lamina propria and epithelium was observed, with predominant mononuclear cell infiltrates (lymphocytes, macrophages, and plasma cells) and lymphoid follicle hyperplasia (see Figure 2). A moderate infiltrate with heterophils and eosinophils was also observed. No nasal polyps were found either macroscopically or microscopically in this study.
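The nonparametric comparison described in Section 2.4 can be illustrated with a self-contained sketch of the Mann-Whitney U statistic (ranks averaged over ties, returning min(U1, U2)). The authors used SPSS, so this pure-Python version is illustrative only; the P value would still come from a critical-value table or a normal approximation:

```python
def mann_whitney_u(sample1, sample2):
    """Mann-Whitney U statistic via rank sums; ties receive their mid-rank."""
    pooled = sorted((v, i) for i, v in enumerate(sample1 + sample2))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid_rank = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid_rank
        i = j + 1
    n1, n2 = len(sample1), len(sample2)
    r1 = sum(ranks[:n1])            # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)
```

With completely separated samples, e.g. `mann_whitney_u([1, 2, 3], [4, 5, 6])`, the statistic reaches its extreme value 0.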
Grade 1 discrete inflammation (10–30 mononuclear cells/field, 40x) was found in 2 rabbits (9.09%), grade 2 (moderate inflammation) in 7 rabbits (31.81%), and the majority (13 rabbits, 59.1%) presented grade 3 severe inflammation (>50 mononuclear cells/field, 40x) at the time of surgery.

Figure 2: Mucosa around the ostia at the moment of surgical intervention, light microscopy. Maxillary sinus biopsy showing thickening of sinus mucosa, lymphoid follicle hyperplasia, cilia degeneration (black arrowhead), diffuse and severe infiltrate with mononuclear cells (black arrow), heterophils, and scattered eosinophils. H&E stain, bar = 20 μm.

### 3.2. Antrostomy Dimension

All wounds healed without infection and all antrostomies remained open at 4 weeks. In the fourth postoperative week there were no cryotherapy-induced changes to the periantrostomy sinus mucosa, so we consider that the blinding was effective. Table 2 shows the mean diameter and area values of the maxillary ostia in both animal groups. Initial dimensions and diameters of the antrostomies were identical in both groups. At 4 weeks, the Group 1 ostia were statistically significantly wider than the saline-treated ostia (see Figure 3). At 4 weeks, the antrostomy area in the study group had enlarged by a mean of 47.53%, while in the control group it had narrowed by 20.06% (statistically significant difference, P<0.05). Cryotherapy significantly enlarged the antrostomy compared with the control group (P<0.05 for both area and diameter, Wilcoxon rank-sum test). A direct correlation was found between the histology scores (mean epithelial height, mononuclear cell infiltrate, fibrosis, and edema) and the mean ostial diameter for each of the two treatments (r=0.548, P<0.05, Spearman correlation).

Table 2: Mean values of antrostomy diameters and areas for both animal groups.
| | Group 1 (N=12) mean ± SD | Group 2 (N=10) mean ± SD | Statistics | P value |
| --- | --- | --- | --- | --- |
| Initial diameter (mm) | 8.14 ± 2.31 | 8 ± 1.82 | Mann-Whitney U test | 0.005 |
| Final diameter (mm) | 9.73 ± 2.92 | 7.2 ± 2.61 | | |
| Initial area (mm²) | 72.86 ± 23.62 | 74.79 ± 25.43 | Mann-Whitney U test | 0.002 |
| Final area (mm²) | 103.92 ± 30.39 | 61.62 ± 28.35 | | |

Group 1: cryotherapy. Group 2: control. SD: standard deviation.

Figure 3: Antrostomy area distribution in the two groups.

### 3.3. Histological Analysis

The histology scores at the end of the study are displayed in Table 3. Postoperative 4-week specimens stained with Masson’s trichrome showed that even though the fibrosis score was higher in saline-treated than in cryotreated ostia (Figure 4), the difference did not reach statistical significance (see Table 3). The mean epithelial height decreased at 4 weeks to 36.02 ± 12.04 μm, although it did not reach normal values. In the cryotreated group it decreased in a statistically significant way (P<0.05), from 51 μm to 33 μm, while in the control group the mean epithelial height showed only a slight variation (3 μm). There was an increase in edema in both groups, but the difference was not statistically significant (see Table 3). Fibrosis decreased in the study group after 4 weeks, from a mean value of 2.25 to 1.85, the difference not being statistically significant (P>0.05; see Table 3). We found a better organization of collagen fibers in the study group than in the control group. There was no significant difference between the control and study groups with respect to loss of cilia. More areas with normally ciliated epithelium were found in the cryotreated group, even though ciliary function was not assessed. Cellular atypia was not observed in any of the samples.
Light microscopic findings of the mucosal specimens obtained from the cryotreated group showed epithelial hyperplasia (Figure 5) after 4 weeks in four cases (33.3%), all the rest presenting a normal architecture of the epithelial layer with some inflammatory cells in the submucosal planes.

Table 3: Histology scores.

| Parameter | Group 1 (N=12) mean ± SD | Group 2 (N=10) mean ± SD | Statistics | P value |
| --- | --- | --- | --- | --- |
| Mean epithelial height | 38.02 ± 12.96 | 33.62 ± 11.52 | Independent t-test | 0.415 (95% CI: −6.62437–15.41171) |
| Edema | 1.83 ± 1.267 | 1.80 ± 1.033 | Chi square | 0.505 (χ² = 2.338) |
| Fibrosis | 1.75 ± 1.422 | 1.60 ± 1.174 | Chi square | 0.417 (χ² = 2.842) |
| Cilia | 1.36 ± 1.206 | 1.00 ± 1.054 | Chi square | 0.774 (χ² = 1.113) |
| Mononuclear cell infiltrate | 1.42 ± 0.996 | 1.40 ± 1.075 | Mann-Whitney | 0.945 (U = 59) |

Group 1: cryotherapy. Group 2: control. SD: standard deviation.

Figure 4: Light microscopic findings of Masson’s trichrome (M&T) stained mucosa around the ostia at 4 weeks: subepithelial collagen fiber depositions, stained blue, in cryotreated mucosa (a), bar = 20 μm, were less prominent than in saline-treated mucosa (b), bar = 20 μm.

Figure 5: Histological examination of the mucosa around the ostia at 4 weeks in the cryotreated group, revealing epithelial hyperplasia, scattered leukocytes, and moderate subepithelial fibrosis (white arrow). H&E stain, bar = 20 μm.

## 4. Discussion

Our histopathological results showed severe and moderate degrees of inflammation in 20 rabbits at the time of surgery, reinforcing the study’s clinical significance: endoscopic sinus surgery is indicated for chronic severe disease persisting for months, sometimes years, in spite of repeated appropriate antibiotic and steroid therapy. The sinus mucosa in the rhinosinusitis-induced rabbits was characterized by eosinophil-dominant inflammation, similar to the results reported by Johnston et al.
[23]. Thus, we succeeded in reproducing an important experimental model of chronic rhinosinusitis without nasal polyps. Maintaining the patency of the maxillary antrostomy is one of the goals of successful endoscopic sinus surgery. Four weeks after the surgical intervention, the cryotreated maxillary antrostomies were significantly wider than the saline-treated ostia.

The anteroinferior end of the middle turbinate and the lateral nasal wall delineate an isthmus, which is prone to synechia formation and ostium stenosis [1–3] during the stage of postoperative edema. Endoscopic debridements have been recommended to prevent adhesions following ESS. This painful procedure enables observation and management of scarring but may also slow down wound healing [19]. Therefore, as mentioned above, different devices have been devised to promote middle meatus and ostium patency. In this study, we assumed that cryotherapy might hinder ostium stenosis.

Experimental animal models are good tools for understanding the pathogenesis and envisaging treatment of CRS [27–30]. Likewise, the maxillary sinus of rabbits is considered a good experimental model for regenerative studies following sinus surgery [27–33]. Several models of sinusitis induction [27–34] have been described in the literature, the majority of them for acute disease. We chose Liang’s method because it is simple to perform and feasible, and it avoids the need to manipulate pathogenic bacteria. Since ESS is the method of choice in caring for CRS, we thought this model would best replicate the clinical setting of performing a middle meatal antrostomy during surgery. In other experimental models, the mean diameter of the antrostomy was 4 mm [19, 20, 29].
We chose the 8 mm diameter because, as opposed to previous models in the literature, we operated on infected sinuses, and in this setting a wide antrostomy is recommended by most authors [5].

The mechanism of action of cryotherapy includes disruption of cell membranes by the formation of intracellular ice crystals during the freeze cycle, and vasoconstriction, endothelial damage, thrombosis, and tissue ischemia during the thaw cycle [22, 23]. Basic and clinical research has demonstrated that cryotherapy induces the production of collagen type III along with the development of a more organized collagen architecture [24]. Wound healing is a significant determinant of successful outcomes in endoscopic sinus surgery. In the human airways, histological evaluations showed better wound healing following cryotherapy, including improved collagen organization and decreased keratinization [24, 25]. Moreover, the supporting connective matrix was left intact after cryotherapy application, and long-term pathology findings revealed a complete lack of scarring or stricture [26]. Therefore, patients who exhibit persistent mucosal disease after ESS, which leads to repeated antibiotic treatment and even surgical revisions, might benefit from intraoperative cryotherapy application, since it modulates mucosal healing and decreases granulation tissue formation.

Our study demonstrated that cryotherapy significantly improved antrostomy patency. At 4 weeks, the ostium diameter was significantly larger in the cryotreated group. Further, the mean area in the cryotreated group increased by 47.53% (including by default the cryoablation effect), while in the control group it narrowed by 20.06%, following the normal process of mucosal healing by granulation tissue. No mucosal toxic effects were observed in Group 1.
In contrast, histology demonstrated that submucosal fibrous tissue and leukocytic infiltration in the control group ostia were more prominent than in the cryotreated ostia. In addition, a better organization of collagen fibers and more areas with normally ciliated epithelium were found in the cryotherapy group. Preserved ciliated epithelium maintains the normal drainage of the sinuses, which is based on mucociliary transport function, regardless of maxillary antrostomy patency.

There are several limitations to our study. Firstly, we did not split the study rabbits into several groups receiving different amounts of cryotherapy for different numbers of cycles in order to find the most appropriate dosage. Although cilia appeared to be intact in the cryotreated group, their function was not formally assessed. Moreover, electron microscopy analysis of the treated cilia and measurement of cellular markers for inflammation and wound repair might have made our conclusions about the use of cryotherapy more accurate. Another flaw is that we should have measured the antrostomy diameter after application of cryotherapy and should have assessed it at different time intervals during the study. Previous work of Proctor et al. [29] with the noninfected rabbit model of ostium patency used a 3-week end point for analysis. On the other hand, Chen et al. [20] assessed the neo-ostium diameter at 2, 3, and 4 weeks postoperatively in their study of hyaluronan hydrogel dressings in the rabbit maxillary sinus. In the latter study, a control at 3 weeks simulated postoperative debridement in human ESS [20]. Since we assume that cryotherapy does not require debridement, we set a 4-week end point for our investigation. Another major flaw is the restricted observation period of our study. However, we were guided by the abovementioned papers on experimental sinusitis in rabbits, choosing a similar observation period [27–34].
Moreover, it is well known that small animals heal more quickly than humans, and the ability to maintain a patent antrostomy for at least 4 weeks would equate to a useful clinical effect in humans for about 10 months (since one month of a rabbit’s life corresponds to about 10 months of a human’s [31–34]). An important limitation of our study is also the absence of specific histological examinations (Alcian blue staining to detect hyaluronic acid, Movat’s staining to detect elastin fibers, and picrosirius-polarization staining to detect collagen fibers [25]) performed at defined time intervals of 1, 2, 3, 4, 6, 8, and 12 weeks. Because of this drawback, we were not able to reveal the mechanism underlying the enlarged antrostomy in the cryotherapy group. Based on the experimental data in the larynx [25], we can only suppose that in an inflammatory sinus setting cryotherapy induces local necrosis and that the subsequent wound healing is characterized by a better organized connective tissue structure.

## 5. Conclusion

In conclusion, intraoperative low-pressure spray cryotherapy increases the patency of the maxillary antrostomy at 4 weeks postoperatively with no important local side effects. Further studies are needed to determine the best dosage of cryotherapy. Moreover, in order to demonstrate the effectiveness and the safety of cryotherapy in maxillary antrostomy, longer-term observation and careful histological examination may be needed. Based on solid experimental evidence, further clinical trials of the use of spray cryotherapy in ESS might be attempted, such as enhancing middle meatus antrostomy and frontal recess patency rates and decreasing dacryocystorhinostomy failure rates.

---
# Recent Advances in Morphological Cell Image Analysis

**Authors:** Shengyong Chen; Mingzhu Zhao; Guang Wu; Chunyan Yao; Jianwei Zhang
**Journal:** Computational and Mathematical Methods in Medicine (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101536

---

## Abstract

This paper summarizes recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have made great contributions to a doctor’s diagnostic results. Morphological cell analysis covers cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed.

---

## Body

## 1. Introduction

Cell morphology has become a standard theory for computerized cell image processing and pattern recognition, the purpose of which is the quantitative characterization of cell morphology, including structure and inner-component analysis, for a better understanding of the functioning and pathogenesis associated with malignancy and behavior [1].

Morphological cell analysis is a key issue for abnormality identification and classification, early cancer detection, and the analysis of dynamic changes under specific environmental stress.
The quantitative results are primary, objective, and reliable, which benefits pathologists in making the final diagnosis and enables fast observation and automated analysis systems.

In the present study, advances in morphological cell analysis are briefly reviewed. Overall, significant progress has been made on several issues. Morphological cell analysis has been integrated into new methods for biomedical applications, such as automatic segmentation and analysis of histological tumour sections [2–4], boundary detection of cervical cell nuclei in the presence of overlapping and clustering [5, 6], granule segmentation and spatial distribution analysis [7], morphological characteristic analysis of specific biomedical cells [8–10], understanding chemotactic responses and drug influences [11–14], and identifying cell morphogenesis at different stages of cell cycle progression [15].

Morphological feature quantification for grading cancerous or precancerous cells is especially widely researched in the literature: examples include nuclei segmentation based on a marker-controlled watershed transform and a snake model for hepatocellular carcinoma feature extraction and classification, which is important for prognosis and treatment planning [16], nuclear feature quantification for cancer cell cycle analysis [17], and feature extraction including image morphological analysis, wavelet analysis, and texture analysis for the automated classification of renal cells [18].

Computerized/automated early detection of cancer or abnormalities provides a basis for reducing deaths and morbidity, especially for cervical cancer, which is reported to be the most preventable disease through early detection [19], provision of prompt advice, and opportunities for follow-up treatment. As an example, [20] presents a prototype expert system for automated segmentation and effective cervical cancer detection, providing primary, objective, and reliable diagnostic results to gynaecologists making the final diagnosis.
These advances will contribute to realizing computer-assisted, interactive, or automated processing, quantification, statistical analysis, and diagnosis systems for biomedical applications.

The scope of this paper is restricted to morphological cell analysis by image processing in the field of biomedical research. Although this topic has attracted researchers since as early as the 1980s [21–23], this survey concentrates on the contributions of the last 5 years. No review of this nature can possibly cite each and every paper that has been published. Therefore, we include only what we believe to be representative samples of important works and broad trends from recent years. In many cases, references are provided to better summarize and draw distinctions among key ideas and approaches.

The paper has six more sections. Section 2 briefly provides an overview of related contributions. Section 3 introduces the typical formulation of cell morphology. Section 4 lists the relevant tasks, problems, and applications of cell morphology. Section 5 concentrates on typical solutions and methods. Section 6 is a discussion of our impressions of current and future trends. Section 7 is the conclusion.

## 2. Overview of Contributions

### 2.1. Summary

From the 1980s to 2010, about 1000 research papers on topics closely related to morphological cell analysis were published. Figure 1 shows the yearly distribution of these published papers. The plot shows that the topic of morphological cell analysis developed steadily over the past 20 years.

Figure 1: Yearly published records from 1990 to 2010.

### 2.2. Representatives

Morphological cell analysis has many applications in biomedical engineering.
Their most significant roles are summarized as follows.

1. Malignant cell identification and cancer detection [20, 24, 25].
2. Morphological changes during the cell cycle, such as division, proliferation, transition, and apoptosis [26–28], or following cell culture development [29].
3. Morphological differences used to elucidate physiological mechanisms [30] or to classify cell populations with different functions, such as neurons [31, 32].
4. Investigation of dynamic characteristics under specific environmental stress for personalized therapy [33–36] or for the selection of new drugs [37].
5. Morphometrical studies, such as subcellular structure (DNA, chromosome) analysis [38] for higher animals or plants based on 3D reconstruction [39, 40].

The commonly researched topics for solving morphological problems are listed below.

1. Mathematical morphology theory used on binary, gray, and color images for preprocessing or feature analysis [41–48].
2. Location determination: locating objects and analyzing their distribution [7, 49, 50].
3. Segmentation of meaningful areas: based on pixel, edge, region, and model features [2–4].
4. Characteristic quantification: based on cytopathology and the experience of physicians [51–58].
5. Recognition, classification, automated analysis, and diagnosis [6, 16, 24, 51, 59].

Morphological analysis has become a powerful mathematical tool for analyzing and solving cell informatics problems. Automatic feature quantification is undoubtedly the most widely used estimation technique in this topic. Among the variety of developed methods, the main distinguishing features can be summarized briefly as: shape, geometry, intensity, and texture. A few representative types of segmentation and classification are selected for easy appreciation of the state of the art, as shown in Table 1.

Table 1: Representative contributions.
| Processing | Method | Representative |
| --- | --- | --- |
| Segmentation | Active contour model (ACM) | [5], 2011 |
| Segmentation | Reconstruction of the approximate location of cellular membranes | [51], 2011 |
| Segmentation | Marker-controlled watershed transform and snake model | [16], 2010 |
| Segmentation | Segmentation combining features | [51], 2011 |
| Classification | K-means and support vector machines (SVM) | [6], 2011 |
| Classification | Bayesian classifier | [18], 2009 |

## 3. The Problem and Fundamental Principle

The fundamental principle of morphological cell analysis depends on cell biology, cytopathology, and the diagnostic experience of pathologists. To study cell characteristics, detect abnormalities, and determine the degree of malignancy, pathologists examine biopsy material under a microscope, which is subjective, laborious, and time consuming. Quantitative cell morphology is therefore studied, and computer-assisted systems have been presented to support the diagnostic process. The general procedure of such applications is described in Figure 2.

Figure 2: The general procedure of cell image analysis.

## 4. Tasks and Problems

### 4.1. Morphological Operation

Mathematical morphology is the basic theory behind many image processing algorithms; it can also extract image shape features by operating with various shape-structuring elements [60].
This processing technique has proved to be a powerful tool for many computer-vision tasks on binary and gray-scale images, such as edge detection, noise suppression, image enhancement, skeletonization, and pattern recognition [45]. The technique consists of two parts, binary morphology and gray-scale morphology; the commonly used operations of morphological dilation and erosion are defined, respectively, as

$$(f \oplus k)(x,y) = \max_{(m,n)} \{ f(x-m, y-n) + k(m,n) \}, \tag{1}$$

$$(f \ominus k)(x,y) = \min_{(m,n)} \{ f(x+m, y+n) - k(m,n) \},$$

where f is the original (gray-scale or binary) image, k is the corresponding structuring element, (x, y) is a pixel of image f, and (m, n) ranges over the support of k. After a morphological operation, image shape features such as edges, fillets, holes, corners, wedges, and cracks can be extracted.

Mathematical morphology can also be used on color images, avoiding the loss of information of traditional binary techniques [45]; the extended operations are based on an ordering of the multivariate data.

### 4.2. Cell Localization

Determination of the position of a cell, termed localization, is of paramount importance in achieving reliable and robust morphological analysis. High-level tasks such as segmentation and shape description are possible only if the initial position is known. In the early literature, primary methods were applied to sample images, such as [61], which used a sequence of morphological image operations to identify cell nuclei, and [29], which used conditional dilation techniques to estimate cell density without bias and to obtain precise cell contours. The results were acceptable only in single images without complicating factors.

Even when membranes are partially or completely invisible in the image (Figure 3(a)), the approximate locations of cells can be detected by reconstructing the cellular membranes [51]. This method is effective for locating lung cells in immunohistochemistry tissue images.
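The dilation and erosion definitions in (1) translate directly into pure Python over row-major nested lists. This toy sketch anchors the structuring element at its top-left corner and simply ignores offsets that fall outside the image, one common border convention, so it is illustrative rather than a production implementation:

```python
def dilate(f, k):
    """Gray-scale dilation: (f ⊕ k)(x, y) = max over (m, n) of f(x−m, y−n) + k(m, n)."""
    H, W = len(f), len(f[0])
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Offsets outside the image are skipped (border convention).
            vals = [f[y - n][x - m] + k[n][m]
                    for n in range(len(k)) for m in range(len(k[0]))
                    if 0 <= y - n < H and 0 <= x - m < W]
            out[y][x] = max(vals)
    return out

def erode(f, k):
    """Gray-scale erosion: (f ⊖ k)(x, y) = min over (m, n) of f(x+m, y+n) − k(m, n)."""
    H, W = len(f), len(f[0])
    out = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [f[y + n][x + m] - k[n][m]
                    for n in range(len(k)) for m in range(len(k[0]))
                    if 0 <= y + n < H and 0 <= x + m < W]
            out[y][x] = min(vals)
    return out
```

With a flat (all-zero) structuring element, dilation reduces to a local maximum filter and erosion to a local minimum filter, which is an easy way to sanity-check the signs in (1).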
Detecting cell nuclei within cell clusters is the key to determining the positions of cervical cells in conventional Pap smear images (Figure 3(b)). To deal with this problem, Plissiti et al. present a fully automated method [6]. It takes advantage of color information to obtain candidate nuclei centroids in the images and eliminates undesirable artifacts by applying a distance-dependent rule on the resulting centroids together with classification algorithms (fuzzy C-means and support vector machines). The experiments show that the results are very promising even for images with a high degree of cell overlap.

Figure 3: Biomedical cell images. (a) Lung cells; (b) cervical cells.

For automatic detection of granules in different cell groups and statistical analysis of their spatial locations, existing image analysis methods such as single thresholding, edge detection, and morphological operations cannot be used. Instead, the empirical cumulative distribution function of the distances and the density of granules can be considered [7]. Jiang et al. propose a machine learning method [62], based on Haar features (a combination of the intensity, shape, and scale information of the objects), to detect particle positions.

### 4.3. Segmentation

Segmentation is one of the most important steps for automated image analysis and better understanding of cell information. The algorithms that have been presented can be divided into edge-based, region-based, and model-based modules. Region-based approaches attempt to segment an image into regions according to regional image data similarity (or dissimilarity); examples include scale-space filtering, watershed clustering [63], gray-level thresholding [26], and region growing [64].
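As an illustrative sketch of the region-growing idea just mentioned (not code from any cited work; the seed point and tolerance parameter are hypothetical), a region is grown from a seed pixel by absorbing 4-connected neighbours whose intensity stays close to the seed value:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed` (row, col), absorbing 4-connected
    neighbours whose intensity differs from the seed by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    seed_val = float(img[seed])
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Practical variants compare against the running region mean rather than the fixed seed value, which makes the growth less sensitive to the particular seed pixel chosen.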
For clearly stained images, multilevel thresholding is the simplest and most commonly applied method for low-level segmentation, removing noise and extracting the region of interest (nucleus, cytoplasm, or the whole cell). It is defined as

$$g(x,y) = \begin{cases} I_i, & T_{i-1} \le f(x,y) \le T_i, \\ 0, & \text{otherwise}, \end{cases} \tag{2}$$

where i indexes the regions to be divided, T_i is the i-th threshold, and the intensity range from T_{i-1} to T_i corresponds to region i.

Although numerous algorithms have been developed, segmenting overlapping and connected cell clusters remains the key problem in cell image segmentation. The methods presented are mostly suited to specific, clearly stained images; for adequate segmentation of cell images in complex situations, semiautomated algorithms based on prior knowledge are usually more efficient than fully automated methods.

### 4.4. Quantitative Measurement of Meaningful Parameters

The quantitative measurement of cell features is meaningful for both image segmentation and abnormality detection. Fast, reproducible, accurate, and objective measurement of cell morphology helps avoid the subjective and interobserver variations that result in diagnostic shifts and, consequently, disagreement between different interpreters [20]. The quantitative characteristics of cell or nuclear structure alterations, extracted by robust image processing algorithms and 3D reconstruction, are also called morphological biosignatures. They capture cellular-level features and nuclear structure, including inner-component analysis; for example, the approximate number of mRNA molecules varying during the cell cycle, development, aging, different pathologies, and drug treatment can be evaluated quantitatively by extracting morphological parameters (cytoplasm and nucleus areas) [28]. Accurate quantification of these parameters could be beneficial for developing robust biosignatures for early cancer detection [1].
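The multilevel thresholding rule (2) above can be sketched directly in NumPy; this is a minimal illustration, assuming the thresholds T_0 < T_1 < … < T_n and the region labels I_i are supplied by the user:

```python
import numpy as np

def multilevel_threshold(f, thresholds, labels):
    """g(x, y) = I_i when T_{i-1} <= f(x, y) <= T_i, else 0, cf. (2).

    thresholds: sorted [T_0, T_1, ..., T_n]; labels: [I_1, ..., I_n].
    """
    g = np.zeros_like(f)
    for i, label in enumerate(labels):
        lo, hi = thresholds[i], thresholds[i + 1]
        g[(f >= lo) & (f <= hi)] = label  # assign region label I_i
    return g
```

For example, with thresholds `[100, 150, 255]` and labels `[1, 2]`, pixels in `[100, 150]` map to region 1, pixels in `[150, 255]` to region 2, and everything else to 0 (background).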
Multivariate statistical analyses of morphological data suggest that quantitative cytology may be a useful adjunct to conventional tests for the selection of new drugs with differentiating potential [37]. Extracted features such as cell area, perimeter, centroid, and the lengths of the major and minor axes, used to calculate more meaningful parameters such as displacement, protrusiveness, and ellipticity, serve to analyze the dynamic changes of human cancerous glioma cells [35]; they can also be used to identify different classes of neurons and relate neural structure (such as total dendritic length and dendritic field area) to function [31]. The most meaningful parameters are those that discriminate different patterns, such as cell size, shape distribution, and nuclear-to-cytoplasmic ratio for distinguishing normal from precancerous cervical squamous epithelium [44], or texture quantification as a measure of interchromosome coarseness to study cell proliferation [38]. Local gray-level differences and cell density, combined with other morphological parameters, make it possible to follow cell culture development under various experimental conditions [29]. Hitherto, the relationship between malignancy-associated morphological features in single tumour cells and the expression of markers indicating functional properties of these cells has remained widely unknown [65].

### 4.5. Statistical Analysis

Multivariate statistical analysis is applied to compare multivariate data and establish the quantitative changes and differences in characteristics between the groups under investigation. The kernel approach is to find a highly correlated feature set without redundancy. Principal component analysis (PCA) displays the original variables in a bidimensional space, thus reducing the dimensionality of the data and allowing a large number of variables to be visualized in a two-dimensional plot [11, 49, 66].

## 5. Methods and Solutions

### 5.1. Formulation in Morphological Analysis

Morphological analysis is often studied via the shape appearances of objects and the surfaces of images, with intensity seen as height and texture appearing as relief. Formalizing morphological features benefits computerized calculation and is more efficient than manual morphological quantification, which remains laborious and subjective. The morphological characteristics can be described by shape, geometrical, intensity, and texture analysis.

The geometrical features of regions can be described by area, radius, perimeter, the major and minor axis lengths, and so forth. The area of an object is calculated as the number of pixels of the region (Figure 4, the area enclosed by the closed curve). The radius is calculated from the projected cell area, supposing that each cell is circular. The major and minor axis lengths are the maximal and minimal numbers of pixels along the respective axes.
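The area, radius, and axis-length measures just described can be sketched on a binary region mask; this is a hypothetical NumPy helper, not from the paper, and the axis lengths here are simple bounding-box extents:

```python
import math
import numpy as np

def geometric_features(mask):
    """Basic geometrical features of a binary region mask.

    Area = pixel count; radius derived from area assuming a circular
    cell (area = pi * r^2); axis lengths taken as the maximal pixel
    extents of the region along rows and columns.
    """
    area = int(mask.sum())
    radius = math.sqrt(area / math.pi)
    ys, xs = np.nonzero(mask)
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    return {
        "area": area,
        "radius": radius,
        "major_axis": max(height, width),
        "minor_axis": min(height, width),
    }
```

A production implementation would instead derive the axis lengths from the region's second-order moments (the fitted ellipse), which is rotation-invariant; the bounding-box version above is only the simplest reading of the definition.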
Taking Figure 4 as an example, the perimeter is calculated as

$$P = N_1 + N_2 + \sqrt{2}\, N_3, \tag{3}$$

where N_1, N_2, and N_3 are the numbers of horizontal, vertical, and diagonal (bevel) segments on the boundary, respectively.

Figure 4: Geometrical features quantification.

Circularity, rectangularity, eccentricity, and irregularity are used to describe shape features. Circularity (C) and rectangularity (R) represent the degree of roundness and of rectangle-likeness, defined as

$$C = \frac{P^2}{4\pi A}, \qquad R = \frac{A}{H \times W}. \tag{4}$$

Eccentricity is defined as

$$E = \frac{\text{minor axis length}}{\text{major axis length}}. \tag{5}$$

Texture is an important visual cue that exists widely in images. Texture feature extraction is the most basic problem of texture analysis, including classification and segmentation. Dimensionality, discriminative power, stability, and computational cost have been considered in practical applications and studied for more than fifty years. Based on statistical theory, structure, models, and signal processing, many effective methods have been presented for different applications. Among them, the gray-level co-occurrence matrix (GLCM) has become one of the best known and most widely used statistical methods for texture feature extraction [26], especially in analyzing cell image texture. The interrelationship of the textural primitives that define morphological texture can be estimated by quite different descriptors, whose discriminant value varies considerably [67]. Descriptors based on the GLCM are summarized in Table 2.

Table 2: Texture features.

| Descriptor | Definition |
| --- | --- |
| Energy | ASM = ∑_{i=1}^{k} (g_i − ḡ)² p(g_i) |
| Uniformity | U = ∑_{i=1}^{k} p²(g_i) |
| Entropy | ENT = −∑_{i=1}^{k} p(g_i) log₂ p(g_i) |
| Smoothness | IDM = 1 − 1/(1 + s²), where s² = ∑_{i=1}^{k} (g_i − ḡ)² p(g_i) |

Here g_i is the gray value, ḡ its mean, p(g_i) its probability, and k the number of gray levels.

The intensity feature is characterized by the average intensity value of all pixels in the region. For RGB color images, it is calculated independently for the red, green, and blue components of the original image. The histogram is an efficient way to present intensity features. Kruk et al. characterize the histograms of the different color components by the following parameters: the mean, standard deviation, skewness, kurtosis, maximum value, and span of the histogram [59].

### 5.2. Deformable Models

Biomedical images are often acquired under complex conditions, which makes segmentation of the region of interest a hard task. Because of the various challenges in medical image processing, deformable models have been widely investigated and refined, becoming a powerful tool for medical image segmentation. The active contour model is one of the most classical algorithms; techniques based on active contour models have the potential to produce better estimates of cell morphology.

The existing active contour models can be categorized into two classes: edge-based models [68] and region-based models [69]. On the one hand, an edge-based model directly uses intensity gradient information to attract the contour toward the object boundaries; it therefore performs poorly on weak object boundaries, since cell images exhibit great fuzziness due to the low contrast at the cell membrane. On the other hand, a region-based model identifies each region of interest by a certain region descriptor that guides the motion of the contour, and it is less sensitive to the location of the initial contour to some extent. It is much more suitable for cell segmentation than the former.

The Chan-Vese model [70] is one of the most popular region-based active contour models and has been used successfully for segmenting images. Chan and Vese proposed an active contour model that segments an image into two sets of possibly disjoint regions by minimizing a simplified Mumford-Shah functional. The basic idea is as follows. Assume that Ω ⊂ R² is the image domain and I: Ω → R is a given image.
Mumford and Shah considered image segmentation as the problem of seeking an optimal contour C that divides the image domain into two approximately piecewise-constant regions with intensities u_i and u_o. The global data fitting term in the Chan-Vese model is then defined as

$$E_{cv}(c_1, c_2) = \int_{\bar{\Omega}} (I - c_1)^2 \, dx \, dy + \int_{\Omega} (I - c_2)^2 \, dx \, dy, \tag{6}$$

where Ω̄ and Ω represent the regions inside and outside the contour C, respectively, and c_1 and c_2 are two constants that fit the image intensities inside and outside C.

This model assumes that pixels within the same region are maximally similar, and it makes up for the shortcomings of edge detectors. When the contour accurately captures the object boundary, the two fitting terms minimize the fitting energy. In each segmented area, the mean value of the clustered pixels approximately equals c_1 or c_2, respectively. Thus the fitting terms with respect to c_1 and c_2 are the driving forces that evolve the curve on the principle of inner-region homogeneity.

Since regional difference is the guideline in image segmentation, the interregional difference should also be considered as a driving force of the model:

$$E = -\frac{1}{2}(c_1 - c_2)^2. \tag{7}$$

This region-based active contour energy is characterized by the maximum dissimilarity between regions: minimizing the energy E in (7) is the same as maximizing the difference between the regions. Equation (7) formulates the global instructive guidance term.

### 5.3. Classification

The extracted features form the input to the classification procedure for better analysis, correct grading, and pattern recognition. In the literature, unsupervised (e.g., K-means and spectral clustering) and supervised (e.g., support vector machine, SVM) classification schemes and artificial neural network (ANN) architectures have been applied. SVM classification is a state-of-the-art method, originally proposed in [71].
The decision function of a two-class problem can be written as

$$f(x) = \omega \cdot \phi(x) + b = \sum_{i=1}^{N} \alpha_i y_i K(x, x_i) + b, \tag{8}$$

where x_i ∈ R^d is a sample and y_i ∈ {±1} is the class label of x_i. A transformation ϕ(·) maps the data points x of the input space R^d into a higher-dimensional feature space R^D (D ≥ d). K(·,·) is a kernel function, which defines an inner product in R^D. Common choices are

$$K(x, x_i) = [(x \cdot x_i) + 1]^q, \qquad K(x, x_i) = \exp\left\{-\frac{|x - x_i|^2}{\sigma^2}\right\}, \qquad K(x, x_i) = \tanh(v (x \cdot x_i) + c). \tag{9}$$

The parameters α_i ≥ 0 are optimized by finding the hyperplane in feature space with maximum distance to the closest image ϕ(x_i) from the training set. For multilevel classification based on SVMs, a decision-tree classification scheme that discriminates between different grades is shown in Figure 5.

Figure 5: A decision-tree SVM classification scheme.

Although the SVM is one of the most famous classification methods and has achieved great success in pattern recognition, problems still exist, such as the neglect of different data distributions within classes. Recently, the structural support vector machine (SSVM) was proposed accordingly; it first exploits the intrinsic structure of the samples within classes by unsupervised clustering and then directly embeds this structural information into the SVM objective function [72]. SSVM generalizes better than the standard SVM algorithm, both theoretically and empirically.

### 5.4. 3D Morphology

Three-dimensional morphology, using 3D reconstruction and image processing techniques, is applied for quantitative morphometric analysis of cellular and subcellular structures. It is much more powerful than its 2D counterpart but is still largely based on the processing of separate 2D slices.

The approach to 3D morphological analysis consists of digital micrograph acquisition, reconstruction, and 3D-based feature extraction.
The acquired images are taken serially by a CT instrument at uniform angular intervals during a full 360° rotation [1], from electron imaging film taken with photographic products [73], or by electron microscopy [40]. Computer programs such as MATLAB or Visual Studio can be used for automated 3D image reconstruction. Based on the reconstructed models, features such as the three-dimensional shape of the cells can be extracted, which correlates with the assembly state of myofibrils in different stages [74], and ultrastructure such as the arrangement of compact chromatin of G0 lymphocytes can be studied [23].
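The SVM decision function (8) with the Gaussian kernel from (9) can be sketched in plain Python. This is a minimal illustration only: the support vectors, multipliers α_i, and bias b below are illustrative values, and training (finding the α_i) is omitted entirely:

```python
import math

def rbf_kernel(x, xi, sigma=1.0):
    """Gaussian kernel K(x, xi) = exp(-|x - xi|^2 / sigma^2), cf. (9)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-sq / sigma ** 2)

def decision(x, support, alphas, ys, b, kernel=rbf_kernel):
    """Two-class decision f(x) = sum_i alpha_i * y_i * K(x, x_i) + b, cf. (8).

    The sign of f(x) gives the predicted class label in {+1, -1}.
    """
    return sum(a * y * kernel(x, xi)
               for a, y, xi in zip(alphas, ys, support)) + b
```

For the decision-tree scheme of Figure 5, one such binary decision function would sit at each internal node, routing a feature vector down the tree until a grade is assigned.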
## 6. Existing Problems and Future Trends

Although morphological cell analysis has matured into established approaches for estimation and diagnosis in many applications, some problems still exist in biomedical engineering. Researchers are exerting effort not only on simple localization and segmentation but also on improving the methods, mainly in the following respects.

### 6.1. Real-Time Application and Computational Complexity

Morphological cell analysis is applied in almost all hospitals and is a key means of automatic microscopic analysis. However, because of its high computational complexity, it places strict limits on the number and stability of feature points. Traditional methods select only a few features, which limits the application scope of morphological analysis. The computational complexity greatly affects real-time application systems [50, 75].

### 6.2. Reliability

Reliability is a great concern in practical applications [55, 76]. Morphological analysis relies on the tuning of many parameters. Related techniques rely on known noise statistics, initial positions, and sufficiently good approximations of the measurement functions; deviations from such assumptions usually lead to degraded estimates during automatic analysis. Stochastic stability is established in terms of conditions on the initial errors, bounds on the observation noise covariance, observation nonlinearity, and modeling error. Features have to be treated effectively and efficiently through their removal from, or addition to, the system.
New methods should be explored to discard outliers and improve the matching rate. These will help stabilize algorithms and allow more accurate localization and parameter estimation.

### 6.3. With a Priori Knowledge

Constraints introduced in morphological cell parameters may help on some occasions. For example, morphological cell analysis is commonly used to estimate cell shapes and activities, which incorporates a priori information in a consistent manner. However, known models or information are often either ignored or dealt with heuristically [6].

### 6.4. Accuracy

Accuracy is always the most important factor in biomedical engineering. The accuracy of the calculated cell features strongly depends on the computational potential and the statistical possibilities. For example, an automated method provides accurate segmentation of the cellular membranes in the stained tracts and reconstructs the approximate location of the unstained tracts using nuclear membranes as a spatial reference. Accurate cell-by-cell membrane segmentation allows per-cell morphological analysis and quantification of the target membrane [16, 51, 77].

### 6.5. Artificial Intelligence

The integration of morphological cell analysis with artificial intelligence methods may yield better performance. Fuzzy logic, neural networks, genetic algorithms, and so forth can be combined to resolve the complex task as a whole.

## 7.
Conclusion

This paper summarizes recent advances in morphological cell analysis for biomedical engineering applications. Typical contributions are addressed for initialization, localization, segmentation, estimation, modeling, shape analysis, cell parameters, and so forth. Representative works are listed to give readers a general overview of the state of the art. A number of methods for solving morphological problems are investigated, and many methods developed for morphological cell analysis, including extended morphological cell segmentation, are introduced. In the 20-year history of morphological cell analysis, these methods have entered the field of biomedical engineering in a critical role. The largest volume of published reports in this literature belongs to the last ten years.

---

*Source: 101536-2012-01-09.xml*
# Recent Advances in Morphological Cell Image Analysis

**Authors:** Shengyong Chen; Mingzhu Zhao; Guang Wu; Chunyan Yao; Jianwei Zhang

**Journal:** Computational and Mathematical Methods in Medicine (2012)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2012/101536
---

## Abstract

This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have greatly contributed to the results and decisions of a doctor. Morphological cell analysis covers cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristics extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed.

---

## Body

## 1. Introduction

Cell morphology has become a standard theory for computerized cell image processing and pattern recognition. Its purpose is the quantitative characterization of cell morphology, including structure and inner-component analysis, for better understanding of the functioning and pathogenesis associated with malignancy and behavior [1]. Morphological cell analysis is a key issue for abnormality identification and classification, early cancer detection, and the analysis of dynamic changes under specific environmental stress. The quantitative results are primary, objective, and reliable, which is beneficial to pathologists in making the final diagnosis and enables fast observation and automated analysis systems.

In the present study, advances in morphological cell analysis are briefly reviewed. Overall, significant progress has been made on several issues.
Morphological cell analysis has been integrated into new methods for biomedical applications, such as automatic segmentation and analysis of histological tumour sections [2–4], boundary detection of cervical cell nuclei under overlapping and clustering [5, 6], granule segmentation and spatial distribution analysis [7], analysis of the morphological characteristics of specific biomedical cells [8–10], understanding chemotactic responses and drug influences [11–14], and identifying cell morphogenesis at different stages of cell cycle progression [15].

Morphological feature quantification for grading cancerous or precancerous cells is especially widely researched in the literature; examples include nuclei segmentation based on a marker-controlled watershed transform and a snake model for hepatocellular carcinoma feature extraction and classification, which is important for prognosis and treatment planning [16], nuclei feature quantification for cancer cell cycle analysis [17], and feature extraction combining image morphological analysis, wavelet analysis, and texture analysis for automated classification of renal cells [18].

Computerized/automated early detection of cancer or abnormalities provides a basis for reducing deaths and morbidity, especially for cervical cancer, which is reported to be the disease most preventable through early detection [19], provision of prompt advice, and opportunities for follow-up treatment. As an example, [20] presents a prototype expert system for automated segmentation and effective cervical cancer detection, providing primary, objective, and reliable diagnostic results to gynaecologists making the final diagnosis. These advances will contribute to realizing computer-assisted, interactive, or automated processing, quantification, statistical analysis, and diagnosis systems for biomedical applications.

The scope of this paper is restricted to morphological cell analysis by image processing in the field of biomedical research.
Although this topic has attracted researchers since as early as the 1980s [21–23], this survey concentrates on the contributions of the last 5 years. No review of this nature can possibly cite each and every paper that has been published. Therefore, we include only what we believe to be representative samples of important works and broad trends from recent years. In many cases, references are provided to better summarize and draw distinctions among key ideas and approaches.

The paper has six more sections. Section 2 briefly provides an overview of related contributions. Section 3 introduces the typical formulation of cell morphology. Section 4 lists the relevant tasks, problems, and applications of cell morphology. Section 5 concentrates on typical solutions and methods. Section 6 is a discussion of our impressions of current and future trends. Section 7 is the conclusion.

## 2. Overview of Contributions

### 2.1. Summary

From the 1980s to 2010, about 1000 research papers with topics on or closely related to morphological cell analysis were published. Figure 1 shows the yearly distribution of these published papers. The plot shows that the topic of morphological cell analysis developed steadily over the past 20 years.

Figure 1 Yearly published records from 1990 to 2010.

### 2.2. Representatives

Morphological cell analysis has many applications in biomedical engineering.
Their most significant roles are summarized as follows.

(1) Malignant cell identification and cancer detection [20, 24, 25].
(2) Morphological changes during a cell cycle, such as division, proliferation, transition, and apoptosis [26–28], or to follow cell culture development [29].
(3) Morphological differences to elucidate physiological mechanisms [30] or to classify a set of cell populations with different functions, such as neurons [31, 32].
(4) Investigation of dynamic characteristics under specific environmental stress for personalized therapy [33–36] or for the selection of new drugs [37].
(5) Morphometrical studies, such as the analysis of subcellular structures (DNA, chromosomes) [38] for higher animals or plants based on 3D reconstruction [39, 40].

The commonly researched topics for solving morphological problems are listed below.

(1) Mathematical morphology theory used in binary, gray, and color images for preprocessing or feature analysis [41–48].
(2) Location determination: locating objects and analyzing their distribution [7, 49, 50].
(3) Meaningful area segmentation: based on the features of pixels, edges, regions, and models [2–4].
(4) Characteristics quantification: based on cytopathology and the experience of physicians [51–58].
(5) Recognition, classification, automated analysis, and diagnosis [6, 16, 24, 51, 59].

Morphological analysis has become a powerful mathematical tool for analyzing and solving problems in cell informatics. Automatic feature quantification is undoubtedly the most widely used estimation technique in this topic. Among the variety of developed methods, the main differences and remarkable features can be summarized briefly as shape, geometrical, intensity, and texture. A few representative types of segmentation and classification are selected for easy appreciation of the state of the art, as shown in Table 1.

Table 1 Representative contributions.
| Processing | Method | Representative |
|---|---|---|
| Segmentation | Active contour model (ACM) | [5] (2011) |
| Segmentation | Reconstruction of the approximate location of cellular membranes | [51] (2011) |
| Segmentation | Marker-controlled watershed transform and snake model | [16] (2010) |
| Segmentation | Segmentation combining features | [51] (2011) |
| Classification | K-means and support vector machines (SVM) | [6] (2011) |
| Classification | Bayesian classifier | [18] (2009) |

## 3. The Problem and Fundamental Principle

The fundamental principle of morphological cell analysis depends on cell biology, cytopathology, and the diagnostic experience of pathologists. To study cell characteristics, detect abnormalities, and determine the degree of malignancy, pathologists examine biopsy material under a microscope, which is subjective, laborious, and time consuming. Therefore, quantitative cell morphology is studied, and computer-assisted systems are presented for the diagnostic process at the same time. The general procedure of such applications is described in Figure 2.

Figure 2 The general procedure of cell image analysis.

## 4. Tasks and Problems

### 4.1. Morphological Operation

Mathematical morphology is the basic theory behind many image processing algorithms, which can also extract image shape features by operating with various shape-structuring elements [60].
This processing technique has proved to be a powerful tool for many computer-vision tasks in binary and gray-scale images, such as edge detection, noise suppression, image enhancement, skeletonization, and pattern recognition [45]. The technique consists of two parts, binary morphology and gray-scale morphology; the commonly used operations of morphological dilation and erosion are defined, respectively, as follows:

(1) (f ⊕ k)(x, y) = max{f(x - m, y - n) + k(m, n)}, (f Θ k)(x, y) = min{f(x + m, y + n) - k(m, n)},

where f is the original image (gray scale or binary) operated on by the corresponding structuring element k, (x, y) is a pixel of image f, and (m, n) ranges over the support of element k. After a morphological operation, image shape features such as edges, fillets, holes, corners, wedges, and cracks can be extracted.

Mathematical morphology can also be used in color images, avoiding the loss of information of traditional binary techniques [45]. The new operations are based on an ordering in multivariate data processing.

### 4.2. Cell Localization

Determination of the position of a cell, termed localization, is of paramount importance in achieving reliable and robust morphological analysis. Achieving high-level tasks such as segmentation and shape description is possible if the initial position is known. In the early literature, primary methods were used on sample images, such as [61], which used a sequence of morphological image operations to identify cell nuclei, and [29], which used conditional dilation techniques to obtain unbiased estimates of cell density and precise cell contours. The results were acceptable only in single images without any complicating factors.

Even when membranes are partially or completely invisible in the image (Figure 3(a)), the approximate locations of cells can be detected by reconstructing cellular membranes [51]. This method is effective for lung cell localization in immunohistochemistry tissue images.
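The dilation and erosion operations of (1) in Section 4.1 can be sketched in pure Python; this minimal version assumes a flat 3x3 structuring element (k = 0 everywhere) and handles border pixels by restricting the window to the image:

```python
# Grayscale dilation/erosion per (1) with a flat 3x3 structuring element.
# f is a list of lists of gray values; windows are clipped at the borders.
def dilate(f):
    h, w = len(f), len(f[0])
    return [[max(f[y + n][x + m]
                 for n in (-1, 0, 1) for m in (-1, 0, 1)
                 if 0 <= y + n < h and 0 <= x + m < w)
             for x in range(w)] for y in range(h)]

def erode(f):
    h, w = len(f), len(f[0])
    return [[min(f[y + n][x + m]
                 for n in (-1, 0, 1) for m in (-1, 0, 1)
                 if 0 <= y + n < h and 0 <= x + m < w)
             for x in range(w)] for y in range(h)]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(dilate(img))  # the single bright pixel spreads to its 3x3 neighbourhood
print(erode(img))   # the isolated bright pixel is suppressed
```

Production code would normally use a library routine with an explicit structuring element; the sketch only illustrates the max/min definitions.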
Detecting cell nuclei that lie in cell clusters is the key point for estimating the positions of cervical cells in conventional Pap smear images (Figure 3(b)). To deal with this problem, Plissiti et al. present a fully automated method [6]. It takes advantage of color information to obtain the candidate nuclei centroids in the images and eliminates undesirable artifacts by applying a distance-dependent rule on the resulting centroids together with classification algorithms (fuzzy C-means and support vector machines). The experiments show that even for images with a high degree of cell overlapping, the results are very promising.

Figure 3 Biomedical cell images: (a) lung cells; (b) cervical cells.

For automatic detection of granules in different cell groups and statistical analysis of their spatial locations, existing image analysis methods, such as single thresholding, edge detection, and morphological operations, cannot be used. Thus, the empirical cumulative distribution function of the distances and the density of granules can be considered [7]. Jiang et al. propose a machine learning method [62], based on Haar features (a combination of the intensity, shape, and scale information of the objects), to detect particle positions.

### 4.3. Segmentation

Segmentation is one of the most important steps for automated image analysis and better understanding of cell information. The algorithms that have been presented can be divided into edge-based, region-based, and model-based modules. Region-based approaches attempt to segment an image into regions according to regional image data similarity (or dissimilarity), such as scale-space filtering, watershed clustering [63], gray-level thresholding [26], and region growing [64].
For clearly stained images, multilevel thresholding is the simplest and most commonly applied method for low-level segmentation to remove noise and obtain the region of interest (nucleus, cytoplasm, or the whole cell). It is defined as follows:

(2) g(x, y) = Ii if Ti-1 ≤ f(x, y) ≤ Ti, and g(x, y) = 0 otherwise,

where i indexes the regions to be divided and Ti is a threshold; values in the range from Ti-1 to Ti correspond to region i.

Although numerous algorithms have been developed, overlapping and connected clusters are still the key problem in cell image segmentation. The methods presented are available for specific images with clearly stained cells; for adequate segmentation of cell images under complex situations, semiautomated algorithms based on prior knowledge are usually more efficient than fully automated methods.

### 4.4. Quantitative Measurement of Meaningful Parameters

The quantitative measurement of cell features is meaningful for both image segmentation and abnormality detection. Fast, reproducible, accurate, and objective measurement of cell morphology helps avoid subjective and interobserver variations, which result in diagnostic shifts and consequently disagreement between different interpreters [20]. The quantitative characteristics of cell or nuclear structure alterations, extracted after robust image processing algorithms and 3D reconstruction, are also called morphological biosignatures; they capture cellular-level features and nuclear structure, including inner-component analysis, such as the quantitative evaluation of the approximate number of mRNA molecules varying during the cell cycle, development, aging, and different pathologies and drug treatments, by extracting morphological parameters (cytoplasm and nucleus areas) [28]. Accurate quantification of these parameters could be beneficial for developing robust biosignatures for early cancer detection [1].
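The multilevel thresholding rule (2) above maps each pixel value falling in [Ti-1, Ti] to the label Ii of region i and everything else to 0; a minimal sketch, where the thresholds and labels below are illustrative:

```python
# Multilevel thresholding per (2): thresholds = [T0, T1, ..., Tk];
# labels[i] is the label I_{i+1} assigned to values in [T_i, T_{i+1}].
def multilevel_threshold(f, thresholds, labels):
    def g(v):
        for i in range(len(labels)):
            if thresholds[i] <= v <= thresholds[i + 1]:
                return labels[i]
        return 0  # value outside all ranges
    return [[g(v) for v in row] for row in f]

img = [[10, 120],
       [200, 40]]
# Illustrative two-region split: [0, 99] -> label 1, [99, 255] -> label 2
print(multilevel_threshold(img, [0, 99, 255], [1, 2]))  # [[1, 2], [2, 1]]
```

In practice the thresholds Ti would be chosen from the image histogram rather than fixed by hand.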
Multivariate statistical analyses of morphological data suggest that quantitative cytology may be a useful adjunct to conventional tests for the selection of new drugs with differentiating potential [37].

Extracted features such as cell area, perimeter, centroid, and the lengths of the major and minor axes, used to calculate more meaningful parameters such as displacement, protrusiveness, and ellipticity, are employed to analyze the dynamic changes of human cancerous glioma cells [35]; they can also be used to identify different classes of neurons and to relate neural structure (such as total dendritic length and dendritic field area) to function [31].

The most meaningful parameters are obtained by discriminating different patterns, such as cell size, shape distribution, and nuclear-to-cytoplasmic ratio for determining normal versus precancerous cervical squamous epithelium [44], and texture quantification as a measure of interchromosome coarseness to study cell proliferation [38]. Local gray-level differences and cell density, combined with other morphological parameters, make it possible to follow cell culture development under various experimental conditions [29]. Hitherto, the relationship between malignancy-associated morphological features in single tumour cells and the expression of markers indicating functional properties of these cells has remained widely unknown [65].

### 4.5. Statistical Analysis

Multivariate statistical analysis is applied to compare multivariate data and to establish the quantitative changes and differences between the groups under investigation with respect to their characteristics. The core approach is to find a highly correlated feature set without redundancy. Principal components analysis (PCA) displays the original variables in a bidimensional space, thus reducing the dimensionality of the data and allowing the visualization of a large number of variables in a two-dimensional plot [11, 49, 66].

## 5. Methods and Solutions

### 5.1. Formulation in Morphological Analysis

Morphological analysis often studies the shape appearances of objects and the surfaces of images, with intensity seen as height and texture appearing as relief. Formalization of morphological features benefits computerized calculation and is more efficient than manual morphological quantification, which is laborious and subjective. Morphological characteristics can be described by shape, geometrical, intensity, and texture analysis.

The geometrical features of regions can be described by area, radii, perimeter, the major and minor axis lengths, and so forth. The area of an object is calculated as the number of pixels of the region (Figure 4, the area enclosed by the closed curve). Radii are calculated from the projected cell area, supposing that each cell is circular. The major and minor axis lengths are the maximal and minimal numbers of pixels along the respective axes.
Taking Figure 4 as an example, the perimeter is calculated as follows:

(3) P = N_1 + N_2 + √2·N_3,

Figure 4: Geometrical features quantification.

where N_1, N_2, and N_3 are the numbers of horizontal, vertical, and diagonal unit segments on the boundary, respectively. Circularity, rectangularity, eccentricity, and irregularity are used to describe shape features. Circularity (C) and rectangularity (R) represent the degree to which a region is circle-like or rectangle-like, defined as follows:

(4) C = P² / (4πA),  R = Area / (H × W).

Eccentricity is defined as follows:

(5) E = (minor axis length) / (major axis length).

Texture is an important visual cue that exists widely in images. Texture feature extraction is the most basic problem of texture analysis, including classification and segmentation. Dimensionality, discriminative power, stability, and computational cost are considered in practical applications and have been studied for more than fifty years. Based on statistical theory, structure, models, and signal processing, many effective methods have been presented for different applications. Among them, the gray-level co-occurrence matrix (GLCM) has become one of the best-known and most widely used statistical methods for texture feature extraction [26], especially in cell image texture analysis. The interrelationship of the textural primitives that define morphological texture can be estimated by quite different descriptors, whose discriminant value varies considerably [67]. The descriptors based on GLCM are summarized in Table 2.

Table 2: Texture features.

| Descriptor | Definition |
|---|---|
| Energy | ASM = Σ_{i=1}^{k} (g_i − ḡ)² p(g_i) |
| Uniformity | U = Σ_{i=1}^{k} p²(g_i) |
| Entropy | ENT = −Σ_{i=1}^{k} p(g_i) log₂ p(g_i) |
| Smoothness | IDM = 1 − 1/(1 + s²), where s = Σ_{i=1}^{k} (g_i − ḡ)² p(g_i) |

where g_i is the gray value and k is the number of gray levels. The intensity feature is characterized by the average intensity value of all the pixels of the region. For RGB color images, it is calculated independently for the red, green, and blue components of the original image. The histogram is an efficient way to show intensity features. Kruk et al.
characterize the histograms of different color components by the following parameters: mean, standard deviation, skewness, kurtosis, maximum value, and span of the histogram [59]. ### 5.2. Deformable Models Biomedical images often arise in complex conditions, which makes segmentation of the region of interest a hard task. Because of the various challenges in medical image processing, deformable models have been widely investigated and refined, becoming a powerful tool for medical image segmentation. The active contour model is one of the most classical algorithms, and techniques based on active contour models have the potential to produce better estimates of cell morphologies. Existing active contour models can be categorized into two classes: edge-based models [68] and region-based models [69]. On one hand, an edge-based model directly uses intensity gradient information to attract the contour toward object boundaries; this kind of model therefore performs poorly on weak object boundaries, since cell images are fuzzy owing to low contrast at the cell membrane. On the other hand, a region-based model identifies each region of interest using a certain region descriptor that guides the motion of the contour; it is less sensitive to the location of the initial contour to some extent and is much more suitable for cell segmentation than the former. The Chan–Vese model [70] is one of the most popular region-based active contour models and has been used successfully for image segmentation. Chan and Vese proposed an active contour model that segments an image into two sets of possibly disjoint regions by minimizing a simplified Mumford–Shah functional. The basic idea is as follows. Assume that Ω ⊂ R² is the image domain and I: Ω → R is a given image.
Mumford and Shah consider image segmentation as the problem of seeking an optimal contour C that divides the image domain into two approximately piecewise-constant regions, with constant intensities inside and outside C. The global data fitting term in the Chan–Vese model is defined as follows:

(6) E_cv(c1, c2) = ∫_Ω̄ (I − c1)² dx dy + ∫_Ω (I − c2)² dx dy,

where Ω̄ and Ω represent the regions inside and outside the contour C, respectively, and c1 and c2 are two constants that fit the image intensities inside and outside C, respectively. This model assumes that pixels within the same region have the greatest similarity, and it makes up for the shortcomings of edge detectors. When the contour accurately captures the object boundary, the two fitting terms minimize the fitting energy. In each segmented area, the mean value of the clustered pixels approximately equals c1 and c2, respectively. Thus the fitting terms with respect to c1 and c2 are the driving forces that evolve the curve on the principle of inner-region homogeneity. Since regional difference is the guideline in image segmentation, the interregional difference should also be considered as a driving force of the model:

(7) E = −(1/2)(c1 − c2)².

This kind of region-based active contour energy is characterized by the maximum dissimilarity between regions: minimizing the energy E in (7) is the same as maximizing the difference between the regions. Equation (7) formulates the global instructive guidance term. ### 5.3. Classification The extracted features form the input to the classification procedure for analysis, grading, and pattern recognition. In the literature, unsupervised (e.g., K-means and spectral clustering) and supervised (e.g., support vector machine, SVM) classification schemes and artificial neural network (ANN) architectures have been applied. SVM classification is a state-of-the-art method, originally proposed in [71].
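Returning briefly to the Chan–Vese data term in (6): for a fixed contour, the optimal c1 and c2 are simply the region mean intensities, and the energy vanishes on a piecewise-constant image whose boundary the contour matches exactly. A minimal numeric sketch (all names are illustrative):

```python
import numpy as np

def cv_fitting_energy(image, inside):
    """Chan-Vese data term of (6) for a fixed contour.

    `inside` is a boolean mask of the region enclosed by C; for a
    fixed contour the optimal c1, c2 are the region mean
    intensities, and the energy sums squared deviations from them.
    """
    image = np.asarray(image, dtype=float)
    inside = np.asarray(inside, dtype=bool)
    c1 = image[inside].mean()    # mean intensity inside C
    c2 = image[~inside].mean()   # mean intensity outside C
    energy = ((image[inside] - c1) ** 2).sum() + ((image[~inside] - c2) ** 2).sum()
    return float(c1), float(c2), float(energy)

# piecewise-constant image: energy is exactly zero when the
# contour mask matches the object boundary
img = np.zeros((6, 6))
img[1:4, 1:4] = 10.0
obj = img > 0
print(cv_fitting_energy(img, obj))   # (10.0, 0.0, 0.0)
```

Any contour that misses the object boundary yields a strictly positive fitting energy, which is the driving force of the curve evolution described above.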
The decision function of a two-class problem can be written as follows:

(8) f(x) = ω · φ(x) + b = Σ_{i=1}^{N} α_i y_i K(x, x_i) + b,

where x_i ∈ R^d is a sample and y_i ∈ {±1} is the class label of x_i. A transformation φ(·) maps the data points x of the input space R^d into a higher-dimensional feature space R^D (D ≥ d). K(·,·) is a kernel function, which defines an inner product in R^D. K(·,·) is commonly defined as follows:

(9) K(x, x_i) = [(x · x_i) + 1]^q,
    K(x, x_i) = exp(−|x − x_i|² / σ²),
    K(x, x_i) = tanh(v(x · x_i) + c).

The parameters α_i ≥ 0 are optimized by finding the hyperplane in feature space with maximum distance to the closest image φ(x_i) from the training set. For multilevel classification based on SVM, a decision-tree classification scheme that discriminates between different grades is shown in Figure 5.

Figure 5: A decision-tree SVM classification scheme.

Although SVM is one of the best-known classification methods and has achieved great success in pattern recognition, problems still exist, such as neglecting different data distributions within classes. Recently, the structural support vector machine (SSVM) was proposed accordingly; it first exploits the intrinsic structure of the samples within classes by unsupervised clustering and directly embeds this structural information into the SVM objective function [72]. SSVM generalizes better than the standard SVM algorithm, both theoretically and empirically. ### 5.4. 3D Morphology Three-dimensional morphology using 3D reconstruction and image processing techniques is applied for quantitative morphometric analysis of cellular and subcellular structures; it is much more powerful than its 2D counterpart but is still largely based on the processing of separate 2D slices. The approach to 3D morphological analysis consists of digital micrograph acquisition, reconstruction, and 3D-based feature extraction.
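Looking back at the classifier of Section 5.3, the decision function (8) with the Gaussian kernel from (9) can be sketched as follows. The support vectors, dual coefficients α_i, and labels are assumed to be given (e.g., by a trained solver); all names here are illustrative:

```python
import numpy as np

def rbf_kernel(x, xi, sigma=1.0):
    """Gaussian kernel from (9): K(x, xi) = exp(-|x - xi|^2 / sigma^2)."""
    d = np.asarray(x, dtype=float) - np.asarray(xi, dtype=float)
    return np.exp(-np.dot(d, d) / sigma ** 2)

def decision(x, support, alphas, labels, b=0.0, sigma=1.0):
    """Decision function (8): f(x) = sum_i alpha_i * y_i * K(x, x_i) + b.

    `support`, `alphas`, and `labels` would come from a trained SVM;
    here they are assumed inputs for illustration.
    """
    s = sum(a * y * rbf_kernel(x, xi, sigma)
            for a, y, xi in zip(alphas, labels, support))
    return s + b

# two support vectors of opposite class: the sign of f(x) gives the label
sv = [[0.0, 0.0], [2.0, 2.0]]
f = decision([0.1, 0.0], sv, alphas=[1.0, 1.0], labels=[+1, -1])
print(f > 0)   # True: the point is closer to the +1 support vector
```

In a decision-tree SVM scheme such as Figure 5, one such binary decision function is evaluated at each internal node to separate one grade from the rest.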
The images are acquired serially by a CT instrument at uniform angular intervals during a full 360° rotation [1], from electron imaging film [73], or by electron microscopy [40]. Computer programs such as MATLAB or Visual Studio can be used for automated 3D image reconstruction. Based on the reconstructed models, features such as the three-dimensional shape of the cells can be extracted, which correlate with the assembly state of myofibrils at different stages [74], and ultrastructure, such as the arrangement of compact chromatin of G0 lymphocytes, can be studied [23].
## 6. Existing Problems and Future Trends Although morphological cell analysis has matured in many estimation and diagnosis applications, some problems still exist in its biomedical engineering applications. Researchers are exerting efforts not only in simple localization and segmentation but also in improving the methods, mainly in the following aspects. ### 6.1. Real-Time Application and Computational Complexity Morphological cell analysis has been applied in almost all hospitals and is a key means of automatic microscopic analysis. However, because of its high computational complexity, it places strict limits on the number and stability of feature points. Traditional methods select only a few features, which limits the application scope of morphological analysis, and the computational complexity greatly affects real-time application systems [50, 75]. ### 6.2. Reliability Reliability is a great concern in practical applications [55, 76]. Morphological analysis relies on the tuning of many parameters. Related techniques rely on known noise statistics, initial positions, and sufficiently good approximations of measurement functions; deviations from such assumptions usually degrade the estimates during automatic analysis. Stochastic stability is established in terms of the conditions on the initial errors, the bound on the observation noise covariance, observation nonlinearity, and modeling error. Features have to be treated effectively and efficiently by their removal from or addition to the system.
New methods should be explored to discard outliers and improve the matching rate; these will help stabilize algorithms and allow more accurate localization and parameter estimation. ### 6.3. With a Priori Knowledge Constraints introduced on morphological cell parameters may help on some occasions. For example, morphological cell analysis is commonly used to estimate cell shapes and activities, which incorporates a priori information in a consistent manner. However, known models or information are often either ignored or dealt with heuristically [6]. ### 6.4. Accuracy Accuracy is always the most important factor in biomedical engineering. The accuracy of the computed cell measures depends strongly on the computational resources and the statistical possibilities. For example, an automated method provides accurate segmentation of the cellular membranes in the stained tracts and reconstructs the approximate location of the unstained tracts using nuclear membranes as a spatial reference; accurate cell-by-cell membrane segmentation allows per-cell morphological analysis and quantification of the target membrane [16, 51, 77]. ### 6.5. Artificial Intelligence Integrating morphological cell analysis with artificial intelligence methods may yield better performance. Fuzzy logic, neural networks, genetic algorithms, and so forth can be combined to resolve complex tasks as a whole.
## 7.
Conclusion This paper summarizes recent advances in morphological cell analysis for biomedical engineering applications. Typical contributions are addressed for initialization, localization, segmentation, estimation, modeling, shape analysis, cell parameters, and so forth, and representative works are listed to give readers a general overview of the state of the art. A number of methods for solving morphological problems are investigated, and many methods developed for morphological cell analysis, including extended morphological cell segmentation, are introduced. Over the twenty-year history of morphological cell analysis, these techniques have come to play a critical role in biomedical engineering, and the largest volume of published reports in this literature belongs to the last ten years. --- *Source: 101536-2012-01-09.xml*
2012
# Optimal Placement of Wind Power Plants in Transmission Power Networks by Applying an Effectively Proposed Metaheuristic Algorithm **Authors:** Minh Quan Duong; Thang Trung Nguyen; Thuan Thanh Nguyen **Journal:** Mathematical Problems in Engineering (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1015367 --- ## Abstract In this paper, a modified equilibrium algorithm (MEA) is proposed for optimally determining the position and capacity of wind power plants added in a transmission power network with 30 nodes and effectively selecting operation parameters for other electric components of the network. Two single objectives are separately optimized, including generation cost and active power loss for the case of placing one wind power plant (WPP) and two wind power plants (WPPs) at predetermined nodes and unknown nodes. In addition to the proposed MEA, the conventional equilibrium algorithm (CEA), heap-based optimizer (HBO), forensic-based investigation (FBI), and modified social group optimization (MSGO) are also implemented for the cases. Result comparisons indicate that the generation cost and power loss can be reduced effectively, thanks to the suitable location selection and appropriate power determination for WPPs. In addition, the generation cost and loss of the proposed MEA are also less than those from other compared methods. Thus, it is recommended that WPPs should be placed in power systems to reduce cost and loss, and MEA is a powerful method for the placement of wind power plants in power systems. --- ## Body ## 1. Introduction Solving optimal power flow problem (OPF) to have the steady and effective states of power systems is considered as the leading priority in operation of power systems. 
Specifically, the steady state is represented as a state vector, regarded as a set of variables such as the active and reactive power outputs of power plants, the voltages of generators in power plants, the reactive power output of shunt capacitors, transformer taps, load voltages, and the operating currents of transmission lines [1–3]. Generally, in solving the OPF problem to determine the steady state of power system operation, these variables are separated into control variables and dependent variables [4, 5]. The reactive power output of power plants (QG), the active power output of the plant at the slack node (PGs), the load voltages (VL), and the line currents (Il) are grouped into the dependent variable set [6–10], whereas the remaining variables, including the transformer tap changers (TapT), the active power output of the generators excluding that at the slack node (PG), and the reactive power supplied by capacitor banks (QCap), are put into the control variable set [11–15]. These control variables are used as the input to the MATPOWER program to find the dependent variables. MATPOWER is a computational tool based on the Newton–Raphson method for solving power flow. Once the dependent variable set is obtained, it is checked and penalized against previously known upper and lower bounds; violations of the bounds determine the quality of both the control and dependent variable sets [16–20]. These violations are converted into penalty terms and added to objective functions such as electricity generation cost (Coste), active power loss (Ploss), polluted emission (Em), and load voltage stability index (IDsl). Recently, the presence of renewable energies has been considered in power systems, as the shares of wind and solar power in electricity generation grow ever larger.
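The bound-checking step described above can be sketched as a quadratic penalty added to the objective; the bounds, weight, and variable names below are illustrative assumptions, not values from the paper:

```python
def penalized_objective(objective, dependents, bounds, weight=1e6):
    """Add a quadratic penalty for each dependent variable that
    leaves its [low, high] range, as in penalty-based OPF fitness.
    `bounds` maps variable name -> (low, high)."""
    penalty = 0.0
    for name, value in dependents.items():
        low, high = bounds[name]
        if value < low:
            penalty += (low - value) ** 2
        elif value > high:
            penalty += (value - high) ** 2
    return objective + weight * penalty

cost = 800.0                                 # $/h, illustrative
dep = {"V_load": 1.08, "Q_G1": 40.0}         # p.u. and MVAr, illustrative
lim = {"V_load": (0.95, 1.05), "Q_G1": (-20.0, 50.0)}
# 800 plus a large penalty for the 0.03 p.u. overvoltage
print(penalized_objective(cost, dep, lim))
```

A candidate whose dependent variables all stay within bounds is scored by the raw objective alone, so a metaheuristic naturally steers away from infeasible solutions.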
In that situation, the OPF problem has been modified and has become more complex than ever. The conventional version of the OPF problem considers only thermal power plants (THPs) as the main source [21–24]. In modified versions of the OPF problem, both THPs and renewable energies act as power sources. The modified OPF problem is outlined in Figure 1, in which the conventional OPF problem is the part of the figure without the variables regarding renewable energies, such as the active and reactive power outputs of wind power plants (Pw, Qw), the active and reactive power outputs of photovoltaic power plants (PVPs) (Ppv, Qpv), and the locations of WPPs and PVPs (Lw, Lpv). A large number of studies have been proposed to handle the modified OPF problem. These studies can be classified into three main groups: the first solves the OPF problem considering wind power sources injecting both active and reactive power into the grid; the second assumes that wind energy sources generate active power only; and the third considers both wind and solar energies. The applied methods, test systems, objective functions, placed renewable power plants, and compared methods regarding modified OPF problems are summarized in Table 1. All the studies in the table have focused on the placement of wind and photovoltaic power plants to cut the electricity generation fuel cost of THPs, and the results were mainly compared to base systems without the contribution of the renewable plants. In addition, other research directions of optimal power flow do not use renewable power plants but rely on reactive power dispatch [50, 51] or on VSC (voltage source converter) based HVDC (high-voltage direct current) [52, 53]. These studies also achieved the expected cost reduction and improved voltage quality.
If the use of renewable energies is combined with optimal reactive power dispatch, or with such converters, the expected results, such as reduced cost and power loss and enhanced voltage, can be significantly better.

Figure 1: Configuration of the modified OPF problem in the presence of renewable energies.

Table 1: The summary of studies proposed to solve the modified OPF problem considering renewable energies.

| Reference | Method | Applied system | Renewable energy | Compared methods |
|---|---|---|---|---|
| [25] | BFA | 30 nodes | Wind (P, Q) | GA |
| [26] | MBFA | 30 nodes | Wind (P, Q) | ACA |
| [27] | HABC | 30 nodes | Wind (P, Q) | ABA |
| [28] | MCS | 30, 57 nodes | Wind (P, Q) | MPSO |
| [29] | HA | 30 nodes | Wind (P, Q) | PSO |
| [30] | MFO | 30 nodes | Wind, solar (P, Q) | GWA, MVA, IMA |
| [31] | AFAPA | 30, 75 nodes | Wind (P) | APO, BA |
| [32] | KHA | 30, 57 nodes | Wind (P) | ALPSO, DE, RCGA |
| [33] | GSO | 300 nodes | Wind (P) | NSGA-II |
| [34] | MHGSPSO | 30, 57 nodes | Wind (P, Q) | MSA, GWA, WA |
| [35] | BSA | 30 nodes | Wind, solar (P) | — |
| [36] | IMVA | 30 nodes | Wind, solar (P, Q) | PSO, MVA, NSGA-II |
| [37] | PSO | 39 nodes | Wind, solar (P) | PSO variants |
| [38] | NSGA-II | 30, 118 nodes | Wind, solar (P) | — |
| [39] | MFA | 30 nodes | Wind, solar (P, Q) | MDE |
| [40] | GWO | 30, 57 nodes | Wind, solar (P) | GA, PSO, CSA1, MDE, ABA |
| [41] | BWOA, ALO, PSO | 30 nodes | Wind, solar (P, Q) | GSA, MFA, BMA |
| [42] | FPA | IEEE 30-bus | Wind, solar (P) | — |
| [43] | APDE | IEEE 30-bus | Wind, solar (P, Q) | — |
| [44] | HGTPEA | 30 nodes | Wind, solar (P) | — |
| [45] | HGNIPA | 118 nodes | Wind, solar (P, Q) | — |
| [46] | NDSGWA | 30 nodes | Wind, solar (P) | MDE |
| [47] | JYA | 30 nodes | Wind, solar (P) | — |
| [48] | HSQTIICA | 30 nodes | DG (P, Q) | IICA |
| [49] | MJYA | 30, 118 nodes | Wind (P) | MSA, ABA, CSA, GWA, BSA |

In recent years, metaheuristic algorithms have been developed widely and applied successfully to optimization problems in engineering. One of the most well-known is the conventional equilibrium algorithm (CEA) [54], introduced in early 2020. The conventional version was demonstrated to be more effective than PSO, GWA, GA, GSA, and SSA on a set of fifty-eight mathematical benchmark functions with different numbers and types of variables.
Since then, CEA has been widely applied to different optimization problems, such as AC/DC power grids [55], loss reduction of distribution networks [56], component design for vehicles [57], and multidisciplinary design problems [58]. However, CEA is not always the most effective among the methods applied to the same problems; it has been shown to be effective for large-scale problems, but it needs further improvement [59–62]. Thus, we propose another version of CEA, called the modified equilibrium algorithm (MEA), and also apply four other metaheuristic algorithms to check the performance of MEA. In this paper, the authors solve a modified OPF (MOPF) problem with the placement of wind power plants in the IEEE 30-bus transmission power network. Regarding the number of wind power plants, the two cases are one wind power plant (WPP) and two WPPs. Regarding the locations of the WPPs, the simple cases follow the previous study [47], and the more complicated cases determine suitable buses in the system by applying metaheuristic algorithms. It is noted that the study in [47] considered the placement of only one WPP, indicating bus 30 as the most suitable and bus 3 as the least effective. In this paper, we employ buses 3 and 30 in two separate cases to check the indication of [47]. The results show that placing one WPP at bus 30 achieves smaller power loss and smaller fuel cost than placing it at bus 3. In addition, the paper investigates the effectiveness of locations by applying MEA and four other metaheuristic algorithms to determine the location; placing one WPP at bus 30 yields the smallest power loss and the smallest total fuel cost. For the case of placing two WPPs, buses 30 and 3 could not produce the smallest fuel cost and the smallest power loss.
Buses 30 and 5 are the best locations for the minimization of fuel cost, while buses 30 and 24 are the best locations for the minimization of power loss. Therefore, the main contribution of the study to the electrical field is determining the best locations for the best power loss and the best total cost.

All study cases explained above are implemented by the proposed MEA and four existing algorithms published in 2020: the conventional equilibrium algorithm (CEA) [54], the heap-based optimizer (HBO) [63], forensic-based investigation (FBI) [64], and modified social group optimization (MSGO) [65]. As a result, the best locations leading to the smallest cost and the smallest loss are obtained by MEA. The applications of the four recent algorithms and the proposed MEA thus introduce a new algorithm and show the effectiveness of these tools in solving the MOPF problem; readers can evaluate the algorithms and decide whether to use them for their own optimization problems, in electrical engineering or other fields. The major contributions of the paper are summarized as follows:

1. Find the best locations for placing wind power plants in the IEEE 30-bus transmission power grid.
2. The added wind power plants and the other system parameters found by MEA can reach the smallest power loss and the smallest total cost.
3. Introduce four existing algorithms developed in 2020 and the proposed MEA, and show the performance of these optimization tools so that readers can decide whether to use them for their own applications.
4. Provide MEA, the most effective algorithm among the five applied optimization tools for the MOPF problem.

The organization of the paper is as follows. The two single objectives and the considered constraint set are presented in Section 2. The configuration of CEA for solving a sample optimization problem and the modified points of MEA are clarified in detail in Section 3.
Section 4 summarizes the computation steps for solving the modified OPF problem by using MEA. Section 5 presents the results obtained by the proposed MEA and the other methods, CEA, FBI, HBO, and MSGO. Finally, conclusions are given in Section 6, stating the achievements of the paper.

## 2. Objective Functions and Constraints of the Modified OPF Problem

### 2.1. Objective Functions

#### 2.1.1. Minimization of Electricity Generation Cost

In this research, the first single objective is the electricity generation cost of all thermal generators. At generator nodes, where thermal units are working, the cost is the most important factor in the optimal operation of the power network, and it should be reasonably low, as in the following model. The total cost is formulated by

$$EGC = \sum_{i=1}^{N_{TG}} FE_{TGi}(P_{TGi}), \tag{1}$$

where $FE_{TGi}$ is the fuel cost of the $i$th thermal unit, calculated as

$$FE_{TGi}(P_{TGi}) = \mu_{1i} + \mu_{2i}P_{TGi} + \mu_{3i}P_{TGi}^{2}. \tag{2}$$

#### 2.1.2. Minimization of Active Power Loss

Minimizing active power loss (APL) is a highly important target in transmission line operation. In general, the power loss of transmission networks is significant because of the high number of transmission lines carrying high operating currents. If the loss is minimized, the energy loss and the energy loss cost are also reduced accordingly. The loss can be obtained in different ways as follows:

$$APL = \sum_{q=1}^{N_{Br}} 3 I_q^{2} R_q,$$
$$APL = \sum_{i=1}^{N_{TG}} P_{TGi} - \sum_{i=1}^{N_n} P_{rqi}, \tag{3}$$
$$APL = \sum_{x=1}^{N_n}\sum_{y=1,\,x\neq y}^{N_n} Y_{xy}\left(U_x^{2} + U_y^{2} - 2U_xU_y\cos(\varphi_x - \varphi_y)\right).$$

### 2.2. Constraints

#### 2.2.1. Physical Constraints regarding Thermal Generators

In the operating process of thermal generators, three main constraints need to be supervised strictly: the limits on real power output, on reactive power output, and on voltage magnitude. Violating any of these limits can substantially damage the whole system and make it insecure.
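Before turning to the constraints, note that the cost objective in equations (1) and (2) is straightforward to evaluate directly. The following Python sketch does so; the coefficient values and unit outputs are hypothetical illustrations, not data from this study:

```python
# Hypothetical quadratic cost coefficients (mu1, mu2, mu3) for NTG = 3
# thermal units, illustrating equations (1)-(2); values are for
# demonstration only and are not taken from the paper's test system.
mu = [(0.0, 2.00, 0.00375),
      (0.0, 1.75, 0.01750),
      (0.0, 1.00, 0.06250)]
p_tg = [150.0, 60.0, 30.0]  # assumed active power outputs in MW

def unit_cost(m, p):
    """Quadratic fuel cost of one thermal unit, equation (2)."""
    mu1, mu2, mu3 = m
    return mu1 + mu2 * p + mu3 * p * p

# Total electricity generation cost, equation (1).
egc = sum(unit_cost(m, p) for m, p in zip(mu, p_tg))
```

In a real run, `p_tg` would be part of the candidate solution and `egc` would feed into the fitness function.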
Thus, the following constraints should be satisfied at all times:

$$P_{TGi}^{min} \le P_{TGi} \le P_{TGi}^{max}, \quad i = 1, \ldots, N_{TG},$$
$$Q_{TGi}^{min} \le Q_{TGi} \le Q_{TGi}^{max}, \quad i = 1, \ldots, N_{TG}, \tag{4}$$
$$U_{TGi}^{min} \le U_{TGi} \le U_{TGi}^{max}, \quad i = 1, \ldots, N_{TG}.$$

#### 2.2.2. The Power Balance Constraint

The power balance constraint relates the source side, comprising thermal units (TUs) and renewable energies, to the consumption side, comprising the loads and the loss on lines. Balance is established when the power supplied by the thermal generators equals the power required by the loads plus the loss.

The active power equation at each node $x$ is formulated as

$$P_{TGx} - P_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left(Y_{xy}\cos(\varphi_x - \varphi_y) + X_{xy}\sin(\varphi_x - \varphi_y)\right). \tag{5}$$

For the case that wind turbines supply electricity at node $x$, the active power balance becomes

$$P_{TGx} + P_{Windx} - P_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left(Y_{xy}\cos(\varphi_x - \varphi_y) + X_{xy}\sin(\varphi_x - \varphi_y)\right), \tag{6}$$

where $P_{Windx}$ is the power generation of the wind turbines at node $x$, limited by

$$P_{Wind}^{min} \le P_{Windx} \le P_{Wind}^{max}. \tag{7}$$

Similarly, reactive power is balanced at node $x$ as

$$Q_{TGx} + Q_{Comx} - Q_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left(Y_{xy}\sin(\varphi_x - \varphi_y) - X_{xy}\cos(\varphi_x - \varphi_y)\right), \tag{8}$$

where

$$Q_{Comx}^{min} \le Q_{Comx} \le Q_{Comx}^{max}, \quad x = 1, \ldots, N_{Com}. \tag{9}$$

For the case that wind turbines are placed at node $x$, reactive power is also supplied by the turbines, in the same role as thermal generators. As a result, the reactive power balance is

$$Q_{TGx} + Q_{Windx} + Q_{Comx} - Q_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left(Y_{xy}\sin(\varphi_x - \varphi_y) - X_{xy}\cos(\varphi_x - \varphi_y)\right), \tag{10}$$

where $Q_{Windx}$ is the reactive power generation of the wind turbines at node $x$, subject to

$$Q_{Wind}^{min} \le Q_{Windx} \le Q_{Wind}^{max}. \tag{11}$$

#### 2.2.3. Other Inequality Constraints

These constraints concern the operating limits of electric components such as lines, loads, and transformers. Lines and loads depend on the operating parameters of other components, such as TUs, wind turbines, shunt capacitors, and transformers. However, the operating values of lines and loads are very important for a stable operating status of the network.
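As an aside, the balance condition of equation (6) can be checked numerically by computing the per-node active power mismatch, which should be near zero at a feasible operating point. The sketch below follows the document's notation; the network data in the usage would be hypothetical placeholders, not the IEEE 30-bus data:

```python
import numpy as np

def active_balance_residual(P_tg, P_wind, P_rq, U, phi, Y, X):
    """Active power mismatch at every node, following equation (6):
    generation (thermal + wind) minus load minus the power injected
    into the network. All arguments are per-node arrays / matrices on
    an Nn-node system (assumed data layout)."""
    Nn = U.size
    inj = np.zeros(Nn)
    for x in range(Nn):
        for y in range(Nn):
            # U_x * U_y * (Y_xy cos(phi_x - phi_y) + X_xy sin(phi_x - phi_y))
            inj[x] += U[x] * U[y] * (Y[x, y] * np.cos(phi[x] - phi[y])
                                     + X[x, y] * np.sin(phi[x] - phi[y]))
    return P_tg + P_wind - P_rq - inj
```

In a penalty-based fitness function, the absolute values of these residuals (and the corresponding reactive residuals from equation (10)) would be driven toward zero by the power flow solver.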
If the components work beyond their allowable ranges, the network operates unstably, and faults can occur subsequently. Thus, the operating parameters of loads and lines must satisfy the following models:

$$U_{LN}^{min} \le U_{LNt} \le U_{LN}^{max}, \quad t = 1, \ldots, N_{LN},$$
$$S_{Brq} \le S_{Br}^{max}, \quad q = 1, \ldots, N_{Br}. \tag{12}$$

In addition, the transformers located at some nodes need to be tuned to supply standard voltage within a working range. The voltage regulation is performed by setting the taps of the transformers to satisfy the following constraint:

$$Tap^{min} \le Tap_i \le Tap^{max}, \quad i = 1, \ldots, N_T. \tag{13}$$

## 3. The Proposed Modified Equilibrium Algorithm (MEA)

### 3.1. Conventional Equilibrium Algorithm (CEA)

CEA was first introduced and applied in 2020 to solve a high number of benchmark functions. The method was superior to popular and well-known metaheuristic algorithms, yet it remains simple, with one technique for updating solutions and one technique for keeping the more promising of the new and old solutions. The implementation of CEA for a general optimization problem is presented mathematically as follows.

#### 3.1.1. The Generation of the Initial Population

Like other metaheuristic algorithms, CEA works with a set of $N_1$ candidate solutions. The boundaries of the solution set must be defined in advance, and the set is then produced in a second stage.
The solution set is represented by $Z = [Z_d]$, $d = 1, \ldots, N_1$, and its fitness is represented by $Fit = [Fit_d]$, $d = 1, \ldots, N_1$. To produce an initial solution set, the control variables included in each solution and their boundaries must first be determined:

$$Z_d = [z_{jd}], \quad j = 1, \ldots, N_2; \; d = 1, \ldots, N_1,$$
$$Z^{low} = [z_j^{low}], \quad j = 1, \ldots, N_2, \tag{14}$$
$$Z^{up} = [z_j^{up}], \quad j = 1, \ldots, N_2,$$

where $N_2$ is the number of control variables, $z_{jd}$ is the $j$th variable of the $d$th solution, $Z^{low}$ and $Z^{up}$ are the lower and upper bounds of all solutions, and $z_j^{low}$ and $z_j^{up}$ are the minimum and maximum values of the $j$th control variable. The initial solutions are produced within the bounds $Z^{low}$ and $Z^{up}$ as

$$Z_d = Z^{low} + r_1 \cdot (Z^{up} - Z^{low}), \quad d = 1, \ldots, N_1. \tag{15}$$

#### 3.1.2. New Update Technique for Variables

The fitness matrix $Fit$ is sorted to select the four best solutions, i.e., those with the lowest fitness values in the current set. The solution with the lowest fitness is set to $Z_{b1}$, while the solutions with the second, third, and fourth lowest fitness values are assigned to $Z_{b2}$, $Z_{b3}$, and $Z_{b4}$. In addition, a middle solution $Z_{mid}$ of the four best solutions is produced by

$$Z_{mid} = \frac{Z_{b1} + Z_{b2} + Z_{b3} + Z_{b4}}{4}. \tag{16}$$

The four best solutions and the middle solution are grouped into the solution set $Z_{5b}$:

$$Z_{5b} = \{Z_{b1}, Z_{b2}, Z_{b3}, Z_{b4}, Z_{mid}\}. \tag{17}$$

The new solution $Z_d^{new}$ of the old solution $Z_d$ is then determined by

$$Z_d^{new} = Z_{5b}^{rd} + M \cdot (Z_d - Z_{5b}^{rd}) + \frac{K \times M}{r_2}(1 - M). \tag{18}$$

In the above equation, $Z_{5b}^{rd}$ is a randomly chosen solution among the five solutions of $Z_{5b}$ in equation (17), whereas $M$ and $K$ are calculated by

$$M = 2\,\mathrm{sign}(r_3 - 0.5)\left(\frac{1}{e^{A \cdot Iter}} - 1\right), \tag{19}$$
$$K = K_0 \cdot \left(Z_{5b}^{rd} - r_4 \cdot Z_d\right), \tag{20}$$
$$A = \left(1 - \frac{Iter}{N_3}\right)^{Iter/N_3}, \tag{21}$$
$$K_0 = \begin{cases} 0, & \text{if } r_5 < 0.5, \\[4pt] \dfrac{r_6}{2}, & \text{otherwise.} \end{cases} \tag{22}$$

#### 3.1.3. New Solution Correction

The new solution $Z_d^{new}$ is a set of new control variables $z_{jd}^{new}$ that may lie beyond the minimum and maximum values of the control variables; that is, $z_{jd}^{new}$ may be higher than $z_j^{up}$ or smaller than $z_j^{low}$.
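The update rule of equations (16)–(22) amounts to a single sweep over the population. The Python sketch below condenses it; the exact forms of $M$ and $K_0$ (the exponential decay term and the $r_6/2$ branch) are reconstructions from garbled source notation rather than verbatim formulas, so treat them as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cea_update(Z, fit, it, n3):
    """One CEA update sweep over the population (equations (16)-(22)).

    Z : (N1, N2) array of candidate solutions, fit : (N1,) fitness
    values, it : current iteration, n3 : maximum iteration. The exact
    forms of M and K0 are assumed reconstructions, not verbatim."""
    n1, _ = Z.shape
    best4 = Z[np.argsort(fit)[:4]]            # four best solutions
    z_mid = best4.mean(axis=0)                # equation (16)
    pool = np.vstack([best4, z_mid])          # Z5b, equation (17)
    a = (1 - it / n3) ** (it / n3)            # equation (21)
    Z_new = np.empty_like(Z)
    for d in range(n1):
        z5 = pool[rng.integers(5)]            # random member of Z5b
        r2, r3, r4, r5, r6 = rng.random(5)
        m = 2 * np.sign(r3 - 0.5) * (np.exp(-a * it) - 1)  # eq. (19), assumed
        k0 = 0.0 if r5 < 0.5 else r6 / 2      # equation (22), assumed
        k = k0 * (z5 - r4 * Z[d])             # equation (20)
        Z_new[d] = z5 + m * (Z[d] - z5) + (k * m / r2) * (1 - m)  # eq. (18)
    return Z_new
```

The returned solutions still need the bound correction of the next subsection before they can be evaluated.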
If either case happens, each new variable $z_{jd}^{new}$ must be redetermined as follows:

$$z_{jd}^{new} = \begin{cases} z_{jd}^{new}, & \text{if } z_j^{low} \le z_{jd}^{new} \le z_j^{up}, \\ z_j^{low}, & \text{if } z_{jd}^{new} < z_j^{low}, \\ z_j^{up}, & \text{if } z_{jd}^{new} > z_j^{up}. \end{cases} \tag{23}$$

After correcting the new solutions, the new fitness is calculated and assigned to $Fit_d^{new}$.

#### 3.1.4. Selection of Good Solutions

At this point there are two solution sets, one old and one new. The higher-quality solutions must be retained so that the number of retained solutions equals $N_1$. This task is accomplished by

$$Z_d = \begin{cases} Z_d^{new}, & \text{if } Fit_d \ge Fit_d^{new}, \\ Z_d, & \text{otherwise,} \end{cases} \tag{24}$$
$$Fit_d = \begin{cases} Fit_d^{new}, & \text{if } Fit_d \ge Fit_d^{new}, \\ Fit_d, & \text{otherwise.} \end{cases} \tag{25}$$

#### 3.1.5. Termination Condition

CEA stops updating new control variables when the computation iteration reaches the maximum value $N_3$. The best solution and its fitness are then reported.

### 3.2. The Proposed MEA

The proposed MEA is a modified variant of CEA that uses a new technique for updating control variables. From equation (18), it can be seen that CEA chooses search spaces only around the four best solutions and the middle solution ($Z_{b1}$, $Z_{b2}$, $Z_{b3}$, $Z_{b4}$, $Z_{mid}$) for updating decision variables, whilst the search spaces near the fifth-best through the worst solutions are intentionally skipped. This strategy has contributed to the success of CEA, giving it better performance than other metaheuristics. However, CEA cannot reach a higher performance because it copes with two shortcomings. The first shortcoming is picking one of the five solutions in the set $Z_{5b}$ at random: the same search spaces may be revisited many times, so promising search spaces can be exploited ineffectively or skipped altogether. The second shortcoming lies in the two update steps, $M \times (Z_{b1} - Z_d)$ and $(K \times M / r_2)(1 - M)$, which decrease as the computation iteration increases and, in particular, become zero in the final computation iterations.
In fact, parameter A in equation (21) becomes 0 when the current iteration is equal to the maximum iteration N3. If we substitute A = 0 into equation (19), M becomes 0.Thus, the proposed MEA is reformed to eliminate the above drawbacks of CEA and reach better results as solving the OPF problem with the presence of wind energy. The two proposed formulas for updating new decision variables are as follows:(26)Zdnew1=Zd+MZb1−Zd+r7Zr1−Zr2,(27)Zdnew2=Zb1+MZd−Zb1+r8Zr3−Zr4.The two equations above are not applied simultaneously for the same old solutioni. Either equation (26) or equation (27) is used for the dth new solution. Zdnew1 in equation (26) is applied to update Zd if Zd has better fitness than the medium fitness of the population, i.e., Fitd < Fitmean. For the other case, i.e., Fitd ≥ Fitmean, Zdnew2 in equation (27) is determined. ## 3.1. Conventional Equilibrium Algorithm (CEA) CEA was first introduced and applied in 2020 for solving a high number of benchmark functions. The method was superior to popular and well-known metaheuristic algorithms, but its feature is simple with one technique of newly updating solutions and one technique of keeping promising solutions between new and old solutions.The implementation of CEA for a general optimization problem is mathematically presented as follows. ### 3.1.1. The Generation of Initial Population CEA has a set ofN1 candidate solutions similar to other metaheuristic algorithms. The solution set needs to define the boundaries in advance, and then it must be produced in the second stage. 
The set of solution is represented by Z = [Zd], where d = 1, …, N1, and the fitness function of the solution set is represented by Fit = [Fitd], where d = 1, …, N1.To produce an initial solution set, control variables included in each solution and their boundaries must be, respectively, predetermined as follows:(14)Zd=zjd;j=1,…,N2;d=1,…N1,Zlow=zjlow;j=1,…,N2,Zup=zjup;j=1,…,N2,where N2 is the control variable number, zjd is the jth variable of the dth solution, Zlow and Zup are lower and upper bounds of all solutions, respectively, and zjlow and zjup are the minimum and maximum values of the jth control variable, respectively.The initial solutions are produced within their boundsZlow and Zup as follows:(15)Zd=Zlow+r1⋅Zup−Zlow;d=1,…,N1. ### 3.1.2. New Update Technique for Variables The matrix fitness fit is sorted to select the four best solutions with the lowest fitness values among the available set. The solution with the lowest fitness is set toZb1, while the second, third, and fourth best solutions with the second, third, and fourth lowest fitness functions are assigned to Zb2, Zb3, and Zb4. In addition, another solution, which is called the middle solution (Zmid) of the four best solutions, is also produced by(16)Zmid=Zb1+Zb2+Zb3+Zb44.The four best solutions and the middle solution are grouped into the solution setZ5b as follows:(17)Z5b=Zb1,Zb2,Zb3,Zb4,Zmid.As a result, the new solutionZdnew of the old solution Zd is determined as follows:(18)Zdnew=Z5brd+M⋅Zd−Z5brd+K×Mr21−M.In the above equation,Z5brd is a randomly chosen solution among five solutions of Z5b in equation (17), whereas M and K are calculated by(19)M=2signr3−0.51eA.Iter−1,(20)K=K0⋅Z5brd−r4⋅Zd,(21)A=1−IterN3Iter/N3,(22)K0=0,if&r5<0.5,r62,& otherwise. ### 3.1.3. New Solution Correction The new solutionZdnew is a set of new control variables zjd,,new that can be beyond the minimum and maximum values of control variables. It means zjd,,new may be either higher than zjup or smaller than zjlow. 
If one out of the two cases happens, each new variable zjd,,new must be redetermined as follows:(23)zjd,new=zjd,new,ifzjlow≤zjd,new≤zjup,zjlow,ifzjlow>zjd,new,zjup,ifzjd,new>zjup.After correcting the new solutions, the new fitness function is calculated and assigned to Fitdnew. ### 3.1.4. Selection of Good Solutions Currently, there are two solution sets, one old set and one new set. Therefore, it is important to retain higher quality solutions so that the retained solutions are equal toN1. This task is accomplished by using the following formula:(24)Zd=Zdnew,ifFitd>Fitdnew,Zdnew,ifFitd=Fitdnew,Zd,else,(25)Fitd=Fitdnew,ifFitd>Fitdnew,Fitdnew,ifFitd=Fitdnew,Fitd,else. ### 3.1.5. Termination Condition CEA will stop updating new control variables when the computation iteration reaches the maximum valueN3. In addition, the best solution and its fitness are also reported. ## 3.1.1. The Generation of Initial Population CEA has a set ofN1 candidate solutions similar to other metaheuristic algorithms. The solution set needs to define the boundaries in advance, and then it must be produced in the second stage. The set of solution is represented by Z = [Zd], where d = 1, …, N1, and the fitness function of the solution set is represented by Fit = [Fitd], where d = 1, …, N1.To produce an initial solution set, control variables included in each solution and their boundaries must be, respectively, predetermined as follows:(14)Zd=zjd;j=1,…,N2;d=1,…N1,Zlow=zjlow;j=1,…,N2,Zup=zjup;j=1,…,N2,where N2 is the control variable number, zjd is the jth variable of the dth solution, Zlow and Zup are lower and upper bounds of all solutions, respectively, and zjlow and zjup are the minimum and maximum values of the jth control variable, respectively.The initial solutions are produced within their boundsZlow and Zup as follows:(15)Zd=Zlow+r1⋅Zup−Zlow;d=1,…,N1. ## 3.1.2. 
New Update Technique for Variables The matrix fitness fit is sorted to select the four best solutions with the lowest fitness values among the available set. The solution with the lowest fitness is set toZb1, while the second, third, and fourth best solutions with the second, third, and fourth lowest fitness functions are assigned to Zb2, Zb3, and Zb4. In addition, another solution, which is called the middle solution (Zmid) of the four best solutions, is also produced by(16)Zmid=Zb1+Zb2+Zb3+Zb44.The four best solutions and the middle solution are grouped into the solution setZ5b as follows:(17)Z5b=Zb1,Zb2,Zb3,Zb4,Zmid.As a result, the new solutionZdnew of the old solution Zd is determined as follows:(18)Zdnew=Z5brd+M⋅Zd−Z5brd+K×Mr21−M.In the above equation,Z5brd is a randomly chosen solution among five solutions of Z5b in equation (17), whereas M and K are calculated by(19)M=2signr3−0.51eA.Iter−1,(20)K=K0⋅Z5brd−r4⋅Zd,(21)A=1−IterN3Iter/N3,(22)K0=0,if&r5<0.5,r62,& otherwise. ## 3.1.3. New Solution Correction The new solutionZdnew is a set of new control variables zjd,,new that can be beyond the minimum and maximum values of control variables. It means zjd,,new may be either higher than zjup or smaller than zjlow. If one out of the two cases happens, each new variable zjd,,new must be redetermined as follows:(23)zjd,new=zjd,new,ifzjlow≤zjd,new≤zjup,zjlow,ifzjlow>zjd,new,zjup,ifzjd,new>zjup.After correcting the new solutions, the new fitness function is calculated and assigned to Fitdnew. ## 3.1.4. Selection of Good Solutions Currently, there are two solution sets, one old set and one new set. Therefore, it is important to retain higher quality solutions so that the retained solutions are equal toN1. This task is accomplished by using the following formula:(24)Zd=Zdnew,ifFitd>Fitdnew,Zdnew,ifFitd=Fitdnew,Zd,else,(25)Fitd=Fitdnew,ifFitd>Fitdnew,Fitdnew,ifFitd=Fitdnew,Fitd,else. ## 3.1.5. 
Termination Condition CEA will stop updating new control variables when the computation iteration reaches the maximum valueN3. In addition, the best solution and its fitness are also reported. ## 3.2. The Proposed MEA The proposed MEA is a modified variant of CEA by using a new technique for updating new control variables. From equation (18), it sees that CEA only chooses search spaces around the four best solutions and the middle solution (i.e., Zb1,Zb2,Zb3,Zb4,Zmid) for updating decision variables whilst from search spaces nearby from the fifth best solution to the worst solution are skipped intentionally. In addition, the strategy has led to the success of CEA with better performance than other metaheuristics. However, CEA cannot reach a higher performance because it is coping with two shortcomings as follows:(1) The first shortcoming is to pick up one out of five solutions in the setZ5b randomly. The search spaces may be repeated more than once and even too many times. Therefore, promising search spaces can be exploited ineffectively or skipped unfortunately.(2) The second shortcoming is to use two update steps includingM×Zb1−Zd and K×M/r21−M, which are decreased when the computation iteration is increased. Especially, the steps become zero at final computation iterations. In fact, parameter A in equation (21) becomes 0 when the current iteration is equal to the maximum iteration N3. If we substitute A = 0 into equation (19), M becomes 0.Thus, the proposed MEA is reformed to eliminate the above drawbacks of CEA and reach better results as solving the OPF problem with the presence of wind energy. The two proposed formulas for updating new decision variables are as follows:(26)Zdnew1=Zd+MZb1−Zd+r7Zr1−Zr2,(27)Zdnew2=Zb1+MZd−Zb1+r8Zr3−Zr4.The two equations above are not applied simultaneously for the same old solutioni. Either equation (26) or equation (27) is used for the dth new solution. 
Zdnew1 in equation (26) is applied to update Zd if Zd has a better fitness than the mean fitness of the population, i.e., Fitd < Fitmean. In the other case, i.e., Fitd ≥ Fitmean, Zdnew2 in equation (27) is applied instead.

## 4. The Application of the Proposed MEA for the OPF Problem

### 4.1. Generation of Initial Population

The problem of placing wind turbines in the transmission power network is solved by using the following decision variables: the active power generation and voltage of the thermal generators (excluding the power generation at the slack node), the generation of the capacitors, the taps of the transformers, and the position and the active and reactive power of the wind turbines. Hence, Zd comprises the following decision variables: PTGi (i = 2, …, NTG); UTGi (i = 1, …, NTG); QComi (i = 1, …, NCom); Tapi (i = 1, …, NT); PWindx (x = 1, …, NW); QWindx (x = 1, …, NW); and LWindx (x = 1, …, NW). The decision variables are initialized within their lower bound Zlow and upper bound Zup as shown in Section 2.

### 4.2. The Calculation of Dependent Variables

Before running the Mathpower program, the control variables of the wind turbines, including active power, reactive power, and location, are collected to calculate the new load values at the placement nodes of the wind turbines. The load data are then updated in the input data of the Mathpower program. Finally, the remaining decision variables are added to the Mathpower program, and the power flow is run to obtain the dependent variables, including PTG1; QTGi (i = 1, …, NTG); ULNt (t = 1, …, NLN); and SBrq (q = 1, …, NBr).

### 4.3. Solution Evaluation

The quality of a solution Zd is evaluated by calculating the fitness function. Total cost and total active power loss are the two single objectives, while the violations of the dependent variables are converted into penalty values [66].

### 4.4. Implementation of MEA for the Problem

In order to reach the optimal solution of the OPF problem with the presence of wind turbines, the implementation of MEA is given in the following steps and summarized in Figure 2.

- Step 1: select the population size N1 and the maximum number of iterations N3.
- Step 2: select and initialize the decision variables of the population as shown in Section 4.1.
- Step 3: collect the variables of the wind turbines and adjust the loads.
- Step 4: run Mathpower to obtain the dependent variables shown in Section 4.2.
- Step 5: evaluate the quality of the obtained solutions as shown in Section 4.3.
- Step 6: select the best solution Zb1 and set Iter = 1.
- Step 7: select four random solutions Zr1, Zr2, Zr3, and Zr4.
- Step 8: calculate the mean fitness of the whole population.
- Step 9: produce new solutions: if Fitd < Fitmean, apply equation (26); otherwise, apply equation (27).
- Step 10: correct the new solutions using equation (23).
- Step 11: collect the variables of the wind turbines and adjust the loads.
- Step 12: run Mathpower to obtain the dependent variables shown in Section 4.2.
- Step 13: evaluate the quality of the obtained solutions as shown in Section 4.3.
- Step 14: select good solutions from the old and new populations using equations (24) and (25).
- Step 15: select the best solution Zb1.
- Step 16: if Iter = N3, stop the search process and print the optimal solution; otherwise, set Iter = Iter + 1 and go back to Step 7.

Figure 2 The flowchart of using MEA for solving the modified OPF problem.

## 5. Numerical Results

In this section, MEA together with four other methods, including FBI, HBO, MSGO, and CEA, is applied for placing WPPs on the IEEE 30-node system with 6 thermal generators, 24 loads, 41 transmission lines, 4 transformers, and 9 shunt capacitors. The single line diagram of the system is shown in Figure 3 [67].
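Before turning to the results, the step-by-step procedure of Section 4.4 can be sketched in Python. This is only a minimal, generic illustration: the Mathpower power flow and the penalty-based fitness of Section 4.3 are replaced by a toy objective, and the update, correction, and selection rules of equations (23)–(27) are stood in for by simple placeholder moves. All names here (`mea`, `fitness`, `correct`, the bounds) are assumptions for illustration, not the authors' code.

```python
import random

def fitness(z):
    # Placeholder for Section 4.3: in the real problem this would run the
    # Mathpower power flow and add penalty terms for violated limits of the
    # dependent variables. Here: a toy sphere function.
    return sum(x * x for x in z)

def correct(z, z_low, z_up):
    # Stand-in for equation (23): clamp each decision variable to its bounds.
    return [min(max(x, lo), up) for x, lo, up in zip(z, z_low, z_up)]

def mea(n1=30, n3=100, dim=4, seed=1):
    rng = random.Random(seed)
    z_low, z_up = [-10.0] * dim, [10.0] * dim
    # Steps 1-2: choose N1, N3 and initialize the population within bounds.
    pop = [[rng.uniform(lo, up) for lo, up in zip(z_low, z_up)]
           for _ in range(n1)]
    fits = [fitness(z) for z in pop]          # Steps 3-5: evaluate
    best = min(pop, key=fitness)              # Step 6: best solution
    for _ in range(n3):                       # Steps 7-16: main loop
        fit_mean = sum(fits) / n1             # Step 8: mean fitness
        new_pop = []
        for z, f in zip(pop, fits):
            r1, r2 = rng.sample(pop, 2)       # Step 7 (two of Zr1..Zr4)
            if f < fit_mean:
                # Stand-in for equation (26): refine a good solution.
                z_new = [x + rng.random() * (b - x) + rng.random() * (a - c)
                         for x, b, a, c in zip(z, best, r1, r2)]
            else:
                # Stand-in for equation (27): bigger jump for a weak solution.
                z_new = [b + rng.random() * (a - c)
                         for b, a, c in zip(best, r1, r2)]
            new_pop.append(correct(z_new, z_low, z_up))  # Step 10
        # Steps 14-15: greedy selection between old and new solutions,
        # standing in for equations (24) and (25).
        for i, z_new in enumerate(new_pop):
            f_new = fitness(z_new)
            if f_new < fits[i]:
                pop[i], fits[i] = z_new, f_new
        best = min(pop, key=fitness)
    return best, fitness(best)

best, best_fit = mea()
```

Because the selection step is greedy, the best fitness is non-increasing over the iterations, which mirrors the convergence curves discussed below.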
Different study cases are carried out as follows:

- Case 1: minimization of generation cost
  - Case 1.1: place one wind power plant at nodes 3 and 30, respectively
  - Case 1.2: place one WPP at one unknown node
  - Case 1.3: place two wind power plants at two nodes
- Case 2: minimization of power loss
  - Case 2.1: place one wind power plant at nodes 3 and 30, respectively
  - Case 2.2: place one WPP at one unknown node
  - Case 2.3: place two wind power plants at two nodes

Figure 3 The IEEE 30-node transmission network.

The five methods are coded in Matlab 2016a and run on a personal computer with a 2.0 GHz processor and 4.0 GB of RAM. For each study case, fifty independent runs are performed for each method, and the collected results are the minimum, mean, maximum, and standard deviation values.

### 5.1. Electricity Generation Cost Reduction

#### 5.1.1. Case 1.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this study case, one wind power plant is placed at node 3 and at node 30, respectively, to compare the effectiveness of the placement position. As shown in [47], node 30 and node 3 are the most effective and the least effective locations for placing renewable energies. The results found by the five applied methods and by JYA for the placement of WPPs at node 3 and node 30 are reported in Tables 2 and 3, respectively. Table 2 shows that the best cost of MEA is $764.33, while that of the other methods ranges from $764.53 (CEA) to $769.963 (JYA); that is, MEA reaches a better cost than the others by $0.2 to $5.633. Table 3 has the same features: MEA reaches the lowest cost of $762.53, while that of the others ranges from $762.62 (CEA) to $768.039 (JYA). In addition, MEA obtains a lower cost than the others by $0.09 to $5.509. The best costs indicate that MEA is the most powerful method among all the applied methods and JYA, while the standard deviation (STD) of MEA is the second lowest, higher only than that of HBO.

Table 2 The results obtained by methods for placing one WPP at node 3.
| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Mean cost ($/h) | 767.23 | 765.76 | 782.38 | 765.95 | 765.94 | — |
| Maximum cost ($/h) | 772.62 | 766.71 | 838.51 | 768.83 | 767.91 | — |
| STD | 1.87 | 0.87 | 15.78 | 1.05 | 1.01 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 40 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Table 3 The results obtained by methods for placing one WPP at node 30.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Mean cost ($/h) | 765.91 | 764.15 | 780.02 | 764.61 | 764.36 | — |
| Maximum cost ($/h) | 769.50 | 765.82 | 880.75 | 766.62 | 767.88 | — |
| STD | 1.61 | 0.68 | 22.19 | 1.28 | 1.23 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

For the effectiveness comparison between node 3 and node 30, it can be concluded that node 30 is the more suitable place for WPPs. In fact, FBI, HBO, MSGO, CEA, MEA, and JYA [47] all reach a lower cost for node 30. The costs of the methods are, respectively, $763.56, $763.24, $765.74, $762.62, $762.53, and $768.039 for node 30, but $764.69, $764.99, $766.85, $764.53, $764.33, and $769.963 for node 3.

Figures 4 and 5 show the best run of the applied methods for placing WPPs at node 3 and node 30, respectively. The curves show that MEA is much faster than the other methods: even its solution at the 70th iteration is much better than that of the others at the final iteration.

Figure 4 The best runs of methods for the wind power plant placement at node 3.

Figure 5 The best runs of methods for the wind power plant placement at node 30.

#### 5.1.2. Case 1.2: Place One WPP at One Unknown Node

In this section, the location of one WPP, together with its active and reactive power, is determined instead of being fixed at node 3 or node 30 as in Case 1.1. Table 4 indicates that MEA reaches a better cost and STD than all the other methods. Table 5 summarizes the results of one WPP for Case 1.1 and Case 1.2. It is recalled that the power factor of the wind power plant is selected from 0.85 to 1.0, while the active power is from 0 to 10 MW.
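These ranges act as the lower and upper bounds within which the WPP decision variables are initialized and, following the correction rule of equation (23), kept feasible. A small sketch of that bound handling is given below; the variable names and the location range are illustrative assumptions, not taken from the paper.

```python
import random

# Assumed bounds for one WPP, following the ranges quoted in the text.
BOUNDS = {
    "p_wind_mw": (0.0, 10.0),     # active power of the WPP
    "power_factor": (0.85, 1.0),  # power factor of the WPP
    "location": (1, 30),          # candidate node in the 30-node system (assumed range)
}

def init_wpp(rng):
    """Initialize one WPP's decision variables inside their bounds."""
    return {
        "p_wind_mw": rng.uniform(*BOUNDS["p_wind_mw"]),
        "power_factor": rng.uniform(*BOUNDS["power_factor"]),
        "location": rng.randint(*BOUNDS["location"]),
    }

def correct_wpp(wpp):
    """Clamp a candidate WPP solution back into its bounds (cf. equation (23))."""
    fixed = dict(wpp)
    fixed["p_wind_mw"] = min(max(wpp["p_wind_mw"], 0.0), 10.0)
    fixed["power_factor"] = min(max(wpp["power_factor"], 0.85), 1.0)
    fixed["location"] = min(max(int(round(wpp["location"])), 1), 30)
    return fixed

sample = init_wpp(random.Random(0))
wpp = correct_wpp({"p_wind_mw": 12.3, "power_factor": 0.8, "location": 31.6})
# wpp: p_wind_mw -> 10.0, power_factor -> 0.85, location -> 30
```

The correction step also rounds the location to an integer node index, since only integer node numbers are meaningful placement positions.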
For Case 1.2, HBO, CEA, and MEA find the same location (node 30), while FBI and MSGO find node 19 and node 26, respectively. Thus, the costs of FBI and MSGO are the worst, while the other methods obtain better costs. This conclusion is important because it confirms that the placement location of a renewable energy source in the transmission power network has a high impact on effective operation. Figure 6 shows the best run of the applied methods, and it also leads to the conclusion that MEA is the fastest among these methods.

Table 4 The results obtained by five implemented methods for Case 1.2.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 |
| Mean cost ($/h) | 767.96 | 764.32 | 782.08 | 764.59 | 763.54 |
| Maximum cost ($/h) | 786.95 | 766.45 | 869.63 | 767.68 | 766.17 |
| STD | 4.45 | 0.96 | 18.27 | 1.29 | 0.94 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |

Table 5 The optimal solutions obtained by methods for Case 1.1 and Case 1.2.

| Case | Item | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1.1 (place one WPP at node 3) | Location of WPP | 3 | 3 | 3 | 3 | 3 | 3 |
| | Generation of WPP (MW) | 9.99 | 10.00 | 9.99 | 10.00 | 10.00 | 9.1169 |
| | Power factor of WPP | 0.97 | 0.89 | 0.88 | 0.85 | 0.89 | 0.85 |
| | Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Case 1.1 (place one WPP at node 30) | Location of WPP | 30 | 30 | 30 | 30 | 30 | 30 |
| | Generation of WPP (MW) | 9.98 | 10.00 | 10.00 | 10.00 | 10.00 | 9.1478 |
| | Power factor of WPP | 0.88 | 0.90 | 0.99 | 0.99 | 1.00 | 0.85 |
| | Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Case 1.2 (find the location of WPP) | Location of WPP | 19 | 30 | 26 | 30 | 30 | — |
| | Generation of WPP (MW) | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 | — |
| | Power factor of WPP | 0.93 | 0.97 | 0.92 | 0.99 | 0.95 | — |
| | Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 | — |

Figure 6 The best run of applied methods for Case 1.2.

Figure 7 shows the fifty runs obtained by MEA. The black curve shows the fifty sorted cost values, while the blue bars show the location of the added WPP. In the figure, the fifty runs are rearranged by sorting the cost from the smallest to the highest, and the location found in each run is reported accordingly.
The locations indicate that node 30 is found many times, while other nodes such as 19 and 5 are also found; however, the costs at nodes 19 and 5 are much higher than that at node 30.

Figure 7 WPP location and cost of 50 rearranged runs obtained by MEA.

#### 5.1.3. Case 1.3: Place Two Wind Power Plants at Two Nodes

In this case, the five methods are applied to minimize the cost for two subcases: Subcase 1.3.1, with two WPPs at node 3 and node 30, and Subcase 1.3.2, with unknown locations of the two WPPs. The results for the two subcases are reported in Tables 6 and 7. MEA reaches the lowest cost for both subcases: $728.15 for Subcase 1.3.1 and $726.77 for Subcase 1.3.2. It can be seen that the locations at nodes 30 and 3 are not as effective as the locations at nodes 30 and 5. In addition to MEA, CEA also finds the locations at nodes 30 and 5 for Subcase 1.3.2, and it reaches the second-best cost behind MEA. FBI, HBO, and MSGO cannot find nodes 30 and 5, and they suffer a higher cost than CEA and MEA.

Table 6 The results obtained by five implemented methods for Subcase 1.3.1.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 728.61 | 728.40 | 728.80 | 728.39 | 728.15 |
| Mean cost ($/h) | 731.27 | 729.73 | 736.72 | 731.28 | 728.74 |
| Maximum cost ($/h) | 738.69 | 731.50 | 762.72 | 765.83 | 730.55 |
| STD | 1.89 | 0.71 | 7.52 | 5.13 | 0.57 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP at node 30 (MW) | 9.9731 | 10 | 9.9589 | 10 | 10 |
| Generation of WPP at node 3 (MW) | 9.9693 | 10 | 9.9999 | 9.9858 | 10 |
| Power factor of WPP at node 30 | 0.92 | 0.9004 | 0.9803 | 0.9048 | 0.8857 |
| Power factor of WPP at node 3 | 0.9782 | 0.9366 | 0.9992 | 0.9838 | 0.947 |

Table 7 The results obtained by five implemented methods for Subcase 1.3.2.
| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 728.20 | 727.41 | 728.81 | 726.84 | 726.77 |
| Mean cost ($/h) | 730.67 | 728.69 | 731.75 | 728.17 | 728.04 |
| Maximum cost ($/h) | 733.77 | 729.89 | 739.08 | 730.59 | 730.35 |
| STD | 1.27 | 0.60 | 2.20 | 0.98 | 0.95 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 24 | 30, 19 | 24, 19 | 30, 5 | 30, 5 |
| Generation of WPPs (MW) | 9.95, 9.9623 | 10, 9.9995 | 10, 9.9982 | 10, 10 | 10, 10 |
| Power factor of WPPs | 0.9604, 0.9349 | 0.9605, 0.9394 | 0.9418, 0.9135 | 0.9342, 0.9458 | 0.9433, 0.8718 |

Figure 8 presents the costs and the locations of the two WPPs obtained over 50 runs. The black curve shows the fifty cost values sorted in ascending order, while the blue and orange bars show the locations of the first WPP and the second WPP. The figure indicates that the best and second-best costs are obtained by placing the WPPs at nodes 30 and 5, while the next six best costs are obtained by placing them at nodes 30 and 19. Other, worse costs are also found at the same nodes 30 and 5 or nodes 30 and 19. In a few runs, the two WPPs are placed at nodes 30 and 24, but their cost is much higher. Clearly, node 30 is the most important location, and node 5 is the next most important location for supplying additional active and reactive power.

Figure 8 WPP location and cost of 50 rearranged runs obtained by MEA for Subcase 1.3.2.

Figures 9 and 10 show the best run of the applied methods for Subcases 1.3.1 and 1.3.2, respectively. Figure 9 shows a clear, outstanding performance of MEA over the other methods from the sixtieth iteration to the last iteration: the cost of MEA at the sixtieth iteration is smaller than that of the other methods at the final iteration. Figure 10 also shows that MEA is much faster than FBI, HBO, and MSGO; its cost is always smaller than theirs from the 30th iteration to the last iteration. CEA shows a faster search than MEA from the first to the 80th iteration, but it is still worse than MEA from the 81st iteration to the last iteration.
Obviously, MEA has a faster search than the others.

Figure 9 The best run obtained by five applied methods for Subcase 1.3.1.

Figure 10 The best run obtained by five applied methods for Subcase 1.3.2.

### 5.2. Active Power Loss Reduction

#### 5.2.1. Case 2.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this section, one WPP is placed at node 3 and at node 30, respectively, for reducing power loss. Tables 8 and 9 show the results obtained from 50 trial runs. The loss of MEA is the best for both placements: 2.79 MW for the placement at node 3 and 2.35 MW for the placement at node 30, while the losses of the other methods range from 2.8 MW to 3.339 MW for node 3 and from 2.37 MW to 2.67504 MW for node 30. On the other hand, all methods achieve a better loss when placing the WPP at node 30; clearly, node 30 needs more supplied power than node 3. Regarding the speed of the search, Figures 11 and 12 indicate that MEA is much more effective than the other methods, since the loss it finds at the 70th iteration is smaller than that of the others at the final iteration.

Table 8 The results obtained by methods when placing one WPP at node 3 for loss reduction.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.91 | 2.87 | 2.91 | 2.80 | 2.79 | 3.3390 |
| Mean power loss (MW) | 3.35 | 3.08 | 4.01 | 3.11 | 3.10 | — |
| Maximum power loss (MW) | 4.68 | 3.30 | 6.87 | 3.40 | 3.36 | — |
| STD | 0.39 | 0.10 | 1.09 | 0.16 | 0.15 | — |
| Generation of WPP (MW) | 9.1505 | 9.8076 | 8.9369 | 9.9559 | 9.9958 | 8.2827 |
| Power factor of WPP | 0.9601 | 0.9989 | 0.974 | 0.9168 | 1 | 0.85 |

Table 9 The results obtained by methods when placing one WPP at node 30 for loss reduction.
| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.51 | 2.47 | 2.43 | 2.37 | 2.35 | 2.67504 |
| Mean power loss (MW) | 2.96 | 2.62 | 3.23 | 2.73 | 2.65 | — |
| Maximum power loss (MW) | 4.50 | 2.85 | 5.69 | 3.18 | 2.97 | — |
| STD | 0.43 | 0.09 | 0.89 | 0.17 | 0.14 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP (MW) | 9.1821 | 9.8257 | 9.7111 | 9.9904 | 9.9853 | 9.95433 |
| Power factor of WPP | 0.9732 | 0.9947 | 0.9905 | 0.9455 | 0.9914 | 0.85 |

Figure 11 The best runs of methods for minimizing loss when placing one WPP at node 3.

Figure 12 The best runs of methods for minimizing loss when placing one WPP at node 30.

#### 5.2.2. Case 2.2: Place One WPP at One Unknown Node

In this section, the five applied methods are implemented to find the location and power generation of one WPP. Table 10 indicates that all methods find the same location (node 30), and MEA is still the most effective method with the lowest loss, even though its advantage over the others is small. The loss of MEA is 2.39 MW, while that of the others ranges from 2.45 MW to 2.7 MW. HBO is still the most stable method with the smallest STD. Figure 13 shows the best run of the five methods. In the figure, MSGO converges prematurely to a local optimum of very low quality, while the other methods keep searching for better solutions. CEA seems to have a better search process than MEA from the 1st iteration to the 90th iteration, but it then settles on a higher loss from the 91st iteration to the last iteration. Figure 14 presents the locations and the losses of the proposed MEA over 50 runs. The losses, rearranged from the lowest to the highest, indicate that node 30 reduces the loss the most, while other nodes such as 5, 7, 19, 24, and 26 are not suitable for reducing loss.

Table 10 The results obtained by methods for placing one WPP at one unknown node.
| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.70 | 2.49 | 2.46 | 2.45 | 2.39 |
| Mean power loss (MW) | 3.17 | 2.68 | 3.54 | 2.82 | 2.74 |
| Maximum power loss (MW) | 4.72 | 3.18 | 6.34 | 3.29 | 3.43 |
| STD | 0.40 | 0.13 | 0.97 | 0.21 | 0.21 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Found position | 30 | 30 | 30 | 30 | 30 |
| Generation of WPP (MW) | 8.8911 | 9.9998 | 9.9065 | 9.9855 | 9.9956 |
| Power factor of WPP | 0.9899 | 0.9503 | 0.9913 | 0.895 | 0.9122 |

Figure 13 The best runs of five applied methods for Case 2.2.

Figure 14 WPP location and loss of 50 rearranged runs obtained by MEA for Case 2.2.

#### 5.2.3. Case 2.3: Place Two WPPs at Two Nodes

In this section, Subcase 2.3.1 places two WPPs at the two predetermined nodes 3 and 30, and Subcase 2.3.2 places two WPPs at two unknown nodes. Tables 11 and 12 show the results for the two subcases.

Table 11 The results obtained by five methods for Subcase 2.3.1.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.37 | 2.43 | 2.32 | 2.28 | 2.26 |
| Mean power loss (MW) | 2.60 | 2.57 | 2.64 | 2.49 | 2.51 |
| Maximum power loss (MW) | 2.97 | 2.85 | 3.30 | 2.85 | 2.89 |
| STD | 0.14 | 0.09 | 0.25 | 0.13 | 0.13 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 3, 30 | 3, 30 | 3, 30 | 3, 30 | 3, 30 |
| Generation of WPPs (MW) | 9.6108, 9.8128 | 7.4719, 9.9936 | 6.8908, 9.9674 | 9.8404, 9.9999 | 9.9922, 9.9299 |
| Power factor of WPPs | 0.9814, 0.9729 | 0.9925, 0.9352 | 0.8752, 0.9841 | 0.9974, 0.9542 | 0.8597, 0.9973 |

Table 12 The results obtained by five methods for Subcase 2.3.2.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.24 | 2.16 | 2.10 | 2.05 | 2.03 |
| Mean power loss (MW) | 2.61 | 2.37 | 2.46 | 2.27 | 2.31 |
| Maximum power loss (MW) | 3.93 | 2.64 | 4.24 | 2.70 | 2.77 |
| STD | 0.28 | 0.13 | 0.37 | 0.15 | 0.14 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 19 | 19, 30 | 30, 19 | 30, 19 | 24, 30 |
| Generation of WPPs (MW) | 9.83, 9.28 | 9.98, 9.97 | 9.70, 8.15 | 9.99, 9.98 | 9.95, 9.99 |
| Power factor of WPPs | 0.98, 0.96 | 1.00, 0.96 | 1.00, 0.92 | 0.94, 0.88 | 0.87, 0.99 |

The two tables reveal that MEA reaches the lowest loss in both subcases: 2.26 MW for Subcase 2.3.1 and 2.03 MW for Subcase 2.3.2. Clearly, placing the WPPs at the most effective node (node 30) and the least effective node (node 3) cannot lead to a very good solution for reducing the total loss.
Meanwhile, the WPP placement at nodes 30 and 24 reduces the loss from 2.26 MW to 2.03 MW, a saving of about 0.23 MW, equivalent to 10.2%. Compared with CEA, MSGO, HBO, and FBI, the proposed MEA saves 0.02, 0.06, 0.17, and 0.11 MW for Subcase 2.3.1, and 0.02, 0.07, 0.13, and 0.21 MW for Subcase 2.3.2, respectively. The mean loss of MEA is also smaller than that of MSGO, HBO, and FBI and only higher than that of CEA; the STD comparison follows the same pattern as the mean loss comparison.

Figures 15 and 16 show the search procedure of the best run obtained by the five applied methods. Figure 15 indicates that, from the 75th iteration to the last iteration, MEA finds better parameters for the wind power plants and the other electrical components than the other methods, so its loss is lower than that of the four remaining methods over that range. Figure 16 shows a better search procedure for MEA, with a lower loss than the other methods from the 55th iteration to the last iteration. The two figures share the feature that the loss of MEA at the 86th iteration is lower than that of CEA at the final iteration. Compared with the three other remaining methods, the loss of MEA at the 67th iteration for Subcase 2.3.1 and at the 56th iteration for Subcase 2.3.2 is lower than that of these methods at the final iteration. Obviously, MEA is very strong for placing two WPPs in the IEEE 30-bus system.

Figure 15 The best runs obtained by methods for Subcase 2.3.1.

Figure 16 The best runs obtained by methods for Subcase 2.3.2.

Figure 17 shows the power loss and the locations of the two WPPs for the fifty runs obtained by MEA. The black curve shows the fifty sorted loss values, while the blue and orange bars show the locations of the first WPP and the second WPP. The bars and the curve show that node 30 is always chosen, while the second location can be node 24, 19, 21, 5, or 4.
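The quoted saving can be checked with a few lines of arithmetic:

```python
# Best losses reported in Tables 11 and 12 for MEA.
loss_fixed_pair = 2.26   # MW, two WPPs at the fixed nodes 3 and 30
loss_found_pair = 2.03   # MW, two WPPs at the found nodes 30 and 24

saving_mw = loss_fixed_pair - loss_found_pair
saving_pct = 100.0 * saving_mw / loss_fixed_pair
print(round(saving_mw, 2), round(saving_pct, 1))  # 0.23 MW, about 10.2%
```
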
The best and second-best losses are obtained at nodes 30 and 24, while the other node pairs reach much higher losses.

Figure 17 Power losses and locations of two WPPs for 50 runs obtained by MEA.

### 5.3. Discussion on the Capability of MEA

In this paper, we considered the placement of WPPs on the IEEE 30-node system. The dimension of the system is not high; it is medium. In fact, among the IEEE standard transmission power systems, such as the IEEE 14-bus, IEEE 30-bus, IEEE 57-bus, and IEEE 118-bus systems, the considered system is not the largest: it has 6 thermal generators, 24 loads, 41 transmission lines, 4 transformers, and 9 shunt capacitors. With this number of power plants, lines, loads, transformers, and capacitors, the IEEE 30-bus system is approximately as large as an area power system in a province. In the placement of WPPs, the control variables of each WPP are its location, active power, and reactive power; hence, there are six control variables for the two placed WPPs, comprising two locations, two rated power values, and two reactive power values. In addition, the other control variables of the optimal power flow problem are 5 active power outputs for the 6 THPs (the slack node excluded), 6 voltage values for the THPs, 4 tap values for the transformers, and 9 reactive power outputs for the shunt capacitors. On the other hand, the dependent variables are 1 active power output of the generator at the slack node, 6 reactive power values of the THPs, 41 current values of the lines, and 24 voltage values of the loads. As a result, the total number of control variables for placing two WPPs in the IEEE 30-bus system is 30, and the total number of dependent variables is 72. In the conventional OPF problem, the control variables have a high impact on the dependent variables, and updating the control variables causes the dependent variables to change.
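The variable counts above can be tallied explicitly; the grouping below simply restates the figures given in the text:

```python
# Control variables for placing two WPPs in the IEEE 30-bus system.
control = {
    "WPP locations": 2,
    "WPP active powers": 2,
    "WPP reactive powers": 2,
    "THP active powers (slack excluded)": 5,
    "THP voltages": 6,
    "transformer taps": 4,
    "capacitor reactive powers": 9,
}

# Dependent variables determined by the power flow solution.
dependent = {
    "slack active power": 1,
    "THP reactive powers": 6,
    "line currents": 41,
    "load voltages": 24,
}

print(sum(control.values()), sum(dependent.values()))  # 30 72
```
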
Furthermore, in the modified OPF problem, updating the location and size of the WPPs also changes control variables such as the voltage and active power of the THPs. Therefore, reaching optimal control parameters in the modified OPF problem becomes more difficult for metaheuristic algorithms. By experiment, MEA could solve the conventional OPF problem for the IEEE 30-node system successfully with a population of 10 and 50 iterations, and it could reach the best solutions with a population of 15 and 75 iterations. However, for the modified problem with the placement of two WPPs, the settings needed for the best performance of MEA were a population of 60 and 100 iterations. Clearly, higher setting values were required for the modified OPF problem.

Regarding the average simulation time for the study cases, Table 13 summarizes the times of all methods. The comparison indicates that MEA has approximately the same computation time as FBI, HBO, MSGO, and CEA, but a shorter time than JYA [47]. The average time of the applied methods is about 30 seconds for the cases placing one WPP and about 53 seconds for the cases placing two WPPs, while the time is about 72 seconds for JYA for the cases placing one WPP. The five algorithms have approximately the same average time because the settings of population and iteration number are the same. The reported time of the proposed method is not too long for a system with 30 nodes, and it seems that MEA could be capable of handling a real power system or a larger-scale power system. Therefore, we tried to apply MEA to larger systems with 57 and 118 nodes. For the conventional OPF problem without the optimal placement of WPPs, MEA could solve the problem successfully. However, for the placement of WPPs in the modified OPF problem on the IEEE 57-node and IEEE 118-node systems, MEA did not succeed in reaching valid solutions.
Therefore, the highest shortcoming of the study is not to reach the successful application of MEA for placing WPPs on large-scale systems with 57 and 118 nodes.Table 13 Average computation time of each run obtained by methods for study cases. MethodFBIHBOMSGOCEAMEAJYA [47]Case 1.1 (place one WPP at node 3)35.2130.6532.8228.1828.45∼72.4Case 1.1 (place one WPP at node 30)28.9326.0632.4527.0727.77∼72.4Case 1.2 (place one WPP at one unknown node)27.9331.9227.3227.6427.76—Subcase 1.3.1 (place two WPPs at nodes 3 and 30)51.0453.3653.5653.3852.57—Subcase 1.3.2 (place two WPPs at two unknown nodes)55.1854.0556.2754.1855.14—Case 2.1 (place one WPP at node 3)30.8227.1427.6531.1529.7∼72.4Case 2.1 (place one WPP at node 30)28.0228.728.3628.2028.04∼72.4Case 2.2 (place one WPP at one unknown node)28.2026.7528.2729.6227.40—Subcase 2.2.1 (place two WPPs at nodes 3 and 30)55.4156.4356.4356.8156.09—Subcase 2.2.2 (place one WPP at one unknown node)54.0454.1459.255.1956.12—It can be stated that CEA and MEA are powerful optimization tools for the IEEE 30-node system, but their capability on other large-scale systems or real systems is limited. The methods may need more effective improvement to overcome the mentioned limitation. ## 5.1. Electricity Generation Cost Reduction ### 5.1.1. Case 1.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively In this study case, one power plant is, respectively, placed at node 3 and node 30 for comparison of the effectiveness of the placement position. As shown in [47], node 30 and node 3 are the most effective and ineffective locations for placing renewable energies. The found results from the five applied and JYA methods for the placement of WPPs at node 3 and node 30 are reported in Tables 2 and 3, respectively. Table 2 shows that the best cost of MEA is $764.33, while that of other methods is from $764.53 (CEA) to $769.963 (JYA). Exactly, MEA reaches better cost than others by from $0.2 to $5.633. 
Table 3 has the same features since MEA reaches the lowest cost of $762.53, while that from others is from $762.62 (CEA) to $768.039 (JYA). In addition, MEA can obtain less cost than others by from $0.09 to $5.509. The best cost indicates that MEA is the most powerful method among all the applied and JYA methods, while the standard deviation (STD) of MEA is the second lowest and it is only higher than that of HBO.Table 2 The results obtained by methods for placing one WPP at node 3. MethodFBIHBOMSGOCEAMEAJYA [47]Minimum cost ($/h)764.69764.99766.85764.53764.33769.963Mean cost ($/h)767.23765.76782.38765.95765.94—Maximum cost ($/h)772.62766.71838.51768.83767.91—STD1.870.8715.781.051.01—N1103015303040N3100100100100100100Table 3 The results obtained by methods for placing one WPP at node 30. MethodFBIHBOMSGOCEAMEAJYA [47]Minimum cost ($/h)763.56763.24765.74762.62762.53768.039Mean cost ($/h)765.91764.15780.02764.61764.36—Maximum cost ($/h)769.50765.82880.75766.62767.88—STD1.610.6822.191.281.23—N1103015303030N3100100100100100100For the effectiveness comparison between node 3 and node 30, it concludes that node 30 is more suitable to place WPPs. In fact, FBA, HBO, MSGO, CEA, MEA, and IAYA [47] can reach less cost for node 30. The cost of the methods is, respectively, $763.56, $763.24, $765.74, $762.62, $762.53, and $768.039 for node 30, but $764.69, $764.99, $766.85, $764.53, $764.33, and $769.963 for node 3.Figures4 and 5 show the best run of the applied methods for placing WPPs at node 3 and node 30, respectively. The curves see that MEA is much faster than the other methods even its solution at the 70th iteration is much better than that of others at the final iteration.Figure 4 The best runs of methods for the wind power plant placement at node 3.Figure 5 The best runs of methods for the wind power plant placement at node 30. ### 5.1.2. 
Case 1.2: Place One WPP at One Unknown Node In this section, the location of one WPP together with active and reactive power are determined instead of fixing the location at node 3 and node 30 similar to Case 1.1. Table4 indicates that MEA can reach better costs and STD than all methods. Table 5 summarizes the results of one WPP for Case 1.1 and Case 1.2. It is recalled that the power factor selection of wind power plant is from 0.85 to 1.0, while the active power is from 0 to 10 MW. For Case 1.2, HBO, CEA, and MEA can find the same location (node 30), while FBI and MSGO can find node 19 and node 26, respectively. Thus, the cost of FBI and MSGO is the worst, while others have better costs. The conclusion is very important to confirm that the renewable energy source placement location in the transmission power network has high impact on the effective operation. Figure 6 shows the best run of applied methods, and it also leads to the conclusion that MEA is the fastest among these methods.Table 4 The results obtained by five implemented methods for Case 1.2. MethodFBIHBOMSGOCEAMEAMinimum cost ($/h)763.96762.72765.22762.89762.52Mean cost ($/h)767.96764.32782.08764.59763.54Maximum cost ($/h)786.95766.45869.63767.68766.17STD4.450.9618.271.290.94N11030153030N3100100100100100Table 5 The optimal solutions obtained by methods for case 1.1 and case 1.2. 
CaseMethodFBIHBOMSGOCEAMEAJYA [47]Case 1.1 (place WPP at node 3)Location of WPP333333Generation of WPP (MW)9.9910.009.9910.0010.009.1169Power factor of WPP0.970.890.880.850.890.85Minimum cost ($/h)764.69764.99766.85764.53764.33769.963Case 1.1 (place one WPP at node 30)Location of WPP303030303030Generation of WPP (MW)9.9810.0010.0010.0010.009.1478Power factor of WPP0.880.900.990.991.000.85Minimum cost ($/h)763.56763.24765.74762.62762.53768.039Case 1.2 (find the location of WPP)Location of WPP1930263030—Generation of WPP (MW)10.0010.0010.0010.0010.00—Power factor of WPP0.930.970.920.990.95—Minimum cost ($/h)763.96762.72765.22762.89762.52—Figure 6 The best run of applied methods for Case 1.2.Figure7 shows the fifty runs obtained by MEA. The black curve shows fifty sorted loss values, while blue bars show the location of the added WPP. In the figure, fifty runs are rearranged by sorting the cost from the smallest to the highest. Accordingly, the location of the runs is also reported. The locations indicate that node 30 is found many times, while other nodes such as 19 and 5 are also found, but the cost of nodes 19 and 5 is much higher than that of node 30.Figure 7 WPP location and cost of 50 rearranged runs obtained by MEA. ### 5.1.3. Case 1.3: Place Two Wind Power Plants at Two Nodes In this case, five methods are applied to minimize the cost for two subcases, Subcase 1.3.1 with two WPPs at node 3 and node 30 and Subcase 1.3.2 with unknown locations of two WPPs. The results for the two cases are reported in Tables6 and 7. MEA can reach the lowest cost for the two subcases, which is $728.15 for Subcase 1.3.1 and $726.77 for Subcase 1.3.2. It can be seen that the locations at nodes 30 and 3 are not as effective as locations at nodes 30 and 5. In addition to MEA, CEA also finds the same locations at nodes 30 and 5 for Subcase 1.3.2, and CEA reaches the second-best cost behind MEA. 
FBI, HBO, and MSGO cannot find the same nodes 30 and 5, and they suffer higher cost than CEA and MEA.Table 6 The results obtained by five implemented methods for Subcase 1.3.1. MethodFBIHBOMSGOCEAMEAMinimum cost ($/h)728.61728.40728.80728.39728.15Mean cost ($/h)731.27729.73736.72731.28728.74Maximum cost ($/h)738.69731.50762.72765.83730.55STD1.890.717.525.130.57N12060306060N3100100100100100Generation of WPP at node 30 (MW)9.9731109.95891010Generation of WPP at node 3 (MW)9.9693109.99999.985810Power factor of WPP at node 300.920.90040.98030.90480.8857Power factor of WPP at node 30.97820.93660.99920.98380.947Table 7 The results obtained by five implemented methods for Subcase 1.3.2. MethodFBIHBOMSGOCEAMEAMinimum cost ($/h)728.20727.41728.81726.84726.77Mean cost ($/h)730.67728.69731.75728.17728.04Maximum cost ($/h)733.77729.89739.08730.59730.35STD1.270.602.200.980.95N12060306060N3100100100100100Locations of WPPs30, 2430, 1924, 1930, 530, 5Generation of WPPs (MW)9.95, 9.962310, 9.999510, 9.998210, 1010, 10Power factor of WPPs0.9604, 0.93490.9605, 0.93940.9418, 0.91350.9342, 0.94580.9433, 0.8718Figure8 presents the cost and the locations of the two WPPs obtained by 50 runs. The black curve shows fifty values of loss sorted in the ascending order, while the blue and orange bars show the location of the first WPP and the second WPP. All the costs are rearranged from the lowest to the highest values. The figure indicates that the best cost and second-best cost are obtained by placing WPPs at nodes 30 and 5,while the next six best costs are obtained by placing WPPs at nodes 30 and 19. Other worse costs are found by placing the same nodes 30 and 5 or nodes 30 and 19. For a few cases, two WPPs are placed at nodes 30 and 24, but their cost is much higher. 
### 5.1.1. Case 1.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this study case, one WPP is placed at node 3 and at node 30, respectively, to compare the effectiveness of the two placement positions. As shown in [47], node 30 and node 3 are the most effective and the least effective locations for placing renewable energy sources. The results obtained by the five applied methods and JYA [47] for placing the WPP at node 3 and at node 30 are reported in Tables 2 and 3, respectively. Table 2 shows that the best cost of MEA is $764.33, while that of the other methods ranges from $764.53 (CEA) to $769.963 (JYA); MEA therefore reaches a cost lower than the others by between $0.20 and $5.633. Table 3 shows the same pattern, since MEA reaches the lowest cost of $762.53, while the others range from $762.62 (CEA) to $768.039 (JYA).
In addition, MEA obtains a cost lower than the others by between $0.09 and $5.509. The best cost indicates that MEA is the most powerful of all the applied methods and JYA, while the standard deviation (STD) of MEA is the second lowest, higher only than that of HBO.

Table 2. The results obtained by methods for placing one WPP at node 3.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Mean cost ($/h) | 767.23 | 765.76 | 782.38 | 765.95 | 765.94 | — |
| Maximum cost ($/h) | 772.62 | 766.71 | 838.51 | 768.83 | 767.91 | — |
| STD | 1.87 | 0.87 | 15.78 | 1.05 | 1.01 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 40 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Table 3. The results obtained by methods for placing one WPP at node 30.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Mean cost ($/h) | 765.91 | 764.15 | 780.02 | 764.61 | 764.36 | — |
| Maximum cost ($/h) | 769.50 | 765.82 | 880.75 | 766.62 | 767.88 | — |
| STD | 1.61 | 0.68 | 22.19 | 1.28 | 1.23 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Comparing node 3 with node 30, we conclude that node 30 is the more suitable location for the WPP. In fact, FBI, HBO, MSGO, CEA, MEA, and JYA [47] all reach a lower cost at node 30: the costs are, respectively, $763.56, $763.24, $765.74, $762.62, $762.53, and $768.039 for node 30, versus $764.69, $764.99, $766.85, $764.53, $764.33, and $769.963 for node 3.

Figures 4 and 5 show the best run of the applied methods for placing the WPP at node 3 and node 30, respectively. The curves show that MEA converges much faster than the other methods: even its solution at the 70th iteration is better than that of the others at the final iteration.

Figure 4. The best runs of methods for the wind power plant placement at node 3.
Figure 5. The best runs of methods for the wind power plant placement at node 30.

### 5.1.2. Case 1.2: Place One WPP at One Unknown Node

In this section, the location of one WPP, together with its active and reactive power, is determined instead of being fixed at node 3 or node 30 as in Case 1.1.
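For Case 1.2, each candidate solution must therefore carry the WPP's node index together with its active power and power factor, which the paper bounds to 0–10 MW and 0.85–1.0. The sketch below shows one hypothetical way to encode and repair such a solution; the names and helper functions are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical decision-variable encoding for Case 1.2: one WPP whose
# node, active power, and power factor are all optimized together.
# Bounds follow the paper: P in [0, 10] MW, power factor in [0.85, 1.0].
CANDIDATE_NODES = list(range(1, 31))  # IEEE 30-bus system nodes
P_MIN, P_MAX = 0.0, 10.0
PF_MIN, PF_MAX = 0.85, 1.0


def random_solution(rng: random.Random) -> dict:
    """Sample one candidate placement uniformly within the bounds."""
    return {
        "node": rng.choice(CANDIDATE_NODES),
        "p_mw": rng.uniform(P_MIN, P_MAX),
        "power_factor": rng.uniform(PF_MIN, PF_MAX),
    }


def clamp(sol: dict) -> dict:
    """Repair a mutated solution back into the feasible box."""
    sol["node"] = min(max(round(sol["node"]), 1), 30)
    sol["p_mw"] = min(max(sol["p_mw"], P_MIN), P_MAX)
    sol["power_factor"] = min(max(sol["power_factor"], PF_MIN), PF_MAX)
    return sol


rng = random.Random(0)
sol = random_solution(rng)
```

A repair step of this kind is one common way metaheuristics keep mutated candidates feasible; the paper does not specify its constraint-handling mechanism, so this is only a sketch.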
Table 4 indicates that MEA reaches a better cost and STD than all the other methods. Table 5 summarizes the one-WPP results for Case 1.1 and Case 1.2. Recall that the power factor of the wind power plant is selected from 0.85 to 1.0, while its active power is from 0 to 10 MW. For Case 1.2, HBO, CEA, and MEA find the same location (node 30), while FBI and MSGO find node 19 and node 26, respectively. Consequently, the costs of FBI and MSGO are the worst, while the other methods achieve better costs. This conclusion is important because it confirms that the placement location of a renewable energy source in a transmission power network has a high impact on effective operation. Figure 6 shows the best run of the applied methods and again indicates that MEA is the fastest among them.

Table 4. The results obtained by five implemented methods for Case 1.2.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 |
| Mean cost ($/h) | 767.96 | 764.32 | 782.08 | 764.59 | 763.54 |
| Maximum cost ($/h) | 786.95 | 766.45 | 869.63 | 767.68 | 766.17 |
| STD | 4.45 | 0.96 | 18.27 | 1.29 | 0.94 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |

Table 5. The optimal solutions obtained by methods for Case 1.1 and Case 1.2.

| Case | Quantity | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1.1 (place WPP at node 3) | Location of WPP | 3 | 3 | 3 | 3 | 3 | 3 |
| | Generation of WPP (MW) | 9.99 | 10.00 | 9.99 | 10.00 | 10.00 | 9.1169 |
| | Power factor of WPP | 0.97 | 0.89 | 0.88 | 0.85 | 0.89 | 0.85 |
| | Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Case 1.1 (place one WPP at node 30) | Location of WPP | 30 | 30 | 30 | 30 | 30 | 30 |
| | Generation of WPP (MW) | 9.98 | 10.00 | 10.00 | 10.00 | 10.00 | 9.1478 |
| | Power factor of WPP | 0.88 | 0.90 | 0.99 | 0.99 | 1.00 | 0.85 |
| | Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Case 1.2 (find the location of WPP) | Location of WPP | 19 | 30 | 26 | 30 | 30 | — |
| | Generation of WPP (MW) | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 | — |
| | Power factor of WPP | 0.93 | 0.97 | 0.92 | 0.99 | 0.95 | — |
| | Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 | — |

Figure 6. The best run of applied methods for Case 1.2.

Figure 7 shows the fifty runs obtained by MEA. The black curve shows the fifty sorted cost values, while the blue bars show the location of the added WPP.
In the figure, the fifty runs are rearranged by sorting the cost from the smallest to the highest, and the location found by each run is reported accordingly. The locations indicate that node 30 is found many times; other nodes such as 19 and 5 are also found, but their costs are much higher than that of node 30.

Figure 7. WPP location and cost of 50 rearranged runs obtained by MEA.

### 5.1.3. Case 1.3: Place Two Wind Power Plants at Two Nodes

In this case, five methods are applied to minimize the cost for two subcases: Subcase 1.3.1 with two WPPs at node 3 and node 30, and Subcase 1.3.2 with unknown locations of the two WPPs. The results for the two subcases are reported in Tables 6 and 7. MEA reaches the lowest cost in both subcases: $728.15 for Subcase 1.3.1 and $726.77 for Subcase 1.3.2. It can be seen that the locations at nodes 30 and 3 are not as effective as the locations at nodes 30 and 5. Besides MEA, CEA also finds the locations at nodes 30 and 5 for Subcase 1.3.2 and reaches the second-best cost behind MEA. FBI, HBO, and MSGO cannot find nodes 30 and 5, and they suffer higher costs than CEA and MEA.

Table 6. The results obtained by five implemented methods for Subcase 1.3.1.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 728.61 | 728.40 | 728.80 | 728.39 | 728.15 |
| Mean cost ($/h) | 731.27 | 729.73 | 736.72 | 731.28 | 728.74 |
| Maximum cost ($/h) | 738.69 | 731.50 | 762.72 | 765.83 | 730.55 |
| STD | 1.89 | 0.71 | 7.52 | 5.13 | 0.57 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP at node 30 (MW) | 9.9731 | 10 | 9.9589 | 10 | 10 |
| Generation of WPP at node 3 (MW) | 9.9693 | 10 | 9.9999 | 9.9858 | 10 |
| Power factor of WPP at node 30 | 0.92 | 0.9004 | 0.9803 | 0.9048 | 0.8857 |
| Power factor of WPP at node 3 | 0.9782 | 0.9366 | 0.9992 | 0.9838 | 0.947 |

Table 7. The results obtained by five implemented methods for Subcase 1.3.2.
| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 728.20 | 727.41 | 728.81 | 726.84 | 726.77 |
| Mean cost ($/h) | 730.67 | 728.69 | 731.75 | 728.17 | 728.04 |
| Maximum cost ($/h) | 733.77 | 729.89 | 739.08 | 730.59 | 730.35 |
| STD | 1.27 | 0.60 | 2.20 | 0.98 | 0.95 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 24 | 30, 19 | 24, 19 | 30, 5 | 30, 5 |
| Generation of WPPs (MW) | 9.95, 9.9623 | 10, 9.9995 | 10, 9.9982 | 10, 10 | 10, 10 |
| Power factor of WPPs | 0.9604, 0.9349 | 0.9605, 0.9394 | 0.9418, 0.9135 | 0.9342, 0.9458 | 0.9433, 0.8718 |

Figure 8 presents the cost and the locations of the two WPPs over the 50 runs. The black curve shows the fifty cost values sorted in ascending order, while the blue and orange bars show the locations of the first and second WPP. The figure indicates that the best and second-best costs are obtained by placing the WPPs at nodes 30 and 5, while the next six best costs are obtained by placing them at nodes 30 and 19. The remaining, higher costs also mostly correspond to nodes 30 and 5 or nodes 30 and 19. In a few runs, the two WPPs are placed at nodes 30 and 24, but their cost is much higher.
Clearly, node 30 is the most important location, and node 5 is the next most important, for supplying additional active and reactive power.

Figure 8. WPP location and cost of 50 rearranged runs obtained by MEA for Subcase 1.3.2.

Figures 9 and 10 show the best run of the applied methods for Subcases 1.3.1 and 1.3.2, respectively. Figure 9 shows a clearly outstanding performance of MEA over the other methods from the sixtieth iteration to the last: the cost of MEA at the sixtieth iteration is already smaller than that of the other methods at the final iteration. Figure 10 likewise shows that MEA is much faster than FBI, HBO, and MSGO, with a smaller cost from the 30th iteration to the last. CEA searches faster than MEA from the first to the 80th iteration but is worse than MEA from the 81st iteration to the last. Overall, MEA searches faster than the other methods.

Figure 9. The best run obtained by five applied methods for Subcase 1.3.1.
Figure 10. The best run obtained by five applied methods for Subcase 1.3.2.

## 5.2. Active Power Loss Reduction

### 5.2.1. Case 2.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this section, one WPP is located at node 3 and at node 30, respectively, for reducing power loss. Tables 8 and 9 show the results obtained from 50 trial runs. The loss of MEA is the best for both placements: 2.79 MW at node 3 and 2.35 MW at node 30, while the losses of the other methods range from 2.8 MW to 3.339 MW at node 3 and from 2.37 MW to 2.67504 MW at node 30. Moreover, all methods obtain a lower loss when the WPP is placed at node 30; clearly, node 30 needs more supplied power than node 3. Regarding search speed, Figures 11 and 12 indicate that MEA is much more effective than the other methods, since the loss it finds at the 70th iteration is smaller than that of the others at the final iteration.

Table 8. The results obtained by methods when placing one WPP at node 3 for loss reduction.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.91 | 2.87 | 2.91 | 2.80 | 2.79 | 3.3390 |
| Mean power loss (MW) | 3.35 | 3.08 | 4.01 | 3.11 | 3.10 | — |
| Maximum power loss (MW) | 4.68 | 3.30 | 6.87 | 3.40 | 3.36 | — |
| STD | 0.39 | 0.10 | 1.09 | 0.16 | 0.15 | — |
| Generation of WPP (MW) | 9.1505 | 9.8076 | 8.9369 | 9.9559 | 9.9958 | 8.2827 |
| Power factor of WPP | 0.9601 | 0.9989 | 0.974 | 0.9168 | 1 | 0.85 |

Table 9. The results obtained by methods when placing one WPP at node 30 for loss reduction.
| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.51 | 2.47 | 2.43 | 2.37 | 2.35 | 2.67504 |
| Mean power loss (MW) | 2.96 | 2.62 | 3.23 | 2.73 | 2.65 | — |
| Maximum power loss (MW) | 4.50 | 2.85 | 5.69 | 3.18 | 2.97 | — |
| STD | 0.43 | 0.09 | 0.89 | 0.17 | 0.14 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP (MW) | 9.1821 | 9.8257 | 9.7111 | 9.9904 | 9.9853 | 9.95433 |
| Power factor of WPP | 0.9732 | 0.9947 | 0.9905 | 0.9455 | 0.9914 | 0.85 |

Figure 11. The best runs of methods for minimizing loss when placing one WPP at node 3.
Figure 12. The best runs of methods for minimizing loss when placing one WPP at node 30.

### 5.2.2. Case 2.2: Place One WPP at One Unknown Node

In this section, the five applied methods are implemented to find the location and power generation of one WPP. Table 10 indicates that all methods find the same location (node 30), but MEA is still the most effective method with the lowest loss, even though its loss is not much smaller than that of the others: 2.39 MW for MEA versus 2.45 MW to 2.7 MW for the other methods. HBO remains the most stable method, with the smallest STD. Figure 13 shows the best run of the five methods: MSGO converges prematurely to a local optimum of very low quality, while the other methods keep searching for better solutions. CEA seems to search better than MEA from the 1st to the 90th iteration, but it must then accept a higher loss from the 91st iteration to the last. Figure 14 presents the location and the loss obtained by the proposed MEA over 50 runs. The losses, rearranged from the lowest to the highest, indicate that node 30 reduces the loss the most, while other nodes such as 5, 7, 19, 24, and 26 are not suitable for reducing loss.

Table 10. The results obtained by methods for placing one WPP at one unknown node.
| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.70 | 2.49 | 2.46 | 2.45 | 2.39 |
| Mean power loss (MW) | 3.17 | 2.68 | 3.54 | 2.82 | 2.74 |
| Maximum power loss (MW) | 4.72 | 3.18 | 6.34 | 3.29 | 3.43 |
| STD | 0.40 | 0.13 | 0.97 | 0.21 | 0.21 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Found position | 30 | 30 | 30 | 30 | 30 |
| Generation of WPP (MW) | 8.8911 | 9.9998 | 9.9065 | 9.9855 | 9.9956 |
| Power factor of WPP | 0.9899 | 0.9503 | 0.9913 | 0.895 | 0.9122 |

Figure 13. The best runs of five applied methods for Case 2.2.
Figure 14. WPP location and loss of 50 rearranged runs obtained by MEA for Case 2.2.

### 5.2.3. Case 2.2: Place Two WPPs at Two Nodes

In this section, Subcase 2.2.1 places two WPPs at the two predetermined nodes 3 and 30, and Subcase 2.2.2 places two WPPs at two unknown nodes. Tables 11 and 12 show the results for the two subcases.

Table 11. The results obtained by five methods for Subcase 2.2.1.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.37 | 2.43 | 2.32 | 2.28 | 2.26 |
| Mean power loss (MW) | 2.60 | 2.57 | 2.64 | 2.49 | 2.51 |
| Maximum power loss (MW) | 2.97 | 2.85 | 3.30 | 2.85 | 2.89 |
| STD | 0.14 | 0.09 | 0.25 | 0.13 | 0.13 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Location of WPPs | 3, 30 | 3, 30 | 3, 30 | 3, 30 | 3, 30 |
| Generation of WPPs (MW) | 9.6108, 9.8128 | 7.4719, 9.9936 | 6.8908, 9.9674 | 9.8404, 9.9999 | 9.9922, 9.9299 |
| Power factor of WPPs | 0.9814, 0.9729 | 0.9925, 0.9352 | 0.8752, 0.9841 | 0.9974, 0.9542 | 0.8597, 0.9973 |

Table 12. The results obtained by five methods for Subcase 2.2.2.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.24 | 2.16 | 2.10 | 2.05 | 2.03 |
| Mean power loss (MW) | 2.61 | 2.37 | 2.46 | 2.27 | 2.31 |
| Maximum power loss (MW) | 3.93 | 2.64 | 4.24 | 2.70 | 2.77 |
| STD | 0.28 | 0.13 | 0.37 | 0.15 | 0.14 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 19 | 19, 30 | 30, 19 | 30, 19 | 24, 30 |
| Generation of WPPs (MW) | 9.83, 9.28 | 9.98, 9.97 | 9.70, 8.15 | 9.99, 9.98 | 9.95, 9.99 |
| Power factor of WPPs | 0.98, 0.96 | 1.00, 0.96 | 1.00, 0.92 | 0.94, 0.88 | 0.87, 0.99 |

The two tables reveal that MEA reaches the lowest loss in both subcases: 2.26 MW for Subcase 2.2.1 and 2.03 MW for Subcase 2.2.2. Clearly, placing the WPPs at the most effective node (node 30) and the least effective node (node 3) cannot lead to a very good reduction of the total loss.
In contrast, placing the WPPs at node 30 and node 24 reduces the loss from 2.26 MW to 2.03 MW, a reduction of about 0.23 MW, equivalent to 10.2%. Compared with CEA, MSGO, HBO, and FBI, the proposed MEA saves 0.02, 0.06, 0.17, and 0.11 MW for Subcase 2.2.1 and 0.02, 0.07, 0.13, and 0.21 MW for Subcase 2.2.2. The mean loss of MEA is also smaller than that of MSGO, HBO, and FBI and higher only than that of CEA; the STD comparison follows the same pattern.

Figures 15 and 16 show the search procedure of the best run obtained by the five applied methods. Figure 15 indicates that MEA finds better parameters for the wind power plants and the other electrical components than the other methods from the 75th iteration to the last, so its loss is smaller than that of the four remaining methods over that range. Figure 16 shows a similarly better search for MEA, with a smaller loss than the other methods from the 55th iteration to the last. In both figures, the loss of MEA at the 86th iteration is smaller than that of CEA at the final iteration; compared with the three remaining methods, the loss of MEA at the 67th iteration for Subcase 2.2.1 and at the 56th iteration for Subcase 2.2.2 is smaller than theirs at the final iteration. Clearly, MEA is very strong for placing two WPPs in the IEEE 30-bus system.

Figure 15. The best runs obtained by methods for Subcase 2.2.1.
Figure 16. The best runs obtained by methods for Subcase 2.2.2.

Figure 17 shows the power loss and the locations of the two WPPs for the fifty runs obtained by MEA. The black curve shows the fifty sorted loss values, while the blue and orange bars show the locations of the first and second WPP. The bars and the curve show that node 30 is always chosen, while the second location can be node 24, 19, 21, 5, or 4.
The best and second-best losses are obtained at nodes 30 and 24, while the other node pairs reach much higher losses.

Figure 17. Power losses and locations of two WPPs for 50 runs obtained by MEA.
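The rearranged-run charts described above (Figures 7, 8, 14, and 17) are built by sorting the per-run objective values in ascending order and carrying each run's selected location(s) along with its value. A minimal sketch of that pairing with hypothetical run data (the numbers below are illustrative, not the paper's):

```python
# Hypothetical per-run results: one objective value and one location pair
# per independent run of the algorithm.
costs = [728.15, 729.40, 728.74, 730.55, 729.01]              # $/h
locations = [(30, 5), (30, 19), (30, 5), (30, 24), (30, 19)]  # per run

# Sort run indices by ascending cost, then reorder both series together
# so the bars (locations) stay aligned with the curve (costs).
order = sorted(range(len(costs)), key=lambda i: costs[i])
sorted_costs = [costs[i] for i in order]
sorted_locations = [locations[i] for i in order]

# The best runs' placements can now be read off the left of the chart.
```

Sorting indices rather than the values themselves is the standard way to keep several parallel series aligned after reordering.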
## 5.3. Discussion on the Capability of MEA

In this paper, we considered the placement of WPPs in the IEEE 30-node system. The dimension of this system is medium rather than high: among the IEEE standard transmission power systems (the 14-bus, 30-bus, 57-bus, and 118-bus systems, among others), it is not the largest. It has 6 thermal generators, 24 loads, 41 transmission lines, 4 transformers, and 9 shunt capacitors; with this number of power plants, lines, loads, transformers, and capacitors, the IEEE 30-bus system is approximately as large as an area power system in a province. For the placement of WPPs, the control variables of the WPPs are their location, active power, and reactive power, so two placed WPPs contribute six control variables: two locations, two rated-power values, and two reactive-power values. The other control variables of the optimal power flow problem are 5 active power outputs for the 6 THPs (the slack generator excluded), 6 voltage values for the THPs, 4 tap values for the transformers, and 9 reactive power outputs for the shunt capacitors. The dependent variables are 1 active power output for the generator at the slack node, 6 reactive power values for the THPs, 41 line currents, and 24 load voltages. As a result, the total number of control variables for placing two WPPs in the IEEE 30-bus system is 30, and the total number of dependent variables is 72. In the conventional OPF problem, the control variables have a high impact on the dependent variables, and updating the control variables changes the dependent variables.
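The variable counts stated above can be tallied directly; the small consistency check below uses grouping labels paraphrased from the text:

```python
# Control variables for placing two WPPs in the IEEE 30-bus system,
# grouped as described in the text.
control = {
    "WPP locations": 2,
    "WPP rated powers": 2,
    "WPP reactive powers": 2,
    "THP active powers (slack excluded)": 5,
    "THP voltages": 6,
    "transformer taps": 4,
    "capacitor reactive powers": 9,
}

# Dependent variables, again grouped as in the text.
dependent = {
    "slack-node active power": 1,
    "THP reactive powers": 6,
    "line currents": 41,
    "load voltages": 24,
}

n_control = sum(control.values())    # expected 30
n_dependent = sum(dependent.values())  # expected 72
```

The sums reproduce the totals quoted in the paragraph: 30 control variables and 72 dependent variables.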
Furthermore, in the modified OPF problem, updating the location and size of the WPPs also changes control variables such as the voltage and active power of the THPs. Therefore, reaching optimal control parameters in the modified OPF problem becomes more difficult for a metaheuristic algorithm. By experiment, MEA could solve the conventional OPF problem for the IEEE 30-node system with a population of 10 and 50 iterations, and it reached its best solutions with a population of 15 and 75 iterations. However, for the modified problem with the placement of two WPPs, the settings that gave the best performance for MEA were a population of 60 and 100 iterations. Clearly, the required settings are higher for the modified OPF problem. Regarding the average simulation time, Table 13 summarizes the time taken by all methods for all study cases. The comparison indicates that MEA has roughly the same computation time as FBI, HBO, MSGO, and CEA but a shorter time than JYA [47]. The average time of the applied methods is about 30 seconds for the cases placing one WPP and about 53 seconds for the cases placing two WPPs, while JYA takes about 72 seconds for the cases placing one WPP. The five algorithms have approximately the same average time because their population and iteration settings are the same. The reported time of the proposed method is not too long for a system with 30 nodes, suggesting that MEA may be capable of handling a real or larger-scale power system. We therefore tried to apply MEA to larger systems with 57 and 118 nodes. For the conventional OPF problem without the optimal placement of WPPs, MEA solved these systems successfully. However, for the placement of WPPs in the modified OPF problem for the IEEE 57-node and IEEE 118-node systems, MEA could not reach valid solutions.
Therefore, the main shortcoming of this study is that MEA could not be applied successfully to placing WPPs in large-scale systems with 57 and 118 nodes.

Table 13. Average computation time (seconds) of each run obtained by methods for the study cases.

| Study case | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Case 1.1 (place one WPP at node 3) | 35.21 | 30.65 | 32.82 | 28.18 | 28.45 | ∼72.4 |
| Case 1.1 (place one WPP at node 30) | 28.93 | 26.06 | 32.45 | 27.07 | 27.77 | ∼72.4 |
| Case 1.2 (place one WPP at one unknown node) | 27.93 | 31.92 | 27.32 | 27.64 | 27.76 | — |
| Subcase 1.3.1 (place two WPPs at nodes 3 and 30) | 51.04 | 53.36 | 53.56 | 53.38 | 52.57 | — |
| Subcase 1.3.2 (place two WPPs at two unknown nodes) | 55.18 | 54.05 | 56.27 | 54.18 | 55.14 | — |
| Case 2.1 (place one WPP at node 3) | 30.82 | 27.14 | 27.65 | 31.15 | 29.7 | ∼72.4 |
| Case 2.1 (place one WPP at node 30) | 28.02 | 28.7 | 28.36 | 28.20 | 28.04 | ∼72.4 |
| Case 2.2 (place one WPP at one unknown node) | 28.20 | 26.75 | 28.27 | 29.62 | 27.40 | — |
| Subcase 2.2.1 (place two WPPs at nodes 3 and 30) | 55.41 | 56.43 | 56.43 | 56.81 | 56.09 | — |
| Subcase 2.2.2 (place two WPPs at two unknown nodes) | 54.04 | 54.14 | 59.2 | 55.19 | 56.12 | — |

It can be stated that CEA and MEA are powerful optimization tools for the IEEE 30-node system, but their capability on larger-scale or real systems is limited, and the methods may need further improvement to overcome this limitation.

## 6. Conclusions

In this paper, a modified OPF (MOPF) problem with the placement of wind power plants in the IEEE 30-bus transmission power network was solved by implementing four conventional metaheuristic algorithms and the proposed MEA. The two single objectives considered were the minimization of total generation cost and the minimization of power loss. Regarding the number of WPPs located in the system, the two cases were one WPP and two WPPs. Regarding the locations of the WPPs, the simple cases adopted the result of the previous study [47], in which buses 30 and 3 were the most effective and the least effective locations. The results indicated that placing one WPP at bus 30 achieves a smaller power loss and a smaller fuel cost than placing it at bus 3.
For the more complicated cases, the paper also investigated the effectiveness of locations by applying MEA and the four other metaheuristic algorithms to determine the locations. As a result, placing one WPP at bus 30 reached the smallest power loss and the smallest total fuel cost. For placing two WPPs, buses 30 and 3 could not yield the smallest fuel cost or the smallest power loss: buses 30 and 5 were the best locations for minimizing fuel cost, while buses 30 and 24 were the best locations for minimizing power loss. Therefore, the main contribution of the study to the electrical field is the determination of the best locations for the best power loss and the best total cost.

For placing one WPP, the fuel costs of MEA were the smallest, $764.33 and $762.53 for the locations at node 3 and node 30, whereas the highest reported costs were $769.963 and $768.039, respectively. For placing two WPPs at the two found locations, MEA reached a cost of $726.77, while the worst cost among the other methods was $728.81. The power losses of MEA were also reduced significantly compared with the others: for placing one WPP at node 3 and node 30, MEA reached 2.79 MW and 2.35 MW, while the largest reported losses were 3.339 MW and 2.67504 MW, respectively. For placing two WPPs at the two found locations, the best loss of 2.03 MW was found by MEA, and the worst loss among the other methods was 2.24 MW. In summary, the proposed MEA attained a cost lower than the others by 0.28% to 0.73% and a power loss lower than the others by 9.38% to 16.44%; these improvement levels are significant. However, for larger-scale systems, MEA could not succeed in determining the best location and size for the WPPs. Thus, in future work, we will seek to improve MEA for larger and real systems.
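The summary percentages can be checked against the reported figures. The pairings below are our reading of which table entries the 0.28–0.73% cost range and the 9.38–16.44% loss range compare; this is a consistency check, not part of the original results:

```python
def saving_pct(worst: float, best: float) -> float:
    """Percentage improvement of `best` relative to `worst`."""
    return 100.0 * (worst - best) / worst

# Cost: two WPPs at found locations (MEA vs worst other), and
# one WPP at node 3 (MEA vs JYA).
cost_min = saving_pct(728.81, 726.77)    # ≈ 0.28 %
cost_max = saving_pct(769.963, 764.33)   # ≈ 0.73 %

# Loss: two WPPs at found locations (MEA vs worst other), and
# one WPP at node 3 (MEA vs JYA).
loss_min = saving_pct(2.24, 2.03)        # ≈ 9.38 %
loss_max = saving_pct(3.339, 2.79)       # ≈ 16.44 %
```

With these pairings, the computed savings match the ranges quoted in the conclusion to two decimal places.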
In addition, we will also consider more renewable energy power plants, such as photovoltaic power plants, together with the uncertainty characteristics of solar radiation and wind speed. These considered complexities will form a problem close to a real power system, in which the contributions of optimization algorithms and renewable energies can be shown clearly.

---
*Source: 1015367-2021-10-12.xml*
1015367-2021-10-12_1015367-2021-10-12.md
107,080
# Optimal Placement of Wind Power Plants in Transmission Power Networks by Applying an Effectively Proposed Metaheuristic Algorithm

**Authors:** Minh Quan Duong; Thang Trung Nguyen; Thuan Thanh Nguyen

**Journal:** Mathematical Problems in Engineering (2021)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1015367
---

## Abstract

In this paper, a modified equilibrium algorithm (MEA) is proposed for optimally determining the position and capacity of wind power plants added to a transmission power network with 30 nodes and for effectively selecting the operation parameters of the other electric components of the network. Two single objectives, generation cost and active power loss, are separately optimized for the cases of placing one wind power plant (WPP) and two wind power plants (WPPs) at predetermined nodes and at unknown nodes. In addition to the proposed MEA, the conventional equilibrium algorithm (CEA), heap-based optimizer (HBO), forensic-based investigation (FBI), and modified social group optimization (MSGO) are also implemented for the cases. Result comparisons indicate that the generation cost and power loss can be reduced effectively, thanks to suitable location selection and appropriate power determination for the WPPs. In addition, the generation cost and loss of the proposed MEA are also less than those of the other compared methods. Thus, it is recommended that WPPs be placed in power systems to reduce cost and loss, and MEA is a powerful method for the placement of wind power plants in power systems.

---

## Body

## 1. Introduction

Solving the optimal power flow (OPF) problem to obtain steady and effective states of power systems is considered the leading priority in power system operation. Specifically, the steady state is represented as a state vector, regarded as a set of variables such as the output of active and reactive power from power plants, the voltage of generators in power plants, the output of reactive power from shunt capacitors, transformers' taps, the voltage of loads, and the operating current of transmission lines [1–3]. Generally, during the whole process of solving the OPF problem to determine the steady state in power system operation, the mentioned variables are separated into control variables and dependent variables [4, 5].
The output of reactive power from power plants (QG), the output of active power of the power plant at the slack node (PGs), the voltage of loads (VL), and the current of lines (Il) are grouped in the dependent variable set [6–10], whereas the other remaining variables, including the tap changers of transformers (TapT), the output of active power from the generators excluding that at the slack node, and the output of reactive power supplied by capacitor banks (QCap), are put in the control variable set [11–15]. These control variables are used as the input of the Matpower program to find the dependent variables. The Matpower program is a calculation tool based on the Newton–Raphson method to deal with power flow. Once the dependent variable set is obtained, it is checked and penalized against previously known upper and lower bounds. Violations of the bounds count against the quality of both the control and dependent variable sets [16–20]. These violations are converted into penalty terms and added to objective functions such as electrical power generation cost (Coste), active power loss (Ploss), polluted emission (Em), and load voltage stability index (IDsl). Recently, the presence of renewable energies has been considered in power systems as the shares of wind power and solar energy in electricity generation grow larger and larger. In that situation, the OPF problem has been modified and has become more complex than ever. The conventional version of the OPF problem considers only thermal power plants (THPs) as the main source [21–24]. In modified versions of the OPF problem, both THPs and renewable energies are power sources.
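The penalty handling described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the variable names, limits, and penalty weight below are assumptions chosen for the example.

```python
# Sketch of the penalty scheme described above: bound violations of
# dependent variables become penalty terms added to the objective.
# All names, limits, and the weight are illustrative assumptions.

def bound_penalty(value, lower, upper):
    """Quadratic penalty for a dependent variable leaving [lower, upper]."""
    if value < lower:
        return (lower - value) ** 2
    if value > upper:
        return (value - upper) ** 2
    return 0.0

def penalized_fitness(objective, dependents, bounds, weight=1e6):
    """Objective plus weighted squared bound violations of all dependents."""
    total = objective
    for name, value in dependents.items():
        lower, upper = bounds[name]
        total += weight * bound_penalty(value, lower, upper)
    return total

# Example: a load voltage of 1.08 p.u. against a [0.95, 1.05] p.u. limit.
fit = penalized_fitness(
    objective=800.0,
    dependents={"V_load": 1.08},
    bounds={"V_load": (0.95, 1.05)},
)
```

A feasible solution keeps `fit` equal to the raw objective; any bound violation inflates it, so the metaheuristic is steered back toward the feasible region.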
The modified OPF problem is outlined in Figure 1, in which the conventional OPF problem is the part of the figure without the variables regarding renewable energies, such as the output of active and reactive power of wind power plants (Pw, Qw), the output of active and reactive power of photovoltaic power plants (PVPs) (Ppv, Qpv), and the locations of WPPs and PVPs (Lw, Lpv). A large number of studies have been proposed to handle the modified OPF problem. These studies can be classified into three main groups. Specifically, the first group solves the OPF problem considering a wind power source injecting both active and reactive power into the grid. The second group assumes that wind energy sources generate active power only. The third group considers both wind and solar energies in the process of solving the OPF problem. The applied methods, test systems, objective functions, placed renewable power plants, and compared methods regarding modified OPF problems are summarized in Table 1. All the studies in the table have focused on the placement of wind and photovoltaic power plants to cut the electricity generation fuel cost of THPs, and the results were mainly compared to base systems without the contribution of the renewable plants. In addition, other research directions for optimal power flow exclude renewable power plants but use reactive power dispatch [50, 51] or VSC (voltage source converter)-based HVDC (high-voltage direct current) links [52, 53]. These studies also achieved cost reductions and improved voltage quality as expected.
If renewable energies are combined with optimal reactive power dispatch, or with these converters, the expected results, such as reduced cost and power loss and enhanced voltage, can be significantly better.

Figure 1: Configuration of the modified OPF problem in the presence of renewable energies.

Table 1: Summary of studies proposed to solve the modified OPF problem considering renewable energies.

| Reference | Method | Applied system | Renewable energy | Compared methods |
|---|---|---|---|---|
| [25] | BFA | 30 nodes | Wind (P, Q) | GA |
| [26] | MBFA | 30 nodes | Wind (P, Q) | ACA |
| [27] | HABC | 30 nodes | Wind (P, Q) | ABA |
| [28] | MCS | 30, 57 nodes | Wind (P, Q) | MPSO |
| [29] | HA | 30 nodes | Wind (P, Q) | PSO |
| [30] | MFO | 30 nodes | Wind, solar (P, Q) | GWA, MVA, IMA |
| [31] | AFAPA | 30, 75 nodes | Wind (P) | APO, BA |
| [32] | KHA | 30, 57 nodes | Wind (P) | ALPSO, DE, RCGA |
| [33] | GSO | 300 nodes | Wind (P) | NSGA-II |
| [34] | MHGSPSO | 30, 57 nodes | Wind (P, Q) | MSA, GWA, WA |
| [35] | BSA | 30 nodes | Wind, solar (P) | — |
| [36] | IMVA | 30 nodes | Wind, solar (P, Q) | PSO, MVA, NSGA-II |
| [37] | PSO | 39 nodes | Wind, solar (P) | PSO variants |
| [38] | NSGA-II | 30, 118 nodes | Wind, solar (P) | — |
| [39] | MFA | 30 nodes | Wind, solar (P, Q) | MDE |
| [40] | GWO | 30, 57 nodes | Wind, solar (P) | GA, PSO, CSA1, MDE, ABA |
| [41] | BWOA, ALO, PSO | 30 nodes | Wind, solar (P, Q) | GSA, MFA, BMA |
| [42] | FPA | IEEE 30-bus | Wind, solar (P) | — |
| [43] | APDE | IEEE 30-bus | Wind, solar (P, Q) | — |
| [44] | HGTPEA | 30 nodes | Wind, solar (P) | — |
| [45] | HGNIPA | 118 nodes | Wind, solar (P, Q) | — |
| [46] | NDSGWA | 30 nodes | Wind, solar (P) | MDE |
| [47] | JYA | 30 nodes | Wind, solar (P) | — |
| [48] | HSQTIICA | 30 nodes | DG (P, Q) | IICA |
| [49] | MJYA | 30, 118 nodes | Wind (P) | MSA, ABA, CSA, GWA, BSA |

In recent years, metaheuristic algorithms have been developed widely and applied successfully to optimization problems in engineering. One of the most well-known is the conventional equilibrium algorithm (CEA) [54], which was introduced in early 2020. The conventional version was demonstrated to be more effective than PSO, GWA, GA, GSA, and SSA on a set of fifty-eight mathematical benchmark functions with different numbers and types of variables.
Over the past year and this year, CEA has been widely applied to different optimization problems such as AC/DC power grids [55], loss reduction of distribution networks [56], component design for vehicles [57], and multidisciplinary design problems [58]. However, the performance of CEA has not always been the most effective among the methods utilized for the same problems. Consequently, it has been indicated that CEA needs more improvements, particularly for large-scale problems [59–62]. Thus, we propose another version of CEA, called the modified equilibrium algorithm (MEA), and also apply four other metaheuristic algorithms to check the performance of MEA. In this paper, the authors solve a modified OPF (MOPF) problem with the placement of wind power plants in an IEEE 30-bus transmission power network. Regarding the number of wind power plants located in the system, the two cases are, respectively, one wind power plant (WPP) and two WPPs. Regarding the locations of the WPPs, the simple cases are taken from the previous study [47], and the more complicated cases determine suitable buses in the system by applying metaheuristic algorithms. It is noted that the study in [47] only considered the placement of one WPP, and it indicated the most suitable bus as bus 30 and the most ineffective bus as bus 3. In this paper, we have employed buses 3 and 30 in two separate cases to check the indication of the study [47]. The results indicated that placing one WPP at bus 30 reaches smaller power loss and smaller fuel cost than at bus 3. In addition, the paper also investigated the effectiveness of locations by applying MEA and four other metaheuristic algorithms to determine the locations. As a result, placing one WPP at bus 30 reached the smallest power loss and the smallest total fuel cost. For the case of placing two WPPs, buses 30 and 3 could not result in the smallest fuel cost and the smallest power loss.
Buses 30 and 5 are the best locations for the minimization of fuel cost, while buses 30 and 24 are the best locations for the minimization of power loss. Therefore, the main contribution of the study to the electrical field is determining the best locations for the lowest power loss and the lowest total cost. All the study cases explained above are implemented by the proposed MEA and four existing algorithms published in 2020, including the conventional equilibrium algorithm (CEA) [54], heap-based optimizer (HBO) [63], forensic-based investigation (FBI) [64], and modified social group optimization (MSGO) [65]. As a result, the best locations leading to the smallest cost and smallest loss are obtained by MEA. Thus, the applications of the four recent algorithms and the proposed MEA aim to introduce a new algorithm and show their effectiveness to readers in solving the MOPF problem. Readers can evaluate and decide whether the algorithms are suitable for their own optimization problems, which may be in electrical engineering or other fields. The major contributions of the paper are summarized as follows:

(1) Find the best locations for placing wind power plants in the IEEE 30-bus transmission power grid.

(2) The added wind power plants and the other system parameters found by MEA can reach the smallest power loss and smallest total cost.

(3) Introduce four existing algorithms developed in 2020 and the proposed MEA; the performance of these optimization tools is shown to readers for deciding whether to use them in their applications.

(4) Provide MEA, the most effective algorithm among the five applied optimization tools for the MOPF problem.

The organization of the paper is as follows. The two single objectives and the considered constraint set are presented in Section 2. The configuration of CEA for solving a sample optimization problem and then the modified points of MEA are clarified in detail in Section 3.
Section 4 summarizes the computation steps for solving the modified OPF problem by using MEA. Section 5 presents the results obtained by the proposed MEA and the other methods, CEA, HBO, FBI, and MSGO. Finally, conclusions are given in Section 6, stating the achievements of the paper.

## 2. Objective Functions and Constraints of the Modified OPF Problem

### 2.1. Objective Functions

#### 2.1.1. Minimization of Electricity Generation Cost

In this research, the first single objective is considered to be the electricity generation cost of all thermal generators. At generator nodes, where thermal units are working, the cost is the most important factor in the optimal operation of the power network, and it should be reasonably low, as in the following model. The total cost is formulated by

(1) $EGC = \sum_{i=1}^{N_{TG}} F_{ETGi}(P_{TGi})$,

where $F_{ETGi}$ is the fuel cost of the $i$th thermal unit, calculated as follows:

(2) $F_{ETGi}(P_{TGi}) = \mu_{1i} + \mu_{2i} P_{TGi} + \mu_{3i} P_{TGi}^{2}$.

#### 2.1.2. Minimization of Active Power Loss

Minimizing active power loss (APL) is a highly important target in transmission line operation. In general, the active power loss of transmission power networks is very significant due to the high number of transmission lines with high operating current. If the loss can be minimized, the energy loss and the energy loss cost are also reduced accordingly. The loss can be obtained in different ways as follows:

(3) $APL = \sum_{q=1}^{N_{Br}} 3 I_q^2 R_q$, $\quad APL = \sum_{i=1}^{N_{TG}} P_{TGi} - \sum_{i=1}^{N_n} P_{rqi}$, $\quad APL = \sum_{x=1}^{N_n} \sum_{y=1, y \neq x}^{N_n} Y_{xy}\left(U_x^2 + U_y^2 - 2 U_x U_y \cos(\varphi_x - \varphi_y)\right)$.

### 2.2. Constraints

#### 2.2.1. Physical Constraints regarding Thermal Generators

In the operating process of thermal generators, three main constraints need to be supervised strictly: the limitation of real power output, the limitation of reactive power output, and the limitation of the voltage magnitude. Violating any of these limitations will cause substantial damage and an insecure status in the whole system.
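The quadratic generation-cost objective of equations (1) and (2) can be sketched directly in Python. The coefficient and output values below are illustrative assumptions, not data from the IEEE 30-bus system.

```python
# Equations (1) and (2): total electricity generation cost as the sum of
# quadratic fuel-cost functions over all thermal units.
# Coefficients and outputs below are made-up example values.

def fuel_cost(p, mu1, mu2, mu3):
    """Equation (2): fuel cost of one thermal unit at output p (MW)."""
    return mu1 + mu2 * p + mu3 * p ** 2

def total_generation_cost(outputs, coeffs):
    """Equation (1): sum of unit fuel costs over all thermal generators."""
    return sum(fuel_cost(p, *c) for p, c in zip(outputs, coeffs))

# Two hypothetical units: (mu1, mu2, mu3) per unit.
cost = total_generation_cost(
    outputs=[120.0, 80.0],
    coeffs=[(10.0, 2.0, 0.01), (15.0, 1.8, 0.02)],
)
```

In the optimization loop, `cost` (plus constraint penalties) is exactly the fitness value that each candidate solution is scored with.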
Thus, the following constraints should be satisfied at all times:

(4) $P_{TGi}^{min} \le P_{TGi} \le P_{TGi}^{max}$, $\quad Q_{TGi}^{min} \le Q_{TGi} \le Q_{TGi}^{max}$, $\quad U_{TGi}^{min} \le U_{TGi} \le U_{TGi}^{max}$, with $i = 1, \ldots, N_{TG}$.

#### 2.2.2. The Power Balance Constraint

The power balance constraint is the relationship between the source side and the consumption side, in which the sources are TUs and renewable energies, and the consumption side comprises loads and loss on lines. The balance is established when the amount of power supplied by thermal generators equals the amount of power required by loads plus the loss. The active power equation at each node $x$ is formulated as follows:

(5) $P_{TGx} - P_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left( Y_{xy} \cos(\varphi_x - \varphi_y) + X_{xy} \sin(\varphi_x - \varphi_y) \right)$.

For the case that wind turbines supply electricity at node $x$, the balance of active power is as follows:

(6) $P_{TGx} + P_{Windx} - P_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left( Y_{xy} \cos(\varphi_x - \varphi_y) + X_{xy} \sin(\varphi_x - \varphi_y) \right)$,

where $P_{Windx}$ is the power generation of the wind turbines at node $x$, limited by the following constraint:

(7) $P_{Wind}^{min} \le P_{Windx} \le P_{Wind}^{max}$.

Similarly, reactive power is also balanced at node $x$ as in the following model:

(8) $Q_{TGx} + Q_{Comx} - Q_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left( Y_{xy} \sin(\varphi_x - \varphi_y) - X_{xy} \cos(\varphi_x - \varphi_y) \right)$,

where

(9) $Q_{Comx}^{min} \le Q_{Comx} \le Q_{Comx}^{max}$, with $i = 1, \ldots, N_{Com}$.

For the case that wind turbines are placed at node $x$, reactive power is also supplied by the turbines, in the same role as thermal generators. As a result, the reactive power balance is as follows:

(10) $Q_{TGx} + Q_{Windx} + Q_{Comx} - Q_{rqx} = U_x \sum_{y=1}^{N_n} U_y \left( Y_{xy} \sin(\varphi_x - \varphi_y) - X_{xy} \cos(\varphi_x - \varphi_y) \right)$,

where $Q_{Windx}$ is the reactive power generation of the wind turbines at node $x$ and is subject to the following constraint:

(11) $Q_{Wind}^{min} \le Q_{Windx} \le Q_{Wind}^{max}$.

#### 2.2.3. Other Inequality Constraints

These constraints are related to the operating limits of electric components such as lines, loads, and transformers. Lines and loads depend on the operating parameters of other components such as TUs, wind turbines, shunt capacitors, and transformers. However, the operating values of lines and loads are very important for a stable operating status of networks.
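The right-hand side of the active power balance in equation (5) can be sketched with the paper's symbols (U: voltage magnitude, phi: angle, Y and X: the cosine and sine coefficient matrices). The 2-node data below are made up purely for illustration.

```python
import math

# Sketch of equation (5): active power injected at node x,
# P_TGx - P_rqx = U_x * sum_y U_y*(Y_xy*cos(phi_x-phi_y) + X_xy*sin(phi_x-phi_y)).
# The 2-node values below are illustrative, not real network data.

def active_injection(x, U, phi, Y, X):
    """Right-hand side of equation (5) at node x."""
    return U[x] * sum(
        U[y] * (Y[x][y] * math.cos(phi[x] - phi[y])
                + X[x][y] * math.sin(phi[x] - phi[y]))
        for y in range(len(U))
    )

# With equal angles the sine terms vanish and only the Y terms remain.
U = [1.0, 1.0]
phi = [0.0, 0.0]
Y = [[2.0, -1.0], [-1.0, 2.0]]
X = [[0.0, 0.0], [0.0, 0.0]]
p0 = active_injection(0, U, phi, Y, X)
```

In a fitness evaluation, this injection would be compared with the scheduled generation minus demand at node x; any mismatch is a balance violation to be penalized.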
If the components work beyond their allowable ranges, networks operate unstably, and faults can occur subsequently. Thus, the operating parameters of loads and lines must satisfy the following models:

(12) $U_{LN}^{min} \le U_{LNt} \le U_{LN}^{max}$, with $t = 1, \ldots, N_{LN}$, and $S_{Brq} \le S_{Br}^{max}$, with $q = 1, \ldots, N_{Br}$.

In addition, transformers located at some nodes need to be tuned to supply standard voltage within a working range. The voltage regulation is performed by setting the taps of the transformers to satisfy the following constraint:

(13) $Tap^{min} \le Tap_i \le Tap^{max}$, with $i = 1, \ldots, N_T$.

## 3. The Proposed Modified Equilibrium Algorithm (MEA)

### 3.1. Conventional Equilibrium Algorithm (CEA)

CEA was first introduced and applied in 2020 for solving a high number of benchmark functions. The method was superior to popular and well-known metaheuristic algorithms, yet its structure is simple, with one technique for updating new solutions and one technique for keeping the more promising of the new and old solutions. The implementation of CEA for a general optimization problem is mathematically presented as follows.

#### 3.1.1. The Generation of Initial Population

CEA has a set of $N_1$ candidate solutions, similar to other metaheuristic algorithms. The boundaries of the solution set need to be defined in advance, and then the set is produced in the second stage.
The set of solutions is represented by $Z = [Z_d]$, $d = 1, \ldots, N_1$, and the fitness of the solution set is represented by $Fit = [Fit_d]$, $d = 1, \ldots, N_1$. To produce an initial solution set, the control variables included in each solution and their boundaries must be predetermined as follows:

(14) $Z_d = [z_{jd}]$, $j = 1, \ldots, N_2$, $d = 1, \ldots, N_1$; $\quad Z^{low} = [z_j^{low}]$, $j = 1, \ldots, N_2$; $\quad Z^{up} = [z_j^{up}]$, $j = 1, \ldots, N_2$,

where $N_2$ is the number of control variables, $z_{jd}$ is the $j$th variable of the $d$th solution, $Z^{low}$ and $Z^{up}$ are the lower and upper bounds of all solutions, respectively, and $z_j^{low}$ and $z_j^{up}$ are the minimum and maximum values of the $j$th control variable, respectively. The initial solutions are produced within their bounds $Z^{low}$ and $Z^{up}$ as follows:

(15) $Z_d = Z^{low} + r_1 \cdot (Z^{up} - Z^{low})$, $d = 1, \ldots, N_1$.

#### 3.1.2. New Update Technique for Variables

The fitness matrix $Fit$ is sorted to select the four best solutions with the lowest fitness values in the available set. The solution with the lowest fitness is set to $Z_{b1}$, while the second, third, and fourth best solutions are assigned to $Z_{b2}$, $Z_{b3}$, and $Z_{b4}$. In addition, a middle solution $Z_{mid}$ of the four best solutions is produced by

(16) $Z_{mid} = \frac{Z_{b1} + Z_{b2} + Z_{b3} + Z_{b4}}{4}$.

The four best solutions and the middle solution are grouped into the solution set $Z_{5b}$ as follows:

(17) $Z_{5b} = \{Z_{b1}, Z_{b2}, Z_{b3}, Z_{b4}, Z_{mid}\}$.

As a result, the new solution $Z_d^{new}$ of the old solution $Z_d$ is determined as follows:

(18) $Z_d^{new} = Z_{5b}^{rd} + M \cdot \left(Z_d - Z_{5b}^{rd}\right) + \frac{K \cdot M}{r_2}\left(1 - M\right)$.

In the above equation, $Z_{5b}^{rd}$ is a randomly chosen solution among the five solutions of $Z_{5b}$ in equation (17), whereas $M$ and $K$ are calculated by

(19) $M = 2\,\mathrm{sign}(r_3 - 0.5)\left(e^{-A \cdot Iter} - 1\right)$,

(20) $K = K_0 \cdot \left(Z_{5b}^{rd} - r_4 \cdot Z_d\right)$,

(21) $A = \left(1 - \frac{Iter}{N_3}\right)^{Iter/N_3}$,

(22) $K_0 = \begin{cases} 0, & \text{if } r_5 < 0.5, \\ \dfrac{r_6}{2}, & \text{otherwise.} \end{cases}$

#### 3.1.3. New Solution Correction

The new solution $Z_d^{new}$ is a set of new control variables $z_{jd}^{new}$ that can lie beyond the minimum and maximum values of the control variables; that is, $z_{jd}^{new}$ may be either higher than $z_j^{up}$ or smaller than $z_j^{low}$.
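The CEA update of equations (16) through (22) can be sketched as follows. Treating M and K as scalars per solution is a simplifying assumption (the text does not state whether they are drawn per variable), and all numeric inputs are illustrative.

```python
import math
import random

# Sketch of equations (16)-(22): build Z5b from the four best solutions,
# then move an old solution Zd relative to a randomly chosen member of Z5b.
# Scalar M, K0, and per-solution randoms are simplifying assumptions.

def cea_update(z_d, sorted_pop, it, n3, rng=random.Random(0)):
    zb1, zb2, zb3, zb4 = sorted_pop[:4]                       # four best
    z_mid = [(a + b + c + e) / 4
             for a, b, c, e in zip(zb1, zb2, zb3, zb4)]       # eq (16)
    z5b = [zb1, zb2, zb3, zb4, z_mid]                         # eq (17)
    z_rd = rng.choice(z5b)
    r2, r3, r4, r5, r6 = (rng.random() for _ in range(5))
    a = (1 - it / n3) ** (it / n3)                            # eq (21)
    m = 2 * math.copysign(1, r3 - 0.5) * (math.exp(-a * it) - 1)  # eq (19)
    k0 = 0.0 if r5 < 0.5 else r6 / 2                          # eq (22)
    k = [k0 * (zr_j - r4 * zd_j)
         for zr_j, zd_j in zip(z_rd, z_d)]                    # eq (20)
    # eq (18): Zd_new = Z5b_rd + M*(Zd - Z5b_rd) + (K*M/r2)*(1 - M)
    return [zr_j + m * (zd_j - zr_j) + (k_j * m / r2) * (1 - m)
            for zr_j, zd_j, k_j in zip(z_rd, z_d, k)]
```

Note that at the final iteration (it = n3) the exponent A becomes 0, so M becomes 0 and the update collapses onto the chosen member of Z5b, which is exactly the vanishing-step behavior criticized in Section 3.2.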
If one of the two cases happens, each new variable $z_{jd}^{new}$ must be redetermined as follows:

(23) $z_{jd}^{new} = \begin{cases} z_{jd}^{new}, & \text{if } z_j^{low} \le z_{jd}^{new} \le z_j^{up}, \\ z_j^{low}, & \text{if } z_{jd}^{new} < z_j^{low}, \\ z_j^{up}, & \text{if } z_{jd}^{new} > z_j^{up}. \end{cases}$

After correcting the new solutions, the new fitness is calculated and assigned to $Fit_d^{new}$.

#### 3.1.4. Selection of Good Solutions

At this point, there are two solution sets, one old and one new. Therefore, it is important to retain the higher-quality solutions so that the number of retained solutions equals $N_1$. This task is accomplished by using the following formulas:

(24) $Z_d = \begin{cases} Z_d^{new}, & \text{if } Fit_d \ge Fit_d^{new}, \\ Z_d, & \text{else,} \end{cases}$

(25) $Fit_d = \begin{cases} Fit_d^{new}, & \text{if } Fit_d \ge Fit_d^{new}, \\ Fit_d, & \text{else.} \end{cases}$

#### 3.1.5. Termination Condition

CEA stops updating new control variables when the computation iteration reaches the maximum value $N_3$. In addition, the best solution and its fitness are reported.

### 3.2. The Proposed MEA

The proposed MEA is a modified variant of CEA that uses a new technique for updating control variables. From equation (18), it can be seen that CEA chooses search spaces only around the four best solutions and the middle solution (i.e., $Z_{b1}$, $Z_{b2}$, $Z_{b3}$, $Z_{b4}$, $Z_{mid}$) for updating decision variables, whilst the search spaces near the fifth best through the worst solution are intentionally skipped. This strategy has led to the success of CEA, with better performance than other metaheuristics. However, CEA cannot reach a higher performance because it copes with two shortcomings:

(1) The first shortcoming is picking one of the five solutions in the set $Z_{5b}$ randomly. Some search spaces may be revisited many times, so promising search spaces can be exploited ineffectively or skipped unfortunately.

(2) The second shortcoming is the use of the two update steps $M \cdot (Z_{b1} - Z_d)$ and $\frac{K \cdot M}{r_2}(1 - M)$, which decrease as the computation iteration increases. In particular, the steps become zero in the final computation iterations.
In fact, parameter A in equation (21) becomes 0 when the current iteration is equal to the maximum iteration N3. If we substitute A = 0 into equation (19), M becomes 0.Thus, the proposed MEA is reformed to eliminate the above drawbacks of CEA and reach better results as solving the OPF problem with the presence of wind energy. The two proposed formulas for updating new decision variables are as follows:(26)Zdnew1=Zd+MZb1−Zd+r7Zr1−Zr2,(27)Zdnew2=Zb1+MZd−Zb1+r8Zr3−Zr4.The two equations above are not applied simultaneously for the same old solutioni. Either equation (26) or equation (27) is used for the dth new solution. Zdnew1 in equation (26) is applied to update Zd if Zd has better fitness than the medium fitness of the population, i.e., Fitd < Fitmean. For the other case, i.e., Fitd ≥ Fitmean, Zdnew2 in equation (27) is determined. ## 3.1. Conventional Equilibrium Algorithm (CEA) CEA was first introduced and applied in 2020 for solving a high number of benchmark functions. The method was superior to popular and well-known metaheuristic algorithms, but its feature is simple with one technique of newly updating solutions and one technique of keeping promising solutions between new and old solutions.The implementation of CEA for a general optimization problem is mathematically presented as follows. ### 3.1.1. The Generation of Initial Population CEA has a set ofN1 candidate solutions similar to other metaheuristic algorithms. The solution set needs to define the boundaries in advance, and then it must be produced in the second stage. 
The set of solution is represented by Z = [Zd], where d = 1, …, N1, and the fitness function of the solution set is represented by Fit = [Fitd], where d = 1, …, N1.To produce an initial solution set, control variables included in each solution and their boundaries must be, respectively, predetermined as follows:(14)Zd=zjd;j=1,…,N2;d=1,…N1,Zlow=zjlow;j=1,…,N2,Zup=zjup;j=1,…,N2,where N2 is the control variable number, zjd is the jth variable of the dth solution, Zlow and Zup are lower and upper bounds of all solutions, respectively, and zjlow and zjup are the minimum and maximum values of the jth control variable, respectively.The initial solutions are produced within their boundsZlow and Zup as follows:(15)Zd=Zlow+r1⋅Zup−Zlow;d=1,…,N1. ### 3.1.2. New Update Technique for Variables The matrix fitness fit is sorted to select the four best solutions with the lowest fitness values among the available set. The solution with the lowest fitness is set toZb1, while the second, third, and fourth best solutions with the second, third, and fourth lowest fitness functions are assigned to Zb2, Zb3, and Zb4. In addition, another solution, which is called the middle solution (Zmid) of the four best solutions, is also produced by(16)Zmid=Zb1+Zb2+Zb3+Zb44.The four best solutions and the middle solution are grouped into the solution setZ5b as follows:(17)Z5b=Zb1,Zb2,Zb3,Zb4,Zmid.As a result, the new solutionZdnew of the old solution Zd is determined as follows:(18)Zdnew=Z5brd+M⋅Zd−Z5brd+K×Mr21−M.In the above equation,Z5brd is a randomly chosen solution among five solutions of Z5b in equation (17), whereas M and K are calculated by(19)M=2signr3−0.51eA.Iter−1,(20)K=K0⋅Z5brd−r4⋅Zd,(21)A=1−IterN3Iter/N3,(22)K0=0,if&r5<0.5,r62,& otherwise. ### 3.1.3. New Solution Correction The new solutionZdnew is a set of new control variables zjd,,new that can be beyond the minimum and maximum values of control variables. It means zjd,,new may be either higher than zjup or smaller than zjlow. 
If either case happens, each new variable z_j^{d,new} is corrected as

(23) z_j^{d,new} = z_j^{d,new} if z_j^low ≤ z_j^{d,new} ≤ z_j^up; z_j^low if z_j^{d,new} < z_j^low; z_j^up if z_j^{d,new} > z_j^up.

After the new solutions are corrected, their fitness values are calculated and assigned to Fitdnew.

### 3.1.4. Selection of Good Solutions

At this point there are two solution sets, one old and one new. The higher-quality solutions must be retained so that the number of kept solutions remains N1. This task is accomplished by

(24) Zd = Zdnew if Fitdnew ≤ Fitd, and Zd otherwise,
(25) Fitd = Fitdnew if Fitdnew ≤ Fitd, and Fitd otherwise.

### 3.1.5. Termination Condition

CEA stops updating control variables when the computation iteration reaches the maximum value N3. The best solution and its fitness are then reported.

## 3.2. The Proposed MEA

The proposed MEA is a variant of CEA with a new technique for updating control variables. Equation (18) shows that CEA only searches the spaces around the four best solutions and the middle solution (i.e., Zb1, Zb2, Zb3, Zb4, Zmid), while the search spaces near the fifth-best through the worst solutions are intentionally skipped. This strategy has contributed to the success of CEA and its better performance compared with other metaheuristics. However, CEA cannot reach a higher performance because it copes with two shortcomings:

(1) The first shortcoming is the random pick of one out of the five solutions in the set Z5b. The same search space may be selected repeatedly, even many times, so promising search spaces can be exploited ineffectively or skipped altogether.

(2) The second shortcoming lies in the two update steps M × (Zb1 − Zd) and K × (M/r2) × (1 − M), which shrink as the computation iteration increases and become zero at the final iterations. In fact, the parameter A in equation (21) becomes 0 when the current iteration equals the maximum iteration N3, and substituting A = 0 into equation (19) makes M equal to 0.

The proposed MEA is therefore reformed to eliminate the above drawbacks of CEA and reach better results when solving the OPF problem in the presence of wind energy. The two proposed formulas for updating new decision variables are

(26) Zdnew1 = Zd + M · (Zb1 − Zd) + r7 · (Zr1 − Zr2),
(27) Zdnew2 = Zb1 + M · (Zd − Zb1) + r8 · (Zr3 − Zr4).

The two equations are not applied simultaneously to the same old solution: either equation (26) or equation (27) is used for the dth new solution.
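The MEA update in equations (26) and (27), with the mean-fitness branching described in the text, can be sketched as follows. This is an illustrative sketch, not the authors' code; the function and variable names are assumptions.

```python
import random

def mea_update(Z, fit, Zb1, M):
    """Produce one new solution per population member: equation (26) for
    solutions better than the population mean fitness, equation (27)
    otherwise. Zb1 is the best solution; M is the shrinking step factor."""
    fit_mean = sum(fit) / len(fit)
    new_solutions = []
    for d, Zd in enumerate(Z):
        # Four randomly chosen population members Zr1..Zr4
        Zr1, Zr2, Zr3, Zr4 = (random.choice(Z) for _ in range(4))
        if fit[d] < fit_mean:
            r7 = random.random()   # equation (26): search around the solution itself
            new = [zd + M * (zb - zd) + r7 * (za - zc)
                   for zd, zb, za, zc in zip(Zd, Zb1, Zr1, Zr2)]
        else:
            r8 = random.random()   # equation (27): search around the best solution
            new = [zb + M * (zd - zb) + r8 * (za - zc)
                   for zd, zb, za, zc in zip(Zd, Zb1, Zr3, Zr4)]
        new_solutions.append(new)
    return new_solutions
```

The difference terms r7·(Zr1 − Zr2) and r8·(Zr3 − Zr4) do not vanish at the final iterations, which is precisely what removes the second shortcoming of CEA.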
Zdnew1 in equation (26) is applied to update Zd if Zd has better fitness than the mean fitness of the population, i.e., Fitd < Fitmean. In the other case, i.e., Fitd ≥ Fitmean, Zdnew2 in equation (27) is used.

## 4. The Application of the Proposed MEA for the OPF Problem

### 4.1. Generation of Initial Population

The problem of placing wind turbines in the transmission power network is solved by using the following decision variables: the active power generation and voltage of the thermal generators (excluding the power generation at the slack node), the generation of the capacitors, the transformer taps, and the position, active power, and reactive power of the wind turbines. Hence, Zd comprises the following decision variables: PTGi (i = 2, …, NTG); UTGi (i = 1, …, NTG); QComi (i = 1, …, NCom); Tapi (i = 1, …, NT); PWindx (x = 1, …, NW); QWindx (x = 1, …, NW); and LWindx (x = 1, …, NW). The decision variables are initialized within their lower bound Zlow and upper bound Zup as shown in Section 2.

### 4.2. The Calculation of Dependent Variables

Before the Matpower program is run, the control variables of the wind turbines, including active power, reactive power, and location, are collected to calculate the new load values at the wind turbine placements. The load data are then changed accordingly and added to the input data of the Matpower program. Finally, the remaining decision variables are added and the power flow is run to obtain the dependent variables, including PTG1; QTGi (i = 1, …, NTG); ULNt (t = 1, …, NLN); and SBrq (q = 1, …, NBr).

### 4.3. Solution Evaluation

The quality of a solution Zd is evaluated by calculating the fitness function. Total cost and total active power loss are the two single objectives, while violations of the dependent variables are converted into penalty values [66].

### 4.4. Implementation of MEA for the Problem

To reach the optimal solution for the OPF problem in the presence of wind turbines, the implementation of MEA is given in the following steps and summarized in Figure 2.

Step 1: select the population size N1 and the maximum iteration number N3.
Step 2: select and initialize the decision variables for the population as shown in Section 4.1.
Step 3: collect the wind turbine variables and adjust the loads.
Step 4: run Matpower to obtain the dependent variables as shown in Section 4.2.
Step 5: evaluate the quality of the obtained solutions as shown in Section 4.3.
Step 6: select the best solution Zb1 and set Iter = 1.
Step 7: select four random solutions Zr1, Zr2, Zr3, and Zr4.
Step 8: calculate the mean fitness of the whole population.
Step 9: produce new solutions: if Fitd < Fitmean, apply equation (26); otherwise, apply equation (27).
Step 10: correct the new solutions using equation (23).
Step 11: collect the wind turbine variables and adjust the loads.
Step 12: run Matpower to obtain the dependent variables as shown in Section 4.2.
Step 13: evaluate the quality of the obtained solutions as shown in Section 4.3.
Step 14: select the good solutions among the old and new populations using equations (24) and (25).
Step 15: select the best solution Zb1.
Step 16: if Iter = N3, stop the search process and print the optimal solution; otherwise, set Iter to Iter + 1 and go back to Step 7.

Figure 2 The flowchart of using MEA for solving the modified OPF problem.

## 5. Numerical Results

In this section, MEA and four other methods, FBI, HBO, MSGO, and CEA, are applied to place WPPs on the IEEE 30-node system with 6 thermal generators, 24 loads, 41 transmission lines, 4 transformers, and 9 shunt capacitors. The single-line diagram of the system is shown in Figure 3 [67].
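The sixteen steps of Section 4.4 can be condensed into the following runnable sketch. The `fitness` callback stands in for the Matpower power flow plus the penalty-based evaluation of Sections 4.2 and 4.3; the bound handling implements equation (23), and the greedy retention implements equations (24) and (25). All names and the per-iteration step factor M are illustrative assumptions, not the authors' Matlab code.

```python
import math
import random

def mea_optimize(fitness, low, up, N1=30, N3=100, seed=0):
    """Condensed MEA loop (Steps 1-16 of Section 4.4).

    fitness : callable mapping a solution (list of floats) to a scalar
              (stands in for the power flow + penalty evaluation)
    low, up : lower and upper bounds Zlow, Zup for the control variables
    """
    rng = random.Random(seed)
    n = len(low)
    # Steps 1-2: initialize the population within bounds, equation (15)
    Z = [[low[j] + rng.random() * (up[j] - low[j]) for j in range(n)]
         for _ in range(N1)]
    fit = [fitness(z) for z in Z]                     # Steps 3-5
    best = min(range(N1), key=lambda d: fit[d])       # Step 6
    Zb1, fb1 = Z[best][:], fit[best]
    for Iter in range(1, N3 + 1):
        A = (1.0 - Iter / N3) ** (Iter / N3)          # equation (21)
        M = 2.0 * (1 if rng.random() > 0.5 else -1) * (math.exp(-A * Iter) - 1.0)
        fmean = sum(fit) / N1                         # Step 8
        for d in range(N1):
            Zr = [rng.choice(Z) for _ in range(4)]    # Step 7
            r = rng.random()
            if fit[d] < fmean:                        # Step 9: equation (26)
                cand = [Z[d][j] + M * (Zb1[j] - Z[d][j]) + r * (Zr[0][j] - Zr[1][j])
                        for j in range(n)]
            else:                                     # Step 9: equation (27)
                cand = [Zb1[j] + M * (Z[d][j] - Zb1[j]) + r * (Zr[2][j] - Zr[3][j])
                        for j in range(n)]
            # Step 10: clamp to bounds, equation (23)
            cand = [min(max(c, low[j]), up[j]) for j, c in enumerate(cand)]
            fc = fitness(cand)                        # Steps 11-13
            if fc <= fit[d]:                          # Step 14: equations (24)-(25)
                Z[d], fit[d] = cand, fc
                if fc < fb1:                          # Step 15
                    Zb1, fb1 = cand[:], fc
    return Zb1, fb1                                   # Step 16
```

With a simple test objective in place of the power flow, the loop behaves as a standard population-based minimizer.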
Different study cases are carried out as follows:

Case 1: minimization of generation cost
- Case 1.1: place one wind power plant at nodes 3 and 30, respectively
- Case 1.2: place one WPP at one unknown node
- Case 1.3: place two wind power plants at two nodes

Case 2: minimization of power loss
- Case 2.1: place one wind power plant at nodes 3 and 30, respectively
- Case 2.2: place one WPP at one unknown node
- Case 2.3: place two wind power plants at two nodes

Figure 3 The IEEE 30-node transmission network.

The five methods are coded in Matlab 2016a and run on a personal computer with a 2.0 GHz processor and 4.0 GB of RAM. For each study case, fifty independent runs are performed for each method, and the minimum, mean, maximum, and standard deviation values are collected.

### 5.1. Electricity Generation Cost Reduction

#### 5.1.1. Case 1.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this study case, one wind power plant is placed at node 3 and at node 30, respectively, to compare the effectiveness of the placement position. As shown in [47], node 30 and node 3 are the most and least effective locations for placing renewable energies. The results found by the five applied methods and the JYA method for the placement of WPPs at node 3 and node 30 are reported in Tables 2 and 3, respectively. Table 2 shows that the best cost of MEA is $764.33, while that of the other methods ranges from $764.53 (CEA) to $769.963 (JYA); MEA thus reaches a cost better than the others by $0.2 to $5.633. Table 3 shows the same pattern: MEA reaches the lowest cost of $762.53, while the others range from $762.62 (CEA) to $768.039 (JYA), so MEA obtains a cost lower by $0.09 to $5.509. The best costs indicate that MEA is the most powerful method among all the applied methods and JYA, while the standard deviation (STD) of MEA is the second lowest, higher only than that of HBO.

Table 2 The results obtained by methods for placing one WPP at node 3.
| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Mean cost ($/h) | 767.23 | 765.76 | 782.38 | 765.95 | 765.94 | — |
| Maximum cost ($/h) | 772.62 | 766.71 | 838.51 | 768.83 | 767.91 | — |
| STD | 1.87 | 0.87 | 15.78 | 1.05 | 1.01 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 40 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Table 3 The results obtained by methods for placing one WPP at node 30.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Mean cost ($/h) | 765.91 | 764.15 | 780.02 | 764.61 | 764.36 | — |
| Maximum cost ($/h) | 769.50 | 765.82 | 880.75 | 766.62 | 767.88 | — |
| STD | 1.61 | 0.68 | 22.19 | 1.28 | 1.23 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Comparing node 3 and node 30 shows that node 30 is the more suitable node for placing WPPs. In fact, FBI, HBO, MSGO, CEA, MEA, and JYA [47] all reach a lower cost for node 30: $763.56, $763.24, $765.74, $762.62, $762.53, and $768.039, respectively, versus $764.69, $764.99, $766.85, $764.53, $764.33, and $769.963 for node 3.

Figures 4 and 5 show the best run of the applied methods for placing WPPs at node 3 and node 30, respectively. The curves show that MEA converges much faster than the other methods; even its solution at the 70th iteration is better than that of the others at the final iteration.

Figure 4 The best runs of methods for the wind power plant placement at node 3.
Figure 5 The best runs of methods for the wind power plant placement at node 30.

#### 5.1.2. Case 1.2: Place One WPP at One Unknown Node

In this section, the location of one WPP, together with its active and reactive power, is determined instead of being fixed at node 3 or node 30 as in Case 1.1. Table 4 indicates that MEA reaches better costs and a better STD than all the other methods. Table 5 summarizes the results for one WPP in Case 1.1 and Case 1.2. It is recalled that the power factor of the wind power plant is selected from 0.85 to 1.0, while the active power is from 0 to 10 MW.
For Case 1.2, HBO, CEA, and MEA find the same location (node 30), while FBI and MSGO find node 19 and node 26, respectively. Consequently, the costs of FBI and MSGO are the worst, while the other methods obtain better costs. This conclusion is important because it confirms that the placement location of a renewable energy source in the transmission power network has a high impact on effective operation. Figure 6 shows the best run of the applied methods and again indicates that MEA is the fastest among them.

Table 4 The results obtained by five implemented methods for Case 1.2.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 |
| Mean cost ($/h) | 767.96 | 764.32 | 782.08 | 764.59 | 763.54 |
| Maximum cost ($/h) | 786.95 | 766.45 | 869.63 | 767.68 | 766.17 |
| STD | 4.45 | 0.96 | 18.27 | 1.29 | 0.94 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |

Table 5 The optimal solutions obtained by methods for Case 1.1 and Case 1.2.

| Case | Quantity | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1.1 (place one WPP at node 3) | Location of WPP | 3 | 3 | 3 | 3 | 3 | 3 |
| | Generation of WPP (MW) | 9.99 | 10.00 | 9.99 | 10.00 | 10.00 | 9.1169 |
| | Power factor of WPP | 0.97 | 0.89 | 0.88 | 0.85 | 0.89 | 0.85 |
| | Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Case 1.1 (place one WPP at node 30) | Location of WPP | 30 | 30 | 30 | 30 | 30 | 30 |
| | Generation of WPP (MW) | 9.98 | 10.00 | 10.00 | 10.00 | 10.00 | 9.1478 |
| | Power factor of WPP | 0.88 | 0.90 | 0.99 | 0.99 | 1.00 | 0.85 |
| | Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Case 1.2 (find the location of WPP) | Location of WPP | 19 | 30 | 26 | 30 | 30 | — |
| | Generation of WPP (MW) | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 | — |
| | Power factor of WPP | 0.93 | 0.97 | 0.92 | 0.99 | 0.95 | — |
| | Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 | — |

Figure 6 The best run of applied methods for Case 1.2.

Figure 7 shows the fifty runs obtained by MEA. The black curve shows the fifty sorted cost values, while the blue bars show the location of the added WPP. In the figure, the fifty runs are rearranged by sorting the cost from the smallest to the largest, and the location found in each run is reported accordingly.
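The per-run bookkeeping behind Figure 7 (and the statistics rows of the result tables) can be sketched as follows; the sample values are hypothetical, not taken from the reported runs:

```python
import math

def run_statistics(values):
    """Minimum, mean, maximum, and population standard deviation over
    a set of independent runs, as reported in the result tables."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return min(values), mean, max(values), std

def rearrange_runs(runs):
    """Sort (cost, WPP location) pairs by cost, smallest first, as done
    for the bar-and-curve plots of Figures 7, 8, 14, and 17."""
    return sorted(runs, key=lambda run: run[0])

# Hypothetical (cost, location) pairs for four runs
runs = [(763.96, 19), (762.52, 30), (765.22, 26), (762.89, 30)]
ordered = rearrange_runs(runs)
print(ordered[0])  # → (762.52, 30): the best run comes first
```

Whether a sample or population standard deviation was used in the paper is not stated; the population form is shown here as an assumption.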
The locations indicate that node 30 is found many times, while other nodes such as 19 and 5 are also found; however, the costs at nodes 19 and 5 are much higher than that at node 30.

Figure 7 WPP location and cost of 50 rearranged runs obtained by MEA.

#### 5.1.3. Case 1.3: Place Two Wind Power Plants at Two Nodes

In this case, the five methods are applied to minimize the cost for two subcases: Subcase 1.3.1 with two WPPs at node 3 and node 30, and Subcase 1.3.2 with unknown locations for the two WPPs. The results for the two subcases are reported in Tables 6 and 7. MEA reaches the lowest cost in both subcases, $728.15 for Subcase 1.3.1 and $726.77 for Subcase 1.3.2. It can be seen that the locations at nodes 30 and 3 are not as effective as the locations at nodes 30 and 5. Besides MEA, CEA also finds the locations at nodes 30 and 5 for Subcase 1.3.2 and reaches the second-best cost behind MEA. FBI, HBO, and MSGO cannot find nodes 30 and 5, so they suffer higher costs than CEA and MEA.

Table 6 The results obtained by five implemented methods for Subcase 1.3.1.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 728.61 | 728.40 | 728.80 | 728.39 | 728.15 |
| Mean cost ($/h) | 731.27 | 729.73 | 736.72 | 731.28 | 728.74 |
| Maximum cost ($/h) | 738.69 | 731.50 | 762.72 | 765.83 | 730.55 |
| STD | 1.89 | 0.71 | 7.52 | 5.13 | 0.57 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP at node 30 (MW) | 9.9731 | 10 | 9.9589 | 10 | 10 |
| Generation of WPP at node 3 (MW) | 9.9693 | 10 | 9.9999 | 9.9858 | 10 |
| Power factor of WPP at node 30 | 0.92 | 0.9004 | 0.9803 | 0.9048 | 0.8857 |
| Power factor of WPP at node 3 | 0.9782 | 0.9366 | 0.9992 | 0.9838 | 0.947 |

Table 7 The results obtained by five implemented methods for Subcase 1.3.2.
| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum cost ($/h) | 728.20 | 727.41 | 728.81 | 726.84 | 726.77 |
| Mean cost ($/h) | 730.67 | 728.69 | 731.75 | 728.17 | 728.04 |
| Maximum cost ($/h) | 733.77 | 729.89 | 739.08 | 730.59 | 730.35 |
| STD | 1.27 | 0.60 | 2.20 | 0.98 | 0.95 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 24 | 30, 19 | 24, 19 | 30, 5 | 30, 5 |
| Generation of WPPs (MW) | 9.95, 9.9623 | 10, 9.9995 | 10, 9.9982 | 10, 10 | 10, 10 |
| Power factor of WPPs | 0.9604, 0.9349 | 0.9605, 0.9394 | 0.9418, 0.9135 | 0.9342, 0.9458 | 0.9433, 0.8718 |

Figure 8 presents the cost and the locations of the two WPPs over the 50 runs. The black curve shows the fifty cost values sorted in ascending order, while the blue and orange bars show the locations of the first and second WPPs. The figure indicates that the best and second-best costs are obtained by placing the WPPs at nodes 30 and 5, while the next six best costs are obtained by placing them at nodes 30 and 19. Other, worse costs are also found at the same nodes 30 and 5 or nodes 30 and 19, and in a few runs the two WPPs are placed at nodes 30 and 24 with much higher costs. Clearly, node 30 is the most important location, and node 5 is the next most important location for supplying additional active and reactive power.

Figure 8 WPP location and cost of 50 rearranged runs obtained by MEA for Subcase 1.3.2.

Figures 9 and 10 show the best runs of the applied methods for Subcases 1.3.1 and 1.3.2, respectively. Figure 9 shows a clearly outstanding performance of MEA over the other methods from the sixtieth iteration to the last: the cost of MEA at the sixtieth iteration is already smaller than that of the other methods at the final iteration. Figure 10 likewise shows that MEA is much faster than FBI, HBO, and MSGO, with a smaller cost from the 30th iteration onward. CEA searches faster than MEA from the first to the 80th iteration but is worse than MEA from the 81st iteration to the last.
Obviously, MEA has the faster search overall.

Figure 9 The best run obtained by five applied methods for Subcase 1.3.1.
Figure 10 The best run obtained by five applied methods for Subcase 1.3.2.

### 5.2. Active Power Loss Reduction

#### 5.2.1. Case 2.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this section, one WPP is located at node 3 and at node 30, respectively, to reduce power loss. Tables 8 and 9 show the results obtained from 50 trial runs. The loss of MEA is the best for both placements: 2.79 MW at node 3 and 2.35 MW at node 30, while the losses of the other methods range from 2.8 MW to 3.339 MW at node 3 and from 2.37 MW to 2.67504 MW at node 30. Moreover, all the methods achieve a lower loss when the WPP is placed at node 30; clearly, node 30 needs more supplied power than node 3. Regarding search speed, Figures 11 and 12 indicate that MEA is much more effective than the other methods, since its loss at the 70th iteration is smaller than that of the others at the final iteration.

Table 8 The results obtained by methods for placing one WPP at node 3 for loss reduction.

| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.91 | 2.87 | 2.91 | 2.80 | 2.79 | 3.3390 |
| Mean power loss (MW) | 3.35 | 3.08 | 4.01 | 3.11 | 3.10 | — |
| Maximum power loss (MW) | 4.68 | 3.30 | 6.87 | 3.40 | 3.36 | — |
| STD | 0.39 | 0.10 | 1.09 | 0.16 | 0.15 | — |
| Generation of WPP (MW) | 9.1505 | 9.8076 | 8.9369 | 9.9559 | 9.9958 | 8.2827 |
| Power factor of WPP | 0.9601 | 0.9989 | 0.974 | 0.9168 | 1 | 0.85 |

Table 9 The results obtained by methods for placing one WPP at node 30 for loss reduction.
| Method | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.51 | 2.47 | 2.43 | 2.37 | 2.35 | 2.67504 |
| Mean power loss (MW) | 2.96 | 2.62 | 3.23 | 2.73 | 2.65 | — |
| Maximum power loss (MW) | 4.50 | 2.85 | 5.69 | 3.18 | 2.97 | — |
| STD | 0.43 | 0.09 | 0.89 | 0.17 | 0.14 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP (MW) | 9.1821 | 9.8257 | 9.7111 | 9.9904 | 9.9853 | 9.95433 |
| Power factor of WPP | 0.9732 | 0.9947 | 0.9905 | 0.9455 | 0.9914 | 0.85 |

Figure 11 The best runs of methods for minimizing loss as placing one WPP at node 3.
Figure 12 The best runs of methods for minimizing loss as placing one WPP at node 30.

#### 5.2.2. Case 2.2: Place One WPP at One Unknown Node

In this section, the five applied methods are implemented to find the location and power generation of one WPP. Table 10 indicates that all the methods find the same location at node 30, but MEA is still the most effective method with the lowest loss, even though its advantage over the others is small. The loss of MEA is 2.39 MW, while that of the other methods ranges from 2.45 MW to 2.7 MW. HBO remains the most stable method with the smallest STD. Figure 13 shows the best run of the five methods: MSGO converges prematurely to a low-quality local optimum while the other methods are still searching, and CEA seems to search better than MEA from the 1st to the 90th iteration but ends with a higher loss from the 91st iteration to the last. Figure 14 presents the location and loss of the proposed MEA over 50 runs. The losses, rearranged from the lowest to the highest, indicate that node 30 reduces the loss the most, while other nodes such as 5, 7, 19, 24, and 26 are not suitable for reducing loss.

Table 10 The results obtained by methods for placing one WPP at one unknown node.
| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.70 | 2.49 | 2.46 | 2.45 | 2.39 |
| Mean power loss (MW) | 3.17 | 2.68 | 3.54 | 2.82 | 2.74 |
| Maximum power loss (MW) | 4.72 | 3.18 | 6.34 | 3.29 | 3.43 |
| STD | 0.40 | 0.13 | 0.97 | 0.21 | 0.21 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Found position | 30 | 30 | 30 | 30 | 30 |
| Generation of WPP (MW) | 8.8911 | 9.9998 | 9.9065 | 9.9855 | 9.9956 |
| Power factor of WPP | 0.9899 | 0.9503 | 0.9913 | 0.895 | 0.9122 |

Figure 13 The best runs of five applied methods for Subcase 2.2.
Figure 14 WPP location and loss of 50 rearranged runs obtained by MEA for Subcase 2.2.

#### 5.2.3. Case 2.3: Place Two WPPs at Two Nodes

In this section, Subcase 2.2.1 places two WPPs at the two predetermined nodes 3 and 30, while Subcase 2.2.2 places two WPPs at two free nodes. Tables 11 and 12 show the results for the two subcases.

Table 11 The results obtained by five methods for Subcase 2.2.1.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.37 | 2.43 | 2.32 | 2.28 | 2.26 |
| Mean power loss (MW) | 2.60 | 2.57 | 2.64 | 2.49 | 2.51 |
| Maximum power loss (MW) | 2.97 | 2.85 | 3.30 | 2.85 | 2.89 |
| STD | 0.14 | 0.09 | 0.25 | 0.13 | 0.13 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Location of WPPs | 3, 30 | 3, 30 | 3, 30 | 3, 30 | 3, 30 |
| Generation of WPPs (MW) | 9.6108, 9.8128 | 7.4719, 9.9936 | 6.8908, 9.9674 | 9.8404, 9.9999 | 9.9922, 9.9299 |
| Power factor of WPPs | 0.9814, 0.9729 | 0.9925, 0.9352 | 0.8752, 0.9841 | 0.9974, 0.9542 | 0.8597, 0.9973 |

Table 12 The results obtained by five methods for Subcase 2.2.2.

| Method | FBI | HBO | MSGO | CEA | MEA |
| --- | --- | --- | --- | --- | --- |
| Minimum power loss (MW) | 2.24 | 2.16 | 2.10 | 2.05 | 2.03 |
| Mean power loss (MW) | 2.61 | 2.37 | 2.46 | 2.27 | 2.31 |
| Maximum power loss (MW) | 3.93 | 2.64 | 4.24 | 2.70 | 2.77 |
| STD | 0.28 | 0.13 | 0.37 | 0.15 | 0.14 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 19 | 19, 30 | 30, 19 | 30, 19 | 24, 30 |
| Generation of WPPs (MW) | 9.83, 9.28 | 9.98, 9.97 | 9.70, 8.15 | 9.99, 9.98 | 9.95, 9.99 |
| Power factor of WPPs | 0.98, 0.96 | 1.00, 0.96 | 1.00, 0.92 | 0.94, 0.88 | 0.87, 0.99 |

The two tables reveal that MEA reaches the lowest loss in both subcases: 2.26 MW for Subcase 2.2.1 and 2.03 MW for Subcase 2.2.2. Clearly, placing the WPPs at the most effective node (node 30) and the least effective node (node 3) does not lead to a very good solution for reducing the total loss.
In contrast, placing the WPPs at node 30 and node 24 reduces the loss from 2.26 MW to 2.03 MW, a reduction of about 0.23 MW, equivalent to 10.2%. Compared with CEA, MSGO, HBO, and FBI, the proposed MEA saves 0.02, 0.06, 0.17, and 0.11 MW for Subcase 2.2.1 and 0.02, 0.07, 0.13, and 0.21 MW for Subcase 2.2.2, respectively. The mean loss of MEA is also smaller than that of MSGO, HBO, and FBI and higher only than that of CEA; the STD comparison follows the same pattern as the mean loss comparison.

Figures 15 and 16 show the search progress of the best run obtained by the five applied methods. Figure 15 indicates that MEA finds better parameters for the wind power plants and the other electrical components than the other methods from the 75th iteration to the last, so its loss is below that of the four remaining methods over that range. Figure 16 shows a similar picture, with MEA below the other methods from the 55th iteration onward. In both figures, the loss of MEA at the 86th iteration is below that of CEA at the final iteration, and, compared with the three remaining methods, the loss of MEA at the 67th iteration for Subcase 2.2.1 and at the 56th iteration for Subcase 2.2.2 is below theirs at the final iteration. Obviously, MEA is very strong for placing two WPPs in the IEEE 30-bus system.

Figure 15 The best runs obtained by methods for Subcase 2.2.1.
Figure 16 The best runs obtained by methods for Subcase 2.2.2.

Figure 17 shows the power loss and the locations of the two WPPs over the fifty runs obtained by MEA. The black curve shows the fifty sorted loss values, while the blue and orange bars show the locations of the first and second WPPs. The bars and the curve show that node 30 is always chosen, while the second location can be node 24, 19, 21, 5, or 4.
The best and second-best losses are obtained at nodes 30 and 24, while the other node pairs reach much higher losses.

Figure 17 Power losses and locations of two WPPs for 50 runs obtained by MEA.

### 5.3. Discussion on the Capability of MEA

In this paper, we considered the placement of WPPs on the IEEE 30-node system. The dimension of the system is not high but medium: among the IEEE standard transmission systems (the IEEE 14-bus, 30-bus, 57-bus, and 118-bus systems, etc.), the considered system is not the largest. It has 6 thermal generators, 24 loads, 41 transmission lines, 4 transformers, and 9 shunt capacitors; with this number of power plants, lines, loads, transformers, and capacitors, the IEEE 30-bus system is approximately as large as an area power system in a province. When the placement of WPPs is considered, the control variables of each WPP are its location, active power, and reactive power, so two placed WPPs contribute six control variables: two locations, two rated power values, and two reactive power values. The other control variables of the OPF problem are 5 active power outputs for the 6 thermal power plants (THPs), 6 voltage values for the THPs, 4 tap values for the transformers, and 9 reactive power outputs for the shunt capacitors. The dependent variables are 1 active power output for the generator at the slack node, 6 reactive power values for the THPs, 41 line currents, and 24 load voltages. As a result, the total number of control variables for placing two WPPs in the IEEE 30-bus system is 30, and the total number of dependent variables is 72. In the conventional OPF problem, the control variables have a high impact on the dependent variables, and updating the control variables causes the dependent variables to change.
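The variable counts above can be checked with a small tally; the category names are paraphrased from the text:

```python
# Control variables for placing two WPPs in the IEEE 30-bus system
control_counts = {
    "WPP locations": 2,
    "WPP active powers": 2,
    "WPP reactive powers": 2,
    "THP active powers (slack excluded)": 5,
    "THP voltages": 6,
    "transformer taps": 4,
    "shunt capacitor reactive powers": 9,
}
# Dependent variables obtained from the power flow
dependent_counts = {
    "slack-node active power": 1,
    "THP reactive powers": 6,
    "line currents": 41,
    "load voltages": 24,
}
print(sum(control_counts.values()), sum(dependent_counts.values()))  # → 30 72
```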
Furthermore, in the modified OPF problem, updating the location and size of the WPPs also causes changes in control variables such as the voltage and active power of the THPs. Reaching optimal control parameters in the modified OPF problem therefore becomes more difficult for a metaheuristic algorithm. By experiment, MEA could solve the conventional OPF problem for the IEEE 30-node system with a population of 10 and 50 iterations and could reach the best solutions with a population of 15 and 75 iterations. For the modified problem with the placement of two WPPs, however, the settings needed for the best performance of MEA were a population of 60 and 100 iterations: clearly, higher settings are required for the modified OPF problem.

Regarding the average simulation time, Table 13 summarizes the times of all the methods for all the study cases. The comparison indicates that MEA has approximately the same computation time as FBI, HBO, MSGO, and CEA but a shorter time than JYA [47]. The average time of the applied methods is about 30 seconds for the cases placing one WPP and about 53 seconds for the cases placing two WPPs, while JYA takes about 72 seconds for the cases placing one WPP. The five algorithms have approximately the same average time because the population and iteration settings are the same. The reported time of the proposed method is not too long for a system with 30 nodes, and MEA seems capable of handling a real or larger-scale power system. We therefore tried to apply MEA to larger systems with 57 and 118 nodes. MEA could solve the conventional OPF problem (without the optimal placement of WPPs) for these systems successfully; however, for the placement of WPPs in the modified OPF problem on the IEEE 57-node and IEEE 118-node systems, MEA could not reach valid solutions.
Therefore, the main shortcoming of the study is that MEA could not be applied successfully for placing WPPs on large-scale systems with 57 and 118 nodes.

Table 13: Average computation time (seconds) of each run obtained by the methods for the study cases.

| Case | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
|---|---|---|---|---|---|---|
| Case 1.1 (place one WPP at node 3) | 35.21 | 30.65 | 32.82 | 28.18 | 28.45 | ∼72.4 |
| Case 1.1 (place one WPP at node 30) | 28.93 | 26.06 | 32.45 | 27.07 | 27.77 | ∼72.4 |
| Case 1.2 (place one WPP at one unknown node) | 27.93 | 31.92 | 27.32 | 27.64 | 27.76 | — |
| Subcase 1.3.1 (place two WPPs at nodes 3 and 30) | 51.04 | 53.36 | 53.56 | 53.38 | 52.57 | — |
| Subcase 1.3.2 (place two WPPs at two unknown nodes) | 55.18 | 54.05 | 56.27 | 54.18 | 55.14 | — |
| Case 2.1 (place one WPP at node 3) | 30.82 | 27.14 | 27.65 | 31.15 | 29.7 | ∼72.4 |
| Case 2.1 (place one WPP at node 30) | 28.02 | 28.7 | 28.36 | 28.20 | 28.04 | ∼72.4 |
| Case 2.2 (place one WPP at one unknown node) | 28.20 | 26.75 | 28.27 | 29.62 | 27.40 | — |
| Subcase 2.2.1 (place two WPPs at nodes 3 and 30) | 55.41 | 56.43 | 56.43 | 56.81 | 56.09 | — |
| Subcase 2.2.2 (place two WPPs at two unknown nodes) | 54.04 | 54.14 | 59.2 | 55.19 | 56.12 | — |

It can be stated that CEA and MEA are powerful optimization tools for the IEEE 30-node system, but their capability on larger-scale or real systems is limited. The methods may need further improvement to overcome this limitation.

## 5.1. Electricity Generation Cost Reduction

### 5.1.1. Case 1.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this study case, one wind power plant is placed at node 3 and at node 30, respectively, to compare the effectiveness of the placement position. As shown in [47], node 30 and node 3 are the most effective and the least effective locations for placing renewable energies. The results found by the five applied methods and JYA for the placement of a WPP at node 3 and at node 30 are reported in Tables 2 and 3, respectively. Table 2 shows that the best cost of MEA is $764.33, while that of the other methods ranges from $764.53 (CEA) to $769.963 (JYA); thus, MEA reaches a better cost than the others by $0.2 to $5.633.
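The quoted gaps follow directly from the minimum costs reported in Tables 2 and 3; they can be recomputed with a small check (values copied from the tables; the helper function is ours):

```python
# Minimum costs ($/h) for placing one WPP at node 3 and at node 30,
# copied from Tables 2 and 3 of the paper.
node3 = {"FBI": 764.69, "HBO": 764.99, "MSGO": 766.85,
         "CEA": 764.53, "MEA": 764.33, "JYA": 769.963}
node30 = {"FBI": 763.56, "HBO": 763.24, "MSGO": 765.74,
          "CEA": 762.62, "MEA": 762.53, "JYA": 768.039}

def gap_range(costs, winner="MEA"):
    """Smallest and largest saving of the winner over the other methods."""
    gaps = [round(c - costs[winner], 3) for m, c in costs.items() if m != winner]
    return min(gaps), max(gaps)

print(gap_range(node3))   # (0.2, 5.633)
print(gap_range(node30))  # (0.09, 5.509)
```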
Table 3 shows the same pattern: MEA reaches the lowest cost of $762.53, while that of the others ranges from $762.62 (CEA) to $768.039 (JYA), so MEA obtains a lower cost than the others by $0.09 to $5.509. The best cost indicates that MEA is the most powerful method among all the applied methods and JYA, while the standard deviation (STD) of MEA is the second lowest, higher only than that of HBO.

Table 2: The results obtained by the methods for placing one WPP at node 3.

| | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
|---|---|---|---|---|---|---|
| Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Mean cost ($/h) | 767.23 | 765.76 | 782.38 | 765.95 | 765.94 | — |
| Maximum cost ($/h) | 772.62 | 766.71 | 838.51 | 768.83 | 767.91 | — |
| STD | 1.87 | 0.87 | 15.78 | 1.05 | 1.01 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 40 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Table 3: The results obtained by the methods for placing one WPP at node 30.

| | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
|---|---|---|---|---|---|---|
| Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Mean cost ($/h) | 765.91 | 764.15 | 780.02 | 764.61 | 764.36 | — |
| Maximum cost ($/h) | 769.50 | 765.82 | 880.75 | 766.62 | 767.88 | — |
| STD | 1.61 | 0.68 | 22.19 | 1.28 | 1.23 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |

Comparing node 3 with node 30, it can be concluded that node 30 is the more suitable location for placing WPPs. In fact, FBI, HBO, MSGO, CEA, MEA, and JYA [47] all reach a lower cost at node 30: the cost of the methods is, respectively, $763.56, $763.24, $765.74, $762.62, $762.53, and $768.039 for node 30, but $764.69, $764.99, $766.85, $764.53, $764.33, and $769.963 for node 3. Figures 4 and 5 show the best run of the applied methods for placing a WPP at node 3 and at node 30, respectively. The curves show that MEA is much faster than the other methods, since its solution at the 70th iteration is already much better than that of the others at the final iteration.

Figure 4: The best runs of the methods for the wind power plant placement at node 3.

Figure 5: The best runs of the methods for the wind power plant placement at node 30.

### 5.1.2. Case 1.2: Place One WPP at One Unknown Node

In this section, the location of one WPP is determined together with its active and reactive power, instead of fixing the location at node 3 or node 30 as in Case 1.1. Table 4 indicates that MEA reaches a better cost and STD than all the other methods. Table 5 summarizes the results of placing one WPP for Case 1.1 and Case 1.2. It is recalled that the power factor of the wind power plant is selected from 0.85 to 1.0, while the active power is from 0 to 10 MW. For Case 1.2, HBO, CEA, and MEA find the same location (node 30), while FBI and MSGO find node 19 and node 26, respectively; thus, the costs of FBI and MSGO are the worst, while the other methods reach better costs. This result is very important in confirming that the placement location of a renewable energy source in a transmission power network has a high impact on effective operation. Figure 6 shows the best run of the applied methods, and it also leads to the conclusion that MEA is the fastest among these methods.

Table 4: The results obtained by the five implemented methods for Case 1.2.

| | FBI | HBO | MSGO | CEA | MEA |
|---|---|---|---|---|---|
| Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 |
| Mean cost ($/h) | 767.96 | 764.32 | 782.08 | 764.59 | 763.54 |
| Maximum cost ($/h) | 786.95 | 766.45 | 869.63 | 767.68 | 766.17 |
| STD | 4.45 | 0.96 | 18.27 | 1.29 | 0.94 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |

Table 5: The optimal solutions obtained by the methods for Case 1.1 and Case 1.2.
| Case | | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
|---|---|---|---|---|---|---|---|
| Case 1.1 (place WPP at node 3) | Location of WPP | 3 | 3 | 3 | 3 | 3 | 3 |
| | Generation of WPP (MW) | 9.99 | 10.00 | 9.99 | 10.00 | 10.00 | 9.1169 |
| | Power factor of WPP | 0.97 | 0.89 | 0.88 | 0.85 | 0.89 | 0.85 |
| | Minimum cost ($/h) | 764.69 | 764.99 | 766.85 | 764.53 | 764.33 | 769.963 |
| Case 1.1 (place one WPP at node 30) | Location of WPP | 30 | 30 | 30 | 30 | 30 | 30 |
| | Generation of WPP (MW) | 9.98 | 10.00 | 10.00 | 10.00 | 10.00 | 9.1478 |
| | Power factor of WPP | 0.88 | 0.90 | 0.99 | 0.99 | 1.00 | 0.85 |
| | Minimum cost ($/h) | 763.56 | 763.24 | 765.74 | 762.62 | 762.53 | 768.039 |
| Case 1.2 (find the location of WPP) | Location of WPP | 19 | 30 | 26 | 30 | 30 | — |
| | Generation of WPP (MW) | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 | — |
| | Power factor of WPP | 0.93 | 0.97 | 0.92 | 0.99 | 0.95 | — |
| | Minimum cost ($/h) | 763.96 | 762.72 | 765.22 | 762.89 | 762.52 | — |

Figure 6: The best run of the applied methods for Case 1.2.

Figure 7 shows the fifty runs obtained by MEA. The black curve shows the fifty sorted cost values, while the blue bars show the location of the added WPP. In the figure, the fifty runs are rearranged by sorting the cost from the smallest to the highest, and the location of each run is reported accordingly. The locations indicate that node 30 is found many times, while other nodes such as 19 and 5 are also found; however, the costs at nodes 19 and 5 are much higher than that at node 30.

Figure 7: WPP location and cost of the 50 rearranged runs obtained by MEA.

### 5.1.3. Case 1.3: Place Two Wind Power Plants at Two Nodes

In this case, the five methods are applied to minimize the cost for two subcases: Subcase 1.3.1 with two WPPs at nodes 3 and 30, and Subcase 1.3.2 with unknown locations of the two WPPs. The results for the two subcases are reported in Tables 6 and 7. MEA reaches the lowest cost for both subcases: $728.15 for Subcase 1.3.1 and $726.77 for Subcase 1.3.2. It can be seen that the locations at nodes 30 and 3 are not as effective as the locations at nodes 30 and 5. In addition to MEA, CEA also finds the locations at nodes 30 and 5 for Subcase 1.3.2, and CEA reaches the second-best cost behind MEA.
FBI, HBO, and MSGO cannot find the node pair 30 and 5, and they suffer a higher cost than CEA and MEA.

Table 6: The results obtained by the five implemented methods for Subcase 1.3.1.

| | FBI | HBO | MSGO | CEA | MEA |
|---|---|---|---|---|---|
| Minimum cost ($/h) | 728.61 | 728.40 | 728.80 | 728.39 | 728.15 |
| Mean cost ($/h) | 731.27 | 729.73 | 736.72 | 731.28 | 728.74 |
| Maximum cost ($/h) | 738.69 | 731.50 | 762.72 | 765.83 | 730.55 |
| STD | 1.89 | 0.71 | 7.52 | 5.13 | 0.57 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP at node 30 (MW) | 9.9731 | 10 | 9.9589 | 10 | 10 |
| Generation of WPP at node 3 (MW) | 9.9693 | 10 | 9.9999 | 9.9858 | 10 |
| Power factor of WPP at node 30 | 0.92 | 0.9004 | 0.9803 | 0.9048 | 0.8857 |
| Power factor of WPP at node 3 | 0.9782 | 0.9366 | 0.9992 | 0.9838 | 0.947 |

Table 7: The results obtained by the five implemented methods for Subcase 1.3.2.

| | FBI | HBO | MSGO | CEA | MEA |
|---|---|---|---|---|---|
| Minimum cost ($/h) | 728.20 | 727.41 | 728.81 | 726.84 | 726.77 |
| Mean cost ($/h) | 730.67 | 728.69 | 731.75 | 728.17 | 728.04 |
| Maximum cost ($/h) | 733.77 | 729.89 | 739.08 | 730.59 | 730.35 |
| STD | 1.27 | 0.60 | 2.20 | 0.98 | 0.95 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 24 | 30, 19 | 24, 19 | 30, 5 | 30, 5 |
| Generation of WPPs (MW) | 9.95, 9.9623 | 10, 9.9995 | 10, 9.9982 | 10, 10 | 10, 10 |
| Power factor of WPPs | 0.9604, 0.9349 | 0.9605, 0.9394 | 0.9418, 0.9135 | 0.9342, 0.9458 | 0.9433, 0.8718 |

Figure 8 presents the cost and the locations of the two WPPs obtained over 50 runs. The black curve shows the fifty cost values sorted in ascending order, while the blue and orange bars show the locations of the first and second WPPs; all the costs are rearranged from the lowest to the highest. The figure indicates that the best and second-best costs are obtained by placing the WPPs at nodes 30 and 5, while the next six best costs are obtained by placing them at nodes 30 and 19. The remaining, worse costs also correspond to the node pairs 30 and 5 or 30 and 19, and in a few runs the two WPPs are placed at nodes 30 and 24, with a much higher cost.
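For illustration, the WPP decision variables used throughout this section (a bus index, an active power in [0, 10] MW, and a power factor in [0.85, 1.0]) could be decoded from a metaheuristic's continuous genes roughly as follows. The encoding, the candidate-bus list, and the conversion of power factor to reactive power via Q = P·tan(arccos(pf)) are our assumptions for the sketch, not the authors' implementation:

```python
import math
import random

# Hypothetical decoding of one WPP's decision variables from a candidate
# solution, under the bounds stated in the text: the location is a bus index,
# the active power P is in [0, 10] MW, and the power factor pf is in [0.85, 1.0].
CANDIDATE_BUSES = list(range(2, 31))  # buses other than the slack node (assumption)
P_MAX = 10.0                          # MW
PF_MIN, PF_MAX = 0.85, 1.0

def decode_wpp(u1, u2, u3):
    """Map three uniform [0, 1) genes to (bus, P in MW, Q in MVAr)."""
    bus = CANDIDATE_BUSES[int(u1 * len(CANDIDATE_BUSES))]
    p = u2 * P_MAX
    pf = PF_MIN + u3 * (PF_MAX - PF_MIN)
    q = p * math.tan(math.acos(pf))   # reactive power implied by the power factor
    return bus, p, q

random.seed(1)
print(decode_wpp(random.random(), random.random(), random.random()))
```

At unity power factor the decoded reactive power is zero, which matches the intuition that a WPP operating at pf = 1.0 injects active power only.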
Clearly, node 30 is the most important location, and node 5 is the next most important location, for supplying additional active and reactive power.

Figure 8: WPP location and cost of the 50 rearranged runs obtained by MEA for Subcase 1.3.2.

Figures 9 and 10 show the best run of the applied methods for Subcases 1.3.1 and 1.3.2, respectively. Figure 9 shows a clearly outstanding performance of MEA over the other methods from the sixtieth iteration to the last iteration: the cost of MEA at the sixtieth iteration is already smaller than that of the other methods at the final iteration. Figure 10 shows that MEA is much faster than FBI, HBO, and MSGO, since its cost is always smaller than theirs from the 30th iteration to the last iteration. CEA shows a faster search than MEA from the first to the 80th iteration, but it is still worse than MEA from the 81st iteration to the last iteration. Obviously, MEA has a faster search than the others.

Figure 9: The best run obtained by the five applied methods for Subcase 1.3.1.

Figure 10: The best run obtained by the five applied methods for Subcase 1.3.2.

## 5.2. Active Power Loss Reduction

### 5.2.1. Case 2.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively

In this section, one WPP is located at node 3 and at node 30, respectively, to reduce the power loss. Tables 8 and 9 show the results obtained from 50 trial runs. The loss of MEA is the best for both placements: 2.79 MW at node 3 and 2.35 MW at node 30, while the losses of the other methods range from 2.8 MW to 3.339 MW at node 3 and from 2.37 MW to 2.67504 MW at node 30. In addition, all methods obtain a lower loss when placing the WPP at node 30; clearly, node 30 needs more supplied power than node 3. Regarding the speed of search, Figures 11 and 12 indicate that MEA is much more effective than the other methods, since the loss it finds at the 70th iteration is smaller than that of the others at the final iteration.

Table 8: The results obtained by the methods when placing one WPP at node 3 for loss reduction.

| | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
|---|---|---|---|---|---|---|
| Minimum power loss (MW) | 2.91 | 2.87 | 2.91 | 2.80 | 2.79 | 3.3390 |
| Mean power loss (MW) | 3.35 | 3.08 | 4.01 | 3.11 | 3.10 | — |
| Maximum power loss (MW) | 4.68 | 3.30 | 6.87 | 3.40 | 3.36 | — |
| STD | 0.39 | 0.10 | 1.09 | 0.16 | 0.15 | — |
| Generation of WPP (MW) | 9.1505 | 9.8076 | 8.9369 | 9.9559 | 9.9958 | 8.2827 |
| Power factor of WPP | 0.9601 | 0.9989 | 0.974 | 0.9168 | 1 | 0.85 |

Table 9: The results obtained by the methods when placing one WPP at node 30 for loss reduction.
| | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
|---|---|---|---|---|---|---|
| Minimum power loss (MW) | 2.51 | 2.47 | 2.43 | 2.37 | 2.35 | 2.67504 |
| Mean power loss (MW) | 2.96 | 2.62 | 3.23 | 2.73 | 2.65 | — |
| Maximum power loss (MW) | 4.50 | 2.85 | 5.69 | 3.18 | 2.97 | — |
| STD | 0.43 | 0.09 | 0.89 | 0.17 | 0.14 | — |
| N1 | 10 | 30 | 15 | 30 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 | 100 |
| Generation of WPP (MW) | 9.1821 | 9.8257 | 9.7111 | 9.9904 | 9.9853 | 9.95433 |
| Power factor of WPP | 0.9732 | 0.9947 | 0.9905 | 0.9455 | 0.9914 | 0.85 |

Figure 11: The best runs of the methods for minimizing loss when placing one WPP at node 3.

Figure 12: The best runs of the methods for minimizing loss when placing one WPP at node 30.

### 5.2.2. Case 2.2: Place One WPP at One Unknown Node

In this section, the five applied methods are implemented to find the location and power generation of one WPP. Table 10 indicates that all methods find the same location (node 30), but MEA is still the most effective method with the lowest loss, even though its loss is not much smaller than that of the others: the loss of MEA is 2.39 MW, while that of the others ranges from 2.45 MW to 2.7 MW. HBO is still the most stable method, with the smallest STD. Figure 13 shows the best run of the five methods. In the figure, MSGO converges prematurely to a low-quality local optimum, while the other methods continue searching for better solutions. CEA seems to have a better search process than MEA from the 1st iteration to the 90th iteration, but it must then accept a higher loss from the 91st iteration to the last iteration. Figure 14 presents the location and the loss of the proposed MEA for the 50 runs. The losses, rearranged from the lowest to the highest, indicate that node 30 reduces the loss the most, while other nodes such as 5, 7, 19, 24, and 26 are not suitable for reducing loss.

Table 10: The results obtained by the methods for placing one WPP at one unknown node.
| | FBI | HBO | MSGO | CEA | MEA |
|---|---|---|---|---|---|
| Minimum power loss (MW) | 2.70 | 2.49 | 2.46 | 2.45 | 2.39 |
| Mean power loss (MW) | 3.17 | 2.68 | 3.54 | 2.82 | 2.74 |
| Maximum power loss (MW) | 4.72 | 3.18 | 6.34 | 3.29 | 3.43 |
| STD | 0.40 | 0.13 | 0.97 | 0.21 | 0.21 |
| N1 | 10 | 30 | 15 | 30 | 30 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Found position | 30 | 30 | 30 | 30 | 30 |
| Generation of WPP (MW) | 8.8911 | 9.9998 | 9.9065 | 9.9855 | 9.9956 |
| Power factor of WPP | 0.9899 | 0.9503 | 0.9913 | 0.895 | 0.9122 |

Figure 13: The best runs of the five applied methods for Case 2.2.

Figure 14: WPP location and loss of the 50 rearranged runs obtained by MEA for Case 2.2.

### 5.2.3. Case 2.2: Place Two WPPs at Two Nodes

In this section, Subcase 2.2.1 places two WPPs at the two predetermined nodes 3 and 30, and Subcase 2.2.2 places two WPPs at two unknown nodes. Tables 11 and 12 show the results for the two studied subcases.

Table 11: The results obtained by the five methods for Subcase 2.2.1.

| | FBI | HBO | MSGO | CEA | MEA |
|---|---|---|---|---|---|
| Minimum power loss (MW) | 2.37 | 2.43 | 2.32 | 2.28 | 2.26 |
| Mean power loss (MW) | 2.60 | 2.57 | 2.64 | 2.49 | 2.51 |
| Maximum power loss (MW) | 2.97 | 2.85 | 3.30 | 2.85 | 2.89 |
| STD | 0.14 | 0.09 | 0.25 | 0.13 | 0.13 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Location of WPPs | 3, 30 | 3, 30 | 3, 30 | 3, 30 | 3, 30 |
| Generation of WPPs (MW) | 9.6108, 9.8128 | 7.4719, 9.9936 | 6.8908, 9.9674 | 9.8404, 9.9999 | 9.9922, 9.9299 |
| Power factor of WPPs | 0.9814, 0.9729 | 0.9925, 0.9352 | 0.8752, 0.9841 | 0.9974, 0.9542 | 0.8597, 0.9973 |

Table 12: The results obtained by the five methods for Subcase 2.2.2.

| | FBI | HBO | MSGO | CEA | MEA |
|---|---|---|---|---|---|
| Minimum power loss (MW) | 2.24 | 2.16 | 2.10 | 2.05 | 2.03 |
| Mean power loss (MW) | 2.61 | 2.37 | 2.46 | 2.27 | 2.31 |
| Maximum power loss (MW) | 3.93 | 2.64 | 4.24 | 2.70 | 2.77 |
| STD | 0.28 | 0.13 | 0.37 | 0.15 | 0.14 |
| N1 | 20 | 60 | 30 | 60 | 60 |
| N3 | 100 | 100 | 100 | 100 | 100 |
| Locations of WPPs | 30, 19 | 19, 30 | 30, 19 | 30, 19 | 24, 30 |
| Generation of WPPs (MW) | 9.83, 9.28 | 9.98, 9.97 | 9.70, 8.15 | 9.99, 9.98 | 9.95, 9.99 |
| Power factor of WPPs | 0.98, 0.96 | 1.00, 0.96 | 1.00, 0.92 | 0.94, 0.88 | 0.87, 0.99 |

The two tables reveal that MEA reaches the lowest loss for both subcases: 2.26 MW for Subcase 2.2.1 and 2.03 MW for Subcase 2.2.2. Clearly, placing the WPPs at the most effective node (node 30) and the least effective node (node 3) does not lead to a very good solution for reducing the total loss.
In contrast, the WPP placement at nodes 30 and 24 reduces the loss from 2.26 MW to 2.03 MW, a reduction of about 0.23 MW, equivalent to 10.2%. Compared with CEA, MSGO, HBO, and FBI, the proposed MEA saves 0.02, 0.06, 0.17, and 0.11 MW for Subcase 2.2.1 and 0.02, 0.07, 0.13, and 0.21 MW for Subcase 2.2.2. The mean loss of MEA is also smaller than that of MSGO, HBO, and FBI, and higher only than that of CEA; the STD comparison gives the same ranking as the mean loss comparison. Figures 15 and 16 show the search procedure of the best run obtained by the five applied methods. Figure 15 indicates that MEA finds better parameters for the wind power plants and the other electrical components than the other methods from the 75th iteration to the last iteration; therefore, its loss is lower than that of the four remaining methods over this range. Figure 16 shows a better search procedure for MEA, with a lower loss than the other methods from the 55th iteration to the last iteration. The two figures share the feature that the loss of MEA at the 86th iteration is lower than that of CEA at the final iteration; compared with the three other remaining methods, the loss of MEA at the 67th iteration for Subcase 2.2.1 and at the 56th iteration for Subcase 2.2.2 is lower than theirs at the final iteration. Obviously, MEA is very strong for placing two WPPs in the IEEE 30-bus system.

Figure 15: The best runs obtained by the methods for Subcase 2.2.1.

Figure 16: The best runs obtained by the methods for Subcase 2.2.2.

Figure 17 shows the power loss and the locations of the two WPPs for the fifty runs obtained by MEA. The black curve shows the fifty sorted loss values, while the blue and orange bars show the locations of the first and second WPPs. The bars and the curve show that node 30 is always chosen, while the second location can be node 24, 19, 21, 5, or 4.
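The savings quoted in this paragraph can be recomputed from the minimum losses in Tables 11 and 12 (a simple check using the tabulated values):

```python
# Minimum losses (MW) copied from Tables 11 (Subcase 2.2.1) and 12 (Subcase 2.2.2).
min_loss = {
    "Subcase 2.2.1": {"FBI": 2.37, "HBO": 2.43, "MSGO": 2.32, "CEA": 2.28, "MEA": 2.26},
    "Subcase 2.2.2": {"FBI": 2.24, "HBO": 2.16, "MSGO": 2.10, "CEA": 2.05, "MEA": 2.03},
}

# MEA's saving over each rival method, per subcase.
for subcase, losses in min_loss.items():
    mea = losses["MEA"]
    savings = {m: round(l - mea, 2) for m, l in losses.items() if m != "MEA"}
    print(subcase, savings)

# Reduction from fixing the nodes (3 and 30) to choosing them freely (30 and 24).
saving_mw = min_loss["Subcase 2.2.1"]["MEA"] - min_loss["Subcase 2.2.2"]["MEA"]
print(round(saving_mw, 2), round(100 * saving_mw / min_loss["Subcase 2.2.1"]["MEA"], 1))
# 0.23 MW, about 10.2%
```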
The best loss and second-best loss are obtained at nodes 30 and 24, while other nodes reach much higher losses.Figure 17 Power losses and locations of two WPPs for 50 runs obtained by MEA. ## 5.2.1. Case 2.1: Place One Wind Power Plant at Nodes 3 and 30, Respectively In this section, one WPP is, respectively, located at nodes 30 and 3 for reducing power loss. Tables8 and 9 show the obtained results from 50 trial runs. The loss of MEA is the best for the cases of placing one WPP at node 3 and node 30. The best loss of MEA is 2.79 MW for the placement at node 3 and 2.35 MW for the placement at node 30, while those of others are from 2.8 MW to 3.339 MW for the placement at node 3 and from 2.37 MW to 2.67504 MW for the placement at node 30. On the other hand, all methods can have better loss when placing one WPP at node 30. Clearly, node 30 needs more supplied power than node 3. About the speed of search, Figures 11 and 12 indicate that MEA is much more effective than the other ones since its loss found at the 70th iteration is smaller than that of others at the final iteration.Table 8 The results obtained by methods as placing one WPP at node 3 for loss reduction. MethodFBIHBOMSGOCEAMEAJYA [47]Minimum power loss (MW)2.912.872.912.802.793.3390Mean power loss (MW)3.353.084.013.113.10—Maximum power loss (MW)4.683.306.873.403.36—STD0.390.101.090.160.15—Generation of WPP (MW)9.15059.80768.93699.95599.99588.2827Power factor of WPP0.96010.99890.9740.916810.85Table 9 The results obtained by methods as placing one WPP at node 30 for loss reduction. 
MethodFBIHBOMSGOCEAMEAJYA [47]Minimum power loss (MW)2.512.472.432.372.352.67504Mean power loss (MW)2.962.623.232.732.65—Maximum power loss (MW)4.502.855.693.182.97—STD0.430.090.890.170.14—N1103015303030N3100100100100100100Generation of WPP (MW)9.18219.82579.71119.99049.98539.95433Power factor of WPP0.97320.99470.99050.94550.99140.85Figure 11 The best runs of methods for minimizing loss as placing one WPP at node 3.Figure 12 The best runs of methods for minimizing loss as placing one WPP at node 30. ## 5.2.2. Case 2.2. Place One WPP at One Unknown Node In this section, five applied methods are implemented to find the location and power generation of one WPP. Table10 indicates that all methods have found the same location at node 30, but MEA is still the most effective method with the lowest loss even it is not much smaller than others. The loss of MEA is 2.39 MW, while that of others is from 2.45 MW to 2.7 MW. HBO is still the most stable method with the smallest STD. Figure 13 shows the best run of five methods. In the figure, MSGO has a premature convergence to a local optimum with very low quality, while other methods are searching optimal solutions. CEA seems to have a better search process than MEA from the 1st iteration to the 90th iteration, but then it must adopt a higher loss from the 91st iteration to the last iteration. Figure 14 presents the location and the loss of the proposed MEA for 50 runs. The rearranged losses from the lowest to the highest indicate that node 30 can reduce the loss at most, while other nodes such as 5, 7, 19, 24, and 26 are not suitable for reducing loss.Table 10 The results obtained by methods for placing one WPP at one unknown node. 
MethodFBIHBOMSGOCEAMEAMinimum power loss (MW)2.702.492.462.452.39Mean power loss (MW)3.172.683.542.822.74Maximum power loss (MW)4.723.186.343.293.43STD0.400.130.970.210.21N11030153030N3100100100100100Found position3030303030Generation of WPP (MW)8.89119.99989.90659.98559.9956Power factor of WPP0.98990.95030.99130.8950.9122Figure 13 The best runs of five applied methods for Subcase 2.2.Figure 14 WPP location and loss of 50 rearranged runs obtained by MEA for Subcase 2.2. ## 5.2.3. Case 2.2. Place Two WPPs at Two Nodes In this section, Subcase 2.2.1 is to place two WPPs at two predetermined nodes 3 and 30 and Subcase 2.2.2 is to place two WPPs at two random nodes. Tables11 and 12 show the results for the two studied subcases.Table 11 The results obtained by five methods for Subcase 2.2.1. MethodFBIHBOMSGOCEAMEAMinimum power loss (MW)2.372.432.322.282.26Mean power loss (MW)2.602.572.642.492.51Maximum power loss (MW)2.972.853.302.852.89STD0.140.090.250.130.13N12060306060N3100100100100100Location of WPPs3, 303, 303, 303, 303, 30Generation of WPPs (MW)9.6108, 9.81287.4719, 9.99366.8908, 9.96749.8404, 9.99999.9922, 9.9299Power factor of WPPs0.9814, 0.97290.9925, 0.93520.8752, 0.98410.9974, 0.95420.8597, 0.9973Table 12 The results obtained by five methods for Subcase 2.2.2. MethodFBIHBOMSGOCEAMEAMinimum power loss (MW)2.242.162.102.052.03Mean power loss (MW)2.612.372.462.272.31Maximum power loss (MW)3.932.644.242.702.77STD0.280.130.370.150.14N12060306060N3100100100100100Locations of WPPs30, 1919, 3030, 1930, 1924, 30Generation of WPPs (MW)9.83, 9.289.98, 9.979.70, 8.159.99, 9.989.95, 9.99Power factor of WPPs0.98, 0.961.00, 0.961.00, 0.920.94, 0.880.87, 0.99The two tables reveal that MEA can reach the lowest loss for both cases, 2.26 MW for Subcase 2.2.1 and 2.03 MW for Subcase 2.2.2. Clearly, placing WPP at the most effective node (node 30) and the least effective node (node 3) cannot lead to a very good solution of reducing total loss. 
While, the WPP placement at node 30 and node 24 can reduce the loss from 2.26 to 2.03 MW, which is about 0.23 MW and equivalent to 10.2%. When comparing to CEA, MSGO, HBO, and FBI, the proposed MEA can save 0.02, 0.06, 0.17, and 0.11 MW for Subcase 2.2.1 and 0.02, 0.07, 0.13, and 0.21 MW for Subcase 2.2.2. The mean loss of MEA is also smaller than that of MSGO, HBO, and FBI and only higher than that of CEA. The STD comparison is the same as the mean loss comparison. Figures15 and 16 show the search procedure of the best run obtained by five applied methods. Figure 15 indicates that MEA can find better parameters for wind power plants and other electrical components than other methods from the 75th iteration to the last iteration. Therefore, its loss is less than that of four remaining methods from the 75th to the last iteration. Figure 16 shows a better search procedure for MEA with less loss than other ones from the 55th iteration to the last iteration. The two figures have the same point that the loss of MEA at the 86th iteration is less than that of CEA at the final iteration. Compared to three other remaining methods, the loss of MEA at the 67th iteration for Subcase 2.2.1 and at the 56th iteration for Subcase 2.2.2 is less than that of these methods at the final iteration. Obviously, MEA is very strong for placing two WPPs in the IEEE 30-bus system.Figure 15 The best runs obtained by methods for Subcase 2.2.1.Figure 16 The best runs obtained by methods for Subcase 2.2.2.Figure17 shows the power loss and the location of the two WPPs for the fifty runs obtained by MEA. The black curve shows fifty sorted loss values, while the blue and orange bars show the location of the first WPP and the second WPP. The view on the bars and the curve sees that node 30 is always chosen, while the second location can be nodes 24, 19, 21, 5, and 4. 
The best loss and the second-best loss are obtained at nodes 30 and 24, while the other nodes reach much higher losses.

Figure 17 Power losses and locations of two WPPs for 50 runs obtained by MEA.

## 5.3. Discussion on the Capability of MEA

In this paper, we considered the placement of WPPs in the IEEE 30-node system. The dimension of this system is not high but medium: among the IEEE standard transmission power systems, such as the IEEE 14-bus, 30-bus, 57-bus, and 118-bus systems, the considered system is not the largest. It has 6 thermal generators, 24 loads, 41 transmission lines, 4 transformers, and 9 shunt capacitors. With this number of power plants, lines, loads, transformers, and capacitors, the IEEE 30-bus system is approximately as large as an area power system in a province. When the placement of WPPs is considered, the control variables of each WPP are its location, active power, and reactive power; therefore, there are six control variables for the two placed WPPs: two locations, two rated-power values, and two reactive-power values. The other control variables of the optimal power flow problem are 5 active power output values for the 6 THPs (the slack unit excluded), 6 voltage values for the THPs, 4 tap values for the transformers, and 9 reactive power output values for the shunt capacitors. The dependent variables are 1 active power output value for the generator at the slack node, 6 reactive power values for the THPs, 41 line current values, and 24 load voltage values. As a result, the total number of control variables for placing two WPPs in the IEEE 30-bus system is 30, and the total number of dependent variables is 72. In the conventional OPF problem, the control variables have a high impact on the dependent variables, and updating the control variables changes the dependent variables.
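The variable bookkeeping above can be checked with a short tally; the groupings below are taken directly from the counts stated in this section.

```python
# Tally of control and dependent variables for placing two WPPs in the
# IEEE 30-bus system, using the counts stated in the text.
control_vars = {
    "WPP locations": 2,
    "WPP rated active powers": 2,
    "WPP reactive powers": 2,
    "THP active power outputs (slack excluded)": 5,
    "THP voltages": 6,
    "transformer taps": 4,
    "shunt capacitor reactive powers": 9,
}
dependent_vars = {
    "slack generator active power": 1,
    "THP reactive powers": 6,
    "line currents": 41,
    "load voltages": 24,
}
print(sum(control_vars.values()))    # -> 30 control variables
print(sum(dependent_vars.values()))  # -> 72 dependent variables
```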
Furthermore, in the modified OPF problem, updating the location and size of the WPPs also causes changes in other control variables, such as the voltage and active power of the THPs. Therefore, reaching optimal control parameters in the modified OPF problem becomes more difficult for a metaheuristic algorithm. By experiment, MEA could solve the conventional OPF problem for the IEEE 30-node system successfully with a population of 10 and 50 iterations, and it reached the most optimal solutions with a population of 15 and 75 iterations. However, for the modified problem with the placement of two WPPs, the settings needed for the best MEA performance were a population of 60 and 100 iterations. Clearly, higher setting values were required for the modified OPF problem. Regarding the average simulation time for the study cases, Table 13 summarizes the times of all methods for all study cases. The comparison indicates that MEA has approximately the same computation time as FBI, HBO, MSGO, and CEA, but a shorter time than JYA [47]. The average time of the applied methods is about 30 seconds for the cases of placing one WPP and about 53 seconds for the cases of placing two WPPs, while JYA needs about 72 seconds for the cases of placing one WPP. The five algorithms have approximately the same average time because their population and iteration-number settings are the same. The reported time of the proposed method is not too long for a system with 30 nodes, and MEA seems capable of handling a real power system or a larger-scale power system. Therefore, we also tried to apply MEA to larger-scale systems with 57 and 118 nodes. For the conventional OPF problem without the optimal placement of WPPs, MEA solved these systems successfully. However, for the placement of WPPs in the modified OPF problem on the IEEE 57-node and IEEE 118-node systems, MEA could not reach valid solutions.
Therefore, the main shortcoming of the study is that MEA was not successfully applied to place WPPs on large-scale systems with 57 and 118 nodes.

Table 13 Average computation time (seconds) of each run obtained by the methods for the study cases.

| Case | FBI | HBO | MSGO | CEA | MEA | JYA [47] |
| --- | --- | --- | --- | --- | --- | --- |
| Case 1.1 (place one WPP at node 3) | 35.21 | 30.65 | 32.82 | 28.18 | 28.45 | ∼72.4 |
| Case 1.1 (place one WPP at node 30) | 28.93 | 26.06 | 32.45 | 27.07 | 27.77 | ∼72.4 |
| Case 1.2 (place one WPP at one unknown node) | 27.93 | 31.92 | 27.32 | 27.64 | 27.76 | — |
| Subcase 1.3.1 (place two WPPs at nodes 3 and 30) | 51.04 | 53.36 | 53.56 | 53.38 | 52.57 | — |
| Subcase 1.3.2 (place two WPPs at two unknown nodes) | 55.18 | 54.05 | 56.27 | 54.18 | 55.14 | — |
| Case 2.1 (place one WPP at node 3) | 30.82 | 27.14 | 27.65 | 31.15 | 29.7 | ∼72.4 |
| Case 2.1 (place one WPP at node 30) | 28.02 | 28.7 | 28.36 | 28.20 | 28.04 | ∼72.4 |
| Case 2.2 (place one WPP at one unknown node) | 28.20 | 26.75 | 28.27 | 29.62 | 27.40 | — |
| Subcase 2.2.1 (place two WPPs at nodes 3 and 30) | 55.41 | 56.43 | 56.43 | 56.81 | 56.09 | — |
| Subcase 2.2.2 (place two WPPs at two unknown nodes) | 54.04 | 54.14 | 59.2 | 55.19 | 56.12 | — |

It can be stated that CEA and MEA are powerful optimization tools for the IEEE 30-node system, but their capability on larger-scale or real systems is limited. The methods may need more effective improvement to overcome this limitation.

## 6. Conclusions

In this paper, a modified OPF (MOPF) problem with the placement of wind power plants in the IEEE 30-bus transmission power network was solved by implementing four conventional metaheuristic algorithms and the proposed MEA. The two single objectives considered were minimization of total generation cost and minimization of power loss. Regarding the number of WPPs located in the system, the two cases were, respectively, one WPP and two WPPs. Regarding the locations of the WPPs, the simple cases accepted the result from the previous study [47], in which buses 30 and 3 were the most and the least effective locations. The results indicated that placing one WPP at bus 30 yields a smaller power loss and a smaller fuel cost than placing it at bus 3.
For the more complicated cases, the paper also investigated the effectiveness of the locations by applying MEA and the four other metaheuristic algorithms to determine them. As a result, placing one WPP at bus 30 reached the smallest power loss and the smallest total fuel cost. For placing two WPPs, buses 30 and 3 could not yield the smallest fuel cost or the smallest power loss: buses 30 and 5 were the best locations for the minimization of fuel cost, while buses 30 and 24 were the best locations for the minimization of power loss. Therefore, the main contribution of the study to the electrical field is the determination of the best locations for the lowest power loss and the lowest total cost. For placing one WPP, the fuel costs of MEA were the smallest, equal to $764.33 and $762.53 for locations at node 3 and node 30, whereas those of the other methods were much higher, equal to $769.963 and $768.039, respectively. For placing two WPPs at the two found locations, MEA reached a cost of $726.77, while the worst cost among the other methods was $728.81. The power losses of MEA were also reduced significantly compared with the others. For placing one WPP at node 3 and node 30, MEA reached 2.79 and 2.35 MW, while the others were larger, at 3.339 and 2.67504 MW, respectively. For placing two WPPs at the two found locations, the best loss of 2.03 MW was found by MEA, while the worst loss among the other methods was 2.24 MW. In summary, the proposed MEA attains a cost lower than the others by 0.28% to 0.73% and a power loss lower by 9.38% to 16.44%. Clearly, the improvement levels are significant. However, for other systems with larger scale, MEA could not succeed in determining the best location and size for the WPPs. Thus, in future work, we will find solutions to improve MEA for larger systems and real systems.
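As a quick arithmetic check, the improvement ranges quoted above follow directly from the reported cost and loss figures; the sketch below uses only the numbers stated in this section.

```python
# Relative improvement of MEA over the other methods, computed from the
# cost and loss figures quoted in the conclusions.
def improvement_pct(mea, other):
    return 100.0 * (other - mea) / other

# Fuel cost: one WPP at node 3, and two WPPs at the found locations.
cost_pcts = [improvement_pct(764.33, 769.963), improvement_pct(726.77, 728.81)]
# Power loss: one WPP at node 3, and two WPPs at the found locations.
loss_pcts = [improvement_pct(2.79, 3.339), improvement_pct(2.03, 2.24)]

print([round(p, 2) for p in cost_pcts])  # -> [0.73, 0.28]
print([round(p, 2) for p in loss_pcts])  # -> [16.44, 9.38]
```

These endpoints reproduce the 0.28–0.73% cost range and the 9.38–16.44% loss range stated in the text.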
In addition, we will also consider more renewable energy power plants, such as photovoltaic power plants, together with the uncertainty characteristics of solar radiation and wind speed. All of these complexities together will form a realistic problem for a real power system, and the contributions of optimization algorithms and renewable energies will then be shown clearly. --- *Source: 1015367-2021-10-12.xml*
# Diffusion MRI Findings in Encephalopathy Induced by Immunosuppressive Therapy after Liver Transplantation

**Authors:** Emanuele Tinelli; Nicoletta Locuratolo; Alberto Pierallini; Massimo Rossi; Francesco Fattapposta
**Journal:** Case Reports in Medicine (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1015385

---

## Abstract

Neurological complications are common after liver transplantation, as they affect up to one-third of transplanted patients and are associated with significant morbidity. The introduction of the calcineurin inhibitors cyclosporine A and tacrolimus in immunosuppressive regimens significantly improved the outcome of solid-organ transplantation, even though immunosuppression-associated neurotoxicity remains a significant complication, occurring in about 25% of cases after liver transplantation. The immunosuppressants cyclosporine A and tacrolimus have been associated with the occurrence of major neurological complications, diffuse encephalopathy being the most common. The biochemical and pathogenetic bases of calcineurin inhibitor-induced neurotoxicity are still unclear, although several mechanisms have been suggested. Early recognition of symptoms could help reduce neurotoxic events. The aim of this study was to evaluate cerebral changes through MRI, in particular with diffusion-weighted images (DWI) and apparent diffusion coefficient (ADC) maps, in two patients undergoing immunosuppressive therapy after liver transplantation. We describe two patients in whom clinical pictures, presenting as a severe neurological condition early after orthotopic liver transplantation during immunosuppressive therapy, showed a different evolution, in keeping with the evidence of focal-multifocal lesions on DWI and ADC maps.
At clinical onset, DWI showed hyperintensity of the temporo-parieto-occipital cortex with normal ADC values in the patient with subsequent good clinical recovery and with decreased values in the other; in the latter case, MRI abnormalities were still present after ten days, until the patient's death. The changes in DWI with normal ADC may be linked to brain edema with a predominant vasogenic component and are therefore reversible, while the reduction in ADC is due to cytotoxic edema and is linked to a more severe, nonreversible clinical picture. Brain MRI, and particularly DWI and ADC maps, provides not only a good and early representation of neurological complications during immunosuppressant therapy but can also serve as a useful prognostic tool for the clinical outcome of the patient.

---

## Body

## 1. Introduction

Neurological complications are common to all solid-organ transplantations (SOT), occurring in approximately one-third of patients; if not related to failure or compromise of the transplanted organ, they are frequently attributable to the immunosuppressive regimens [1, 2]. In fact, the introduction of the calcineurin inhibitors (CNIs) cyclosporine A (CsA) and tacrolimus (Tac) in immunosuppressive regimens significantly improved the outcome of solid-organ transplantation, although immunosuppression-associated neurotoxicity remained a significant complication in the postoperative course. Liver transplant recipients seem to develop neurological syndromes with a higher incidence, between 9 and 42%, and earlier after the transplantation procedure than other organ transplant recipients [3]. Differences in the incidence of postoperative neurological complications are evident in patients with liver disease of different etiologies, with over 40% of affected patients suffering from alcoholic hepatitis. A wide range of neurological side effects, with both tacrolimus and cyclosporine, have been reported.
Less serious symptoms consist of tremor, headache, agitation, and sensorineural hearing loss [4]. More severe complications include seizures, hallucinations, quadriplegia, and visual disturbances. Speech disorders have been described, occurring in approximately 1% of adults who had undergone liver transplantation, presenting as reversible spastic dysarthria, speech-activated myoclonus, or speech apraxia, up to a complete loss of speech [5], and a more severe condition with involvement of consciousness, such as akinetic mutism. A peculiar picture, named posterior reversible encephalopathy syndrome (PRES), characterized by a reversible syndrome of headache, altered mental functioning, seizures, and visual disturbances, with imaging studies indicating leukoencephalopathy predominantly in the posterior regions of the cerebral hemispheres, has been described in about 1–5% of patients [6]. However, the condition is not always reversible, nor is it restricted to posterior structures or white matter [5]. The biochemical and pathogenetic bases of CNI-induced neurotoxicity are still unclear, although several mechanisms have been suggested. Direct toxicity has been postulated, but blood CsA levels are usually within the therapeutic range in most patients. Similarities between hypertensive encephalopathy and immunosuppression neurotoxicity led to the supposition that hypertension could be a common risk factor in this syndrome [7]. Although CsA, more than tacrolimus, is a very lipophilic drug, it does not easily pass through the blood-brain barrier (BBB). A possible hypothesis is that CNIs increase the permeability of the BBB, especially by enhancing nitric oxide production, which, associated with possible anoxic injury, may facilitate BBB dysfunction.
Moreover, low levels of cholesterol, expected in patients with significant liver failure, may increase the percentage of unbound drug, predisposing to increased diffusion of CsA across the blood-brain barrier [8]. An alternative hypothesis is that neurotoxicity may result from mitochondrial impairment due to a direct toxic action on the respiratory chain. Neurotoxic effects could also depend on immune dysregulation in the CNS due to the pharmacologic effects of the CNI-immunophilin complex [9]. Hypomagnesemia, associated with a lower seizure threshold in patients receiving CsA, posttransplant hyponatremia [10], prolonged surgical time, and pre-existing brain disease are also considered risk factors. A toxic effect resulting from abnormal metabolism of CsA by liver cytochrome P-450 was also investigated. CsA neurotoxicity may be enhanced by pharmacokinetic and pharmacodynamic drug interactions [4]. Identification of patients at risk for neurological complications can help to stratify transplant recipients, with a potential reduction of perioperative risk. Diffusion-weighted images (DWI) are sequences used to reveal recent ischemic stroke. The sequence depends on the diffusion coefficient, which measures the degree of translation of water molecules over small distances. Apparent diffusion coefficient (ADC) maps calculated from DWI quantify the amount of motion of water molecules. The restricted motion of water protons in the early stage of ischemic stroke leads to a high DWI signal even when T2-weighted images do not show abnormalities. DWI is also useful for differentiating cystic tumours from abscesses: an abscess shows a high DWI signal and low ADC values, whereas a cystic or necrotic tumor shows high ADC values. Other DWI applications include detecting white matter pathology [11, 12]. In order to differentiate between vasogenic and cytotoxic edema, we used DWI and ADC maps in encephalopathy secondary to immunosuppressive therapy after liver transplantation [13].
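The ADC values reported in this paper come from the standard mono-exponential diffusion model; a minimal illustration of that model (with made-up signal values, not patient data) is:

```python
import math

# Standard mono-exponential diffusion model: S_b = S_0 * exp(-b * ADC),
# so the ADC is recovered per voxel as ADC = ln(S_0 / S_b) / b.
b = 1000.0          # s/mm^2, a typical diffusion-weighting factor (assumed)
adc_true = 0.83e-3  # mm^2/s, in the range reported for the recovering patient
s0 = 100.0          # unweighted (b = 0) signal, arbitrary units

s_b = s0 * math.exp(-b * adc_true)  # simulated diffusion-weighted signal
adc_est = math.log(s0 / s_b) / b    # recovered ADC
print(f"{adc_est * 1e3:.3f} x 10^-3 mm^2/s")  # -> 0.830 x 10^-3 mm^2/s
```

In practice the scanner software performs this computation voxel-wise from the b = 0 and diffusion-weighted acquisitions to produce the ADC map.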
This information may be an important prognostic tool in predicting recovery from these complications independent of the clinical status.

## 2. Case Presentation

We describe the occurrence of a severe neurological syndrome, identified as akinetic mutism, in two patients following successful orthotopic liver transplantation, characterized however by different clinical outcomes. A 52-year-old woman and a 46-year-old man underwent liver transplantation for decompensated cirrhosis secondary to alcoholic liver disease. They had not had previous episodes of hepatic encephalopathy, apart from transient mild cognitive slowing, nor any history of neurological or psychiatric symptoms or previous neurological diseases. We studied the two patients before orthotopic liver transplantation. A complete physical examination by a neurologist, with evaluation of cognitive performance, and an EEG recording were a standard part of the pretransplantation assessment. MR imaging was performed on a Philips Gyroscan NT Intera 1.5 Tesla system. The MRI examination was performed with a 5 mm slice thickness using T1-weighted spin echo (TR/TE = 600/20 ms), proton density and T2-weighted (TR/TE = 2800/40–110 ms), and Fluid Attenuated Inversion Recovery (FLAIR) (TR/TE/TI = 6000/100/2000 ms) sequences. Diffusion-weighted images (TR/TE = 3500/120 ms) were obtained in the axial plane, and an additional T2-weighted image was obtained in the coronal plane (TR/TE = 3000/110 ms). The ADC maps were calculated from the diffusion images with the software supplied by the scanner. We drew regions of interest (ROIs) for the ADC map analysis; all ROIs had a uniform shape and size (elliptical, 40 mm²) and were positioned bilaterally in the temporal-parietal regions where DWI showed marked alterations.
The ROIs placed on the isotropic diffusion images were transferred onto the ADC maps to obtain the corresponding ADC values. The neurological examinations performed in the pretransplantation assessment did not reveal any pathological signs, with the exception of a fine postural tremor of the limbs. The EEG recordings showed no specific abnormalities. Mild psychomotor slowing was evident on the cognitive test battery for the assessment of hepatic encephalopathy, without deficits in specific domains except for a minimal attentional deficit. Neither patient had previous episodes of hepatic encephalopathy or a history of neurological or psychiatric symptoms before transplantation. The initial postoperative recovery was uncomplicated, and the patients regained full consciousness after surgery. Triple immunosuppression therapy was started within 24 h of transplantation, with CsA at a dose of 8 mg/kg/day, mycophenolate mofetil, and prednisone. On the third postoperative day, the patients became confused, manifesting psychomotor agitation, and then became mute. The neurologic disorder progressed rapidly, and during the 2 days following onset, they were in a state of altered consciousness in which they appeared intermittently alert. Even when they appeared awake, spontaneous motor and verbal responses were completely absent, and they were unresponsive even to noxious stimuli. The sleep-wake cycle was preserved. Neurological examination revealed oculogyric upward deviation of gaze. Oculocephalic (doll's eyes) and corneal reflexes were elicitable. Myoclonic-like involuntary movements were sporadically observed, without EEG correlates. EEG recording showed a mild slowing of the background rhythm. The patients were normotensive. Cerebrospinal fluid analysis did not reveal abnormalities. Fungal, bacterial, and viral cultures were negative.
Arterial blood pressure and biochemical parameters, including serum sodium, potassium, magnesium, phosphate, and cholesterol levels, were in the normal range. The patients did not experience abnormal fluctuations in serum sodium. Continuous monitoring of renal and liver functions did not reveal any signs of failure. Blood levels of cyclosporine were measured daily by high-performance liquid chromatography and were in the normal range. An MRI performed at the time of clinical onset showed, in both cases, bilateral and symmetrical hyperintense signals on T2-FLAIR and DWI involving the temporo-parieto-occipital cortex, with normal ADC values (mean ± standard deviation 0.830 ± 0.097 × 10⁻³ mm²/sec) in one case and decreased ADC values (mean ± standard deviation 0.604 ± 0.116 × 10⁻³ mm²/sec) in the other patient, in whom hyperintensity on T2-FLAIR and DWI involving the basal ganglia and thalami was also evident (Figure 1).

Figure 1 An MRI performed three days after clinical onset showed (a) T2-FLAIR and DWI bilateral and symmetrical hyperintense signal involving the temporo-parieto-occipital cortex with normal ADC values and (b) T2-FLAIR and DWI showing similar alterations involving the temporo-parieto-occipital cortex with decreased ADC values; in this patient, hyperintensity on T2-FLAIR and DWI involving the basal ganglia and thalami, without alterations on the ADC maps, is also evident.

During the following two weeks, an improvement in consciousness was observed in the patient with normal ADC values. Spontaneous motor activity was evident, and he was able to protrude the tongue on request. After one month, he was able to speak, and verbal and written comprehension, as well as orientation, were preserved. Only a stuttering dysarthria and dysprosody were evident. He did not show major motor or sensory deficits of the limbs, but hyper-reflexia with intermittent clonus at the lower limbs and a bilateral Babinski sign were evident.
Thus, a rehabilitation program was started. An MRI performed ten days later revealed that the hyperintense signal was slightly decreased in the temporoparietal cortex (Figure 2(a)), with normal ADC values (mean ± standard deviation 0.894 ± 0.096 × 10⁻³ mm²/sec); the last MRI, performed two months later, failed to show any abnormality on either FLAIR or DWI.

Figure 2 An MRI performed ten days later showed (a) DWI and ADC maps that failed to show any abnormality and (b) DWI showing persistence of bilateral and symmetrical signal abnormalities at the level of the temporo-parietal-occipital cortex with reduced ADC map values.

Conversely, the patient with decreased ADC values at the first MRI examination died 12 days after surgery. Daily neurological examinations did not reveal improvement in the state of consciousness. An MRI exam performed ten days after the onset of neurological symptoms showed persistence of bilateral and symmetrical signal abnormalities at the level of the temporo-parietal-occipital cortex with reduced ADC map values (mean ± standard deviation 0.584 ± 0.121 × 10⁻³ mm²/sec) (Figure 2(b)). A postmortem examination showed diffuse rarefaction of the brain's white matter, swollen vascular endothelium, and perivascular macrophages.

## 3. Discussion

Together with surgical technical advances, the introduction of the CNIs, CsA and Tac, in immunosuppressive regimens significantly improved the outcome of liver transplantation. However, neurological complications occur in about 30% of liver transplant patients [4]. A wide variety of neurological adverse events can arise early or later after transplantation, suggesting the need for careful clinical assessment and follow-up in order to promptly define the neurological syndromes.
Several risk factors, such as sepsis, shock associated with multiple organ dysfunction, and graft-versus-host disease (GVHD), may coexist with CsA or Tac toxicity, determining the onset of encephalopathy, especially PRES; blood levels of the immunosuppressive drug, however, do not correlate in most cases with the severity of neurotoxicity, suggesting that genetic differences in CsA metabolism might be related to toxicity at therapeutic blood levels. Clinical symptoms and neuroradiological abnormalities have been reported to mostly resolve after withdrawal of the drug [1]. However, an adverse and occasionally fatal outcome has been reported in up to 26% of cases, and cortical involvement of the frontal regions has been reported in up to 82% of cases [13]. Normal ADC map values with high DWI signals may result from intravoxel averaging of both cytotoxic and vasogenic edema, whereas decreased values are caused by predominantly cytotoxic edema. In fact, the death of the patient who was in a worse clinical status can also be explained by the neurological complications. In conclusion, we suggest that MRI provides not only a good representation of immunosuppressant therapy complications but also useful prognostic information on the patient. The hallmark of this diagnosis is the presence of vasogenic edema, which is characteristic of the reversible syndrome, regardless of whether anterior or posterior structures are involved, and is evident on MRI as hyperintensity on both T2-FLAIR and DWI with normal or slightly high ADC values [5, 14]. When cytotoxic edema is present or predominant, as may occur in vasospasm and ischemic complications, the ADC values are reduced, which may represent an early sign of nonreversibility of the complications. Thus, diffusion-weighted sequences offer not only the possibility of diagnosing PRES but also valuable prognostic information. --- *Source: 1015385-2020-02-14.xml*
--- ## Abstract Neurological complications are common after liver transplantation, as they affect up to one-third of the transplanted patients and are associated with significant morbidity. The introduction of calcineurin inhibitors, cyclosporine A and tacrolimus, in immunosuppressive regimens significantly improved the outcome of solid-organ transplantation even though immunosuppression-associated neurotoxicity remains a significant complication, particularly occurring in about 25% of cases after liver transplantation. The immunosuppressant cyclosporine A and tacrolimus have been associated with the occurrence of major neurological complications, diffuse encephalopathy being the most common. The biochemical and pathogenetic basis of calcineurin inhibitors-induced neurotoxicity are still unclear although several mechanisms have been suggested. Early recognition of symptoms could help reduce neurotoxic event. The aim of the study was to evaluate cerebral changes through MRI, in particular with diffusion-weighted images (DWI) and apparent diffusion coefficient (ADC) maps, in two patients undergoing liver transplantation after immunosuppressive therapy. We describe two patients in which clinical pictures, presenting as a severe neurological condition, early after orthotopic liver transplantation during immunosuppression therapy, showed a different evolution in keeping with evidence of focal-multifocal lesions at DWI and ADC maps. At clinical onset, DWI showed hyperintensity of the temporo-parieto-occipital cortex with normal ADC values in the patient with following good clinical recovery and decreased values in the other one; in the latter case, MRI abnormalities were still present after ten days, until the patient’s exitus. The changes in DWI with normal ADC may be linked to brain edema with a predominant vasogenic component and therefore reversible, while the reduction in ADC is due to cytotoxic edema and linked to more severe, nonreversible, clinical picture. 
Brain MRI and particularly DWI and ADC maps provide not only a good and early representation of neurological complications during immunosuppressant therapy but can also provide a useful prognostic tool on clinical outcome of the patient. --- ## Body ## 1. Introduction Neurological complications are common to all solid-organ transplantations (SOT), approximately occurring in one-third of patients; if not related to failure or compromise of the transplanted organ, they are frequently attributable to the immunosuppressive regimens [1, 2].In fact, the introduction of calcineurin inhibitors (CNIs), cyclosporine A (CsA) and tacrolimus (Tac), in immunosuppressive regimens significantly improved the outcome of solid-organ transplantation, although immunosuppression-associated neurotoxicity remained a significant complication in the postoperative course.Liver transplant recipients seem to develop neurological syndromes with higher incidence, between 9 and 42%, and earlier after the transplantation procedure than other organ transplant recipients [3].Differences in the incidence of postoperative neurological complication are evident in patients with liver disease due to different etiologies, with over 40% of patients suffering from alcoholic hepatitis. A wide range of neurological side effects, both with tacrolimus and cyclosporine, have been reported. Less serious symptoms consist of tremor, headache, agitation, and sensorineural hearing loss [4].More severe complications include seizures, hallucinations,quadriplegia, and visual disturbances. 
Speech disorder has been described, occurring in approximately 1% adults who had undergone liver transplantation, presenting as reversible spastic dysarthria, speech-activated myoclonus, speech apraxia, until a complete loss of speech [5], and a more severe condition with the involvement of consciousness, as akinetic mutism.A peculiar picture, named PRES, characterized by reversible syndrome of headache, altered mental functioning, seizures, visual disturbances, and imaging study indicating leukoencephalopathy predominantly in the posterior regions of cerebral hemispheres, occurring in about 1–5% of patients, has been described [6]. However, the condition is not always reversible, nor is it restricted to posterior structures or white matter [5]. The biochemical and pathogenetic basis of CNIs-induced neurotoxicity are still unclear although several mechanisms have been suggested. Direct toxicity has been postulated, but blood CsA levels usually are within the therapeutic range in most patients.Similarities between hypertensive encephalopathy and immunosuppression neurotoxicity leaded to suppose that hypertension could be a common risk factor in this syndrome [7].Although CsA, more than tacrolimus, is a very lipophilic drug, it does not easily pass through the BBB. A possible hypothesis is that CNIs increase the permeability of BBB especially enhancing nitric oxide production that, associated to possibly anoxic injury, may facilitate dysfunction of the BEE. Moreover, low levels of cholesterol, expected in patients with significant liver failure, may increase the percentage of unbound drug predisposing to increased diffusion of CsA across the blood-brain barrier [8].An alternative hypothesis is that neurotoxicity may result from mitochondrial impairment due to direct toxic action on the respiratory chain. 
Neurotoxic effects could also depend on immune dysregulation in CNS due to the pharmacologic effects of CNI-immunophilin complex [9].Hypomagnesemia, associated to lower seizure threshold in a patient receiving CsA, posttransplant hyponatremia [10], prolonged surgical time, and pre-existing brain disease are also considered as risk factors.Toxic effect, resulted from abnormal metabolism of CsA by livercytochrome P-450, was also investigated. CsA neurotoxicity may be enhanced by pharmacokinetic and pharmacodynamic drug interactions [4].Identification of patients at risk for neurological complications can help to stratify transplant recipients with potential reduction of perioperative risk.Diffusion weighted images (DWI) are sequences used to reveal recent ischemic stroke. This sequence depends on the diffusion coefficient that measures the grade of translation of water molecules over small distances. Apparent diffusion coefficient (ADC) maps calculated by DWIs quantify the amount of motion of water molecules. The slow motion of the proton in a precocious stage of ischemic stroke leads to a high DWI signal although T2-weighted images do not show abnomalities. DWI are useful to differentiate cystic tumour and abscess. An abscess shows high DWI signals and low values in ADC maps, whereas cystic or necrotic tumor shows high ADC map values. Other DWI applications are used to detect the pathology of white matter [11, 12]. In order to differentiate between vasogenic and cytotoxic edema, we used DWI and ADC maps in encephalopathy secondary to immunosuppressive therapy after liver transplantation [13]. This information may be an important prognostic tool in predicting the recovery of these complications independent of the clinical status. ## 2. 
Case Presentation We describe the occurrence of a severe neurological syndrome, identified as akinetic mutism, in two patients following successful orthotopic liver transplantation, characterized, however, by different clinical outcomes. A 52-year-old woman and a 46-year-old man underwent liver transplantation for decompensated cirrhosis secondary to alcoholic liver disease. Apart from transient mild cognitive slowing, neither had had previous episodes of hepatic encephalopathy, nor any history of neurological or psychiatric symptoms or previous neurological disease. Both patients were studied before orthotopic liver transplantation. A complete physical examination by a neurologist, with evaluation of cognitive performance, and an EEG recording were a standard part of the pretransplantation assessment. MR imaging was performed on a Gyroscan NT Intera Philips 1.5 Tesla system. The MRI examination was performed with 5 mm slice thickness using T1-weighted spin echo (TR/TE = 600/20 ms), Proton Density, T2-weighted (TR/TE = 2800/40–110 ms), and Fluid Attenuated Inversion Recovery (FLAIR) (TR/TE/TI = 6000/100/2000 ms) sequences. Diffusion-weighted (TR/TE = 3500/120 ms) images were obtained in the axial plane, and an additional T2-weighted image was obtained in the coronal plane (TR/TE = 3000/110 ms). The ADC maps were calculated from the diffusion images using the software supplied by the scanner. Regions of interest (ROIs) were drawn for ADC map analysis; all ROIs had a uniform shape and size (elliptical, 40 mm²) and were positioned bilaterally in the temporal-parietal regions where DWI showed marked alterations. The ROIs placed on the isotropic diffusion maps were transferred onto the ADC maps to obtain the corresponding ADC values. The neurological examinations performed in the pretransplantation assessment did not reveal any pathological signs, with the exception of a fine postural tremor of the limbs. The EEG recordings showed no specific abnormalities. 
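Although the scanner software computes the ADC maps, the underlying per-voxel calculation follows the standard monoexponential diffusion model, ADC = −ln(S_b/S_0)/b. A minimal sketch of that calculation; the b-value and signal intensities below are illustrative assumptions, not the acquisition's actual parameters:

```python
import numpy as np

def adc_map(s0, sb, b=1000.0):
    """Apparent diffusion coefficient (mm^2/s) from the monoexponential
    model S_b = S_0 * exp(-b * ADC), with b in s/mm^2."""
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    return -np.log(sb / s0) / b

# Illustrative voxel: normal cortical ADC is on the order of 0.8e-3 mm^2/s
s0 = 1000.0
sb = s0 * np.exp(-1000.0 * 0.8e-3)  # simulated diffusion-weighted signal
print(adc_map(s0, sb))  # ~8.0e-4
```

The same function applied element-wise over the DWI and b=0 image arrays yields the full ADC map from which ROI means, such as those reported below, are taken.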
Mild psychomotor slowing was evident during the cognitive test battery for the assessment of hepatic encephalopathy, without deficits in specific domains except for a minimal attentional deficit. Neither patient had previous episodes of hepatic encephalopathy or a history of neurological or psychiatric symptoms before transplantation. The initial postoperative recovery was uncomplicated, and the patients regained full consciousness after surgery. Triple immunosuppression therapy was started within 24 h of transplantation with CsA at a dose of 8 mg/kg/day, mycophenolate mofetil, and prednisone. On the third postoperative day, the patients began to be confused, manifesting psychomotor agitation, and then became mute. The neurologic disorder progressed rapidly, and during the two days following onset, they were in a state of altered consciousness in which they appeared intermittently alert. Even when they appeared awake, spontaneous motor and verbal responses were completely absent, and they were unresponsive even to noxious stimuli. The sleep-waking cycle was conserved. Neurological examination revealed oculogyric upward deviation of gaze. Oculocephalic (doll's eyes) and corneal reflexes were elicitable. Myoclonic-like involuntary movements were sporadically observed, without EEG correlates. EEG recording showed a mild slowing of the background rhythm. The patients were normotensive. Cerebrospinal fluid analysis did not reveal abnormalities. Fungal, bacterial, and viral cultures were negative. Arterial blood pressure and biochemical parameters, including serum sodium, potassium, magnesium, phosphate, and cholesterol levels, were in the normal range. They did not experience abnormal fluctuations in serum sodium. Continuous monitoring of renal and liver function did not reveal any signs of failure. 
Blood levels of cyclosporin were measured daily by high-performance liquid chromatography and were in the normal range. An MRI performed at the time of clinical onset showed, in both cases, bilateral and symmetrical hyperintense signals on T2-FLAIR and DWI involving the temporo-parieto-occipital cortex, with normal ADC values (mean ± standard deviation 0.830 ± 0.097 × 10⁻³ mm²/sec) in one case and decreased ADC values (mean ± standard deviation 0.604 ± 0.116 × 10⁻³ mm²/sec) in the other patient, in whom hyperintensity on T2-FLAIR and DWI involving the basal ganglia and thalami was also evident (Figure 1). Figure 1 An MRI performed three days after clinical onset showed (a) T2-FLAIR and DWI bilateral and symmetrical hyperintense signal involving the temporo-parieto-occipital cortex with normal ADC values and (b) T2-FLAIR and DWI showing similar alterations involving the temporo-parieto-occipital cortex, with decreased ADC values; in this patient, hyperintensity on T2-FLAIR and DWI involving the basal ganglia and thalami is also evident, without alterations on ADC maps. During the following two weeks, an improvement in consciousness was observed in the patient with normal ADC values. Spontaneous motor activity was evident. He was able to protrude the tongue as requested. After one month, he was able to speak, and verbal and written comprehension, as well as orientation, were intact. Only a stuttering dysarthria and dysprosody were evident. He did not show major motor or sensory deficits of the limbs, but hyperreflexia with intermittent clonus at the lower limbs and bilateral Babinski sign were evident. 
Thus, a rehabilitation program was started. An MRI performed ten days later revealed that the hyperintense signal was slightly decreased in the temporoparietal cortex (Figure 2(a)), with normal ADC values (mean ± standard deviation 0.894 ± 0.096 × 10⁻³ mm²/sec); the last MRI, performed two months later, failed to show any abnormality on either FLAIR or DWI. Figure 2 An MRI performed ten days later showed that (a) DWI and ADC maps failed to show any abnormality and (b) DWI showed a persistence of bilateral and symmetrical signal abnormalities at the level of the temporo-parieto-occipital cortex with reduced ADC map values. Conversely, the patient with decreased ADC values at the first MRI examination died 12 days after surgery. Neurological examinations performed daily did not reveal improvement in the state of consciousness. An MRI examination performed ten days after the onset of neurological symptoms showed a persistence of bilateral and symmetrical signal abnormalities at the level of the temporo-parieto-occipital cortex with reduced ADC map values (mean ± standard deviation 0.584 ± 0.121 × 10⁻³ mm²/sec) (Figure 2(b)). A postmortem examination showed diffuse rarefaction of the brain's white matter, swollen vascular endothelium, and perivascular macrophages. ## 3. Discussion Together with surgical technical advances, the introduction of the CNIs CsA and Tac into immunosuppressive regimens significantly improved the outcome of liver transplantation. However, neurological complications occur in about 30% of liver transplant patients [4]. A wide variety of neurological adverse events can arise early or later after transplantation, suggesting the need for careful clinical assessment and follow-up in order to promptly define the neurological syndromes. 
Several risk factors, such as sepsis, shock associated with multiple organ dysfunction, and graft-versus-host disease (GVHD), may coexist with CsA or Tac toxicity, determining the onset of encephalopathy, especially PRES; blood levels of immunosuppressive drugs, however, do not correlate in most cases with the severity of neurotoxicity, suggesting that genetic differences in CsA metabolism might be related to toxicity at therapeutic blood levels. Clinical symptoms and neuroradiological abnormalities have been reported to mostly resolve after withdrawal of the drug [1]. However, an adverse and occasionally fatal outcome has been reported in up to 26% of cases, and a cortical involvement of frontal regions has been reported in up to 82% of cases [13]. Normal ADC map values with high DWI signals may result from intravoxel averaging of both cytotoxic and vasogenic edema; decreased values are caused by a prevalent cytotoxic edema. Indeed, the death of the patient who was in a worse clinical status may also be explained by these neurological complications. In summary, we suggest that MRI provides not only a good representation of the complications of immunosuppressant therapy but can also provide useful prognostic information on the patient. The hallmark of this diagnosis is the presence of vasogenic edema, which is characteristic of the reversible syndrome, regardless of whether anterior or posterior structures are involved, and is evident at MRI as hyperintensity on both T2-FLAIR and DWI with normal or slightly high ADC values [5, 14]. When cytotoxic edema is present or predominant, as may occur in vasospasm and ischemic complications, the ADC values are reduced, which may represent an early sign of the nonreversibility of the complications. In conclusion, diffusion-weighted sequences offer not only the possibility of diagnosing PRES but also valuable prognostic information. --- *Source: 1015385-2020-02-14.xml*
2020
# Soluble Receptor for Advanced Glycation End Product Ameliorates Chronic Intermittent Hypoxia Induced Renal Injury, Inflammation, and Apoptosis via P38/JNK Signaling Pathways **Authors:** Xu Wu; Wenyu Gu; Huan Lu; Chengying Liu; Biyun Yu; Hui Xu; Yaodong Tang; Shanqun Li; Jian Zhou; Chuan Shao **Journal:** Oxidative Medicine and Cellular Longevity (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1015390 --- ## Abstract Obstructive sleep apnea (OSA)-associated chronic kidney disease is mainly caused by chronic intermittent hypoxia (CIH)-triggered tissue damage. Receptor for advanced glycation end product (RAGE) and its ligand high mobility group box 1 (HMGB1) are expressed on renal cells and mediate inflammatory responses in OSA-related diseases. To determine their roles in CIH-induced renal injury, soluble RAGE (sRAGE), a decoy receptor for RAGE ligands, was intravenously administered in a CIH model. We also evaluated the effect of sRAGE on inflammation and apoptosis. Rats were divided into four groups: (1) normal air (NA), (2) CIH, (3) CIH+sRAGE, and (4) NA+sRAGE. Our results showed that CIH accelerated renal histological injury and upregulated RAGE-HMGB1 levels involving inflammatory (NF-κB, TNF-α, and IL-6), apoptotic (Bcl-2/Bax), and mitogen-activated protein kinase (phosphorylation of P38, ERK, and JNK) signal transduction pathways, which were abolished by sRAGE except for p-ERK. Furthermore, sRAGE ameliorated renal dysfunction by attenuating tubular endothelial apoptosis, determined by immunofluorescence staining of CD31 and TUNEL. These findings suggested that RAGE-HMGB1 activated chronic inflammatory transduction cascades that contributed to the pathogenesis of CIH-induced renal injury. Inhibition of the RAGE-ligand interaction by sRAGE provided a therapeutic potential for CIH-induced renal injury, inflammation, and apoptosis through the P38 and JNK pathways. --- ## Body ## 1. 
Introduction Obstructive sleep apnea (OSA) is characterized by repetitive upper airway collapse and recurrent hypoxia during sleep. Emerging evidence indicates that chronic kidney disease (CKD) is a highly prevalent complication of untreated OSA, with symptoms of polyuria and proteinuria [1, 2]. Meanwhile, the prevalence of OSA in CKD patients is severalfold higher than in the general population [3]. Two mechanisms are responsible for the loss of kidney function in OSA patients: chronic nocturnal intrarenal hypoxia and activation of the sympathetic nervous system in response to oxidative stress, resulting in tubulointerstitial injury and ultimately leading, via a common pathway, to end-stage kidney disease (ESKD) [4, 5]. As the foremost pathophysiological change in the process of OSA, chronic intermittent hypoxia (CIH) often causes oxidative stress and inflammation, contributing to damage to various tissues and organs [6]. The receptor for advanced glycation end products (RAGE), first identified as a member of the immunoglobulin superfamily, is a pattern-recognition receptor that interacts with multiple ligands, such as advanced glycation end products (AGEs), high mobility group box 1 (HMGB1), S-100 calcium-binding protein (S100B), and Mac-1 [7]. Multiple descriptive studies have demonstrated that RAGE and its ligands are potentially related to OSA. Regarding RAGE ligands, a previous study evaluated levels of HMGB1 and their relation to endothelial function in OSA patients [8]. S100B levels, identified as a useful biochemical marker, have also been found to be increased in OSA [9]. Broadly speaking, RAGE and its ligands are expressed in almost all tissues and on a wide range of cell types, including renal (proximal) tubules, mesangial cells, and podocytes [10]. More recently, accumulations of RAGE and its ligands have been recognized to be upregulated in various types of renal disorders. 
A review found that RAGE was associated not only with diabetic nephropathy but also with obesity-related glomerulopathy, hypertensive nephropathy, and ischemic renal injury, all of which are closely related to OSA-associated renal injury [11]. Considering that chronic kidney disease is an immune-inflammatory condition [12], it is natural to link chronic kidney disease to RAGE-HMGB1 and to identify them as key mediators in inflammatory responses as well as potential signaling molecules in progression to ESKD [13, 14]. As expected, HMGB1 is elevated significantly in CKD patients and correlates with GFR as well as with markers of inflammation [15]. In particular, serum levels of HMGB1 in CKD patients were also significantly higher than those in control subjects [16]. These findings raise the possibility that RAGE-HMGB1 participates in the pathogenesis of OSA-associated chronic kidney disease, but its contribution to CIH-induced renal injury has not yet been elucidated. Furthermore, it is well documented that RAGE is an inverse marker in CKD patients [17], making inhibition of RAGE a possible strategy for the treatment of CKD [18, 19]. Soluble RAGE (sRAGE), which possesses the RAGE ligand-binding positions but lacks the cytoplasmic and transmembrane domains, is secreted out of the cells and acts as a decoy to prevent RAGE signal transduction directly [20]. In clinical settings, serum sRAGE showed increased levels in patients with ESKD [21], but whether it could protect against the toxic effects of RAGE remains unknown. A recent study found that RAGE and nuclear factor kappa B (NF-κB) downstream signaling were centrally involved in sleep apnea, based on an intermittent hypoxia (IH) experimental model [22]. In this study we established a CIH model to investigate the participation of RAGE-HMGB1 and the therapeutic effect of recombinant soluble RAGE. The possible mechanisms involved were also elucidated. ## 2. Material and Methods ### 2.1. 
Animal Model of CIH Male Sprague-Dawley rats at the age of 4 weeks and body weight 140–150 g, obtained from the Experimental Animal Centre of Fudan University (China) and allowed free access to laboratory chow and tap water in day-night quarters at 25°C, were used in this study. The animal protocol was approved by the Animal Care Committee of Fudan University, in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. All efforts were made to minimize animal suffering. Rats were randomly divided into the following four experimental groups of 6 animals each: the normal air (NA) control group; the CIH group; the CIH plus sRAGE group; and the normal air plus sRAGE group (NA+sRAGE). The CIH protocol was modeled according to the study of Fu et al. [23]. Rats were placed in four identically designed chambers. Nitrogen (100%) was delivered to the chambers for 30 s to reduce the ambient fraction of inspired oxygen to 6-7% for 10 s. Then, oxygen was infused for 20 s so that the oxygen concentration returned to 20–21%. This cycle took one minute and was repeated 8 h/day, 7 d/week, for 5 weeks. The oxygen concentration was measured automatically using an oxygen analyzer (Chang Ai Electronic Science & Technology Company, Shanghai, China). The CIH plus sRAGE treatment group received the same CIH protocol for comparison with the CIH model group. Rats in the sRAGE treatment group were injected with recombinant sRAGE protein, 150 μg per rat (diluted in 1 mL phosphate-buffered saline), intraperitoneally every 48 h for 5 weeks, before each hypoxia cycle. This dose was scaled from previous work in which the daily dose of sRAGE in a mouse model of chronic hypoxia was 20 μg/day [24]. Control rats included the NA and NA+sRAGE group rats, all of which were subjected to normal air and administered phosphate-buffered saline (PBS) as a vehicle control or 150 μg per rat of recombinant sRAGE, respectively. 
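As a quick sanity check on the exposure arithmetic above (30 s nitrogen, 10 s nadir, 20 s reoxygenation per one-minute cycle; 8 h/day for 5 weeks), the implied number of hypoxia-reoxygenation cycles can be tallied. A sketch using only the numbers stated in the protocol:

```python
def cih_cycle_counts(cycle_s=60, hours_per_day=8, days=5 * 7):
    """Count hypoxia-reoxygenation cycles implied by the CIH protocol:
    one cycle per minute, 8 h/day, 7 d/week, for 5 weeks."""
    cycles_per_day = hours_per_day * 3600 // cycle_s
    return cycles_per_day, cycles_per_day * days

per_day, total = cih_cycle_counts()
print(per_day, total)  # 480 cycles/day, 16800 cycles over 5 weeks
```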
During this exposure, all rats were kept under pathogen-free conditions and allowed access to food and water. At the end of the 5 weeks of the CIH model, all the rats were euthanized 15 h after the last hypoxia cycle. Their blood and kidneys were collected. Blood samples were obtained from the inferior vena cava. Renal function was assessed by serum creatinine and blood urea nitrogen (BUN), which were measured in the core laboratory of Zhongshan Hospital (Shanghai, China). ### 2.2. Kidney Histology After the rats were euthanized, kidneys were fixed with 4% paraformaldehyde and embedded in paraffin. Paraffin-embedded specimens were cut into 4 μm thick sections and stained with hematoxylin-eosin. Kidney injury was examined using the modified 0–5 Jablonski grading scale under a light microscope: 0 represents normal; 1 represents occasional degeneration and necrosis of individual cells; 2 represents degenerative cells and necrosis of individual tubules; 3 represents degeneration and necrosis of all cells in adjacent proximal convoluted tubules with survival of surrounding tubules; 4 represents necrosis confined to the distal third of the proximal convoluted tubules, with a band of necrosis extending across the inner cortex; and 5 represents necrosis affecting all three segments of the proximal convoluted tubules, as described previously [25]. ### 2.3. HMGB1 Immunohistochemistry For HMGB1 detection, the samples were dewaxed in xylene and dehydrated in a graded ethanol series. Endogenous peroxidase activity was inhibited by incubating the slides with 0.3% H2O2 for 5 min, followed by washing thrice with PBS; the sections were then incubated with the primary antibody to HMGB1 (1 : 1000 dilution, ab18256; Abcam, Cambridge, UK) at 4°C overnight, washed in PBS, and incubated at 37°C for 1 h with biotinylated anti-rabbit/rat IgG (1 : 200; Maixin-Bio, Shanghai, China) according to the manufacturer's instructions. 
The tissue was incubated with Streptavidin Peroxidase (Maixin-Bio) reagents at 37°C for 30 min and stained with freshly prepared DAB (Maixin-Bio). Morphometric quantification of the stained sections was performed with a customized digital image analysis system (Image-Pro Plus 4.5). Images of the kidney were captured and analyzed. ### 2.4. Immunofluorescent Triple Staining for CD31, TUNEL, and DAPI, and Apoptotic Determination To visualize apoptotic changes during CIH-induced renal injury, 4 μm paraffin-embedded tissue slides were deparaffinized, rehydrated, and prepared as described for immunohistochemistry. Antigen was retrieved by the microwave-citrate buffer antigen retrieval method. The slides were blocked with 5% goat serum (Invitrogen) for 1 hour at room temperature, permeabilized with 0.2% Triton X-100, and incubated overnight at 4°C with mouse anti-rat CD31 antibody (1 : 50 dilution, No. ab64543; Abcam, Cambridge, UK). After the samples were rinsed 4 times (3 min each) with PBS, the slides were incubated for 30 minutes at room temperature with Alexa Fluor 488-conjugated goat anti-mouse IgG (1 : 400; B40941, Invitrogen). For detection of apoptotic cells, TUNEL staining was carried out using a Promega apoptosis detection kit. Immunofluorescence for TUNEL staining was performed with Alexa Fluor 594-conjugated goat anti-mouse IgG (1 : 400; A11020, Invitrogen). The slides were mounted with cover slips containing Vectashield mounting medium with 4′,6-diamidino-2-phenylindole (DAPI; Vector Laboratories) and imaged under a fluorescence microscope (Leica). Endothelial cells (CD31+) showed green fluorescence and DAPI-stained nuclei showed blue fluorescence. TUNEL-positive apoptotic cells were detected by localized red fluorescence within cell nuclei. TUNEL-positive (TUNEL+) and DAPI-positive (DAPI+) cells were counted at ×100 magnification with fluorescence microscopy. 
The number of apoptotic cells was calculated as the ratio of TUNEL+ to DAPI+ cells in 10 random fields per section for quantification. ### 2.5. Western Blot Analysis for Target Proteins Proteins were extracted from animal kidney tissues using NucleoSpin (REF 740933.50; Macherey-Nagel), after which they were separated by SDS-PAGE on 8% gels and transferred to PVDF membranes. Nonspecific binding sites were blocked for 1 h with 0.05 g/mL nonfat milk powder in Tris-buffered saline (pH 7.4) containing 0.05% (v/v) Tween 20 (Bio-Rad), and the membranes were then incubated overnight at 4°C with the primary antibody diluted in blocking solution. The primary antibodies and dilutions were as follows: p-ERK [1/2] No. 9101 (1 : 1000), p-JNK No. 9255 (1 : 2000), t-JNK No. 9252 (1 : 1000), p-p38 No. 9211 (1 : 1000), and t-p38 No. 9212 (1 : 1000) (Cell Signaling Technology, Danvers, MA); Bax No. ab5714 (1 : 500), Bcl-2 No. ab136285 (1 : 500), NF-κB p65 No. ab16502 (1 : 1000), and HMGB1 No. ab18256 (1 : 1000) (Abcam, Cambridge, UK); and RAGE (1 : 500, No. R3611, Sigma, USA). Horseradish peroxidase-coupled anti-rabbit and anti-mouse IgG (1 : 2000) were used as secondary antibodies, with which the blots were incubated for 1 h at 37°C. Blots were probed with anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) antibody (sc-25778, Santa Cruz, USA) to ensure equal loading and detected using an ECL chemiluminescence system (Amersham Biosciences, Piscataway, NJ, USA). Band intensity was quantified by scanning densitometry. Each measurement was made 3 times. ### 2.6. Enzyme-Linked Immunosorbent Assay (ELISA) Serum was isolated from the blood after centrifugation at 14,000 rpm for 20 min at 4°C and frozen at −80°C until enzyme-linked immunosorbent assay (ELISA) analyses were performed. 
HMGB1 (IBL International) and the levels of inflammatory mediators (TNF-α, IL-6, and IL-17; R&D Systems) in the serum samples were measured in triplicate following the procedures supplied by the manufacturers. ### 2.7. Statistical Analysis Data were presented as mean ± SEM and analyzed using SPSS 18.0. Comparisons between multiple groups were performed using ANOVA with the Bonferroni test. P < 0.05 was considered statistically significant between groups. ## 3. Results ### 3.1. Protective Effect of sRAGE on Kidney Function and Histopathological Assessment On histological examination of kidneys stained with hematoxylin-eosin, NA group rats showed normal glomerular and tubular structures, while CIH resulted in prominent tubular atrophy and inflammatory cell infiltration (Figure 1(a)). Injury score was further evaluated by the modified 0–5 Jablonski grading scale (Figure 1(b)). Consistent with the desquamation of the renal tubular epithelium in CIH group rats, serum creatinine and BUN levels were significantly elevated (58.59 ± 5.84 μmol/L and 17.54 ± 1.97 mmol/L, P < 0.001) compared with the NA control group (37.29 ± 5.07 μmol/L and 6.12 ± 2.47 mmol/L; Figures 1(c) and 1(d)). 
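The group comparisons reported here follow the analysis described in Section 2.7 (one-way ANOVA with Bonferroni-corrected pairwise tests). A sketch of that procedure using SciPy; the data below are synthetic values generated to resemble the reported creatinine means, not the study's raw measurements:

```python
import numpy as np
from scipy import stats

def anova_bonferroni(groups):
    """One-way ANOVA followed by Bonferroni-corrected pairwise t-tests.
    Returns the overall F and p, plus adjusted p-values per group pair."""
    f, p = stats.f_oneway(*groups)
    n = len(groups)
    m = n * (n - 1) // 2  # number of pairwise comparisons
    pairs = {}
    for i in range(n):
        for j in range(i + 1, n):
            _, pij = stats.ttest_ind(groups[i], groups[j])
            pairs[(i, j)] = min(pij * m, 1.0)  # Bonferroni adjustment
    return f, p, pairs

# Synthetic creatinine-like samples, n = 6 per group (illustrative only)
rng = np.random.default_rng(0)
na = rng.normal(37.3, 5.1, 6)
cih = rng.normal(58.6, 5.8, 6)
f, p, pairs = anova_bonferroni([na, cih])
```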
However, animals treated with sRAGE before each hypoxia cycle seldom displayed extensive features of tubular epithelial swelling and narrowed tubular lumens, and showed no significant changes in the distal convoluted tubules (Figure 1(a)). In contrast to the CIH-incurred renal damage, sRAGE attenuated dysfunction and inflammation, as reflected by improvements in serum parameters (creatinine decreased by 24.41%, P = 0.0043; BUN decreased by 14.59%, P = 0.041) and histological grade (P = 0.002). Figure 1 Effect of sRAGE on CIH-induced histological damage and renal dysfunction. (a) Representative kidney sections from the normal air (NA) group, CIH group, CIH+sRAGE group, and negative control (NA+sRAGE) group are stained with H&E (scale bar: 50 μm). On light microscopic examination, tubular degeneration, interstitial neutrophil infiltration, and massive desquamation of the renal epithelium are more remarkable in the kidney tissues of CIH rats compared to the NA group, while pretreatment with sRAGE shows almost normal tubules and mild dilatation of the tubular lumen in the absence of severe inflammatory infiltration. (b) Sections are graded based on the 0–5 Jablonski grading scale, averaging the values from 10 fields per kidney under microscopy. Renal function is determined by serum creatinine (c) and BUN (d). Data are presented as the mean ± SEM. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. (a) (b) (c) (d) ### 3.2. Soluble RAGE Attenuated CIH-Induced Renal Tubular Endothelial Cell Apoptosis Indeed, renal tubular endothelial cell injury has been key to the pathogenesis of CKD. Since CD31 is recognized to be a specific marker of endothelial cells [26], we performed immunofluorescence staining of CD31 and TUNEL to evaluate the degree of renal tubular endothelial cell apoptosis. DAPI was used to visualize cell nuclei; thus merged immunofluorescent TUNEL/DAPI staining depicted the proportion of apoptosis. 
In normal kidneys, CD31+ cells clearly stained in the walls of the renal proximal and distal tubules did not show TUNEL positivity. In contrast, CIH significantly reduced CD31 expression in the corticomedullary junction and peritubular capillary endothelium, indicating that chronic hypoxia caused severe endothelial injury (Figure 2(a)). In addition, TUNEL+ cells were widely noted in the corticomedullary section in the CIH group. Colocalization of CD31/TUNEL immunofluorescent staining revealed that endothelial cells were undergoing apoptosis and that the percentage of apoptotic endothelial cells was greatest following CIH exposure (Figure 2(a)). Upon pretreatment with sRAGE during CIH, only faint, nonspecific red background TUNEL staining was observed, whereas endothelial CD31 staining remained relatively apparent. The specific costaining for CD31 and the reduced TUNEL+ cells in the merged picture suggested that sRAGE ameliorated CIH-induced endothelial injury. In line with this, the percentage of apoptosis was significantly reduced by 37% with sRAGE compared with the CIH group (P = 0.002, Figure 2(b)).

Figure 2: Effect of sRAGE on renal tubular endothelial cell apoptosis. (a) Representative immunofluorescence staining for CD31 (green), TUNEL (red), DAPI (blue), and the merged pictures from kidney tissues of each group. Scale bars: 50 μm. (b) Quantitative assessment of the percentage of apoptosis by counting TUNEL+/DAPI+ cells in 10 random fields (100x) for each section. (c) Western blot analysis of Bcl-2 and Bax protein with GAPDH as a loading control. (d) Quantitative relative levels of Bcl-2 and Bax in the NA, CIH, CIH+sRAGE, and NA+sRAGE groups. Data are presented as mean ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group.

Oxidative stress affects endothelial cell apoptosis by regulating the balance between Bax and Bcl-2 proteins [27].
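This Bax/Bcl-2 balance is summarized as a Bcl-2/Bax densitometry ratio. A small sketch of how such ratios and their percent changes are derived (the NA and CIH ratios come from the text; the sRAGE ratio is back-computed from the reported 125.11% improvement, so it is illustrative):

```python
# Bcl-2/Bax ratio and percent change (cf. Figure 2(d)). The ratios 1.49 (NA)
# and 0.26 (CIH) are quoted in the text; the sRAGE value is illustrative.
def pct_change(new, old):
    """Percent change of `new` relative to `old`."""
    return 100.0 * (new - old) / old

na_ratio = 1.49
cih_ratio = 0.26
srage_ratio = cih_ratio * (1 + 125.11 / 100)   # back-computed, ~0.585

drop_under_cih = pct_change(cih_ratio, na_ratio)   # roughly -82.6%
improvement = pct_change(srage_ratio, cih_ratio)   # 125.11% by construction
```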
In contrast to the NA group, expression of the proapoptotic protein Bax was upregulated during the CIH process, whereas the level of the antiapoptotic protein Bcl-2 was significantly decreased, as shown in Figure 2(c). Our histological results were further supported by the Bcl-2/Bax protein ratio, which decreased from 1.49 ± 1.18 to 0.26 ± 0.43, suggestive of a CIH-promoted propensity to apoptosis. Pretreatment with sRAGE, however, produced a significant improvement of 125.11% in the Bcl-2/Bax protein ratio compared with CIH alone. These data indicated that tubular endothelial cell apoptosis played a critical role in CIH-induced renal injury. Administration of sRAGE therefore alleviated renal endothelial cell death through a Bcl-2/Bax-dependent mechanism, thus improving functional recovery.

### 3.3. Effect of sRAGE on CIH-Induced HMGB1 Expression

During the oxidative stress-provoked necrotic process, cells invariably lose membrane integrity and eventually lyse, resulting in the release of intracellular contents such as HMGB1. To ascertain this, immunohistochemistry was performed to determine the localization of HMGB1 in each group. HMGB1 expression was predominantly detected in cortical areas rather than the medulla, suggesting that proximal tubular cells were a prominent source of HMGB1. In the CIH group, HMGB1 was expressed diffusely in the distended tubular cytoplasm as well as the nuclei of renal tubular epithelial cells, whereas HMGB1 was absent or only modestly expressed in the nuclei of the proximal and distal convoluted tubules in the NA group (Figure 3). In addition, the increased extracellular and cytoplasmic HMGB1 in CIH rats was attenuated by pretreatment with sRAGE, as depicted by lessened but mild expression in the proximal and distal convoluted tubules (Figure 3).
The negative control group (NA+sRAGE) excluded the possibility of nonspecific staining of sRAGE.

Figure 3: Representative immunohistochemistry of the renal cortex (top panel, including glomeruli and proximal tubule) and medulla (bottom panel, including peritubular capillaries and distal tubule) for localization of HMGB1 (scale bar: 50 μm). HMGB1 is abundant in the cytoplasm of renal tubules in the CIH group, compared with the nuclear pattern of HMGB1 in the NA group. sRAGE attenuates HMGB1 cytoplasmic deposition, intraluminal infiltration, and nuclear staining in expanded renal tubules.

### 3.4. Effect of sRAGE on RAGE-HMGB1 Downstream Inflammatory Cytokines and Molecules

To distinguish the deleterious contribution of RAGE-HMGB1 to the pathogenesis of CIH-induced kidney injury, western blotting was used to detect RAGE-HMGB1 and associated inflammatory molecules. Results demonstrated significant differences in RAGE-HMGB1 expression between NA and CIH controls (RAGE: 0.466 ± 0.090 versus 2.368 ± 0.931; HMGB1: 0.038 ± 0.026 versus 1.118 ± 0.335, P < 0.001, Figure 4). We also found that NF-κB was significantly increased in the CIH group compared with the normal condition (0.071 ± 0.056 versus 1.056 ± 0.376, P < 0.001, Figure 4). In contrast to CIH alone, the sRAGE-pretreatment group exhibited significant suppression of RAGE (1.643 ± 0.581, P < 0.01, Figure 4) and of HMGB1 (0.566 ± 0.341, P < 0.01, Figure 4). Moreover, sRAGE played a pivotal role in the inhibition of NF-κB activation (0.713 ± 0.628, P < 0.05, Figure 4). Conversely, treatment with sRAGE alone did not elicit apparent changes in RAGE-HMGB1 or NF-κB activation. In this regard, engagement of RAGE-HMGB1 may be accompanied by activation of the transcription factor NF-κB, which regulates the induction of multiple proinflammatory cytokines.

Figure 4: Effect of sRAGE on expression of RAGE, HMGB1, and NF-κB. (a) Representative western blot images (upper panel), with GAPDH (lower panel) as the endogenous control, shown for each group.
(b) Densitometric quantification summarizing the fold changes of protein levels normalized to GAPDH. Data are expressed as mean ± SEM. ∗P < 0.05, ∗∗P < 0.01; n = 6/group.

Following 5 weeks of CIH exposure, the serum inflammatory cytokines IL-6 and TNF-α increased significantly (118.28 ± 18.98 pg/mL and 125.16 ± 13.04 pg/mL, respectively, P < 0.0001; Figures 5(a) and 5(b)) but were markedly lower in the control (24.03 ± 6.77 pg/mL and 38.72 ± 9.36 pg/mL) and CIH+sRAGE groups (84.75 ± 11.99 pg/mL and 69.29 ± 10.49 pg/mL). As expected, elevated serum levels of HMGB1 and IL-17 also occurred in the CIH rats (32.88 ± 2.69 ng/mL and 119.49 ± 18.77 pg/mL, P < 0.01; Figures 5(c) and 5(d)). Importantly, the results obtained from serum corresponded to the expression of RAGE signaling and the inflammatory response in tissues. Furthermore, as shown in Figures 5(a) and 5(b), the inhibitory effect of sRAGE on the amplified production of IL-6 and TNF-α under CIH conditions was obvious. It is noteworthy, however, that sRAGE had no significant effect on circulating HMGB1 in CIH+sRAGE rats compared with CIH alone (34.58 ± 4.32 ng/mL, P = 0.43; Figure 5(c)). A similarly nonsignificant decrease was observed for IL-17 in CIH+sRAGE rats (105.49 ± 30.21 pg/mL, P = 0.061; Figure 5(d)). In this in vivo study, we thus confirmed the potential therapeutic effect of sRAGE on RAGE-mediated inflammatory signaling, accompanied by reduced cytokines but without significant changes in circulating HMGB1 and IL-17.

Figure 5: Effect of sRAGE on the proinflammatory cytokines IL-6 (a), TNF-α (b), HMGB1 (c), and IL-17 (d) in rat serum. Data are expressed as mean ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group.

### 3.5. Soluble RAGE Modulates P-P38 and P-JNK but Not P-ERK Signaling

Mitogen-activated protein kinases (MAPKs) are well-accepted upstream modulators of apoptosis and inflammatory cytokines.
They also have crucial roles in signal transduction from the cell surface to the nucleus, which is required for subsequent NF-κB transcriptional activation. As shown in Figure 6(a), phosphorylated JNK and p38, measured as the phospho/total-JNK and phospho/total-p38 levels, both reached maximal kinase activity after 5 weeks of CIH exposure (JNK: 1.75 ± 0.81 and p38: 1.11 ± 0.49, P < 0.01). CIH also enhanced the phosphorylation of ERK1/2 approximately twofold over basal levels (0.24 ± 0.16 versus 0.12 ± 0.11, P = 0.013, Figure 6(b)). Thus the MAPK family members p38, JNK, and ERK1/2 were all activated in response to oxidative stress. To address whether sRAGE could modulate MAPK activity, we further used specific antibodies to detect the active forms of the kinases. In contrast to CIH alone, the levels of phosphorylated JNK and p38 under sRAGE treatment were decreased (JNK: 0.87 ± 0.31 and p38: 0.69 ± 0.61, P < 0.01; Figures 6(c) and 6(d)), whereas the phosphorylation level of ERK1/2 was not significantly affected (0.25 ± 0.09, P > 0.05, Figure 6(b)). sRAGE treatment abrogated p38 activation, but no changes in the total levels of ERK1/2 or JNK were detectable.

Figure 6: Representative western blot images showing the effect of sRAGE on phosphorylated (P) and total (T) ERK, p38, and JNK expression (a). Histograms represent the quantitative densitometric ratios of the MAPK signaling molecules ERK (b), p38 (c), and JNK (d) normalized to GAPDH in each group. Data are expressed as mean ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group.

Accordingly, we identify RAGE ligands as key mediators of downstream MAPK signaling leading to CIH-activated inflammation and apoptosis, with the p38 and JNK MAP kinases as potential regulators involved in the renal protection conferred by sRAGE.
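MAPK activation in this section is reported as a phospho/total band-intensity ratio, with each band first normalized to GAPDH. A minimal sketch with illustrative densitometry values, showing that the GAPDH term cancels in the final ratio:

```python
# Phospho/total kinase activation ratio as used for p38, JNK, and ERK1/2.
# Densitometry values are illustrative, not the study's measurements.
def activation_ratio(phospho, total, gapdh):
    """(phospho/GAPDH) / (total/GAPDH); GAPDH cancels algebraically, but
    normalizing each band first is how per-lane loading is handled in practice."""
    return (phospho / gapdh) / (total / gapdh)

r = activation_ratio(2.0, 4.0, 1.3)   # equals 2.0 / 4.0 up to rounding
```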
## 4. Discussion

Accumulating evidence indicates that RAGE contributes, at least in part, to the development of OSA complications such as diabetes, nephropathy, cardiovascular disease, and chronic inflammation [28]. Recent studies provided insight into sRAGE competing with cell-surface RAGE for ligand binding, thus potentially representing a novel molecular target for OSA-associated chronic kidney disease. The present study shows that RAGE-HMGB1 plays a pivotal role in a CIH model. Furthermore, it is the first evidence that sRAGE exerts its anti-inflammatory and antiapoptotic effects by altering p38 and JNK signaling pathways.

Histological examination confirmed that our CIH protocol was sufficient to trigger renal damage. It has been shown that HIF-1α is the main molecular effector of hypoxia signaling and is able to bind the HIF-1α binding site present in the RAGE promoter region [29]. Hypoxia may thereby activate RAGE gene transcription and stimulate RAGE production.
Renal interstitial and tubular endothelial cells express RAGE, ongoing generation of which may amplify chronic cellular perturbation and oxidant stress damage through engagement of these receptors on endothelial surfaces [30]. Our colocalized CD31/TUNEL immunofluorescent staining revealed that activation of RAGE accelerated tubular endothelial apoptosis. Apart from a possible adaptive response to chronic hypoxia, the activation of RAGE observed in our CIH model probably contributed to the CIH-induced injury, according to western blot analysis. Of special interest is the finding of extracellular and abundant cytoplasmic accumulation of HMGB1 following CIH. Our results demonstrated that HMGB1 was confined to the nucleus of renal parenchymal cells under normal conditions but translocated dramatically into the cytoplasm and extracellular matrix upon hypoxic insult. For one thing, HMGB1 is passively released in response to inflammatory stress or necrosis [31]. For another, translocation of HMGB1 from the nucleus to the cytoplasm requires inflammasome and caspase activity, thus facilitating chronic inflammation and apoptosis [32, 33]. Furthermore, HMGB1 can behave as a secreted cytokine that promotes neutrophil accumulation [34] and activates macrophages/monocytes to release more proinflammatory cytokines [35, 36]. Consistently, both ELISA and immunohistochemistry showed that HMGB1 in serum and tissue was collectively elevated and correlated with upregulated TNF-α and IL-6 after CIH. In addition, as a widely acknowledged cytokine regulating inflammatory reactions and leukocyte migration, IL-17 was reported to be upregulated in RAGE-HMGB1-associated injury [37]. A report from Akirav et al. also indicated an association between RAGE expression and increased IL-17 [38]. Moreover, HMGB1 contributed to lymphocyte infiltration and the release of the Th17-cell-specific cytokine IL-17 [39].
In our research, we confirmed these previous results, supporting a positive feedback loop between RAGE-HMGB1 activation and proinflammatory mediators. RAGE-mediated cascades of signal transduction can promote the proinflammatory NF-κB and MAPK pathways in endothelial cells and monocytes. Importantly, the pathogenic role of RAGE appears to depend on the level of NF-κB transcriptional activity [40]. Inhibition of NF-κB decreased cardiomyocyte apoptosis and neutrophil recruitment accompanied by HMGB1 suppression [41], indicative of the reciprocal modulation of NF-κB and HMGB1. A range of in vitro and in vivo models has demonstrated the involvement of RAGE in pathophysiologic processes by using a receptor decoy such as sRAGE [42]. Phosphorylated levels of p38, JNK, and ERK in the present study were higher after CIH exposure and were subsequently affected by sRAGE to different extents, implicating RAGE ligands as key mediators in MAPK signaling. In line with our results, evidence in rat renal tubular epithelial cells indicated the critical importance of HMGB1 in inducing circulating cyto/chemokine secretion through MAP kinase pathways [43]. Similar results were observed in early reports that RAGE-induced NF-κB activation and IL-1 and TNF-α production were dependent on p38 phosphorylation in diabetic glomerular injury [44, 45]. In addition, RAGE-ligand interaction may directly induce generation of ERK and reactive oxygen species [46]. Notably, in our study the MAPK responses to sRAGE lacked the participation of p-ERK1/2. Consistent with our results, Taguchi et al. found that blockade of the RAGE-amphoterin interaction also suppressed the p38 and SAPK/JNK MAPKs [47]. On the contrary, inhibition of RAGE by siRNA reduced phosphorylated ERK in cyst formation [48].
This ambiguity regarding MAPK molecular mechanisms perhaps depends on the cell and RAGE ligand types in vivo and in vitro. sRAGE can serve as a biomarker in RAGE-dependent inflammation as well as a therapeutic agent to neutralize hypoxia-induced inflammation [49]. Moreover, sRAGE can cancel the effects of AGEs on cells in culture [50]. In another hypoxia/reoxygenation model, sRAGE significantly decreased cellular lactate dehydrogenase leakage and increased cell viability in neonatal rat cardiomyocytes [51]. The published data suggest that sRAGE intercepts RAGE-ligand interaction and subsequent downstream signaling [52]. Since sustained MAPK activation has been associated with oxidative stress and cell apoptosis [53], we reasoned from histologic and western blot analyses that sRAGE protected against renal inflammation and apoptosis by suppressing the p38 and JNK MAPK signaling molecules.

Previous studies revealed that decreased sRAGE levels increase the propensity toward chronic inflammation, as in hypertension [54] and coronary artery disease [55]. Serum sRAGE levels were elevated significantly in patients with decreased renal function and were inversely related to inflammation [21]. These observations lead us to propose that production of sRAGE potentially protects against decreased renal function, although Kalousová et al. found it was not related to the mortality of haemodialysis patients [56]. Whether sRAGE represents only an epiphenomenon or a compensatory protective mechanism is still unknown. Although the protective effect of sRAGE is not as complete as RAGE deletion [57], its long half-life after intraperitoneal injection into normal rats sustains its effect through each hypoxia cycle until the end of the observation period [58]. Considering that RAGE is a multiligand receptor, the precise blocking target of sRAGE remains to be elucidated.
For example, sRAGE is found to interact with Mac-1 in an HMGB1-induced arthritis model [59]. In terms of proinflammatory and proapoptotic effects, HMGB1 is likely a main target of sRAGE [60]. Since S100 proteins and HMGB1 certainly do not bind exclusively to RAGE [61], the effect of sRAGE may result not only from intercepting the interaction of ligands with cell-surface RAGE but also with other possible receptors. This may explain why we observed elevated circulating HMGB1 in serum without an abrupt reduction by sRAGE: HMGB1 passively released from the nucleus into the circulation might not be efficiently scavenged. However, in accordance with the immunohistochemistry results, western blot analysis of total renal cell lysates detected a difference in HMGB1 between the CIH and sRAGE-treated groups, suggesting that sRAGE exerted its effect by downregulating HMGB1. Remarkably, Lee et al. determined that sRAGE exhibited no toxic effects on the liver by testing ALT activity [10], providing additional support for this potential therapeutic strategy.

A limitation of this study is the lack of verification of whether the decreased RAGE expression induced by sRAGE treatment is abrogated by exogenous HMGB1 administration. We also did not measure endogenous sRAGE levels in renal insufficiency following chronic hypoxia. To confirm the exact blockade target within RAGE-ligand interactions, the capacity of sRAGE in inflammatory responses across diverse models remains to be elucidated.

## 5. Conclusions

Taken together, RAGE and its ligand HMGB1 activate chronic inflammatory transduction cascades that contribute to the pathogenesis of CIH-induced renal injury. The consequences of the amplified inflammatory response include the recruitment of inflammatory cytokines and effector molecules (sustained expression of NF-κB, TNF-α, IL-6, and MAPK signaling), leading to apoptosis and accelerated renal dysfunction.
Interruption of RAGE interaction by administration of sRAGE has been shown to attenuate these detrimental effects. According to a decoy mechanism, blockade of RAGE ligand interaction could provide a new therapeutic approach in the development and progression of OSA-associated chronic kidney disease. --- *Source: 1015390-2016-09-05.xml*
# Soluble Receptor for Advanced Glycation End Product Ameliorates Chronic Intermittent Hypoxia Induced Renal Injury, Inflammation, and Apoptosis via P38/JNK Signaling Pathways

**Authors:** Xu Wu; Wenyu Gu; Huan Lu; Chengying Liu; Biyun Yu; Hui Xu; Yaodong Tang; Shanqun Li; Jian Zhou; Chuan Shao

**Journal:** Oxidative Medicine and Cellular Longevity (2016)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2016/1015390
---

## Abstract

Obstructive sleep apnea (OSA) associated chronic kidney disease is mainly caused by chronic intermittent hypoxia (CIH) triggered tissue damage. Receptor for advanced glycation end product (RAGE) and its ligand high mobility group box 1 (HMGB1) are expressed on renal cells and mediate inflammatory responses in OSA-related diseases. To determine their roles in CIH-induced renal injury, soluble RAGE (sRAGE), a RAGE-neutralizing decoy receptor, was intravenously administered in a CIH model. We also evaluated the effect of sRAGE on inflammation and apoptosis. Rats were divided into four groups: (1) normal air (NA), (2) CIH, (3) CIH+sRAGE, and (4) NA+sRAGE. Our results showed that CIH accelerated renal histological injury and upregulated RAGE-HMGB1 levels involving inflammatory (NF-κB, TNF-α, and IL-6), apoptotic (Bcl-2/Bax), and mitogen-activated protein kinase (phosphorylation of P38, ERK, and JNK) signal transduction pathways, all of which were abolished by sRAGE except p-ERK. Furthermore, sRAGE ameliorated renal dysfunction by attenuating tubular endothelial apoptosis, as determined by immunofluorescence staining of CD31 and TUNEL. These findings suggested that RAGE-HMGB1 activates chronic inflammatory transduction cascades that contribute to the pathogenesis of CIH-induced renal injury. Inhibition of the RAGE-ligand interaction by sRAGE offers therapeutic potential for CIH-induced renal injury, inflammation, and apoptosis through the P38 and JNK pathways.

---

## Body

## 1. Introduction

Obstructive sleep apnea (OSA) is characterized by repetitive upper airway collapse and recurrent hypoxia during sleep. Emerging evidence indicates that chronic kidney disease (CKD) is a highly prevalent complication of untreated OSA, with symptoms of polyuria and proteinuria [1, 2]. Meanwhile, the prevalence of OSA in CKD patients is severalfold higher than in the general population [3].
Two mechanisms are responsible for the loss of kidney function in OSA patients: chronic nocturnal intrarenal hypoxia and activation of the sympathetic nervous system in response to oxidative stress, resulting in tubulointerstitial injury and ultimately converging on the common pathway to end-stage kidney disease (ESKD) [4, 5]. As the foremost pathophysiological change in the process of OSA, chronic intermittent hypoxia (CIH) often causes oxidative stress and inflammation, contributing to damage to various tissues and organs [6]. The receptor for advanced glycation end products (RAGE), first identified as a member of the immunoglobulin superfamily, is a pattern-recognition receptor that interacts with multiple ligands, such as advanced glycation end products (AGEs), high mobility group box 1 (HMGB1), S-100 calcium-binding protein (S100B), and Mac-1 [7]. Multiple descriptive studies have demonstrated that RAGE and its ligands are potentially related to OSA. Regarding RAGE ligands, a previous study evaluated levels of HMGB1 and their relation to endothelial function in OSA patients [8]. S100B levels, identified as a useful biochemical marker, have also been found to be increased in OSA [9]. Broadly speaking, RAGE and its ligands are expressed in almost all tissues and on a wide range of cell types, including renal (proximal) tubules, mesangial cells, and podocytes [10]. More recently, RAGE and its ligands have been recognized to be upregulated in various types of renal disorders. A review found that RAGE is associated not only with diabetic nephropathy, but also with obesity-related glomerulopathy, hypertensive nephropathy, and ischemic renal injury, all of which are closely related to OSA-associated renal injury [11].
Considering that chronic kidney disease is an immune-inflammatory condition [12], it is natural to link chronic kidney disease to RAGE-HMGB1 and to identify them as key mediators in inflammatory responses as well as potential signaling molecules in progression to ESKD [13, 14]. As expected, HMGB1 is elevated significantly in CKD patients and correlates with GFR as well as markers of inflammation [15]. In particular, serum levels of HMGB1 in CKD patients were also significantly higher than those in control subjects [16]. These findings raise the possibility that RAGE-HMGB1 is involved in the pathogenesis of OSA-associated chronic kidney disease, but their contribution to CIH-induced renal injury has not yet been elucidated. Furthermore, it is well documented that RAGE is an inverse marker in CKD patients [17], making inhibition of RAGE a possible strategy for the treatment of CKD [18, 19]. Soluble RAGE (sRAGE), which possesses the RAGE ligand-binding domains but lacks the cytoplasmic and transmembrane domains, is secreted out of cells and acts as a decoy to directly prevent RAGE signal transduction [20]. In clinical settings, serum sRAGE showed increased levels in patients with ESKD [21], but whether it can protect against the toxic effects of RAGE remains unknown. A recent study found that RAGE and downstream nuclear factor kappa B (NF-κB) signaling were centrally involved in an intermittent hypoxia (IH) experimental model of sleep apnea [22]. In this study we established a CIH model to investigate the participation of RAGE-HMGB1 and the therapeutic effect of recombinant soluble RAGE. The possible mechanism involved was also elucidated. ## 2. Material and Methods ### 2.1. Animal Model of CIH Male Sprague-Dawley rats aged 4 weeks and weighing 140–150 g, obtained from the Experimental Animal Centre of Fudan University (China) and allowed free access to laboratory chow and tap water in day-night quarters at 25°C, were used in this study.
The animal protocol was approved by the Animal Care Committee of Fudan University, in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. All efforts were made to minimize animal suffering. Rats were randomly divided into the following four experimental groups of 6 animals each: the normal air (NA) control group; the CIH group; the CIH plus sRAGE group; and the normal air plus sRAGE group (NA+sRAGE). The CIH protocol was modeled on the study of Fu et al. [23]. Rats were placed in four identically designed chambers. Nitrogen (100%) was delivered to the chambers for 30 s to reduce the ambient fraction of inspired oxygen to 6-7% for 10 s. Then, oxygen was infused for 20 s so that the oxygen concentration returned to 20~21%. This cycle took one minute and was repeated 8 h/day, 7 d/week, for 5 weeks. The oxygen concentration was measured automatically using an oxygen analyzer (Chang Ai Electronic Science & Technology Company, Shanghai, China). The CIH plus sRAGE treatment group received the same CIH protocol as the CIH model group. Rats in the sRAGE treatment group were additionally injected with recombinant sRAGE protein, 150 μg per rat (diluted in 1 mL phosphate-buffered saline), intraperitoneally every 48 h for 5 weeks before each hypoxia cycle. This rat dose was converted from previous work in which the daily dose of sRAGE in a mouse model of chronic hypoxia was 20 μg/day [24]. Control rats comprised the NA and NA+sRAGE groups, which were kept in normal air and administered phosphate-buffered saline (PBS) as a vehicle control or 150 μg per rat of recombinant sRAGE, respectively. During this exposure, all rats were kept under pathogen-free conditions and allowed access to food and water. At the end of the 5-week CIH protocol, all rats were euthanized 15 h after the last hypoxia cycle. Their blood and kidneys were collected. Blood samples were obtained from the inferior vena cava.
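The exposure arithmetic implied by the protocol above (a 60 s cycle of 30 s nitrogen infusion, 10 s at the 6-7% FiO2 nadir, and 20 s of reoxygenation, for 8 h/day over 5 weeks) can be sketched in Python; this is only an illustrative tally of the stated timings, and the variable names are ours, not from the paper.

```python
# Illustrative arithmetic for the CIH exposure protocol described above.
# Phase durations come from the text; the names are our own.
N2_INFUSION_S = 30      # 100% nitrogen delivery
NADIR_S = 10            # FiO2 held at 6-7%
REOXYGENATION_S = 20    # oxygen infused back to 20-21%

cycle_s = N2_INFUSION_S + NADIR_S + REOXYGENATION_S  # one full cycle
hours_per_day = 8
days = 5 * 7                                         # 5 weeks, 7 d/week

cycles_per_day = hours_per_day * 3600 // cycle_s
total_cycles = cycles_per_day * days

print(cycle_s)         # → 60, i.e., one minute per cycle
print(cycles_per_day)  # → 480 hypoxia-reoxygenation cycles per day
print(total_cycles)    # → 16800 cycles over the 5-week protocol
```

The one-minute cycle stated in the text thus corresponds to 480 desaturation-reoxygenation events per day, or 16,800 over the whole protocol.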
Renal function was assessed by serum creatinine and blood urea nitrogen (BUN), which were measured in the core laboratory of Zhongshan Hospital (Shanghai, China). ### 2.2. Kidney Histology After euthanasia, kidneys were fixed with 4% paraformaldehyde and embedded in paraffin. Paraffin-embedded specimens were cut into 4 μm thick sections and stained with hematoxylin-eosin. Kidney injury was examined using the modified 0–5 Jablonski grading scale under a light microscope: 0 represents normal; 1 represents occasional degeneration and necrosis of individual cells; 2 represents degenerative cells and necrosis of individual tubules; 3 represents degeneration and necrosis of all cells in adjacent proximal convoluted tubules with survival of surrounding tubules; 4 represents necrosis confined to the distal third of the proximal convoluted tubules, with a band of necrosis extending across the inner cortex; and 5 represents necrosis affecting all three segments of the proximal convoluted tubules, as described previously [25]. ### 2.3. HMGB1 Immunohistochemistry For HMGB1 detection, the samples were dewaxed in xylene and dehydrated in a graded ethanol series. Endogenous peroxidase activity was inhibited by incubating the slides with 0.3% H2O2 for 5 min, followed by washing thrice with PBS. The sections were incubated with the primary antibody to HMGB1 (1 : 1000 dilution, ab18256; Abcam, Cambridge, UK) at 4°C overnight, washed in PBS, and incubated at 37°C for 1 h with biotinylated anti-rabbit/rat IgG (1 : 200; Maixin-Bio, Shanghai, China) according to the manufacturer’s instructions. The tissue was incubated with Streptavidin Peroxidase (Maixin-Bio) reagents at 37°C for 30 min and stained with freshly prepared DAB (Maixin-Bio). Morphometric quantification of the stained sections was performed with a customized digital image analysis system (IMAGE-Pro Plus 4.5), and images of the kidney were captured and analyzed. ### 2.4.
Immunofluorescent Triple Staining for CD31, TUNEL, DAPI, and Apoptosis Determination To visualize apoptotic changes during CIH-induced renal injury, 4 μm paraffin-embedded tissue slides were deparaffinized, rehydrated, and prepared as described for immunohistochemistry. Antigen was retrieved by the microwave citrate-buffer antigen retrieval method. The slides were blocked with 5% goat serum (Invitrogen) for 1 hour at room temperature, permeabilized with 0.2% Triton X-100, and incubated overnight at 4°C with mouse anti-rat CD31 antibody (1 : 50 dilution, No. ab64543; Abcam, Cambridge, UK). After the samples were rinsed 4 times (3 min each) with PBS, the slides were incubated for 30 minutes at room temperature with Alexa Fluor 488-conjugated goat anti-mouse IgG (1 : 400; B40941, Invitrogen). For detection of apoptotic cells, TUNEL staining was carried out using a Promega apoptosis detection kit. Immunofluorescence for TUNEL staining was performed with Alexa Fluor 594-conjugated goat anti-mouse IgG (1 : 400; A11020, Invitrogen). The slides were mounted with coverslips using Vectashield mounting medium with 4′,6-diamidino-2-phenylindole (DAPI; Vector Laboratories) and imaged under a fluorescence microscope (Leica). Endothelial cells (CD31+) showed green fluorescence and DAPI-stained nuclei showed blue fluorescence. TUNEL-positive apoptotic cells were detected by localized red fluorescence within cell nuclei. TUNEL-positive (TUNEL+) and DAPI-positive (DAPI+) cells were counted at ×100 magnification under a fluorescence microscope. The number of apoptotic cells was calculated as the ratio of TUNEL+ to DAPI+ cells in 10 random fields per section for quantification. ### 2.5.
Western Blot Analysis for Target Protein Proteins were extracted from animal kidney tissues using NucleoSpin (REF 740933.50; Macherey-Nagel), separated by SDS-PAGE on 8% gels, and transferred to PVDF membranes. Nonspecific binding sites were blocked for 1 h with 0.05 g/mL nonfat milk powder in Tris-buffered saline (pH 7.4) with 0.05% (v/v) Tween 20 (Bio-Rad), and the membranes were then incubated overnight at 4°C with the primary antibody diluted in blocking solution. The primary antibodies and dilutions were as follows: p-ERK [1/2] No. 9101 (1 : 1000), p-JNK No. 9255 (1 : 2000), t-JNK No. 9252 (1 : 1000), p-p38 No. 9211 (1 : 1000), and t-p38 No. 9212 (1 : 1000) (Cell Signaling Technology, Danvers, MA); Bax No. ab5714 (1 : 500), Bcl-2 No. ab136285 (1 : 500), NF-κB p65 No. ab16502 (1 : 1000), HMGB1 No. ab18256 (1 : 1000) (Abcam, Cambridge, UK); and RAGE (1 : 500, No. R3611, Sigma, USA). Horseradish peroxidase-coupled rabbit and mouse IgG (1 : 2000) were used as secondary antibodies, incubated with the blots for 1 h at 37°C. Blots were probed with anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) antibody (sc-25778, Santa Cruz, USA) to ensure equal loading and detected using the ECL chemiluminescent system (Amersham Biosciences, Piscataway, NJ, USA). Band intensity was quantified by scanning densitometry. Each measurement was made 3 times. ### 2.6. Enzyme-Linked Immunosorbent Assay (ELISA) Serum was isolated from the blood after centrifugation at 14 000 rpm for 20 min at 4°C and frozen at −80°C until enzyme-linked immunosorbent assay (ELISA) analyses were performed. HMGB1 (IBL International) and the levels of inflammatory mediators (TNF-α, IL-6, and IL-17; R&D Systems) in the serum samples were measured in triplicate following the procedures supplied by the manufacturers. ### 2.7.
Statistical Analysis Data were presented as mean ± SEM and analyzed using SPSS 18.0. Comparisons between multiple groups were performed using ANOVA with the Bonferroni test. P < 0.05 was considered statistically significant between groups. ## 3. Results ### 3.1. Protective Effect of sRAGE on Kidney Function and Histopathological Assessment In histological examination of kidneys stained with hematoxylin-eosin, NA group rats showed normal glomerular and tubular structures, while CIH resulted in prominent tubular atrophy and inflammatory cell infiltration (Figure 1(a)). Injury score was further evaluated by the modified 0–5 Jablonski grading scale (Figure 1(b)). Consistent with the desquamation of the renal tubular epithelium in CIH group rats, the serum creatinine and BUN levels were significantly elevated (58.59 ± 5.84 μmol/L and 17.54 ± 1.97 mmol/L, P < 0.001) as compared to the NA control group (37.29 ± 5.07 μmol/L and 6.12 ± 2.47 mmol/L; Figures 1(c) and 1(d)).
However, animals treated with sRAGE before each hypoxia cycle seldom displayed extensive features of tubular epithelial swelling and narrowed tubular lumens, without significant changes in the distal convoluted tubule (Figure 1(a)). In contrast to the CIH-incurred renal damage, sRAGE attenuated dysfunction and inflammation, as reflected by improvements in serum parameters (creatinine decreased by 24.41%, P = 0.0043; BUN decreased by 14.59%, P = 0.041) and histological grade (P = 0.002). Figure 1 Effect of sRAGE on CIH-induced histological damage and renal dysfunction. (a) Representative kidney sections from the normal air (NA) group, CIH group, CIH+sRAGE group, and negative control (NA+sRAGE) group are stained with H&E (scale bar: 50 μm). In light microscopic examination, tubular degeneration, interstitial neutrophil infiltration, and massive desquamation of the renal epithelium are more remarkable in the kidney tissues of CIH rats compared to the NA group, while pretreatment with sRAGE shows almost normal tubules and mild dilatation of the tubular lumen in the absence of severe inflammatory infiltration. (b) Sections are graded based on the 0–5 Jablonski grading scale, averaging the values from 10 fields per kidney under microscopy. Renal function is determined by serum creatinine (c) and BUN (d). Data are presented as the mean ± SEM. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. ### 3.2. Soluble RAGE Attenuated CIH-Induced Renal Tubular Endothelial Cell Apoptosis Indeed, renal tubular endothelial cell injury is key to the pathogenesis of CKD. Since CD31 is recognized as a specific marker of endothelial cells [26], we performed immunofluorescence staining for CD31 and TUNEL to evaluate the degree of renal tubular endothelial cell apoptosis. DAPI was used to visualize cell nuclei; thus, merged immunofluorescent TUNEL/DAPI staining depicted the proportion of apoptosis.
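The apoptotic index defined in Section 2.4 (TUNEL+ nuclei over DAPI+ nuclei, pooled across 10 random fields per section) reduces to simple ratio arithmetic. A minimal sketch follows; the per-field counts are hypothetical placeholders, not data from the study.

```python
def apoptosis_percent(tunel_counts, dapi_counts):
    """Percentage of apoptotic cells: pooled TUNEL+ nuclei over pooled DAPI+ nuclei."""
    if len(tunel_counts) != len(dapi_counts):
        raise ValueError("expected one TUNEL count and one DAPI count per field")
    total_dapi = sum(dapi_counts)
    if total_dapi == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * sum(tunel_counts) / total_dapi

# Hypothetical counts for one section (10 random fields, as in Section 2.4):
tunel = [3, 2, 4, 1, 2, 3, 5, 2, 1, 2]            # TUNEL+ apoptotic nuclei
dapi = [50, 48, 52, 47, 51, 49, 53, 50, 46, 54]   # DAPI+ nuclei
print(round(apoptosis_percent(tunel, dapi), 2))   # → 5.0
```

Pooling counts before dividing, rather than averaging per-field percentages, keeps fields with few nuclei from dominating the index.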
In normal kidneys, CD31+ cells clearly stained in the walls of the renal proximal and distal tubules and did not coexpress TUNEL. In contrast, CIH significantly reduced CD31 expression in the corticomedullary junction and peritubular capillary endothelium, indicating that chronic hypoxia caused severe endothelial injury (Figure 2(a)). In addition, TUNEL+ cells were widely noted in the corticomedullary region in the CIH group. Colocalization of CD31/TUNEL immunofluorescent staining showed that endothelial cells were undergoing apoptosis and that the percentage of apoptotic endothelial cells was greatest following CIH exposure (Figure 2(a)). Upon pretreatment with sRAGE during CIH, only some faint, nonspecific, red background TUNEL staining was observed, whereas endothelial CD31 staining remained relatively apparent. The specific costaining for CD31 and the lessened TUNEL+ cells in the merged picture suggested that sRAGE ameliorated CIH-induced endothelial injury. In line with this, the percentage of apoptosis was significantly reduced, by 37%, by sRAGE compared with the CIH group (P = 0.002, Figure 2(b)). Figure 2 Effect of sRAGE on renal tubular endothelial cell apoptosis. (a) Representative immunofluorescence staining for CD31 (green), TUNEL (red), DAPI (blue), and the merged pictures from kidney tissues of each group. The scale bars represent 50 μm. (b) Quantitative assessment of the percentage of apoptosis by counting the TUNEL+/DAPI+ cells in 10 random fields (100x) for each section. (c) Western blot analysis of Bcl-2 and Bax protein in comparison with GAPDH used as a loading control. (d) Representative bar diagram showing quantitative relative levels of Bcl-2 and Bax in the NA, CIH, CIH+sRAGE, and NA plus sRAGE treated groups. Data are presented as the mean ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. Oxidative stress affects endothelial cell apoptosis by regulating the balance between Bax and Bcl-2 proteins [27].
In contrast to the NA group, expression of the proapoptotic protein Bax was upregulated during the CIH process, whereas the level of the antiapoptotic protein Bcl-2 was significantly decreased, as shown in Figure 2(c). Our histological results were further supported by the Bcl-2/Bax protein ratio, which decreased from 1.49 ± 1.18 to 0.26 ± 0.43, suggestive of a CIH-promoted propensity to apoptosis. Pretreatment with sRAGE, however, produced a significant improvement of 125.11% in the Bcl-2/Bax protein ratio compared with CIH alone. These data indicated that tubular endothelial cell apoptosis played a critical role in CIH-induced renal injury. Therefore, administration of sRAGE alleviated renal endothelial cell death through a Bcl-2/Bax-dependent mechanism, thus improving functional recovery. ### 3.3. Effect of sRAGE on CIH-Induced HMGB1 Expression During the oxidative stress-provoked necrotic process, cells invariably lose membrane integrity and eventually lyse, resulting in the release of intracellular contents such as HMGB1. To ascertain this, immunohistochemistry was performed to determine the location of HMGB1 in each group. The results showed that HMGB1 expression was predominantly detected in cortical areas in contrast to the medulla, suggesting that proximal tubular cells were likely to be a prominent source of HMGB1. In the CIH group, HMGB1 was expressed diffusely in the distended tubular cytoplasm as well as the nuclei of renal tubular epithelial cells, whereas HMGB1 was absent or modestly expressed in the nuclei of the proximal and distal convoluted tubules in the NA group (Figure 3). In addition, the increased extracellular and cytoplasmic HMGB1 in CIH rats was gradually attenuated upon pretreatment with sRAGE, as depicted by lessened but mild expression in the proximal and distal convoluted tubules (Figure 3).
The negative control group (NA+sRAGE) excluded the possibility of nonspecific staining by sRAGE. Figure 3 Representative immunohistochemistry of the renal cortex (top panel, including glomeruli and proximal tubules) and medulla (bottom panel, including peritubular capillaries and distal tubules) for localization of HMGB1 (scale bar: 50 μm). HMGB1 is abundant in the cytoplasm of renal tubules in the CIH group, compared with the nuclear pattern of HMGB1 in the NA group. sRAGE gradually attenuates HMGB1 cytoplasmic deposition, intraluminal infiltration, and nuclear staining in expanded renal tubules. ### 3.4. Effect of sRAGE on RAGE-HMGB1 Downstream Inflammatory Cytokines and Molecules To distinguish the deleterious contribution of RAGE-HMGB1 to the pathogenesis of CIH-induced kidney injury, western blotting was used to detect RAGE-HMGB1 and associated inflammatory molecules. The results demonstrated significant differences in RAGE-HMGB1 expression between the NA and CIH controls (RAGE: 0.466 ± 0.090 versus 2.368 ± 0.931; HMGB1: 0.038 ± 0.026 versus 1.118 ± 0.335, P < 0.001, Figure 4). Also, we found that NF-κB was significantly increased in the CIH group as compared to the normal condition (0.071 ± 0.056 versus 1.056 ± 0.376, P < 0.001, Figure 4). In contrast to CIH alone, the sRAGE-pretreatment group exhibited significant suppression of RAGE (1.643 ± 0.581, P < 0.01, Figure 4) and also of HMGB1 (0.566 ± 0.341, P < 0.01, Figure 4). Moreover, sRAGE played a pivotal role in the inhibition of NF-κB activation (0.713 ± 0.628, P < 0.05, Figure 4). Conversely, treatment with sRAGE alone did not elicit apparent changes in RAGE-HMGB1 or NF-κB activation. In this regard, engagement of RAGE-HMGB1 may be accompanied by activation of the transcription factor NF-κB, which regulates the induction of multiple proinflammatory cytokines. Figure 4 Effect of sRAGE on expressions of RAGE, HMGB1, and NF-κB. (a) Representative western blot images (upper panel) and GAPDH (lower panel) used as the endogenous control are shown for each group.
(b) Quantitative densitometric analysis summarizing the fold changes of protein levels normalized to GAPDH. Data are expressed as means ± SEM. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. Following 5 weeks of CIH exposure, the serum inflammatory cytokines IL-6 and TNF-α increased significantly (118.28 ± 18.98 pg/mL and 125.16 ± 13.04 pg/mL, respectively, P < 0.0001; Figures 5(a) and 5(b)) but were low in the control (24.03 ± 6.77 pg/mL and 38.72 ± 9.36 pg/mL) and CIH+sRAGE groups (84.75 ± 11.99 pg/mL and 69.29 ± 10.49 pg/mL). As expected, elevated serum levels of HMGB1 and IL-17 indeed occurred in the CIH rats (32.88 ± 2.69 ng/mL and 119.49 ± 18.77 pg/mL, P < 0.01; Figures 5(c) and 5(d)). Importantly, the results obtained from serum corresponded to the expression of RAGE signaling and the inflammatory response in tissues. Furthermore, as shown in Figures 5(a) and 5(b), the inhibitory effect of sRAGE on the amplified inflammatory cytokine production of IL-6 and TNF-α under the CIH condition was obvious. However, it is noteworthy that sRAGE did not significantly change circulating HMGB1 in CIH+sRAGE group rats compared with CIH alone (34.58 ± 4.32 ng/mL, P = 0.43; Figure 5(c)). Similarly, the decrease in IL-17 in CIH+sRAGE group rats did not reach significance (105.49 ± 30.21 pg/mL, P = 0.061; Figure 5(d)). In this in vivo study, we confirmed the potential therapeutic effect of sRAGE on RAGE-mediated inflammatory signaling, with reduced cytokines even though circulating HMGB1 and IL-17 were not significantly changed. Figure 5 Effect of sRAGE on the proinflammatory cytokines IL-6 (a), TNF-α (b), HMGB1 (c), and IL-17 (d) in serum of rats. Data are expressed as means ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. ### 3.5. Soluble RAGE Modulates P-P38 and P-JNK but Not P-ERK Signaling Mitogen-activated protein kinases (MAPKs) are well-accepted upstream modulators of apoptosis and inflammatory cytokines.
They also have crucial roles in signal transduction from the cell surface to the nucleus, which are required for subsequent NF-κB transcriptional activation. As shown in Figure 6(a), the phosphorylated JNK and p38, measured as phospho/total-JNK and phospho/total-p38 level, both reached maximal kinase activities after 5 weeks of CIH exposure (JNK: 1.75 ± 0.81 and p38: 1.11 ± 0.49, P < 0.01). Also, CIH tended to enhance the phosphorylation of ERK1/2 approximately twofold over basal levels (0.2 4 ± 0.1 6 versus 0 . 12 ± 0 . 11, P = 0 . 013, Figure 6(b)). Regarding these, MAPKs family including p38, JNK, and ERK1/2 were investigated to be activated in response to oxidative stress. To address whether sRAGE could modulate MAPK activity, we further used specific antibodies to establish the active forms of the kinases activities. In contrast to CIH alone, the levels of phosphorylated JNK and p38 along with sRAGE treatment were decreased (JNK: 0.87 ± 0.31 and p38: 0.69 ± 0.61, P < 0.01; Figures 6(c) and 6(d)), whereas the phosphorylation levels of p-ERK1/2 were not significantly affected (0.2 5 ± 0 . 09, P > 0.05, Figure 6(b)). sRAGE treatment abrogated t-p38 activation, but no changes in the total levels of ERK1/2 or JNK were detectable.Figure 6 Representative western blot images show the effect of sRAGE on phosphorylated (P) and total (T) ERK, p38, and JNK expression (a). Histograms represent the quantitative densitometric ratio of MAPK signaling molecules ERK (b), p38 (c), and JNK (d) normalized to GAPDH in each group. Data are expressed as means ± SEM. NS: no significance.P ∗ < 0.05, P ∗ ∗ < 0.01; n = 6/group. (a) (b) (c) (d)Accordingly, we identify RAGE ligand as key mediators of MAPK downstream molecules leading to CIH-activated inflammation and apoptosis, while signaling proteins such as p38 and JNK MAP kinases are potentially regulators involved in the renal protection of sRAGE. ## 3.1. 
Protective Effect of sRAGE on Kidney Function and Histopathological Assessment In histological examination of kidney stained with hematoxylin-eosin, NA group rats showed normal glomerular and tubular structures, while CIH resulted in prominent tubular atrophy and inflammatory cell infiltration (Figure1(a)). Injury score was further evaluated by the modified 0–5 Jablonski grading scale (Figure 1(b)). Consistent with desquamation of renal tubules epithelium in CIH group rats, the serum creatinine as well as BUN levels significantly elevated (58.59 ± 5.84 μmol/L and 17.54 ± 1.97 mmol/L, P < 0.0 0 1) as compared to the NA control group (37 . 2 9 ± 5 . 07 μmol/L and 6.12 ± 2.4 7 mmol/L; Figures 1(c) and 1(d)). However, animals treated with sRAGE before each hypoxia circle seldom displayed extensive features of tubule epithelial swelling and narrowed tubular lumens, without significant changes in distal convoluted tubule (Figure 1(a)). Contrast to CIH-incurred renal damage, sRAGE attenuated dysfunction and inflammation, as reflected by improvements in serum parameters (creatinine decreased by 24.41%, P = 0.00 43; BUN decreased by 14.59%, P = 0 . 0 41) and histological grade (P = 0.0 02).Figure 1 Effect of sRAGE on CIH-induced histological damage and renal dysfunction. (a) Representative kidney sections from normal air (NA) group, CIH group, CIH+sRAGE group, and negative control (NA+sRAGE) group are stained by H&E (scale bar: 50μm). In light microscopic examination, tubular degeneration, interstitial neutrophil infiltration, and massive desquamation of renal epithelium are more remarkable in the kidney tissues of CIH rats compared to NA group, while pretreatment of sRAGE apparently shows almost normal tubules and mild dilatation of tubular lumen absence of severe inflammatory infiltrations. (b) Sections are graded based on the 0–5 Jablonski grading scale averaging the values from 10 fields per kidney under microscopy. 
Renal function is determined by serum creatinine (c) and BUN (d). Data are presented as the mean ± SEM. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. (a) (b) (c) (d) ## 3.2. Soluble RAGE Attenuated CIH-Induced Renal Tubular Endothelial Cell Apoptosis Indeed, renal tubular endothelial cell injury is key to the pathogenesis of CKD. Since CD31 is recognized as a specific marker of endothelial cells [26], we performed immunofluorescence staining of CD31 and TUNEL to evaluate the degree of renal tubular endothelial cell apoptosis. DAPI was used to visualize cell nuclei; thus merged TUNEL/DAPI immunofluorescent staining depicted the proportion of apoptosis. In normal kidneys, CD31+ cells clearly stained in the wall of renal proximal and distal tubules did not express TUNEL+ cells. In contrast, CIH significantly reduced CD31 expression in the corticomedullary junction and peritubular capillary endothelium, indicating that chronic hypoxia caused severe endothelial injury (Figure 2(a)). In addition, TUNEL+ cells were widely noted at the corticomedullary section in the CIH group. Colocalization of CD31/TUNEL immunofluorescent staining showed that endothelial cells were undergoing apoptosis and that the percentage of apoptotic endothelial cells was greatest following CIH exposure (Figure 2(a)). Upon pretreatment with sRAGE during CIH, only some faint, nonspecific, red background TUNEL staining was observed, whereas endothelial CD31 staining remained relatively apparent. The specific costaining for CD31 and the reduced number of TUNEL+ cells in the merged picture suggested that sRAGE ameliorated CIH-induced endothelial injury. In line with this, the percentage of apoptosis was significantly reduced by 37% by sRAGE compared with the CIH group (P = 0.002, Figure 2(b)). Figure 2 Effect of sRAGE on renal tubular endothelial cell apoptosis. (a) Representative immunofluorescence staining for CD31 (green), TUNEL (red), DAPI (blue), and the merged pictures from kidney tissues of each group.
The scale bars represent 50 μm. (b) Quantitative assessment of the percentage of apoptosis by counting the TUNEL+/DAPI+ cells in 10 random fields (100x) for each section. (c) Western blot analysis of Bcl-2 and Bax protein in comparison with GAPDH used as a loading control. (d) Representative bar diagram showing quantitative relative levels of Bcl-2 and Bax in the NA, CIH, CIH+sRAGE, and NA plus sRAGE treated groups. Data are presented as the mean ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. (a) (b) (c) (d) Oxidative stress affects endothelial cell apoptosis by regulating the balance between Bax and Bcl-2 proteins [27]. In contrast to the NA group, expression of the proapoptotic protein Bax was upregulated during the CIH process, whereas the level of the antiapoptotic protein Bcl-2 was significantly decreased, as shown in Figure 2(c). Our histological results were further supported by the Bcl-2/Bax protein ratio, which decreased from 1.49 ± 1.18 to 0.26 ± 0.43, suggestive of a CIH-promoted propensity to apoptosis. However, pretreatment with sRAGE produced a significant improvement of 125.11% in the Bcl-2/Bax protein ratio compared with CIH alone. These data indicated that tubular endothelial cell apoptosis played a critical role in CIH-induced renal injury. Therefore, administration of sRAGE alleviated renal endothelial cell death through a Bcl-2/Bax-dependent mechanism, thus improving functional recovery. ## 3.3. Effect of sRAGE on CIH-Induced HMGB1 Expression During the oxidative stress provoked necrotic process, cells invariably lose membrane integrity and eventually lyse, resulting in the release of intracellular contents such as HMGB1. To ascertain this, immunohistochemistry was performed to determine the location of HMGB1 in each group. Results showed that HMGB1 expression was predominantly detected in cortical areas in contrast to the medulla, meaning that proximal tubular cells were likely to be a prominent source of HMGB1.
In the CIH group, HMGB1 was expressed diffusely in the distended tubular cytoplasm as well as the nuclei of renal tubular epithelial cells, whereas HMGB1 was absent or only modestly expressed in the nuclei of the proximal and distal convoluted tubules in the NA group (Figure 3). In addition, the increased extracellular and cytoplasmic HMGB1 in CIH rats was gradually attenuated upon pretreatment with sRAGE, as depicted by lessened but mild expression in the proximal and distal convoluted tubules (Figure 3). The negative control group (NA+sRAGE) excluded the possibility of nonspecific staining of sRAGE. Figure 3 Representative immunohistochemistry of renal cortex (the top panel including glomeruli and proximal tubule) and medulla (the bottom panel including peritubular capillaries and distal tubule) for localization of HMGB1 (scale bar: 50 μm). HMGB1 is abundant in the cytoplasm of renal tubules in the CIH group, compared with nuclear patterns of HMGB1 in the NA group. sRAGE gradually attenuates HMGB1 cytoplasmic deposition, intraluminal infiltration, and nuclear staining in expanded renal tubules. ## 3.4. Effect of sRAGE on RAGE-HMGB1 Downstream Inflammatory Cytokines and Molecules To distinguish the deleterious contribution of RAGE-HMGB1 to the pathogenesis of CIH-induced kidney injury, western blotting was used to detect RAGE-HMGB1 and associated inflammatory molecules. Results demonstrated significant differences in RAGE-HMGB1 expression between NA and CIH controls (RAGE: 0.466 ± 0.090 versus 2.368 ± 0.931, HMGB1: 0.038 ± 0.026 versus 1.118 ± 0.335, P < 0.001, Figure 4). Also, we found that NF-κB was significantly increased in the CIH group as compared to the normal condition (0.071 ± 0.056 versus 1.056 ± 0.376, P < 0.001, Figure 4). In contrast to CIH alone, the sRAGE-pretreatment group exhibited significant suppression of RAGE (1.643 ± 0.581, P < 0.01, Figure 4) and also of HMGB1 (0.566 ± 0.341, P < 0.01, Figure 4).
Besides, sRAGE played a pivotal role in the inhibition of NF-κB activation (0.713 ± 0.628, P < 0.05, Figure 4). Conversely, treatment with sRAGE alone did not elicit apparent changes in RAGE-HMGB1 or NF-κB activation. In this regard, engagement of RAGE-HMGB1 may accompany activation of the transcription factor NF-κB, which regulated the induction of multiple proinflammatory cytokines. Figure 4 Effect of sRAGE on expressions of RAGE, HMGB1, and NF-κB. (a) Representative western blot images (upper panel) and GAPDH (lower panel) used as the endogenous control are shown for each group. (b) Quantitative densitometric analysis summarizes the fold changes of protein levels normalized to GAPDH. Data are expressed as means ± SEM. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. (a) (b) Following 5 weeks of CIH exposure, the serum inflammatory cytokines IL-6 and TNF-α increased significantly (118.28 ± 18.98 pg/mL and 125.16 ± 13.04 pg/mL, respectively, P < 0.0001; Figures 5(a) and 5(b)) but were only weakly detectable in the control (24.03 ± 6.77 pg/mL and 38.72 ± 9.36 pg/mL) and CIH+sRAGE groups (84.75 ± 11.99 pg/mL and 69.29 ± 10.49 pg/mL). As expected, elevated serum levels of HMGB1 and IL-17 indeed occurred in the CIH rats (32.88 ± 2.69 ng/mL and 119.49 ± 18.77 pg/mL, P < 0.01; Figures 5(c) and 5(d)). Importantly, the results obtained from serum corresponded to the expression of RAGE signaling and the inflammatory response in tissues. Furthermore, as shown in Figures 5(a) and 5(b), the inhibitory effect of sRAGE on the amplified inflammatory cytokine production of IL-6 and TNF-α under CIH conditions was obvious. However, it is noteworthy that sRAGE did not significantly reduce circulatory HMGB1 in CIH+sRAGE group rats compared with CIH alone (34.58 ± 4.32 ng/mL, P = 0.43; Figure 5(c)). Similarly, the decrease in IL-17 levels in CIH+sRAGE group rats did not reach significance (105.49 ± 30.21 pg/mL, P = 0.061; Figure 5(d)).
In this in vivo study, we confirmed the potential therapeutic effect of sRAGE on RAGE-mediated inflammatory molecular signaling, accompanied by reduced cytokines, although circulating HMGB1 and IL-17 were not significantly affected. Figure 5 Effect of sRAGE on proinflammatory cytokines IL-6 (a), TNF-α (b), HMGB1 (c), and IL-17 (d) in serum of rats. Data are expressed as means ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. (a) (b) (c) (d) ## 3.5. Soluble RAGE Modulates P-P38 and P-JNK but Not P-ERK Signaling Mitogen-activated protein kinases (MAPKs) are well accepted upstream modulators of apoptosis and inflammatory cytokines. They also have crucial roles in signal transduction from the cell surface to the nucleus, which is required for subsequent NF-κB transcriptional activation. As shown in Figure 6(a), phosphorylated JNK and p38, measured as phospho/total-JNK and phospho/total-p38 levels, both reached maximal kinase activities after 5 weeks of CIH exposure (JNK: 1.75 ± 0.81 and p38: 1.11 ± 0.49, P < 0.01). Also, CIH tended to enhance the phosphorylation of ERK1/2 approximately twofold over basal levels (0.24 ± 0.16 versus 0.12 ± 0.11, P = 0.013, Figure 6(b)). Together, these data indicate that the MAPK family members p38, JNK, and ERK1/2 were activated in response to oxidative stress. To address whether sRAGE could modulate MAPK activity, we further used specific antibodies to detect the active forms of the kinases. In contrast to CIH alone, the levels of phosphorylated JNK and p38 after sRAGE treatment were decreased (JNK: 0.87 ± 0.31 and p38: 0.69 ± 0.61, P < 0.01; Figures 6(c) and 6(d)), whereas the phosphorylation levels of ERK1/2 were not significantly affected (0.25 ± 0.09, P > 0.05, Figure 6(b)). sRAGE treatment abrogated t-p38 activation, whereas no changes in the total levels of ERK1/2 or JNK were detectable. Figure 6 Representative western blot images show the effect of sRAGE on phosphorylated (P) and total (T) ERK, p38, and JNK expression (a). Histograms represent the quantitative densitometric ratio of the MAPK signaling molecules ERK (b), p38 (c), and JNK (d) normalized to GAPDH in each group. Data are expressed as means ± SEM. NS: no significance. ∗P < 0.05, ∗∗P < 0.01; n = 6/group. (a) (b) (c) (d) Accordingly, we identify RAGE ligands as key mediators of MAPK downstream molecules leading to CIH-activated inflammation and apoptosis, while signaling proteins such as the p38 and JNK MAP kinases are potential regulators involved in the renal protection conferred by sRAGE. ## 4. Discussion Accumulating evidence indicates that RAGE contributes, at least in part, to the development of OSA complications, such as diabetes and nephropathy, cardiovascular disease, and chronic inflammation [28]. Recent studies provided insight into sRAGE competing with cell surface RAGE for ligand binding, thus potentially representing a novel molecular target for OSA-associated chronic kidney disease. The present study shows that RAGE-HMGB1 plays a pivotal role in a CIH model. Furthermore, it provides the first evidence that sRAGE exerts its anti-inflammatory and antiapoptotic effects by altering the p38 and JNK signaling pathways. Histological examination confirmed that our CIH protocol was sufficient to trigger renal damage. It has been shown that HIF-1α is the main molecular effector of hypoxia signaling and is able to bind the HIF-1α binding site present in the RAGE promoter region [29]. Thereby, hypoxia may activate RAGE gene transcription and stimulate RAGE production.
Renal interstitial and tubular endothelial cells express specific RAGEs, the ongoing generation of which may amplify chronic cellular perturbation and oxidative stress damage through engagement of these receptors on the endothelial surfaces [30]. Our observations from colocalized CD31/TUNEL immunofluorescent staining revealed that activation of RAGE accelerated tubular endothelial apoptosis. Apart from a possible adaptive response to chronic hypoxia, activation of RAGE observed in our CIH model probably contributed to the CIH-induced injury according to the western blot analysis. Of special interest is the finding of extracellular and abundant cytoplasmic accumulation of HMGB1 following CIH. Our results demonstrated that HMGB1 was confined to the nucleus of renal parenchymal cells under normal conditions but dramatically translocated into the cytoplasm and extracellular matrix upon hypoxic insult. For one thing, HMGB1 is passively released in response to inflammatory stress or necrosis [31]. For another, translocation of HMGB1 from the nucleus to the cytoplasm requires inflammasome and caspase activity, thus facilitating chronic inflammation and apoptosis [32, 33]. Furthermore, HMGB1 can behave as a secreted cytokine, promoting neutrophil accumulation [34] and activating macrophages/monocytes to release more proinflammatory cytokines [35, 36]. Consistently, both ELISA and immunohistochemistry results showed that HMGB1 in serum and tissue was collectively elevated and correlated with upregulated TNF-α and IL-6 after CIH. In addition, as a widely acknowledged cytokine regulating inflammatory reactions and leukocyte migration, IL-17 was reported to be upregulated in RAGE-HMGB1-associated injury [37]. A report from Akirav et al. also indicated an association between RAGE expression and increased IL-17 [38]. Moreover, HMGB1 contributed to lymphocyte infiltration and the release of the Th17 cell specific cytokine IL-17 [39]. 
Our research confirmed these previous results, supporting the positive feedback loop between RAGE-HMGB1 activation and proinflammatory mediators. RAGE-mediated cascades of signal transduction can promote the proinflammatory NF-κB and MAPK pathways in endothelial cells and monocytes. Importantly, the pathogenic role of RAGE appears to depend on the level of NF-κB transcriptional activity [40]. Inhibition of NF-κB decreased cardiomyocyte apoptosis and recruitment of neutrophils, accompanied by HMGB1 suppression [41], indicative of the reciprocal modulation of NF-κB and HMGB1. A range of models in vitro and in vivo have demonstrated the involvement of RAGE in pathophysiologic processes using a receptor decoy such as sRAGE [42]. Phosphorylated levels of p38, JNK, and ERK in the present study were higher after CIH exposure and were subsequently affected by sRAGE to different extents, implicating RAGE ligands as key mediators in MAPK signaling. In line with our results, evidence in rat renal tubular epithelial cells indicated the critical importance of HMGB1 in inducing circulating cyto/chemokine secretion through MAP kinase pathways [43]. Similar results were also observed in early reports that RAGE-induced NF-κB activation and IL-1 and TNF-α production were dependent on p38 phosphorylation in diabetic glomerular injury [44, 45]. In addition, RAGE-ligand interaction may directly induce the generation of ERK and reactive oxygen species [46]. Notably, in our study, these responses of MAPKs to sRAGE lacked the participation of p-ERK1/2. Consistent with our results, Taguchi et al. found that blockade of the RAGE-amphoterin interaction also suppressed the p38 and SAP/JNK MAPKs [47]. On the contrary, inhibition of RAGE by siRNA could reduce phosphorylated ERK in cyst formation [48].
This ambiguity regarding MAPK molecular mechanisms perhaps depends on the cell and RAGE ligand types in vivo and in vitro. sRAGE can be used as a biomarker in RAGE-dependent inflammation as well as a therapeutic agent to neutralize hypoxia-induced inflammation [49]. Moreover, sRAGE can cancel the effects of AGEs on cells in culture [50]. In another hypoxia/reoxygenation model, sRAGE significantly decreased cellular lactic dehydrogenase leakage and increased cell viability in neonatal rat cardiomyocytes [51]. The published data suggest that sRAGE acts by intercepting RAGE-ligand interaction and subsequent downstream signaling [52]. Since sustained MAPK activation has been associated with oxidative stress and cell apoptosis [53], from the histologic and western blot analyses we reasoned that sRAGE protected against renal inflammation and apoptosis by suppressing the p38 and JNK MAPK signaling molecules. Previous studies revealed that decreased sRAGE levels increase the propensity toward chronic inflammatory conditions such as hypertension [54] and coronary artery disease [55]. Serum sRAGE levels were elevated significantly in patients with decreased renal function and were inversely related to inflammation [21]. These observations lead us to propose that subsequent production of sRAGE potentially protects against decreased renal function, but Kalousová et al. found it was not related to mortality in haemodialysis patients [56]. Whether sRAGE represents only an epiphenomenon or a compensatory protective mechanism is still unknown. Although the protective effect of sRAGE is not as strong as RAGE deletion [57], its long half-life after intraperitoneal injection into normal rats sustains its effect through each hypoxia cycle until the end of the observation [58]. Considering that RAGE is a multiligand receptor, the exact blocking target of sRAGE remains to be elucidated. 
For example, sRAGE has been found to interact with Mac-1 in an HMGB1-induced arthritis model [59]. In terms of proinflammatory and proapoptotic effects, HMGB1 is likely to be a main target of sRAGE [60]. Since S100 proteins and HMGB1 certainly do not bind exclusively to RAGE [61], the effects of sRAGE may result not only from intercepting the interaction of ligands with cell-surface RAGE but also from blocking their interaction with other possible receptors. This may explain why we observed upregulated circulatory HMGB1 in serum without a sharp reduction by sRAGE. It is reasonable to speculate that HMGB1 passively released from the nucleus into the circulation might not be efficiently scavenged. However, in accordance with the immunohistochemistry results, western blot analysis of total renal cellular lysates detected a difference in HMGB1 between the CIH and exogenous sRAGE administration groups, suggesting that sRAGE exerted its effect by downregulating HMGB1. Remarkably, Lee et al. determined that sRAGE exhibited no toxic effects on the liver by testing ALT activity [10], providing additional support for this potential therapeutic strategy. A limitation of this study was the lack of verification of whether the decreased RAGE expression induced by sRAGE treatment could be reversed by exogenous HMGB1 administration. We also did not measure endogenous sRAGE levels in renal insufficiency following chronic hypoxia. To confirm the exact blocking target within RAGE-ligand interactions, the capacity of sRAGE in inflammatory responses in diverse models remains to be elucidated. ## 5. Conclusions Taken together, RAGE and its ligand HMGB1 activate chronic inflammatory transduction cascades that contribute to the pathogenesis of CIH-induced renal injury. The consequences of the amplified inflammatory response include the recruitment of inflammatory cytokines and effector molecules (sustained expression of NF-κB, TNF-α, IL-6, and MAPK signaling), leading to apoptosis and accelerated renal dysfunction.
Interruption of RAGE interaction by administration of sRAGE has been shown to attenuate these detrimental effects. According to a decoy mechanism, blockade of RAGE ligand interaction could provide a new therapeutic approach in the development and progression of OSA-associated chronic kidney disease. --- *Source: 1015390-2016-09-05.xml*
2016
# Accurate Identification of Agricultural Inputs Based on Sensor Monitoring Platform and SSDA-HELM-SOFTMAX Model **Authors:** Juan Zou; Hanjing Jiang; Qingxiu Wang; Ningxia Chen; Ting Wu; Ling Yang **Journal:** Journal of Sensors (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1015391 --- ## Abstract The unreliability of traceability information on agricultural inputs has become one of the main factors hindering the development of traceability systems. At present, the major detection techniques for agricultural inputs are residue chemical detection at the postproduction stage. In this paper, a new detection method based on sensors and an artificial intelligence algorithm is proposed for the agricultural inputs commonly used in Agastache rugosa cultivation. An agricultural input monitoring platform including a software system and hardware circuit was designed and built. A model called stacked sparse denoising autoencoder-hierarchical extreme learning machine-softmax (SSDA-HELM-SOFTMAX) was put forward to achieve accurate and real-time prediction of agricultural input varieties. The experiments showed that the combination of sensors and the discriminant model could accurately classify different agricultural inputs. The accuracy of SSDA-HELM-SOFTMAX reached 97.08%, which was 4.08%, 1.78%, and 1.58% higher than traditional BP neural network, DBN-SOFTMAX, and SAE-SOFTMAX models, respectively. Therefore, the method proposed in this paper was shown to be effective, accurate, and feasible, and it provides a new way to detect agricultural inputs online. --- ## Body ## 1.
Introduction In recent years, agricultural product traceability systems have been gradually applied to the actual production process, but it is difficult for manually entered traceability information to gain the trust of consumers and regulators, and this lack of trust has become one of the main factors hindering the uptake of traceability systems. Three main factors affect the quality and safety of agricultural products: air pollution, soil pollution, and agricultural input pollution [1]. Among them, agricultural inputs refer to products permitted for use in organic farming, including feedstuffs, fertilizers, and permitted plant protection products as well as cleaning agents and additives used in food production. To prevent air pollution, traceability systems can automatically collect and save environmental data. To prevent soil pollution, traceability systems can record and save soil test reports. To prevent agricultural input pollution, such as fertilizers and pesticides used in the production process, traceability systems are currently mainly used to record agricultural residue testing reports. However, traditional chemical and biological detection methods are unable to cope with a large number of real-time online tests due to problems such as sample preparation requirements, complicated operating processes, extended experiment durations, and sample destruction. In recent years, the rapid development of deep learning methods has directly promoted the in-depth application of artificial intelligence technology in the agricultural environment and other fields, especially for prediction and early warning based on the combination of real-time and prior information [2, 3].
Therefore, research on real-time online prediction of agricultural inputs based on deep learning is highly significant, as it can improve the accuracy of input prediction and ensure the timeliness and accuracy of traceability information. In recent years, some researchers have studied techniques to predict agricultural inputs. For example, Chough et al., Kumaran and Tran-Minh, and others used electrochemical methods or biotechnology to quickly detect pesticides and achieved good results [4–6]. Galceran et al. used pyridine chloride hydrochloride as an electrolyte to identify several seasonal herbicides without UV absorption [7]. Andrade et al. established a liquid chromatography-electrospray tandem mass spectrometry method and used agents to neutralize the matrices, in turn producing better recovery and faster detection in tomato samples [8]. Shamsipur et al. reported a method involving the DLLME method coupled with SPE for the identification of pesticides in fruit juice, water, milk, and honey samples [9]. Some researchers have also used sensors to monitor agricultural inputs. For example, Datir and Wagh used a wireless sensor network to monitor and detect downy mildew in grapes, realizing a real-time system for detecting agricultural diseases based on weather data [10]. Zhu et al. used nanozyme sensor arrays to detect pesticides [11]. Yi et al. used a photoluminescence sensor for ultrasensitive detection of pesticides [12]. However, these methods detect residues after application, and input information still needed to be recorded manually, so they could not guarantee the timeliness and accuracy of the traceability system. In recent years, with the rapid development of technologies such as artificial intelligence and sensors, extreme learning machines (ELMs) have become an important part of machine learning; they offer excellent generalization performance and fast learning and are less likely to become trapped in local optima [13].
The method has been successfully applied in load forecasting [14] and fault diagnosis [15, 16]. However, the complex and changeable crop planting environment produces many interferences which influence the physicochemical parameters of agricultural inputs and create nonlinear variation. Thus, there were two problems with using an ELM neural network to classify and predict agricultural inputs [17]. Firstly, the input weights and hidden layer biases of the ELM neural network were generated randomly during the modeling process, which reduced classification performance. Secondly, the random initial parameters may also require more ELM hidden layer nodes than a traditionally tuned neural network, increasing test time. Therefore, the key factor in improving the performance of ELM neural networks is efficient pretraining of parameters. In order to solve the above problems, we previously studied an algorithm using deep learning to monitor agricultural inputs and achieved very good results [18]. Building on that research, this paper makes improvements in the algorithm model, experimental design, software architecture, hardware design, and preprocessing methods and achieves better prediction results than the previous research. This paper took eight kinds of agricultural inputs commonly used for Agastache rugosa as research objects (ammonium sulfate, potassium fertilizer, phosphate fertilizer, Bordeaux mixture, chlortetracycline, imidacloprid, pendimethalin, and bromoxynil). A monitoring platform including a software system and hardware circuit, which could realize sensor data collection, wireless transmission, and storage, was built. In the algorithm research, greedy layer-wise training and fine-tuning of the stacked autoencoder were used to initialize the parameters; then the decoding part of the stacked sparse denoising autoencoder (SSDA) model was removed and connected to the hierarchical extreme learning machine (HELM) neural network.
Finally, an agricultural input classification prediction model based on SSDA-HELM-SOFTMAX was established, which laid the foundation for accurate classification prediction of agricultural inputs. ## 2. SSDA-HELM-SOFTMAX Algorithm Description ### 2.1. Stacked Autoencoder An autoencoder [19] is an unsupervised neural network model based on deep learning that can reconstruct the original input data into approximate new data, expressing any data in low dimensions through the symmetric structure and weight coefficients of the network; at its core is the ability to learn a deep representation of the input data. The drawback is that the number of neuron parameters in the network continues to increase with the number of hidden layers, which affects the calculation speed of the network. One of its main applications is to obtain the initial weight parameters of a neural network by layer-wise pretraining and fine-tuning, with better results than traditional random initialization. Multiple autoencoders were stacked to form a stacked autoencoder [20, 21] whose main functions are deep feature extraction and nonlinear dimensionality reduction. A stacked autoencoder combined with a supervised classifier can accomplish multicategory classification. Each autoencoder in the structure of the stacked autoencoder (Figure 1) performed encoding and decoding operations and extracted features from the output of the previous autoencoder. The output of an autoencoder is a reconstruction or approximation of its input, but it cannot be used to directly classify the input information without a supervised classifier. The three autoencoders in Figure 1 yield three hidden layers through feature extraction, and a supervised classifier can be added to the output layer to realize classification prediction. Figure 1 The structure of the stacked autoencoder. ### 2.2.
Sparse Autoencoder (SAE) There were three layers in the autoencoder, namely, the input layer, hidden layer, and output layer, with $n$, $d$, and $n$ nodes, respectively (Figure 2). The autoencoder attempts to approximate an identity function so that the output data approximate the input data; the hidden layer activation value was $a^{(2)} = (a^{(2)}_1, a^{(2)}_2, \dots, a^{(2)}_d)$, which gave the features of the input vector. Figure 2 The structure of the AE. The formula used in the encoding process of the stacked autoencoder was as follows:
$$h = f_{\theta_1}(x) = \sigma(W_1 x + b_1). \tag{1}$$
The formula used in the decoding was as follows:
$$\hat{x} = f_{\theta_2}(h) = \sigma(W_2 h + b_2), \tag{2}$$
where $W_1$ and $W_2$ were the weight matrices from the input layer to the hidden layer and from the hidden layer to the output layer, $b_1$ and $b_2$ were the unit bias vectors of the hidden layer and the output layer, $\sigma(\cdot)$ was the logsig function, and $\theta$ was the network parameter matrix. The goal of the autoencoder is to find the optimal parameter matrix and minimize the error between the input and the output. The reconstruction error loss function was expressed as follows:
$$J_E(W,b) = \mathrm{loss} + R = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|\hat{x}^{(i)} - x^{(i)}\right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(W_{ji}^{(l)}\right)^2, \tag{3}$$
where loss was the reconstruction loss, $R$ was the weight attenuation term which could effectively prevent overfitting, $m$ was the number of samples, $x^{(i)}$ and $\hat{x}^{(i)}$ were the input and output characteristics of the $i$th sample, $n_l$ was the number of network layers, $s_l$ was the number of units in the $l$th layer, and $\lambda$ was the weight attenuation coefficient. An SAE adds a sparsity restriction to the hidden layer of the autoencoder. The sparsity restriction controls the number of network parameters by suppressing the activation of network neurons, achieving more effective extraction of data features. $a^{(2)}_j(x)$ was the activation of hidden layer neuron $j$, and the mean degree of activation can be expressed as follows:
$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a^{(2)}_j\left(x^{(i)}\right). \tag{4}$$
A neuron was considered active when its output was close to 1 and inactive when its output was close to 0.
Therefore, by adding $\rho$, a sparsity parameter whose value approaches 0, and making $\hat{\rho}_j = \rho$, most neurons can be inhibited. To realize the sparsity limitation, a sparse penalty term was added to the cost function, and the total cost function was as follows:
$$J(W,b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|\hat{x}^{(i)} - x^{(i)}\right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(W_{ji}^{(l)}\right)^2 + \beta\sum_{j=1}^{s_2}\mathrm{KL}\left(\rho \,\|\, \hat{\rho}_j\right), \tag{5}$$
where $\beta$ was the coefficient of the sparse penalty and $\mathrm{KL}(\rho\,\|\,\hat{\rho}_j)$ was the sparse penalty for hidden layer neuron $j$:
$$\mathrm{KL}\left(\rho \,\|\, \hat{\rho}_j\right) = \rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}, \tag{6}$$
where $s_2$ was the number of neurons in the hidden layer. The sparse penalty attains its unique minimum when $\hat{\rho}_j = \rho$, so minimizing the penalty term drives the mean activation of the hidden layer toward the sparsity parameter. ### 2.3. ELM Modeling If there were $N$ different samples $(x_i, t_i)$, where $x_i = (x_{i1}, x_{i2}, \dots, x_{in})^T \in \mathbb{R}^n$ and $t_i = (t_{i1}, t_{i2}, \dots, t_{im})^T \in \mathbb{R}^m$, a neural network with $L$ hidden layer nodes could be expressed as follows:
$$\sum_{i=1}^{L}\beta_i g_i(X_j) = \sum_{i=1}^{L}\beta_i g\left(W_i \cdot X_j + b_i\right) = o_j, \quad j = 1, \dots, N, \tag{7}$$
where $g(x)$ was the activation function, $W_i = (w_{i,1}, w_{i,2}, \dots, w_{i,n})^T$ was the weight between the input nodes and the $i$th hidden node, $\beta_i$ was the weight between the $i$th hidden node and the output nodes, $b_i$ was the bias of the $i$th hidden layer node, and $W_i \cdot X_j$ was the inner product of $W_i$ and $X_j$. In order for the output of the neural network model to be as accurate as possible, it was necessary that
$$\sum_{j=1}^{N}\left\|o_j - t_j\right\| = 0. \tag{8}$$
Combining Equations (7) and (8) produces the following:
$$\sum_{i=1}^{L}\beta_i g\left(W_i \cdot X_j + b_i\right) = t_j, \quad j = 1, \dots, N. \tag{9}$$
If $H$ was the output of the hidden layer, $\beta$ was the weight of $H$, and $T$ was the desired output, then
$$H\left(W_1,\dots,W_L, b_1,\dots,b_L, X_1,\dots,X_N\right) = \begin{pmatrix} g(W_1\cdot X_1 + b_1) & \cdots & g(W_L\cdot X_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(W_1\cdot X_N + b_1) & \cdots & g(W_L\cdot X_N + b_L) \end{pmatrix}_{N\times L}, \quad \beta = \begin{pmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{pmatrix}_{L\times m}, \quad T = \begin{pmatrix} t_1^T \\ \vdots \\ t_N^T \end{pmatrix}_{N\times m}. \tag{10}$$
From Equation (10), $H\beta = T$, and $\hat{W}_i$, $\hat{b}_i$, $\hat{\beta}_i$ would be calculated in order to train the single-hidden-layer neural network.
(11)HW¯1,⋯,W¯L,b¯1,⋯,b¯L=minWi,bj,βHW¯1,⋯,W¯L,b¯1,⋯,b¯Lβi−T,wherei=1,⋯,L.The above formula was equivalent to minimizing the following loss function.(12)E=∑j=1N∑i=1LβigWi•Xj+bi−tj2.The gradient algorithm was adopted, and the parameters need to be adjusted during the iteration process, but in the ELM algorithm, once the input weight and hidden layer bias were determined randomly, the hidden layer output matrixH could be uniquely determined by the least square solution β^. (13)β¯=H+T,whereH† was the generalized inverse matrix of H, and there were two conditions for β^, that the norm was the smallest, and that the value of β^ was unique.The structure of the ELM algorithm is shown in Figure3.Figure 3 The structure of the ELM. ### 2.4. SSDA-HELM-SOFTMAX Modeling The SSDA-HELM-SOFTMAX model took the SAE as the front-end pretrained for the initial weights and provided parameters for the multilayer ELM model to reach the optimal solution. Then, SOFTMAX was used for agricultural input identification in the output layer of this model. In the model training process, physical and chemical parameter values of agricultural inputs as collected by the sensor were sent to the SAE input layer as training samples. The SAE hidden layer would extract relevant features from these complex samples and adopt unsupervised learning methods for initial weight. Then, the decoding part of the SAE was removed, connected to the ELM and assigned the initial weight as the initial value of the EML, and used SOFTMAX for classification. 
The diagram of SSDA-HELM-SOFTMAX model constructed in this paper is shown in Figure4.Figure 4 SSDA-HELM-SOFTMAX model.The specific EML algorithm was as follows:(1) The number of hidden layers of the network was initialized tok, X1=X and X=x1,x2,⋯,xmT, the number of hidden layers’ nodes was N^, and the decoding part of SSDA was deleted and connected to the HELM(2) From the first layer, the SSDA-HELM network was initialized usingWi and bi of the SSDA network training as input weights(3) The input weightWi and hidden layer bias bi generated by pretraining were used to calculate the hidden layer output matrix: Ai=Hi−1Wli(4) According to ELM theory, the output weight matrix of neural network was calculated:β^=A†T(5) The output calculation was as follows:H^i=gHi−1·β^, where H^i and Hi−1 were the output and input of layer i, respectively, and g· was the activation function of the hidden layerThis was then repeated from step 2 for the next layer.(6) The extracted features were used as input values and sent to the SOFTMAX classifier for classification prediction ## 2.1. Stacked Autoencoder An autoencoder [19] is an unsupervised neural network model based on deep learning that can reconstruct the original input data into an approximate new data, expressing any data in low dimensions through the symmetric structure and weight coefficients of the network, and at its core is the ability to learn the deep representation of input data. The drawback is that the parameters of neurons in the network will continue to increase in the number of hidden layers, which affects the calculation speed of the network. One of its main applications is to get the weight parameters of the initial neural network by layer-wise pretraining and fine-tuning, with better results than the traditional symmetric random parameters.Multiple autoencoders were stacked to form a stacked autoencoder [20, 21] whose main function is to extract deep characteristics and nonlinear dimension reduction. 
A stacked autoencoder combined with a supervised classifier can accomplish multicategory classification. Each autoencoder in the stacked structure (Figure 1) performs encoding and decoding and extracts features from the output of the previous autoencoder. The output of an autoencoder is a reconstruction or approximation of its input, so it cannot classify the input information directly without a supervised classifier. The three autoencoders in Figure 1 yield three hidden layers through feature extraction, and a supervised classifier can be added to the output layer to realize classification prediction.

Figure 1: The structure of the stacked autoencoder.

### 2.2. Sparse Autoencoder (SAE)

There were three layers in the autoencoder: the input layer, the hidden layer, and the output layer, with $n$, $d$, and $n$ nodes, respectively (Figure 2). The autoencoder attempts to approximate an identity function, so that the output data approximate the input data; the hidden layer activation value $a^{(2)} = (a^{(2)}_1, a^{(2)}_2, \dots, a^{(2)}_d)$ constitutes the features of the input vector.

Figure 2: The structure of the AE.

The encoding step of the stacked autoencoder was

$$h = f_{\theta_1}(x) = \sigma(W_1 x + b_1), \tag{1}$$

and the decoding step was

$$\hat{x} = f_{\theta_2}(h) = \sigma(W_2 h + b_2), \tag{2}$$

where $W_1$ and $W_2$ were the weight matrices from the input layer to the hidden layer and from the hidden layer to the output layer, $b_1$ and $b_2$ were the bias vectors of the hidden layer and the output layer, $\sigma(\cdot)$ was the logsig (sigmoid) function, and $\theta$ was the network parameter matrix.

The goal of the autoencoder is to find the optimal parameter matrix that minimizes the error between the input and the output.
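As a concrete illustration, the encoding and decoding of Equations (1)–(2), together with the sparse cost of Equations (3)–(6) defined below, can be sketched in numpy. The dimensions, random initialization, and hyperparameter values here are illustrative, not the paper's actual settings:

```python
import numpy as np

def sigmoid(z):
    # logsig activation used in Equations (1)-(2)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d, m = 8, 4, 10                      # input dim, hidden dim, sample count (illustrative)
W1 = rng.normal(0, 0.1, (d, n)); b1 = np.zeros((d, 1))
W2 = rng.normal(0, 0.1, (n, d)); b2 = np.zeros((n, 1))
X = rng.random((n, m))                  # columns are samples

H = sigmoid(W1 @ X + b1)                # Equation (1): encoding
X_hat = sigmoid(W2 @ H + b2)            # Equation (2): decoding

rho = 0.05                              # sparsity parameter, close to 0
rho_hat = H.mean(axis=1)                # Equation (4): mean activation per hidden unit
kl = np.sum(rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))   # Equation (6)

lam, beta = 1e-4, 3.0                   # weight decay and sparsity penalty coefficients
J = (0.5 / m) * np.sum((X_hat - X) ** 2) \
    + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) \
    + beta * kl                         # Equation (5): total sparse cost
```

Training would then minimize `J` with respect to the weights and biases; only the forward pass and cost evaluation are shown here.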
The reconstruction error loss function was expressed as

$$J_E(W, b) = \text{loss} + R = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\| h_{W,b}(x^{(i)}) - x^{(i)} \right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \left( W_{ji}^{(l)} \right)^2, \tag{3}$$

where loss was the reconstruction loss, $R$ was the weight attenuation term, which effectively prevents overfitting, $m$ was the number of samples, $x^{(i)}$ and $\hat{x}^{(i)} = h_{W,b}(x^{(i)})$ were the input and output of the $i$th sample, $n_l$ was the number of network layers, $s_l$ was the number of units in the $l$th layer, and $\lambda$ was the weight attenuation coefficient.

An SAE adds a sparsity restriction to the hidden layer of the autoencoder. The sparsity restriction suppresses the activation of network neurons to control the number of active units, achieving a more effective extraction of the data features. With $a_j^{(2)}(x)$ denoting the activation of hidden neuron $j$, the mean activation can be expressed as

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j^{(2)}\!\left(x^{(i)}\right). \tag{4}$$

A neuron was considered active when its output was close to 1 and inactive when its output was close to 0. Therefore, by introducing a sparsity parameter $\rho$ whose value approaches 0 and driving $\hat{\rho}_j = \rho$, most neurons can be inhibited. To realize the sparsity limitation, a sparse penalty term was added to the cost function, giving the total cost function

$$J(W, b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\| h_{W,b}(x^{(i)}) - x^{(i)} \right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \left( W_{ji}^{(l)} \right)^2 + \beta\, \mathrm{KL}\!\left(\rho \,\middle\|\, \hat{\rho}_j\right), \tag{5}$$

where $\beta$ was the coefficient of the sparse penalty and $\mathrm{KL}(\rho \| \hat{\rho}_j)$ was the sparse penalty over the hidden layer neurons:

$$\mathrm{KL}\!\left(\rho \,\middle\|\, \hat{\rho}_j\right) = \sum_{j=1}^{s_2}\left[ \rho \log\frac{\rho}{\hat{\rho}_j} + (1 - \rho)\log\frac{1 - \rho}{1 - \hat{\rho}_j} \right], \tag{6}$$

where $s_2$ was the number of neurons in the hidden layer. The sparse penalty attains its unique minimum when $\hat{\rho}_j = \rho$, so minimizing the penalty term drives the mean activation of the hidden layer toward the sparsity parameter.

### 2.3. ELM Modeling

Given $N$ distinct samples $(x_i, t_i)$, where $x_i = (x_{i1}, x_{i2}, \dots, x_{in})^T \in \mathbb{R}^n$ and $t_i = (t_{i1}, t_{i2}, \dots, t_{im})^T \in \mathbb{R}^m$, a neural network with $L$ hidden layer nodes can be expressed as

$$\sum_{i=1}^{L} \beta_i g_i(X_j) = \sum_{i=1}^{L} \beta_i g(W_i \cdot X_j + b_i) = o_j, \quad j = 1, \dots, N, \tag{7}$$

where $g(x)$ was the activation function, $W_i = (w_{i,1}, w_{i,2}, \dots, w_{i,n})^T$ was the weight vector between the input nodes and the $i$th hidden node, $\beta_i$ was the weight between the $i$th hidden node and the output nodes, $b_i$ was the bias of the $i$th hidden node, and $W_i \cdot X_j$ was the inner product of $W_i$ and $X_j$.

For the network output to match the targets exactly, it was necessary that

$$\sum_{j=1}^{N} \left\| o_j - t_j \right\| = 0. \tag{8}$$

Combining Equation (8) with Equation (7) produces

$$\sum_{i=1}^{L} \beta_i g(W_i \cdot X_j + b_i) = t_j, \quad j = 1, \dots, N. \tag{9}$$

If $H$ was the output of the hidden layer, $\beta$ the weight of $H$, and $T$ the desired output, then

$$H(W_1, \dots, W_L, b_1, \dots, b_L, X_1, \dots, X_N) = \begin{bmatrix} g(W_1 \cdot X_1 + b_1) & \cdots & g(W_L \cdot X_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(W_1 \cdot X_N + b_1) & \cdots & g(W_L \cdot X_N + b_L) \end{bmatrix}_{N \times L}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}. \tag{10}$$

From Equation (10), $H\beta = T$; training the single-hidden-layer network amounts to finding $\hat{W}_i$, $\hat{b}_i$, and $\hat{\beta}$ such that

$$\left\| H(\hat{W}_1, \dots, \hat{W}_L, \hat{b}_1, \dots, \hat{b}_L)\hat{\beta} - T \right\| = \min_{W_i, b_i, \beta} \left\| H(W_1, \dots, W_L, b_1, \dots, b_L)\beta - T \right\|, \quad i = 1, \dots, L, \tag{11}$$

which is equivalent to minimizing the loss function

$$E = \sum_{j=1}^{N} \left\| \sum_{i=1}^{L} \beta_i g(W_i \cdot X_j + b_i) - t_j \right\|^2. \tag{12}$$

A gradient algorithm would require adjusting all parameters iteratively, but in the ELM algorithm, once the input weights and hidden layer biases are fixed at random, the hidden layer output matrix $H$ is determined and the output weights are given by the least-squares solution

$$\hat{\beta} = H^{\dagger} T, \tag{13}$$

where $H^{\dagger}$ was the Moore-Penrose generalized inverse of $H$. The solution $\hat{\beta}$ has two properties: it has the smallest norm among all least-squares solutions, and it is unique. The structure of the ELM algorithm is shown in Figure 3.

Figure 3: The structure of the ELM.

### 2.4. SSDA-HELM-SOFTMAX Modeling

The SSDA-HELM-SOFTMAX model used the SAE as a pretrained front end that supplied the initial weights, providing the parameters with which the multilayer ELM model reached its optimal solution.
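The core ELM computation that this model builds on (Section 2.3, Equations (10)–(13)) — fixing random input weights and solving for the output weights with the pseudoinverse — might be sketched as follows; the sizes and the random data are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
N, n_in, L, m_out = 120, 4, 20, 3        # samples, inputs, hidden nodes, outputs (illustrative)
X = rng.random((N, n_in))
T = np.eye(m_out)[rng.integers(0, m_out, N)]   # one-hot target matrix, N x m

# Input weights W and biases b are drawn at random once and never updated (Equation (7)).
W = rng.normal(0, 1, (n_in, L))
b = rng.normal(0, 1, (1, L))
H = sigmoid(X @ W + b)                   # hidden-layer output matrix, Equation (10), N x L

beta = np.linalg.pinv(H) @ T             # Equation (13): minimum-norm least-squares solution
predictions = H @ beta                   # approximates T, Equation (9)
```

Because no iteration is involved, the only fitted quantity is `beta`, which is why ELM training is fast and avoids local minima.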
Then, SOFTMAX was used for agricultural input identification in the output layer of the model. During training, the physical and chemical parameter values of agricultural inputs collected by the sensors were sent to the SAE input layer as training samples. The SAE hidden layers extracted the relevant features from these complex samples and obtained the initial weights by unsupervised learning. The decoding part of the SAE was then removed and connected to the ELM, the pretrained weights were assigned as the initial values of the ELM, and SOFTMAX was used for classification. The diagram of the SSDA-HELM-SOFTMAX model constructed in this paper is shown in Figure 4.

Figure 4: SSDA-HELM-SOFTMAX model.

The specific ELM algorithm was as follows:

(1) The number of hidden layers of the network was initialized to $k$, with $X_1 = X$ and $X = (x_1, x_2, \dots, x_m)^T$; the number of hidden layer nodes was $\hat{N}$; the decoding part of the SSDA was deleted and connected to the HELM.
(2) Starting from the first layer, the SSDA-HELM network was initialized using the $W_i$ and $b_i$ obtained from SSDA training as input weights.
(3) The input weights $W_i$ and hidden layer biases $b_i$ generated by pretraining were used to calculate the hidden layer output matrix $A_i = H_{i-1} W_i$.
(4) According to ELM theory, the output weight matrix of the neural network was calculated as $\hat{\beta} = A^{\dagger} T$.
(5) The output was calculated as $\hat{H}_i = g(H_{i-1} \cdot \hat{\beta})$, where $\hat{H}_i$ and $H_{i-1}$ were the output and input of layer $i$, respectively, and $g(\cdot)$ was the activation function of the hidden layer. Steps (2)–(5) were then repeated for the next layer.
(6) The extracted features were used as input values and sent to the SOFTMAX classifier for classification prediction.

## 3. Monitoring Platform Design

### 3.1. Software Architecture

The software architecture of the agricultural input monitoring system is shown in Figure 5. It was mainly composed of four modules: the data collection service, the data mutation detection service, the external display plan task, and the server side.
The functions of each part were as follows:

(1) Data Collection Service. The acquisition program was designed using the TCP protocol and multithreaded programming. The sensors were queried every 15 seconds for real-time data such as pH value, EC value, temperature, and humidity, which were stored in the database.
(2) Data Mutation Detection Service. The sensor data stored in the database were monitored in real time. When a sensor reading changed by more than 20% three consecutive times, the data were considered to have mutated, and the prediction module was called to predict the type of agricultural input.
(3) External Display Plan Task Service. This module computed the data required by the display pages; for example, it aggregated the raw data at regular intervals into hourly, daily, and monthly averages.
(4) Server-Side Architecture. At the presentation layer, AJAX was used to request server-side data, the Handlebars engine was used for page rendering, and graphs and tables were used for data visualization. In the business layer, the business logic was designed according to the application requirements; the WCF framework and the general handler (ashx) provided web interface services for the visualization layer. In the data layer, an ORM framework was designed and implemented on top of the database, and the ADO.NET class library was used for database access, providing data services for the business layer.

Figure 5: Schematic diagram of software architecture.

### 3.2. Hardware Design

The hardware structure is shown in Figure 6.
The hardware was divided by function into four modules: sensor data acquisition, data conversion, data transmission, and power supply.

Figure 6: Hardware circuit architecture.

In the hardware system, the sensor module included pH sensors, electrical conductivity sensors, temperature sensors, and moisture sensors. During data collection, the RS485 serial communication module provided multisensor data communication services, with polling used to arbitrate communication among the different sensors. The master 485 chip converted the differential signal on the main bus into TTL levels and distributed it by broadcast to the slave 485 chips on the other branches; the differential signals from the slave chips were converted and sent onto each branch bus. During data transmission, LoRa technology was applied with the SX1301 wireless transmission chip, and a cyclic interleaving error-correction coding algorithm was adopted to improve error-correction ability. Under sudden interference, burst errors of up to 64 consecutive bits could be corrected, which effectively improved the anti-interference performance of sensor transmission.
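The mutation-detection rule from Section 3.1 — flagging a mutation when a reading changes by more than 20% for three consecutive polls — might look like the following sketch; the function name and list-based interface are illustrative, not the system's actual code:

```python
def is_mutation(readings, threshold=0.2, consecutive=3):
    """Return True if the relative change between successive readings
    exceeds `threshold` for `consecutive` polls in a row."""
    run = 0
    for prev, curr in zip(readings, readings[1:]):
        if prev != 0 and abs(curr - prev) / abs(prev) > threshold:
            run += 1
            if run >= consecutive:
                return True
        else:
            run = 0          # the streak is broken by any small change
    return False

# A gradual drift is ignored; three successive >20% jumps trigger detection.
stable = [7.0, 7.1, 7.0, 7.2, 7.1]      # e.g. pH hovering around 7
jump = [7.0, 9.0, 12.0, 16.0]           # three consecutive large changes
```

In the deployed service this check would run against the freshly polled database rows and, on a hit, invoke the prediction module.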
## 4. The Detection Process of the Agricultural Inputs

The prediction model of agricultural inputs comprises real-time data collection, data feature analysis, data preprocessing, SSDA pretraining, SSDA-HELM feature extraction, and a SOFTMAX classifier. The technical process is shown in Figure 7.

Figure 7: The flow diagram of modeling.

### 4.1. Data Collection

Ammonium sulfate (SinoChem, China), potash fertilizer (K2O, SinoChem, China), phosphate fertilizer (P2O5, SinoChem, China), Bordeaux mixture (TaoChun, China), metalaxyl (SinoChem, China), imidacloprid (Bayer, Germany), pendimethalin (SinoChem, China), and bromothalonil (SinoChem, China), all commonly used in the cultivation of Agastache rugosa, were selected as experimental objects. Ammonium sulfate, potash fertilizer, and phosphate fertilizer are common fertilizers. Bordeaux mixture and metalaxyl are pesticides commonly used to treat brown patch and fusarium wilt, respectively. The latter three are commonly used to kill aphids, to weed, and to sterilize, respectively. Aqueous solutions of these eight products were diluted according to the label directions for use. Twenty-four pots (20 cm × 20 cm × 25 cm in length, width, and height) with drainage holes at the bottom were filled with soil and used to simulate the planting environment.
Three pots were used for parallel experiments on each agricultural input. In each pot, four sensors measuring electrical conductivity (EC), temperature, moisture, and pH were inserted into the soil to record the chemical changes in real time. The experiment was carried out from October 2017 to March 2018. During the experiment, 200 ml of each agricultural input was put into a sprinkling can and sprayed into the soil. To enlarge the data set, the same experiment was performed 50 times in each pot, yielding 150 experimental records for each agricultural input.

### 4.2. Data Analysis and Preprocessing

The raw sensor data were noisy and difficult to analyze directly, but for agricultural inputs diluted in the same proportion, physical and chemical characteristics such as pH value and conductivity were relatively fixed; they changed after an input was applied and were affected by the soil chemistry and the sensor contact time. Therefore, the sudden changes in the sensor data could be analyzed to find the inherent law.

Owing to insufficient contact between the sensors and the soil, an unstable solar power supply, and other causes, the sensor data contained missing values and outliers. This paper used the mean method to handle data anomalies: the sensors were polled every 15 seconds, and missing values were filled with the average of neighboring readings. When an outlier occurred, it was retained if the other sensors showed abnormalities at the same time and discarded otherwise.

For data denoising, this paper used threshold-based wavelet denoising [22, 23] to remove the noise from the key input factors of the model, providing a good data foundation for the construction of the prediction models. Further, in the normalization step, the z-score method was used to normalize the feature data X of the sample set, as shown in Equation (14).
$$y_i = \frac{x_i - \mu}{\sigma}, \quad 1 \le i \le n, \tag{14}$$

where $y_i$ was the eigenvalue of the $i$th data point after normalization, $x_i$ was the eigenvalue of the $i$th data point, $\mu$ was the mean of all samples, and $\sigma$ was the standard deviation of all samples.

### 4.3. SSDA Pretraining and Modeling

The diagram of SSDA pretraining is shown in Figure 8.

Figure 8: The diagram of SSDA pretraining.

The network parameters were set as follows: the learning rate was 0.1, the maximum number of iterations was 400 for pretraining and 300 for fine-tuning, the sparsity parameter was 0.5, the sparsity penalty coefficient was 3, and the activation function was the sigmoid function. During SSDA pretraining, features were extracted from the complex input data, and layer-wise pretraining and fine-tuning based on unsupervised learning were used to obtain the initial weights. After pretraining, the SSDA-HELM model was constructed by removing the decoding part of the SSDA and connecting it to the HELM. The SAE training weights were used to initialize the SSDA-HELM model, which extracted the characteristic values of the agricultural inputs. The extracted feature values were then sent to the SOFTMAX classifier to obtain the final agricultural input prediction model (Figure 4), which could predict the agricultural inputs from test sample data.
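The z-score normalization of Equation (14) in Section 4.2 is straightforward to implement; a minimal sketch with illustrative sensor readings:

```python
import numpy as np

def z_score(x):
    # Equation (14): y_i = (x_i - mu) / sigma
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return (x - mu) / sigma

ec = np.array([42.0, 55.0, 61.0, 48.0, 94.0])   # illustrative EC readings
y = z_score(ec)
# The normalized features have zero mean and unit standard deviation,
# which puts EC, temperature, moisture, and pH on a comparable scale.
```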
## 5. Results and Analysis

### 5.1. Data Analysis

After the agricultural inputs were applied to the soil, the data collected by the sensors reflected the physical and chemical properties of the inputs. In this paper, eight agricultural inputs (ammonium sulfate, potash fertilizer, phosphate fertilizer, Bordeaux mixture, metalaxyl, imidacloprid, pendimethalin, and bromothalonil) were sprayed onto the soil, and the data from 20 experiments were randomly selected for observation of the electrical conductivity (EC) and pH data (Figures 9 and 10).
EC refers to the ability to conduct electric current and measures the concentration of soluble conductive ions; pH measures the hydrogen ion concentration in solution. Since different agricultural inputs differed in conductivity and hydrogen ion concentration, EC and pH could be used to distinguish them. The EC changes in response to the pesticides were small, while those caused by the fertilizers were comparatively large. Among them, potash fertilizer had the greatest impact on the EC value, which rose above 100 while that of the other agricultural inputs stayed below 80. For the pH value, metalaxyl and imidacloprid were much lower than the other agricultural inputs, showing that the agricultural inputs differ in hydrogen ion concentration. These differences could therefore be used to distinguish them.

Figure 9: EC value after input.

Figure 10: pH value after input.

### 5.2. Model and Analysis

In the modeling process, the differences in EC, temperature, moisture, and pH before and after the agricultural inputs were sprayed into the soil were used as the model inputs, the agricultural input categories were used as the model output, and the leave-one-out method [24] was used to cross-validate the model. Of the 1200 samples, 1199 were taken as the training set each time, and the remaining sample was the test set. Each sample was tested individually, and the performance of the method was obtained by averaging the test results.

Because the numbers of nodes and layers of the SSDA-HELM network directly affect the performance of the algorithm, pairwise combinations of 2, 3, 4, and 5 hidden layers with 50, 100, 200, 300, and 400 nodes were evaluated, and the root-mean-square error (RMSE) was used to find the optimal parameters.
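The leave-one-out protocol described above might be sketched as follows; the classifier (a simple nearest-centroid stand-in) and the data are placeholders, not the paper's SSDA-HELM model:

```python
import numpy as np

def leave_one_out_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: train on all samples but one,
    test on the held-out sample, and average over all samples."""
    correct = 0
    n = len(X)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        correct += int(predict(model, X[i:i + 1])[0] == y[i])
    return correct / n

# Illustrative stand-in classifier: nearest centroid per class.
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = list(model)
    cents = np.stack([model[c] for c in classes])
    return [classes[int(np.argmin(((x - cents) ** 2).sum(axis=1)))] for x in X]
```

With 1200 samples, each fold trains on 1199 samples and tests on the remaining one, exactly as in the paper's evaluation.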
The RMSE values for the different network configurations are shown in Table 1.

Table 1: The RMSE of the different model configurations.

| Layers \ Nodes | 50 | 100 | 200 | 300 | 400 |
|---|---|---|---|---|---|
| 2 | 0.0456 | 0.0413 | 0.0323 | 0.0183 | 0.0534 |
| 3 | 0.0437 | 0.0456 | 0.0319 | 0.0112 | 0.0478 |
| 4 | 0.0498 | 0.0406 | 0.0564 | 0.0140 | 0.0589 |
| 5 | 0.0473 | 0.0432 | 0.0589 | 0.0196 | 0.0673 |

The model performed best with 3 layers and 300 neurons in the first layer. In the pretraining process, an autoencoder was used to train each layer of the SDA network without supervision. L-BFGS [25] was used for training, and the other parameters were the same as in the previous study [18]. After training, the weight parameters of the trained input network were used as the initial parameters of the SSDA-HELM network. Following the ELM network principle, the output network parameters were obtained by the least-squares method, SOFTMAX was connected, and supervised fine-tuning was executed. Because the ELM network obtains its output parameters by least squares rather than gradient descent, the problems of local convergence and poor generalization were avoided. Meanwhile, the network instability caused by the random initial values of the HELM network was resolved by pretraining the network parameters with the autoencoder.

In the model training process, SSDA was first used for unsupervised training, SSDA-HELM was then used to extract the data features, and finally the SOFTMAX classifier was connected and fine-tuned to improve model performance. The feature data, shown in Table 2, are the result of the network's nonlinear fitting and feature extraction of the input data, a higher-dimensional mapping of the inputs. The prediction result is shown in Figure 11.

Table 2: The characteristic data.
| Inputs | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 |
|---|---|---|---|---|---|---|
| Potash fertilizer | 0.682953 | 0.227409 | 0.226057 | 0.494851 | 0.597923 | 0.229395 |
| Potash fertilizer | 0.569900 | 0.286457 | 0.286350 | 0.622601 | 0.205575 | 0.351254 |
| Potash fertilizer | 0.271896 | 0.729355 | 0.723736 | 0.642849 | 0.281236 | 0.668261 |
| Ammonium sulfate | 0.243788 | 0.251541 | 0.673414 | 0.262289 | 0.249249 | 0.572459 |
| Ammonium sulfate | 0.742556 | 0.345961 | 0.693896 | 0.737977 | 0.347710 | 0.726792 |
| Ammonium sulfate | 0.275226 | 0.686453 | 0.347621 | 0.292159 | 0.683852 | 0.550425 |
| Imidacloprid | 0.212000 | 0.715412 | 0.462585 | 0.447170 | 0.638341 | 0.233302 |
| Imidacloprid | 0.356072 | 0.629524 | 0.687172 | 0.573045 | 0.759579 | 0.637693 |
| Imidacloprid | 0.620000 | 0.641273 | 0.539924 | 0.328573 | 0.529835 | 0.222942 |
| Bordeaux mixture | 0.252055 | 0.221388 | 0.481921 | 0.593715 | 0.236094 | 0.627335 |
| Bordeaux mixture | 0.333259 | 0.288967 | 0.676369 | 0.200457 | 0.637080 | 0.762575 |
| Bordeaux mixture | 0.650558 | 0.727547 | 0.512039 | 0.281435 | 0.225168 | 0.544173 |
| Metalaxyl | 0.431794 | 0.667201 | 0.685735 | 0.691593 | 0.227002 | 0.714891 |
| Metalaxyl | 0.582159 | 0.472921 | 0.581209 | 0.579150 | 0.340513 | 0.547994 |
| Metalaxyl | 0.307182 | 0.772803 | 0.718831 | 0.712972 | 0.631892 | 0.542898 |
| Phosphate fertilizer | 0.562710 | 0.664056 | 0.561516 | 0.488892 | 0.688213 | 0.752463 |
| Phosphate fertilizer | 0.722078 | 0.472886 | 0.725181 | 0.612861 | 0.670366 | 0.594326 |
| Phosphate fertilizer | 0.569413 | 0.778950 | 0.562879 | 0.672525 | 0.328124 | 0.585843 |
| Pendimethalin | 0.770665 | 0.250254 | 0.490414 | 0.236053 | 0.377339 | 0.704024 |
| Pendimethalin | 0.573281 | 0.334498 | 0.651509 | 0.286713 | 0.593437 | 0.333318 |
| Pendimethalin | 0.571835 | 0.650748 | 0.527259 | 0.732943 | 0.371619 | 0.282657 |
| Bromothalonil | 0.592924 | 0.208807 | 0.689204 | 0.352409 | 0.762102 | 0.235710 |
| Bromothalonil | 0.202499 | 0.732488 | 0.570285 | 0.726488 | 0.580086 | 0.282564 |
| Bromothalonil | 0.277006 | 0.226396 | 0.729148 | 0.321272 | 0.566302 | 0.723354 |

Figure 11: The predicted result of SSDA-HELM-SOFTMAX for the test sets (ordinate 1: imidacloprid, 2: Bordeaux mixture, 3: metalaxyl, 4: phosphate fertilizer, 5: pendimethalin, 6: potash fertilizer, 7: ammonium sulfate, and 8: bromothalonil).

To evaluate the performance of the model, other models, namely BP, DBN [18], and SAE, were also built and compared with SSDA-HELM-SOFTMAX.
The BP, SAE, and DBN models were each used to extract features, with a SOFTMAX classifier added for prediction. Table 3 compares the prediction accuracies.

Table 3: The comparison of prediction accuracy.

| Model | Input layer (neurons) | Hidden layers (neurons) | Output layer (neurons) | Accuracy |
|---|---|---|---|---|
| BP | 8 | 300-100-50 | 8 | 93% |
| DBN-SOFTMAX | 8 | 300-100-50 | 8 | 95.3% |
| SAE-SOFTMAX | 8 | 300-100-50 | 8 | 95.5% |
| SSDA-HELM-SOFTMAX | 8 | 300-100-50 | 8 | 97.08% |

The comparison showed that SAE-SOFTMAX and DBN-SOFTMAX were more accurate than BP because unsupervised layer-wise pretraining [26] was adopted, and the extracted features were of better quality than those obtained by back-propagation. The difference between SAE and DBN is that SAE finds the main feature directions through nonlinear transformation, whereas DBN extracts a high-level representation based on the probability distribution of the samples. The results in Table 4 indicate that the high-level features extracted by DBN from the sample probability distribution fit the characteristics of the input feature parameters well. The SSDA-HELM model had the highest prediction accuracy because the SSDA model was used for pretraining and HELM was then used to calculate the output weights of the neural network. Compared with other deep learning methods, SSDA provided optimal parameters for initializing the HELM, and the HELM could be trained stably and quickly to obtain good classification results. This method therefore avoided inappropriate initialization weights, local optima, and an inappropriate learning rate, and was more stable with stronger generalization ability than SAE and DBN.

Table 4: The performance comparison between the SSDA-HELM, BP, SAE, and DBN models.

| Models | R²cal | RMSEC | R²CV | RMSECV |
|---|---|---|---|---|
| SSDA-HELM | 0.99 | 0.02 | 0.99 | 0.12 |
| DBN | 0.99 | 0.03 | 0.97 | 0.15 |
| SAE | 0.99 | 0.09 | 0.96 | 0.35 |
| BP | 0.99 | 0.09 | 0.98 | 0.21 |

The coefficient of determination (R²) and root-mean-square error (RMSE) of the BP, SAE, DBN, and SSDA-HELM models are further compared in Table 4.
It could be observed that the performance of the training set of the SAE model was the same as that of the BP model. However, the cross-validation showed that the R2 of the SAE model was lower than that of the BP-NN model, while the RMSE was higher, indicating that SAE was not as stable as BP, although its prediction accuracy was superior.After feature extraction, theR2 of the SSDA-HELM model for the training set and cross-validation were the highest of all the models, both reaching 0.99. This indicated that the model was stable. Meanwhile, the RMSE for the training set and cross-validation of the SSDA-HELM model were 0.03 and 0.15, respectively, smaller than the other models. Unlike the DBN model, since the output matrix of HELM was generated by the least squares solution, once the input weights and hidden layer offsets were determined, the output matrix was uniquely determined. In this process, weight optimization was not a problem; the issues of local optimums, inappropriate learning rate, and overfitting were avoided. Therefore, the SSDA-HELM model was more stable than the DBN model. In terms of accuracy, the SSDA-HELM model was slightly lower than the DBN model [18], which was mainly due to the similarity of some inputs of Agastache rugosa (Figures 9 and 10) leading to more labeled categories, which decreases accuracy. In addition, the accuracy of the SSDA-HELM model was higher than that of the DBN model under the same experimental conditions. ## 5.1. Data Analysis After the agricultural inputs were applied to the soil, the data collected by the sensors were impacted by the physical and chemical properties of the inputs. In this paper, eight agricultural inputs (ammonium sulfate, potash fertilizer, phosphate fertilizer, Bordeaux mixture, metalaxyl, imidacloprid, pendimethalin, and bromothalonil) were sprayed onto soil, and the data from 20 experiments were randomly selected for observation of the electric conductivity (EC) and pH data (Figures9 and 10). 
EC refers to the ability to conduct electric current and measures the concentration of soluble conductive ions; pH refers to the hydrogen ion concentration of the solution. Since different agricultural inputs had different conductivity and hydrogen ion content, EC and pH could be used to distinguish them. The EC changes caused by pesticides were small, while those caused by fertilizers were comparatively large. Among them, potash fertilizer had the greatest impact on the EC value, raising it above 100, while all other agricultural inputs stayed below 80. For pH, the values for metalaxyl and imidacloprid were much lower than those of the other agricultural inputs, showing that different agricultural inputs differ somewhat in hydrogen ion concentration. Therefore, these differences could be used to distinguish them. Figure 9 EC value after input. Figure 10 pH value after input. ## 5.2. Model and Analysis In the modeling process, the EC, temperature, moisture, and pH differences before and after the agricultural inputs were sprayed onto the soil were used as model inputs, the agricultural input categories were used as the model output, and the leave-one-out method [24] was used to cross-validate the model. Among the 1200 samples, 1199 samples were taken each time as the training set and the remaining sample as the test set; each sample was tested individually, and the performance of the method was obtained by averaging the test results. Because the number of nodes and layers of the SSDA-HELM network directly affected the performance of the algorithm, pairwise combinations of 2, 3, 4, and 5 hidden layers and 50, 100, 200, 300, and 400 nodes were created, and the root-mean-square error (RMSE) was used to find the optimal parameters.
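The leave-one-out protocol described above can be sketched as follows. The one-nearest-neighbour `fit`/`predict` pair is only an illustrative stand-in to make the sketch runnable; in the paper, SSDA-HELM-SOFTMAX is trained at this step.

```python
import numpy as np

def leave_one_out_accuracy(X, y, fit, predict):
    """Hold each sample out once, train on the rest, and average the results."""
    n = len(X)
    hits = 0
    for i in range(n):
        keep = np.arange(n) != i          # all samples except the held-out one
        model = fit(X[keep], y[keep])
        hits += int(predict(model, X[i]) == y[i])
    return hits / n

# Stand-in 1-nearest-neighbour "model", used only to make the sketch runnable;
# the paper trains SSDA-HELM-SOFTMAX at this step instead.
fit = lambda X, y: (X, y)
predict = lambda m, x: m[1][np.argmin(np.linalg.norm(m[0] - x, axis=1))]

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
acc = leave_one_out_accuracy(X, y, fit, predict)
```

With the paper's 1200 samples, the loop runs 1200 folds of 1199 training samples each, exactly as described.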
The RMSE for each network configuration is shown in Table 1.

Table 1 The value of RMSE for different model configurations.

| Layers \ Nodes | 50 | 100 | 200 | 300 | 400 |
| --- | --- | --- | --- | --- | --- |
| 2 | 0.0456 | 0.0413 | 0.0323 | 0.0183 | 0.0534 |
| 3 | 0.0437 | 0.0456 | 0.0319 | 0.0112 | 0.0478 |
| 4 | 0.0498 | 0.0406 | 0.0564 | 0.0140 | 0.0589 |
| 5 | 0.0473 | 0.0432 | 0.0589 | 0.0196 | 0.0673 |

We can see that the model performed best with 3 hidden layers and 300 neurons in the first layer. In the pretraining process, an autoencoder was used for unsupervised training of each layer of the SDA network. L-BFGS [25] was used for training, and the other parameters were the same as the settings in our previous study [18]. After training was completed, the weight parameters of the trained input network were used as the initial parameters of the SSDA-HELM network. Following the ELM network principle, the output network parameters were obtained with the least-squares method, a SOFTMAX layer was connected, and supervised fine-tuning was then executed. Because the ELM network used the least-squares method instead of a gradient-descent algorithm to obtain the output network parameters, the problems of local convergence and poor generalization performance were avoided. Meanwhile, the network instability caused by the random initial values of the HELM network was resolved by pretraining the network parameters with the autoencoder. In the model training process, SSDA was first used for unsupervised training, SSDA-HELM was then used to extract data features, and finally the SOFTMAX classifier was connected and fine-tuned to improve model performance. The feature data, shown in Table 2, are the nonlinear mapping of the input data produced by the network, i.e., a higher-dimensional representation of the input. The prediction result is shown in Figure 11.

Table 2 The characteristic data.
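The configuration search can be expressed as a simple minimum over the grid; the dictionary below merely restates the RMSE values reported in Table 1.

```python
# Grid mirroring Table 1: keys are (hidden layers, first-layer nodes),
# values are the reported cross-validated RMSE scores.
table1 = {
    (2, 50): 0.0456, (2, 100): 0.0413, (2, 200): 0.0323, (2, 300): 0.0183, (2, 400): 0.0534,
    (3, 50): 0.0437, (3, 100): 0.0456, (3, 200): 0.0319, (3, 300): 0.0112, (3, 400): 0.0478,
    (4, 50): 0.0498, (4, 100): 0.0406, (4, 200): 0.0564, (4, 300): 0.0140, (4, 400): 0.0589,
    (5, 50): 0.0473, (5, 100): 0.0432, (5, 200): 0.0589, (5, 300): 0.0196, (5, 400): 0.0673,
}

def best_config(scores):
    """Return the (layers, nodes) pair with the smallest RMSE."""
    return min(scores, key=scores.get)

layers, nodes = best_config(table1)
# Selects 3 hidden layers with 300 nodes in the first layer (RMSE 0.0112),
# matching the choice made in the text.
```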
| Input | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Potash fertilizer | 0.682953 | 0.227409 | 0.226057 | 0.494851 | 0.597923 | 0.229395 |
| Potash fertilizer | 0.569900 | 0.286457 | 0.286350 | 0.622601 | 0.205575 | 0.351254 |
| Potash fertilizer | 0.271896 | 0.729355 | 0.723736 | 0.642849 | 0.281236 | 0.668261 |
| Ammonium sulfate | 0.243788 | 0.251541 | 0.673414 | 0.262289 | 0.249249 | 0.572459 |
| Ammonium sulfate | 0.742556 | 0.345961 | 0.693896 | 0.737977 | 0.347710 | 0.726792 |
| Ammonium sulfate | 0.275226 | 0.686453 | 0.347621 | 0.292159 | 0.683852 | 0.550425 |
| Imidacloprid | 0.212000 | 0.715412 | 0.462585 | 0.447170 | 0.638341 | 0.233302 |
| Imidacloprid | 0.356072 | 0.629524 | 0.687172 | 0.573045 | 0.759579 | 0.637693 |
| Imidacloprid | 0.620000 | 0.641273 | 0.539924 | 0.328573 | 0.529835 | 0.222942 |
| Bordeaux mixture | 0.252055 | 0.221388 | 0.481921 | 0.593715 | 0.236094 | 0.627335 |
| Bordeaux mixture | 0.333259 | 0.288967 | 0.676369 | 0.200457 | 0.637080 | 0.762575 |
| Bordeaux mixture | 0.650558 | 0.727547 | 0.512039 | 0.281435 | 0.225168 | 0.544173 |
| Metalaxyl | 0.431794 | 0.667201 | 0.685735 | 0.691593 | 0.227002 | 0.714891 |
| Metalaxyl | 0.582159 | 0.472921 | 0.581209 | 0.579150 | 0.340513 | 0.547994 |
| Metalaxyl | 0.307182 | 0.772803 | 0.718831 | 0.712972 | 0.631892 | 0.542898 |
| Phosphate fertilizer | 0.562710 | 0.664056 | 0.561516 | 0.488892 | 0.688213 | 0.752463 |
| Phosphate fertilizer | 0.722078 | 0.472886 | 0.725181 | 0.612861 | 0.670366 | 0.594326 |
| Phosphate fertilizer | 0.569413 | 0.778950 | 0.562879 | 0.672525 | 0.328124 | 0.585843 |
| Pendimethalin | 0.770665 | 0.250254 | 0.490414 | 0.236053 | 0.377339 | 0.704024 |
| Pendimethalin | 0.573281 | 0.334498 | 0.651509 | 0.286713 | 0.593437 | 0.333318 |
| Pendimethalin | 0.571835 | 0.650748 | 0.527259 | 0.732943 | 0.371619 | 0.282657 |
| Bromothalonil | 0.592924 | 0.208807 | 0.689204 | 0.352409 | 0.762102 | 0.235710 |
| Bromothalonil | 0.202499 | 0.732488 | 0.570285 | 0.726488 | 0.580086 | 0.282564 |
| Bromothalonil | 0.277006 | 0.226396 | 0.729148 | 0.321272 | 0.566302 | 0.723354 |

Figure 11 The predicted result of SSDA-HELM-SOFTMAX for test sets (ordinate 1: imidacloprid, 2: Bordeaux mixture, 3: metalaxyl, 4: phosphate fertilizer, 5: pendimethalin, 6: potash fertilizer, 7: ammonium sulfate, and 8: bromothalonil). In order to evaluate the performance of the model, other models, namely BP, DBN [18], and SAE, were also built to compare with SSDA-HELM-SOFTMAX.
The BP, SAE, and DBN models were used to extract features, and a SOFTMAX classifier was added for prediction. Table 3 shows the prediction accuracy comparison.

Table 3 The comparison of prediction accuracy.

| Model | Input layer (neurons) | Hidden layers (neurons) | Output layer (neurons) | Accuracy |
| --- | --- | --- | --- | --- |
| BP | 8 | 300-100-50 | 8 | 93% |
| DBN-SOFTMAX | 8 | 300-100-50 | 8 | 95.3% |
| SAE-SOFTMAX | 8 | 300-100-50 | 8 | 95.5% |
| SSDA-HELM-SOFTMAX | 8 | 300-100-50 | 8 | 97.08% |

The comparison showed that SAE-SOFTMAX and DBN-SOFTMAX were more accurate than BP because unsupervised layer-wise pretraining [26] was adopted, and the quality of the extracted features was better than with the back-propagation method. The difference between SAE and DBN was that SAE found the main feature directions through nonlinear transformation, while DBN extracted a high-level representation based on the probability distribution of the samples. The results in Table 4 indicated that DBN's high-level feature extraction based on the sample probability distribution was more in line with the characteristics of the input feature parameters. The SSDA-HELM model had the highest prediction accuracy because the SSDA model was used for pretraining and HELM was then used to calculate the neural network output weights. Compared with other deep learning methods, SSDA could obtain good parameters for initializing HELM, and HELM could be trained stably and quickly to yield good classification results. This method therefore avoided inappropriate initialization weights, local optima, and an inappropriate learning rate, and was more stable and generalized better than SAE and DBN.

Table 4 The performance comparison between the SSDA-HELM, BP, SAE, and DBN models.

| Models | R²cal | RMSEC | R²CV | RMSECV |
| --- | --- | --- | --- | --- |
| SSDA-HELM | 0.99 | 0.02 | 0.99 | 0.12 |
| DBN | 0.99 | 0.03 | 0.97 | 0.15 |
| SAE | 0.99 | 0.09 | 0.96 | 0.35 |
| BP | 0.99 | 0.09 | 0.98 | 0.21 |

The coefficient of determination (R²) and root-mean-square error (RMSE) of BP, SAE, DBN, and SSDA-HELM are further compared in Table 4.
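The two scores reported per model follow their standard definitions; a minimal sketch (the toy target and prediction values below are hypothetical, not the paper's data):

```python
import numpy as np

def rmse(y, y_hat):
    """Root-mean-square error between targets and predictions."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r2(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical toy values for illustration only.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
scores = (r2(y_true, y_pred), rmse(y_true, y_pred))
```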
It could be observed that the training-set performance of the SAE model was the same as that of the BP model. However, cross-validation showed that the R² of the SAE model was lower than that of the BP model while its RMSE was higher, indicating that SAE was not as stable as BP even though its prediction accuracy was superior. After feature extraction, the R² of the SSDA-HELM model for both the training set and cross-validation was the highest of all the models, reaching 0.99 in each case, indicating that the model was stable. Meanwhile, the RMSE values of the SSDA-HELM model for the training set and cross-validation were 0.02 and 0.12, respectively, smaller than those of the other models. Unlike in the DBN model, the output matrix of HELM was generated by the least-squares solution, so once the input weights and hidden layer offsets were determined, the output matrix was uniquely determined. In this process, iterative weight optimization was not required, and the issues of local optima, inappropriate learning rate, and overfitting were avoided. Therefore, the SSDA-HELM model was more stable than the DBN model. In terms of accuracy, the SSDA-HELM model was slightly lower than the DBN model reported in our previous study [18], mainly because the similarity of some inputs of Agastache rugosa (Figures 9 and 10) led to more labeled categories, which decreases accuracy. Under the same experimental conditions, however, the accuracy of the SSDA-HELM model was higher than that of the DBN model. ## 6. Conclusions The complex and changeable environment of Agastache rugosa cultivation means many factors influence the nonlinear physicochemical parameters of agricultural inputs, and traditional neural network classification used to predict agricultural inputs suffers from local convergence, poor computational efficiency, and poor generalization performance under these circumstances. To minimize these problems, this paper tested an input prediction model based on SSDA-HELM-SOFTMAX to predict inputs in real time.
This model used the HELM to calculate the output network weights without iterative feedback adjustment of the weights. It had excellent characteristics, such as fast learning speed, strong generalization ability, and resistance to becoming trapped in locally optimal solutions. Meanwhile, the network instability caused by the random initial values of the HELM network was resolved by pretraining the autoencoder network parameters to initialize the parameters of the SSDA-HELM model. Experiments showed that the accuracy of the method proposed in this paper reached 97.08%, which was 4.08%, 1.78%, and 1.58% higher than the BP, DBN-SOFTMAX, and SAE-SOFTMAX neural networks, respectively. Therefore, the model proposed in this paper was effective and feasible, with good prediction accuracy and generalization performance, and can provide a theoretical basis and parameter support for real-time online prediction of agricultural inputs. However, quantitative detection was beyond the scope of this paper; it would require more sensitive sensors and expanded experiments with different agricultural input concentrations. In addition, for inputs beyond the eight types studied in this paper, the model would not be applicable; the range of input types would need to be expanded and the model retrained. Nevertheless, this paper gives a new idea and can provide a theoretical basis and method support for real-time online prediction of agricultural inputs. --- *Source: 1015391-2021-11-24.xml*
# Accurate Identification of Agricultural Inputs Based on Sensor Monitoring Platform and SSDA-HELM-SOFTMAX Model

**Authors:** Juan Zou; Hanjing Jiang; Qingxiu Wang; Ningxia Chen; Ting Wu; Ling Yang

**Journal:** Journal of Sensors (2021)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1015391
--- ## Abstract The unreliability of traceability information on agricultural inputs has become one of the main factors hindering the development of traceability systems. At present, the major detection techniques for agricultural inputs are chemical residue tests at the postproduction stage. In this paper, a new detection method based on sensors and an artificial intelligence algorithm was proposed for detecting the agricultural inputs commonly used in Agastache rugosa cultivation. An agricultural input monitoring platform comprising a software system and hardware circuit was designed and built. A model called stacked sparse denoising autoencoder-hierarchical extreme learning machine-softmax (SSDA-HELM-SOFTMAX) was put forward to achieve accurate and real-time prediction of agricultural input varieties. The experiments showed that the combination of sensors and the discriminant model could accurately classify different agricultural inputs. The accuracy of SSDA-HELM-SOFTMAX reached 97.08%, which was 4.08%, 1.78%, and 1.58% higher than traditional BP neural network, DBN-SOFTMAX, and SAE-SOFTMAX models, respectively. Therefore, the method proposed in this paper was shown to be effective, accurate, and feasible and provides a new way to detect agricultural inputs online. --- ## Body ## 1. Introduction In recent years, agricultural product traceability systems have gradually been applied in actual production, but manually entered traceability information struggles to gain the trust of consumers and regulators, and this lack of trust has become one of the main factors hindering the uptake of traceability systems. Three main factors affect the quality and safety of agricultural products: air pollution, soil pollution, and agricultural input pollution [1].
Among them, agricultural inputs refer to products permitted for use in organic farming, including feedstuffs, fertilizers, and permitted plant protection products as well as cleaning agents and additives used in food production. To prevent air pollution, traceability systems can automatically collect and save environmental data. To prevent soil pollution, traceability systems can record and save soil test reports. To prevent pollution from agricultural inputs such as fertilizers and pesticides used in the production process, traceability systems currently mainly record agricultural residue testing reports. However, traditional chemical and biological detection methods cannot cope with a large number of real-time online tests because of problems such as sample preparation requirements, complicated operating processes, extended experiment durations, and sample destruction. In recent years, the rapid development of deep learning methods has directly promoted the in-depth application of artificial intelligence technology in the agricultural environment and other fields, especially for prediction and early warning based on the combination of real-time and prior information [2, 3]. Therefore, research on real-time online prediction of agricultural inputs based on deep learning is highly significant; it can improve the accuracy of input prediction and ensure the timeliness and accuracy of traceability information. In recent years, some researchers have studied techniques for predicting agricultural inputs. For example, Chough et al., Kumaran and Tran-Minh, and others used electrochemical or biological techniques to quickly detect pesticides and achieved good results [4–6]. Galceran et al. used pyridine chloride hydrochloride as an electrolyte to identify several seasonal herbicides without UV absorption [7]. Andrade et al.
established a liquid chromatography-electrospray tandem mass spectrometry method and used agents to neutralize the matrices, in turn producing better recovery and faster detection in tomato samples [8]. Shamsipur et al. reported a method coupling DLLME with SPE for the identification of pesticides in fruit juice, water, milk, and honey samples [9]. Some researchers have also used sensors to monitor agricultural inputs. For example, Datir and Wagh used a wireless sensor network to monitor and detect downy mildew in grapes, realizing a real-time system for detecting agricultural diseases based on weather data [10]. Zhu et al. used nanozyme sensor arrays to detect pesticides [11]. Yi et al. used a photoluminescence sensor for ultrasensitive detection of pesticides [12]. However, these methods detect residues only after the inputs have been applied, and input information still needs to be recorded manually, so they cannot guarantee the timeliness and accuracy of the traceability system. In recent years, with the rapid development of technologies such as artificial intelligence and sensors, extreme learning machines (ELMs) have become an important part of machine learning; they offer excellent generalization performance and fast learning and are less likely to become trapped in local optima [13]. The method has been successfully applied in load forecasting [14] and fault diagnosis [15, 16]. However, the complex and changeable crop planting environment produces many interferences which influence the physicochemical parameters of agricultural inputs and create nonlinear variation. Thus, there were two problems with using an ELM neural network to classify and predict agricultural inputs [17]. Firstly, the input weights and hidden layer biases of the ELM neural network were generated randomly during modeling, which reduced classification performance.
Secondly, the random initial parameters may also require more ELM hidden layer nodes than a conventionally tuned neural network, increasing test time. Therefore, the key to improving the performance of ELM neural networks is efficient pretraining of the parameters. To solve the above problems, we previously studied an algorithm using deep learning to monitor agricultural inputs and achieved very good results [18]. Building on that research, this paper makes improvements in the algorithm model, experimental design, software architecture, hardware design, and preprocessing methods, achieving better prediction results than the previous study. This paper took the eight kinds of agricultural inputs commonly used for Agastache rugosa as research objects (ammonium sulfate, potash fertilizer, phosphate fertilizer, Bordeaux mixture, metalaxyl, imidacloprid, pendimethalin, and bromothalonil). A monitoring platform including a software system and hardware circuit, which could realize sensor data collection, wireless transmission, and storage, was built. In the algorithm research, greedy layer-wise training and fine-tuning of the stacked autoencoder were used to initialize the parameters; the decoding part of the stacked sparse denoising autoencoder (SSDA) model was then removed and connected to the hierarchical extreme learning machine (HELM) neural network. Finally, an agricultural input classification prediction model based on SSDA-HELM-SOFTMAX was established, which laid the foundation for accurate classification prediction of agricultural inputs. ## 2. SSDA-HELM-SOFTMAX Algorithm Description ### 2.1.
Stacked Autoencoder An autoencoder [19] is an unsupervised neural network model based on deep learning that can reconstruct the original input data into an approximation of itself, expressing the data in low dimensions through the symmetric structure and weight coefficients of the network; at its core is the ability to learn a deep representation of the input data. The drawback is that the number of parameters grows with the number of hidden layers, which affects the computation speed of the network. One of its main applications is to obtain the initial weight parameters of a neural network by layer-wise pretraining and fine-tuning, with better results than traditional random parameters. Multiple autoencoders were stacked to form a stacked autoencoder [20, 21], whose main functions are deep feature extraction and nonlinear dimension reduction. A stacked autoencoder combined with a supervised classifier can accomplish multicategory classification. Each autoencoder in the structure of the stacked autoencoder (Figure 1) performed encoding and decoding operations and extracted features from the output of the previous autoencoder. The output of an autoencoder is a reconstruction or approximation of its input, so it cannot directly classify the input information without a supervised classifier. The three autoencoders in Figure 1 yield three hidden layers through feature extraction, and a supervised classifier can be added to the output layer to realize classification prediction. Figure 1 The structure of the stacked autoencoder. ### 2.2. Sparse Autoencoder (SAE) There were three layers in the autoencoder, namely, the input layer, hidden layer, and output layer, with n, d, and n nodes, respectively (Figure 2).
The autoencoder attempts to approximate an identity function so that the output data approximate the input data; the hidden layer activation values $a^{(2)} = (a^{(2)}_1, a^{(2)}_2, \cdots, a^{(2)}_d)$ are the features of the input vector. Figure 2 The structure of the AE. The encoding step of the stacked autoencoder was as follows:

$$h = f_{\theta_1}(x) = \sigma(W_1 x + b_1). \tag{1}$$

The decoding step was as follows:

$$\hat{x} = f_{\theta_2}(h) = \sigma(W_2 h + b_2), \tag{2}$$

where $W_1$ and $W_2$ were the weight matrices from the input layer to the hidden layer and from the hidden layer to the output layer, $b_1$ and $b_2$ were the bias vectors of the hidden layer and the output layer, $\sigma(\cdot)$ was the logsig function, and $\theta$ was the network parameter matrix. The goal of the autoencoder is to find the optimal parameter matrix that minimizes the error between the input and the output. The reconstruction error cost function was expressed as follows:

$$J_E(W, b) = \mathrm{loss} + R = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|\hat{x}_i - x_i\right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l - 1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(W_{ji}^{(l)}\right)^2, \tag{3}$$

where loss was the reconstruction error term, $R$ was the weight attenuation term, which could effectively prevent overfitting, $m$ was the number of samples, $x_i$ and $\hat{x}_i$ were the input and reconstructed output of the $i$th sample, $n_l$ was the number of network layers, $s_l$ was the number of units in layer $l$, and $\lambda$ was the weight attenuation coefficient. An SAE adds a sparsity restriction to the hidden layer of the autoencoder. The sparsity restriction controls the number of active network parameters by suppressing the activation of network neurons, extracting data features more effectively. With $a^{(2)}_j(x)$ denoting the activation of hidden neuron $j$, the mean activation can be expressed as follows:

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a^{(2)}_j(x_i). \tag{4}$$

A neuron was considered active when its output was close to 1 and inactive when its output was close to 0. Therefore, by adding a sparsity parameter $\rho$ whose value approaches 0 and enforcing $\hat{\rho}_j = \rho$, most neurons can be inhibited.
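Equations (1)–(4) can be sketched numerically as follows; the layer sizes and random weights are placeholders, not trained values from the paper.

```python
import numpy as np

def sigmoid(z):
    # logsig activation sigma(.) used in Equations (1) and (2)
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    """Encode/decode a batch of column-vector samples X (shape n x m)."""
    H = sigmoid(W1 @ X + b1[:, None])      # Equation (1): h = sigma(W1 x + b1)
    X_hat = sigmoid(W2 @ H + b2[:, None])  # Equation (2): x_hat = sigma(W2 h + b2)
    return H, X_hat

def cost(X, W1, b1, W2, b2, lam=1e-4):
    """Equation (3): mean half squared reconstruction error plus weight decay R."""
    H, X_hat = forward(X, W1, b1, W2, b2)
    m = X.shape[1]
    loss = 0.5 * np.sum((X_hat - X) ** 2) / m
    R = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return loss + R, H

rng = np.random.default_rng(0)
n, d, m = 6, 3, 10                          # input nodes, hidden nodes, samples
W1, b1 = 0.1 * rng.normal(size=(d, n)), np.zeros(d)
W2, b2 = 0.1 * rng.normal(size=(n, d)), np.zeros(n)
J, H = cost(rng.random((n, m)), W1, b1, W2, b2)
rho_hat = H.mean(axis=1)                    # Equation (4): mean activation per hidden unit
```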
To realize the sparsity limitation, a sparse penalty term was added to the cost function, giving the total cost function:

$$J(W, b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|\hat{x}_i - x_i\right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l - 1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(W_{ji}^{(l)}\right)^2 + \beta\,\mathrm{KL}(\rho \,\|\, \hat{\rho}), \tag{5}$$

where $\beta$ was the coefficient of the sparse penalty and $\mathrm{KL}(\rho \,\|\, \hat{\rho})$ was the sparse penalty over the hidden layer neurons:

$$\mathrm{KL}(\rho \,\|\, \hat{\rho}) = \sum_{j=1}^{s_2}\left[\rho \log\frac{\rho}{\hat{\rho}_j} + (1 - \rho)\log\frac{1 - \rho}{1 - \hat{\rho}_j}\right], \tag{6}$$

where $s_2$ was the number of neurons in the hidden layer. The sparse penalty attains its unique minimum when $\hat{\rho}_j = \rho$, so minimizing the penalty term drives the mean activation of the hidden layer toward the sparsity parameter. ### 2.3. ELM Modeling If there were $N$ distinct samples $(x_i, t_i)$, where $x_i = (x_{i1}, x_{i2}, \cdots, x_{in})^T \in \mathbb{R}^n$ and $t_i = (t_{i1}, t_{i2}, \cdots, t_{im})^T \in \mathbb{R}^m$, a neural network with $L$ hidden layer nodes could be expressed as follows:

$$\sum_{i=1}^{L}\beta_i g_i(X_j) = \sum_{i=1}^{L}\beta_i g(W_i \cdot X_j + b_i) = o_j, \quad j = 1, \cdots, N, \tag{7}$$

where $g(x)$ was the activation function, $W_i = (w_{i,1}, w_{i,2}, \cdots, w_{i,n})^T$ was the weight vector between the input nodes and the $i$th hidden node, $\beta_i$ was the weight between the $i$th hidden node and the output nodes, $b_i$ was the bias of the $i$th hidden node, and $W_i \cdot X_j$ was the inner product of $W_i$ and $X_j$. For the network output to match the targets exactly, it was necessary that

$$\sum_{j=1}^{N}\left\|o_j - t_j\right\| = 0. \tag{8}$$

Combining Equations (7) and (8) produces the following:

$$\sum_{i=1}^{L}\beta_i g(W_i \cdot X_j + b_i) = t_j, \quad j = 1, \cdots, N. \tag{9}$$

If $H$ was the output matrix of the hidden layer, $\beta$ the output weight matrix, and $T$ the desired output, then

$$H(W_1, \cdots, W_L, b_1, \cdots, b_L, X_1, \cdots, X_N) = \begin{bmatrix} g(W_1 \cdot X_1 + b_1) & \cdots & g(W_L \cdot X_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(W_1 \cdot X_N + b_1) & \cdots & g(W_L \cdot X_N + b_L) \end{bmatrix}_{N \times L}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}. \tag{10}$$

From Equation (10), $H\beta = T$, and $\hat{W}_i$, $\hat{b}_i$, and $\hat{\beta}_i$ would be calculated in order to train the single-hidden-layer neural network.
$$\left\|H(\hat{W}_1, \cdots, \hat{W}_L, \hat{b}_1, \cdots, \hat{b}_L)\hat{\beta} - T\right\| = \min_{W_i, b_i, \beta}\left\|H(W_1, \cdots, W_L, b_1, \cdots, b_L)\beta - T\right\|, \quad i = 1, \cdots, L. \tag{11}$$

The above formula was equivalent to minimizing the following loss function:

$$E = \sum_{j=1}^{N}\left(\sum_{i=1}^{L}\beta_i g(W_i \cdot X_j + b_i) - t_j\right)^2. \tag{12}$$

A gradient algorithm would need to adjust the parameters iteratively, but in the ELM algorithm, once the input weights and hidden layer biases were randomly determined, the output weights for the hidden layer output matrix $H$ could be uniquely determined by the least-squares solution:

$$\hat{\beta} = H^{\dagger} T, \tag{13}$$

where $H^{\dagger}$ was the Moore-Penrose generalized inverse of $H$. The solution $\hat{\beta}$ satisfied two conditions: its norm was the smallest among all solutions, and its value was unique. The structure of the ELM algorithm is shown in Figure 3. Figure 3 The structure of the ELM. ### 2.4. SSDA-HELM-SOFTMAX Modeling The SSDA-HELM-SOFTMAX model took the SAE as the front end, pretrained it for the initial weights, and provided parameters for the multilayer ELM model to reach the optimal solution. Then, SOFTMAX was used for agricultural input identification in the output layer of the model. In the model training process, the physical and chemical parameter values of agricultural inputs collected by the sensors were sent to the SAE input layer as training samples. The SAE hidden layers extracted relevant features from these complex samples using unsupervised learning to obtain the initial weights. Then, the decoding part of the SAE was removed and connected to the ELM, the initial weights were assigned as the initial values of the ELM, and SOFTMAX was used for classification.
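The closed-form ELM solve of Equations (10)–(13) can be sketched with NumPy's Moore-Penrose pseudoinverse; the toy regression target and layer sizes below are assumptions for illustration only.

```python
import numpy as np

def elm_fit(X, T, L, rng):
    """Fix random W_i, b_i, then solve beta = pinv(H) @ T (Equation (13))."""
    n = X.shape[1]
    W = rng.normal(size=(L, n))                 # random input weights W_i
    b = rng.normal(size=L)                      # random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))    # hidden output matrix, Equation (10)
    beta = np.linalg.pinv(H) @ T                # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta                             # network output H @ beta

rng = np.random.default_rng(1)
X = rng.random((40, 2))                         # N = 40 samples, n = 2 inputs
T = (X[:, :1] + X[:, 1:]) / 2.0                 # toy regression target (assumed)
W, b, beta = elm_fit(X, T, L=20, rng=rng)
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

No iteration is involved: once `W` and `b` are drawn, `beta` is determined in a single pseudoinverse solve, which is the source of ELM's speed noted in the text.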
The diagram of SSDA-HELM-SOFTMAX model constructed in this paper is shown in Figure4.Figure 4 SSDA-HELM-SOFTMAX model.The specific EML algorithm was as follows:(1) The number of hidden layers of the network was initialized tok, X1=X and X=x1,x2,⋯,xmT, the number of hidden layers’ nodes was N^, and the decoding part of SSDA was deleted and connected to the HELM(2) From the first layer, the SSDA-HELM network was initialized usingWi and bi of the SSDA network training as input weights(3) The input weightWi and hidden layer bias bi generated by pretraining were used to calculate the hidden layer output matrix: Ai=Hi−1Wli(4) According to ELM theory, the output weight matrix of neural network was calculated:β^=A†T(5) The output calculation was as follows:H^i=gHi−1·β^, where H^i and Hi−1 were the output and input of layer i, respectively, and g· was the activation function of the hidden layerThis was then repeated from step 2 for the next layer.(6) The extracted features were used as input values and sent to the SOFTMAX classifier for classification prediction ## 2.1. Stacked Autoencoder An autoencoder [19] is an unsupervised neural network model based on deep learning that can reconstruct the original input data into an approximate new data, expressing any data in low dimensions through the symmetric structure and weight coefficients of the network, and at its core is the ability to learn the deep representation of input data. The drawback is that the parameters of neurons in the network will continue to increase in the number of hidden layers, which affects the calculation speed of the network. One of its main applications is to get the weight parameters of the initial neural network by layer-wise pretraining and fine-tuning, with better results than the traditional symmetric random parameters.Multiple autoencoders were stacked to form a stacked autoencoder [20, 21] whose main function is to extract deep characteristics and nonlinear dimension reduction. 
A stacked autoencoder combined with a supervised classifier can accomplish multicategory classification. Each autoencoder in the structure of the stacked autoencoder (Figure 1) performed encoding and decoding operations and feature extraction from the output of the previous autoencoder. The output of the autoencoder is a reconstruction or approximation of its input, but it cannot be used to directly classify the input information without a supervised classifier. The three autoencoders in Figure 1 can obtain three hidden layers through feature extraction, and a supervised classifier can be added to the output layer to realize classification prediction.Figure 1 The structure of the stacked autoencoder. ## 2.2. Sparse Autoencoder (SAE) There were three layers in the autoencoder, namely, the input layer, hidden layer, and output layer. The number of nodes on each layer wasn, d, and n, respectively (Figure 2). Autoencoder attempts to approximate an identity function which causes the output data to approximate the input data, and the hidden layer activation value was a2=a21,a22,⋯,a2n, which were the features of the input vector.Figure 2 The structure of the AE.The formula used in the encoding process of stacked autoencoder was as follows:(1)h=fθ1x=σW1x+b1.The formula used in the decoding was as follows:(2)x^=fθ2h=σW2h+b2,whereW1 and W2 were the weight matrix of the input layer to the hidden layer and the hidden layer to the output layer, b1 and b2 were the unit bias coefficients of the hidden layer and the output layer, σ• was the logsig function, and θ was the network parameter matrix.The goal of the autoencoder is to find the optimal parameter matrix and minimize the error between the input and the output. 
The reconstructed error loss function was expressed as

$$J_E(W, b) = \text{loss} + R = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\| \hat{x}_i - x_i \right\|^2 + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left( W_{ji}^{(l)} \right)^2, \tag{3}$$

where loss was the reconstruction loss, $R$ was the weight attenuation term, which could effectively prevent overfitting, $m$ was the number of samples, $x_i$ and $\hat{x}_i$ were the input and output of the $i$th sample, $n_l$ was the number of network layers, $s_l$ was the number of units in the $l$th layer, and $\lambda$ was the weight attenuation coefficient.

An SAE adds a sparsity restriction to the hidden layer of the autoencoder. The sparsity restriction controls the number of active network parameters by suppressing the activation of network neurons, achieving more effective extraction of data features. With $a^{(2)}_j(x)$ the activation of hidden neuron $j$, the mean activation over the training set can be expressed as

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a^{(2)}_j(x_i). \tag{4}$$

A neuron was considered active when its output was close to 1 and inactive when its output was close to 0. Therefore, by introducing a sparsity parameter $\rho$ whose value approaches 0 and enforcing $\hat{\rho}_j = \rho$, most neurons can be inhibited. To realize the sparsity limitation, a sparse penalty term was added to the cost function, giving the total cost

$$J(W, b) = J_E(W, b) + \beta \sum_{j=1}^{s_2} \mathrm{KL}\left( \rho \,\|\, \hat{\rho}_j \right), \tag{5}$$

where $\beta$ was the coefficient of the sparse penalty, $s_2$ was the number of neurons in the hidden layer, and $\mathrm{KL}(\rho \| \hat{\rho}_j)$ was the sparse penalty for hidden neuron $j$:

$$\mathrm{KL}\left( \rho \,\|\, \hat{\rho}_j \right) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}. \tag{6}$$

The sparse penalty attains its unique minimum when $\hat{\rho}_j = \rho$, so minimizing the penalty term drives the mean activation of the hidden layer toward the sparse parameter.

## 2.3. ELM Modeling

If there were $N$ distinct samples $(x_i, t_i)$, where $x_i = (x_{i1}, x_{i2}, \cdots, x_{in})^T \in \mathbb{R}^n$ and $t_i = (t_{i1}, t_{i2}, \cdots, t_{im})^T \in \mathbb{R}^m$, a neural network with $L$ hidden layer nodes could be expressed as

$$\sum_{i=1}^{L}\beta_i g_i(X_j) = \sum_{i=1}^{L}\beta_i g(W_i \cdot X_j + b_i) = o_j, \quad j = 1, \cdots, N, \tag{7}$$

where $g(x)$ was the activation function, $W_i = (w_{i,1}, w_{i,2}, \cdots, w_{i,n})^T$ was the weight vector between the input nodes and the $i$th hidden node, $\beta_i$ was the weight between the $i$th hidden node and the output nodes, $b_i$ was the bias of the $i$th hidden node, and $W_i \cdot X_j$ was the inner product of $W_i$ and $X_j$.

For the output of the neural network model to match the targets exactly, it was necessary that

$$\sum_{j=1}^{N}\left\| o_j - t_j \right\| = 0. \tag{8}$$

Substituting Equation (8) into Equation (7) produces

$$\sum_{i=1}^{L}\beta_i g(W_i \cdot X_j + b_i) = t_j, \quad j = 1, \cdots, N. \tag{9}$$

If $H$ was the output matrix of the hidden layer, $\beta$ the weight matrix of $H$, and $T$ the desired output, then

$$H(W_1, \cdots, W_L, b_1, \cdots, b_L, X_1, \cdots, X_N) = \begin{bmatrix} g(W_1 \cdot X_1 + b_1) & \cdots & g(W_L \cdot X_1 + b_L) \\ \vdots & & \vdots \\ g(W_1 \cdot X_N + b_1) & \cdots & g(W_L \cdot X_N + b_L) \end{bmatrix}_{N \times L}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}. \tag{10}$$

From Equation (10), $H\beta = T$; to train the single-hidden-layer network, $\hat{W}_i$, $\hat{b}_i$, and $\hat{\beta}$ would be sought such that

$$\left\| H(\hat{W}_1, \cdots, \hat{W}_L, \hat{b}_1, \cdots, \hat{b}_L)\hat{\beta} - T \right\| = \min_{W_i, b_i, \beta}\left\| H(W_1, \cdots, W_L, b_1, \cdots, b_L)\beta - T \right\|, \quad i = 1, \cdots, L. \tag{11}$$

The above formula was equivalent to minimizing the loss function

$$E = \sum_{j=1}^{N}\left( \sum_{i=1}^{L}\beta_i g(W_i \cdot X_j + b_i) - t_j \right)^2. \tag{12}$$

A gradient algorithm would require the parameters to be adjusted iteratively, but in the ELM algorithm, once the input weights and hidden layer biases were determined randomly, the hidden layer output matrix $H$ was fixed and the output weights could be obtained directly from the least-squares solution

$$\hat{\beta} = H^{\dagger} T, \tag{13}$$

where $H^{\dagger}$ was the generalized (Moore-Penrose) inverse of $H$. The solution $\hat{\beta}$ satisfied two conditions: its norm was the smallest, and its value was unique. The structure of the ELM algorithm is shown in Figure 3.

Figure 3 The structure of the ELM.

## 2.4. SSDA-HELM-SOFTMAX Modeling

The SSDA-HELM-SOFTMAX model took the SAE as a pretrained front end that supplied the initial weights, providing the parameters that allow the multilayer ELM model to reach the optimal solution.
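The ELM training of Equations (7)–(13) reduces to one random draw plus one least-squares solve. A minimal numpy sketch (function names and the toy regression task are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_fit(X, T, L, seed=0):
    # Draw input weights W (L x n) and biases b (L,) at random,
    # then solve H @ beta = T via the pseudo-inverse, Equation (13).
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(size=(L, n))
    b = rng.normal(size=L)
    H = sigmoid(X @ W.T + b)          # N x L hidden layer output matrix
    beta = np.linalg.pinv(H) @ T      # minimum-norm least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W.T + b) @ beta

# toy regression: learn y = sum of the inputs from 20 samples
rng = np.random.default_rng(1)
X = rng.uniform(size=(20, 3))
T = X.sum(axis=1, keepdims=True)
W, b, beta = elm_fit(X, T, L=40)
pred = elm_predict(X, W, b, beta)
```

Because `pinv` yields the minimum-norm least-squares solution, no iterative weight adjustment is needed, which is the source of ELM's training speed noted in the text.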
Then, SOFTMAX was used for agricultural input identification in the output layer of this model. In the model training process, the physical and chemical parameter values of agricultural inputs collected by the sensors were sent to the SAE input layer as training samples. The SAE hidden layers extracted relevant features from these complex samples and used unsupervised learning to obtain initial weights. Then, the decoding part of the SAE was removed and connected to the ELM, its weights were assigned as the initial values of the ELM, and SOFTMAX was used for classification. The diagram of the SSDA-HELM-SOFTMAX model constructed in this paper is shown in Figure 4.

Figure 4 SSDA-HELM-SOFTMAX model.

The specific ELM algorithm was as follows:

(1) The number of hidden layers of the network was initialized to $k$, with $X_1 = X$ and $X = (x_1, x_2, \cdots, x_m)^T$; the number of hidden layer nodes was $\hat{N}$; the decoding part of the SSDA was deleted and connected to the HELM.

(2) Starting from the first layer, the SSDA-HELM network was initialized using the $W_i$ and $b_i$ obtained from SSDA network training as input weights.

(3) The input weights $W_i$ and hidden layer biases $b_i$ generated by pretraining were used to calculate the hidden layer output matrix $A_i = H_{i-1} W_i$.

(4) According to ELM theory, the output weight matrix of the neural network was calculated as $\hat{\beta} = A^{\dagger} T$.

(5) The output was calculated as $\hat{H}_i = g(H_{i-1} \cdot \hat{\beta})$, where $\hat{H}_i$ and $H_{i-1}$ were the output and input of layer $i$, respectively, and $g(\cdot)$ was the activation function of the hidden layer. Steps (2)–(5) were then repeated for the next layer.

(6) The extracted features were used as input values and sent to the SOFTMAX classifier for classification prediction.

## 3. Monitoring Platform Design

### 3.1. Software Architecture

The software architecture of the agricultural input monitoring system is shown in Figure 5. It was mainly composed of four modules: data collection service, data mutation detection service, external display plan task, and server side.
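The layer-wise feature extraction of steps (1)–(6) above can be sketched as a chain of matrix products and activations. This simplified sketch folds steps (3)–(5) into one product per layer and assumes the weights come from SSDA pretraining; the per-layer least-squares output weights of step (4) are omitted, and all names and shapes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def helm_forward(X, weights):
    # Layer-wise pass: each layer applies its (pretrained) weight
    # matrix W_i followed by the sigmoid activation g.
    H = X
    for W in weights:
        H = sigmoid(H @ W)   # roughly H_i = g(H_{i-1} W_i)
    return H

rng = np.random.default_rng(0)
# hypothetical pretrained weights for a 3-layer stack: 4 -> 8 -> 8 -> 6
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 6))]
X = rng.uniform(size=(5, 4))          # 5 samples, 4 sensor features
features = helm_forward(X, weights)   # features passed on to SOFTMAX
```

The `features` matrix is what step (6) hands to the SOFTMAX classifier.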
The functions of each part were as follows:

(1) Data Collection Service. The acquisition program was designed using the TCP protocol and multithreaded programming. The sensors were queried every 15 seconds to obtain real-time data, such as pH value, EC value, temperature, and humidity, which were stored in the database.

(2) Data Mutation Detection Service. The sensor data stored in the database was monitored in real time. When the sensor data changed by more than 20% three consecutive times, the data was considered to have a sudden change, and the prediction module was called to predict the type of agricultural input.

(3) External Display Plan Task Service. According to the needs of the display page, this module computed the relevant summary data; for example, it aggregated the raw data at regular intervals to calculate the hourly, daily, and monthly averages.

(4) Server-Side Architecture. At the presentation layer, AJAX was used to request server-side data, the Handlebars engine was used for page rendering, and graphs and tables were used for data visualization. In the business layer, logical business was designed according to application requirements; the WCF framework and a general handler (ashx) were used to provide web interface services for the visualization layer. In the data layer, an ORM framework was designed and implemented over the database, and the ADO.NET class library was used to complete database access and provide data services for the business layer.

Figure 5 Schematic diagram of software architecture.

### 3.2. Hardware Design

The hardware structure is shown in Figure 6.
The hardware was divided into four modules based on function: sensor data acquisition, data conversion, data transmission, and power supply.

Figure 6 Hardware circuit architecture.

In the hardware system, the sensor module included pH sensors, electrical conductivity sensors, temperature sensors, and moisture sensors. During data collection, the RS485 serial communication module provided multisensor data communication services, with polling used to coordinate communication among the different sensors. Its working principle was to use the master 485 chip to convert the differential signal on the main bus into TTL level and distribute it by broadcast to the slave 485 chips of the other branches, which converted the differential signal and sent it to each branch bus. During data transmission, LoRa technology was applied with the wireless transmission chip SX1301, and a cyclic interleaving error-correction coding algorithm was adopted to improve error-correction ability. When sudden interference occurred, up to 64 consecutive bit errors could be corrected, which effectively improved the anti-interference performance of the sensors during transmission.

## 4. The Detection Process of the Agricultural Inputs

The prediction model of agricultural inputs includes real-time data collection, data feature analysis, data preprocessing, SSDA pretraining, SSDA-HELM feature extraction, and a SOFTMAX classifier. The technical process is shown in Figure 7.

Figure 7 The flow diagram of modeling.

### 4.1. Data Collection

Ammonium sulfate (SinoChem, China), potash fertilizer (K2O, SinoChem, China), phosphate fertilizer (P2O5, SinoChem, China), Bordeaux mixture (TaoChun, China), metalaxyl (SinoChem, China), imidacloprid (Bayer, Germany), pendimethalin (SinoChem, China), and bromothalonil (SinoChem, China), which were commonly used in the cultivation of Agastache rugosa, were selected as experimental objects. Ammonium sulfate, potash fertilizer, and phosphate are common fertilizers. Bordeaux mixture and metalaxyl are pesticides commonly used to treat brown patch and fusarium wilt, respectively. The latter three are commonly used to kill aphids, weeds, and pathogens, respectively. Aqueous solutions of these eight products were diluted according to the label directions for use. Twenty-four pots (20 cm × 20 cm × 25 cm in length, width, and height) with drainage holes at the bottom were filled with soil and used to simulate the planting environment.
Each agricultural input used three pots for parallel experiments. In each pot, four sensors, measuring electrical conductivity (EC), temperature, moisture, and pH, were inserted into the soil to record the chemical data changes in real time. The experiment was carried out from October 2017 to March 2018. During the experiment, 200 ml of each agricultural input was put into a sprinkling can and sprayed into the soil. In order to expand the amount of data, the same experiment was performed 50 times in each pot, so 150 experimental data points were obtained for each agricultural input.

### 4.2. Data Analysis and Preprocessing

The raw sensor data was noisy and difficult to analyze directly, but for agricultural inputs at the same dilution, physical and chemical characteristics such as pH value and conductivity were relatively fixed. They changed after an input was applied and were affected by the soil chemical characteristics and sensor contact time. Therefore, we could analyze the sudden changes in sensor data to find the inherent pattern.

Due to insufficient contact between the sensor and the soil, unstable solar power supply, and other causes, there were missing data and outliers in the sensor data. This paper used the mean method to deal with data anomalies: every 15 seconds, the sensor data were polled and averaged to fill in missing values. When outliers occurred, they were kept if other sensor data at the same time also showed abnormalities and otherwise were discarded.

In the data denoising process, this paper used the wavelet denoising method [22] based on thresholds [23] to remove noise from the key factors of the model input, providing a good data foundation for the construction of prediction models. Furthermore, in the data normalization process, the z-score method was used to normalize the feature data X of the sample set, as shown in Equation (14).
$$y_i = \frac{x_i - \mu}{\sigma}, \quad 1 \le i \le n, \tag{14}$$

where $y_i$ was the normalized value of the $i$th data point, $x_i$ was the original value of the $i$th data point, $\mu$ was the mean of all samples, and $\sigma$ was the standard deviation of all samples.

### 4.3. SSDA Pretraining and Modeling

The diagram of SSDA pretraining is shown in Figure 8.

Figure 8 The diagram of SSDA pretraining.

The network parameters were set as follows: the learning rate was 0.1, the maximum number of iterations was 400 for pretraining and 300 for fine-tuning, the sparse parameter was 0.5, the sparse penalty parameter was 3, and the activation function was the sigmoid function. During SSDA pretraining, features were extracted from the complex input data, and layer-wise pretraining and fine-tuning based on unsupervised learning were used to obtain the initial weights. After pretraining, we constructed the SSDA-HELM model by removing the decoding part of the SSDA and connecting it to the HELM. The SAE training weights were used to initialize the SSDA-HELM model, which extracted the characteristic values of the agricultural inputs. The extracted feature values were then sent to the SOFTMAX classifier to obtain the final agricultural input prediction model (Figure 4), which could predict the agricultural inputs from test sample data.

## 5. Results and Analysis

### 5.1. Data Analysis

After the agricultural inputs were applied to the soil, the data collected by the sensors were affected by the physical and chemical properties of the inputs. In this paper, eight agricultural inputs (ammonium sulfate, potash fertilizer, phosphate fertilizer, Bordeaux mixture, metalaxyl, imidacloprid, pendimethalin, and bromothalonil) were sprayed onto soil, and the data from 20 experiments were randomly selected for observation of the electrical conductivity (EC) and pH data (Figures 9 and 10).
EC refers to the ability to conduct electric current and measures the concentration of soluble conductive ions. pH refers to the hydrogen ion concentration and measures the proportion of hydrogen ions in the solution. Since different agricultural inputs had different conductivity and hydrogen ion content, EC and pH could be used to distinguish them. It could be observed that the EC changes in response to pesticides were smaller, while the changes caused by fertilizers were comparatively large. Among them, potash fertilizer had the greatest impact on the EC value, which rose above 100, while the other agricultural inputs stayed below 80. For the pH value, metalaxyl and imidacloprid were much lower than the other agricultural inputs, which showed that different agricultural inputs differ in hydrogen ion concentration. Therefore, these differences could be used to distinguish them.

Figure 9 EC value after input.

Figure 10 pH value after input.

### 5.2. Model and Analysis

In the modeling process, the differences in EC, temperature, moisture, and pH before and after the agricultural inputs were sprayed into the soil were used as model inputs, the agricultural input categories were used as the model output, and the leave-one-out method [24] was used to cross-validate the model. Among the 1200 samples, 1199 samples were taken each time as the training set, and the remaining sample was the test set. Each sample was tested individually, and the performance of the method was obtained by averaging the test results.

Because the number of nodes and layers of the SSDA-HELM network directly affected the performance of the algorithm, pairwise combinations of 2, 3, 4, and 5 hidden layers and 50, 100, 200, 300, and 400 nodes were created, and the root-mean-square error (RMSE) was used to find the optimal parameters.
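The leave-one-out protocol described above can be sketched in a few lines of numpy. The nearest-centroid classifier here is only a stand-in for the full SSDA-HELM-SOFTMAX model, used to illustrate the validation loop; data and names are illustrative:

```python
import numpy as np

def loo_accuracy(X, y):
    # Leave-one-out cross-validation: hold out each sample in turn,
    # fit on the remaining samples, and predict the held-out sample.
    n = len(X)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], y[mask]
        # stand-in model: predict the class of the nearest centroid
        classes = np.unique(ytr)
        centroids = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
        pred = classes[np.argmin(np.linalg.norm(centroids - X[i], axis=1))]
        correct += pred == y[i]
    return correct / n

# two well-separated toy classes
X = np.array([[0., 0.], [0., 1.], [1., 0.], [10., 10.], [10., 11.], [11., 10.]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loo_accuracy(X, y)  # 1.0 on this separable toy set
```

With 1200 samples, the loop above would run 1200 times, each fold training on 1199 samples, exactly as described in the text.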
The RMSE values for the different network parameters are shown in Table 1.

Table 1 The value of RMSE in different models.

| Layers | 50 nodes | 100 nodes | 200 nodes | 300 nodes | 400 nodes |
| --- | --- | --- | --- | --- | --- |
| 2 | 0.0456 | 0.0413 | 0.0323 | 0.0183 | 0.0534 |
| 3 | 0.0437 | 0.0456 | 0.0319 | 0.0112 | 0.0478 |
| 4 | 0.0498 | 0.0406 | 0.0564 | 0.0140 | 0.0589 |
| 5 | 0.0473 | 0.0432 | 0.0589 | 0.0196 | 0.0673 |

We can see that the model performed best when the number of layers was 3 and the number of neurons in the first layer was 300. In the pretraining process, an autoencoder was used for unsupervised training of each layer of the SDA network. L-BFGS [25] was used for training, and the other parameters were the same as the settings in the previous study [18]. After training was completed, the weight parameters of the trained input network were used as the initial parameters of the SSDA-HELM network. According to the ELM network principle, the output network parameters were obtained by the least-squares method, SOFTMAX was connected, and supervised fine-tuning was executed. Because the ELM network used the least-squares method rather than a gradient-descent algorithm to obtain the output network parameters, the problems of local convergence and poor generalization were avoided. Meanwhile, the network instability caused by the random initial values of the HELM network was resolved by pretraining the network parameters with the autoencoder.

In the model training process, SSDA was first used for unsupervised training, then SSDA-HELM was used to extract data features, and finally the SOFTMAX classifier was connected and fine-tuned to improve model performance. The feature data are shown in Table 2; they result from the nonlinear fitting and feature extraction of the input data by the neural network and are a higher-dimensional mapping of the input data. The prediction result is shown in Figure 11.

Table 2 The characteristic data.
| Inputs | Feature 1 | Feature 2 | Feature 3 | Feature 4 | Feature 5 | Feature 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Potash fertilizer | 0.682953 | 0.227409 | 0.226057 | 0.494851 | 0.597923 | 0.229395 |
| Potash fertilizer | 0.569900 | 0.286457 | 0.286350 | 0.622601 | 0.205575 | 0.351254 |
| Potash fertilizer | 0.271896 | 0.729355 | 0.723736 | 0.642849 | 0.281236 | 0.668261 |
| Ammonium sulfate | 0.243788 | 0.251541 | 0.673414 | 0.262289 | 0.249249 | 0.572459 |
| Ammonium sulfate | 0.742556 | 0.345961 | 0.693896 | 0.737977 | 0.347710 | 0.726792 |
| Ammonium sulfate | 0.275226 | 0.686453 | 0.347621 | 0.292159 | 0.683852 | 0.550425 |
| Imidacloprid | 0.212000 | 0.715412 | 0.462585 | 0.447170 | 0.638341 | 0.233302 |
| Imidacloprid | 0.356072 | 0.629524 | 0.687172 | 0.573045 | 0.759579 | 0.637693 |
| Imidacloprid | 0.620000 | 0.641273 | 0.539924 | 0.328573 | 0.529835 | 0.222942 |
| Bordeaux mixture | 0.252055 | 0.221388 | 0.481921 | 0.593715 | 0.236094 | 0.627335 |
| Bordeaux mixture | 0.333259 | 0.288967 | 0.676369 | 0.200457 | 0.637080 | 0.762575 |
| Bordeaux mixture | 0.650558 | 0.727547 | 0.512039 | 0.281435 | 0.225168 | 0.544173 |
| Metalaxyl | 0.431794 | 0.667201 | 0.685735 | 0.691593 | 0.227002 | 0.714891 |
| Metalaxyl | 0.582159 | 0.472921 | 0.581209 | 0.579150 | 0.340513 | 0.547994 |
| Metalaxyl | 0.307182 | 0.772803 | 0.718831 | 0.712972 | 0.631892 | 0.542898 |
| Phosphate fertilizer | 0.562710 | 0.664056 | 0.561516 | 0.488892 | 0.688213 | 0.752463 |
| Phosphate fertilizer | 0.722078 | 0.472886 | 0.725181 | 0.612861 | 0.670366 | 0.594326 |
| Phosphate fertilizer | 0.569413 | 0.778950 | 0.562879 | 0.672525 | 0.328124 | 0.585843 |
| Pendimethalin | 0.770665 | 0.250254 | 0.490414 | 0.236053 | 0.377339 | 0.704024 |
| Pendimethalin | 0.573281 | 0.334498 | 0.651509 | 0.286713 | 0.593437 | 0.333318 |
| Pendimethalin | 0.571835 | 0.650748 | 0.527259 | 0.732943 | 0.371619 | 0.282657 |
| Bromothalonil | 0.592924 | 0.208807 | 0.689204 | 0.352409 | 0.762102 | 0.235710 |
| Bromothalonil | 0.202499 | 0.732488 | 0.570285 | 0.726488 | 0.580086 | 0.282564 |
| Bromothalonil | 0.277006 | 0.226396 | 0.729148 | 0.321272 | 0.566302 | 0.723354 |

Figure 11 The predicted result of SSDA-HELM-SOFTMAX for test sets (ordinate 1: imidacloprid, 2: Bordeaux mixture, 3: metalaxyl, 4: phosphate fertilizer, 5: pendimethalin, 6: potash fertilizer, 7: ammonium sulfate, and 8: bromothalonil).

In order to evaluate the performance of the model, other models, namely BP, DBN [18], and SAE, were also built for comparison with SSDA-HELM-SOFTMAX.
The BP model, SAE model, and DBN model were each used to extract features, with a SOFTMAX classifier added for prediction. Table 3 shows the prediction accuracy comparisons.

Table 3 The comparison of prediction accuracy.

| Model | Input layer (neurons) | Hidden layers (neurons) | Output layer (neurons) | Accuracy |
| --- | --- | --- | --- | --- |
| BP | 8 | 300-100-50 | 8 | 93% |
| DBN-SOFTMAX | 8 | 300-100-50 | 8 | 95.3% |
| SAE-SOFTMAX | 8 | 300-100-50 | 8 | 95.5% |
| SSDA-HELM-SOFTMAX | 8 | 300-100-50 | 8 | 97.08% |

The comparison showed that SAE-SOFTMAX and DBN-SOFTMAX were more accurate than BP, because unsupervised layer-wise training [26] was adopted and the extracted feature quality was better than that obtained by back-propagation alone. The difference between SAE and DBN was that SAE found the main feature directions through a nonlinear transformation, while DBN extracted a high-level representation based on the probability distribution of the samples. The results in Table 4 indicated that DBN's high-level feature extraction based on the sample probability distribution was more in line with the characteristics of the input feature parameters. The SSDA-HELM model had the highest prediction accuracy because the SSDA model was used for pretraining and HELM was then used to calculate the neural network output weights. Compared with other deep learning methods, SSDA could obtain good parameters for initializing HELM, and HELM could be trained stably and quickly to obtain good classification results. Therefore, this method avoided inappropriate initialization weights, local optima, and inappropriate learning rates, and it was more stable and had stronger generalization ability than SAE and DBN.

Table 4 The performance comparison between SSDA-HELM, BP, SAE, and DBN models.

| Models | R²cal | RMSEC | R²CV | RMSECV |
| --- | --- | --- | --- | --- |
| SSDA-HELM | 0.99 | 0.02 | 0.99 | 0.12 |
| DBN | 0.99 | 0.03 | 0.97 | 0.15 |
| SAE | 0.99 | 0.09 | 0.96 | 0.35 |
| BP | 0.99 | 0.09 | 0.98 | 0.21 |

The coefficient of determination (R²) and root-mean-square error (RMSE) of BP, SAE, DBN, and SSDA-HELM are further compared in Table 4.
It could be observed that the performance of the training set of the SAE model was the same as that of the BP model. However, the cross-validation showed that the R2 of the SAE model was lower than that of the BP-NN model, while the RMSE was higher, indicating that SAE was not as stable as BP, although its prediction accuracy was superior.After feature extraction, theR2 of the SSDA-HELM model for the training set and cross-validation were the highest of all the models, both reaching 0.99. This indicated that the model was stable. Meanwhile, the RMSE for the training set and cross-validation of the SSDA-HELM model were 0.03 and 0.15, respectively, smaller than the other models. Unlike the DBN model, since the output matrix of HELM was generated by the least squares solution, once the input weights and hidden layer offsets were determined, the output matrix was uniquely determined. In this process, weight optimization was not a problem; the issues of local optimums, inappropriate learning rate, and overfitting were avoided. Therefore, the SSDA-HELM model was more stable than the DBN model. In terms of accuracy, the SSDA-HELM model was slightly lower than the DBN model [18], which was mainly due to the similarity of some inputs of Agastache rugosa (Figures 9 and 10) leading to more labeled categories, which decreases accuracy. In addition, the accuracy of the SSDA-HELM model was higher than that of the DBN model under the same experimental conditions. ## 5.1. Data Analysis After the agricultural inputs were applied to the soil, the data collected by the sensors were impacted by the physical and chemical properties of the inputs. In this paper, eight agricultural inputs (ammonium sulfate, potash fertilizer, phosphate fertilizer, Bordeaux mixture, metalaxyl, imidacloprid, pendimethalin, and bromothalonil) were sprayed onto soil, and the data from 20 experiments were randomly selected for observation of the electric conductivity (EC) and pH data (Figures9 and 10). 
EC refers to the ability to conduct electric current, which measures the concentration of soluble conductive ions. pH refers to the hydrogen ion concentration, which measures the proportion of the number of hydrogen ions in the solution. Since different agricultural inputs had different conductivity and hydrogen number, EC and pH could be used to distinguish different agricultural inputs. It could be observed that EC differences in response to pesticides were smaller, while the changes caused by fertilizers were comparatively large. Among them, potash fertilizer had the greatest impact on the EC value, which was above 100, while other agricultural inputs were below 80. When observing the pH value, it could be seen that the pH value of metalaxyl and imidacloprid was much lower than that of other agricultural inputs, which showed that different agricultural inputs have some differences in hydrogen ion concentration. Therefore, the differences could be used to distinguish them.Figure 9 EC value after input.Figure 10 pH value after input. ## 5.2. Model and Analysis In the process of modeling, the EC, temperature, moisture, and pH differences before and after the agricultural inputs spraying into the soil were used as model inputs, the agricultural input categories were used as the model output, and the leave-one-out method [24] was used to cross-validate the model. Among the 1200 samples, 1199 samples were taken each time as the training set, and the remaining 1 sample was the test set. Each sample was tested individually, and the performance of the method was obtained by averaging the test results.Because the number of nodes and layers of SSDA-HELM network directly affected the performance of the algorithm, pairwise combinations of 2, 3, 4, and 5 hidden layers and 50, 100, 200, 300, and 400 nodes were created and the root-mean-square error (RMSE) was selected to find the optimal parameters. 
The RMSE network parameters are shown in Table1.Table 1 The value of RMSE in different model. LayersNodes5010020030040020.04560.04130.03230.01830.053430.04370.04560.03190.01120.047840.04980.04060.05640.01400.058950.04730.04320.05890.01960.0673We could see that when the number of layers was 3 and the number of neurons in the first layer was 300, the performance of the model was the best. In the pretraining process, autoencoder was used for unsupervised training each layer of the SDA network. L-BFGS [25] was used for training, and the other parameters were the same as the settings in the previous study [18]. After the training was completed, the input network part of the trained weight parameters was used as the initial parameter of the SSDA-HELM network. According to the ELM network principle, the output network parameters were obtained using the least squares method and softmax was connected, and then, the supervised fine-tuning was executed. Because the ELM network used the least squares method to obtain the output network parameters, instead of the gradient descent algorithm, the problems of local convergence and poor generalization performance were avoided. Meanwhile, the problem of network instability generated by the random initial value of the HELM network was solved by pretraining the network parameters of the autoencoder.In the model training process, first SSDA was used for unsupervised training, then SSDA-HELM was used to extract data features, and then, the SOFTMAX classifier was connected and fine-tuned to improve model performance. The feature data is shown in Table2, which refers to the nonlinear fitting and feature extraction of input data by neural network and is a higher-dimensional mapping of the input data. The prediction result is shown in Figure 11.Table 2 The characteristic data. 
| Inputs | Feature data |
|---|---|
| Potash fertilizer | 0.682953, 0.227409, 0.226057, 0.494851, 0.597923, 0.229395 |
| Potash fertilizer | 0.569900, 0.286457, 0.286350, 0.622601, 0.205575, 0.351254 |
| Potash fertilizer | 0.271896, 0.729355, 0.723736, 0.642849, 0.281236, 0.668261 |
| Ammonium sulfate | 0.243788, 0.251541, 0.673414, 0.262289, 0.249249, 0.572459 |
| Ammonium sulfate | 0.742556, 0.345961, 0.693896, 0.737977, 0.347710, 0.726792 |
| Ammonium sulfate | 0.275226, 0.686453, 0.347621, 0.292159, 0.683852, 0.550425 |
| Imidacloprid | 0.212000, 0.715412, 0.462585, 0.447170, 0.638341, 0.233302 |
| Imidacloprid | 0.356072, 0.629524, 0.687172, 0.573045, 0.759579, 0.637693 |
| Imidacloprid | 0.620000, 0.641273, 0.539924, 0.328573, 0.529835, 0.222942 |
| Bordeaux mixture | 0.252055, 0.221388, 0.481921, 0.593715, 0.236094, 0.627335 |
| Bordeaux mixture | 0.333259, 0.288967, 0.676369, 0.200457, 0.637080, 0.762575 |
| Bordeaux mixture | 0.650558, 0.727547, 0.512039, 0.281435, 0.225168, 0.544173 |
| Metalaxyl | 0.431794, 0.667201, 0.685735, 0.691593, 0.227002, 0.714891 |
| Metalaxyl | 0.582159, 0.472921, 0.581209, 0.579150, 0.340513, 0.547994 |
| Metalaxyl | 0.307182, 0.772803, 0.718831, 0.712972, 0.631892, 0.542898 |
| Phosphate fertilizer | 0.562710, 0.664056, 0.561516, 0.488892, 0.688213, 0.752463 |
| Phosphate fertilizer | 0.722078, 0.472886, 0.725181, 0.612861, 0.670366, 0.594326 |
| Phosphate fertilizer | 0.569413, 0.778950, 0.562879, 0.672525, 0.328124, 0.585843 |
| Pendimethalin | 0.770665, 0.250254, 0.490414, 0.236053, 0.377339, 0.704024 |
| Pendimethalin | 0.573281, 0.334498, 0.651509, 0.286713, 0.593437, 0.333318 |
| Pendimethalin | 0.571835, 0.650748, 0.527259, 0.732943, 0.371619, 0.282657 |
| Bromothalonil | 0.592924, 0.208807, 0.689204, 0.352409, 0.762102, 0.235710 |
| Bromothalonil | 0.202499, 0.732488, 0.570285, 0.726488, 0.580086, 0.282564 |
| Bromothalonil | 0.277006, 0.226396, 0.729148, 0.321272, 0.566302, 0.723354 |

Figure 11: The predicted result of SSDA-HELM-SOFTMAX for test sets (ordinate 1: imidacloprid, 2: Bordeaux mixture, 3: metalaxyl, 4: phosphate fertilizer, 5: pendimethalin, 6: potash fertilizer, 7: ammonium sulfate, and 8: bromothalonil).

In order to evaluate the performance of the model, other models such as BP, DBN [18], and SAE were also built to compare with SSDA-HELM-SOFTMAX.
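The closed-form, least-squares computation of ELM output weights described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the ridge term `reg`, the layer sizes, and the random input weights are assumptions for the demo (in the paper the input weights come from SSDA pretraining rather than random initialization).

```python
import numpy as np

def elm_output_weights(H, T, reg=1e-3):
    # beta = (H^T H + reg*I)^{-1} H^T T: ridge-regularised least squares,
    # solved in closed form -- no gradient descent, no learning rate.
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))           # 6 samples, 4 input features
W = rng.normal(size=(4, 10))          # fixed input weights (SSDA-pretrained in the paper)
H = np.tanh(X @ W)                    # hidden-layer output matrix
T = np.eye(3)[[0, 1, 2, 0, 1, 2]]     # one-hot targets for 3 classes
beta = elm_output_weights(H, T)       # output weights, computed once
scores = H @ beta                     # class scores for each sample
```

Because `beta` is uniquely determined once the input weights and hidden-layer offsets are fixed, repeated training runs give identical output weights.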
The BP, SAE, and DBN models were used to extract features, and a SOFTMAX classifier was added for prediction. Table 3 shows the prediction accuracy comparison.

Table 3: Comparison of prediction accuracy.

| Model | Input layer (neurons) | Hidden layers (neurons) | Output layer (neurons) | Accuracy |
|---|---|---|---|---|
| BP | 8 | 300-100-50 | 8 | 93% |
| DBN-SOFTMAX | 8 | 300-100-50 | 8 | 95.3% |
| SAE-SOFTMAX | 8 | 300-100-50 | 8 | 95.5% |
| SSDA-HELM-SOFTMAX | 8 | 300-100-50 | 8 | 97.08% |

The comparison showed that SAE-SOFTMAX and DBN-SOFTMAX were more accurate than BP because unsupervised layer-wise pretraining [26] was adopted, so the extracted features were of better quality than those obtained by back-propagation alone. The difference between SAE and DBN is that SAE finds the main feature directions through nonlinear transformation, while DBN extracts a high-level representation based on the probability distribution of the samples. The results in Table 4 indicated that DBN's high-level feature extraction based on the sample probability distribution was more in line with the characteristics of the input feature parameters. The SSDA-HELM model had the highest prediction accuracy because the SSDA model was used for pretraining and HELM was then used to calculate the output weights of the network. Compared with other deep learning methods, SSDA obtained good initial parameters for HELM, and HELM could be trained stably and quickly to achieve good classification results. This method therefore avoided inappropriate initialization weights, local optima, and an inappropriate learning rate, and was more stable with stronger generalization ability than SAE and DBN.

Table 4: Performance comparison of the SSDA-HELM, BP, SAE, and DBN models.

| Models | R²cal | RMSEC | R²CV | RMSECV |
|---|---|---|---|---|
| SSDA-HELM | 0.99 | 0.02 | 0.99 | 0.12 |
| DBN | 0.99 | 0.03 | 0.97 | 0.15 |
| SAE | 0.99 | 0.09 | 0.96 | 0.35 |
| BP | 0.99 | 0.09 | 0.98 | 0.21 |

The coefficient of determination (R²) and root-mean-square error (RMSE) of the BP, SAE, DBN, and SSDA-HELM models are further compared in Table 4.
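The two metrics compared in Table 4 follow their standard definitions and can be computed as below; the sample values are illustrative only, not the paper's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error: sqrt of the mean squared residual.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative predictions for four samples.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
```

Computed on the calibration set these give R²cal/RMSEC, and computed on the held-out leave-one-out predictions they give R²CV/RMSECV.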
The training-set performance of the SAE model was the same as that of the BP model. However, in cross-validation the R² of the SAE model was lower than that of the BP model while its RMSE was higher, indicating that SAE was not as stable as BP, although its prediction accuracy was superior.

After feature extraction, the R² of the SSDA-HELM model for both the training set and cross-validation was the highest of all the models, reaching 0.99 in each case, indicating that the model was stable. Meanwhile, the RMSE values of the SSDA-HELM model for the training set and cross-validation were 0.02 and 0.12, respectively, smaller than those of the other models. Unlike in the DBN model, the output matrix of HELM is generated by a least-squares solution, so once the input weights and hidden-layer offsets are determined, the output matrix is uniquely determined. In this process weight optimization is not an issue, and the problems of local optima, inappropriate learning rate, and overfitting were avoided. The SSDA-HELM model was therefore more stable than the DBN model. In terms of accuracy, the SSDA-HELM model was slightly lower than the DBN model reported in [18], mainly because the similarity of some inputs for *Agastache rugosa* (Figures 9 and 10) led to more labeled categories, which decreased accuracy. Under the same experimental conditions, however, the accuracy of the SSDA-HELM model was higher than that of the DBN model.

## 6. Conclusions

The complex and changeable environment of *Agastache rugosa* cultivation means that many factors influence the nonlinear physicochemical parameters of agricultural inputs, and under these circumstances traditional neural network classifiers for predicting agricultural inputs suffer from local convergence, poor computational efficiency, and poor generalization performance. To minimize these problems, this paper tested an input prediction model based on SSDA-HELM-SOFTMAX to predict inputs in real time.
This model used HELM to calculate the output network weights without feedback-based weight adjustment, giving it fast learning speed, strong generalization ability, and resistance to becoming trapped in locally optimal solutions. Meanwhile, the network instability caused by the random initial values of the HELM network was resolved by pretraining the autoencoder and using its parameters to initialize the SSDA-HELM model. Experiments showed that the accuracy of the proposed method reached 97.08%, which was 4.08%, 1.78%, and 1.58% higher than the BP, DBN-SOFTMAX, and SAE-SOFTMAX neural networks, respectively. The proposed model was therefore effective and feasible, with good prediction accuracy and generalization performance, and can provide a theoretical basis and parameter support for real-time online prediction of agricultural inputs. However, quantitative detection remained difficult, as it would require more sensitive sensors and expanded experiments with different agricultural input concentrations. In addition, the model is not applicable beyond the eight types of agricultural inputs studied here; covering more input types would require further data collection and retraining of the model. Nevertheless, this paper offers a new approach and can provide a theoretical basis and method support for real-time online prediction of agricultural inputs.

---

*Source: 1015391-2021-11-24.xml*
# Tumor Necrosis Factor Alpha Inhibition for Inflammatory Bowel Disease after Liver Transplant for Primary Sclerosing Cholangitis

**Authors:** Ravish Parekh; Ahmed Abdulhamid; Sheri Trudeau; Nirmal Kaur

**Journal:** Case Reports in Gastrointestinal Medicine (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1015408

---

## Abstract

Background. Outcome data regarding the use of tumor necrosis factor alpha inhibitors (anti-TNFα) in patients with inflammatory bowel disease (IBD) after liver transplant (LT) for primary sclerosing cholangitis (PSC) are scant.

Methods. We performed a retrospective chart review to investigate outcomes among a series of post-liver-transplant PSC/IBD patients receiving anti-TNFα therapy at Henry Ford Health System (HFHS; Detroit, MI).

Results. A total of five patients were treated with anti-TNFα agents for IBD after LT for PSC from 1993 through 2015. Two patients were treated with adalimumab, and three were treated with infliximab. Three patients were hospitalized with severe posttransplant infections. Two patients developed posttransplant lymphoproliferative disease (PTLD); one of these patients died due to complications of PTLD.

Conclusion. Anti-TNFα treatment following LT worsened the disease course in our patients with concurrent PSC/IBD and led to serious complications and surgical intervention. Larger studies are needed to evaluate the side effects and outcomes of the use of such agents in this patient population. Until then, clinicians should have a high threshold to use anti-TNFα therapy in this setting.

---

## Body

## 1. Introduction

The co-occurrence of inflammatory bowel disease (IBD) and primary sclerosing cholangitis (PSC) is a well-documented phenomenon. Although there are no epidemiological studies regarding the prevalence of concurrent PSC/IBD, as many as 90% of patients with PSC may have underlying IBD [1, 2].
No medical therapy has yet been proven to alter the natural progression of PSC; therefore, liver transplant (LT) remains the mainstay of therapy for patients with advanced cirrhosis secondary to the disease. Without transplant, the mean survival of patients with PSC is 10–12 years [3–5]. Compared to patients with IBD alone, patients with co-occurring PSC/IBD generally present with a different clinical course, mainly characterized by a high prevalence of pancolitis with rectal sparing and backwash ileitis [6].

In recent years, multiple agents have been approved for the treatment of IBD. However, tumor necrosis factor alpha inhibitors (anti-TNFα) remain widely used, given their demonstrated efficacy in moderate, severe, and refractory IBD [7, 8]. The risks and benefits of these agents in PSC/IBD patients after liver transplant, however, are yet to be determined. We examined the clinical course of PSC/IBD patients who received liver transplants and were treated with anti-TNFα agents.

## 2. Methods

This study was approved by the HFHS Institutional Review Board; requirements for written informed consent were waived due to the deidentified nature of the study. A retrospective chart review of our patient database was performed using International Classification of Diseases, version 9 (ICD-9) codes related to Crohn’s disease (555.0, 555.1, 555.9), ulcerative colitis (556.9), PSC (576.1), and LT (V42.7). Using this method, we identified five patients with concurrent PSC/IBD who underwent liver transplantation and also received anti-TNFα therapy at HFHS between 1993 and 2015. Three trained gastroenterologists (RP, AAH, and NK) performed retrospective chart review for data including demographics (sex, age, and race); hospital admissions (indications); medical treatment, including prednisone escalation for IBD; endoscopy results; surgery; and infectious complications.
The aim of the study was to assess the clinical effectiveness (defined as the absence of symptoms and endoscopic remission) and safety of biologic therapy in this clinical scenario.

## 3. Results

A total of five post-LT PSC/IBD patients were treated with anti-TNFα agents from 1993 through 2015 at HFHS. Two patients were treated with adalimumab, and three were treated with infliximab. Summary results are shown in Table 1.

Table 1: Five patients with inflammatory bowel disease, primary sclerosing cholangitis, and liver transplant treated with antitumor necrosis factor alpha agents.

| | IBD type | Age at IBD onset | Age at PSC onset | Age at LT | Pre-LT TNFα agent | Liver donor status | Post-LT TNFα agent | Concomitant immunosuppression | Complications | IBD disease activity |
|---|---|---|---|---|---|---|---|---|---|---|
| Patient 1: white male | Crohn’s | 9 | 11 | 17 | None | Deceased | Adalimumab | Cyclosporine; mycophenolate mofetil | C. diff colitis, esophageal candidiasis, CMV viremia, PTLD, death | Hospital admission; prednisone escalation; colectomy |
| Patient 2: white female | UC | 18 | 20 | 25 | None | Living unrelated | Infliximab | Tacrolimus; azathioprine | Pancytopenia | Hospital admission; prednisone escalation; active colitis |
| Patient 3: white male | UC | 29 | 32 | 41 | None | Deceased | Infliximab | Tacrolimus | MRSA bacteremia; pneumonia with sepsis; C. diff colitis | Hospital admission; prednisone escalation; active colitis |
| Patient 4: white female | UC | 26 | 33 | 48 | None | Deceased | Infliximab | Tacrolimus; mycophenolate mofetil | Acute rejection; C. diff colitis | Hospital admission; colectomy |
| Patient 5: white male | Crohn’s | 29 | 33 | 45 | Adalimumab | Deceased | Adalimumab | Tacrolimus | PTLD | Active colitis |

IBD: inflammatory bowel disease; PSC: primary sclerosing cholangitis; LT: liver transplant; TNFα: tumor necrosis factor alpha; C. diff: Clostridium difficile; CMV: cytomegalovirus; PTLD: posttransplant lymphoproliferative disease; UC: ulcerative colitis.

### 3.1.
Subject 1 A 9-year-old white male was diagnosed with pancolonic Crohn’s disease and responded well to treatment with prednisone, azathioprine, and then methotrexate. Two years following his IBD diagnosis, the patient was found to have primary sclerosing cholangitis with bridging fibrosis and cirrhosis. At age 17, the patient received a deceased donor liver transplant, after which he received mycophenolate mofetil and cyclosporine for immunosuppression. Following the transplant, the patient underwent multiple hospitalizations for cholangitis, perihepatic abscess secondary to methicillin-resistant Staphylococcus aureus (MRSA), cytomegalovirus (CMV) viremia, cellulitis, and esophageal candidiasis. The patient’s posttransplant course was also complicated by worsening Crohn’s colitis, treated with adalimumab. Despite this therapy, however, his colitis continued to worsen. He subsequently developed toxic megacolon and underwent a subtotal abdominal colectomy with end-ileostomy for refractory colitis. For approximately 5 months following his colectomy, the patient had marked improvement in pain, appetite, and functional status. He subsequently began to develop strictures at the ileostomy site secondary to active colitis with small bowel involvement, requiring multiple office visits for dilation of the ileostomy site. The patient was eventually hospitalized with worsening abdominal pain. Endoscopic ultrasound with biopsy showed malignant lymphoma, consistent with posttransplant lymphoproliferative disease (PTLD). Despite treatment with rituximab and corticosteroids, the patient continued to decompensate and eventually expired due to complications of PTLD.

### 3.2. Subject 2

A white female patient was diagnosed with ulcerative colitis at age 18, and PSC at age 20. The patient’s colitis symptoms were initially well-controlled with azathioprine and mesalamine.
At age 25, the patient received a living-donor liver transplant subsequent to PSC; posttransplant immunosuppressant treatment included tacrolimus and azathioprine in addition to mesalamine. Following transplant, the patient experienced symptoms of worsening colitis, with frequent flare-ups requiring multiple courses of high-dose prednisone for disease control. Despite the steroid treatments, the patient’s symptoms continued to worsen. Subsequent treatment with infliximab resulted in marked improvement in her symptoms. However, the patient’s course was complicated by pancytopenia. She continued to have active IBD symptoms following her transplant.

### 3.3. Subject 3

A white male patient was diagnosed with ulcerative colitis at age 29 and responded well to ASA and azathioprine therapy. He was diagnosed with PSC at age 32 and received a deceased donor liver transplant at age 41. He had a history of recurrent Clostridium difficile infections requiring fecal transplantation prior to the transplant. Following liver transplant, the patient was hospitalized for MRSA bacteremia, pneumonia with severe sepsis, recurrent MRSA pneumonia, and recurrent Clostridium difficile infections. However, despite treatment with azathioprine, his colitis symptoms began to worsen posttransplant and required escalation of the prednisone dose. Azathioprine was subsequently discontinued and infliximab started in response to worsening colitis. Prednisone therapy was tapered off due to side effects. At the end of follow-up, the patient was maintained on infliximab and budesonide; his UC was in clinical remission.

### 3.4. Subject 4

A white female patient was diagnosed with ulcerative colitis at age 26 and PSC at age 33. Prior to transplant, the patient was maintained on mesalamine and her UC was under good control, with no evidence of active colitis on colonoscopy. The patient’s PSC was initially asymptomatic but quickly deteriorated, with multiple hospital admissions for episodes of cholangitis.
At age 48, she received a liver transplant from a deceased donor for end-stage liver disease secondary to PSC. Posttransplant, the patient was started on mycophenolate mofetil, tacrolimus, and prednisone for immunosuppression. Her course was complicated by acute cellular rejection, which was treated with an increased dose of corticosteroids. The patient also experienced worsening of colitis. Mesalamine therapy was reinitiated with poor response; treatment was changed to infliximab, but symptoms continued to worsen. Her course was further complicated by Clostridium difficile colitis, which did not respond to antibiotics and subsequently required a fecal transplant. Due to uncontrolled colitis with worsening symptoms, the patient underwent a colectomy with ileostomy, with marked improvement in her symptoms and overall health following surgery.

### 3.5. Subject 5

A white male patient was diagnosed with Crohn’s disease at age 29 and PSC at age 33. Prior to liver transplant, the patient’s Crohn’s disease was in remission, maintained on adalimumab. Following transplant of a deceased donor liver at age 45, the patient was continued on adalimumab, with tacrolimus added for immunosuppression. His posttransplant course was complicated by posttransplant lymphoproliferative disease (PTLD); 9 months after transplant, adalimumab was discontinued. The patient was started on cyclophosphamide for his PTLD, with interval improvement in disease activity on his most recent PET scan. The patient’s Crohn’s disease symptoms remained under good control, with multiple colonoscopies showing no evidence of flare-ups of his disease. However, the patient’s course was further complicated by acute cellular rejection and recurrence of PTLD. At last follow-up, the patient was maintained on mycophenolate, tacrolimus, and prednisone therapy, in addition to cyclophosphamide for PTLD. The addition of mesalamine has provided only partial relief from symptoms.

## 4. Discussion

Our patient experience suggests that anti-TNFα agents appear to be both relatively unsafe for patients with IBD after liver transplant and less effective at mitigating the disease than in patients without liver disease or transplant. Two patients went on to require a colectomy for severe colitis, with immediate improvement in symptoms following the surgery. While our patients did well after colectomy, such a major operation in the post-LT setting is a high-risk scenario that should ideally be avoided. These outcomes demonstrate that anti-TNFα agents can be poorly effective in the post-LT setting, in stark contrast to their known effectiveness in patients without transplant.

Our study also demonstrates the severity of anti-TNFα-related complications in the post-LT setting. After transplant, three of five patients treated with anti-TNFα agents developed serious infections, including Clostridium difficile colitis, esophageal candidiasis, CMV viremia, MRSA bacteremia, and community-acquired pneumonia requiring multiple hospitalizations. In addition, two patients developed PTLD while being treated with an anti-TNFα agent, and one patient died of this condition. This relatively high rate of severe and potentially fatal complications is disproportionate to what is generally observed with anti-TNFα agents and suggests an underlying pathophysiology specific to the post-LT setting.

A previous study (n=8) [9] of anti-TNFα agents in PSC/IBD patients reported similar outcomes. Four patients developed opportunistic infections (esophageal candidiasis, Clostridium difficile colitis, community-acquired bacterial pneumonia, and cryptosporidiosis); one patient developed PTLD. This is consistent with our own observations; it is possible that anti-TNFα agents increase the risk of PTLD among these patients. In contrast, however, that study also observed improvement in IBD-related clinical outcomes as well as mucosal healing.
Another similar study (n=6) [10] described significant improvement in IBD-related symptoms in four patients following the use of infliximab therapy.

Our case series is limited by the small number of patients observed; although this reflects the relative rarity of IBD/PSC with LT in the population, we are hesitant to generalize the results to the wider population. Furthermore, given the variation in IBD subtype (Crohn’s disease versus ulcerative colitis), in the timing and type of anti-TNFα agents each patient received, and in the posttransplant immunosuppressive regimens used, it is difficult to isolate the effects of the anti-TNFα treatment [11] on disease activity. In particular, tacrolimus and other immunosuppressive medications may also contribute to the risk of adverse clinical outcomes, especially infections, observed among these patients [12].

In summary, this case report illustrates that, despite the widespread use of anti-TNFα agents in patients with refractory IBD, clinicians should exercise caution when employing these medications after liver transplant. Given the potential for significant complications, the choice of immunosuppressive therapy and IBD treatment should be carefully considered; patients should be counseled about the possibility of an IBD exacerbation prior to transplant and monitored closely afterward. Further, large-scale studies are needed to evaluate the safety and efficacy of anti-TNFα therapies in IBD/PSC patients.

---

*Source: 1015408-2018-05-15.xml*
1015408-2018-05-15_1015408-2018-05-15.md
21,075
Tumor Necrosis Factor Alpha Inhibition for Inflammatory Bowel Disease after Liver Transplant for Primary Sclerosing Cholangitis
Ravish Parekh; Ahmed Abdulhamid; Sheri Trudeau; Nirmal Kaur
Case Reports in Gastrointestinal Medicine (2018)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1015408
1015408-2018-05-15.xml
--- ## Abstract Background. Outcome data regarding the use of tumor necrosis factor alpha inhibitors (anti-TNFα) in patients with inflammatory bowel disease (IBD) after liver transplant (LT) for primary sclerosing cholangitis (PSC) are scant.Methods. We performed a retrospective chart review to investigate outcomes among a series of post-liver-transplant PSC/IBD patients receiving anti-TNFα therapy at Henry Ford Health System ((HFHS), Detroit, MI).Results. A total of five patients were treated with anti-TNFα agents for IBD after LT for PSC from 1993 through 2015. Two patients were treated with adalimumab, and three were treated with infliximab. Three patients were hospitalized with severe posttransplant infections. Two patients developed posttransplant lymphoproliferative disease (PTLD); one of these patients died due to complications of PTLD.Conclusion. Anti-TNFα treatment following LT worsened the disease course in our patients with concurrent PSC/IBD and led to serious complications and surgical intervention. Larger studies are needed to evaluate the side effects and outcomes of the use of such agents in this patient population. Until then, clinicians should have a high threshold to use anti-TNFα therapy in this setting. --- ## Body ## 1. Introduction The co-occurrence of inflammatory bowel disease (IBD) and primary sclerosing cholangitis (PSC) is a well-documented phenomenon. Although there are no epidemiological studies regarding the prevalence of concurrent PSC/IBD, as many as 90% of patients with PSC may have underlying IBD [1, 2]. No medical therapy has yet been proven to affect the natural progression of PSC and therefore, liver transplant (LT) remains the mainstay of therapy for patients with advanced cirrhosis secondary to the disease; without transplant, the mean survival of patients with PSC is 10–12 years [3–5]. 
Compared to patients with IBD alone, patients with cooccurring PSC/IBD generally present with a different clinical course, mainly characterized by a high prevalence of pancolitis with rectal sparing and backwash ileitis [6].In recent years, multiple agents have been approved for the treatment of IBD. However, tumor necrosis factor alpha inhibitors (anti-TNFα) remain widely used, given their widespread demonstrated efficacy in moderate, severe, and refractory IBD [7, 8]. However, the risks and benefits of these agents in PSC/IBD patients after liver transplant are yet to be determined. We examined the clinical course of PSC/IBD patients receiving liver transplants and treated with anti-TNFα agents. ## 2. Methods This study was approved by the HFHS Institutional Review Board; requirements for written informed consent were waived due to the deidentified nature of the study. A retrospective chart review of our patient database was performed, using International Classification of Diseases, version 9 (ICD-9) codes related to Crohn’s disease (555.0, 555.1, 555.9), ulcerative colitis (556.9), PSC (576.1), and LT (V42.7). Using this method, we identified five patients with concurrent PSC/IBD who underwent liver transplantation and also received anti-TNFα therapy at HFHS between 1993 and 2015. Three trained gastroenterologists (RP, AAH, and NK) performed retrospective chart review for data including demographic data (sex, age, and race); hospital admissions (indications); medical treatment, including prednisone escalation for IBD; endoscopy results; surgery; and infectious complications. The aim of the study was to assess the clinical effectiveness (defined as the absence of symptoms and endoscopic remission) and safety of biologic therapy in this clinical scenario. ## 3. Results A total of five post-LT PSC/IBD patients were treated with anti-TNFα agents from 1993 through 2015 at HFHS. Two patients were treated with adalimumab, and three were treated with infliximab. 
See summary results in Table 1.

Table 1: Five patients with inflammatory bowel disease, primary sclerosing cholangitis, and liver transplant treated with antitumor necrosis factor alpha agents.

| Patient | IBD type | Age at IBD onset | Age at PSC onset | Age at LT | Pre-LT TNFα agent | Liver donor status | Post-LT TNFα agent | Concomitant immunosuppression | Complications | IBD disease activity |
|---|---|---|---|---|---|---|---|---|---|---|
| Patient 1: white male | Crohn’s | 9 | 11 | 17 | None | Deceased | Adalimumab | Cyclosporine; mycophenolate mofetil | C. diff colitis; esophageal candidiasis; CMV viremia; PTLD; death | Hospital admission; prednisone escalation; colectomy |
| Patient 2: white female | UC | 18 | 20 | 25 | None | Living unrelated | Infliximab | Tacrolimus; azathioprine | Pancytopenia | Hospital admission; prednisone escalation; active colitis |
| Patient 3: white male | UC | 29 | 32 | 41 | None | Deceased | Infliximab | Tacrolimus | MRSA bacteremia; pneumonia with sepsis; C. diff colitis | Hospital admission; prednisone escalation; active colitis |
| Patient 4: white female | UC | 26 | 33 | 48 | None | Deceased | Infliximab | Tacrolimus; mycophenolate mofetil | Acute rejection; C. diff colitis | Hospital admission; colectomy |
| Patient 5: white male | Crohn’s | 29 | 33 | 45 | Adalimumab | Deceased | Adalimumab | Tacrolimus | PTLD | Active colitis |

IBD: inflammatory bowel disease; PSC: primary sclerosing cholangitis; LT: liver transplant; TNFα: tumor necrosis factor alpha; C. diff: Clostridium difficile; CMV: cytomegalovirus; PTLD: posttransplant lymphoproliferative disease; UC: ulcerative colitis; MRSA: methicillin-resistant Staphylococcus aureus.

### 3.1. Subject 1 A 9-year-old white male was diagnosed with pancolonic Crohn’s disease and responded well to treatment with prednisone, azathioprine, and then methotrexate. Two years following his IBD diagnosis, the patient was found to have primary sclerosing cholangitis with bridging fibrosis and cirrhosis. At age 17, the patient received a deceased donor liver transplant, after which he received mycophenolate mofetil and cyclosporine for immunosuppression.
Following the transplant, the patient underwent multiple hospitalizations for cholangitis, perihepatic abscess secondary to methicillin-resistant Staphylococcus aureus (MRSA), cytomegalovirus (CMV) viremia, cellulitis, and esophageal candidiasis. The patient’s posttransplant course was also complicated by worsening Crohn’s colitis, treated with adalimumab. Despite this therapy, however, his colitis continued to worsen. He subsequently developed toxic megacolon and underwent a subtotal abdominal colectomy with end-ileostomy for refractory colitis. For approximately 5 months following his colectomy, the patient had marked improvement in pain, appetite, and functional status. He subsequently began to develop strictures at the ileostomy site secondary to active colitis with small bowel involvement, requiring multiple office visits for dilation of the ileostomy site. The patient was eventually hospitalized with worsening abdominal pain. Endoscopic ultrasound with biopsy showed malignant lymphoma, consistent with posttransplant lymphoproliferative disease (PTLD). Despite treatment with rituximab and corticosteroids, the patient continued to decompensate and eventually died due to complications of PTLD. ### 3.2. Subject 2 A white female patient was diagnosed with ulcerative colitis at age 18 and PSC at age 20. The patient’s colitis symptoms were initially well-controlled with azathioprine and mesalamine. At age 25, the patient received a living-donor liver transplant secondary to PSC; posttransplant immunosuppressant treatment included tacrolimus and azathioprine in addition to mesalamine. Following transplant, the patient experienced symptoms of worsening colitis, with frequent flare-ups requiring multiple courses of high-dose prednisone for disease control. Despite the steroid treatments, the patient’s symptoms continued to worsen. Subsequent treatment with infliximab resulted in marked improvement in her symptoms.
However, the patient’s course was complicated by pancytopenia. She continued to have active IBD symptoms following her transplant. ### 3.3. Subject 3 A white male patient was diagnosed with ulcerative colitis at age 29 and responded well to ASA and azathioprine therapy. He was diagnosed with PSC at age 32 and received a deceased donor liver transplant at age 41. He had a history of recurrent Clostridium difficile infections requiring fecal transplantation prior to the transplant. Following liver transplant, the patient was hospitalized for MRSA bacteremia, pneumonia with severe sepsis, recurrent MRSA pneumonia, and recurrent Clostridium difficile infections. Despite treatment with azathioprine, his colitis symptoms began to worsen posttransplant and required escalation of his prednisone dose. Azathioprine was subsequently discontinued and infliximab started in response to worsening colitis. Prednisone therapy was tapered off due to side effects. At the end of follow-up, the patient was maintained on infliximab and budesonide; his UC was in clinical remission. ### 3.4. Subject 4 A white female patient was diagnosed with ulcerative colitis at age 26 and PSC at age 33. Prior to transplant, the patient was maintained on mesalamine and her UC was under good control, with no evidence of active colitis on colonoscopy. The patient’s PSC was initially asymptomatic but quickly deteriorated, with multiple hospital admissions for episodes of cholangitis. At age 48, she received a liver transplant from a deceased donor for end-stage liver disease secondary to PSC. Posttransplant, the patient was started on mycophenolate mofetil, tacrolimus, and prednisone for immunosuppression. Her course was complicated by acute cellular rejection, which was treated with an increased dose of corticosteroids. The patient also experienced worsening of colitis. Mesalamine therapy was reinitiated with poor response; treatment was changed to infliximab, but symptoms continued to worsen.
Her course was further complicated by Clostridium difficile colitis, which did not respond to antibiotics and subsequently required a fecal transplant. Due to uncontrolled colitis with worsening symptoms, the patient underwent a colectomy with ileostomy, with marked improvement in her symptoms and overall health following surgery. ### 3.5. Subject 5 A white male patient was diagnosed with Crohn’s disease at age 29 and PSC at age 33. Prior to liver transplant, the patient’s Crohn’s disease was in remission, maintained on adalimumab. Following transplant of a deceased donor liver at age 45, the patient was continued on adalimumab, with tacrolimus added for immunosuppression. His posttransplant course was complicated by posttransplant lymphoproliferative disease (PTLD); 9 months after transplant, adalimumab was discontinued. The patient was started on cyclophosphamide for his PTLD, with interval improvement in disease activity on his most recent PET scan. The patient’s Crohn’s disease symptoms remained under good control, with multiple colonoscopies showing no evidence of flare-ups of his disease. However, the patient’s course was further complicated by acute cellular rejection and recurrence of PTLD. At last follow-up, the patient was maintained on mycophenolate, tacrolimus, and prednisone therapy, in addition to cyclophosphamide for PTLD. The addition of mesalamine has provided only partial relief of symptoms. ## 4. Discussion Our patient experience suggests that anti-TNFα agents appear to be both relatively unsafe for patients with IBD after liver transplant and less effective at mitigating the disease than in patients without liver disease or transplant. Two patients went on to require a colectomy for severe colitis, with immediate improvement in symptoms following the surgery. While our patients did well after colectomy, undergoing such a major operation in the post-LT setting is a high-risk scenario that should ideally be avoided.
These outcomes demonstrate that anti-TNFα agents can be poorly effective in the post-LT setting, in stark contrast to the known effectiveness of these therapies in patients without transplant. Our study also demonstrates the severity of anti-TNFα-related complications in the post-LT setting. After transplant, three of five patients treated with anti-TNFα agents developed serious infections, including Clostridium difficile colitis, esophageal candidiasis, CMV viremia, MRSA bacteremia, and community-acquired pneumonia requiring multiple hospitalizations. In addition, two patients developed PTLD while being treated with an anti-TNFα agent, and one patient died due to this condition. This relatively high rate of severe and potentially fatal complications is disproportionate to what is generally observed with anti-TNFα agents and suggests an underlying pathophysiology specific to the post-LT setting. A previous study (n=8) [9] of anti-TNFα agents in PSC/IBD patients reported similar outcomes. Four patients developed opportunistic infections (esophageal candidiasis, Clostridium difficile colitis, community-acquired bacterial pneumonia, and cryptosporidiosis); one patient developed PTLD. This is consistent with our own observations; it is possible that anti-TNFα agents increase the risk of PTLD among these patients. In contrast, however, that study also observed improvement in IBD-related clinical outcomes as well as mucosal healing. Another similar study (n=6) [10] described significant improvement in IBD-related symptoms in four patients following the use of infliximab therapy. Our case series is limited by the small number of patients observed; although this reflects the relative rarity of post-LT PSC/IBD in the population, we are hesitant to generalize the results to the broader population.
Furthermore, given the variation in IBD subtype (Crohn’s disease versus ulcerative colitis), in the timing and type of anti-TNFα agent that each patient received, and in the posttransplant immunosuppressive regimens used, it is difficult to isolate the effects of the anti-TNFα treatment [11] on disease activity. In particular, it is important to note that tacrolimus and other immunosuppressive medications may also contribute to the risk of adverse clinical outcomes, especially infections, observed among these patients [12]. In summary, this case series illustrates that, despite the widespread use of anti-TNFα agents in patients with refractory IBD, clinicians should exercise caution when employing these medications in the treatment of patients after liver transplant. Given the potential for significant complications, the choice of immunosuppressive therapy and IBD treatment should be carefully considered; patients should be counseled regarding the possibility of an IBD exacerbation prior to transplant and monitored closely afterward. Further large-scale studies are needed to evaluate the safety and efficacy of anti-TNFα therapies in post-LT PSC/IBD patients. --- *Source: 1015408-2018-05-15.xml*
2018
# From Abnormal Hippocampal Synaptic Plasticity in Down Syndrome Mouse Models to Cognitive Disability in Down Syndrome **Authors:** Nathan Cramer; Zygmunt Galdzicki **Journal:** Neural Plasticity (2012) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2012/101542 --- ## Abstract Down syndrome (DS) is caused by the overexpression of genes on triplicated regions of human chromosome 21 (Hsa21). While the resulting physiological and behavioral phenotypes vary in their penetrance and severity, all individuals with DS have variable but significant levels of cognitive disability. At the core of cognitive processes is the phenomenon of synaptic plasticity, a functional change in the strength at points of communication between neurons. A wide variety of evidence from studies on DS individuals and mouse models of DS indicates that synaptic plasticity is adversely affected in human trisomy 21 and mouse segmental trisomy 16, respectively, an outcome that almost certainly extensively contributes to the cognitive impairments associated with DS. In this review, we will highlight some of the neurophysiological changes that we believe reduce the ability of trisomic neurons to undergo neuroplasticity-related adaptations. We will focus primarily on hippocampal networks which appear to be particularly impacted in DS and where consequently the majority of cellular and neuronal network research has been performed using DS animal models, in particular the Ts65Dn mouse. Finally, we will postulate on how altered plasticity may contribute to the DS cognitive disability. --- ## Body ## 1. Introduction Down syndrome (DS) results from the triplication of genes on human chromosome 21 (Hsa21) and is associated with a range of phenotypes including craniofacial changes [1, 2], cardiac defects [3], susceptibility to leukemia but with reduced occurrence of solid cancers [4, 5], and intellectual disability [6, 7]. 
While the presence and severity of these individual phenotypes vary among DS individuals, every individual with DS has some degree of cognitive impairment. These impairments limit the independence of DS subjects and adversely impact their quality of life. Consequently, understanding the genetic causes of cognitive dysfunction in DS has been the focus of much research in this field. The phenomenon of synaptic plasticity has been strongly linked to cognitive processes, such as learning and memory [8, 9]. Synaptic plasticity refers to the dynamic nature of synapses, sites of communication between neurons, in which the structure, composition, or function of the synapse changes in response to network activity. Depending on the timing and strength of pre- and postsynaptic activity, synapses can be either strengthened or weakened, providing a potential mechanism for memory formation and storage [10]. Structurally, synaptic connections on excitatory neurons are typically formed on the heads of dendritic spines [11]. The morphology of the spines enables compartmentalization of signaling cascades and facilitates manipulation of the structure and composition of the cell membrane by second messenger systems [12, 13]. Thus, not only the number of spines, each an individual site for excitatory synaptic transmission, but also the shape of the individual spines has a critical functional role. The link between synaptic plasticity and cognitive processes such as learning and memory is frequently studied within the hippocampus, a structure involved in diverse cognitive processes such as those related to acquiring, encoding, storing, and recalling information in physical or perceived spatial environments [14–16].
Multiple lines of evidence indicate that long-lasting up- or downregulation of functional synaptic strengths, referred to as long-term potentiation (LTP) and long-term depression (LTD), respectively, are fundamental synaptic mechanisms underlying hippocampal contributions to these processes. Thus, dendritic and synaptic abnormalities in the hippocampus, either morphological or functional, would be expected to significantly impact spatial cognition. Indeed, neuropsychological investigations requiring the use of spatial information in problem solving indicate that deficits in hippocampal-mediated learning and memory processes are hallmarks of DS [17, 18]. In this paper, we will provide an overview of the morphological and behavioral evidence for altered synaptic plasticity in DS with a focus on the hippocampus and discuss the insights provided by mouse models of this neurodevelopmental disorder into the potential molecular mechanisms contributing to these deficits. ## 2. Evidence for Altered Synaptic Plasticity in DS: A Neurodevelopmental Impact The basis for altered synaptic plasticity in DS can be found in changes in the physical structure of the dendrites. Alterations in the shape and densities of dendrites would be expected to adversely affect the information storage capacity of neural networks by reducing the number of potential sites for plasticity to occur. Consistent with this idea and the observed deficits in cognition associated with DS, examination of postmortem brain tissue from DS individuals reveals profound alterations in dendritic and neuronal densities and morphology across many regions of the brain, beginning *in utero* and persisting throughout life. The neocortical development of DS fetuses appears normal up to at least gestational week 22 [19–21].
By 40 weeks gestation, less discrete lamination is observed in the neocortex of DS fetuses, with lower and higher cell densities observed in the visual cortex and superior temporal neocortex, respectively [19, 20]. In the hippocampus, deficits begin to appear slightly earlier, as DS fetuses (17 to 21 weeks of gestation) show altered morphology, reduced neuron numbers, enhanced apoptosis, and reduced cell proliferation [22–24]. These changes may result, in part, from reductions in serotonin, dopamine, and GABA levels in the fetal DS cortex [25] since, during development, neurotransmitters such as these can act as neurotrophic factors assisting with neuronal migration, axon guidance, and neurite development [26]. During the early postnatal period, significant deficits in brain weight and gross morphology, as well as in myelination and neuronal densities and morphology, appear [27]. Initially, dendritic expansion is enhanced in DS infants, but, by the first to second year of life, this trend reverses to become a deficit [19, 28] that persists into adulthood [19, 29]. Dendritic spine numbers are reduced and spine morphology is altered in DS [30, 31]. Consistent with these adverse changes in dendrite morphology, synaptogenesis is also aberrant in DS fetuses [19, 32, 33] and remains deficient in adulthood [34]. MRI studies reveal that DS children and young adults have smaller overall brain volumes [35, 36], with particular deficits noted in the hippocampus [36, 37]. Hippocampal volume, which continues to decrease with age in DS individuals [38], was found to be inversely correlated with the degree of cognitive impairment [36].
Cognitive tests such as the Cambridge Neuropsychological Testing Automated Battery (CANTAB) and the Arizona Cognitive Test Battery (ACTB), the latter specifically tailored to address DS deficits, indicate that hippocampal function is particularly impacted by the DS genetic condition [17, 39]. These morphological and cognitive deficits are consistent with aberrant synaptic plasticity, and, indeed, while difficult to measure directly in human subjects, evidence suggests that plasticity is reduced at least in the motor cortex of DS individuals [40]. Additionally, functional MRI (fMRI) during cognitive processing tasks reveals abnormal neural activation patterns in DS children and young adults [41, 42]. Examination of resting glucose metabolism in the cerebral cortex of DS individuals found enhanced uptake in brain regions associated with cognition, suggesting cellular hyperactivity in those areas [43]. To better understand the functional consequences resulting from altered network morphologies, as well as to investigate potential alterations in intracellular signaling cascades contributing to aberrant plasticity, it was necessary to develop and then examine animal models of DS. ## 3. Modeling DS Cognitive Impairment Over the past few decades, several mouse models of Down syndrome have been developed to further our understanding of the link between enhanced gene dosage and DS phenotypes such as altered plasticity and cognition. The Tc1 mouse model carries an almost complete, freely segregating copy of Hsa21, but the chromosome is present in only approximately 50% of cells, making this a mosaic model of DS [46]. Interestingly, some genes have been deleted from the “inserted” Hsa21 [47]. It is important to note that, in spite of the mosaicism and gene deletions, many DS phenotypes have been replicated in this model [46, 48, 49].
Other mouse models have taken advantage of the homology between regions of Hsa21 and mouse chromosomes 10, 16, and 17 (Mmu10, 16, and 17), making models in which these genes are triplicated highly useful in understanding the genetic basis of DS phenotypes [50, 51]. A mouse model trisomic for all Hsa21 homologous segments was recently developed and holds great promise for furthering our understanding of DS [52]. As this is a relatively new model, however, most research has been conducted using the Ts65Dn segmental trisomic mouse [53–55], which is trisomic for more than 50% of Hsa21 gene homologs [56, 57] and has well-documented DS-like deficits in behavioral tasks such as those relying upon declarative memory (novel object recognition and spontaneous alternation tasks) and the proper encoding and recollection of spatial information (radial arm and Morris water mazes) [58–63]. While the Ts65Dn mouse is the only mouse model of DS to have a freely segregating supernumerary chromosome, it is also trisomic for 60 genes that do not have Hsa21 homologs [64], and the impact of overexpression of these genes on Ts65Dn phenotypes remains to be determined. Similar to the Ts65Dn mouse, but with smaller triplicated Mmu16 segments, are the Ts1Cje and Ts1Rhr mouse models. These mice display phenotypes similar to those of Ts65Dn mice, including hippocampal dysfunction; however, the severity of the deficits is reduced [65–69]. The reduced severity of DS-like deficits in mice with fewer trisomic genes highlights one of the powerful aspects of mouse models: the ability to control expression of certain Hsa21 homologs to assess their contribution to specific DS phenotypes. Those deficits associated with the hippocampus, whose function is notably altered in DS individuals [17, 39], will be the focus of the remainder of this paper. ### 3.1. Morphological Changes Mouse models of DS, including the Ts65Dn strain, show detrimental changes in neuronal and dendritic morphologies similar to those observed in humans.
The neocortex of Ts65Dn mice contains fewer excitatory neurons but an increased number of a subset of inhibitory neurons relative to euploid controls, a phenotype that was reversed by normalizing the expression levels of *Olig1/2* [70]. Additionally, regions in both the neocortex and hippocampus have decreased spine densities with larger spine volumes [71]. In the dentate granule cells of the hippocampus, there is a shift of inhibitory synaptic connections away from the dendritic shafts and onto the spine necks [71]. Such a change would be expected to increase the efficacy of inhibitory synaptic transmission, given the significantly reduced volume of the spine neck compared to the shaft. At a finer resolution, symmetric synapses (presumed inhibitory) have greater apposition lengths in Ts65Dn mice, while asymmetric synapses were unaltered [72], again supporting a shift towards excess inhibition in these mice. Similar but less severe changes are observed in Ts1Cje mice [67]. Beyond suppressing excitatory synaptic activity, the altered spine morphology and shift towards excess inhibition in trisomy would be expected to suppress plasticity-related signaling cascades that frequently rely on depolarization-mediated calcium influx into the postsynaptic structural domains. ### 3.2. Functional Changes Synaptic plasticity in the hippocampus is often investigated in the context of long-term potentiation (LTP), in which high-frequency activation of specific inputs in the hippocampus results in a long-lasting potentiation of synaptic responses along the excited afferent pathway.
First described in the anesthetized rabbit [73], this phenomenon is believed to be a fundamental mechanism underlying memory formation [8, 74] and is suppressed in Ts65Dn (depicted in Figure 1 for the CA1 region of the hippocampus) [44, 75, 76] and Ts1Cje mice [66, 67], as well as in mice trisomic for Hsa21 syntenic regions of Mmu16 and Mmu17 [77] and those carrying an almost complete copy of Hsa21 [46, 48], but not in Ts1Rhr mice [69] (however, see [68]).

Figure 1: Depiction of altered CA1 hippocampal plasticity in Ts65Dn mice. (a) Diagram indicating electrode placement for stimulating Schaffer collaterals (SC) arising from CA3 and recording the evoked field excitatory postsynaptic potential (EPSP) in CA1. Traces to the right indicate the typical change in evoked responses (red) following LTP and LTD. (b) Simulated data depicting suppressed LTP in Ts65Dn mice. After high-frequency stimulation of SC (at arrowhead), the field EPSP increases and remains enhanced in euploid mice but fails to remain elevated in Ts65Dn mice. (c) Simulated LTD data depicting exaggerated depression of evoked EPSPs following low-frequency stimulation of SC in Ts65Dn mice. (Traces in (b) and (c) based on data from [44, 45].)

As outlined above, structural changes suggest that inhibition is exaggerated in the trisomic hippocampus. Consistent with this idea is the observation that LTP in the dentate gyrus and CA1 regions of Ts65Dn hippocampal slices, induced by high-frequency stimulation and theta burst protocols, respectively, can be rescued by the GABAA antagonist picrotoxin [75, 76]. Blockade of GABAA receptors in hippocampal slices from Ts1Cje and Ts1Rhr mice rescues LTP deficits in the dentate gyrus in these DS mouse models as well [67, 68]. A similar treatment in Ts65Dn mice leads to an enhancement in cognitive performance [60]. In addition to suppressed LTP, hippocampi from Ts65Dn mice show enhanced long-term depression (LTD) in response to sustained activation of excitatory synapses [45, 78].
This latter effect can be reversed with the uncompetitive NMDA receptor antagonist memantine [78], a treatment that also improves the cognitive performance of Ts65Dn mice [79–81]. These results draw a clear link between altered synaptic plasticity in the hippocampus and cognitive performance in the Ts65Dn mouse model of Down syndrome. ### 3.3. Synaptic-Plasticity-Related Signaling Cascades Changes in intracellular calcium concentrations are important triggers for many intracellular signaling cascades, including those underlying LTP and LTD [82]. For example, the presence of a calcium chelator that buffers intracellular calcium levels in postsynaptic neurons prevents the induction of LTP [83], consistent with the hypothesis that a postsynaptic rise in intracellular calcium levels is necessary for LTP [8]. When the postsynaptic membrane is strongly depolarized, the magnesium block of NMDA channels is lifted, providing the main (but not exclusive) mechanism for calcium entry into the postsynaptic cell. Elevated intracellular calcium levels trigger a cascade of intracellular messengers that ultimately lead to the induction and maintenance of synaptic plasticity (both LTP and LTD, depending on the kinetics). An excellent overview of this process can be found in several reviews [82, 84], and only key components known to be affected by trisomy (Figure 2) will be discussed here.

Figure 2: Alterations in intracellular signaling cascades affecting the postsynaptic AMPAR response in the Ts65Dn hippocampus. Green indicates elevated levels/activity at baseline, while red indicates diminished activity. During LTP (right), enhanced CaMKII and GluR1 subunit phosphorylation in Ts65Dn synapses may result in a saturated condition incapable of additional potentiation. Reduced ERK activity may reduce migration of new AMPARs into the PSD. In LTD, overexpression of RCAN1 should reduce the activity of PP2B (calcineurin), resulting in reduced internalization of AMPA receptors and a potential reduction of NMDAR mean open times.
Rescue of LTD in Ts65Dn mice by NMDAR antagonists suggests enhanced NMDAR activity contributes to altered LTD through yet unidentified mechanisms. #### 3.3.1. CaMKII Activation of postsynaptic NMDA receptors (NMDARs) concomitant with the depolarization of the postsynaptic membrane is sufficient to relieve the magnesium block of NMDA channels leading to an influx of calcium into the intracellular postsynaptic space. In the case of LTP, the rise of intracellular calcium leads to the activation of calcium calmodulin-dependent protein kinase II (CaMKII), a necessary step for initiating NMDAR-dependent LTP [82]. Blocking CaMKII prevents induction of LTP, [82, 85, 86], while constitutively active forms can induce LTP [87]. During all phases of LTP (induction, early, and late), levels of phosphorylated CaMKII are increased in the hippocampus [88]. CaMKII phosphorylated at threonine 286 (Thr286) can become constitutively active providing a potential switch for initiating and then maintaining potentiation [89]. Alternatively, phosphorylation of Thr305/306 can inhibit the expression of LTP by interfering with the binding of calcium/calmodulin [90, 91]. Indeed, cognitive deficits associated with Angelman syndrome were reversed in a mouse model of the disorder by reducing the levels of CaMKII phosphorylated at Thr305/306 [92]. Thus, depending on the site of phosphorylation, CaMKII can facilitate or suppress initiation and maintenance of LTP. In Ts65Dn mice, we found that baseline levels of CaMKII phosphorylated at Thr286 are elevated in the hippocampus [93]. Excessive basal phosphorylation of the CaMKII site leading to constitutive activation could leave the DS modeling trisomic network in a saturated state unable to shift to more potentiated levels.One of the substrates targeted by CaMKII during the initial expression of LTP is the serine 831 residue on GluR1 subunits of AMPA receptors [94, 95]. 
This phosphorylation leads to an increase in conductance of the AMPA channel [96] providing a rapid mechanism for enhancing glutamatergic synaptic strength. In Ts65Dn mice, we find that baseline levels of phosphorylated serine 831 in synaptically located GluR1 receptors are elevated [93]. This apparent increase in AMPA channel conductance appears not to have any significant effect on baseline excitatory synaptic transmission which is normal in the Ts65Dn hippocampus [45, 75, 76, 93]. However, it could also partially occlude the initiation of LTP in these mice by leaving Ts65Dn hippocampal excitatory synapses with fewer AMPA channels available for potentiation. This finding would be consistent with our observation of increased CaMKII in Ts65Dn hippocampus noted above [93] and the suggestion that some components of the LTP network are in an apparent saturated state in these mice. ### 3.4. PKA, RCAN1, Calcineurin Protein kinase A (PKA) also plays a critical role in establishing LTP. In particular, evidence suggests that it is involved in initiating the protein synthesis required for the late phase of LTP [97, 98]. Blocking PKA activity suppresses the late phase of LTP (lasting beyond 3 hours) while leaving the early phase of LTP (less than 3 hours) unaffected [99, 100]. Transgenic mice in which PKA activity is reduced have significantly decreased late-phase LTP in CA1 but normal early LTP and perform poorly on tasks requiring long- but not short-term memory formation [101].PKA plays a role in LTD where its substrates, such as GluR1, show increased dephosphorylation following induction [102, 103]. Dephosphorylation of GluR1 subunits should reduce the conductance levels of affected AMPA receptors [96] resulting in a reduction of synaptic strength. PKA also enhances the activity of RCAN1 [104], an inhibitor of calcineurin which contributes to AMPA receptor internalization [105] and reductions in NMDA receptor mean open time [106]. 
In Ts65Dn mice, we found that PKA activity is reduced in the hippocampus [93], which should adversely affect LTP by reducing protein expression required for the late phase. With respect to LTD, reduced PKA activity would result in more AMPA receptors remaining in a high conductance state and less facilitation of RCAN1 activity. This latter effect is offset, however, by the overexpression of the gene encoding RCAN1 in DS and Ts65Dn mice [4]. How these factors contribute to the enhancement of LTD in Ts65Dn hippocampi [45, 78] remains to be determined. ### 3.5. Extracellular Receptor Kinase (ERK) In addition to GluR1 subunits, both CaMKII and PKA converge on another common effector, the mitogen-activated protein kinase (MAPK/ERK), that is associated with a host of synaptic-plasticity-related cellular processes [107]. In the case of hippocampal LTP, there is a rapid increase in the amount of phosphorylated ERK following induction [108] and blocking ERK activation prevents expression of LTP [109]. Cultured hippocampal neurons undergo phosphorylated ERK-dependent spine generation following LTP conditioning stimuli implicating this pathway in spine formation [110]. Additionally, it is believed that lateral diffusion of extrasynaptic AMPA receptors containing GluR1 subunits into the postsynaptic density (PSD) is a major contributor to LTP expression [111]. This process is assisted by Ras/Erk phosphorylation of stargazin on extrasynaptic AMPA receptors enabling them to be structurally secured at the synapse to PSD95 [112]. In the Ts65Dn hippocampus, ERK phosphorylation is decreased [93] suggesting decreased activity. This would be expected to adversely affect the insertion of new AMPA receptors into the PSD as well as morphological restructuring of synaptic spines observed after LTP in normal mice [113, 114]. ### 3.6. BDNF Pathway Brain-derived neurotrophic factor (BDNF) contributes to LTP by stimulating protein synthesis. 
In activating postsynaptic TrkB receptors, BDNF stimulates the PI3K pathway [115] which can initiate translation through mammalian target of rapamycin (mTOR) thereby enhancing synthesis of proteins such as CaMKIIα, GluR1, and NMDA receptor subunit 1 [116]. In Ts65Dn mice, we found that PI3K phosphorylation failed to increase following an LTP induction protocol suggesting this pathway is perturbed by trisomy [93]. Consistent with this notion, in DS individuals, BDNF blood plasma levels are approximately 5 times higher than in age-matched controls [117]. As BDNF readily crosses the blood-brain barrier [118], these levels likely reflect those present in the CNS as well.Examination of BDNF levels in DS mouse models presents a complex picture. In Ts65Dn mice, levels of BDNF in the frontal cortex are diminished [119]. In the hippocampus, both no difference [81] and a reduction [120] compared to control have been reported. In the latter case, the reduction in BDNF levels was associated with decreased neurogenesis and was reversible through treatment with fluoxetine [120]. In the Ts1Cje mouse model of DS, BDNF is overexpressed in the hippocampus, particularly in the dentate gyrus and CA1 regions and in the dendrites of dissociated hippocampal neurons grown in culture [121]. Increased BDNF levels in Ts1Cje mice hippocampi were associated with greater levels of phosphorylated Akt-mTOR and expression of GluR1 protein which could not be further enhanced with exogenous supplemental BDNF suggesting this pathway related to synaptic plasticity is saturated in these mice preventing further contributions to LTP [121].The discrepancies between observations in Ts65Dn and Ts1Cje BDNF levels may reflect how BDNF expression is distributed in these structures, elevated in some subregions or subcellular compartments while diminished in others, resulting in an increased functional effect despite reduced global levels. 
Conversely, the differences in observed BDNF levels could be related to the different numbers of genes overexpressed in these two mouse lines [53, 65, 122] or, as mice of differing age groups were used in the studies, may reflect differences in expression levels as a function of age. Further investigation is necessary to fully align these observations. However, the observation that rapamycin has a restorative effect on phosphorylated Akt-mTOR levels in Ts1Cje suggests a potential therapeutic mechanism for improving cognition in DS individuals [121] possibly by normalizing a pathway involved in synaptic plasticity. ### 3.7. GABAB-GIRK2 Attenuation of Synaptic Plasticity As mentioned above, postsynaptic calcium influx is critical for LTP and LTD in the hippocampus. This initiating step relies heavily upon depolarization of the postsynaptic membrane to relieve the voltage-dependent magnesium block of NMDA channels. Any phenomenon that reduces the ability of the postsynaptic membrane to depolarize would thus be expected to adversely affect plasticity. Through its coupling toGABAB receptors, the type 2 G-protein-activated inward rectifying potassium (GIRK2) channel may act to dampen the expression of LTP in Ts65Dn hippocampus through a shunting mechanism.GIRK2 is encoded by the geneKcnj6 which is located on the chromosomal segment triplicated in DS and Ts65Dn mice, and, consequently, elevated expression levels have been found in the Ts65Dn hippocampus [50, 123]. At a cellular level, overexpression of GIRK2 leads to a more hyperpolarized resting potential in cultured hippocampal neurons [124] and CA1 pyramidal neurons in vitro[125]. Selectively reducing the expression level of GIRK2 by crossing euploid and Ts65Dn mice with mice heterozygous for GIRK2 (GIRK2+/-) resulted in a gene dosage-dependent change in the resting membrane potential and facilitation of LTP in GIRK2 knockout mice [50]. 
Selective overexpression of GIRK2 alone in mice results in cognitive deficits, reduced depotentiation (a functional reversal of potentiation at a synapse), and enhanced LTD [126].These effects on LTP and LTD could be mediated through GABAB receptors which, in pyramidal neurons, are in closest proximity to GIRK2-contaning potassium channels near glutamatergic synapses on dendritic spines [127]. GABAB receptors are functionally linked to GIRK channels, and, indeed, whole-cell GABAB-mediated potassium currents are exaggerated in Ts65Dn hippocampal neurons [50, 124, 128]. In CA1, these exaggerated currents have a greater functional impact on the distal dendrites of pyramidal neurons as opposed to those located more proximally [128]. A similar enhancement of GABAB-mediated currents is also found in the dentate gyrus where the presynaptic release probability of GABA is increased [129]. Thus, GIRK channels, activated by GABAB and other G-protein coupled receptors, appear to act as a break on synaptic plasticity in the Ts65Dn hippocampus. ## 3.1. Morphological Changes Mouse models of DS, including the Ts65Dn strain, show similar detrimental changes in neuronal and dendritic morphologies observed in humans. The neocortex of Ts65Dn mice contains fewer excitatory neurons but an increased number of a subset of inhibitory neurons relative to euploid controls, a phenotype that was reversed by normalizing the expression levels ofOlig1/2 [70]. Additionally, regions both in the neocortex and hippocampus have decreased spine densities with larger spine volumes [71]. In the dentate granule cells of the hippocampus, there is a shift of inhibitory synaptic connections away from the dendritic shafts and onto the necks [71]. Such a change would be expected to increase the efficacy of inhibitory synaptic transmission given the significantly reduced volume of the spine neck compared to the shaft. 
At a finer resolution, symmetric (presumed inhibitory) synapses have greater apposition lengths in Ts65Dn mice while asymmetric synapses are unaltered [72], again supporting a shift towards excess inhibition in these mice. Similar but less severe changes are observed in Ts1Cje mice [67]. Beyond suppressing excitatory synaptic activity, the altered spine morphology and shift towards excess inhibition in trisomy would be expected to suppress plasticity-related signaling cascades that frequently rely on depolarization-mediated calcium influx into the postsynaptic structural domains.

## 3.2. Functional Changes

Synaptic plasticity in the hippocampus is often investigated in the context of long-term potentiation (LTP), in which high-frequency activation of specific inputs in the hippocampus results in a long-lasting potentiation of synaptic responses along the excited afferent pathway. First described in the anesthetized rabbit [73], this phenomenon is believed to be a fundamental mechanism underlying memory formation [8, 74] and is suppressed in Ts65Dn (depicted in Figure 1 for the CA1 region of the hippocampus) [44, 75, 76] and Ts1Cje [66, 67] mice, as well as in mice trisomic for Hsa21 syntenic regions of Mmu16 and Mmu17 [77] and in those carrying an almost complete copy of Hsa21 [46, 48], but not in Ts1Rhr mice [69] (however, see [68]).

Figure 1: Depiction of altered CA1 hippocampal plasticity in Ts65Dn mice. (a) Diagram indicating electrode placement for stimulating Schaffer collaterals (SC) arising from CA3 and recording the evoked field excitatory postsynaptic potential (EPSP) in CA1. Traces to the right indicate the typical change in evoked responses (red) following LTP and LTD. (b) Simulated data depicting suppressed LTP in Ts65Dn mice. After high-frequency stimulation of SC (arrowhead), the field EPSP increases and remains enhanced in euploid mice but fails to remain elevated in Ts65Dn mice. (c) Simulated LTD data depicting exaggerated depression of evoked EPSPs following low-frequency stimulation of SC in Ts65Dn mice. (Traces in (b) and (c) based on data from [44, 45].)

As outlined above, structural changes suggest that inhibition is exaggerated in the trisomic hippocampus. Consistent with this idea is the observation that LTP in the dentate gyrus and CA1 regions of Ts65Dn hippocampal slices, induced by high-frequency stimulation and theta burst protocols, respectively, can be rescued by the GABAA antagonist picrotoxin [75, 76]. Blockade of GABAA receptors in hippocampal slices from Ts1Cje and Ts1Rhr mice rescues LTP deficits in the dentate gyrus in these DS mouse models as well [67, 68]. A similar treatment in Ts65Dn mice leads to an enhancement in cognitive performance [60].

In addition to suppressed LTP, hippocampi from Ts65Dn mice show enhanced long-term depression (LTD) in response to sustained activation of excitatory synapses [45, 78]. This latter effect can be reversed with the uncompetitive NMDA receptor antagonist memantine [78], which also improves the cognitive performance of Ts65Dn mice [79–81]. These results draw a clear link between altered synaptic plasticity in the hippocampus and cognitive performance in the Ts65Dn mouse model of Down syndrome.

## 3.3. Synaptic-Plasticity-Related Signaling Cascades

Changes in intracellular calcium concentrations are important triggers for many intracellular signaling cascades, including those underlying LTP and LTD [82]. For example, the presence of a calcium chelator that buffers intracellular calcium levels in postsynaptic neurons prevents the induction of LTP [83], consistent with the hypothesis that a postsynaptic rise in intracellular calcium levels is necessary for LTP [8]. When the postsynaptic membrane is strongly depolarized, the magnesium block of NMDA channels is lifted, providing the main (but not exclusive) mechanism for calcium entry into the postsynaptic cell.
Elevated intracellular calcium levels trigger a cascade of intracellular messengers that ultimately lead to the induction and maintenance of synaptic plasticity (both LTP and LTD, depending on the kinetics). An excellent overview of this process can be found in several reviews [82, 84], and only key components known to be affected by trisomy (Figure 2) will be discussed here.

Figure 2: Alterations in intracellular signaling cascades affecting the postsynaptic AMPAR response in the Ts65Dn hippocampus. Green indicates elevated levels/activity at baseline, while red indicates diminished activity. During LTP (right), enhanced CaMKII and GluR1 subunit phosphorylation in Ts65Dn synapses may result in a saturated condition incapable of additional potentiation. Reduced ERK activity may reduce migration of new AMPARs into the PSD. In LTD, overexpression of RCAN1 should reduce the activity of PP2B (calcineurin), resulting in reduced internalization of AMPA receptors and a potential reduction of NMDAR mean open times. Rescue of LTD in Ts65Dn mice by NMDAR antagonists suggests that enhanced NMDAR activity contributes to altered LTD through as yet unidentified mechanisms.

### 3.3.1. CaMKII

Activation of postsynaptic NMDA receptors (NMDARs) concomitant with the depolarization of the postsynaptic membrane is sufficient to relieve the magnesium block of NMDA channels, leading to an influx of calcium into the intracellular postsynaptic space. In the case of LTP, the rise of intracellular calcium leads to the activation of calcium/calmodulin-dependent protein kinase II (CaMKII), a necessary step for initiating NMDAR-dependent LTP [82]. Blocking CaMKII prevents induction of LTP [82, 85, 86], while constitutively active forms can induce LTP [87]. During all phases of LTP (induction, early, and late), levels of phosphorylated CaMKII are increased in the hippocampus [88].
CaMKII phosphorylated at threonine 286 (Thr286) can become constitutively active, providing a potential switch for initiating and then maintaining potentiation [89]. Alternatively, phosphorylation of Thr305/306 can inhibit the expression of LTP by interfering with the binding of calcium/calmodulin [90, 91]. Indeed, cognitive deficits associated with Angelman syndrome were reversed in a mouse model of the disorder by reducing the levels of CaMKII phosphorylated at Thr305/306 [92]. Thus, depending on the site of phosphorylation, CaMKII can facilitate or suppress initiation and maintenance of LTP. In Ts65Dn mice, we found that baseline levels of CaMKII phosphorylated at Thr286 are elevated in the hippocampus [93]. Excessive basal phosphorylation of this CaMKII site leading to constitutive activation could leave the trisomic network in a saturated state unable to shift to more potentiated levels.

One of the substrates targeted by CaMKII during the initial expression of LTP is the serine 831 residue on GluR1 subunits of AMPA receptors [94, 95]. This phosphorylation leads to an increase in conductance of the AMPA channel [96], providing a rapid mechanism for enhancing glutamatergic synaptic strength. In Ts65Dn mice, we find that baseline levels of phosphorylated serine 831 in synaptically located GluR1 receptors are elevated [93]. This apparent increase in AMPA channel conductance appears not to have any significant effect on baseline excitatory synaptic transmission, which is normal in the Ts65Dn hippocampus [45, 75, 76, 93]. However, it could also partially occlude the initiation of LTP in these mice by leaving Ts65Dn hippocampal excitatory synapses with fewer AMPA channels available for potentiation. This finding would be consistent with our observation of increased CaMKII in Ts65Dn hippocampus noted above [93] and the suggestion that some components of the LTP network are in an apparent saturated state in these mice.

## 3.4. PKA, RCAN1, Calcineurin

Protein kinase A (PKA) also plays a critical role in establishing LTP. In particular, evidence suggests that it is involved in initiating the protein synthesis required for the late phase of LTP [97, 98]. Blocking PKA activity suppresses the late phase of LTP (lasting beyond 3 hours) while leaving the early phase of LTP (less than 3 hours) unaffected [99, 100]. Transgenic mice in which PKA activity is reduced have significantly decreased late-phase LTP in CA1 but normal early LTP and perform poorly on tasks requiring long- but not short-term memory formation [101].

PKA plays a role in LTD, where its substrates, such as GluR1, show increased dephosphorylation following induction [102, 103]. Dephosphorylation of GluR1 subunits should reduce the conductance levels of affected AMPA receptors [96], resulting in a reduction of synaptic strength. PKA also enhances the activity of RCAN1 [104], an inhibitor of calcineurin, which contributes to AMPA receptor internalization [105] and reductions in NMDA receptor mean open time [106]. In Ts65Dn mice, we found that PKA activity is reduced in the hippocampus [93], which should adversely affect LTP by reducing protein expression required for the late phase.
With respect to LTD, reduced PKA activity would result in more AMPA receptors remaining in a high-conductance state and less facilitation of RCAN1 activity. This latter effect is offset, however, by the overexpression of the gene encoding RCAN1 in DS and Ts65Dn mice [4]. How these factors contribute to the enhancement of LTD in Ts65Dn hippocampi [45, 78] remains to be determined.

## 3.5. Extracellular Signal-Regulated Kinase (ERK)

In addition to GluR1 subunits, both CaMKII and PKA converge on another common effector, the mitogen-activated protein kinase (MAPK/ERK), which is associated with a host of synaptic-plasticity-related cellular processes [107]. In the case of hippocampal LTP, there is a rapid increase in the amount of phosphorylated ERK following induction [108], and blocking ERK activation prevents expression of LTP [109]. Cultured hippocampal neurons undergo phosphorylated-ERK-dependent spine generation following LTP conditioning stimuli, implicating this pathway in spine formation [110]. Additionally, it is believed that lateral diffusion of extrasynaptic AMPA receptors containing GluR1 subunits into the postsynaptic density (PSD) is a major contributor to LTP expression [111]. This process is assisted by Ras/ERK phosphorylation of stargazin on extrasynaptic AMPA receptors, enabling them to be structurally secured at the synapse to PSD95 [112]. In the Ts65Dn hippocampus, ERK phosphorylation is decreased [93], suggesting decreased activity. This would be expected to adversely affect the insertion of new AMPA receptors into the PSD as well as the morphological restructuring of synaptic spines observed after LTP in normal mice [113, 114].

## 3.6. BDNF Pathway

Brain-derived neurotrophic factor (BDNF) contributes to LTP by stimulating protein synthesis.
By activating postsynaptic TrkB receptors, BDNF stimulates the PI3K pathway [115], which can initiate translation through mammalian target of rapamycin (mTOR), thereby enhancing synthesis of proteins such as CaMKIIα, GluR1, and NMDA receptor subunit 1 [116]. In Ts65Dn mice, we found that PI3K phosphorylation failed to increase following an LTP induction protocol, suggesting this pathway is perturbed by trisomy [93]. Consistent with this notion, in DS individuals, BDNF blood plasma levels are approximately 5 times higher than in age-matched controls [117]. As BDNF readily crosses the blood-brain barrier [118], these levels likely reflect those present in the CNS as well.

Examination of BDNF levels in DS mouse models presents a complex picture. In Ts65Dn mice, levels of BDNF in the frontal cortex are diminished [119]. In the hippocampus, both no difference [81] and a reduction [120] compared to control have been reported. In the latter case, the reduction in BDNF levels was associated with decreased neurogenesis and was reversible through treatment with fluoxetine [120]. In the Ts1Cje mouse model of DS, BDNF is overexpressed in the hippocampus, particularly in the dentate gyrus and CA1 regions, and in the dendrites of dissociated hippocampal neurons grown in culture [121]. Increased BDNF levels in Ts1Cje hippocampi were associated with greater levels of phosphorylated Akt-mTOR and expression of GluR1 protein that could not be further enhanced with exogenous supplemental BDNF, suggesting this plasticity-related pathway is saturated in these mice, preventing further contributions to LTP [121].

The discrepancies between the Ts65Dn and Ts1Cje BDNF observations may reflect how BDNF expression is distributed within these structures: elevated in some subregions or subcellular compartments while diminished in others, resulting in an increased functional effect despite reduced global levels.
Alternatively, the differences in observed BDNF levels could be related to the different numbers of genes overexpressed in these two mouse lines [53, 65, 122] or, as mice of differing age groups were used in the studies, may reflect differences in expression levels as a function of age. Further investigation is necessary to fully reconcile these observations. However, the observation that rapamycin has a restorative effect on phosphorylated Akt-mTOR levels in Ts1Cje suggests a potential therapeutic mechanism for improving cognition in DS individuals [121], possibly by normalizing a pathway involved in synaptic plasticity.

## 3.7. GABAB-GIRK2 Attenuation of Synaptic Plasticity

As mentioned above, postsynaptic calcium influx is critical for LTP and LTD in the hippocampus. This initiating step relies heavily upon depolarization of the postsynaptic membrane to relieve the voltage-dependent magnesium block of NMDA channels. Any phenomenon that reduces the ability of the postsynaptic membrane to depolarize would thus be expected to adversely affect plasticity. Through its coupling to GABAB receptors, the type 2 G-protein-activated inward rectifying potassium (GIRK2) channel may act to dampen the expression of LTP in the Ts65Dn hippocampus through a shunting mechanism.

GIRK2 is encoded by the gene Kcnj6, which is located on the chromosomal segment triplicated in DS and Ts65Dn mice, and, consequently, elevated expression levels have been found in the Ts65Dn hippocampus [50, 123]. At a cellular level, overexpression of GIRK2 leads to a more hyperpolarized resting potential in cultured hippocampal neurons [124] and CA1 pyramidal neurons in vitro [125]. Selectively reducing the expression level of GIRK2 by crossing euploid and Ts65Dn mice with mice heterozygous for GIRK2 (GIRK2+/-) resulted in a gene dosage-dependent change in the resting membrane potential and facilitation of LTP in GIRK2 knockout mice [50].
Selective overexpression of GIRK2 alone in mice results in cognitive deficits, reduced depotentiation (a functional reversal of potentiation at a synapse), and enhanced LTD [126].

These effects on LTP and LTD could be mediated through GABAB receptors, which, in pyramidal neurons, are in closest proximity to GIRK2-containing potassium channels near glutamatergic synapses on dendritic spines [127]. GABAB receptors are functionally linked to GIRK channels, and, indeed, whole-cell GABAB-mediated potassium currents are exaggerated in Ts65Dn hippocampal neurons [50, 124, 128]. In CA1, these exaggerated currents have a greater functional impact on the distal dendrites of pyramidal neurons as opposed to those located more proximally [128]. A similar enhancement of GABAB-mediated currents is also found in the dentate gyrus, where the presynaptic release probability of GABA is increased [129]. Thus, GIRK channels, activated by GABAB and other G-protein-coupled receptors, appear to act as a brake on synaptic plasticity in the Ts65Dn hippocampus.

## 4. Potential Impact of Altered Plasticity on Hippocampal Processing

The hippocampus receives major inputs from the entorhinal cortex (EC), which converge on CA1 pyramidal neurons through two main pathways: the perforant pathway (PP) and the temporoammonic (TA) pathway [130]. The PP passes through the dentate gyrus to pyramidal neurons in CA3 before impinging upon the relatively proximal dendrites of CA1 pyramidal neurons in stratum radiatum (SR). Conversely, inputs to CA1 from the TA pathway target the distal dendrites located in stratum lacunosum-moleculare (SLM). In the normal hippocampus, frequency-based synaptic plasticity at the CA3-CA1 synapse, coupled with a feed-forward inhibition loop from stratum oriens/alveus interneurons that suppress inputs to distal CA1 dendrites, enables segregation of information flow through these two pathways [131].
During high-frequency synaptic activity, the CA3-CA1 synapse would be expected to undergo LTP, increasing the excitatory drive to CA1 pyramidal neurons and, consequently, enhancing the suppression of inputs to distal CA1 dendrites by the feed-forward inhibition loop (Figure 3(a)). Thus, during high-frequency events, TA inputs that target distal CA1 dendrites would be suppressed and information flow through the CA3-CA1 pathway enhanced. Conversely, during low-frequency synaptic activity, the CA3-CA1 synapse would be expected to undergo LTD and become less effective. Inhibition of distal CA1 synapses would then be decreased and information flow through the TA pathway would likely be enhanced (Figure 3(b)). Diminished LTP resulting from trisomy would then interfere with this frequency-based segregation of information flow through the hippocampus. Without LTP, feed-forward inhibition would fail to suppress information flow through the TA pathway, causing inputs from the two pathways to become superimposed upon and interfere with each other (Figure 3(c)). In contrast, the flow of information during low-frequency signaling would likely remain intact, or potentially be facilitated, since enhanced LTD at CA3-CA1 would prevent interference from this pathway (Figure 3(d)).

Figure 3: Potential impact of altered synaptic plasticity on hippocampal processing in the Ts65Dn mouse model of Down syndrome. Schematic of the two main pathways through the hippocampus arriving from the entorhinal cortex: the temporoammonic (TA) pathway, direct to CA1 distal dendrites, and the trisynaptic pathway, from DG through CA3 to proximal CA1 dendrites. LTP and LTD are proposed to minimize interference between the two pathways [50, 131]. (a) In euploid hippocampi, high-frequency inputs induce LTP in CA1, resulting in enhanced suppression of inputs from TA by feed-forward inhibition arising from interneurons in stratum oriens.
(b) Low-frequency inputs depress the trisynaptic pathway, releasing distal CA1 dendrites from feed-forward inhibition and allowing information to flow through the TA pathway. (c) In Ts65Dn hippocampi, aberrant LTP in CA1 results in diminished feed-forward inhibition during high-frequency activity, allowing TA inputs to become superimposed on those flowing through the trisynaptic pathway. (d) Enhanced LTD would be expected to facilitate flow of low-frequency information through the direct TA pathway in Ts65Dn mice. (a) Euploid: LTP. (b) Euploid: LTD. (c) Ts65Dn: LTP. (d) Ts65Dn: LTD.

Electroencephalogram (EEG) recordings from DS individuals suggest that such a preferential suppression of high-frequency information flow may result from overexpression of Hsa21 genes. Compared to controls, DS individuals have increased power at low EEG frequencies and a corresponding reduction in power at higher frequencies [132]. Similar observations have been reported in Ts65Dn mice [133]. While it is not clear that hippocampal activity is accurately reflected in EEG recordings, abnormal EEG findings are consistent with aberrant processing of high-frequency information in DS individuals and Ts65Dn mice.

## 5. Cognitive Therapies Targeting Plasticity

A number of studies using the Ts65Dn mouse model of DS have examined the possibility of pharmacologically reversing cognitive deficits (reviewed in [122]). Of particular note with respect to hippocampal plasticity are those targeting the excess GABAergic inhibitory tone or the NMDA receptors whose activation, as outlined above, is a critical step in initiating LTP and LTD. Application of the GABAA receptor antagonist picrotoxin to hippocampal slices from Ts65Dn mice rescues LTP in the dentate gyrus [75] and CA1 region [76].
Chronic administration of low doses of picrotoxin or other GABAA receptor antagonists (pentylenetetrazole or bilobalide) improves cognition in Ts65Dn mice, suggesting that the efficacy of this class of pharmacological agents could be tested for reversing impaired cognition in DS [60]. However, as excessive blockade of GABAA receptors can induce seizures, translating these findings to humans requires great caution. Careful screening of similar drugs, or design of pharmacological compounds with similar blocking capabilities but reduced propensities for inducing seizures, may prove to yield effective treatments. Currently, a small molecule targeting GABAA receptors developed by F. Hoffmann-La Roche Ltd (pharmaceutical pipeline molecule RG1662, http://www.roche.com/roche_pharma_pipeline.htm) is in clinical trials with the goal of safely improving cognition in DS individuals. Braudeau et al. [134, 135] are currently investigating a similarly promising inhibitor that targets the alpha-5 subunit of GABAA receptors.

Another pharmacological avenue targets the aberrant NMDA receptor-mediated signaling apparently present in Ts65Dn mice. The uncompetitive NMDA receptor antagonist memantine improves the cognitive performance of Ts65Dn mice [79] and normalizes hippocampal LTD [78]. Memantine is an FDA-approved and fairly well-tolerated drug already in use for treating dementia in Alzheimer disease. Clinical trials assessing its safety, tolerability, and efficacy in alleviating DS cognitive phenotypes are currently underway [57].

In addition to pharmacological approaches, behavioral therapies have been shown to improve cognition in Ts65Dn mice. When housed in enriched environments (larger cages with novel objects such as toys and running wheels), trisomic mice performed as well as euploid littermates in the Morris water maze and had normalized hippocampal LTP [136, 137].
Interestingly, environmental enrichment was effective for trisomic females but not trisomic males, potentially due to social and physical factors associated with the new environments [138]. The benefits of environmental enrichment appear to be linked in part to regulation of excess inhibition in the neocortex and hippocampus. Release of GABA from synaptosomes isolated from the hippocampus and neocortex is elevated in Ts65Dn mice, an effect that is reversed by environmental enrichment [137]. In adult rats with amblyopia (induced via monocular deprivation during the critical period), environmental enrichment reversed visual deficits and reduced GABA levels in the visual cortex contralateral to the deprived eye while increasing plasticity [137]. It thus appears possible to regulate aberrant levels of inhibition in trisomic mice behaviorally, without pharmacological intervention, and to achieve similar behavioral outcomes without the concerns associated with nonspecific actions or adverse side effects of drugs.

Deficits in neurogenesis in the dentate gyrus and forebrain subventricular zone of Ts65Dn mice are also reversed following environmental enrichment [139] and may add an additional therapeutic layer to the beneficial effect of a decrease in inhibitory tone. A structural benefit of environmental enrichment appears to be lacking, however, as, unlike in euploid mice, this treatment has failed to significantly increase dendritic branching and spine density in Ts65Dn mice [140].

Early behavioral intervention techniques designed to improve development in DS children show great promise [141, 142], suggesting that this comparatively easily translatable therapeutic approach, either used alone or in combination with pharmacological agents, could potentially increase cognitive capacities in DS individuals.

## 6. Conclusion

Synaptic plasticity is believed to be the process central to learning and memory.
This belief is bolstered by experiments in which drugs that normalize aberrant plasticity in hippocampal slices isolated from mouse models of DS also confer improvements in cognition in intact adult mice. The initiation and maintenance of plastic changes involve structural and compositional modifications of synapses that depend on intracellular signaling cascades. The reduced excitatory neuronal densities and deficits in dendritic morphology present in individuals with DS diminish the capacity of their neural networks to undergo neuroplastic adaptations. Combined with the deficits in signaling pathways reported in Ts65Dn mice, the evidence strongly suggests that synaptic plasticity is severely impaired in DS neural networks. By understanding how plasticity is perturbed, we can design therapies to reverse these phenotypes and ultimately improve cognition in DS individuals.

---

*Source: 101542-2012-07-12.xml*
**Title:** From Abnormal Hippocampal Synaptic Plasticity in Down Syndrome Mouse Models to Cognitive Disability in Down Syndrome
**Authors:** Nathan Cramer; Zygmunt Galdzicki
**Journal:** Neural Plasticity (2012)
**Category:** Biological Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2012/101542
---

## Abstract

Down syndrome (DS) is caused by the overexpression of genes on triplicated regions of human chromosome 21 (Hsa21). While the resulting physiological and behavioral phenotypes vary in their penetrance and severity, all individuals with DS have variable but significant levels of cognitive disability. At the core of cognitive processes is the phenomenon of synaptic plasticity, a functional change in the strength at points of communication between neurons. A wide variety of evidence from studies on DS individuals and mouse models of DS indicates that synaptic plasticity is adversely affected in human trisomy 21 and mouse segmental trisomy 16, respectively, an outcome that almost certainly contributes extensively to the cognitive impairments associated with DS. In this review, we will highlight some of the neurophysiological changes that we believe reduce the ability of trisomic neurons to undergo neuroplasticity-related adaptations. We will focus primarily on hippocampal networks, which appear to be particularly impacted in DS and where, consequently, the majority of cellular and neuronal network research has been performed using DS animal models, in particular the Ts65Dn mouse. Finally, we will postulate on how altered plasticity may contribute to the DS cognitive disability.

---

## Body

## 1. Introduction

Down syndrome (DS) results from the triplication of genes on human chromosome 21 (Hsa21) and is associated with a range of phenotypes including craniofacial changes [1, 2], cardiac defects [3], susceptibility to leukemia but with reduced occurrence of solid cancers [4, 5], and intellectual disability [6, 7]. While the presence and severity of these individual phenotypes vary among DS individuals, every individual with DS has some degree of cognitive impairment. These impairments limit the independence of DS subjects and adversely impact their quality of life.
Consequently, understanding the genetic causes of cognitive dysfunction in DS has been the focus of much research in this field.

The phenomenon of synaptic plasticity has been strongly linked to cognitive processes, such as learning and memory [8, 9]. Synaptic plasticity refers to the dynamic nature of synapses, sites of communication between neurons, in which the structure, composition, or function of the synapse changes in response to network activity. Depending on the timing and strength of pre- and postsynaptic activity, synapses can either be strengthened or weakened, providing a potential mechanism for memory formation and storage [10]. Structurally, synaptic connections on excitatory neurons are typically formed on the heads of dendritic spines [11]. The morphology of the spines enables compartmentalization of signaling cascades and facilitates manipulation of the structure and composition of the cell membrane by second messenger systems [12, 13]. Thus, not only is the number of spines important, as individual locations for excitatory synaptic transmission, but the shape of the individual spines also has a critical functional role.

The link between synaptic plasticity and cognitive processes such as learning and memory is frequently studied within the hippocampus, a structure involved in diverse cognitive processes such as those related to acquisition, coding, storing, and recalling information in physical or perceived spatial environments [14–16]. Multiple lines of evidence indicate that long-lasting up- or downregulation of functional synaptic strengths, referred to as long-term potentiation (LTP) and long-term depression (LTD), respectively, are fundamental synaptic mechanisms underlying hippocampal contributions to these processes. Thus, dendritic and synaptic abnormalities in the hippocampus, either morphological or functional, would be expected to significantly impact spatial cognition.
Indeed, neuropsychological investigations requiring the use of spatial information in problem solving indicate that deficits in hippocampal-mediated learning and memory processes are hallmarks of DS [17, 18]. In this paper, we will provide an overview of the morphological and behavioral evidence for altered synaptic plasticity in DS, with a focus on the hippocampus, and discuss the insights provided by mouse models of this neurodevelopmental disorder into the potential molecular mechanisms contributing to these deficits.

## 2. Evidence for Altered Synaptic Plasticity in DS: A Neurodevelopmental Impact

The basis for altered synaptic plasticity in DS can be found in changes in the physical structure of the dendrites. Alterations in the shape and densities of dendrites would be expected to adversely affect the information storage capacity of neural networks by reducing the number of potential sites for plasticity to occur. Consistent with this idea and the observed deficits in cognition associated with DS, examination of postmortem brain tissue from DS individuals reveals profound alterations in dendritic and neuronal densities and morphology across many regions of the brain, beginning in utero and persisting throughout life. The neocortical development of DS fetuses appears normal up to at least gestational week 22 [19–21]. By 40 weeks gestation, less discrete lamination is observed in the neocortex of DS fetuses, with lower and higher cell densities observed in the visual cortex and superior temporal neocortex, respectively [19, 20]. In the hippocampus, deficits begin to appear slightly earlier, as DS fetuses (17 to 21 weeks of gestation) show altered morphology, reduced neuron numbers, enhanced apoptosis, and reduced cell proliferation [22–24].
These changes may result, in part, from reductions in serotonin, dopamine, and GABA levels in the fetal DS cortex [25] since, during development, neurotransmitters such as these can act as neurotrophic factors assisting with neuronal migration, axon guidance, and neurite development [26].

During the early postnatal period, significant deficits in brain weight and gross morphology, as well as myelination and neuronal densities and morphology, appear [27]. Initially, dendritic expansion is enhanced in DS infants, but, by the first to second year of life, this trend reverses to become a deficit [19, 28] which persists into adulthood [19, 29]. Dendritic spine numbers are reduced and morphology is altered in DS [30, 31]. Consistent with adverse changes in dendrite morphology, synaptogenesis is also aberrant in DS fetuses [19, 32, 33] and remains deficient in adulthood [34]. MRI studies reveal that DS children and young adults have smaller overall brain volumes [35, 36], with particular deficits noted in the hippocampus [36, 37]. Hippocampal volume, which continues to decrease with age in DS individuals [38], was found to be inversely correlated with the degree of cognitive impairment [36]. Cognitive tests such as the Cambridge Neuropsychological Testing Automated Battery (CANTAB) and the Arizona Cognitive Test Battery (ACTB), the latter specifically tailored to address DS deficits, indicate that hippocampal function is particularly impacted by the DS genetic condition [17, 39].

These morphological and cognitive deficits are consistent with aberrant synaptic plasticity, and, indeed, while difficult to measure directly in human subjects, evidence suggests that plasticity is reduced at least in the motor cortex of DS individuals [40]. Additionally, functional MRI (fMRI) during cognitive processing tasks reveals abnormal neural activation patterns in DS children and young adults [41, 42].
Examination of resting glucose metabolism in the cerebral cortex of DS individuals found enhanced uptake in brain regions associated with cognition, suggesting cellular hyperactivity in those areas [43]. To better understand the functional consequences resulting from altered network morphologies, as well as investigate potential alterations in intracellular signaling cascades contributing to aberrant plasticity, it was necessary to develop and then examine animal models of DS.

## 3. Modeling DS Cognitive Impairment

Over the past few decades, several mouse models of Down syndrome have been developed to further our understanding of the link between enhanced gene dosage and DS phenotypes such as altered plasticity and cognition. The Tc1 mouse model carries an almost complete, freely segregating copy of Hsa21, but the chromosome is present in only approximately 50% of cells, making this a mosaic model of DS [46]. Interestingly, some genes have been deleted from the “inserted” Hsa21 [47]. It is important to note that, in spite of the mosaicism and gene deletions, many DS phenotypes have been replicated in this model [46, 48, 49]. Other mouse models have taken advantage of the homology between regions of Hsa21 and mouse chromosomes 10, 16, and 17 (Mmu10, 16, 17), making models in which these genes are triplicated highly useful in understanding the genetic basis of DS phenotypes [50, 51]. A mouse model trisomic for all Hsa21 homologous segments was recently developed and holds great promise for furthering our understanding of DS [52].
As this is a relatively new model, however, most research has been conducted using the Ts65Dn segmental trisomic mouse [53–55], which is trisomic for more than 50% of the genes with Hsa21 homologs [56, 57] and has well-documented DS-like deficits in behavioral tasks such as those relying upon declarative memory (novel object recognition and spontaneous alternation tasks) and the proper encoding and recollection of spatial information (radial arm and Morris water mazes) [58–63]. While the Ts65Dn mouse is the only mouse model of DS to have a freely segregating supernumerary chromosome, these mice are also trisomic for 60 genes that do not have Hsa21 homologs [64], and the impact of overexpression of these genes on Ts65Dn phenotypes remains to be determined.

Similar to the Ts65Dn mouse, but with smaller triplicated Mmu16 segments, are the Ts1Cje and Ts1Rhr mouse models. These mice display phenotypes similar to Ts65Dn mice, including hippocampal dysfunction; however, the severity of the deficits is reduced [65–69]. The reduced severity of DS-like deficits in mice with fewer trisomic genes highlights one of the powerful aspects of mouse models: the ability to control expression of certain Hsa21 homologs to assess their contribution to specific DS phenotypes. Those deficits associated with the hippocampus, whose function is notably altered in DS individuals [17, 39], will be the focus of the remainder of this paper.

### 3.1. Morphological Changes

Mouse models of DS, including the Ts65Dn strain, show detrimental changes in neuronal and dendritic morphologies similar to those observed in humans. The neocortex of Ts65Dn mice contains fewer excitatory neurons but an increased number of a subset of inhibitory neurons relative to euploid controls, a phenotype that was reversed by normalizing the expression levels of Olig1/2 [70]. Additionally, regions both in the neocortex and hippocampus have decreased spine densities with larger spine volumes [71].
In the dentate granule cells of the hippocampus, there is a shift of inhibitory synaptic connections away from the dendritic shafts and onto the spine necks [71]. Such a change would be expected to increase the efficacy of inhibitory synaptic transmission given the significantly reduced volume of the spine neck compared to the shaft. At a finer resolution, symmetric synapses (presumed inhibitory) have greater apposition lengths in Ts65Dn mice, while asymmetric synapses were unaltered [72], again supporting a shift towards excess inhibition in these mice. Similar but less severe changes are observed in Ts1Cje mice [67]. Beyond suppressing excitatory synaptic activity, the altered spine morphology and shift towards excess inhibition in trisomy would be expected to suppress plasticity-related signaling cascades that frequently rely on depolarization-mediated calcium influx into the postsynaptic structural domains.

### 3.2. Functional Changes

Synaptic plasticity in the hippocampus is often investigated in the context of long-term potentiation (LTP), in which high-frequency activation of specific inputs in the hippocampus results in a long-lasting potentiation of synaptic responses along the excited afferent pathway. First described in the anesthetized rabbit [73], this phenomenon is believed to be a fundamental mechanism underlying memory formation [8, 74] and is suppressed in Ts65Dn (depicted in Figure 1 for the CA1 region of the hippocampus) [44, 75, 76] and Ts1Cje [66, 67] but not in Ts1Rhr mice [69] (however, see [68]) as well as mice trisomic for Hsa21 syntenic regions of Mmu16 and Mmu17 [77] or those carrying an almost complete copy of Hsa21 [46, 48].

Figure 1 Depiction of altered CA1 hippocampal plasticity in Ts65Dn mice. (a) Diagram indicating electrode placement for stimulating Schaffer collaterals arising from CA3 and recording the evoked field excitatory postsynaptic potential (EPSP) in CA1.
Traces to the right indicate the typical change in evoked responses (red) following LTP and LTD. (b) Simulated data depicting suppressed LTP in Ts65Dn mice. After high-frequency stimulation of SC (at arrowhead), the field EPSP increases and remains enhanced in euploid mice but fails to remain elevated in Ts65Dn mice. (c) Simulated LTD data depicting exaggerated depression of evoked EPSPs following low-frequency stimulation of SC in Ts65Dn mice. (Traces in (b) and (c) based on data from [44, 45].)

As outlined above, structural changes suggest that inhibition is exaggerated in the trisomic hippocampus. Consistent with this idea is the observation that LTP in the dentate gyrus and CA1 regions of Ts65Dn hippocampal slices, induced by high-frequency stimulation and theta burst protocols, respectively, can be rescued by the GABAA antagonist picrotoxin [75, 76]. Blockade of GABAA receptors in hippocampal slices from Ts1Cje and Ts1Rhr mice rescues LTP deficits in the dentate gyrus in these DS mouse models as well [67, 68]. A similar treatment in Ts65Dn mice leads to an enhancement in cognitive performance [60].

In addition to suppressed LTP, hippocampi from Ts65Dn mice show enhanced long-term depression (LTD) in response to sustained activation of excitatory synapses [45, 78]. This latter effect can be reversed with the uncompetitive NMDA receptor antagonist memantine [78], which also improves the cognitive performance of Ts65Dn mice [79–81]. These results draw a clear link between altered synaptic plasticity in the hippocampus and cognitive performance in the Ts65Dn mouse model of Down syndrome.

### 3.3. Synaptic-Plasticity-Related Signaling Cascades

Changes in intracellular calcium concentrations are important triggers for many intracellular signaling cascades, including those underlying LTP and LTD [82].
For example, the presence of a calcium chelator that buffers intracellular calcium levels in postsynaptic neurons prevents the induction of LTP [83], consistent with the hypothesis that a postsynaptic rise in intracellular calcium levels is necessary for LTP [8]. When the postsynaptic neuron is strongly depolarized, the magnesium block of NMDA channels is lifted, providing the main (but not exclusive) mechanism for calcium entry into the postsynaptic cell. Elevated intracellular calcium levels trigger a cascade of intracellular messengers that ultimately lead to the induction and maintenance of synaptic plasticity (both LTP and LTD, depending on the kinetics). An excellent overview of this process can be found in several reviews [82, 84], and only key components known to be affected by trisomy (Figure 2) will be discussed here.

Figure 2 Alterations in intracellular signaling cascades affecting postsynaptic AMPAR response in Ts65Dn hippocampus. Green indicates elevated levels/activity at baseline, while red indicates diminished activity. During LTP (right), enhanced CaMKII and GluR1 subunit phosphorylation in Ts65Dn synapses may result in a saturated condition incapable of additional potentiation. Reduced ERK activity may reduce migration of new AMPARs into the PSD. In LTD, overexpression of RCAN1 should reduce the activity of PP2B (calcineurin), resulting in reduced internalization of AMPA receptors and a potential reduction of NMDAR mean open times. Rescue of LTD in Ts65Dn mice by NMDAR antagonists suggests enhanced NMDAR activity contributes to altered LTD through as yet unidentified mechanisms.

#### 3.3.1. CaMKII

Activation of postsynaptic NMDA receptors (NMDARs) concomitant with the depolarization of the postsynaptic membrane is sufficient to relieve the magnesium block of NMDA channels, leading to an influx of calcium into the intracellular postsynaptic space.
In the case of LTP, the rise of intracellular calcium leads to the activation of calcium/calmodulin-dependent protein kinase II (CaMKII), a necessary step for initiating NMDAR-dependent LTP [82]. Blocking CaMKII prevents induction of LTP [82, 85, 86], while constitutively active forms can induce LTP [87]. During all phases of LTP (induction, early, and late), levels of phosphorylated CaMKII are increased in the hippocampus [88]. CaMKII phosphorylated at threonine 286 (Thr286) can become constitutively active, providing a potential switch for initiating and then maintaining potentiation [89]. Alternatively, phosphorylation of Thr305/306 can inhibit the expression of LTP by interfering with the binding of calcium/calmodulin [90, 91]. Indeed, cognitive deficits associated with Angelman syndrome were reversed in a mouse model of the disorder by reducing the levels of CaMKII phosphorylated at Thr305/306 [92]. Thus, depending on the site of phosphorylation, CaMKII can facilitate or suppress initiation and maintenance of LTP. In Ts65Dn mice, we found that baseline levels of CaMKII phosphorylated at Thr286 are elevated in the hippocampus [93]. Excessive basal phosphorylation of this CaMKII site, leading to constitutive activation, could leave the DS-modeling trisomic network in a saturated state unable to shift to more potentiated levels.

One of the substrates targeted by CaMKII during the initial expression of LTP is the serine 831 residue on GluR1 subunits of AMPA receptors [94, 95]. This phosphorylation leads to an increase in conductance of the AMPA channel [96], providing a rapid mechanism for enhancing glutamatergic synaptic strength. In Ts65Dn mice, we find that baseline levels of phosphorylated serine 831 in synaptically located GluR1 receptors are elevated [93]. This apparent increase in AMPA channel conductance appears not to have any significant effect on baseline excitatory synaptic transmission, which is normal in the Ts65Dn hippocampus [45, 75, 76, 93].
However, it could also partially occlude the initiation of LTP in these mice by leaving Ts65Dn hippocampal excitatory synapses with fewer AMPA channels available for potentiation. This finding would be consistent with our observation of increased CaMKII in the Ts65Dn hippocampus noted above [93] and the suggestion that some components of the LTP network are in an apparently saturated state in these mice.

### 3.4. PKA, RCAN1, Calcineurin

Protein kinase A (PKA) also plays a critical role in establishing LTP. In particular, evidence suggests that it is involved in initiating the protein synthesis required for the late phase of LTP [97, 98]. Blocking PKA activity suppresses the late phase of LTP (lasting beyond 3 hours) while leaving the early phase of LTP (less than 3 hours) unaffected [99, 100]. Transgenic mice in which PKA activity is reduced have significantly decreased late-phase LTP in CA1 but normal early LTP, and perform poorly on tasks requiring long- but not short-term memory formation [101].

PKA also plays a role in LTD, where its substrates, such as GluR1, show increased dephosphorylation following induction [102, 103]. Dephosphorylation of GluR1 subunits should reduce the conductance levels of affected AMPA receptors [96], resulting in a reduction of synaptic strength. PKA also enhances the activity of RCAN1 [104], an inhibitor of calcineurin, which contributes to AMPA receptor internalization [105] and reductions in NMDA receptor mean open time [106]. In Ts65Dn mice, we found that PKA activity is reduced in the hippocampus [93], which should adversely affect LTP by reducing the protein expression required for the late phase. With respect to LTD, reduced PKA activity would result in more AMPA receptors remaining in a high-conductance state and less facilitation of RCAN1 activity. This latter effect is offset, however, by the overexpression of the gene encoding RCAN1 in DS and Ts65Dn mice [4].
How these factors contribute to the enhancement of LTD in Ts65Dn hippocampi [45, 78] remains to be determined.

### 3.5. Extracellular Signal-Regulated Kinase (ERK)

In addition to GluR1 subunits, both CaMKII and PKA converge on another common effector, the mitogen-activated protein kinase (MAPK/ERK), which is associated with a host of synaptic-plasticity-related cellular processes [107]. In the case of hippocampal LTP, there is a rapid increase in the amount of phosphorylated ERK following induction [108], and blocking ERK activation prevents expression of LTP [109]. Cultured hippocampal neurons undergo phosphorylated-ERK-dependent spine generation following LTP conditioning stimuli, implicating this pathway in spine formation [110]. Additionally, it is believed that lateral diffusion of extrasynaptic AMPA receptors containing GluR1 subunits into the postsynaptic density (PSD) is a major contributor to LTP expression [111]. This process is assisted by Ras/ERK phosphorylation of stargazin on extrasynaptic AMPA receptors, enabling them to be structurally secured at the synapse to PSD95 [112]. In the Ts65Dn hippocampus, ERK phosphorylation is decreased [93], suggesting decreased activity. This would be expected to adversely affect the insertion of new AMPA receptors into the PSD as well as the morphological restructuring of synaptic spines observed after LTP in normal mice [113, 114].

### 3.6. BDNF Pathway

Brain-derived neurotrophic factor (BDNF) contributes to LTP by stimulating protein synthesis. By activating postsynaptic TrkB receptors, BDNF stimulates the PI3K pathway [115], which can initiate translation through the mammalian target of rapamycin (mTOR), thereby enhancing synthesis of proteins such as CaMKIIα, GluR1, and NMDA receptor subunit 1 [116]. In Ts65Dn mice, we found that PI3K phosphorylation failed to increase following an LTP induction protocol, suggesting this pathway is perturbed by trisomy [93].
Consistent with this notion, in DS individuals, BDNF blood plasma levels are approximately 5 times higher than in age-matched controls [117]. As BDNF readily crosses the blood-brain barrier [118], these levels likely reflect those present in the CNS as well.

Examination of BDNF levels in DS mouse models presents a complex picture. In Ts65Dn mice, levels of BDNF in the frontal cortex are diminished [119]. In the hippocampus, both no difference [81] and a reduction [120] compared to control have been reported. In the latter case, the reduction in BDNF levels was associated with decreased neurogenesis and was reversible through treatment with fluoxetine [120]. In the Ts1Cje mouse model of DS, BDNF is overexpressed in the hippocampus, particularly in the dentate gyrus and CA1 regions and in the dendrites of dissociated hippocampal neurons grown in culture [121]. Increased BDNF levels in Ts1Cje hippocampi were associated with greater levels of phosphorylated Akt-mTOR and expression of GluR1 protein, which could not be further enhanced with exogenous supplemental BDNF, suggesting that this plasticity-related pathway is saturated in these mice, preventing further contributions to LTP [121].

The discrepancies between the Ts65Dn and Ts1Cje BDNF observations may reflect how BDNF expression is distributed in these structures, elevated in some subregions or subcellular compartments while diminished in others, resulting in an increased functional effect despite reduced global levels. Conversely, the differences in observed BDNF levels could be related to the different numbers of genes overexpressed in these two mouse lines [53, 65, 122] or, as mice of differing age groups were used in the studies, may reflect differences in expression levels as a function of age. Further investigation is necessary to fully reconcile these observations.
However, the observation that rapamycin has a restorative effect on phosphorylated Akt-mTOR levels in Ts1Cje mice suggests a potential therapeutic mechanism for improving cognition in DS individuals [121], possibly by normalizing a pathway involved in synaptic plasticity.

### 3.7. GABAB-GIRK2 Attenuation of Synaptic Plasticity

As mentioned above, postsynaptic calcium influx is critical for LTP and LTD in the hippocampus. This initiating step relies heavily upon depolarization of the postsynaptic membrane to relieve the voltage-dependent magnesium block of NMDA channels. Any phenomenon that reduces the ability of the postsynaptic membrane to depolarize would thus be expected to adversely affect plasticity. Through its coupling to GABAB receptors, the type 2 G-protein-activated inward rectifying potassium (GIRK2) channel may act to dampen the expression of LTP in the Ts65Dn hippocampus through a shunting mechanism.

GIRK2 is encoded by the gene Kcnj6, which is located on the chromosomal segment triplicated in DS and Ts65Dn mice, and, consequently, elevated expression levels have been found in the Ts65Dn hippocampus [50, 123]. At a cellular level, overexpression of GIRK2 leads to a more hyperpolarized resting potential in cultured hippocampal neurons [124] and CA1 pyramidal neurons in vitro [125]. Selectively reducing the expression level of GIRK2, by crossing euploid and Ts65Dn mice with mice heterozygous for GIRK2 (GIRK2+/-), resulted in a gene dosage-dependent change in the resting membrane potential and facilitation of LTP in GIRK2 knockout mice [50]. Selective overexpression of GIRK2 alone in mice results in cognitive deficits, reduced depotentiation (a functional reversal of potentiation at a synapse), and enhanced LTD [126]. These effects on LTP and LTD could be mediated through GABAB receptors which, in pyramidal neurons, are in closest proximity to GIRK2-containing potassium channels near glutamatergic synapses on dendritic spines [127].
## 3.1. Morphological Changes

Mouse models of DS, including the Ts65Dn strain, show detrimental changes in neuronal and dendritic morphology similar to those observed in humans. The neocortex of Ts65Dn mice contains fewer excitatory neurons but an increased number of a subset of inhibitory neurons relative to euploid controls, a phenotype that was reversed by normalizing the expression levels of Olig1/2 [70]. Additionally, regions in both the neocortex and hippocampus have decreased spine densities with larger spine volumes [71]. In the dentate granule cells of the hippocampus, there is a shift of inhibitory synaptic connections away from the dendritic shafts and onto the spine necks [71]. Such a change would be expected to increase the efficacy of inhibitory synaptic transmission given the significantly reduced volume of the spine neck compared to the shaft. At a finer resolution, symmetric synapses (presumed inhibitory) have greater apposition lengths in Ts65Dn mice while asymmetric synapses are unaltered [72], again supporting a shift towards excess inhibition in these mice. Similar but less severe changes are observed in Ts1Cje mice [67].
Beyond suppressing excitatory synaptic activity, the altered spine morphology and shift towards excess inhibition in trisomy would be expected to suppress plasticity-related signaling cascades that frequently rely on depolarization-mediated calcium influx into postsynaptic structural domains.

## 3.2. Functional Changes

Synaptic plasticity in the hippocampus is often investigated in the context of long-term potentiation (LTP), in which high-frequency activation of specific inputs in the hippocampus results in a long-lasting potentiation of synaptic responses along the excited afferent pathway. First described in the anesthetized rabbit [73], this phenomenon is believed to be a fundamental mechanism underlying memory formation [8, 74]. LTP is suppressed in Ts65Dn mice (depicted in Figure 1 for the CA1 region of the hippocampus) [44, 75, 76] and Ts1Cje mice [66, 67], in mice trisomic for Hsa21-syntenic regions of Mmu16 and Mmu17 [77], and in those carrying an almost complete copy of Hsa21 [46, 48], but not in Ts1Rhr mice [69] (however, see [68]).

Figure 1: Depiction of altered CA1 hippocampal plasticity in Ts65Dn mice. (a) Diagram indicating electrode placement for stimulating Schaffer collaterals (SC) arising from CA3 and recording the evoked field excitatory postsynaptic potential (EPSP) in CA1. Traces to the right indicate the typical change in evoked responses (red) following LTP and LTD. (b) Simulated data depicting suppressed LTP in Ts65Dn mice: after high-frequency stimulation of SC (at arrowhead), the field EPSP increases and remains enhanced in euploid mice but fails to remain elevated in Ts65Dn mice. (c) Simulated LTD data depicting exaggerated depression of evoked EPSPs following low-frequency stimulation of SC in Ts65Dn mice. (Traces in (b) and (c) based on data from [44, 45].)

As outlined above, structural changes suggest that inhibition is exaggerated in the trisomic hippocampus.
Consistent with this idea is the observation that LTP in the dentate gyrus and CA1 regions of Ts65Dn hippocampal slices, induced by high-frequency stimulation and theta-burst protocols, respectively, can be rescued by the GABAA antagonist picrotoxin [75, 76]. Blockade of GABAA receptors in hippocampal slices from Ts1Cje and Ts1Rhr mice likewise rescues LTP deficits in the dentate gyrus in these DS mouse models [67, 68]. A similar treatment in Ts65Dn mice leads to an enhancement in cognitive performance [60].

In addition to suppressed LTP, hippocampi from Ts65Dn mice show enhanced long-term depression (LTD) in response to sustained activation of excitatory synapses [45, 78]. This latter effect can be reversed with the uncompetitive NMDA receptor antagonist memantine [78], a treatment that also improves the cognitive performance of Ts65Dn mice [79–81]. These results draw a clear link between altered synaptic plasticity in the hippocampus and cognitive performance in the Ts65Dn mouse model of Down syndrome.

## 3.3. Synaptic-Plasticity-Related Signaling Cascades

Changes in intracellular calcium concentration are important triggers for many intracellular signaling cascades, including those underlying LTP and LTD [82]. For example, the presence of a calcium chelator that buffers intracellular calcium levels in postsynaptic neurons prevents the induction of LTP [83], consistent with the hypothesis that a postsynaptic rise in intracellular calcium is necessary for LTP [8]. When the postsynaptic membrane is strongly depolarized, the magnesium block of NMDA channels is lifted, providing the main (but not exclusive) mechanism for calcium entry into the postsynaptic cell. Elevated intracellular calcium levels trigger a cascade of intracellular messengers that ultimately lead to the induction and maintenance of synaptic plasticity (both LTP and LTD, depending on the kinetics).
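The statement that the outcome (LTP versus LTD) depends on the postsynaptic calcium signal is often formalized as a calcium-control rule: moderate calcium elevations drive depression, while larger ones drive potentiation. The following is a minimal illustrative sketch of such a rule; the thresholds and rate are assumptions for illustration, not values taken from the studies cited here.

```python
# Toy sketch of a calcium-control plasticity rule: the sign of the synaptic
# weight change depends only on the postsynaptic calcium level.
# THETA_D and THETA_P are illustrative thresholds, not measured values.
THETA_D = 0.3   # moderate calcium above this -> depression (LTD)
THETA_P = 0.6   # high calcium above this -> potentiation (LTP)

def weight_change(calcium: float, rate: float = 0.1) -> float:
    """Return the synaptic weight update for a given calcium level."""
    if calcium >= THETA_P:      # strong NMDAR-mediated influx: LTP
        return +rate
    if calcium >= THETA_D:      # moderate influx: LTD
        return -rate
    return 0.0                  # sub-threshold: no lasting change

# High-frequency stimulation -> large calcium transient -> potentiation;
# low-frequency stimulation -> moderate transient -> depression.
print(weight_change(0.8))   # LTP regime
print(weight_change(0.4))   # LTD regime
print(weight_change(0.1))   # no lasting change
```

In this picture, a network whose kinases are already near saturation (as argued below for CaMKII in Ts65Dn mice) has little headroom left in the potentiation branch of the rule.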
An excellent overview of this process can be found in several reviews [82, 84], and only key components known to be affected by trisomy (Figure 2) will be discussed here.

Figure 2: Alterations in intracellular signaling cascades affecting the postsynaptic AMPAR response in Ts65Dn hippocampus. Green indicates elevated levels/activity at baseline, while red indicates diminished activity. During LTP (right), enhanced CaMKII and GluR1 subunit phosphorylation in Ts65Dn synapses may result in a saturated condition incapable of additional potentiation. Reduced ERK activity may reduce migration of new AMPARs into the PSD. In LTD, overexpression of RCAN1 should reduce the activity of PP2B (calcineurin), resulting in reduced internalization of AMPA receptors and a potential reduction of NMDAR mean open times. Rescue of LTD in Ts65Dn mice by NMDAR antagonists suggests that enhanced NMDAR activity contributes to altered LTD through as yet unidentified mechanisms.

### 3.3.1. CaMKII

Activation of postsynaptic NMDA receptors (NMDARs) concomitant with depolarization of the postsynaptic membrane is sufficient to relieve the magnesium block of NMDA channels, leading to an influx of calcium into the intracellular postsynaptic space. In the case of LTP, the rise of intracellular calcium leads to the activation of calcium/calmodulin-dependent protein kinase II (CaMKII), a necessary step for initiating NMDAR-dependent LTP [82]. Blocking CaMKII prevents induction of LTP [82, 85, 86], while constitutively active forms can induce LTP [87]. During all phases of LTP (induction, early, and late), levels of phosphorylated CaMKII are increased in the hippocampus [88]. CaMKII phosphorylated at threonine 286 (Thr286) can become constitutively active, providing a potential switch for initiating and then maintaining potentiation [89]. Alternatively, phosphorylation at Thr305/306 can inhibit the expression of LTP by interfering with the binding of calcium/calmodulin [90, 91].
Indeed, cognitive deficits associated with Angelman syndrome were reversed in a mouse model of the disorder by reducing the levels of CaMKII phosphorylated at Thr305/306 [92]. Thus, depending on the site of phosphorylation, CaMKII can facilitate or suppress the initiation and maintenance of LTP. In Ts65Dn mice, we found that baseline levels of CaMKII phosphorylated at Thr286 are elevated in the hippocampus [93]. Excessive basal phosphorylation of this site, leading to constitutive activation, could leave the trisomic network modeling DS in a saturated state unable to shift to more potentiated levels.

One of the substrates targeted by CaMKII during the initial expression of LTP is the serine 831 residue on GluR1 subunits of AMPA receptors [94, 95]. This phosphorylation leads to an increase in conductance of the AMPA channel [96], providing a rapid mechanism for enhancing glutamatergic synaptic strength. In Ts65Dn mice, we find that baseline levels of phosphorylated serine 831 in synaptically located GluR1 receptors are elevated [93]. This apparent increase in AMPA channel conductance appears not to have any significant effect on baseline excitatory synaptic transmission, which is normal in the Ts65Dn hippocampus [45, 75, 76, 93]. However, it could also partially occlude the initiation of LTP in these mice by leaving Ts65Dn hippocampal excitatory synapses with fewer AMPA channels available for potentiation. This finding would be consistent with our observation of increased CaMKII in the Ts65Dn hippocampus noted above [93] and the suggestion that some components of the LTP network are in an apparently saturated state in these mice.

## 3.4. PKA, RCAN1, Calcineurin

Protein kinase A (PKA) also plays a critical role in establishing LTP. In particular, evidence suggests that it is involved in initiating the protein synthesis required for the late phase of LTP [97, 98]. Blocking PKA activity suppresses the late phase of LTP (lasting beyond 3 hours) while leaving the early phase of LTP (less than 3 hours) unaffected [99, 100]. Transgenic mice in which PKA activity is reduced have significantly decreased late-phase LTP in CA1 but normal early LTP, and perform poorly on tasks requiring long- but not short-term memory formation [101].

PKA plays a role in LTD, where its substrates, such as GluR1, show increased dephosphorylation following induction [102, 103]. Dephosphorylation of GluR1 subunits should reduce the conductance levels of affected AMPA receptors [96], resulting in a reduction of synaptic strength. PKA also enhances the activity of RCAN1 [104], an inhibitor of calcineurin, which contributes to AMPA receptor internalization [105] and reductions in NMDA receptor mean open time [106]. In Ts65Dn mice, we found that PKA activity is reduced in the hippocampus [93], which should adversely affect LTP by reducing the protein expression required for the late phase. With respect to LTD, reduced PKA activity would result in more AMPA receptors remaining in a high-conductance state and less facilitation of RCAN1 activity. This latter effect is offset, however, by the overexpression of the gene encoding RCAN1 in DS and Ts65Dn mice [4].
How these factors contribute to the enhancement of LTD in Ts65Dn hippocampi [45, 78] remains to be determined.

## 3.5. Extracellular Signal-Regulated Kinase (ERK)

In addition to GluR1 subunits, both CaMKII and PKA converge on another common effector, the mitogen-activated protein kinase (MAPK/ERK), which is associated with a host of synaptic-plasticity-related cellular processes [107]. In the case of hippocampal LTP, there is a rapid increase in the amount of phosphorylated ERK following induction [108], and blocking ERK activation prevents expression of LTP [109]. Cultured hippocampal neurons undergo phosphorylated-ERK-dependent spine generation following LTP conditioning stimuli, implicating this pathway in spine formation [110]. Additionally, it is believed that lateral diffusion of extrasynaptic AMPA receptors containing GluR1 subunits into the postsynaptic density (PSD) is a major contributor to LTP expression [111]. This process is assisted by Ras/ERK phosphorylation of stargazin on extrasynaptic AMPA receptors, enabling them to be structurally secured at the synapse to PSD95 [112]. In the Ts65Dn hippocampus, ERK phosphorylation is decreased [93], suggesting decreased activity. This would be expected to adversely affect the insertion of new AMPA receptors into the PSD as well as the morphological restructuring of synaptic spines observed after LTP in normal mice [113, 114].

## 3.6. BDNF Pathway

Brain-derived neurotrophic factor (BDNF) contributes to LTP by stimulating protein synthesis. By activating postsynaptic TrkB receptors, BDNF stimulates the PI3K pathway [115], which can initiate translation through the mammalian target of rapamycin (mTOR), thereby enhancing synthesis of proteins such as CaMKIIα, GluR1, and NMDA receptor subunit 1 [116]. In Ts65Dn mice, we found that PI3K phosphorylation failed to increase following an LTP induction protocol, suggesting this pathway is perturbed by trisomy [93].
Consistent with this notion, BDNF blood plasma levels in DS individuals are approximately 5 times higher than in age-matched controls [117]. As BDNF readily crosses the blood-brain barrier [118], these levels likely reflect those present in the CNS as well.

Examination of BDNF levels in DS mouse models presents a complex picture. In Ts65Dn mice, levels of BDNF in the frontal cortex are diminished [119]. In the hippocampus, both no difference [81] and a reduction [120] compared to controls have been reported. In the latter case, the reduction in BDNF levels was associated with decreased neurogenesis and was reversible through treatment with fluoxetine [120]. In the Ts1Cje mouse model of DS, BDNF is overexpressed in the hippocampus, particularly in the dentate gyrus and CA1 regions and in the dendrites of dissociated hippocampal neurons grown in culture [121]. Increased BDNF levels in Ts1Cje hippocampi were associated with greater levels of phosphorylated Akt-mTOR and expression of GluR1 protein, which could not be further enhanced with exogenous supplemental BDNF, suggesting that this plasticity-related pathway is saturated in these mice, preventing further contributions to LTP [121].

The discrepancies between BDNF observations in Ts65Dn and Ts1Cje mice may reflect how BDNF expression is distributed in these structures, elevated in some subregions or subcellular compartments while diminished in others, resulting in an increased functional effect despite reduced global levels. Alternatively, the differences in observed BDNF levels could be related to the different numbers of genes overexpressed in these two mouse lines [53, 65, 122] or, as mice of differing age groups were used in the studies, may reflect differences in expression levels as a function of age. Further investigation is necessary to reconcile these observations.
However, the observation that rapamycin has a restorative effect on phosphorylated Akt-mTOR levels in Ts1Cje mice suggests a potential therapeutic mechanism for improving cognition in DS individuals [121], possibly by normalizing a pathway involved in synaptic plasticity.

## 3.7. GABAB-GIRK2 Attenuation of Synaptic Plasticity

As mentioned above, postsynaptic calcium influx is critical for LTP and LTD in the hippocampus. This initiating step relies heavily upon depolarization of the postsynaptic membrane to relieve the voltage-dependent magnesium block of NMDA channels. Any phenomenon that reduces the ability of the postsynaptic membrane to depolarize would thus be expected to adversely affect plasticity. Through its coupling to GABAB receptors, the type 2 G-protein-activated inward rectifying potassium (GIRK2) channel may act to dampen the expression of LTP in the Ts65Dn hippocampus through a shunting mechanism.

GIRK2 is encoded by the gene Kcnj6, which is located on the chromosomal segment triplicated in DS and Ts65Dn mice, and, consequently, elevated expression levels have been found in the Ts65Dn hippocampus [50, 123]. At a cellular level, overexpression of GIRK2 leads to a more hyperpolarized resting potential in cultured hippocampal neurons [124] and CA1 pyramidal neurons in vitro [125]. Selectively reducing the expression level of GIRK2, by crossing euploid and Ts65Dn mice with mice heterozygous for GIRK2 (GIRK2+/-), resulted in a gene dosage-dependent change in the resting membrane potential and facilitation of LTP in GIRK2 knockout mice [50]. Selective overexpression of GIRK2 alone in mice results in cognitive deficits, reduced depotentiation (a functional reversal of potentiation at a synapse), and enhanced LTD [126].

These effects on LTP and LTD could be mediated through GABAB receptors, which, in pyramidal neurons, are in closest proximity to GIRK2-containing potassium channels near glutamatergic synapses on dendritic spines [127].
GABAB receptors are functionally linked to GIRK channels, and, indeed, whole-cell GABAB-mediated potassium currents are exaggerated in Ts65Dn hippocampal neurons [50, 124, 128]. In CA1, these exaggerated currents have a greater functional impact on the distal dendrites of pyramidal neurons than on those located more proximally [128]. A similar enhancement of GABAB-mediated currents is also found in the dentate gyrus, where the presynaptic release probability of GABA is increased [129]. Thus, GIRK channels, activated by GABAB and other G-protein-coupled receptors, appear to act as a brake on synaptic plasticity in the Ts65Dn hippocampus.

## 4. Potential Impact of Altered Plasticity on Hippocampal Processing

The hippocampus receives major inputs from the entorhinal cortex (EC), which converge on CA1 pyramidal neurons through two main pathways: the perforant pathway (PP) and the temporoammonic (TA) pathway [130]. The PP passes through the dentate gyrus to pyramidal neurons in CA3 before impinging upon the relatively proximal dendrites of CA1 pyramidal neurons in stratum radiatum (SR). Conversely, inputs to CA1 from the TA pathway target the distal dendrites located in stratum lacunosum-moleculare (SLM). In the normal hippocampus, frequency-based synaptic plasticity at the CA3-CA1 synapse, coupled with a feed-forward inhibition loop from stratum oriens-alveus interneurons that suppresses inputs to distal CA1 dendrites, enables segregation of information flow through these two pathways [131]. During high-frequency synaptic activity, the CA3-CA1 synapse would be expected to undergo LTP, increasing the excitatory drive to CA1 pyramidal neurons and, consequently, enhancing suppression of inputs to distal CA1 dendrites by the feed-forward inhibition loop (Figure 3(a)). Thus, TA inputs that target distal CA1 dendrites would be suppressed, and information flow through the CA3-CA1 pathway enhanced, during high-frequency events.
Conversely, during low-frequency synaptic activity, the CA3-CA1 synapse would be expected to undergo LTD and become less effective. Inhibition of distal CA1 synapses would then be decreased, and information flow through the TA pathway would likely be enhanced (Figure 3(b)). Diminished LTP resulting from trisomy would then interfere with this frequency-based segregation of information flow through the hippocampus. Without LTP, feed-forward inhibition would fail to suppress information flow through the TA pathway, causing inputs from the two pathways to become superimposed upon and interfere with each other (Figure 3(c)). In contrast, the flow of information during low-frequency signaling would likely remain intact, or potentially be facilitated, since enhanced LTD at CA3-CA1 would prevent interference from this pathway (Figure 3(d)).

Figure 3: Potential impact of altered synaptic plasticity on hippocampal processing in the Ts65Dn mouse model of Down syndrome. Schematic of the two main pathways through the hippocampus arriving from the entorhinal cortex: the temporoammonic (TA) pathway, direct to CA1 distal dendrites, and the trisynaptic pathway from DG through CA3 to proximal CA1 dendrites. LTP and LTD are proposed to minimize interference between the two pathways [50, 131]. (a) In euploid hippocampi, high-frequency inputs induce LTP in CA1, resulting in enhanced suppression of inputs from TA by feed-forward inhibition arising from interneurons in stratum oriens. (b) Low-frequency inputs depress the trisynaptic pathway, releasing distal CA1 dendrites from feed-forward inhibition and allowing information to flow through the TA pathway. (c) In Ts65Dn hippocampi, aberrant LTP in CA1 results in diminished feed-forward inhibition during high-frequency activity, allowing TA inputs to become superimposed on those flowing through the trisynaptic pathway. (d) Enhanced LTD would be expected to facilitate the flow of low-frequency information through the direct TA pathway in Ts65Dn mice.
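The routing argument above can be caricatured as a toy model: the CA3→CA1 weight potentiates at high input frequency (only when LTP is intact), feed-forward inhibition of distal CA1 dendrites scales with that weight, and TA throughput is whatever survives the inhibition. All quantities, thresholds, and scale factors below are illustrative assumptions, not measurements.

```python
# Toy sketch (illustrative only) of frequency-based routing through CA1:
# high-frequency input potentiates CA3->CA1 (LTP) and low-frequency input
# depresses it (LTD); feed-forward inhibition of distal CA1 dendrites scales
# with CA3->CA1 drive and gates the direct temporoammonic (TA) input.

def route(freq_hz: float, ltp_intact: bool = True) -> dict:
    baseline_w = 1.0
    if freq_hz >= 50:                                    # high-frequency event
        w = baseline_w * (1.5 if ltp_intact else 1.0)    # LTP absent in Ts65Dn
    else:                                                # low-frequency event
        w = baseline_w * 0.5                             # LTD (intact in Ts65Dn)
    inhibition = 0.5 * w             # feed-forward inhibition of distal dendrites
    ta_throughput = max(0.0, 1.0 - inhibition)
    return {"ca3_ca1_weight": w, "ta_throughput": ta_throughput}

euploid_high = route(100, ltp_intact=True)   # TA input strongly suppressed
ts65dn_high = route(100, ltp_intact=False)   # TA input leaks through: interference
low_freq = route(1)                          # TA pathway open in both genotypes
```

Under these assumptions, the trisomic network passes more TA input than the euploid network during high-frequency events, which is the interference scenario depicted in Figure 3(c).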
Electroencephalogram (EEG) recordings from DS individuals suggest that such a preferential suppression of high-frequency information flow may result from overexpression of Hsa21 genes. Compared to controls, DS individuals have increased power at low EEG frequencies and a corresponding reduction in power at higher frequencies [132]. Similar observations have been reported in Ts65Dn mice [133]. While it is not clear that hippocampal activity is accurately reflected in EEG recordings, the abnormal EEG findings are consistent with aberrant processing of high-frequency information in DS individuals and Ts65Dn mice.

## 5. Cognitive Therapies Targeting Plasticity

A number of studies using the Ts65Dn mouse model of DS have examined the possibility of pharmacologically reversing cognitive deficits (reviewed in [122]). Of particular note with respect to hippocampal plasticity are those targeting the excess GABAergic inhibitory tone or the NMDA receptors whose activation, as outlined above, is a critical step in initiating LTP and LTD.

Application of the GABAA receptor antagonist picrotoxin to hippocampal slices from Ts65Dn mice rescues LTP in the dentate gyrus [75] and CA1 region [76]. Chronic administration of low doses of picrotoxin or other GABAA receptor antagonists (pentylenetetrazole or bilobalide) improves cognition in Ts65Dn mice, suggesting that the efficacy of this class of pharmacological agents could be tested for reversing impaired cognition in DS [60]. However, as excessive blockade of GABAA receptors can induce seizures, translating these findings to humans requires great caution. Careful screening of similar drugs, or design of pharmacological compounds with similar blocking capabilities but a reduced propensity for inducing seizures, may prove to yield effective treatments. Currently, a small molecule targeting GABAA receptors developed by F.
Hoffmann-La Roche Ltd (pharmaceutical pipeline molecule RG1662, http://www.roche.com/roche_pharma_pipeline.htm) is in clinical trials with the goal of safely improving cognition in DS individuals. Braudeau et al. [134, 135] are currently investigating a similarly promising inhibitor that targets the alpha-5 subunit of GABAA receptors.

Another pharmacological avenue targets the aberrant NMDA receptor-mediated signaling apparently present in Ts65Dn mice. The uncompetitive NMDA receptor antagonist memantine improves the cognitive performance of Ts65Dn mice [79] and normalizes hippocampal LTD [78]. Memantine is an FDA-approved and fairly well-tolerated drug already in use for treating dementia in Alzheimer disease. Clinical trials assessing its safety, tolerability, and efficacy in alleviating DS cognitive phenotypes are currently underway [57].

In addition to pharmacological approaches, behavioral therapies have been shown to improve cognition in Ts65Dn mice. When housed in enriched environments (larger cages with novel objects such as toys and running wheels), trisomic mice performed as well as euploid littermates in the Morris water maze and had normalized hippocampal LTP [136, 137]. Interestingly, environmental enrichment was effective for trisomic females but not trisomic males, potentially due to social and physical factors associated with the new environments [138].

The benefits of environmental enrichment appear to be linked in part to regulation of excess inhibition in the neocortex and hippocampus. Release of GABA from synaptosomes isolated from the hippocampus and neocortex is elevated in Ts65Dn mice, an effect that is reversed by environmental enrichment [137]. In adult rats with amblyopia (induced via monocular deprivation during the critical period), environmental enrichment reversed visual deficits and reduced GABA levels in the visual cortex contralateral to the deprived eye while increasing plasticity [137].
It thus appears possible to behaviorally regulate aberrant levels of inhibition in trisomic mice, without pharmacological intervention, and achieve similar behavioral outcomes without the concerns associated with the nonspecific actions or adverse side-effects of drugs. Deficits in neurogenesis in the dentate gyrus and forebrain subventricular zone of Ts65Dn mice are also reversed following environmental enrichment [139] and may add an additional therapeutic layer to the beneficial effect of a decrease in inhibitory tone. A structural benefit of environmental enrichment appears to be lacking, however, as, unlike in euploid mice, this treatment has failed to significantly increase dendritic branching and spine density in Ts65Dn mice [140].

Early behavioral intervention techniques designed to improve development in DS children show great promise [141, 142], suggesting that this comparatively easily translatable therapeutic approach, either used alone or in combination with pharmacological agents, could potentially increase cognitive capacities in DS individuals.

## 6. Conclusion

Synaptic plasticity is believed to be the process central to learning and memory. This belief is bolstered by experiments in which drugs that normalize aberrant plasticity in hippocampal slices isolated from mouse models of DS also confer improvements in cognition in intact adult mice. The initiation and maintenance of plastic changes involve structural and compositional modifications of synapses that depend on intracellular signaling cascades. The reduced excitatory neuronal densities and deficits in dendritic morphology present in individuals with DS diminish the capacity of their neural networks in general to undergo neuroplastic adaptations. Combined with the deficits in signaling pathways reported in Ts65Dn mice, the evidence strongly suggests that synaptic plasticity is severely impaired in DS neural networks.
By understanding how plasticity is perturbed, we can design therapies to reverse these phenotypes and ultimately improve cognition in DS individuals. --- *Source: 101542-2012-07-12.xml*
2012
# Crosscorrelation of Earthquake Data Using Stationary Phase Evaluation: Insight into Reflection Structures of Oceanic Crust Surface in the Nankai Trough

**Authors:** Shohei Minato; Takeshi Tsuji; Toshifumi Matsuoka; Koichiro Obana
**Journal:** International Journal of Geophysics (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101545

---

## Abstract

Seismic interferometry (SI) has recently been employed to retrieve the reflection response from natural earthquakes. We perform an experimental study applying SI to Ocean Bottom Seismogram (OBS) records from the Nankai Trough, southwest Japan, in order to reveal relatively shallow geological boundaries, including the surface of the oceanic crust. Although the local earthquakes with short raypaths that we use to retrieve the reflection response are expected to contain the higher-frequency components needed to detect fine-scale structures by SI, they cannot be assumed to be plane waves and are inhomogeneously distributed. Since an inhomogeneous source distribution violates the assumptions of SI, conventional processing yields deteriorated subsurface images. Here we adopt raypath calculation for stationary phase evaluation of SI in order to overcome this problem. To find the stationary phase, we estimate the raypaths of two reflections: (1) the sea-surface P-wave reflection and (2) the sea-surface multiple P-wave reflection. From the estimated raypaths, we choose the crosscorrelation traces that are expected to produce the objective reflections, considering the stationary phase points. We use numerical-modeling data and field data with 6 localized earthquakes and show that choosing the crosscorrelation traces by stationary phase evaluation improves the quality of the reflections from the oceanic crust surface.

---

## Body

## 1. Introduction

Among various seismic exploration methods using the body waves of natural earthquakes [1, 2], seismic interferometry (SI) has recently been employed to retrieve the reflection response. Although the receiver function method [1] has been broadly used to image the Moho and mantle discontinuities, one study claims that retrieving and migrating the reflection response using SI is superior to the receiver function method [3].

SI retrieves the Green's function between receivers by crosscorrelating wavefields [4, 5]. The theory requires physical sources homogeneously distributed along a closed surface surrounding the receivers [4]. There are several successful applications of SI to natural earthquakes. Abe et al. [3] and Tonegawa et al. [6] retrieved the crustal reflection response in central Japan using P coda and S coda, respectively. Ruigrok et al. [7] used P waves to retrieve the reflection response using the Laramie array in the USA. Abe et al. [3] further compared migrated images obtained by SI with those obtained by receiver function analysis.

These applications focused on teleseismic wavefields, in which the epicentral distance is much longer than the length of the receiver array; consequently, the wavefields can be assumed to be plane waves. Ruigrok et al. [7] replaced the integral over source position in SI with one over ray parameter. Such teleseismic events are well suited to SI processing since most earthquakes are generated at the numerous plate boundaries of the world, and consequently teleseismic records contain earthquakes propagating from various directions. This makes it reasonable to assume that their source distribution is homogeneous.

Here, we performed an experimental study applying SI to localized earthquake records acquired by Ocean Bottom Seismogram (OBS) stations in the Nankai Trough, southwest Japan, in order to reveal relatively shallow geological boundaries, including the surface of the oceanic crust (i.e., the plate boundary).
To our knowledge, there has been no application of SI to natural earthquakes recorded by OBS. The OBSs used in this study were deployed for local earthquake observation in the subduction zone [11, 12]. These local earthquakes cannot be treated as teleseismic wavefields, since the source-receiver distance is smaller than the length of the receiver array. Furthermore, such localized earthquakes are usually inhomogeneously distributed and violate the assumption of SI. Therefore, conventional SI processing (crosscorrelation and summation) yields deteriorated subsurface images. However, focusing on local earthquakes with shorter raypaths offers advantages over using teleseismic wavefields: teleseismic events usually have long propagation paths and lose their higher-frequency components, which limits detection to large-scale structures such as Moho reflections. On the other hand, local earthquakes with shorter raypaths are expected to contain higher-frequency components and to resolve finer-scale structures with SI. In this study, we discuss a method to retrieve the reflection response using localized natural earthquakes observed by OBS; we adopt a raypath calculation for stationary phase evaluation in the SI analysis in order to overcome the noise originating from the violation of the homogeneous source distribution.

The physical interpretation of the condition posed on SI can be explained by the stationary phase approximation [5, 13, 14]. This approximation shows that the dominant contribution to the retrieved Green's function comes from the crosscorrelation of records from physical sources located at stationary phase positions (stationary phase sources). Therefore, for a localized source distribution, only the crosscorrelation pairs with stationary phase positions have physical meaning. The other crosscorrelation traces will produce noise and deteriorate the quality of the imaging results.
In this study, we identify crosscorrelation pairs with stationary phase positions using the estimated raypath information of two reflections: (1) the sea-surface P-wave reflection and (2) the sea-surface multiple P-wave reflection.

Note that Chaput and Bostock [15] successfully retrieved the reflection response using subsurface noise sources located at 10 km to 60 km depth, a source depth similar to that of our study. Using the stationary phase approximation, they found that the source illumination is imperfect. Our study differs from theirs in that we focus on natural earthquakes.

## 2. Estimation of Stationary Phase Records Using Raypath Calculation

### 2.1. Stationary Phase Approximation of Seismic Interferometry

Seismic interferometry by crosscorrelation (e.g., [4]) is written as

$$\hat{G}(x_B,x_A,\omega)+\hat{G}^{*}(x_B,x_A,\omega)=\frac{2k}{\omega\rho}\oint_{\mathbb{S}_{src}}\hat{G}(x_B,x_S,\omega)\,\hat{G}^{*}(x_A,x_S,\omega)\,d^{2}x_S, \tag{1}$$

where $\hat{G}(x_A,x_S,\omega)$ and $\hat{G}(x_B,x_S,\omega)$ are the observed wavefields from the sources $x_S$ along the closed surface $\mathbb{S}_{src}$, and $\hat{G}(x_B,x_A,\omega)$ is the Green's function between the two receiver positions.

We assume that the primary reflection is retrieved by crosscorrelating a direct wave and a specular reflection from the physical sources (Figure 1(a)):

$$\hat{G}^{d}(x_A,x_S,\omega)=e^{i\omega\tau_{SA}},\qquad \hat{G}^{r}(x_B,x_S,\omega)=e^{i\omega\tau_{SyB}}, \tag{2}$$

where $\tau_{SA}$ denotes the travel time of the direct wave from $x_S$ to $x_A$, and $\tau_{SyB}$ denotes the travel time of the specular reflection from the source position $x_S$ to the receiver position $x_B$ through the specular reflection point $y$ (Figure 1(a)). The superscript $d$ or $r$ indicates that we consider only a direct wave for $x_A$ and a reflected wave for $x_B$. Note that the amplitudes of these wavefields are assumed to be normalized in (2). Substituting these waves into (1) and applying the stationary phase approximation (e.g., [5, 16]) yields

$$\hat{G}(x_B,x_A,\omega)\approx e^{i\omega\tau_{Ay_0B}}\oint_{S}e^{i\omega(\tau_{SyB}-\tau_{SA}-\tau_{Ay_0B})}\,d^{2}x_S\approx\alpha\, e^{i\omega\tau_{Ay_0B}}, \tag{3}$$

where $\tau_{Ay_0B}$ denotes the travel time of the specular reflection between the two receiver positions through its specular reflection point $y_0$ (Figure 1(b)), and $\alpha$ denotes a coefficient from the stationary phase approximation. Note that we removed $\hat{G}^{*}(x_B,x_A,\omega)$ in (1) since we consider only the causal part of the Green's function; the acausal part is obtained by considering a direct wave for $x_B$ and a reflected wave for $x_A$. The integral in (3) has a stationary point at $x_S=x_S^{*}$, at which the objective primary reflection is retrieved [5]. Furthermore, at this stationary point, the following relation is satisfied:

$$\tau_{S^{*}yB}-\tau_{S^{*}A}-\tau_{Ay_0B}=0. \tag{4}$$

This relation states that, for the stationary phase source position, the two events (here, the direct wave and the reflected wave) share the same raypath from the source position to the receiver position (Figure 1(b)). This corresponds to the fact that the crosscorrelation subtracts the travel times, cancels the common raypath, and produces the travel time of the objective reflection event between the receivers.

Figure 1 (a) Receiver xA observes the direct wave with travel time τSA, and xB observes the reflected wave with τSyB. The specular reflection position y(xS) varies with the source position xS. (b) When the source position xS is at the stationary phase position xS*, the two events have a common raypath between xS* and xA.

Note that the amplitudes derived from the crosscorrelation of nonstationary phase sources are cancelled after summation over sources distributed homogeneously along the closed surface. For a localized source distribution, the cancellation is insufficient and unwanted amplitudes remain.

### 2.2. Selection of Receiver Pairs by Stationary Phase Evaluation

When the physical sources are widely distributed, the stationary phase sources effectively produce the objective reflection events. On the other hand, when the source distribution is localized, only the reflection events with stationary phase sources are retrieved.
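The travel-time cancellation expressed by relations (2)-(4) can be sketched numerically: crosscorrelating a direct arrival at xA with a reflected arrival at xB recovers the inter-receiver reflection time τAy0B when the source sits at the stationary phase position. This is a minimal illustration with hypothetical travel times, not the paper's processing code.

```python
import numpy as np

# Hypothetical travel times (s) for a stationary phase source: per
# relation (4), tau_SyB - tau_SA equals the inter-receiver reflection
# travel time tau_Ay0B that SI aims to retrieve.
tau_SA = 2.0                  # direct wave, source -> receiver A
tau_Ay0B = 1.5                # reflection between receivers (target)
tau_SyB = tau_SA + tau_Ay0B   # reflected wave, source -> receiver B

dt = 0.004                    # sample interval (s)
n = 2048
t = np.arange(n) * dt

def spike(tau):
    """Band-limited spike (Ricker wavelet, 5 Hz) centered at time tau."""
    x = np.pi * 5.0 * (t - tau)
    return (1 - 2 * x**2) * np.exp(-x**2)

u_A = spike(tau_SA)    # direct wave recorded at x_A
u_B = spike(tau_SyB)   # sea-surface-path reflection recorded at x_B

# Crosscorrelation subtracts travel times along the common raypath;
# the peak lag is tau_SyB - tau_SA = tau_Ay0B.
cc = np.correlate(u_B, u_A, mode="full")
lags = np.arange(-(n - 1), n) * dt
retrieved = lags[np.argmax(cc)]

print(f"retrieved lag = {retrieved:.3f} s, expected tau_Ay0B = {tau_Ay0B} s")
```

For a nonstationary source, the same crosscorrelation would peak at a lag with no physical meaning, which is exactly the noise that the pair selection below is designed to exclude.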
Therefore, we evaluate each crosscorrelation trace for the existence of a stationary phase source and exclude those without stationary phase sources from the summation of the crosscorrelation traces. We assume that we can estimate the raypaths propagating from the source position to the two receivers. When these two raypaths share a common pathway, we regard the crosscorrelated trace from these receivers as containing the objective reflections.

In order to evaluate the existence of the stationary phase sources, the raypaths of the direct waves and of arbitrary multiple reflected waves must be estimated. We adopt a method developed by Tamagawa et al. [17] for the raypath calculation. This method geometrically calculates the raypath for arbitrary multiple reflections in a given three-dimensional dipping structure, assuming straight raypaths (Figure 2). When we have two reflector planes, the method calculates the mirror points of the source position with respect to the reflector planes (M1→M2→M3 in Figure 2). A mirror point is defined as the reflection of a position across a plane (plane symmetry). The raypath of an arbitrary multiple reflection can be derived by connecting those mirror points to the receiver position (R3→R2→R1 in Figure 2). This method is simple and fast.

Figure 2 Raypath calculation for multiple reflections.

We evaluate the stationary phase using these raypaths. For example, we assume that the receiver xA observes the direct wave, and two different receivers xB and xB′ observe the reflected wave (Figure 3). In this model, the candidates for crosscorrelation processing are xA-xB and xA-xB′. Stationary phase evaluation using the raypaths tells us that only the crosscorrelation of xA-xB has a stationary phase source and contains the objective reflection event. By applying this procedure to all source positions and receiver combinations, we remove the crosscorrelation traces which do not have stationary sources for the objective reflections.
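The mirror-point construction can be sketched for a single reflecting plane (here the sea surface, as for the pP leg; the method of Tamagawa et al. [17] chains this step over several planes): reflect the source across the plane, and the straight segment from the mirror point to the receiver fixes the specular bounce point and the path length. All coordinates below are illustrative, not values from the paper.

```python
import numpy as np

# Illustrative mirror-point raypath for one reflector plane: the sea
# surface z = 0 (z positive upward, units km). Coordinates are hypothetical.
plane_point = np.array([0.0, 0.0, 0.0])   # a point on the sea surface
normal = np.array([0.0, 0.0, 1.0])        # unit normal of the plane

def mirror_point(pos, p0, n_hat):
    """Reflect a position across the plane (p0, n_hat) -- plane symmetry."""
    return pos - 2.0 * np.dot(pos - p0, n_hat) * n_hat

def reflection_raypath(src, rec, p0, n_hat):
    """Straight-ray specular reflection via the mirror point: the segment
    from the mirrored source to the receiver crosses the plane at the
    bounce point, and its length equals the full reflected path length."""
    m = mirror_point(src, p0, n_hat)
    u = rec - m
    s = np.dot(p0 - m, n_hat) / np.dot(u, n_hat)  # parametric intersection
    bounce = m + s * u
    length = np.linalg.norm(bounce - src) + np.linalg.norm(rec - bounce)
    return bounce, length

src = np.array([10.0, 0.0, -30.0])  # buried source at 30 km depth
rec = np.array([25.0, 0.0, -2.0])   # OBS on the seafloor at 2 km depth
bounce, length = reflection_raypath(src, rec, plane_point, normal)
print("bounce point:", bounce)            # lies on the plane z = 0
print("path length (km):", round(length, 3))
```

Because the construction is purely geometric (no ray tracing through a velocity model), iterating it over many source-receiver pairs is cheap, which is what makes the exhaustive pair evaluation described above practical.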
Note that we need some knowledge of the reflector position to evaluate the raypaths before imaging. However, a rough estimate of the reflector position can be sufficient, since we do not need precise information about the locations of the stationary points.

Figure 3 Source xS is a stationary source for the crosscorrelation pair xA-xB, but not for the pair xA-xB′.

## 3. Numerical Modeling Results

We numerically simulate the wavefield with a localized source distribution in order to evaluate the processing that considers the stationary phase points. For the simulation study, we consider local earthquakes recorded by Ocean Bottom Seismograms (OBS) on the seafloor. The objective reflection events are assumed to be those generated at the oceanic crust surface. We use only one physical source for the simulation study because we evaluate our proposed method for a localized source distribution. The velocity model is a three-layer structure including sea water (Figure 4). The dipping structure in the velocity model simulates an oceanic crust surface (i.e., plate interface) in the Nankai Trough.

Figure 4 Velocity model for the numerical simulation. pP and pPp denote the sea-surface reflection and the sea-surface multiple reflection, respectively.

We define the two events which produce the objective reflection event for the stationary phase evaluation. In our source-receiver configuration, the source is located in the subsurface, and the sea surface (modeled as a free surface) is expected to produce strong downpropagating reflections. This distinguishes the OBS-recorded wavefields from those recorded by surface receivers. Therefore, we assume that the sea-surface P-wave reflections (denoted as pP in Figure 4) and the sea-surface multiple P-wave reflections (pPp in Figure 4) produce the reflection events from the oceanic crust surface after crosscorrelation. Although other events may contribute to producing the objective reflection event (e.g., the crosscorrelation of the direct P-wave and the multiple P-wave reflections; Abe et al.
[3]), we consider the crosscorrelation of these two events (pP and pPp) since they propagate with high energy and are easily separated from the S-waves.

We numerically modeled the seismic wavefield with a single subsurface source using the 2-dimensional staggered-grid method [18]. The subsurface source is placed at 30 km depth (Figure 4). We simulate the earthquake by applying a horizontal stress as the source and using a Ricker wavelet with a central frequency of 5 Hz. The length of the receiver array is 50 km, with 200 m intervals.

Figure 5 shows the calculated wavefield (vertical component) at the receiver array. One can see that the sea-surface reflections (pP) and the sea-surface multiple reflections (pPp) are observed, as well as the direct P-wave and the direct S-wave (Figure 5). Since we focus only on P-waves in this study, we muted the amplitudes of the direct S-wave and subsequent wavefields. Because we have 251 receivers, the total number of crosscorrelation traces is ₂₅₁C₂ = 31,375, the number of 2-element combinations of 251 elements. Figure 6 shows the subsurface image derived from all crosscorrelation traces using prestack depth migration (PSDM). The target dipping structure appears in the image. However, the signal-to-noise ratio at near offsets is low, and artifact dipping events appear (arrows in Figure 6). These artifacts are caused by the crosscorrelation traces which do not have a stationary phase source for the objective reflections, as pointed out by, for example, Snieder et al. [19].

Figure 5 Vertical component of the modeled seismic wavefields. Arrows indicate the dominant events.

Figure 6 PSDM-imaging result using all crosscorrelation traces. Arrows indicate the artifact events.

We evaluate the stationary phase sources in order to remove the unwanted crosscorrelation traces. We estimate the raypaths of pP and pPp and compare them.
We fix the receiver which observes pP (denoted as RpP) and calculate the raypath of pPp for all receivers (RpPpi). We calculate the horizontal distance between the receiver RpP and the point where the downgoing pPp wave passes through the seafloor (Figure 7). We refer to this distance as the interferometric distance (Figure 4). An interferometric distance of zero indicates that the two events share a common raypath from the buried source to the receiver RpP. Due to the receiver spacing, the interferometric distance is not always zero (Figure 7). Therefore, we define a threshold for the interferometric distance. The receivers RpPpi with an interferometric distance less than the threshold value (1 km in this case) are assumed to share the common raypath. This threshold corresponds to determining the size of the first Fresnel zone around the stationary points. Note that the interferometric distance projected from the first Fresnel zone depends on the position of the reflector and the receiver geometry. We iterate this procedure over the choice of fixed receiver RpP and evaluate all combinations of crosscorrelation traces.

Figure 7 Example of the interferometric distance for different RpPpi with a fixed RpP. A receiver RpPpi with an interferometric distance less than the threshold value (1 km) defines a stationary phase pair.

We remove the crosscorrelation traces which do not have a stationary phase source for the objective reflections and apply PSDM to the remaining 2489 traces. Figure 8 shows the resulting subsurface image considering the stationary phase. One can see that the signal-to-noise ratio at near offsets is improved and the artifact events are suppressed. We conclude that evaluating the stationary phase by raypath calculation and removing unwanted crosscorrelation traces improve reflection images in SI.

Figure 8 PSDM imaging result using crosscorrelation traces selected by stationary phase evaluation. This result is much improved compared with Figure 6. ## 4.
Extracting Reflected Waves of Oceanic Crust Surface from OBS Records in the Nankai Trough We apply this analysis to field passive-seismic data. The field data consist of OBS records acquired in the Nankai Trough to observe local earthquakes. This dataset was originally obtained by JAMSTEC [11, 12] for earthquake observation as well as for aftershocks of the September 2004 intraplate earthquake that ruptured around this survey area [20]. The 28 OBSs are deployed three-dimensionally over an approximately 100 km-squared area (Figure 9(a)). The recording period was approximately 3 months from March 2005, and we used data from 12 days in March. We extract the reflection events generated at the oceanic crust surface using crosscorrelation. Among the 653 local earthquakes detected in this period, we extract 6 earthquakes with magnitudes larger than 3.0 and depths greater than 20 km (Figure 9(b)), because these earthquakes contain strong energy. We choose deep earthquakes since the P-wave energy dominates the vertical component of the records, and the S-wave arrivals are late enough to be separated from the P-waves. These earthquakes are localized in the southwest of the survey area (Figure 9(a)).

Figure 9 (a) Survey area in the Nankai Trough. Three seismic survey lines, KR9806, IL, and S3, indicate those from Figure 9(b) [8], Figure 11(a) [9], and Figure 13(c) [10], respectively. (b) The depths of the OBSs and earthquakes projected to Line KR9806. The background velocity contour is derived from refraction tomography [8]. The hypocenters of the 6 earthquakes are localized around the trough axis.

We show the 168 traces from the 6 events, aligned by epicentral distance, with origin times corrected to the earthquake nucleation times (Figure 10). The dominant frequency was ~5 Hz. One can see two events with different propagation velocities on the seismic record.
Since the first arrivals dominate the vertical component (Figure 10), we interpret them as P-waves and the second arrivals as direct S-waves. We muted the amplitudes of the S-waves and subsequent records in order to analyze only the P-wave events.

Figure 10 Recorded signals of the 168 traces from the 6 events, aligned by epicentral distance after correcting the origin times to the earthquake times.

Figure 11(a) shows the reflection profile across the Nankai Trough (Moore et al. [9]; Line IL in Figure 9(a)). The oceanic crust surface in this area is located at 7 km to 10 km depth and dips in the landward direction (dashed line in Figure 11(a)). We extended this structure (a landward-dipping structure with an angle of 5.5 degrees) perpendicular to the 2D cross-line (Line IL in Figure 9(a)) to obtain the 3D dipping structure used for the raypath calculation (Figure 11(b)).

Figure 11 (a) 2D reflection profile across the Nankai Trough (Line IL in Figure 9(a), modified after Moore et al. [9]). The dashed line shows the angle of 5.5 degrees. (b) Constructed 3D dipping structure for the raypath calculation.

We estimate the raypaths of the sea-surface reflection (pP) and the sea-surface multiple reflection (pPp) as in the simulation study. To account for the different elevations of the OBSs, we define the interferometric distance as the distance between the receiver RpP and the point where the downgoing pPp wave passes through the horizontal plane of the receiver RpP. We defined the threshold of the interferometric distance as 5 km. We removed the crosscorrelation traces without stationary phase sources and obtained 55 crosscorrelation traces. Since we calculate the raypaths, we can also estimate the reflection points on the given structure (Figure 12).
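The pair-selection rule is, in essence, a distance test: compute the interferometric distance for each candidate receiver and keep only the pairs below the threshold. A minimal sketch follows; the crossing points are hypothetical placeholders standing in for the raypath-derived values, with the 5 km field-data threshold.

```python
import numpy as np

# Sketch of the pair-selection rule: keep a receiver pair only when its
# interferometric distance (horizontal offset between the fixed receiver
# R_pP and the crossing point of the downgoing pPp wave through the
# receiver plane) is below a threshold. All numbers are hypothetical.
r_pP = np.array([12.0, 0.0])   # fixed receiver position (km)

# One candidate crossing point per receiver R_pPp_i (km),
# as would be obtained from the raypath calculation.
crossings = np.array([
    [ 3.2,  1.0],
    [14.5, -0.8],
    [ 9.9,  2.5],
    [30.1,  0.4],
    [11.2, -3.0],
    [47.0,  1.1],
])

threshold_km = 5.0             # field-data threshold used in the text
interferometric_dist = np.linalg.norm(crossings - r_pP, axis=1)
stationary_pairs = np.flatnonzero(interferometric_dist < threshold_km)

print("selected receiver indices:", stationary_pairs.tolist())  # -> [1, 2, 4]
```

Iterating this test over every choice of fixed receiver reproduces the exhaustive pair evaluation of the previous section; the surviving traces are the only ones summed before migration.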
Due to the source localization, only a subset of the receiver combinations is selected.

Figure 12 Reflection points estimated by the raypath calculation (black dots).

We applied 3D PSDM to the crosscorrelation traces. We extended the tomographic velocity model estimated by Nakanishi et al. [8] perpendicular to its survey line (Line KR9806 in Figure 9(a)) and used it as the velocity model for PSDM. The expected reflection points are sparsely distributed in the 3D region (Figure 12). Therefore, we spatially stacked the 3D imaging result and obtained a pseudo-2D profile.

We show the pseudo-2D profile projected to Line KR9806 (Figure 13(a)), running perpendicular to the trough axis. The result shows that the oceanic crust surface is imaged at the same depth as in the active-source reflection profile of Line IL (arrows in Figure 13(a)). For comparison, we show the PSDM result using all crosscorrelation traces projected to Line KR9806 (Figure 13(b)). Since the oceanic crust surface is difficult to detect in that profile (Figure 13(b)), the stationary phase evaluation improves the quality of the imaging result (Figure 13(a)).

Figure 13 PSDM-imaging results. (a) Stacked profile projected to Line KR9806 (Figure 9(a)), running perpendicular to the trough axis. The seismic profile from Moore et al. [9] is overlaid on this profile. Arrows show the imaged oceanic crust surface. (b) Stacked profile projected to Line KR9806 using all crosscorrelation traces. (c) Stacked profile projected to Line S3 (Figure 9(a)), running parallel to the trough axis. The seismic profile from Park et al. [10] is overlaid.

We show the pseudo-2D profile projected to Line S3 (Figure 13(c)), running parallel to the trough axis. We convert the depth axis of our imaging result to the time axis using the migration velocity model.
The imaged oceanic crust surface (arrows in Figure 13(c)) is discontinuous around a horizontal distance of 20 km due to the migration aperture and the sparse distribution of the reflection points (Figure 12). However, we can observe dominant amplitudes at the same two-way time as the oceanic crust surface in Line S3 [10], as well as the local bulge of the oceanic crust surface.

## 5. Conclusion

We use the stationary phase interpretation to obtain high-quality imaging results for a localized source distribution. We estimate the raypaths of two reflection events: the sea-surface P-wave reflection and the sea-surface multiple P-wave reflection. Using the estimated raypaths, we choose the crosscorrelation traces which are expected to produce the objective reflections from stationary phase sources. We show numerical modeling results to check the validity of this method. Furthermore, we use Ocean Bottom Seismogram (OBS) records of localized earthquakes. We show that choosing the crosscorrelation traces by stationary phase evaluation improves the quality of the imaged reflection boundary of the oceanic crust surface. This processing technique offers the possibility of monitoring the Nankai seismogenic fault without active sources and at higher resolution than teleseismic records allow.
101545-2012-06-12_101545-2012-06-12.md
30,170
Crosscorrelation of Earthquake Data Using Stationary Phase Evaluation: Insight into Reflection Structures of Oceanic Crust Surface in the Nankai Trough
Shohei Minato; Takeshi Tsuji; Toshifumi Matsuoka; Koichiro Obana
International Journal of Geophysics (2012)
Physical Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2012/101545
101545-2012-06-12.xml
--- ## Abstract Seismic interferometry (SI) has been recently employed to retrieve the reflection response from natural earthquakes. We perform experimental study to apply SI to Ocean Bottom Seismogram (OBS) records in the Nankai Trough, southwest Japan in order to reveal the relatively shallow geological boundaries including surface of oceanic crust. Although the local earthquakes with short raypath we use to retrieve reflection response are expected to contain the higher-frequency components to detect fine-scale structures by SI, they cannot be assumed as plane waves and are inhomogeneously distributed. Since the condition of inhomogeneous source distribution violates the assumption of SI, the conventional processing yields to the deteriorated subsurface images. Here we adopt the raypath calculation for stationary phase evaluation of SI in order to overcome this problem. To find stationary phase, we estimate the raypaths of two reflections: (1) sea-surfaceP-wave reflection and (2) sea-surface multiple P-wave reflection. From the estimated raypath, we choose the crosscorrelation traces which are expected to produce objective reflections considering the stationary phase points. We use the numerical-modeling data and field data with 6 localized earthquakes and show that choosing the crosscorrelation traces by stationary phase evaluation improves the quality of the reflections of the oceanic crust surface. --- ## Body ## 1. Introduction Among various seismic exploration methods using the body-wave of natural earthquakes [1, 2], seismic interferometry (SI) has been recently employed to retrieve the reflection response. Although the receiver function method [1] has been broadly used to image the Moho and mantle discontinuities, there is a study claiming that retrieving and migrating the reflection response using SI is superior to the receiver function method [3].SI retrieves Green’s function between receivers by crosscorrelating wavefield [4, 5]. 
This theory requires the physical sources homogeneously distributed along the enclosed surface which surrounds the receivers [4]. There are several successful applications of SI to natural earthquakes. Abe et al. [3] and Tonegawa et al. [6] retrieved the crustal reflection response in central Japan using P coda and S coda, respectively. Ruigrok et al. [7] used P wave to retrieve reflection response using Laramie array in USA. Abe et al. [3] further showed the comparison of migrated images of SI and those using receiver function analysis.These applications focused on the teleseismic wavefields in which the epicentral distance is much longer than the length of receiver array. Consequently the wavefields can be assumed as plane wave. Ruigrok et al. [7] replaced the integral for source position of SI into a one for ray parameter. These teleseismic events are suitable to SI processing since the most earthquakes are generated at the numerous plate boundaries in the world, and consequently the teleseismic records contain the earthquakes propagating from the various directions. This can moderately enable us to assume that their source distribution is homogeneous.Here, we performed experimental study to apply SI to the localized earthquake records acquired by Ocean Bottom Seismogram (OBS) in the Nankai Trough, southwest Japan in order to reveal the relatively shallow geological boundaries including surface of oceanic crust (i.e., plate boundary). There is no application of SI to the natural earthquake recorded by OBS. The OBS we used in this study was deployed to observe the local earthquake for earthquake observation in subduction zone [11, 12]. Those local earthquakes cannot be assumed as a teleseismic wavefield since the source-receiver distance is smaller than the length of receiver array. Furthermore, those localized earthquakes are usually inhomogeneously distributed and violate the assumption of SI. 
Therefore, the conventional SI processing (crosscorrelation and summation) yields to the deteriorated subsurface images. However, focusing on the local earthquakes with shorter raypath can give us the advantages over using the teleseismic wavefields because the teleseismic events usually have long propagating path and lose their higher-frequency components, which leads to detect only large-scale structure such as Moho reflections. On the other hand, the local earthquakes which have shorter raypath are expected to contain higher-frequency components and to resolve more fine-scale structures by using SI. In this study, we discuss a method to retrieve the reflection response using localized natural earthquakes observed by OBS; we adopt the raypath calculation for stationary phase evaluation in SI analysis in order to overcome the noise originated from the violation of homogeneous source distribution.The physical interpretation of the condition to be posed on the SI can be explained by the stationary phase approximation [5, 13, 14]. In this approximation, it explains that the dominant contribution of the retrieval of the Green’s function comes from the crosscorrelation of the records from the physical source located at the stationary phase position (stationary phase source). Therefore, in the localized source distribution, crosscorrelation pairs with stationary phase positions have physical meaning. The other crosscorrelation traces will produce noise and deteriorate the quality of the imaging results. In this study, we identify crosscorrelation pairs with stationary phase position using the estimated raypath information of two reflections: (1) sea-surface P-wave reflection and (2) sea-surface multiple P-wave reflection.Note that Chaput and Bostock [15] successfully retrieved reflection response using the subsurface noise sources located from 10 km to 60 km depth which is similar source depth to our study. 
They found that the source illumination is imperfect, based on a discussion using the stationary phase approximation. Our study differs from theirs in that we focus on natural earthquakes.

## 2. Estimation of Stationary Phase Records Using Raypath Calculation

### 2.1. Stationary Phase Approximation of Seismic Interferometry

Seismic interferometry by crosscorrelation (e.g., [4]) is written as

(1) Ĝ(xB,xA,ω) + Ĝ*(xB,xA,ω) = (2k/ωρ) ∮_Ssrc Ĝ(xB,xS,ω) Ĝ*(xA,xS,ω) d²xS,

where Ĝ(xA,xS,ω) and Ĝ(xB,xS,ω) are the observed wavefields from the sources xS along the closed surface 𝕊src, and Ĝ(xB,xA,ω) is the Green's function between the two receiver positions. We assume that the primary reflection is retrieved by crosscorrelating a direct wave and a specular reflection from the physical sources (Figure 1(a)):

(2) Ĝd(xA,xS,ω) = e^(iωτSA),  Ĝr(xB,xS,ω) = e^(iωτSyB),

where τSA denotes the travel time of the direct wave from xS to xA, and τSyB denotes the travel time of the specular reflection from the source position xS to the receiver position xB through the specular reflection point y (Figure 1(a)). The superscript d or r indicates that we consider only the direct wave for xA and the reflected wave for xB. Note that the amplitudes of these wavefields are assumed to be normalized in (2). Substituting these waves into (1) and applying the stationary phase approximation (e.g., [5, 16]) yields

(3) Ĝ(xB,xA,ω) ≈ e^(iωτAy0B) ∮_S e^(iω(τSyB − τSA − τAy0B)) d²xS ≈ α e^(iωτAy0B),

where τAy0B denotes the travel time of the specular reflection between the two receiver positions through its specular reflection point y0 (Figure 1(b)), and α denotes the coefficient arising from the stationary phase approximation. Note that we dropped Ĝ*(xB,xA,ω) from (1) since we consider only the causal part of the Green's function; the acausal part is obtained by considering the direct wave for xB and the reflected wave for xA. The integral in (3) has a stationary point at xS = xS*, and the objective primary reflection is retrieved [5].
Furthermore, at this stationary point the following relation is satisfied:

(4) τS*yB − τS*A − τAy0B = 0.

This relation states that, for the stationary phase source position, the two events (here, the direct wave and the reflected wave) share the same raypath from the source position to the receiver position (Figure 1(b)). This corresponds to the fact that the crosscorrelation subtracts the travel times, cancels the common raypath, and produces the traveltime of the objective reflection event between the receivers.

Figure 1 (a) Receiver xA observes the direct wave with travel time τSA, and xB observes the reflected wave with τSyB. The specular reflection position y(xS) varies with the source position xS. (b) When the source position xS coincides with the stationary phase position xS*, the two events share a common raypath between xS* and xA.

Note that the amplitudes derived from the crosscorrelation of nonstationary phase sources cancel after summation over sources homogeneously distributed along the closed surface. For a localized source distribution, this cancellation is insufficient and unwanted amplitudes remain.

### 2.2. Selection of Receiver Pairs by Stationary Phase Evaluation

When the physical sources are widely distributed, the stationary phase sources effectively produce the objective reflection events. When the source distribution is localized, on the other hand, only reflection events with stationary phase sources are retrieved. We therefore evaluate the crosscorrelation traces for the existence of a stationary phase source and exclude those without one from the summation of the crosscorrelation traces. We assume that we can estimate the raypath propagating from the source position to the two receivers.
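The traveltime identity (4) can be checked numerically for a constant-velocity model with a flat reflector (velocity, depths, and positions below are hypothetical): scanning candidate sources along a line on the opposite side of the receivers from the reflector, the correlation lag τSyB − τSA is stationary exactly where it equals the inter-receiver reflection time τAy0B. A minimal sketch:

```python
import math

C = 1.5                          # P velocity (km/s) -- hypothetical
DEPTH = 2.0                      # flat reflector depth (km) -- hypothetical
A, B = (0.0, 0.0), (3.0, 0.0)    # receiver positions (km)

def t_direct(src, rec):
    return math.dist(src, rec) / C

def t_reflected(src, rec):
    # image method: mirror the source across the reflector z = DEPTH
    img = (src[0], 2.0 * DEPTH - src[1])
    return math.dist(img, rec) / C

# target: inter-receiver specular reflection traveltime tau_Ay0B
tau_ab = t_reflected(A, B)

# scan candidate sources along a horizontal line at z = -0.8
zs = -0.8
lags = [(x, t_reflected((x, zs), B) - t_direct((x, zs), A))
        for x in (i * 0.01 - 5.0 for i in range(1001))]
x_star, lag_star = max(lags, key=lambda p: p[1])  # stationary (extremal) lag

# eq. (4): at the stationary source the lag equals tau_Ay0B
print(round(x_star, 2), round(lag_star, 4), round(tau_ab, 4))
```

In this geometry the stationary source lies on the extension of the ray A→y0→B beyond A, so subtracting the direct-wave traveltime cancels the common leg, exactly as described above.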
When these two raypaths share a common pathway, we regard the crosscorrelated trace from these receivers as containing the objective reflection. In order to evaluate the existence of stationary phase sources, the raypaths of the direct waves and of arbitrary multiply reflected waves must be estimated. We adopted the method developed by Tamagawa et al. [17] for raypath calculation. This method geometrically calculates the raypath for arbitrary multiple reflections in a given structure of arbitrarily oriented three-dimensional reflector planes, assuming straight raypaths (Figure 2). Given two reflector planes, the method calculates successive mirror points of the source position with respect to the reflector planes (M1→M2→M3 in Figure 2); a mirror point is the projection of a position by plane symmetry. The raypath of an arbitrary multiple reflection is then obtained by connecting those mirror points back to the receiver position (R3→R2→R1 in Figure 2). This method is simple and fast.

Figure 2 Raypath calculation for multiple reflections.

We evaluate the stationary phase using these raypaths. For example, assume that receiver xA observes a direct wave and that two different receivers xB and xB′ observe reflected waves (Figure 3). In this model, the candidates for crosscorrelation processing are xA-xB and xA-xB′. Stationary phase evaluation using the raypaths tells us that only the crosscorrelation of xA-xB has a stationary phase source and contains the objective reflection event. By carrying out this evaluation for all source positions and receiver combinations, we remove the crosscorrelation traces that do not have stationary phase sources for the objective reflections. Note that some knowledge of the reflector position is needed to evaluate the raypaths before imaging.
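The mirror-point construction can be sketched for the simple case of flat horizontal reflector planes (the method of Tamagawa et al. [17] treats arbitrarily oriented planes the same way; coordinates here are hypothetical): successive mirror images of the source are built across each plane, and the bounce points are then recovered by walking back from the receiver.

```python
def mirror_z(p, z_plane):
    """Mirror point of p = (x, z) across the horizontal plane z = z_plane."""
    return (p[0], 2.0 * z_plane - p[1])

def multiple_raypath(src, rec, z_planes):
    """Straight-ray path src -> z_planes[0] -> ... -> z_planes[-1] -> rec."""
    # successive mirror images of the source (M1 -> M2 -> ... in Figure 2)
    images = [src]
    for zp in z_planes:
        images.append(mirror_z(images[-1], zp))

    def intersect(p, q, zp):
        # intersection of the segment p-q with the plane z = zp
        t = (zp - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), zp)

    # walk back from the receiver through each plane (R3 -> R2 -> R1)
    path = [rec]
    for i in range(len(z_planes) - 1, -1, -1):
        path.append(intersect(path[-1], images[i + 1], z_planes[i]))
    path.append(src)
    return path[::-1]            # source, bounce points, receiver

# two-bounce example: down to z = 2, back up through z = -1, then to rec
print(multiple_raypath((0.0, 0.0), (6.0, 0.0), [2.0, -1.0]))
```

Each straight segment of the unfolded ray maps back onto a reflected leg, so the angle of incidence equals the angle of reflection at every bounce point by construction.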
However, a rough estimate of the reflector position can be sufficient, since we do not need precise information about the location of the stationary points.

Figure 3 Source xS is a stationary source for the crosscorrelation pair xA-xB, but not for the pair xA-xB′.

## 3. Numerical Modeling Results

We numerically simulate a wavefield with a localized source distribution in order to evaluate the processing that accounts for the stationary phase point. For the simulation study, we consider local earthquakes recorded by Ocean Bottom Seismogram (OBS) stations on the seafloor. The objective reflection events are assumed to be those generated at the oceanic crust surface. We use only one physical source in the simulation, because we evaluate our proposed method for a localized source distribution. The velocity model is a three-layer structure including sea water (Figure 4). The dipping structure in the velocity model represents the oceanic crust surface (i.e., the plate interface) in the Nankai Trough.

Figure 4 Velocity model for the numerical simulation. pP and pPp denote the sea-surface reflection and the sea-surface multiple reflection, respectively.

We define the two events that produce the objective reflection event for the stationary phase evaluation. In our source-receiver configuration, the source is located in the subsurface, and the sea surface (modeled as a free surface) is expected to produce strong downpropagating reflections; this is what distinguishes OBS-recorded wavefields from those of surface receivers. We therefore assume that the sea-surface P-wave reflections (denoted pP in Figure 4) and the sea-surface multiple P-wave reflections (pPp in Figure 4) produce the reflection events from the oceanic crust surface after crosscorrelation. Although other events may contribute to the objective reflection event (e.g., the crosscorrelation of the direct P-wave and the multiple P-wave reflections; Abe et al.
[3]), we consider the crosscorrelation of these two events (pP and pPp), since they propagate with high energy and are easily separated from S-waves. We numerically modeled the seismic wavefield with a single subsurface source using the 2-dimensional staggered-grid method [18]. The subsurface source is placed at 30 km depth (Figure 4). We simulate the earthquake by applying a horizontal stress as the source, using a Ricker wavelet with a central frequency of 5 Hz. The receiver array is 50 km long with 200 m intervals.

Figure 5 shows the calculated wavefield (vertical component) at the receiver array. The sea-surface reflections (pP) and the sea-surface multiple reflections (pPp) are observed, as well as the direct P-wave and direct S-wave (Figure 5). Since we focus only on P-waves in this study, we muted the direct S-wave and the subsequent wavefield. Because we have 251 receivers, the total number of crosscorrelation traces is 251C2, the number of 2-element combinations of 251 elements. Figure 6 shows the subsurface image derived from all crosscorrelation traces using prestack depth migration (PSDM). The target dipping structure appears in the image. However, the signal-to-noise ratio at near offsets is low, and artifact dipping events appear (arrows in Figure 6). These artifacts are caused by the crosscorrelation traces that do not have a stationary phase source for the objective reflections, as pointed out by, for example, Snieder et al. [19].

Figure 5 Vertical component of the modeled seismic wavefields. Arrows indicate the dominant events.

Figure 6 PSDM imaging result using all crosscorrelation traces. Arrows indicate the artifact events.

We evaluate the stationary phase source in order to remove the unwanted crosscorrelation traces. We estimate the raypaths of pP and pPp and compare them.
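One way to implement the raypath comparison is to measure, for each candidate pair, the horizontal offset between the fixed pP receiver and the point where the pPp raypath crosses the seafloor, keeping only pairs whose offset falls below a threshold (1 km in the simulation). The sketch below assumes the crossing points have already been computed by a raypath code; all names and toy values are hypothetical:

```python
from itertools import combinations
from math import comb

THRESHOLD_KM = 1.0   # accepted zone around the stationary point

def select_stationary_pairs(crossing_x, receiver_x):
    """Keep pairs (i, j) -- receiver i observes pP, receiver j observes pPp --
    whose interferometric distance |crossing_x[i][j] - receiver_x[i]| is below
    the threshold.  crossing_x[i][j] is the (precomputed) horizontal position
    where the pPp raypath to receiver j descends through the seafloor."""
    kept = []
    for i, j in combinations(range(len(receiver_x)), 2):
        if abs(crossing_x[i][j] - receiver_x[i]) < THRESHOLD_KM:
            kept.append((i, j))
    return kept

# toy example: 4 receivers (positions in km) and fabricated crossing points
rx = [0.0, 1.0, 2.0, 3.0]
crossings = {0: {1: 0.4, 2: 5.0, 3: 0.9},
             1: {2: 3.2, 3: 2.5},
             2: {3: 4.0}}
print(select_stationary_pairs(crossings, rx))

# for the 251-receiver simulation, the full candidate set has comb(251, 2) traces
print(comb(251, 2))
```

Tightening the threshold trades completeness of the stack against contamination by non-stationary pairs, which is the Fresnel-zone trade-off discussed below.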
We fix the receiver that observes pP (denoted RpP) and calculate the raypath of pPp for all receivers (RpPpi). We then calculate the horizontal distance between the receiver RpP and the point where pPp passes downward through the seafloor (Figure 7). We refer to this distance as the interferometric distance (Figure 4). An interferometric distance of zero indicates that the two events have a common raypath between the buried source and the receiver RpP. Owing to the receiver spacing, the interferometric distance is not always zero (Figure 7), so we define a threshold for it. The receivers RpPpi with an interferometric distance less than the threshold value (1 km in this case) are assumed to have a common raypath. This threshold corresponds to choosing the size of the first Fresnel zone around the stationary points. Note that the interferometric distance projected from the first Fresnel zone depends on the reflector position and the receiver geometry. We iterate this procedure over the fixed receiver RpP and thereby evaluate all combinations of crosscorrelation traces.

Figure 7 Example of the interferometric distance for different RpPpi with RpP fixed. A receiver RpPpi with an interferometric distance less than the threshold value (1 km) is defined as forming a stationary phase pair.

We remove the crosscorrelation traces that do not have a stationary phase source for the objective reflections and apply PSDM to the remaining 2489 traces. Figure 8 shows the resulting subsurface image. The signal-to-noise ratio at near offsets is improved, and the artifact events are suppressed. We conclude that evaluating the stationary phase by raypath calculation and removing unwanted crosscorrelation traces improves reflection images in SI.

Figure 8 PSDM imaging result using the crosscorrelation traces selected by stationary phase evaluation. This result is much improved compared with Figure 6.

## 4. Extracting Reflected Waves of the Oceanic Crust Surface from OBS Records in the Nankai Trough

We apply this analysis to field passive-seismic data: OBS records from an array deployed in the Nankai Trough to observe local earthquakes. This dataset was originally obtained by JAMSTEC [11, 12] for earthquake observation, including aftershocks of the September 2004 intraplate earthquake that ruptured around this survey area [20]. The 28 OBS stations are three-dimensionally deployed over an area of approximately 100 km square (Figure 9(a)). The recording period was approximately 3 months from March 2005, and we used the data from 12 days in March. We extract the reflection events generated at the oceanic crust surface using crosscorrelation. Of the 653 local earthquakes detected in this period, we selected the 6 earthquakes with magnitude larger than 3.0 and depth greater than 20 km (Figure 9(b)), because these earthquakes carry strong energy. We chose deep earthquakes because the P-wave energy dominates the vertical component of the records and the S-wave arrivals are late enough to be separated from the P-waves. These earthquakes are localized to the southwest of the survey area (Figure 9(a)).

Figure 9 (a) Survey area in the Nankai Trough. The three seismic survey lines KR9806, IL, and S3 are those of Figure 9(b) [8], Figure 11(a) [9], and Figure 13(c) [10], respectively. (b) Depths of the OBS stations and earthquakes projected onto line KR9806. The background velocity contours are derived from refraction tomography [8]. The hypocenters of the 6 earthquakes are localized around the trough axis.

We show the 168 traces from the 6 events, aligned by epicentral distance with the origin time corrected to the earthquake nucleation time (Figure 10). The dominant frequency was ~5 Hz. Two events with different propagation velocities can be seen on the seismic record.
Since the first arrivals dominate the vertical component (Figure 10), we interpreted them as P-waves and the second arrivals as direct S-waves. We muted the S-waves and the subsequent records in order to analyze only the P-wave events.

Figure 10 Recorded signals of the 168 traces from the 6 events, aligned by epicentral distance after correcting the origin time to the earthquake time.

Figure 11(a) shows the reflection profile across the Nankai Trough (Moore et al. [9]; Line IL in Figure 9(a)). The oceanic crust surface in this area lies at 7 km to 10 km depth and dips in the landward direction (dashed line in Figure 11(a)). We extended this structure (landward-dipping with an angle of 5.5 degrees) perpendicular to the 2D cross-line (Line IL in Figure 9(a)) to obtain the 3D dipping structure used for the raypath calculation (Figure 11(b)).

Figure 11 (a) 2D reflection profile across the Nankai Trough (Line IL in Figure 9(a), modified after Moore et al. [9]). The dashed line shows the 5.5-degree dip. (b) Constructed 3D dipping structure for the raypath calculation.

We estimate the raypaths of the sea-surface reflection (pP) and the sea-surface multiple reflection (pPp) as in the simulation study. To account for the different elevations of the OBS stations, we define the interferometric distance as the distance between the receiver RpP and the point where pPp passes downward through the horizontal plane containing the receiver RpP. We set the threshold of the interferometric distance to 5 km. Removing the crosscorrelation traces without stationary phase sources left 55 crosscorrelation traces. Since we calculate the raypaths, we can also estimate the reflection points on the given structure (Figure 12).
Owing to the source localization, only a subset of the receiver combinations is selected.

Figure 12 Reflection points estimated by raypath calculation (black dots).

We applied 3D PSDM to the crosscorrelation traces. We extended the tomographic velocity model estimated by Nakanishi et al. [8] perpendicular to its survey line (Line KR9806 in Figure 9(a)) and used it as the velocity model for PSDM. The expected reflection points are sparsely distributed in the 3D region (Figure 12); we therefore spatially stacked the 3D imaging result to obtain a pseudo-2D profile.

We show the pseudo-2D profile projected onto Line KR9806 (Figure 13(a)), which runs perpendicular to the trough axis. The oceanic crust surface is imaged at the same depth as in the active-source reflection profile of Line IL (arrows in Figure 13(a)). For comparison, we show the PSDM result using all crosscorrelation traces projected onto Line KR9806 (Figure 13(b)). Since the oceanic crust surface is difficult to detect in that profile (Figure 13(b)), the stationary phase evaluation clearly improves the quality of the imaging result (Figure 13(a)).

Figure 13 PSDM imaging results. (a) Stacked profile projected onto Line KR9806 (Figure 9(a)), running perpendicular to the trough axis. The seismic profile from Moore et al. [9] is overlaid. Arrows show the imaged oceanic crust surface. (b) Stacked profile projected onto Line KR9806 using all crosscorrelation traces. (c) Stacked profile projected onto Line S3 (Figure 9(a)), running parallel to the trough axis. The seismic profile from Park et al. [10] is overlaid.

We show the pseudo-2D profile projected onto Line S3 (Figure 13(c)), which runs parallel to the trough axis. We convert the depth axis of our imaging result to a time axis using the migration velocity model.
The imaged oceanic crust surface (arrows in Figure 13(c)) is discontinuous around a horizontal distance of 20 km because of the migration aperture and the sparse distribution of the reflection points (Figure 12). Nevertheless, we observe dominant amplitudes at the same two-way time as the oceanic crust surface in Line S3 [10], as well as the local bulge of the oceanic crust surface.

## 5. Conclusion

We use the stationary phase interpretation to obtain high-quality imaging results under a localized source distribution. We estimate the raypaths of two reflection events, the sea-surface P-wave reflection and the sea-surface multiple P-wave reflection, and select the crosscorrelation traces that are expected to produce the objective reflections through stationary phase sources, using the estimated raypaths. A numerical modeling study confirms the validity of this method. We further apply it to Ocean Bottom Seismogram (OBS) records of localized earthquakes and show that selecting the crosscorrelation traces by stationary phase evaluation improves the quality of the imaged reflection boundary at the oceanic crust surface. This processing technique offers the possibility of monitoring the Nankai seismogenic fault without active sources and at higher resolution than with teleseismic records.

---

*Source: 101545-2012-06-12.xml*
2012
# Research on Distribution of Flow Field and Simulation of Working Pulsation Based on Rotating-Sleeve Distributing-Flow System

**Authors:** Yanjun Zhang; Hongxin Zhang; Jingzhou Yang; Qinghai Zhao; Xiaotian Jiang; Qianchang Cheng; Qingsong Hua

**Journal:** Modelling and Simulation in Engineering (2017)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2017/1015494

---

## Abstract

To solve the problems of leakage, vibration, and noise caused by disordered flow field distribution and working pulsation in the rotating-sleeve distributing-flow system, governing equations of the plunger and rotating sleeve and a computational fluid dynamics (CFD) model are developed, using sliding mesh and dynamic mesh technology, to simulate the flow field and working pulsation. Simulation results show the following issues: clearly periodic fluctuation with sharp corners in the flow pulsation, backward flow when the fluid transitions between discharge and suction, and serious turbulence with large kinetic energy loss around the damping groove during the transitional movements. The pressure in the pump chamber rises rapidly to 2.2 MPa, more than 10% above the nominal pressure, when the plunger is at the Top Dead Center (TDC), reflecting the changes in the damping groove's position and flow area during the two transitional movements; shortly afterwards, the pressure overshoot gradually decreases to the normal condition as the flow area increases. Similarly, the pressure in the pump chamber drops instantaneously to the saturated vapor pressure, −98.9 kPa (gauge), when the plunger is at the Bottom Dead Center (BDC); as the flow area increases, the pressure gradually recovers to the normal condition. This research provides a foundation for investigating the flow field characteristics and structural optimization of the rotating-sleeve distributing-flow system.

---

## Body

## 1. Introduction

A distributing-flow system, the most important component of a hydraulic system, is widely used in the fluid power industry because of its robustness, controllability, and wide operating range. However, distributing-flow systems that control flow by valves have many disadvantages, including bulky mass, large pressure loss, noise, and unsuitability for high-frequency operation, and flow pulsation can easily cause noise and vibration [1]. To solve these issues, a novel rotating-sleeve distributing-flow system is proposed, which uses the reciprocating motion of a plunger and the unidirectional rotation of a rotating sleeve to achieve the flow-distribution function. It offers reliable sealing, high efficiency, and little sensitivity to working frequency. To further improve its performance, the flow characteristics of the rotating-sleeve distributing-flow system should be investigated theoretically, so that damage to the pump, such as undesirable noise, vibration, and cavitation, which can even reduce the working reliability of the pump, can be efficiently reduced [2–4]. Flow ripple, a significant characteristic of piston pumps, is closely related to the pressure fluctuation, backward flow, and noise of the inner fluid. Noise in pumps can be effectively reduced by taking the factors influencing the flow ripple into account and addressing the flow ripple through structural optimization [5]. Dhananchezhiyan and Hiremath [6] reported that the flow ripple is associated with pressure pulsation at various drive frequencies of micro pumps. Hence, intensive studies of the flow ripple and pressure pulsations are necessary for a better understanding of the flow process [7]. Recently, various analytic and simulation methods have been extensively applied to the distribution of the flow field and working pulsation. In particular, CFD simulation is widely utilized in fluid-field and hydraulic research. Luo et al.
[8] developed an adiabatic dehumidifier model using CFD technology and simulated the interior heat and mass transfer processes. Ma et al. [9] utilized a new CFD model with a user-defined function to simulate a pump's fluid characteristics and predict the flow ripple; the flow ripple was also tested under different working parameters such as rotation speed and working pressure. Delele et al. [10] developed a CFD model based on an Eulerian-Eulerian multiphase approach that can predict the fluid flow profile and study the effects of drum rotational speed; they used experimental measurements of particle and fluid velocities and residence time to verify the model. Sliding mesh and dynamic mesh technology have been continually employed in CFD models to extend their simulation capabilities and apply varied motions flexibly. Wang [11] utilized a dynamic CFD model, with compressible fluid and nine pistons, to simulate cavitation in an axial piston pump. Guo et al. [12] used the sliding mesh method of Fluent® to simulate the dynamic behavior during ball valve closure. In addition, Vitagliano et al. [13] used a sliding mesh generated in zones of the flow field and connected by sliding boundaries, under different rotation speeds, to simulate unsteady flows with surfaces in relative motion. Some researchers have found effective ways to reduce working pulsation and thereby decrease noise. Lee et al. [14, 15] investigated the computed time-accurate pressure field and the loss-generation process to establish the causal link to the induced flow ripple in a turbine system. Alves et al. [16] found that combining CFD modeling and analytical techniques is an effective way to predict the oil flow rate in an eccentric-tube centrifugal oil pumping system. The research group led by Palmberg set a precompression volume in the valve plate between the discharge kidney slot and the suction kidney slot [17].
They found that a small precompression volume can markedly lessen the problems of noise, pulsation, and hydraulic impact [18, 19]. An axial piston pump valve plate adopting a prepressurization fluid path, consisting of a damping hole, a buffer chamber, and an orifice, can also reduce the flow ripple to some extent [20]. This paper aims to reduce vibration and noise by developing a novel rotating-sleeve distributing-flow system and analyzing the relationship between turbulence energy, velocity, and working pulsation through CFD simulation. In addition, the influences of backward flow, flow pulsation, and pressure fluctuation are investigated. A complete simulation model with the relevant parameters has been established, and the flow characteristics can be explicitly described. Furthermore, this work provides a theoretical foundation for the structural optimization of the distributing-flow system and the improvement of its performance.

## 2. Operating Principle of the Rotating-Sleeve Distributing-Flow System

In this section, the novel rotating-sleeve distributing-flow system is described and analyzed. The plunger, driven by the crank-link mechanism, performs a coupled reciprocating movement and uses a drive pin to transmit force to the rotating sleeve along the cam groove path, producing a unidirectional rotating movement. The drive pin moves while rolling along the cam groove molded line, which is obtained by fitting a line equation through the quadratic differential, the rotating angle of the rotating sleeve, and the crank angle.
Figure 1 shows the operating principle and components of the rotating-sleeve distributing-flow system, designed for mass flow with high frequency and efficiency.

Figure 1 Structural principle of the rotating-sleeve distributing-flow system. (1) Cam groove; (2) loading chamber; (3) inlet; (4) valve port; (5) pump chamber; (6) damping groove; (7) plunger; (8) blind flange; (9) pump body; (10) collecting chamber; (11) outlet; (12) drive pin; (13) compression spring; (14) rotating sleeve.

In this system there are two major movements: the axial reciprocating movement of the plunger and the unidirectional rotating movement of the rotating sleeve. The plunger reciprocates, powered by the crankshaft and connecting rod mechanism via a cross-slider connector. The displacement of the plunger in reciprocating motion is

(1) x = r[(1 − cos φ) + (λ/2) sin²φ],

where r is the radius of the bent axle, φ is the crank angle of the crankshaft and connecting rod mechanism, λ = r/l is the ratio of the crank to the connecting link, and l is the link length.

Since φ = ωt, the derivative of the plunger displacement x with respect to time t is

(2) u = r·ω·(sin φ + (λ/2) sin 2φ),

where ω is the angular velocity of the bent axle. The angular velocity and acceleration of the rotating sleeve have no obvious phase step or inflection point when a sine molded line is selected for the cam groove.
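Equations (1) and (2) can be cross-checked numerically with the parameter values given in Table 1 (a minimal Python sketch; only r, λ, and the crankshaft speed are taken from the text):

```python
import math

R = 0.03                  # crank radius r (m), Table 1
LAM = 0.25                # lambda = r / l, Table 1
OMEGA = 2.0 * math.pi * 150.0 / 60.0   # crankshaft angular velocity (rad/s)

def displacement(phi):
    """Eq. (1): x = r[(1 - cos phi) + (lambda/2) sin^2 phi]."""
    return R * ((1.0 - math.cos(phi)) + 0.5 * LAM * math.sin(phi) ** 2)

def velocity(phi):
    """Eq. (2): u = r omega (sin phi + (lambda/2) sin 2 phi)."""
    return R * OMEGA * (math.sin(phi) + 0.5 * LAM * math.sin(2.0 * phi))

# consistency check: u = omega * dx/dphi (central finite difference)
phi, h = 1.0, 1e-6
fd = OMEGA * (displacement(phi + h) - displacement(phi - h)) / (2.0 * h)
print(abs(fd - velocity(phi)) < 1e-6)    # the two forms agree

# the plunger is momentarily at rest at both dead centers,
# and the full stroke is S = x(pi) = 2r
print(velocity(0.0), round(displacement(math.pi), 6))
```

The finite-difference agreement confirms that (2) is the time derivative of (1), and x(π) = 2r recovers the plunger stroke S used in the cam groove equation below.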
Therefore, the relationship between the cam groove's axial displacement and the rotating-sleeve angle is

$$z(\theta) = \frac{S}{2}\left(1 - \cos\theta\right), \quad 0 \le \theta \le \pi, \tag{3}$$

where $S$ is the plunger stroke and $\theta$ is the rotating-sleeve angle about the central axis.

Substituting (1) into (3), the rotating-sleeve angle $\theta$ as a function of the crank angle $\varphi$ is

$$\theta = \begin{cases} \arccos\left(\cos\varphi - \dfrac{\lambda}{2}\sin^2\varphi\right), & 0 \le \varphi \le \pi, \\ 2\pi - \arccos\left(\cos\varphi - \dfrac{\lambda}{2}\sin^2\varphi\right), & \pi \le \varphi \le 2\pi. \end{cases} \tag{4}$$

Taking the derivative of (3) with respect to $t$, the angular velocity of the rotating sleeve is

$$\omega_t = \pm\frac{\omega}{\sqrt{1 - C_0^2}}\left(\sin\varphi + \frac{\lambda}{2}\sin 2\varphi\right), \tag{5}$$

where $\omega_t$ is positive during $0 \le \varphi < \pi$ and negative during $\pi \le \varphi < 2\pi$, and $C_0 = \cos\varphi - 0.5\lambda\sin^2\varphi$.

## 3. Models and Methods

### 3.1. Fluid Model

In light of the structure and operating principle of the rotating-sleeve distributing-flow system, the fluid model is established as shown in Figure 2; it comprises the main components: loading chamber, inlet, pump chamber, collecting chamber, outlet, and rotating sleeve.

Figure 2: Fluid model of rotating-sleeve distributing-flow system. (1) Loading chamber; (2) inlet; (3) pump chamber; (4) collecting chamber; (5) outlet; (6) valve port.

In this paper, the fluid model is simulated and analyzed with the fluid simulation software Fluent®. The motions of the plunger and the rotating sleeve are defined in the fluid model by User Defined Functions. The standard k-ε turbulence model and the SIMPLE algorithm are applied in the simulation settings. Sliding-mesh and dynamic-mesh technologies are also used in the fluid model according to its specific motion characteristics. The model parameters and boundary conditions are listed in Table 1.

Table 1: The parameters of the distributing-flow system.

| Parameter | Value | Unit |
| --- | --- | --- |
| Radius of bent axle | 0.03 | m |
| Ratio of crank and connecting link | 0.25 | — |
| Crankshaft speed $n$ | 150 | r/min |
| Water density | 998 | kg/m³ |
| Water viscosity | 0.001003 | Pa·s |
| Inlet pressure | 0.1 | MPa |
| Outlet pressure | 2 | MPa |
| Saturated vapor pressure | 2339 | Pa |
| Time step | 0.001 | s |

### 3.2. Cavitation Model

The cavitation model is based on the Navier-Stokes equations with variable density and standard viscosity in hydromechanics. In this paper, taking viscosity and turbulence into consideration and gas-liquid two-phase flow as the object of study, the transport equation for the gaseous mass content is given as follows [21–24]:

$$\frac{\partial}{\partial t}(\rho f) + \nabla\cdot(\rho \vec{V} f) = \nabla\cdot(\Gamma\nabla f) + R_e + R_c, \tag{6}$$

where $\rho$ is the average density of the gas-liquid mixture, $f$ is the gaseous mass content, $R_e$ and $R_c$ denote the rates of bubble generation and collapse, respectively, $\vec{V}$ denotes the average velocity of the gaseous phase in the two-phase flow, and $\Gamma$ denotes the effective transfer coefficient.

Based on the Rayleigh-Plesset bubble dynamic equation, with the surface tension term and the second-derivative term neglected, the bubble dynamics can be described by the variation of the bubble radius:

$$\frac{dR_B}{dt} = \sqrt{\frac{2\,|p_v - p|}{3\rho_l}}\;\operatorname{sgn}(p_v - p), \qquad \frac{1}{\rho} = \frac{f}{\rho_v} + \frac{1 - f}{\rho_l}, \qquad p_v = p_{sat} + p_{turb}, \tag{7}$$

where $R_B$ is the bubble radius, $p$ is the pressure, $p_v$ is the critical pressure of the gas, $p_{sat}$ is the saturation pressure of the gas, $p_{turb}$ is the pressure caused by turbulence, $\rho_v$ is the gaseous density, and $\rho_l$ is the average liquid density.

Combining the mass transport equation with the continuity equation, the relationship between the mixture density and the volume fraction is

$$\frac{d\rho}{dt} = (\rho_v - \rho_l)\frac{d\alpha}{dt}, \tag{8}$$

where $\alpha$ is the gas volume fraction.

If there are $n$ bubbles per unit volume, the gas volume fraction in terms of the bubble radius is

$$\alpha = \frac{4}{3}\pi n R_B^3. \tag{9}$$

Substituting (9) into (8), the relationship between the mixture density and the bubble dynamics is

$$\frac{d\rho}{dt} = (\rho_v - \rho_l)\,(36\pi n)^{1/3}\,\alpha^{2/3}\,\frac{dR_B}{dt}. \tag{10}$$

From the above equations, the rates of bubble collapse and generation can be written, respectively, as

$$R_c = \frac{3\alpha\,\rho_v\rho_l}{R_B\,\rho}\sqrt{\frac{2(p_v - p)}{3\rho_l}}, \qquad R_e = \frac{3(1 - \alpha)\,\rho_v\rho_l}{R_B\,\rho}\sqrt{\frac{2(p_v - p)}{3\rho_l}}. \tag{11}$$

The gas volume fraction is proportional to the average velocity, and the average velocity can be characterized by the turbulence energy.
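The bubble-radius rate (7) and the mixture-density rate (10) can be sketched numerically; the snippet below is illustrative only (the vapour density value is an assumption, not given in the paper), and it verifies that (10) is algebraically consistent with differentiating (9) and substituting into (8):

```python
import math

RHO_L = 998.0   # liquid density, kg/m^3 (Table 1)
RHO_V = 0.5542  # vapour density, kg/m^3 -- illustrative assumption, not from the paper

def bubble_radius_rate(p, p_v):
    """Eq. (7): dR_B/dt = sgn(p_v - p) * sqrt(2|p_v - p| / (3 rho_l))."""
    return math.copysign(math.sqrt(2.0 * abs(p_v - p) / (3.0 * RHO_L)), p_v - p)

def vapour_volume_fraction(n_bubbles, r_b):
    """Eq. (9): alpha = (4/3) * pi * n * R_B^3."""
    return 4.0 * math.pi * n_bubbles * r_b ** 3 / 3.0

def mixture_density_rate(n_bubbles, alpha, drb_dt):
    """Eq. (10): drho/dt = (rho_v - rho_l) * (36 pi n)^(1/3) * alpha^(2/3) * dR_B/dt."""
    return (RHO_V - RHO_L) * (36.0 * math.pi * n_bubbles) ** (1 / 3) * alpha ** (2 / 3) * drb_dt

# Consistency check: (10) must equal (rho_v - rho_l) * d(alpha)/dt from (9),
# i.e. (rho_v - rho_l) * 4 pi n R_B^2 * dR_B/dt.
n, r_b = 1.0e12, 1.0e-5
drb = bubble_radius_rate(p=2339.0 - 1000.0, p_v=2339.0)  # p below p_v: bubble grows
alpha = vapour_volume_fraction(n, r_b)
direct = (RHO_V - RHO_L) * 4.0 * math.pi * n * r_b ** 2 * drb
assert abs(mixture_density_rate(n, alpha, drb) - direct) / abs(direct) < 1e-9
```

The identity holds because $(36\pi n)^{1/3}\alpha^{2/3} = 4\pi n R_B^2$ exactly when $\alpha$ is given by (9).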
When the surface tension coefficient of the bubble is introduced, the rates of bubble collapse and generation can also be written, respectively, as

$$R_c = C_c\,f\,\frac{\rho_v\rho_l}{\sigma}\sqrt{\frac{2K(p_v - p)}{3\rho_l}}, \qquad R_e = C_e\,(1 - f)\,\frac{\rho_v\rho_l}{\sigma}\sqrt{\frac{2K(p_v - p)}{3\rho_l}}, \tag{12}$$

where $C_c$ and $C_e$ are empirical constants, $C_c = 0.02$ and $C_e = 0.01$, $K$ is the turbulence energy, and $\sigma$ is the surface tension coefficient of the bubble.

### 3.3. Turbulence Model

Turbulence energy represents turbulent fluctuation and directly reflects the dissipation and stability of the fluid flow. Areas with larger turbulence energy lose more kinetic energy and are more unstable [16]. The standard k-ε turbulence model based on the Reynolds-averaged Navier-Stokes equations is applied in our simulation, with the dissipation rate of turbulence energy defined as follows [22, 23]:

$$\varepsilon = \frac{\mu}{\rho}\,\overline{\frac{\partial u_i'}{\partial x_k}\frac{\partial u_i'}{\partial x_k}}, \tag{13}$$

where the turbulent viscosity $\mu_t$ is a function of the fundamental unknowns, the turbulent kinetic energy $k$ and the dissipation rate $\varepsilon$:

$$\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}. \tag{14}$$

For incompressible fluid, the corresponding transport equations are

$$\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k - \rho\varepsilon,$$

$$\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial\varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}G_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}, \tag{15}$$

where $G_k$ is the production of turbulence energy due to the mean velocity gradients:

$$G_k = \mu_t\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\frac{\partial u_i}{\partial x_j}. \tag{16}$$

Here $C_{1\varepsilon}$, $C_{2\varepsilon}$, and $C_\mu$ are empirical constants ($C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $C_\mu = 0.09$), and $\sigma_k$ and $\sigma_\varepsilon$ are the Prandtl numbers corresponding to the turbulence energy and dissipation rate, respectively ($\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$).
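Given $k$ and $\varepsilon$, the closure quantities (14) and (16) of the standard k-ε model above follow directly; a minimal sketch (illustrative Python, with the water density from Table 1 and a hypothetical pure-shear velocity gradient):

```python
def turbulent_viscosity(k, eps, rho=998.0, c_mu=0.09):
    """Eq. (14): mu_t = rho * C_mu * k^2 / eps."""
    return rho * c_mu * k ** 2 / eps

def production_term(mu_t, grad_u):
    """Eq. (16): G_k = mu_t * (du_i/dx_j + du_j/dx_i) * du_i/dx_j, summed over i, j.

    grad_u[i][j] holds du_i/dx_j for a square velocity-gradient tensor."""
    n = len(grad_u)
    return mu_t * sum(
        (grad_u[i][j] + grad_u[j][i]) * grad_u[i][j]
        for i in range(n) for j in range(n)
    )

# Illustrative values: k = 0.1 m^2/s^2, eps = 1.0 m^2/s^3.
mu_t = turbulent_viscosity(0.1, 1.0)
assert abs(mu_t - 998.0 * 0.09 * 0.01) < 1e-9  # ~0.8982 Pa*s

# Pure shear du/dy = 10 1/s gives G_k = mu_t * (du/dy)^2.
assert abs(production_term(mu_t, [[0.0, 10.0], [0.0, 0.0]]) - mu_t * 100.0) < 1e-9
```

In the CFD run itself these quantities are of course evaluated by the solver; the sketch only makes the algebra of the closure explicit.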
## 4. Results and Discussion

### 4.1. Distribution of Turbulence Energy

The distribution of turbulence energy over one working cycle is shown in Figure 3, at rotating-sleeve angles of 180° and 360°, respectively. Evident highlighted regions appear in specific areas, indicating that the turbulence energy increases there.
As shown in Figure 3, the angles of 180° and 360° are exactly where the interconversion between discharge and suction happens. Therefore, the flow field around the highlighted regions is seriously unstable and causes much loss of kinetic energy.

Figure 3: Distribution of turbulence energy for one working cycle.

### 4.2. Velocity Distribution

The velocity distribution in the X direction for the distributing-flow system within one working cycle is shown in Figure 4. Figure 4 shows that, at a rotating-sleeve angle of 10°, the velocity gradient and velocity magnitude increase; at the same time, the turbulence energy increases, as shown in Figure 3. Additionally, at rotating-sleeve angles of 180° and 360°, respectively, the flow velocity is negative around the damping groove, where the turbulence energy increases. This demonstrates that the flow condition around the damping groove is unstable.

Figure 4: Velocity distribution in the X direction for one working cycle.

Figure 5 shows the velocity vector distribution around the damping groove when the rotating sleeve switches from discharge to suction. At this moment, the plunger descends from the Top Dead Center (TDC); the loading chamber separates from the valve port in a critical state; and there is subtle streaming of fluid caused by the inertia effect in the loading chamber, as shown in Figure 5(a). In addition, the damping groove instantaneously connects with the collecting chamber and the valve port, as shown in Figure 5(b).
The simulation results in Figure 5(b) show that a transitory backward flow passes from the collecting chamber into the valve port and causes dramatic turbulence around the damping groove, owing to the high pressure in the pump chamber and the low pressure in the collecting chamber (the pressure difference is about 1.9 MPa); this pressure difference drives backward flow from the collecting chamber into the pump chamber and reduces the volumetric efficiency.

Figure 5: Local velocity vector distribution with rotating-sleeve angle of 180°.

Figure 6 shows the local velocity vector distribution when the rotating sleeve switches from suction to discharge. At this moment, the plunger ascends from the Bottom Dead Center (BDC), and the damping groove momentarily connects with the loading chamber and the valve port, as shown in Figure 6(a). Figure 6(b) shows that a transient backward flow passes from the valve port into the loading chamber, causing dramatic turbulence around the damping groove, owing to the high pressure in the valve port and the low pressure in the loading chamber. At the same time, the pressure difference drives backward flow from the pump chamber into the loading chamber, which can seriously reduce the volumetric efficiency; the loading chamber separates from the valve port in a critical state; and there is subtle streaming caused by the inertia effect in the loading chamber.

Figure 6: Local velocity vector distribution with rotating-sleeve angle of 360°.

### 4.3. Analyses on Working Pulsation

#### 4.3.1. Flow Pulsation of Different Valve Port Structures

Figure 7 shows the mesh of the distributing-flow system, generated as a structured mesh. The sliding-grid option provides grid interfaces for the sliding surfaces between the static inlet/outlet part of the grid and the outer part of the valve port, and between the inner part of the valve port and the static part of the pump chamber.
The piston movement is modeled by a dynamic grid with a moving and deforming mesh for the pump chamber part. In the grid, the maximum global mesh size parameter is 1, different parts have different local mesh parameters, and the total number of cells is 397,602.

Figure 7: Fluid model grid of distributing-flow system.

Flow pulsation is a source of noise, pressure pulsation, and vibration in plunger pumps. It can have adverse impacts on working parts, especially in precise hydraulic systems. Hence the flow pulsation is the most meaningful fluid characteristic of the distributing-flow system. Figure 8 shows that the outlet flow fluctuates periodically with a maximum flow of about 5 × 10⁻⁴ m³/s, and that backward flow appears at the end of the suction and discharge processes. These observations are consistent with the findings in Figures 5 and 6. Moreover, an instantaneous backward flow of about 2.2 × 10⁻⁴ m³/s appears in the left zoomed-in plot corresponding to the TDC in Figure 9, which covers two working cycles. Similarly, an instantaneous backward flow of about 2.6 × 10⁻⁴ m³/s appears in the right zoomed-in plot corresponding to the BDC in Figure 9.

Figure 8: The characteristic of outlet flow pulsation.

Figure 9: The characteristic of local outlet flow pulsation.

#### 4.3.2. Pressure Pulsation of Different Valve Port Structures

Phenomena such as backward flow, local cavitation, and erosion can easily cause pressure pulsation in the inner flow. Once pressure pulsation happens, it can lead to intensive vibration of the pump, cavitation, and even resonance. Figure 10 shows periodic fluctuation and distinct sharp corners at the dead centers of the pressure pulsations.
Figure 11 presents zoomed-in plots of two working cycles from Figure 10. The left plot shows that the pressure in the pump chamber quickly rises to 2.2 × 10⁶ Pa, more than 10% above the nominal pressure, when the plunger is at the TDC, on account of the throttling action of the damping groove; the pressure overshoot then gradually returns to the normal condition as the flow area increases. Likewise, because of the throttling action of the damping groove, the pressure in the pump chamber abruptly drops to about the saturated vapor pressure of the fluid, −98.9 kPa, when the plunger is at the BDC; the undershoot then gradually recovers to the normal condition as the flow area increases.

Figure 10: The characteristic of pressure in the pump chamber.

Figure 11: The characteristic of local pressure in the pump chamber.
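The overshoot figures quoted above can be cross-checked against the boundary conditions in Table 1; a trivial back-of-the-envelope check (illustrative only, assuming the −98.9 kPa reading is a gauge pressure under a standard atmosphere):

```python
P_NOMINAL = 2.0e6  # outlet (nominal) pressure, Pa (Table 1)
P_PEAK = 2.2e6     # peak pump-chamber pressure at the TDC, Pa (Figure 11)
P_SAT = 2339.0     # saturated vapour pressure, Pa (Table 1)
P_ATM = 101325.0   # standard atmosphere, Pa (assumed reference for gauge readings)

# TDC overshoot: (2.2 - 2.0) MPa over a 2.0 MPa nominal pressure is exactly 10%.
overshoot = (P_PEAK - P_NOMINAL) / P_NOMINAL
assert abs(overshoot - 0.10) < 1e-12

# BDC: -98.9 kPa gauge is ~2.4 kPa absolute, i.e. close to the saturated
# vapour pressure, which is consistent with incipient cavitation.
p_abs_bdc = P_ATM - 98.9e3
assert abs(p_abs_bdc - P_SAT) < 200.0
```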
## 5. Conclusions

(1) According to the characteristics of the novel rotating-sleeve distributing-flow system, the governing equations of the plunger and rotating sleeve are established to obtain the dynamic model of the distributing-flow system. Using sliding-mesh and dynamic-mesh technology, an improved CFD model that takes cavitation and turbulence into consideration is employed to simulate the flow field and working pulsation.

(2) Simulations of the rotating-sleeve distributing-flow system have been conducted based on the governing equations and the improved CFD model. Primary performance parameters such as the turbulence energy distribution, velocity distribution, and working pulsation have been investigated, and the relationships between these performance parameters have been studied.

(3) Periodic fluctuation and sharp corners exist in the flow pulsation, in addition to backward flow between discharge and suction, and there is serious turbulence and large kinetic-energy loss around the damping groove. Moreover, noticeable periodic fluctuation and sharp corners appear in the pressure pulsation: the pressure in the pump chamber rapidly rises to 2.2 MPa, more than 10% above the nominal pressure, when the plunger is at the TDC, and instantaneously drops to the saturated vapor pressure of −98.9 kPa when the plunger is at the BDC.
For the rest of the cycle, the pressure overshoot gradually stabilizes and approaches the normal condition as the flow area increases. The flow field and working pulsation simulations can provide foundations for structure optimization.

---
*Source: 1015494-2017-11-01.xml*
# Research on Distribution of Flow Field and Simulation of Working Pulsation Based on Rotating-Sleeve Distributing-Flow System

**Authors:** Yanjun Zhang; Hongxin Zhang; Jingzhou Yang; Qinghai Zhao; Xiaotian Jiang; Qianchang Cheng; Qingsong Hua

**Journal:** Modelling and Simulation in Engineering (2017)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2017/1015494
---

## Abstract

To solve the problems of leakage, vibration, and noise caused by disorder in the flow field distribution and working pulsation of the rotating-sleeve distributing-flow system, the governing equations of the plunger and rotating sleeve and a computational fluid dynamics (CFD) model are developed, using sliding-mesh and dynamic-mesh technology, to simulate the flow field and working pulsation. The simulation results show the following issues: obvious periodic fluctuation and sharp corners in the flow pulsation; backward flow when the fluid switches between discharge and suction; and serious turbulence and large kinetic-energy loss around the damping groove during the transitional movements. The pressure in the pump chamber rapidly rises to 2.2 MPa, more than 10% above the nominal pressure, when the plunger is at the Top Dead Center (TDC), reflecting the changes in the damping groove's position and flow area during the two transitional movements; shortly afterwards the pressure overshoot gradually decreases to the normal condition as the flow area increases. Similarly, the pressure in the pump chamber instantaneously drops to the saturated vapor pressure of −98.9 kPa when the plunger is at the Bottom Dead Center (BDC); with increasing flow area the undershoot gradually recovers to the normal condition. This research provides foundations for investigating the flow field characteristics and structure optimization of the rotating-sleeve distributing-flow system.

---

## Body

## 1. Introduction

A distributing-flow system, the most important component of a hydraulic system, is widely used in the fluid power industry because of its robustness, controllability, and wide operating range. However, a valve-controlled distributing-flow system has many disadvantages, including bulky mass, large pressure loss, noise, and unsuitability for high frequency, which can easily cause noise and vibration on account of flow pulsation [1].
To solve these issues, the novel rotating-sleeve distributing-flow system is proposed, which uses the reciprocating motion of the plunger and the single-track rotation of the rotating sleeve to achieve the distribution function. It offers a reliable seal, high efficiency, and little sensitivity to working frequency. To further improve its performance, the flow characteristics of the rotating-sleeve distributing-flow system should be investigated theoretically, so that damage to the pump, such as undesirable noise, vibration, and cavitation, and even reduced working reliability, can be efficiently avoided [2–4].

Flow ripple, a significant characteristic of piston pumps, is closely related to the pressure fluctuation, backward flow, and noise of the inner fluid. Noise in pumps can be effectively reduced by taking the factors influencing the flow ripple into account and focusing on the flow ripple through structure optimization [5]. Dhananchezhiyan and Hiremath [6] reported that the flow ripple is associated with pressure pulsation at various drive frequencies of micro pumps. Hence, intensive studies of the flow ripple and the pressure pulsations are necessary for a better understanding of the flow process [7].

Recently, various analytic and simulation methods have been extensively applied to the distribution of the flow field and working pulsation. In particular, CFD simulation is widely used in fluid field and hydraulic research. Luo et al. [8] developed an adiabatic dehumidifier model using CFD technology and simulated the interior heat and mass transfer processes. Ma et al. [9] used a new CFD model with user-defined functions to simulate a pump's fluid characteristics and predict the flow ripple; at the same time, the flow ripple was tested under different working parameters such as rotation speed and working pressure. Delele et al.
[10] conducted research on a CFD model based on an Eulerian-Eulerian multiphase approach, which can predict the fluid flow profile, and studied the effects of drum rotational speed; they used experimental results of particle and fluid velocities and residence time to verify the model simulation.

Sliding-mesh and dynamic-grid technologies have been continually employed in CFD models to improve their simulation capability and to apply varied motions to the model flexibly. Wang [11] used a dynamic CFD model, a compressible fluid pump model with nine pistons, to simulate cavitation in an axial piston pump. Guo et al. [12] studied the sliding-mesh method of Fluent® to simulate the dynamic behavior during ball valve closure. In addition, Vitagliano et al. [13] ran a sliding mesh, generated in zones of the flow field and connected with sliding boundaries, under different rotation speed conditions to simulate unsteady flows with surfaces in relative motion.

Fortunately, some researchers have found effective ways to reduce working pulsation and thereby decrease noise. Lee et al. [14, 15] investigated the computed time-accurate pressure field and the loss generation process to establish the causal link to the induced flow ripple in a turbine system. Alves et al. [16] found that combining CFD modeling and analytical techniques is a good way to predict the oil flow rate in an eccentric-tube centrifugal oil pumping system. The research group led by Palmberg set a precompression volume in the valve plate between the discharge kidney slot and the suction kidney slot [17]. They found that a small precompression volume can markedly reduce noise, pulsation, and hydraulic impact [18, 19].
Moreover, an axial piston pump valve plate that adopts a prepressurization fluid path consisting of a damping hole, a buffer chamber, and an orifice can reduce flow ripple to some extent [20].

This paper aims to reduce vibration and noise by developing a novel rotating-sleeve distributing-flow system and analyzing the relationship between turbulence energy, velocity, and working pulsation through CFD simulation. In addition, the influences of backward flow, flow pulsation, and pressure fluctuation are investigated. A complete simulation model with the relevant parameters has been established so that the flow characteristics can be explicitly described. Furthermore, this work provides a theoretical foundation for structural optimization of the distributing-flow system and for performance improvement.

## 2. Operating Principle of Rotating-Sleeve Distributing-Flow System

In this section, the novel rotating-sleeve distributing-flow system is described and analyzed. The plunger, driven by the crank-link mechanism, performs a reciprocating movement and transmits force through the drive pin, which follows the cam groove pathway, into the unidirectional rotation of the rotating sleeve. The drive pin rolls along the cam groove profile, which is obtained by fitting the relation between the rotating-sleeve angle and the crank angle. Figure 1 shows the operating principle and components of the rotating-sleeve distributing-flow system, intended for mass flow with high frequency and efficiency.

Figure 1 Structure principle of rotating-sleeve distributing-flow system. (1) Cam groove; (2) loading chamber; (3) inlet; (4) valve port; (5) pump chamber; (6) damping groove; (7) plunger; (8) blind flange; (9) pump body; (10) collecting chamber; (11) outlet; (12) drive pin; (13) compression spring; (14) rotating sleeve.

In this system, there are two major movements: the axial reciprocating movement of the plunger and the unidirectional rotating movement of the rotating sleeve.
The plunger performs its reciprocating movement powered by the crankshaft and connecting rod mechanism via a cross-slider connector. The displacement of the plunger in reciprocating motion is formulated as follows:

(1) $x = r\left(1-\cos\varphi+\dfrac{\lambda}{2}\sin^{2}\varphi\right)$,

where $r$ is the radius of the bent axle (crank), $\varphi$ is the crank angle of the crankshaft and connecting rod mechanism, $\lambda = r/l$ is the ratio of crank to connecting link, and $l$ is the link length.

Since $\varphi = \omega t$, the derivative of the plunger displacement $x$ with respect to time $t$ can be expressed as follows:

(2) $u = r\omega\left(\sin\varphi+\dfrac{\lambda}{2}\sin 2\varphi\right)$,

where $\omega$ is the angular velocity of the bent axle.

The angular velocity and acceleration of the rotating sleeve have no obvious phase step or inflection point when a sine profile is selected for the cam groove. Therefore, the relationship between the cam groove's axial displacement and the rotating-sleeve angle is given as follows:

(3) $z(\theta) = \dfrac{S}{2}\left(1-\cos\theta\right), \quad 0 \le \theta \le \pi$,

where $S$ is the plunger stroke and $\theta$ is the rotating-sleeve angle about the central axis.

Substituting (1) into (3), the rotating-sleeve angle $\theta$ as a function of the crank angle $\varphi$ is obtained as follows:

(4) $\theta = \begin{cases} \arccos\left(\cos\varphi-\dfrac{\lambda}{2}\sin^{2}\varphi\right), & 0 \le \varphi \le \pi, \\ 2\pi-\arccos\left(\cos\varphi-\dfrac{\lambda}{2}\sin^{2}\varphi\right), & \pi \le \varphi \le 2\pi. \end{cases}$

The angular velocity of the rotating sleeve, obtained by differentiating (3) with respect to $t$, can be expressed as follows:

(5) $\omega_{t} = \pm\dfrac{\omega}{\sqrt{1-C_{0}^{2}}}\left(\sin\varphi+\dfrac{\lambda}{2}\sin 2\varphi\right)$,

where the positive sign applies during $0 \le \varphi < \pi$, the negative sign applies during $\pi \le \varphi < 2\pi$, and $C_{0}=\cos\varphi-\dfrac{\lambda}{2}\sin^{2}\varphi$.

## 3. Models and Methods

### 3.1. Fluid Model

In light of the structure and operating principle of the rotating-sleeve distributing-flow system, the fluid model is established as shown in Figure 2; the model comprises the main components: loading chamber, inlet, pump chamber, collecting chamber, outlet, and rotating sleeve.

Figure 2 Fluid model of rotating-sleeve distributing-flow system. (1) Loading chamber; (2) inlet; (3) pump chamber; (4) collecting chamber; (5) outlet; (6) valve port.

In this paper, the fluid model is simulated and analyzed with the fluid simulation software Fluent®.
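The kinematic relations (1)–(5) can be checked numerically. The following sketch (Python, with parameter values taken from Table 1; all function names are illustrative choices, not part of the original study) evaluates the plunger and sleeve motion over the crank angle:

```python
import math

# Kinematics of the rotating-sleeve distributing-flow system, eqs. (1)-(5).
r = 0.03                          # crank (bent axle) radius [m], Table 1
lam = 0.25                        # ratio of crank to connecting link, lambda = r/l
n_rpm = 150                       # crankshaft speed [r/min], Table 1
omega = 2 * math.pi * n_rpm / 60  # crank angular velocity [rad/s]

def plunger_displacement(phi):
    """Eq. (1): x = r*(1 - cos(phi) + (lambda/2)*sin^2(phi))."""
    return r * (1 - math.cos(phi) + 0.5 * lam * math.sin(phi) ** 2)

def plunger_velocity(phi):
    """Eq. (2): u = r*omega*(sin(phi) + (lambda/2)*sin(2*phi))."""
    return r * omega * (math.sin(phi) + 0.5 * lam * math.sin(2 * phi))

def sleeve_angle(phi):
    """Eq. (4): theta(phi), branch depending on the half-cycle."""
    c0 = math.cos(phi) - 0.5 * lam * math.sin(phi) ** 2
    theta = math.acos(c0)
    return theta if phi % (2 * math.pi) <= math.pi else 2 * math.pi - theta

def sleeve_angular_velocity(phi):
    """Eq. (5): +/- omega*(sin(phi) + (lambda/2)*sin(2*phi)) / sqrt(1 - C0^2)."""
    c0 = math.cos(phi) - 0.5 * lam * math.sin(phi) ** 2
    sign = 1.0 if phi % (2 * math.pi) < math.pi else -1.0
    return sign * omega * (math.sin(phi) + 0.5 * lam * math.sin(2 * phi)) / math.sqrt(1 - c0 ** 2)
```

At $\varphi=\pi$ the plunger reaches its full stroke $S = 2r = 0.06$ m, and the sign convention in (5) keeps the sleeve angular velocity positive over the whole cycle, consistent with the unidirectional rotation described above.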
The motions of the plunger and the rotating sleeve are defined in the fluid model using a User-Defined Function. The standard $k$-$\varepsilon$ turbulence model and the SIMPLE algorithm are applied in the simulation settings. Sliding-mesh and dynamic-mesh technologies are also utilized in the fluid model according to its specific motion characteristics. The model parameters and boundary conditions are listed in Table 1.

Table 1 The parameters of the distributing-flow system.

| Parameter | Value | Unit |
| --- | --- | --- |
| Radius of bent axle | 0.03 | m |
| Ratio of crank and connecting link | 0.25 | — |
| Crankshaft speed | 150 | r/min |
| Water density | 998 | kg/m³ |
| Water viscosity | 0.001003 | Pa·s |
| Inlet pressure | 0.1 | MPa |
| Outlet pressure | 2 | MPa |
| Saturated vapor pressure | 2339 | Pa |
| Time step | 0.001 | s |

### 3.2. Cavitation Model

The cavitation model is based on the Navier-Stokes flow equations with variable density and standard viscosity. In this paper, taking viscosity and turbulence into consideration and treating the gas-liquid two-phase flow as the object of study, the transport equation for the gaseous mass fraction is given as follows [21–24]:

(6) $\dfrac{\partial}{\partial t}(\rho f)+\nabla\cdot(\rho \vec{V} f)=\nabla\cdot(\Gamma\nabla f)+R_{e}+R_{c}$,

where $\rho$ is the average density of the gas-liquid mixture, $f$ is the gaseous mass fraction, $R_{e}$ and $R_{c}$ denote the rates of bubble generation and collapse, respectively, $\vec{V}$ denotes the average velocity of the gaseous phase, and $\Gamma$ denotes the effective exchange coefficient.

Based on the Rayleigh-Plesset bubble dynamics equation, with the surface tension term and the second-order derivative term neglected, the bubble dynamics can be described by the variation of the bubble radius as follows:

(7) $\dfrac{dR_{B}}{dt}=\sqrt{\dfrac{2\left|p_{v}-p\right|}{3\rho_{l}}}\,\operatorname{sgn}\left(p_{v}-p\right), \quad \dfrac{1}{\rho}=\dfrac{f}{\rho_{v}}+\dfrac{1-f}{\rho_{l}}, \quad p_{v}=p_{sat}+p_{turb}$,

where $R_{B}$ is the bubble radius, $p$ is the pressure, $p_{v}$ is the critical pressure of the gas, $p_{sat}$ is the saturation pressure of the gas, $p_{turb}$ is the pressure caused by turbulence, $\rho_{v}$ is the gaseous density, and $\rho_{l}$ is the liquid density.

Combining the mass transport equation with the equation of continuity, the relationship of
the mixture density to the gas volume fraction is described as follows:

(8) $\dfrac{d\rho}{dt}=\left(\rho_{v}-\rho_{l}\right)\dfrac{d\alpha}{dt}$,

where $\alpha$ is the gas volume fraction.

If there are $n$ bubbles per unit volume, the gas volume fraction can be expressed in terms of the bubble radius as follows:

(9) $\alpha=\dfrac{4}{3}\pi n R_{B}^{3}$.

Substituting (9) into (8), the relationship between the mixture density and the bubble dynamics is obtained as follows:

(10) $\dfrac{d\rho}{dt}=\left(\rho_{v}-\rho_{l}\right)\left(36\pi n\right)^{1/3}\alpha^{2/3}\dfrac{dR_{B}}{dt}$.

According to the above equations, the rates of bubble collapse and generation can be described, respectively, as follows:

(11) $R_{c}=\dfrac{3\alpha}{R_{B}}\dfrac{\rho_{v}\rho_{l}}{\rho}\sqrt{\dfrac{2\left(p_{v}-p\right)}{3\rho_{l}}}, \qquad R_{e}=\dfrac{3\left(1-\alpha\right)}{R_{B}}\dfrac{\rho_{v}\rho_{l}}{\rho}\sqrt{\dfrac{2\left(p_{v}-p\right)}{3\rho_{l}}}$.

The gas volume fraction is proportional to the average velocity, and the average velocity can be expressed through the turbulence energy. When the surface tension coefficient of the bubble is introduced, the rates of bubble collapse and generation can also be described, respectively, as follows:

(12) $R_{c}=C_{c}\dfrac{\sqrt{K}}{\sigma}f\rho_{v}\rho_{l}\sqrt{\dfrac{2\left(p_{v}-p\right)}{3\rho_{l}}}, \qquad R_{e}=C_{e}\dfrac{\sqrt{K}}{\sigma}\left(1-f\right)\rho_{v}\rho_{l}\sqrt{\dfrac{2\left(p_{v}-p\right)}{3\rho_{l}}}$,

where $C_{c}$ and $C_{e}$ are empirical constants, $C_{c}=0.02$ and $C_{e}=0.01$, $K$ is the turbulence kinetic energy, and $\sigma$ is the surface tension coefficient of the bubble.

### 3.3. Turbulence Model

Turbulence energy represents the turbulent fluctuation and directly reflects the dissipation and stability of the fluid flow. If the turbulence energy is larger in some areas, these areas suffer more loss of kinetic energy and become more unstable [16]. The standard $k$-$\varepsilon$ turbulence model based on the Reynolds-averaged Navier-Stokes equations is applied in our simulation, and the dissipation rate of turbulence energy is defined as follows [22, 23]:

(13) $\varepsilon=\dfrac{\mu}{\rho}\overline{\dfrac{\partial u_{i}'}{\partial x_{k}}\dfrac{\partial u_{i}'}{\partial x_{k}}}$,

where the turbulent viscosity $\mu_{t}$ is a function of the fundamental unknown quantities, the turbulent kinetic energy $k$ and the dissipation rate $\varepsilon$, and is expressed as follows:

(14) $\mu_{t}=\rho C_{\mu}\dfrac{k^{2}}{\varepsilon}$,

where $k$ and $\varepsilon$ are the fundamental unknown quantities of the standard $k$-$\varepsilon$ model.
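For a sense of magnitude, (14) can be evaluated directly with the water density from Table 1; the $k$ and $\varepsilon$ values below are illustrative assumptions, not simulation outputs:

```python
# Eddy viscosity from eq. (14): mu_t = rho * C_mu * k^2 / eps.
rho = 998.0    # water density [kg/m^3], Table 1
C_mu = 0.09    # empirical constant of the standard k-eps model
k = 0.5        # turbulent kinetic energy [m^2/s^2] (illustrative value)
eps = 10.0     # dissipation rate [m^2/s^3] (illustrative value)

mu_t = rho * C_mu * k ** 2 / eps   # eddy viscosity [Pa*s]
print(mu_t)
```

Even for these modest values, the eddy viscosity (about 2.2 Pa·s) is orders of magnitude larger than the molecular viscosity of water (0.001003 Pa·s), which is why the turbulence model dominates the effective momentum diffusion.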
For incompressible fluid, the corresponding transport equations are given as

(15) $\dfrac{\partial\left(\rho k\right)}{\partial t}+\dfrac{\partial\left(\rho k u_{i}\right)}{\partial x_{i}}=\dfrac{\partial}{\partial x_{j}}\left[\left(\mu+\dfrac{\mu_{t}}{\sigma_{k}}\right)\dfrac{\partial k}{\partial x_{j}}\right]+G_{k}-\rho\varepsilon$,

$\dfrac{\partial\left(\rho\varepsilon\right)}{\partial t}+\dfrac{\partial\left(\rho\varepsilon u_{i}\right)}{\partial x_{i}}=\dfrac{\partial}{\partial x_{j}}\left[\left(\mu+\dfrac{\mu_{t}}{\sigma_{\varepsilon}}\right)\dfrac{\partial\varepsilon}{\partial x_{j}}\right]+C_{1\varepsilon}\dfrac{\varepsilon}{k}G_{k}-C_{2\varepsilon}\rho\dfrac{\varepsilon^{2}}{k}$,

where $G_{k}$ is the production of turbulence energy due to the average velocity gradients and is defined as

(16) $G_{k}=\mu_{t}\left(\dfrac{\partial u_{i}}{\partial x_{j}}+\dfrac{\partial u_{j}}{\partial x_{i}}\right)\dfrac{\partial u_{i}}{\partial x_{j}}$,

where $C_{1\varepsilon}$, $C_{2\varepsilon}$, and $C_{\mu}$ are empirical constants, $C_{1\varepsilon}=1.44$, $C_{2\varepsilon}=1.92$, and $C_{\mu}=0.09$; $\sigma_{k}$ and $\sigma_{\varepsilon}$ are the Prandtl numbers corresponding to the turbulence energy and the dissipation rate, $\sigma_{k}=1.0$ and $\sigma_{\varepsilon}=1.3$.
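As a closing sanity check on the model equations above, the algebraic step from (9) to (10) can be verified numerically: with $\alpha=\frac{4}{3}\pi n R_B^3$, the factor multiplying $dR_B/dt$ in (10) must equal $4\pi n R_B^2$. The values below are arbitrary test numbers, not simulation parameters:

```python
import math

# Check the identity behind eq. (10): with alpha = (4/3)*pi*n*R^3 (eq. (9)),
# d(alpha)/dR = 4*pi*n*R^2 must equal (36*pi*n)**(1/3) * alpha**(2/3).
n_bubbles = 1.0e10   # bubble number density [1/m^3] (arbitrary test value)
R_B = 1.0e-5         # bubble radius [m] (arbitrary test value)

alpha = (4.0 / 3.0) * math.pi * n_bubbles * R_B ** 3        # eq. (9)
dalpha_dR_direct = 4.0 * math.pi * n_bubbles * R_B ** 2     # derivative of (9)
dalpha_dR_eq10 = (36.0 * math.pi * n_bubbles) ** (1.0 / 3.0) * alpha ** (2.0 / 3.0)

assert math.isclose(dalpha_dR_direct, dalpha_dR_eq10, rel_tol=1e-9)
```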
## 4. Results and Discussion

### 4.1. Distribution of Turbulence Energy

The distribution of turbulence energy over one working cycle is shown in Figure 3. At rotating-sleeve angles of 180° and 360°, evident highlighted regions appear in specific areas, indicating that the turbulence energy increases there.
As shown in Figure 3, the angles of 180° and 360° are exactly where the interconversion between discharge and suction happens. Therefore, the flow field around the highlighted regions is seriously unstable, causing much loss of kinetic energy.

Figure 3 Distribution of turbulence energy for one working cycle.

### 4.2. Velocity Distribution

The velocity distribution in the X direction for the distributing-flow system within one working cycle is shown in Figure 4. Figure 4 shows that, at a rotating-sleeve angle of 10°, the velocity gradient and velocity magnitude increase; at the same time, the turbulence energy increases, as shown in Figure 3. Additionally, at rotating-sleeve angles of 180° and 360°, the flow velocity is negative around the damping groove, where the turbulence energy increases. This demonstrates that the flow condition around the damping groove is unstable.

Figure 4 Velocity distribution in the X direction for one working cycle.

Figure 5 shows the velocity vector distribution around the damping groove when the rotating sleeve rotates from discharge to suction. At this moment, the plunger descends from the Top Dead Center (TDC); the loading chamber separates from the valve port in a critical state; and there is subtle streaming of fluid caused by the inertia effect in the loading chamber, as shown in Figure 5(a). Besides, the damping groove instantaneously connects the collecting chamber with the valve port, as shown in Figure 5(b).
The simulation results in Figure 5(b) illustrate that a transitory backward flow passes from the collecting chamber into the valve port and causes dramatic turbulence around the damping groove, on account of the high pressure in the pump chamber and the low pressure in the collecting chamber (the differential pressure is about 1.9 MPa); this differential pressure drives backward flow from the collecting chamber into the pump chamber, which reduces the volumetric efficiency.

Figure 5 Local velocity vector distribution with rotating-sleeve angle of 180°.

Figure 6 shows the local velocity vector distribution when the rotating sleeve rotates from suction to discharge. At this moment, the plunger ascends from the Bottom Dead Center (BDC); the damping groove momentarily connects the loading chamber with the valve port, as shown in Figure 6(a). Figure 6(b) shows that a transient backward flow passes from the valve port into the loading chamber, causing dramatic turbulence around the damping groove, on account of the high pressure in the valve port and the low pressure in the loading chamber. At the same time, the differential pressure drives backward flow from the pump chamber into the loading chamber, which can seriously reduce the volumetric efficiency; the loading chamber is separating from the valve port in a critical state; and there is subtle streaming caused by the inertia effect in the loading chamber.

Figure 6 Local velocity vector distribution with rotating-sleeve angle of 360°.

### 4.3. Analyses on Working Pulsation

#### 4.3.1. Flow Pulsation of Different Valve Port Structures

Figure 7 shows the mesh of the distributing-flow system, produced by structured mesh generation. The sliding-grid option provides grid interfaces for the sliding surfaces between the static inlet/outlet part of the grid and the outer part of the valve port, and between the inner part of the valve port and the static part of the pump chamber.
The piston movement is modeled by a dynamic grid with a moving and deforming mesh for the pump-chamber part. In the grid, the maximum global mesh size parameter is 1; different parts have different local mesh parameters; the total number of cells is 397,602.

Figure 7 Fluid model grid of distributing-flow system.

Flow pulsation is the source of noise, pressure pulsation, and vibration of a plunger pump. It can have adverse impacts on working parts, especially in precise hydraulic systems. Hence the flow pulsation is one of the most meaningful fluid characteristics of the distributing-flow system. Figure 8 shows that the outlet flow exhibits periodic fluctuations with a maximum flow of about 5 × 10⁻⁴ m³/s, and backward flow appears at the end of both the suction and discharge processes. These observations are consistent with the findings in Figures 5 and 6. Moreover, an instantaneous backward flow of about 2.2 × 10⁻⁴ m³/s appears in the left zoomed-in plot of Figure 9, corresponding to the TDC; Figure 9 covers two working cycles. Similarly, an instantaneous backward flow of about 2.6 × 10⁻⁴ m³/s appears in the right zoomed-in plot of Figure 9, corresponding to the BDC.

Figure 8 The characteristic of outlet flow pulsation.

Figure 9 The characteristic of local outlet flow pulsation.

#### 4.3.2. Pressure Pulsation of Different Valve Port Structures

Backward flow, local cavitation, and erosion can easily cause pressure pulsation in the inner flow. Once pressure pulsation happens, it can lead to intense vibration of the pump, cavitation, and even resonance. Figure 10 shows periodic fluctuation and distinct sharp corners at the dead centers of the pressure pulsations.
As shown in the partial zoomed-in plots covering two working cycles in Figure 10, the left zoomed-in plot in Figure 11 depicts that the pressure in the pump chamber quickly rises to 2.2 × 10⁶ Pa, about 10% above the nominal pressure, when the plunger is at the TDC, on account of the throttling action of the damping groove. The pressure overshoot then gradually subsides to the normal condition as the flow area increases. Likewise, because of the throttling action of the damping groove, the pressure in the pump chamber abruptly drops to the saturated vapor pressure of the fluid, about −98.9 kPa, when the plunger is at the BDC; the undershoot then gradually recovers to the normal condition as the flow area increases.

Figure 10 The characteristic of pressure in the pump chamber.

Figure 11 The characteristic of local pressure in the pump chamber.
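The −98.9 kPa figure is simply the saturated vapor pressure of Table 1 expressed as gauge rather than absolute pressure. Assuming a standard atmosphere of 101.325 kPa (an assumption; the reference pressure is not stated in the text), the conversion is:

```python
# Saturated vapor pressure (Table 1) converted from absolute to gauge,
# assuming standard atmospheric pressure (assumption, not stated in the text).
p_sat_abs = 2339.0   # [Pa], absolute (Table 1)
p_atm = 101325.0     # [Pa], standard atmosphere (assumed)

p_sat_gauge_kpa = (p_sat_abs - p_atm) / 1000.0
print(round(p_sat_gauge_kpa, 1))  # -99.0, close to the reported -98.9 kPa
```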
## 5. Conclusions

(1) According to the characteristics of the novel rotating-sleeve distributing-flow system, the governing equations linking the plunger and the rotating sleeve are established to obtain the dynamic model of the distributing-flow system. Utilizing sliding-mesh and dynamic-mesh technology, an improved CFD model taking cavitation and turbulence into consideration is employed to simulate the flow field and working pulsation.

(2) Simulations of the rotating-sleeve distributing-flow system have been conducted based on the governing equations and the improved CFD model. Primary performance parameters such as the turbulence energy distribution, velocity distribution, and working pulsation have been investigated. In addition, the relationships between the performance parameters have been studied for the distributing-flow system.

(3) Periodic fluctuation and sharp corners exist in the flow pulsation, in addition to the backward-flow issue between discharge and suction. Serious turbulence and a large loss of kinetic energy exist around the damping groove. Moreover, noticeable periodic fluctuation and sharp corners appear in the pressure pulsation; the pressure in the pump chamber rapidly rises to 2.2 MPa, about 10% above the nominal pressure, when the plunger is at the TDC. On the other hand, the pressure in the pump chamber instantaneously drops to the saturated vapor pressure, −98.9 kPa, when the plunger is at the BDC.
During the rest of the cycle, the pressure overshoot gradually subsides to the normal condition as the flow area increases. The fluid-field and working-pulsation simulations can provide a foundation for structural optimization. --- *Source: 1015494-2017-11-01.xml*
2017
# Identification of Key Genes and miRNAs Affecting Osteosarcoma Based on Bioinformatics

**Authors:** Le Li; Xin Zhou; Wencan Zhang; Ran Zhao
**Journal:** Disease Markers (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1015593

---

## Abstract

Objective. Osteosarcoma is an intractable malignant disease, and few therapeutic methods can thoroughly eradicate its focuses. This study attempted to investigate the related mechanisms of osteosarcoma by bioinformatics methods. Methods. GSE70367 and GSE69470 were obtained from the GEO database. The differentially expressed genes (DEGs) and miRNAs were analyzed using the GEO2R tool and then visualized with R software. Moreover, the targets of the miRNAs among the DEGs were screened and then used for enrichment analysis. Besides, the STRING database and Cytoscape were applied to illustrate the protein-protein interaction network. RT-qPCR was performed to measure the expression of key genes and miRNAs. Western blot was applied to detect the signaling pathway. Results. 9 upregulated genes and 39 downregulated genes in GSE69470 were identified as DEGs, and 31 upregulated genes and 56 downregulated genes in GSE70367 were identified as DEGs. Moreover, 21 common genes were found among the DEGs of GSE70367 and GSE69470. The enrichment analysis showed that the common DEGs of GSE70367 and GSE69470 were related to cell development, covalent chromatin modification, and histone modification and are involved in the regulation of the MAPK, mTOR, and AMPK pathways. Besides, the miRNAs miR-543, miR-495-3p, miR-433-3p, miR-381-3p, miR-301a-3p, miR-199b-5p, and miR-125b-5p were identified as biomarkers of osteosarcoma. In addition, the target genes HSPA5, PPARG, MAPK14, RAB11A, RAB5A, MAPK8, LEF1, HIF1A, CAV1, GSK3B, FOXO3, IGF1, and NFKBIA were identified as hub nodes.
It was found that miR-301a-3p expression was decreased and mRNA expression of RAB5A and NFKBIA was increased in the pathological tissues. The AKT-PI3K-mTOR signaling pathway was activated in pathological tissues. Conclusion. In this study, 7 miRNAs and 13 hub genes were identified, which might be candidate markers. miR-301a-3p, RAB5A, and NFKBIA were abnormally expressed in osteosarcoma tissues. --- ## Body ## 1. Introduction Osteosarcoma is a frequent malignant bone disease in children and older patients, characterized by a poor prognosis including physical disability and metastases [1, 2]. Surgical excision, radiotherapy, and chemotherapy have been widely used for osteosarcoma treatment and can effectively inhibit tumor progression in the early stage [3]. Nevertheless, a considerable number of patients are confirmed to be at an advanced stage at their first clinical diagnosis. Moreover, the high metastasis rates of osteosarcoma also complicate clinical intervention and can lead to treatment failure [4]. Although the survival times of patients have been significantly prolonged by modern medical techniques, the treatment effect remains unsatisfactory [5, 6]. At present, some reports have focused on revealing the potential mechanisms of osteosarcoma, which can provide a valuable reference for the development of medical strategies [7, 8]. There have been many studies on the molecular mechanism of osteosarcoma, and several osteosarcoma-driving genes have been identified, such as TP53, RB1, and PTEN. There have also been targeted drugs for osteosarcoma, such as pazopanib, apatinib, cabozantinib, and ivermex. However, these studies have not clearly explained the pathogenesis and metastasis of osteosarcoma.
Therefore, it is urgent to further study the potential molecular mechanisms of osteosarcoma cells, identify reliable molecular markers, and find new drug targets.Microarray analysis is a useful method that has been used for screening key genes in diseases [9]. Recently, the academic and guiding value of bioinformatics methods in improving clinical practice has been proven by numerous studies [10]. MicroRNA (miRNA) is a class of short noncoding RNAs of 18-20 nucleotides, which play a great part in cellular life activities [11]. The abnormal expression of miRNA is a biomarker event in multiple diseases, especially in cancer. In osteosarcoma, many studies have indicated that miRNAs can regulate the cellular phenotype to influence the progression of the tumor by intervening in the expression of key proteins [12]. However, the miRNA-mRNA interaction network of osteosarcoma is still far from complete clarification.In this project, the purpose was to identify the pivotal biomarkers and related mechanisms of osteosarcoma using bioinformatics methods on open-source datasets obtained from the GEO database. ## 2. Materials and Methods ### 2.1. Data Source We searched for datasets comparing mRNA or miRNA expression profiles of osteosarcoma and normal samples using "osteosarcoma" as the search term in the GEO database (https://www.ncbi.nlm.nih.gov/geo/). The datasets GSE70367 and GSE69470 were obtained. GSE69470 contained the expression profiles of 15 samples, including 10 osteosarcoma samples and 5 normal samples, and was based on platform GPL20275. For GSE70367, based on GPL16384, 5 samples of tumor cell lines and 1 sample of the hMSC cell line were used for analysis. ### 2.2. Identification of Differentially Expressed Genes The DEGs of the datasets were analyzed with the GEO2R tool of the GEO database to obtain the related matrix files. Genes with logFC > 2 and P value < 0.05 were selected as DEGs. ### 2.3.
KEGG and GO Enrichment Analysis The targets of the DEGs were predicted with the mirDIP database (http://ophid.utoronto.ca/mirDIP/index.jsp), and the top 5% of genes in the results were selected as potential targets of the DEGs in GSE70367 and GSE69470. The KEGG and GO enrichment of the DEGs was performed with the DAVID database. In brief, the targets of the DEGs were uploaded into the DAVID database. The pathways and the related functional modules in the results with P value < 0.05 were visualized with the R language. ### 2.4. Network Analysis Protein-protein interaction (PPI) network analysis was performed to identify the hub nodes of the DEGs. Briefly, the targets of the DEGs were uploaded to the STRING database (https://cn.string-db.org/) to obtain protein interaction information, and then Cytoscape software was applied to visualize the PPI network. ### 2.5. Clinical Tissues The pathological tissues and adjacent healthy tissues were obtained from the Qilu Hospital of Shandong University. The experiments were approved by the ethics committee of the hospital. All tissues were frozen at -70°C. ### 2.6. qRT-PCR The RNAs in the tissues were extracted with a TRIzol reagent. A commercial kit (Shanghai Lianmai Biological Engineering Co., Ltd., Shanghai, China) was applied for reverse transcription into cDNA. Subsequently, the PCR reaction was performed for quantification of the genes, and the abundance of the RNAs was measured with the 2−ΔΔCt method. ### 2.7. Western Blot Analysis The protein was extracted with RIPA buffer, separated by SDS-PAGE, transferred to a nitrocellulose membrane, and blocked in 5% skim milk powder solution for 2 h. The membrane was then incubated with a primary antibody at 4°C for 12 h, followed by a secondary antibody at 25°C for 2 h. The protein bands were developed, and the gray values were read using ImageJ software. ### 2.8.
Immunohistochemistry (IHC) IHC was conducted using paraffin-embedded tissue sections. After being deparaffinized and hydrated, the antigen was retrieved at 95°C. After treatment with 3% H2O2, sections were incubated with a primary antibody at 4°C overnight and then treated with a secondary antibody at 37°C for 30 min. Staining was conducted using DAB (Golden Bridge, China). ### 2.9. Data Analysis The experiments were repeated three times, independently. SPSS 19.0 and GraphPad Prism 8.0 were applied for data analysis and visualization, respectively. Moreover, the chi-squared test or ANOVA with Tukey's post hoc test was used to assess differences in the data, and P < 0.05 indicated that a difference was statistically significant. ## 3. Results ### 3.1. DEG Identification To investigate the gene profiles of osteosarcoma, GSE70367 and GSE69470 were obtained from the GEO database and then analyzed with the GEO2R tool. 9 upregulated DEGs and 37 downregulated DEGs were found in GSE69470, and 31 upregulated DEGs and 55 downregulated DEGs were found in GSE70367 (Figures 1(a) and 1(b)). Moreover, the abundance of the DEGs of GSE70367 and GSE69470 is shown in Figure 2, and 21 common downregulated genes were found in GSE70367 and GSE69470 (Figure 1(c)). These observations suggested that there were significant differences between the gene profiles of tumor cells and normal cells.Figure 1 The DEGs in GSE69470 and GSE70367 were visualized with volcano plots. (a) The DEGs in GSE69470. (b) The DEGs in GSE70367. (c) The common genes of GSE69470 and GSE70367 were screened by a Venn diagram. (a)(b)(c)Figure 2 The expression of DEGs in the samples of GSE69470 and GSE70367 was visualized by heat map: (a) the DEGs in GSE69470; (b) the DEGs in GSE70367. (a)(b) ### 3.2. Identification of Functional Modules To investigate the functions of the genes in the progression of osteosarcoma, the targets of the DEGs in GSE70367 and GSE69470 were analyzed with GO enrichment. The results showed that the DEGs in GSE69470 were associated with the regulation of extracellular structure, regulation of protein serine/threonine kinase activity, regulation of GTPase activity, and so on. For GSE70367, the DEGs were associated with regulation of cell development, positive regulation of catabolic processes, skeletal system development, and so on (Figures 3(a) and 3(b)).
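The DEG selection rule stated in the Methods (fold-change above the logFC cutoff with P value < 0.05) amounts to a simple threshold filter, which GEO2R performs server-side. As a minimal illustrative sketch (gene names and values below are hypothetical, not taken from GSE69470/GSE70367):

```python
# Hypothetical sketch of DEG thresholding: |logFC| > 2 and P < 0.05.
# Records are placeholders, not real GEO2R output.

def classify_degs(records, logfc_cutoff=2.0, p_cutoff=0.05):
    """Split expression records into up- and downregulated DEG lists."""
    up = [r["gene"] for r in records
          if r["logFC"] > logfc_cutoff and r["pval"] < p_cutoff]
    down = [r["gene"] for r in records
            if r["logFC"] < -logfc_cutoff and r["pval"] < p_cutoff]
    return up, down

records = [
    {"gene": "GENE_A", "logFC": 2.8,  "pval": 0.01},   # upregulated
    {"gene": "GENE_B", "logFC": -3.1, "pval": 0.002},  # downregulated
    {"gene": "GENE_C", "logFC": 1.5,  "pval": 0.01},   # below fold-change cutoff
    {"gene": "GENE_D", "logFC": 4.0,  "pval": 0.2},    # not significant
]
up, down = classify_degs(records)
print(up, down)  # ['GENE_A'] ['GENE_B']
```

Applying both cutoffs jointly is what separates the colored points of a volcano plot (such as Figure 1) from the gray background.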
Moreover, the common DEGs of GSE70367 and GSE69470 were also associated with regulation of cell development, covalent chromatin modification, and histone modification (Figure 3(c)).Figure 3 GO enrichment analysis of the DEGs. (a) The GO enrichment analysis of the DEGs in GSE69470. (b) The GO enrichment analysis of the DEGs in GSE70367. (c) The GO enrichment analysis of the common genes in GSE69470 and GSE70367. (a)(b)(c) ### 3.3. KEGG Enrichment Analysis To reveal the regulatory mechanisms of osteosarcoma, the DEGs of the datasets were analyzed with KEGG enrichment. The results showed that the DEGs in GSE69470 were connected with extracellular matrix (ECM) receptor interaction, focal adhesion, and the PI3K/AKT, p53, TGF-β, and Wnt pathways, among others (Figure 4(a)). The DEGs in GSE70367 were associated with ECM-receptor interaction and the PI3K/AKT, TGF-β, Hippo, p53, and Wnt pathways, among others (Figure 4(b)). In addition, the common DEGs of GSE69470 and GSE70367 were associated with the MAPK, mTOR, AMPK, and Ras signaling pathways, among others (Figure 4(c)).Figure 4 The KEGG enrichment analysis of the DEGs. (a) The KEGG enrichment analysis of the DEGs in GSE69470. (b) The KEGG enrichment analysis of the DEGs in GSE70367. (c) The KEGG enrichment analysis of the common genes in GSE69470 and GSE70367. (a)(b)(c) ### 3.4. PPI Network To illustrate the molecular mechanism of osteosarcoma, the protein interactions of the DEGs were analyzed to obtain the hub nodes. The results showed that for GSE69470, 3 clusters were found among the targets: cluster 1 with 16 nodes and 234 edges, cluster 2 with 54 nodes and 520 edges, and cluster 3 with 84 nodes and 480 edges (Figure 5(a)). For GSE70367, 3 clusters were found among the targets: cluster 1 with 33 nodes and 322 edges, cluster 2 with 43 nodes and 298 edges, and cluster 3 with 71 nodes and 376 edges (Figure 5(b)).
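Hub nodes in a PPI network are typically the highest-degree vertices. The sketch below is illustrative only (a plain edge-list degree count, not the authors' STRING/Cytoscape workflow), using hypothetical edges among genes named in the text:

```python
# Illustrative degree-based hub selection on a toy, hypothetical PPI edge list.
from collections import Counter

def hub_nodes(edges, top_n=3):
    """Return the top_n nodes with the highest degree in an undirected edge list."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [node for node, _ in degree.most_common(top_n)]

# Edges are invented for illustration; they are not STRING interactions.
edges = [
    ("MAPK8", "MAPK14"), ("MAPK8", "FOXO3"), ("MAPK8", "HIF1A"),
    ("FOXO3", "HIF1A"), ("RAB5A", "RAB11A"), ("MAPK8", "NFKBIA"),
]
print(hub_nodes(edges, top_n=2))  # MAPK8 (degree 4) comes first
```

STRING additionally weights edges by interaction confidence, so a production analysis would rank by weighted degree or a centrality measure rather than a raw edge count.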
Moreover, for the common miRNAs of GSE69470 and GSE70367, there were three clusters: cluster 1 with 22 nodes and 126 edges, cluster 2 with 56 nodes and 308 edges, and cluster 3 with 5 nodes and 20 edges. The results showed that the factors HSPA5, PPARG, MAPK14, RAB11A, RAB5A, MAPK8, LEF1, GATA3, HIF1A, CAV1, GS3KB, FOXO3, IGF1, and NFKBIA were selected as hub nodes (Figure 5(c)). In addition, the miRNA-mRNA network was also established (Figure 5(d)). Besides, to verify the relationship between these genes and the progression of osteosarcoma, the screened genes were examined against published studies or by qRT-PCR. It was found that decreased miR-301a-3p and increased RAB5A and NFKBIA were detected in the pathological tissues (Figures 6(a)–6(c)). In addition, IHC results showed that RAB5A and NFKBIA were highly expressed in pathological tissues (Figure 6(d)). The AKT-PI3K-mTOR signaling pathway was activated in pathological tissues (Figure 6(e)).Figure 5 PPI-network analysis and miRNA-mRNA network analysis of the DEGs and the related targets. (a) The DEGs in GSE69470 (larger blue nodes were selected as hub nodes). (b) The DEGs in GSE70367 (larger blue nodes were selected as hub nodes). (c) The common genes of GSE69470 and GSE70367. (d) The miRNA-mRNA network (red: hsa-miR-433-3p; green: protein). (a)(b)(c)(d)Figure 6 Decreased miR-301a-3p and increased RAB5A and NFKBIA were detected in the pathological tissues. (a–c) The relative abundance of miR-301a-3p (a), RAB5A (b), and NFKBIA (c) in the pathological tissues. (d) IHC staining for RAB5A and NFKBIA was performed. (e) Protein expression was measured via western blot. (a)(b)(c)(d)(e) ## 4. Discussion Osteosarcoma is one of the most dangerous diseases with a high incidence, and there are few effective strategies to completely cure this disease [5]. Bioinformatics analysis has been verified as a promising strategy for identifying biomarkers and investigating the molecular mechanisms of cancer [13]. In this investigation, the datasets GSE69470 and GSE70367 were obtained from the GEO database and then used to identify the hub nodes in osteosarcoma.Osteosarcoma is characterized by aberrant expression of genes that may be involved in some malignant behaviors of the tumor cells. In this project, the expression of genes in tumor cell lines and normal cell lines was investigated, and 21 downregulated genes were found in GSE69470 and GSE70367.
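The overlap of DEG lists across the two datasets (the Venn step in Figure 1(c)) reduces to a set intersection. A minimal sketch follows; the lists mix miRNA names from the text with placeholder identifiers and are not the actual dataset results:

```python
# Illustrative Venn-style overlap of two downregulated-gene lists.
# "miR-9999" and "miR-8888" are placeholder identifiers, not real miRNAs.
down_gse69470 = {"miR-433-3p", "miR-495-3p", "miR-125b-5p", "miR-9999"}
down_gse70367 = {"miR-433-3p", "miR-495-3p", "miR-125b-5p", "miR-8888"}

common = sorted(down_gse69470 & down_gse70367)
print(common)  # the three shared identifiers, sorted
```

Working with sets rather than lists also removes accidental duplicates from platform probes that map to the same gene.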
Moreover, downregulation of miR-127-3p, miR-154-5p, miR-323a-3p, miR-409-3p, miR-431-5p, miR-432-5p, miR-433-3p, miR-485-3p, miR-487b-3p, miR-495-3p, and miR-125b-5p was associated with cancer development. For instance, miR-127-3p plays an inhibitory role in the progression of multiple tumors such as glioblastoma and prostate cancer [14, 15]. In osteosarcoma, all of these miRNAs are also involved in malignant behaviors such as invasion and proliferation.Cancer development always involves changes in multiple signaling pathways, such as the PI3K/AKT, p53, and Wnt/β-catenin pathways [16, 17]. In osteosarcoma, disorders of cellular signaling pathways have also been confirmed as direct causes of tumor cell proliferation and invasion [18]. The PI3K/AKT pathway is associated with cellular proliferation, and an activated PI3K/AKT pathway has been confirmed to be involved in the progression of multiple cancers. The study of Yang et al. indicated that the PI3K/AKT pathway was aberrantly activated in osteosarcoma cells and that inhibiting the PI3K/AKT pathway could effectively impede the proliferation of tumor cells [19]. In this project, it was shown that the DEGs in GSE69470 or GSE70367 were associated with multiple pathways, including the PI3K/AKT, TGF-β, Hippo, p53, Wnt, and MAPK pathways. The dysfunction of signaling pathways in tumor cells is closely connected with miRNA disorders. Increased miR-127-3p, miR-495-3p, and miR-125b-5p have been proven to take part in suppressing the activity of the PI3K/AKT pathway [20–22]. Moreover, one report has proven that miR-409-3p is involved in the regulation of MAPK to block cervical cancer development [23]. Besides, decreased miR-301a-3p was also found in the pathological tissues.miRNA can obstruct the translation of proteins by inducing the degradation of specific mRNAs [23].
In this project, the targets of the DEGs in GSE69470 and GSE70367 were predicted and used to reveal the molecular mechanism of osteosarcoma. The factors HSPA5, PPARG, MAPK14, RAB11A, RAB5A, MAPK8, LEF1, GATA3, HIF1A, CAV1, GS3KB, FOXO3, IGF1, and NFKBIA, among others, were selected as hub nodes. One study has indicated that HSPA5 inhibition is a promising method for inducing endoplasmic reticulum stress, autophagy, and apoptosis in tumor cells [24]. The MAPK family plays an important role in the progression of multiple tumors: studies have indicated that increased MAPK8 plays a critical role in the development of colorectal cancer and that MAPK14 downregulation could effectively impede the malignant behaviors of clear cell renal cell carcinoma [25, 26]. Increasing numbers of studies have revealed that disorders of the RAS oncogene family are related to tumor progression. In this study, RAB11A and RAB5A were also identified as hub nodes of osteosarcoma. RAB11A is involved in the regulation of the Wnt/β-catenin pathway to promote the deterioration of prostate cancer, and RAB5A upregulation is associated with the proliferation, invasion, and EMT of ovarian cancer [15, 27]. LEF1 upregulation is related to cancer drug resistance; Fakhr et al. have proven that LEF1 silencing could improve the lethal effect of chemotherapy drugs on colorectal cancer cells [28]. HIF1A plays a key role in regulating blood vessel formation under hypoxic conditions, and some reports have indicated that HIF1A upregulation is associated with the invasion and metastasis of tumor cells. In this study, HIF1A was also identified as a hub node. Moreover, CAV1, GS3KB, FOXO3, IGF1, and NFKBIA have also been proven to be biomarkers for the prognosis of multiple cancers, and increased RAB5A and NFKBIA were detected in the pathological tissues.In conclusion, in this study, 7 miRNAs and 13 hub genes were identified, which might be candidate markers.
miR-301a-3p, RAB5A, and NFKBIA were abnormally expressed in osteosarcoma tissues. However, one limitation of this study is that no further experiments were conducted to verify whether miR-301a-3p, RAB5A, and NFKBIA affect tumor progression. In addition, this study lacks analyses of additional datasets to verify its conclusions. --- *Source: 1015593-2022-11-16.xml*
1015593-2022-11-16_1015593-2022-11-16.md
27,418
Identification of Key Genes and miRNAs Affecting Osteosarcoma Based on Bioinformatics
Le Li; Xin Zhou; Wencan Zhang; Ran Zhao
Disease Markers (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1015593
1015593-2022-11-16.xml
--- ## Abstract Object. Osteosarcoma is an intractable malignant disease, and few therapeutic methods can thoroughly eradicate its focuses. This study attempted to investigate the related mechanism of osteosarcoma by bioinformatics methods. Methods. GSE70367 and GSE69470 were obtained from the GEO database. The differentially expressed genes (DEGs) and miRNAs were analyzed using the GEO2R tool and then visualized with R software. Moreover, the targets of the miRNAs in the DEGs were screened and then used for enrichment analysis. Besides, the STRING database and Cytoscape were applied to illustrate the protein-protein interaction network. RT-qPCR was performed to measure the expression of key genes and miRNAs. Western blot was applied to detect the signaling pathway. Results. 9 upregulated genes and 39 downregulated genes in GSE69470 were identified as the DEGs, and 31 upregulated genes and 56 downregulated genes in GSE70367 were identified as the DEGs. Moreover, 21 common genes were found in the DEGs of GSE70367 and GSE69470. The enrichment analysis showed that the common DEGs of GSE70367 and GSE69470 were related with cell development, covalent chromatin modification, and histone modification and involve in the regulation of MAPK, mTOR, and AMPK pathways. Besides, the miRNAs including miR-543, miR-495-3p, miR-433-3p, miR-381-3p, miR-301a-3p, miR-199b-5p, and miR-125b-5p were identified as the biomarkers of osteosarcoma. In addition, the target genes including HSPA5, PPARG, MAPK14, RAB11A, RAB5A, MAPK8, LEF1, HIF1A, CAV1, GS3KB, FOXO3, IGF1, and NFKBIA were identified as hub nodes. It was found that miR-301a-3p expression was decreased and mRNA expression of RAB5A and NFKBIA was increased in the pathological tissues. The AKT-PI3K-mTOR signaling pathway was activated in pathological tissues. Conclusion. In this study, 7 miRNAs and 13 hub genes were identified, which might be candidate markers. 
miR-301a-3p, RAB5A, and NFKBIA were abnormally expressed in osteosarcoma tissues. --- ## Body ## 1. Introduction Osteosarcoma is a frequent malignant bone disease in children and older patients, which is characterized with poor prognosis including physical disability and metastases [1, 2]. Surgery excision, radiotherapy, and chemotherapy have been widely used for osteosarcoma treatment, which can effectively inhibit the development of the tumor progression in the early stage [3]. Nevertheless, considerable patients have been confirmed to be at the advanced stage in their first clinical diagnosis. Moreover, high metastasis rates of osteosarcoma also make the clinical intervention become tricky and then lead to treatment failure [4]. Although the survival times of the patients have been significantly prolonged with the modern medicine techniques, the treatment effect remains unsatisfactory for patients [5, 6]. At present, some reports have focused on revealing the potential mechanism of osteosarcoma, which can provide valuable reference for the progression of medicine strategies [7, 8]. At present, there have been many studies on the molecular mechanism of osteosarcoma, and several osteosarcoma-driving genes have been identified, such as TP53, RB1, and PTEN. There have also been targeted drugs for osteosarcoma, such as pazopanib, appatinib, cabotinib, and ivermex. However, these studies have not clearly explained the pathogenesis and metastasis of osteosarcoma. Therefore, it is urgent to further study the potential molecular mechanism of osteosarcoma cells, identify reliable molecular markers, and identify new drug targets.Microarray analysis is a useful method which has been used for screening the key genes in diseases [9]. Recently, the academic and guiding value of bioinformatics methods on improving the clinical practice have been proven by numerous researches [10]. 
MicroRNA is a class of the short noncoding RNA with 18-20 nucleotides, which plays a great part in the cellular life activity [11]. The abnormal expression of miRNA is a biomarker event in multiple diseases, especially in caner. In osteosarcoma, many studies have indicated that miRNA can regulate the cellular phenotype to influence the progression of the tumor via intervening the expression of key proteins [12]. However, the miRNA-mRNA interaction network of osteosarcoma is still far from complete clarification.In this project, the purpose was to identify the pivotal biomarkers and related mechanism of the osteosarcoma using bioinformatics methods through obtaining the open-source datasets in the GEO database. ## 2. Materials and Methods ### 2.1. Data Source We searched the datasets comparing mRNA or miRNA expression profiles of osteosarcoma and normal samples using “osteosarcoma” as search terms for the GEO datasets (https://www.ncbi.nlm.nih.gov/geo/). The datasets including GSE70367 and GSE69470 were obtained. GSE69470 contained the expression profile of 15 samples, including 10 osteosarcoma samples and 5 normal samples, which was based on platform GPL20275. For GSE70367 based on GPL16384, 5 samples of tumor cell lines and 1 sample of the hMSC cell line were used for analysis. ### 2.2. Identification of Differentially Expressed Genes The DEGs of the datasets were analyzed with the GEO2R tool of the GEO database to obtain the related matrix files. The genes with thelogFC>2 and P value < 0.05 were selected as the DEGs. ### 2.3. KEGG and GO Enrichment Analysis The targets of the DEGs were predicted with the mirDIP database (http://ophid.utoronto.ca/mirDIP/index.jsp), and the top 5% genes in the results were selected as potential targets of the DEGs in GSE70367 and GSE69470. The KEGG and GO enrichment of DEGs was performed by the DAVID database. In brief, the targets of the DEGS were uploaded into the DAVID database. 
The pathways and the related functional modules in the results with P value < 0.05 were visualized with the R language. ### 2.4. Network Analysis The protein-protein interaction network was performed to identify the hub nodes of the DEGs. Briefly, the targets of the DEGs were uploaded to the STRING database (https://cn.string-db.org/) to analyze and obtain protein interaction information, and then, Cytoscape software was applied to visualize the PPI network. ### 2.5. Clinical Tissues The pathological tissues and adjacent healthy tissues were requested from the Qilu Hospital of Shandong University. The experiments were approved by the ethics committee of the hospital. Besides, all tissues were frozen at -70°C. ### 2.6. qRT-PCR The RNAs in the tissues were extracted with a TRIzol reagent. The commercial kit (Shanghai Lianmai Biological Engineering Co., Ltd., Shanghai, China) was applied for the reverse transcription of cDNA. Subsequently, the PCR reaction was performed for the quantification of the genes. Moreover, the abundance of the RNAs were measured with the 2−ΔΔCt method. ### 2.7. Western Blot Analysis The protein was extracted by the RIPA buffer. Protein was separated by SDS-PAGE and transferred to the nitrocellulose membrane and blocked in 5% skim milk powder solution for 2 h. Then, the membrane was incubated with a primary antibody at 4°C for 12 h. Then, the membrane was incubated by a second antibody at 25°C for 2 h. The protein bands were colored, and the gray value was read under ImageJ software. ### 2.8. Immunohistochemistry (IHC) IHC was conducted using paraffin-embedded tissue sections. After being deparaffinized and hydrated, the antigen was extracted at 95°C. After treating with 3% H2O2, sections were incubated with primary antibody at 4°C overnight and then treated with a second antibody at 37°C for 30 min. Staining was conducted using DAB (Golden Bridge, China). ### 2.9. Data Analysis The experiments were repeated for three times, independently. 
SPSS 19.0 and GraphPad Prism 8.0 were applied for data analysis and visualization, respectively. The chi-squared test or ANOVA with Tukey's post hoc test was used to assess differences between groups, and P < 0.05 was considered statistically significant.

## 3. Results

### 3.1. DEG Identification

To investigate the gene profiles of osteosarcoma (OS), GSE70367 and GSE69470 were obtained from the GEO database and then analyzed with the GEO2R tool.
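The GEO2R screen uses the cutoff from Section 2.2 (logFC > 2, P value < 0.05). A minimal sketch of that filter over a GEO2R-style result table is shown below; the column names `logFC` and `P.Value` follow GEO2R's output convention, the symmetric cutoff for downregulated genes is an assumption, and the rows are invented rather than taken from GSE69470 or GSE70367:

```python
def select_degs(rows, logfc_cutoff=2.0, p_cutoff=0.05):
    """Split a GEO2R-style result table into up- and downregulated DEGs.

    rows: iterable of dicts with keys "gene", "logFC", and "P.Value".
    """
    up = [r["gene"] for r in rows
          if r["logFC"] > logfc_cutoff and r["P.Value"] < p_cutoff]
    down = [r["gene"] for r in rows
            if r["logFC"] < -logfc_cutoff and r["P.Value"] < p_cutoff]
    return up, down

# Illustrative rows only; real input would come from the GEO2R matrix files.
table = [
    {"gene": "miR-A", "logFC": 2.8,  "P.Value": 0.001},  # upregulated
    {"gene": "miR-B", "logFC": -3.1, "P.Value": 0.020},  # downregulated
    {"gene": "miR-C", "logFC": 2.5,  "P.Value": 0.200},  # fails the P cutoff
    {"gene": "miR-D", "logFC": 0.4,  "P.Value": 0.001},  # fails the fold-change cutoff
]
up, down = select_degs(table)
```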
Nine upregulated DEGs and 37 downregulated DEGs were found in GSE69470, and 31 upregulated DEGs and 55 downregulated DEGs were found in GSE70367 (Figures 1(a) and 1(b)). The abundance of the DEGs of GSE70367 and GSE69470 is shown in Figure 2. Moreover, 21 common downregulated genes were found in GSE70367 and GSE69470 (Figure 1(c)). These observations suggested that there were significant differences in the gene profiles of tumor cells and normal cells.

Figure 1 The DEGs in GSE69470 and GSE70367 were visualized with volcano plots. (a) The DEGs in GSE69470. (b) The DEGs in GSE70367. (c) The common genes of GSE69470 and GSE70367 were screened by Venn diagram.

Figure 2 The expression of DEGs in the samples of GSE69470 and GSE70367 was visualized by heat map: (a) the DEGs in GSE69470; (b) the DEGs in GSE70367.

### 3.2. Identification of Functional Modules

To investigate the functions of the genes in the progression of osteosarcoma, the targets of the DEGs in GSE70367 and GSE69470 were analyzed with GO enrichment. The results showed that the DEGs in GSE69470 were associated with regulation of extracellular structure, regulation of protein serine/threonine kinase activity, regulation of GTPase activity, and so on. For GSE70367, the DEGs were related to regulation of cell development, positive regulation of catabolic process, skeletal system development, and so on (Figures 3(a) and 3(b)). Moreover, the common DEGs of GSE70367 and GSE69470 were also related to regulation of cell development, covalent chromatin modification, and histone modification (Figure 3(c)).

Figure 3 GO enrichment analysis of the DEGs. (a) The GO enrichment analysis of the DEGs in GSE69470. (b) The GO enrichment analysis of the DEGs in GSE70367. (c) The GO enrichment analysis of the common genes in GSE69470 and GSE70367.

### 3.3. KEGG Enrichment Analysis

To reveal the regulatory mechanisms of osteosarcoma, the DEGs of the datasets were analyzed with KEGG enrichment.
The analysis showed that the DEGs in GSE69470 were associated with extracellular matrix (ECM)-receptor interaction, focal adhesion, and the PI3K/AKT, p53, TGF-β, and Wnt pathways, among others (Figure 4(a)). The DEGs in GSE70367 were related to ECM-receptor interaction and the PI3K/AKT, TGF-β, Hippo, p53, and Wnt pathways, among others (Figure 4(b)). In addition, the common DEGs of GSE69470 and GSE70367 were related to the MAPK, mTOR, AMPK, and Ras signaling pathways, among others (Figure 4(c)).

Figure 4 The KEGG enrichment analysis of the DEGs. (a) The KEGG enrichment analysis of the DEGs in GSE69470. (b) The KEGG enrichment analysis of the DEGs in GSE70367. (c) The KEGG enrichment analysis of the common genes in GSE69470 and GSE70367.

### 3.4. PPI Network

To illustrate the molecular mechanism of osteosarcoma, the protein interactions of the DEG targets were analyzed to identify hub nodes. For GSE69470, 3 clusters were found among the targets: cluster 1 with 16 nodes and 234 edges, cluster 2 with 54 nodes and 520 edges, and cluster 3 with 84 nodes and 480 edges (Figure 5(a)). For GSE70367, 3 clusters were found: cluster 1 with 33 nodes and 322 edges, cluster 2 with 43 nodes and 298 edges, and cluster 3 with 71 nodes and 376 edges (Figure 5(b)). Moreover, for the common miRNAs of GSE69470 and GSE70367, there were three clusters: cluster 1 with 22 nodes and 126 edges, cluster 2 with 56 nodes and 308 edges, and cluster 3 with 5 nodes and 20 edges. The factors HSPA5, PPARG, MAPK14, RAB11A, RAB5A, MAPK8, LEF1, GATA3, HIF1A, CAV1, GSK3B, FOXO3, IGF1, and NFKBIA were selected as hub nodes (Figure 5(c)). In addition, the miRNA-mRNA network was also established (Figure 5(d)).
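Hub-node selection in a PPI network is commonly done by ranking nodes by degree, i.e., by their number of interaction partners. The sketch below illustrates that idea over a toy edge list; STRING and Cytoscape use richer scoring, and the edges here are invented (gene symbols are reused from the hub list purely for illustration):

```python
from collections import Counter

def hub_nodes(edges, top_n=3):
    """Return the top_n nodes of an undirected edge list, ranked by degree."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [node for node, _ in degree.most_common(top_n)]

# Toy PPI edges, not real STRING output.
edges = [("HSPA5", "RAB5A"), ("HSPA5", "NFKBIA"),
         ("HSPA5", "FOXO3"), ("RAB5A", "NFKBIA")]
top = hub_nodes(edges, top_n=1)
```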
Furthermore, to verify the relationship between the screened genes and the progression of osteosarcoma, the genes were validated against published studies or by qRT-PCR. Decreased miR-301a-3p and increased RAB5A and NFKBIA were detected in the pathological tissues (Figures 6(a)–6(c)). In addition, IHC results showed that RAB5A and NFKBIA were highly expressed in pathological tissues (Figure 6(d)), and the AKT-PI3K-mTOR signaling pathway was activated in pathological tissues (Figure 6(e)).

Figure 5 PPI-network analysis and miRNA-mRNA network analysis of DEGs and the related targets. (a) The DEGs in GSE69470 (large blue nodes were selected as hub nodes). (b) The DEGs in GSE70367 (large blue nodes were selected as hub nodes). (c) The common genes of GSE69470 and GSE70367. (d) miRNA-mRNA network (red: hsa-miR-433-3p; green: protein).

Figure 6 Decreased miR-301a-3p and increased RAB5A and NFKBIA were detected in the pathological tissues. (a–c) The relative abundance of miR-301a-3p (a), RAB5A (b), and NFKBIA (c) in the pathological tissues. (d) IHC staining for RAB5A and NFKBIA. (e) Protein expression was measured via western blot.

## 4. Discussion

Osteosarcoma is a dangerous disease with a high incidence, and there are few effective strategies to cure it completely [5]. Bioinformatics analysis has been verified as a promising strategy for identifying biomarkers and investigating the molecular mechanisms of cancer [13]. In this investigation, the datasets GSE69470 and GSE70367 were obtained from the GEO database and used to identify the hub nodes in osteosarcoma.

Osteosarcoma is characterized by aberrant expression of genes that may contribute to some malignant behaviors of the tumor cells. In this project, the expression of genes in tumor cell lines and normal cell lines was investigated, and 21 common downregulated genes were found in GSE69470 and GSE70367. Downregulation of miR-127-3p, miR-154-5p, miR-323a-3p, miR-409-3p, miR-431-5p, miR-432-5p, miR-433-3p, miR-485-3p, miR-487b-3p, miR-495-3p, and miR-125b-5p is related to cancer development. For instance, miR-127-3p plays an inhibitory role in the progression of multiple tumors such as glioblastoma and prostate cancer [14, 15].
In osteosarcoma, all of these miRNAs are also involved in malignant behaviors such as invasion and proliferation.

Cancer development always involves changes in multiple signaling pathways, such as the PI3K/AKT, p53, and Wnt/β-catenin pathways [16, 17]. In osteosarcoma, dysregulation of cellular signaling pathways has also been confirmed as a direct cause of tumor cell proliferation and invasion [18]. The PI3K/AKT pathway is related to cellular proliferation, and its aberrant activation has been confirmed to be involved in the progression of multiple cancers. The study of Yang et al. indicated that the PI3K/AKT pathway was aberrantly activated in osteosarcoma cells and that inhibiting it could effectively impede the proliferation of tumor cells [19]. In this project, the DEGs in GSE69470 and GSE70367 were shown to be associated with multiple pathways, including the PI3K/AKT, TGF-β, Hippo, p53, Wnt, and MAPK pathways. The dysfunction of signaling pathways in tumor cells is closely connected with miRNA dysregulation. Increased miR-127-3p, miR-495-3p, and miR-125b-5p have been proven to take part in suppressing the activity of the PI3K/AKT pathway [20–22]. Moreover, miR-409-3p has been reported to regulate MAPK signaling to block cervical cancer development [23]. Besides, decreased miR-301a-3p was also found in the pathological tissues.

miRNAs can obstruct protein translation by inducing the degradation of specific mRNAs [23]. In this project, the targets of the DEGs in GSE69470 and GSE70367 were predicted and used to reveal the molecular mechanism of osteosarcoma. Factors including HSPA5, PPARG, MAPK14, RAB11A, RAB5A, MAPK8, LEF1, GATA3, HIF1A, CAV1, GSK3B, FOXO3, IGF1, and NFKBIA were selected as hub nodes.
One study has indicated that HSPA5 inhibition is a promising method for inducing endoplasmic reticulum stress, autophagy, and apoptosis of tumor cells [24]. The MAPK family plays an important role in the progression of multiple tumors: increased MAPK8 plays a critical role in the development of colorectal cancer, and MAPK14 downregulation can effectively impede the malignant behaviors of clear cell renal cell carcinoma [25, 26]. Increasing numbers of studies have revealed that dysregulation of the RAS oncogene family is related to tumor progression; in this study, RAB11A and RAB5A were also identified as hub nodes of osteosarcoma. RAB11A is involved in the regulation of the Wnt/β-catenin pathway to promote the deterioration of prostate cancer, and RAB5A upregulation is related to the proliferation, invasion, and EMT of ovarian cancer [15, 27]. LEF1 upregulation is related to drug resistance in cancer; Fakhr et al. have proven that LEF1 silencing could improve the lethal effect of chemotherapy drugs on colorectal cancer cells [28]. HIF1A plays a key role in regulating blood vessel formation under hypoxic conditions, and some reports have indicated that HIF1A upregulation is related to the invasion and metastasis of tumor cells; in this study, HIF1A was also identified as a hub node. Moreover, CAV1, GSK3B, FOXO3, IGF1, and NFKBIA have also been proven to be prognostic biomarkers in multiple cancers, and increased RAB5A and NFKBIA were detected in the pathological tissues.

In conclusion, 7 miRNAs and 13 hub genes were identified in this study, which might be candidate markers. miR-301a-3p, RAB5A, and NFKBIA were abnormally expressed in osteosarcoma tissues. However, one limitation of this study is that no further experiments were conducted to verify whether miR-301a-3p, RAB5A, and NFKBIA affect tumor progression.
In addition, this study lacks analyses of additional datasets to verify its conclusions. --- *Source: 1015593-2022-11-16.xml*
2022
# Piecewise Approximate Analytical Solutions of High-Order Singular Perturbation Problems with a Discontinuous Source Term **Authors:** Essam R. El-Zahar **Journal:** International Journal of Differential Equations (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1015634 --- ## Abstract A reliable algorithm is presented to develop piecewise approximate analytical solutions of third- and fourth-order convection diffusion singular perturbation problems with a discontinuous source term. The algorithm is based on an asymptotic expansion approximation and Differential Transform Method (DTM). First, the original problem is transformed into a weakly coupled system of ODEs and a zero-order asymptotic expansion of the solution is constructed. Then a piecewise smooth solution of the terminal value reduced system is obtained by using DTM and imposing the continuity and smoothness conditions. The error estimate of the method is presented. The results show that the method is a reliable and convenient asymptotic semianalytical numerical method for treating high-order singular perturbation problems with a discontinuous source term. --- ## Body ## 1. Introduction Many mathematical problems that model real-life phenomena cannot be solved completely by analytical means. Some of the most important mathematical problems arising in applied mathematics are singular perturbation problems. These problems commonly occur in many branches of applied mathematics such as transition points in quantum mechanics, edge layers in solid mechanics, boundary layers in fluid mechanics, skin layers in electrical applications, and shock layers in fluid and solid mechanics. The numerical treatment of these problems is accompanied by major computational difficulties due to the presence of sharp boundary and/or interior layers in the solution. 
Therefore, more efficient and simpler computational methods are required to solve these problems. Over the past two decades, many numerical methods have appeared in the literature, covering mostly second-order singular perturbation boundary value problems (SPBVPs) [1–3], but only a few authors have developed numerical methods for higher order SPBVPs (see, e.g., [4–10]), and most of them have concentrated on problems with smooth data. Some authors have developed numerical methods for problems with discontinuous data, which gives rise to an interior layer in the exact solution of the problem, in addition to the boundary layer at the outflow boundary point. Most notable among these methods are the piecewise-uniform mesh finite difference method [11–14] and the fitted mesh finite element method [15, 16] for third- and fourth-order SPBVPs with a discontinuous source term. The aim of this paper is to employ a semianalytical method, the Differential Transform Method (DTM), as an alternative to existing methods for solving high-order SPBVPs with a discontinuous source term.

DTM was introduced by Zhou [17] in a study of electric circuits. This method is a formalized, modified version of the Taylor series method, in which the derivatives are evaluated through recurrence relations rather than symbolically as in the traditional Taylor series method. The method has been used effectively to obtain highly accurate solutions for large classes of linear and nonlinear problems (see, e.g., [17–21]). There is no need for discretization or perturbation, and large computational work and round-off errors are avoided.
Additionally, DTM does not generate secular (noise) terms and does not need analytical integration, unlike other semianalytical methods such as HPM, HAM, ADM, or VIM, and so DTM is an attractive tool for solving differential equations.

In this paper, a reliable algorithm is presented to develop piecewise approximate analytical solutions of third- and fourth-order convection diffusion singular perturbation problems with a discontinuous source term. The algorithm is based on an asymptotic expansion approximation and DTM. First, the original problem is transformed into a weakly coupled system of ODEs and a zero-order asymptotic expansion of the solution is constructed. Then a piecewise smooth solution of the terminal value reduced system is obtained by using DTM and imposing the continuity and smoothness conditions. The error estimate of the method is presented. The results show that the method is a reliable and convenient asymptotic semianalytical method for treating high-order singular perturbation problems with a discontinuous source term.

## 2. Differential Transform Method for ODE System

Let us describe the DTM for solving the following system of ODEs:

$$u_i'(t) = f_i(t, u_1, u_2, \ldots, u_n), \quad i = 1, 2, \ldots, n, \tag{1}$$

subject to the initial conditions

$$u_i(t_0) = c_i, \quad i = 1, 2, \ldots, n. \tag{2}$$

Let $[t_0, T]$ be the interval over which we want to find the solution of (1)-(2). In actual applications of the DTM, the $N$th-order approximate solution of (1)-(2) can be expressed by the finite series

$$u_i(t) = \sum_{k=0}^{N} U_i(k)\,(t - t_0)^k, \quad t \in [t_0, T],\; i = 1, 2, \ldots, n, \tag{3}$$

where

$$U_i(k) = \frac{1}{k!} \left[\frac{d^k u_i(t)}{dt^k}\right]_{t = t_0}, \quad i = 1, 2, \ldots, n, \tag{4}$$

which implies that $\sum_{k=N+1}^{\infty} U_i(k)(t - t_0)^k$ is negligibly small. Using some fundamental properties of DTM (Table 1), the ODE system (1)-(2) can be transformed into the following recurrence relations:

$$U_i(k+1) = \frac{F_i(k, U_1, U_2, \ldots, U_n)}{k+1}, \quad U_i(0) = c_i, \; i = 1, 2, \ldots, n, \tag{5}$$

where $F_i(k, U_1, U_2, \ldots, U_n)$ is the differential transform of the function $f_i(t, u_1, u_2, \ldots, u_n)$, for $i = 1, 2, \ldots, n$.
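As a concrete check of the recurrences (3)-(5), consider the linear system $u_1' = u_2$, $u_2' = -u_1$ with $u_1(0) = 0$, $u_2(0) = 1$, whose exact solution is $(\sin t, \cos t)$; here $F_1(k) = U_2(k)$ and $F_2(k) = -U_1(k)$, so (5) yields the coefficients directly. This is an illustrative sketch, not code from the paper:

```python
import math

def dtm_sin_cos(N):
    """Differential transforms of u1'=u2, u2'=-u1, u1(0)=0, u2(0)=1,
    computed via the recurrence U_i(k+1) = F_i(k)/(k+1)."""
    U1, U2 = [0.0], [1.0]
    for k in range(N):
        U1.append(U2[k] / (k + 1))   # F_1(k) = U_2(k)
        U2.append(-U1[k] / (k + 1))  # F_2(k) = -U_1(k)
    return U1, U2

def series(U, t):
    """Evaluate the truncated DTM series (3) around t0 = 0."""
    return sum(c * t**k for k, c in enumerate(U))

U1, U2 = dtm_sin_cos(20)
approx = (series(U1, 1.0), series(U2, 1.0))  # close to (sin 1, cos 1)
```

With $N = 20$ the truncation error at $t = 1$ is of order $1/21!$, far below double precision noise, illustrating the rapid convergence of the truncated series (3).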
Solving the recurrence relation (5), the differential transforms $U_i(k)$, $k > 0$, can be easily obtained.

Table 1 Some fundamental operations of DTM.

| Original function | Transformed function |
| --- | --- |
| $u(t) = \beta v(t) \pm w(t)$ | $U(k) = \beta V(k) \pm W(k)$ |
| $u(t) = v(t)\,w(t)$ | $U(k) = \sum_{l=0}^{k} V(l)\,W(k-l)$ |
| $u(t) = \dfrac{d^m v(t)}{dt^m}$ | $U(k) = \dfrac{(k+m)!}{k!}\,V(k+m)$ |
| $u(t) = (\beta + t)^m$ | $U(k) = H[m,k]\,\dfrac{m!}{k!\,(m-k)!}\,(\beta + t_0)^{m-k}$, where $H[m,k] = 1$ if $m - k \ge 0$ and $H[m,k] = 0$ if $m - k < 0$ |
| $u(t) = e^{\lambda t}$ | $U(k) = \dfrac{\lambda^k}{k!}\,e^{\lambda t_0}$ |
| $u(t) = \sin(\omega t + \beta)$ | $U(k) = \dfrac{\omega^k}{k!}\,\sin\!\left(\omega t_0 + \beta + \dfrac{k\pi}{2}\right)$ |
| $u(t) = \cos(\omega t + \beta)$ | $U(k) = \dfrac{\omega^k}{k!}\,\cos\!\left(\omega t_0 + \beta + \dfrac{k\pi}{2}\right)$ |

## 3. Description of the Method

Motivated by the works of [11, 13, 16], we suggest in the present paper an asymptotic semianalytical method based on DTM to develop piecewise approximate analytical solutions for the following class of SPBVPs.

Third-Order SPBVP [13]. Find $y \in C^1(\bar\Omega) \cap C^2(\Omega) \cap C^3(\Omega^- \cup \Omega^+)$ such that

$$-\varepsilon y'''(t) + a(t)y''(t) + b(t)y'(t) + c(t)y(t) = h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad y(0) = p,\; y'(0) = q,\; y'(1) = r, \tag{6}$$

where $a(t)$, $b(t)$, and $c(t)$ are sufficiently smooth functions on $\bar\Omega$ satisfying the conditions

$$a(t) \le -\alpha < 0, \quad b(t) \ge 0, \quad 0 \ge c(t) \ge -\gamma, \; \gamma > 0, \quad \alpha - \theta\gamma \ge \eta > 0 \text{ for some } \eta, \tag{7}$$

where $\theta$ is arbitrarily close to 1.

Fourth-Order SPBVP [11]. Find $y \in C^2(\bar\Omega) \cap C^3(\Omega) \cap C^4(\Omega^- \cup \Omega^+)$ such that

$$-\varepsilon y^{(iv)}(t) + a(t)y'''(t) + b(t)y''(t) - c(t)y(t) = -h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad y(0) = p,\; y(1) = q,\; y''(0) = -r,\; y''(1) = -s, \tag{8}$$

where $a(t)$, $b(t)$, and $c(t)$ are sufficiently smooth functions on $\bar\Omega$ satisfying the conditions

$$a(t) \le -\alpha < 0, \quad b(t) \ge 0, \quad 0 \ge c(t) \ge -\gamma, \; \gamma > 0, \quad \alpha - \theta\gamma \ge \eta > 0 \text{ for some } \eta, \tag{9}$$

where $\theta$ is arbitrarily close to 1. For both problems defined above, $\Omega = (0,1)$, $\Omega^- = (0,d)$, $\Omega^+ = (d,1)$, $\bar\Omega = [0,1]$, $\Omega^- \cup \Omega^+ = \Omega \setminus \{d\}$, and $0 < \varepsilon \ll 1$. It is assumed that $h(t)$ is sufficiently smooth on $\bar\Omega \setminus \{d\}$ and that its derivatives have a discontinuity at the point $d$, the jump at $d$ being $[h](d) = h(d^+) - h(d^-)$.
### 3.1. Zero-Order Asymptotic Expansion Approximations

The SPBVP (6) can be transformed into an equivalent problem of the form

$$y_1'(t) - y_2(t) = 0, \quad t \in \Omega \cup \{1\}, \qquad -\varepsilon y_2''(t) + a(t)y_2'(t) + b(t)y_2(t) + c(t)y_1(t) = h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad y_1(0) = p,\; y_2(0) = q,\; y_2(1) = r, \tag{10}$$

where $y_1 \in C^1(\bar\Omega)$ and $y_2 \in C^0(\bar\Omega) \cap C^1(\Omega) \cap C^2(\Omega^- \cup \Omega^+)$ [14]. Similarly, the SPBVP (8) can be transformed into

$$-y_1''(t) - y_2(t) = 0, \quad t \in \Omega \cup \{1\}, \qquad -\varepsilon y_2''(t) + a(t)y_2'(t) + b(t)y_2(t) + c(t)y_1(t) = h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad y_1(0) = p,\; y_1(1) = q,\; y_2(0) = r,\; y_2(1) = s, \tag{11}$$

where $y_1 \in C^2(\bar\Omega) \cap C^3(\Omega) \cap C^4(\Omega^- \cup \Omega^+)$ and $y_2 \in C^0(\bar\Omega) \cap C^1(\Omega) \cap C^2(\Omega^- \cup \Omega^+)$ [11].

Remark 1. Hereafter, only the above systems (10) and (11) are considered.

Using some standard perturbation methods [11, 13, 16, 22], one can construct an asymptotic expansion for the solution of (10) and (11) as follows. Find a continuous function $u = (u_1, u_2)^T$ of the terminal value reduced system of (10) such that

$$u_1'(t) - u_2(t) = 0, \qquad a(t)u_2'(t) + b(t)u_2(t) + c(t)u_1(t) = h(t), \qquad u_1(0) = p,\; u_1(d^-) = u_1(d^+),\; u_2(d^-) = u_2(d^+),\; u_2(1) = r. \tag{12}$$

That is, find a smooth function $u_1$ on $\Omega^- \cup \Omega^+$ that satisfies the following equivalent reduced BVP:

$$a(t)u_1''(t) + b(t)u_1'(t) + c(t)u_1(t) = h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad u_1(0) = p,\; u_1(d^-) = u_1(d^+),\; u_1'(d^-) = u_1'(d^+),\; u_1'(1) = r. \tag{13}$$

Then find

$$u_2(t) = u_1'(t). \tag{14}$$

Define $y_{as} = (y_{1,as}, y_{2,as})^T$ as

$$y_{1,as}(t) = u_1(t) + \begin{cases} \dfrac{\varepsilon}{a(0)}\,(q - u_2(0))\,e^{t a(0)/\varepsilon} + k t, & t \in \Omega^- \cup \{0, d\}, \\[2mm] \dfrac{k\varepsilon}{a(d)}\,e^{(t-d) a(d)/\varepsilon}, & t \in \Omega^+ \cup \{1\}, \end{cases} \qquad y_{2,as}(t) = u_2(t) + \begin{cases} (q - u_2(0))\,e^{t a(0)/\varepsilon} + k, & t \in \Omega^- \cup \{0, d\}, \\[1mm] k\,e^{(t-d) a(d)/\varepsilon}, & t \in \Omega^+ \cup \{1\}, \end{cases} \tag{15}$$

where $k = -\varepsilon\,[u_2'(d^+) - u_2'(d^-)]/a(d)$.

Similarly, one can construct an asymptotic expansion for the solution of (11). In fact, for this problem $u = (u_1, u_2)^T$ is the solution of the terminal value reduced system

$$-u_1''(t) - u_2(t) = 0, \qquad a(t)u_2'(t) + b(t)u_2(t) + c(t)u_1(t) = h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad u_1(0) = p,\; u_1(d^-) = u_1(d^+),\; u_1'(d^-) = u_1'(d^+),\; u_2(d^-) = u_2(d^+),\; u_1(1) = q,\; u_2(1) = s. \tag{16}$$

That is, in particular, find a smooth function $u_1$ on $\Omega^- \cup \Omega^+$ that satisfies the following equivalent reduced BVP:

$$a(t)u_1'''(t) + b(t)u_1''(t) - c(t)u_1(t) = -h(t), \quad t \in \Omega^- \cup \Omega^+, \qquad u_1(0) = p,\; u_1(d^-) = u_1(d^+),\; u_1'(d^-) = u_1'(d^+),\; u_1''(d^-) = u_1''(d^+),\; u_1(1) = q,\; u_1''(1) = -s. \tag{17}$$

Then find $u_2(t) = -u_1''(t)$. Define $y_{as} = (y_{1,as}, y_{2,as})^T$ as

$$y_{1,as}(t) = u_1(t) + \begin{cases} -\dfrac{k_1 \varepsilon^2}{a(0)^2}\,e^{a(0) t/\varepsilon}, & t \in \Omega^- \cup \{0, d\}, \\[2mm] -\dfrac{k_2 \varepsilon^2}{a(d)^2}\,e^{(t-d) a(d)/\varepsilon}, & t \in \Omega^+ \cup \{1\}, \end{cases} \qquad y_{2,as}(t) = u_2(t) + \begin{cases} k_1\,e^{a(0) t/\varepsilon}, & t \in \Omega^- \cup \{0, d\}, \\[1mm] k_2\,e^{(t-d) a(d)/\varepsilon}, & t \in \Omega^+ \cup \{1\}, \end{cases} \tag{18}$$

where

$$k_1 = r - u_2(0), \qquad k_2 = -\frac{\varepsilon}{a(d)}\,[u_2'(d^+) - u_2'(d^-)] + k_1\,e^{a(0) d/\varepsilon}. \tag{19}$$

Theorem 2 (see [11, 13]).
The zero-order asymptotic expansion $y_{as} = (y_{1,as}, y_{2,as})^T$ defined above for the solution $y = (y_1, y_2)^T$ of (10) and (11) satisfies the inequality

$$\|y - y_{as}\| \le C\varepsilon. \tag{20}$$

Now, in order to obtain piecewise analytical solutions of (10) and (11), we only need to obtain piecewise analytical solutions of the terminal value reduced systems (12) and (16), that is, the solutions of the equivalent reduced BVPs (13) and (17).

### 3.2. Piecewise Approximate Analytical Solutions

The solution $u_1(t)$ of BVP (13) can be represented in piecewise form:

$$u_1(t) = \begin{cases} u_{1L}(t), & t \in \Omega^-, \\ u_{1R}(t), & t \in \Omega^+. \end{cases} \tag{21}$$

Thus the BVP (13) is transformed into

$$a(t)u_{1L}''(t) + b(t)u_{1L}'(t) + c(t)u_{1L}(t) = h(t), \quad u_{1L}(0) = p,\; u_{1L}'(0) = \alpha_1, \quad t \in \Omega^-, \qquad a(t)u_{1R}''(t) + b(t)u_{1R}'(t) + c(t)u_{1R}(t) = h(t), \quad u_{1R}(1) = \beta_1,\; u_{1R}'(1) = r, \quad t \in \Omega^+, \tag{22}$$

with continuity and smoothness conditions $u_{1L}(d^-) = u_{1R}(d^+)$ and $u_{1L}'(d^-) = u_{1R}'(d^+)$, where $\alpha_1$ and $\beta_1$ are unknown constants.

Applying the $N$th-order DTM to (22) results in the recurrence relations

$$\sum_{l=0}^{k} A(k-l)(l+2)(l+1)U_{1L}(l+2) + \sum_{l=0}^{k} B(k-l)(l+1)U_{1L}(l+1) + \sum_{l=0}^{k} C(k-l)U_{1L}(l) = H(k), \quad U_{1L}(0) = p,\; U_{1L}(1) = \alpha_1, \qquad \sum_{l=0}^{k} A(k-l)(l+2)(l+1)U_{1R}(l+2) + \sum_{l=0}^{k} B(k-l)(l+1)U_{1R}(l+1) + \sum_{l=0}^{k} C(k-l)U_{1R}(l) = H(k), \quad U_{1R}(0) = \beta_1,\; U_{1R}(1) = r, \tag{23}$$

where $A(k)$, $B(k)$, $C(k)$, $H(k)$, $U_{1L}(k)$, and $U_{1R}(k)$ are the differential transforms of $a(t)$, $b(t)$, $c(t)$, $h(t)$, $u_{1L}(t)$, and $u_{1R}(t)$, respectively, and the values $\alpha_1$ and $\beta_1$ are determined from the transformed continuity and smoothness conditions

$$\sum_{k=0}^{N} U_{1L}(k)\,d^k = \sum_{k=0}^{N} U_{1R}(k)\,(d-1)^k, \qquad \sum_{k=0}^{N} (k+1)U_{1L}(k+1)\,d^k = \sum_{k=0}^{N} (k+1)U_{1R}(k+1)\,(d-1)^k. \tag{24}$$

The recurrence relations (23) with the transformed conditions (24) represent a system of algebraic equations in the coefficients of the power series solution of the reduced BVP (13) and the unknowns $\alpha_1$ and $\beta_1$.
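To make the matching step concrete, the sketch below applies the scheme of (22)-(24) to a toy reduced problem $u_1'' = h(t)$ (i.e., $a \equiv 1$, $b \equiv c \equiv 0$) with a source term jumping from $-1$ to $+1$ at $d = 0.5$, $u_1(0) = p$, and $u_1'(1) = r$. The left series is expanded about $t = 0$ and the right about $t = 1$; since the two matching conditions at $d$ are affine in the unknowns $\alpha_1, \beta_1$, a 2×2 linear solve suffices. This is a simplified illustration under invented problem data, not the paper's implementation:

```python
def series_val(U, t):
    """Truncated series sum U(k) * t**k."""
    return sum(c * t**k for k, c in enumerate(U))

def series_der(U, t):
    """Derivative series sum (k+1) * U(k+1) * t**k."""
    return sum((k + 1) * U[k + 1] * t**k for k in range(len(U) - 1))

def coeffs(c0, c1, h_const, N):
    """DTM coefficients for u'' = h_const: U(k+2) = H(k)/((k+2)(k+1)), H(0) = h_const."""
    U = [c0, c1] + [0.0] * (N - 1)
    U[2] = h_const / 2.0
    return U

def residual(alpha, beta, p, r, d, N):
    """Mismatch of value and slope at t = d between the two expansions."""
    UL = coeffs(p, alpha, -1.0, N)  # left problem about t = 0, h = -1
    UR = coeffs(beta, r, +1.0, N)   # right problem about t = 1, h = +1
    s = d - 1.0                     # right series is in powers of (t - 1)
    return (series_val(UL, d) - series_val(UR, s),
            series_der(UL, d) - series_der(UR, s))

def solve_matching(p=0.0, r=1.0, d=0.5, N=6):
    """Solve the affine 2x2 system for (alpha_1, beta_1) by probing the residual."""
    b1, b2 = residual(0.0, 0.0, p, r, d, N)
    a11, a21 = (x - y for x, y in zip(residual(1.0, 0.0, p, r, d, N), (b1, b2)))
    a12, a22 = (x - y for x, y in zip(residual(0.0, 1.0, p, r, d, N), (b1, b2)))
    det = a11 * a22 - a12 * a21  # Cramer's rule on A @ x = -b
    return ((-b1 * a22 + b2 * a12) / det, (-b2 * a11 + b1 * a21) / det)

alpha, beta = solve_matching()
```

For these data the exact matched solution has $\alpha_1 = 1$ and $\beta_1 = 0.75$, which the solve recovers; for variable coefficients one would instead generate the coefficients from the full convolution recurrence (23).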
Solving this algebraic system, the piecewise smooth approximate solution $\tilde u = (\tilde u_1(t), \tilde u_2(t))^T$ of (13) is obtained and given by

$$\tilde u_1(t) = \begin{cases} \sum_{k=0}^{N} U_{1L}(k)\,t^k, & t \in \Omega^-, \\[1mm] \sum_{k=0}^{N} U_{1R}(k)\,(t-1)^k, & t \in \Omega^+, \end{cases} \qquad \tilde u_2(t) = \begin{cases} \sum_{k=0}^{N} (k+1)U_{1L}(k+1)\,t^k, & t \in \Omega^-, \\[1mm] \sum_{k=0}^{N} (k+1)U_{1R}(k+1)\,(t-1)^k, & t \in \Omega^+. \end{cases} \tag{25}$$

Thus, the piecewise approximate analytical solution $y_{ap} = (y_{1,ap}, y_{2,ap})^T$ of (10) is obtained and given by

$$y_{1,ap}(t) = \tilde u_1(t) + \begin{cases} \dfrac{\varepsilon}{a(0)}\,(q - \tilde u_2(0))\,e^{t a(0)/\varepsilon} + k\varepsilon t, & t \in [0, d], \\[2mm] \dfrac{k\varepsilon^2}{a(d)}\,e^{(t-d) a(d)/\varepsilon}, & t \in (d, 1], \end{cases} \qquad y_{2,ap}(t) = \tilde u_2(t) + \begin{cases} (q - \tilde u_2(0))\,e^{t a(0)/\varepsilon} + k\varepsilon, & t \in [0, d], \\[1mm] k\varepsilon\,e^{(t-d) a(d)/\varepsilon}, & t \in (d, 1], \end{cases} \tag{26}$$

where $k = [\tilde u_2'(d^-) - \tilde u_2'(d^+)]/a(d)$.

Similarly, the reduced BVP (17) can be transformed into

$$a(t)u_{1L}'''(t) + b(t)u_{1L}''(t) - c(t)u_{1L}(t) = -h(t), \quad u_{1L}(0) = p,\; u_{1L}'(0) = \alpha_1,\; u_{1L}''(0) = \alpha_2, \quad t \in \Omega^-, \qquad a(t)u_{1R}'''(t) + b(t)u_{1R}''(t) - c(t)u_{1R}(t) = -h(t), \quad u_{1R}(1) = q,\; u_{1R}'(1) = \beta_1,\; u_{1R}''(1) = -s, \quad t \in \Omega^+, \tag{27}$$

with continuity and smoothness conditions $u_{1L}(d^-) = u_{1R}(d^+)$, $u_{1L}'(d^-) = u_{1R}'(d^+)$, and $u_{1L}''(d^-) = u_{1R}''(d^+)$, where $\alpha_1$, $\alpha_2$, and $\beta_1$ are unknown constants.

Applying the $N$th-order DTM to (27) results in the recurrence relations

$$\sum_{l=0}^{k} A(k-l)(l+3)(l+2)(l+1)U_{1L}(l+3) + \sum_{l=0}^{k} B(k-l)(l+2)(l+1)U_{1L}(l+2) - \sum_{l=0}^{k} C(k-l)U_{1L}(l) = -H(k), \quad U_{1L}(0) = p,\; U_{1L}(1) = \alpha_1,\; 2U_{1L}(2) = \alpha_2, \qquad \sum_{l=0}^{k} A(k-l)(l+3)(l+2)(l+1)U_{1R}(l+3) + \sum_{l=0}^{k} B(k-l)(l+2)(l+1)U_{1R}(l+2) - \sum_{l=0}^{k} C(k-l)U_{1R}(l) = -H(k), \quad U_{1R}(0) = q,\; U_{1R}(1) = \beta_1,\; 2U_{1R}(2) = -s, \tag{28}$$

where the unknown constants $\alpha_1$, $\alpha_2$, and $\beta_1$ are determined from the transformed continuity and smoothness conditions

$$\sum_{k=0}^{N} U_{1L}(k)\,d^k = \sum_{k=0}^{N} U_{1R}(k)\,(d-1)^k, \qquad \sum_{k=0}^{N} (k+1)U_{1L}(k+1)\,d^k = \sum_{k=0}^{N} (k+1)U_{1R}(k+1)\,(d-1)^k, \qquad \sum_{k=0}^{N} (k+2)(k+1)U_{1L}(k+2)\,d^k = \sum_{k=0}^{N} (k+2)(k+1)U_{1R}(k+2)\,(d-1)^k. \tag{29}$$

The piecewise approximate analytical solution $y_{ap} = (y_{1,ap}, y_{2,ap})^T$ of (11) is then obtained and given by

$$y_{1,ap}(t) = \tilde u_1(t) + \begin{cases} -\dfrac{k_1 \varepsilon^2}{a(0)^2}\,e^{a(0) t/\varepsilon}, & t \in [0, d], \\[2mm] -\dfrac{k_2 \varepsilon^2}{a(d)^2}\,e^{(t-d) a(d)/\varepsilon}, & t \in (d, 1], \end{cases} \qquad y_{2,ap}(t) = \tilde u_2(t) + \begin{cases} k_1\,e^{a(0) t/\varepsilon}, & t \in [0, d], \\[1mm] k_2\,e^{(t-d) a(d)/\varepsilon}, & t \in (d, 1], \end{cases} \tag{30}$$

where

$$k_1 = r - \tilde u_2(0), \qquad k_2 = -\frac{\varepsilon}{a(d)}\,[\tilde u_2'(d^-) - \tilde u_2'(d^+)] + (r - \tilde u_2(0))\,e^{a(0) d/\varepsilon}. \tag{31}$$

### 3.3. Error Estimate

The error estimate of the present method has two sources: one from the asymptotic approximation and the other from the truncated series approximation by DTM.

Theorem 3. Let $y = (y_1, y_2)^T$ be the solution of (10).
Further, let $y_{ap} = (y_{1,ap}, y_{2,ap})^T$ be the approximate solution (26). Then

$$\|y - y_{ap}\| \le C\left(\varepsilon + \frac{1}{(N+1)!}\right). \tag{32}$$

Proof. Since the DTM is a formalized, modified version of the Taylor series method, we have a bounded error given by

$$\|u - \tilde u\| \le \frac{M}{(N+1)!}, \quad \text{where } \|u^{(N+1)}(\xi)\| \le M, \; 0 \le \xi \le 1. \tag{33}$$

From Theorem 2 and the above error bound, we have

$$\|y - y_{ap}\| \le \|y - y_{as}\| + \|y_{as} - y_{ap}\| \le C_1\varepsilon + \frac{M}{(N+1)!}. \tag{34}$$

Since the singular perturbation parameter $\varepsilon$ is extremely small, the present method works well for singular perturbation problems.

Remark 4. A similar statement holds for the solution of (11) and the approximate solution (30).
Solving this algebraic system, the piecewise smooth approximate solution u~=(u~1(t),u~2(t))T of (13) is obtained and given by(25) u ~ 1 t = ∑ k = 0 N U 1 L k t k , t ∈ Ω - ∑ k = 0 N U 1 R k t - 1 k , t ∈ Ω + , u ~ 2 t = ∑ k = 0 N k + 1 U 1 L k + 1 t k , t ∈ Ω - ∑ k = 0 N k + 1 U 1 R k + 1 t - 1 k , t ∈ Ω + .And thus, the piecewise approximate analytical solution yap=(y1,ap,y2,ap)T of (10) is obtained and given by(26)y1,apt=u~1t+εa0q-u~20eta0/ε+kεt,t∈0,d,κε2adet-dad/ε,t∈d,1,y2,apt=u~2t+q-u~20eta0/ε+kε,t∈0,d,kεet-dad/ε,t∈d,1,where k=u~2′(d-)-u~2′(d+)/a(d).Similarly the reduced BVP (17) can be transformed into(27)atu1L′′′t+btu1L′′t-ctu1Lt=-ht,u1L0=p,u1L′0=α1,u1L′′0=α2,t∈Ω-,atu1R′′′t+btu1R′′t-ctu1Rt=-ht,u1R1=q,u1R′1=β1,u1R′′1=-s,t∈Ω+,with continuity and smoothness conditions u1L(d-)=u1R(d+), u1L′(d-)=u1R′(d+), u1L′′(d-)=u1R′′(d+) and α1, α2, β1 are unknown constants.ApplyingNth-order DTM on (27) results in the recurrence relations(28)∑l=0kAk-ll+3l+2l+1U1Ll+3+∑l=0kBk-ll+2l+1U1Ll+2-∑l=0kCk-lU1Ll=-Hk,U1L0=p,U1L1=α1,2U1L2=α2,∑l=0kAk-ll+3l+2l+1U1Rl+3+∑l=0kBk-ll+2l+1U1Rl+2-∑l=0kCk-lU1Rl=-Hk,U1R0=q,U1R1=β1,2U1R2=-s,where the unknown constants α1, α2, and β1 are determined from the transformed continuity and smoothness conditions:(29)∑k=0NU1Lkd-k=∑k=0NU1Rkd+-1k,∑k=0Nk+1U1Lk+1d-k=∑k=0Nk+1U1Rk+1d+-1k,∑k=0Nk+2k+1U1Lk+2d-k=∑k=0Nk+2k+1U1Rk+2d+-1k.And the piecewise approximate analytical solution yap=(y1,ap,y2,ap)T of (11) is obtained and given by(30) y 1 , a p t = u ~ 1 t + k 1 ε 2 a 0 2 e a 0 t / ε , t ∈ 0 , d , k 2 ε 2 a d 2 e t - d a d / ε , t ∈ d , 1 , y 2 , a p t = u ~ 2 t + k 1 e a 0 t / ε , t ∈ 0 , d , k 2 e t - d a d / ε , t ∈ d , 1 ,where(31)k1=r-u~20,k2=-εa0u~2′d--u~2′d++r-u~20ea0d/ε. ## 3.3. Error Estimate The error estimate of the present method has two sources: one from the asymptotic approximation and the other from the truncated series approximation by DTM.Theorem 3. Lety=(y1,y2)T be the solution of (10). 
Further let yap=(y1,ap,y2,ap)T be the approximate solution (26). Then(32)y-yap≤Cε+1N+1!.Proof. Since the DTM is a formalized modified version of the Taylor series method, then we have a bounded error given by(33)u-u~≤MN+1!,M≤uN+1ξ,0≤ξ≤1.From Theorem 2 and the above bounded error, we have(34)y-yap≤y-yas+yas-yap≤C1ε+MN+1!.Since the singular perturbation parameter ε is extremely small, the present method works well for singular perturbation problems.Remark 4. A similar statement is true for the solution of (11) and the approximate solution (30). ## 4. Illustrating Examples In this section we will apply the method described in the previous section to find piecewise approximate analytical solutions for three SPBVPs with a discontinuous source term.Example 1. Consider the third-order SPBVP from [13, 16](35)-εy′′′t-2y′′t+4y′t-2yt=ht,y0=1,y′0=0,y′1=0,where(36)ht=0.7,0≤t≤0.5-0.6,0.5<t≤1.Using the present method with 5th-order DTM, the piecewise analytical solution is given by(37)y1,apt=1+0.575447t-0.099553t2-0.162277t3-0.072842t4-0.021023t5+0.287723εe-2t/ε+ε20.966783-0.849226et,t∈0,0.5,1.330163-0.515081t-12-0.343388t-13-0.128770t-14-0.034339t-15+0.287723εe-2t/ε-ε240.966783-0.849226ee-2t-0.5/ε,t∈0.5,1.0,y2,apt=0.575447-0.199107t-0.486830t2-0.291369t3-0.105115t4-0.575447e-2t/ε+ε20.966783-0.849226e,t∈0,0.5,-1.030163t+1.030163-1.030163t-12-0.515081t-13-0.171694t-14-0.575447e-2t/ε+ε20.966783-0.849226ee-2t-0.5/ε,t∈0.5,1.0.Example 2. 
Consider the third-order SPBVP with variable coefficients from [13, 16](38)-εy′′′t-2ety′′t+cos⁡πt4y′t-1+xyt=ht,y0=0,y′0=0,y′1=1,where(39)ht=2t3,0≤t≤0.510t+1,0.5<t≤1.Using the present method with 5th-order DTM, the piecewise analytical solution is given by(40)y1,apt=1.865844t+0.466461t2-0.233230t3-0.072568t4+0.000470t5+0.932922εe-2t/ep-0.513641εt,t∈0,0.5,0.814311+t-1.280360t-12-0.069249t-13+0.181448t-14-0.052651t-15+0.932922εe-2t/ε+0.155770ε2e-3.297442t-0.5/ε,t∈0.5,1.0,y2,apt=1.865844+0.932922t-0.699692t2-0.290271t3+0.002350t4-1.865844e-2t/ε-0.513641εt,t∈0,0.5,3.560719-2.560719t-0.207747t-12+0.725794t-13-0.263254t-14-1.865844e-2t/ε-0.513641εe-3.297442t-0.5/ε,t∈0.5,1.0.Example 3. Consider the fourth-order SPBVP from [13, 16] (41)-εyivt-4y′′′t+4y′′t=-ht,t∈Ω-∪Ω+,y0=1,y1=1,y′′0=-1,y′′1=-1,where(42)ht=0.7,0≤t≤0.5-0.6,0.5<t≤1.Using the present method with 5th-order DTM, the piecewise analytical solution is given by(43)y1,apt=1+0.256751t-0.199842t2-0.037447t3-0.009362t4-0.0018724t5-0.037520ε2e-4t/ε,t∈0,0.5,1.353655-0.353655t-12t-12-23120t-13-23480t-14-232400t-15-1162.5εE-11+0.600316e-2.0/εε2e-4t-0.5/ε,t∈0.5,1.0,y2,apt=0.399683+0.224683t+0.112342t2+0.037447t3+0.600316e-4t/ε,t∈0,0.5,-320+2320t+2340t-12+23120t-13+2.5εE-11+0.600316e-2.0/εe-4t-0.5/ε,t∈0,1.0.The numerical solution for each example is presented overall the problem domain as shown in Figures 1–3. The corresponding maximum pointwise errors are taken to be(44)EεN=maxti∈Ω-εNyapNti-yap50ti,EN=max⁡EεN,where yapN(ti) is the obtained approximate solution using Nth-order DTM over a uniform mesh ti=ih, ti∈[0,1], h=10-3, i=0,1,2,…and yap50(ti) is our numerical reference solution obtained using DTM with order N=50. The computed maximum pointwise errorsEεN and EN for the above solved BVPs are given in Tables 2–7. 
The numerical results in Tables 2–7 agree with the theoretical ones present in this paper where the obtained solutions and their derivatives converge rapidly to the reference solutions with increasing the order of the DTM.Table 2 Maximum pointwise errorsEεN and EN for the solution y1,ap of Example 1. ε Approximation order of DTM,N 4 6 8 10 2 - 3 2.9574 e - 3 6.5563 e - 5 5.7278 e - 7 6.3740 e - 9 2 - 9 2.9573 e - 3 3.0803 e - 5 1.6200 e - 7 4.0000 e - 9 2 - 15 2.9573 e - 3 3.0803 e - 5 1.6200 e - 7 4.0000 e - 9 2 - 21 2.9573 e - 3 3.0803 e - 5 1.6200 e - 7 4.0000 e - 9 E N 2.9574 e - 3 6.5563 e - 5 5.7278 e - 7 6.3740 e - 9Table 3 Maximum pointwise errorsEεN and EN for the first derivative solution y2,ap of Example 1. ε Approximation order of DTM,N 4 6 8 10 2 - 3 8.8781 e - 3 1.6938 e - 4 1.3570 e - 6 8.6900 e - 9 2 - 9 1.0321 e - 2 1.3075 e - 4 7.8084 e - 7 2.3233 e - 9 2 - 15 1.0444 e - 2 1.3312 e - 4 8.0028 e - 7 2.6769 e - 9 2 - 21 1.0444 e - 2 1.3316 e - 4 8.0067 e - 7 2.4481 e - 9 E N 1.0444 e - 2 1.6938 e - 4 8.0067 e - 7 8.6900 e - 9Table 4 Maximum pointwise errorsEεN and EN for the solution y1,ap of Example 2. ε Approximation order of DTM,N 4 6 8 10 2 - 3 1.2547 e - 2 1.0369 e - 3 1.3608 e - 5 1.5365 e - 6 2 - 9 1.2547 e - 2 1.0077 e - 3 1.3488 e - 5 6.8490 e - 7 2 - 15 1.2547 e - 2 1.0077 e - 3 1.3488 e - 5 6.8100 e - 7 2 - 21 1.2547 e - 2 1.0077 e - 3 1.3488 e - 5 6.8100 e - 7 E N 1.2547 e - 2 1.0369 e - 3 1.3488 e - 5 1.5365 e - 6Table 5 Maximum pointwise errorsEεN and EN for the first derivative solution y2,ap of Example 2. ε Approximation order of DTM,N 4 6 8 10 2 - 3 1.8191 e - 2 2.2563 e - 3 2.9397 e - 5 3.2970 e - 6 2 - 9 2.1534 e - 2 2.0165 e - 3 2.6724 e - 5 1.5660 e - 6 2 - 15 2.1583 e - 2 2.0124 e - 3 2.6679 e - 5 1.5390 e - 6 2 - 21 2.1584 e - 2 2.0123 e - 3 2.6678 e - 5 1.5390 e - 6 E N 2.1584 e - 2 2.2563 e - 3 2.9397 e - 5 3.2970 e - 6Table 6 Maximum pointwise errorsEεN and EN for the solution y1,ap of Example 3. 
# Piecewise Approximate Analytical Solutions of High-Order Singular Perturbation Problems with a Discontinuous Source Term

**Authors:** Essam R. El-Zahar
**Journal:** International Journal of Differential Equations (2016)
**Category:** Mathematical Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2016/1015634
---

## Abstract

A reliable algorithm is presented to develop piecewise approximate analytical solutions of third- and fourth-order convection diffusion singular perturbation problems with a discontinuous source term. The algorithm is based on an asymptotic expansion approximation and the Differential Transform Method (DTM). First, the original problem is transformed into a weakly coupled system of ODEs and a zero-order asymptotic expansion of the solution is constructed. Then a piecewise smooth solution of the terminal value reduced system is obtained by using DTM and imposing the continuity and smoothness conditions. The error estimate of the method is presented. The results show that the method is a reliable and convenient asymptotic semianalytical numerical method for treating high-order singular perturbation problems with a discontinuous source term.

---

## Body

## 1. Introduction

Many mathematical problems that model real-life phenomena cannot be solved completely by analytical means, and some of the most important problems arising in applied mathematics are singular perturbation problems. These problems commonly occur in many branches of applied mathematics, such as transition points in quantum mechanics, edge layers in solid mechanics, boundary layers in fluid mechanics, skin layers in electrical applications, and shock layers in fluid and solid mechanics. The numerical treatment of these problems is accompanied by major computational difficulties due to the presence of sharp boundary and/or interior layers in the solution. Therefore, more efficient and simpler computational methods are required to solve these problems.

Over the past two decades, many numerical methods have appeared in the literature, covering mostly second-order singular perturbation boundary value problems (SPBVPs) [1–3], but only a few authors have developed numerical methods for higher-order SPBVPs (see, e.g., [4–10]).
However, most of them have concentrated on problems with smooth data. Some authors have developed numerical methods for problems with discontinuous data, which give rise to an interior layer in the exact solution of the problem, in addition to the boundary layer at the outflow boundary point. Most notable among these are the piecewise-uniform mesh finite difference method [11–14] and the fitted mesh finite element method [15, 16] for third- and fourth-order SPBVPs with a discontinuous source term. The aim of this paper is to employ a semianalytical method, the Differential Transform Method (DTM), as an alternative to existing methods for solving high-order SPBVPs with a discontinuous source term.

DTM was introduced by Zhou [17] in a study of electric circuits. It is a formalized, modified version of the Taylor series method in which the derivatives are evaluated through recurrence relations rather than symbolically, as in the traditional Taylor series method. The method has been used effectively to obtain highly accurate solutions for large classes of linear and nonlinear problems (see, e.g., [17–21]). It requires no discretization or perturbation, and large computational effort and round-off errors are avoided. Additionally, DTM does not generate secular (noise) terms and does not need the analytical integrations required by other semianalytical methods such as HPM, HAM, ADM, or VIM, which makes it an attractive tool for solving differential equations.

In this paper, a reliable algorithm is presented to develop piecewise approximate analytical solutions of third- and fourth-order convection diffusion singular perturbation problems with a discontinuous source term. The algorithm is based on an asymptotic expansion approximation and DTM. First, the original problem is transformed into a weakly coupled system of ODEs and a zero-order asymptotic expansion of the solution is constructed.
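Before the formal description in Section 2, here is a minimal Python illustration of the recurrence-based evaluation that DTM performs (the toy system is our own choice for illustration, not code from the paper): for u1′ = u2, u2′ = −u1 with u1(0) = 1, u2(0) = 0, the exact solution is u1 = cos t, u2 = −sin t.

```python
import math

def dtm_coefficients(N):
    # DTM recurrences for the toy system u1' = u2, u2' = -u1:
    # (k + 1) U1(k+1) = U2(k),  (k + 1) U2(k+1) = -U1(k)
    U1, U2 = [1.0], [0.0]          # initial conditions U1(0) = 1, U2(0) = 0
    for k in range(N):
        U1.append(U2[k] / (k + 1))
        U2.append(-U1[k] / (k + 1))
    return U1, U2

def dtm_eval(U, t, t0=0.0):
    # truncated power series: sum_{k=0}^{N} U(k) (t - t0)^k
    return sum(c * (t - t0) ** k for k, c in enumerate(U))

U1, U2 = dtm_coefficients(20)
print(dtm_eval(U1, 0.5), math.cos(0.5))    # the two values agree closely
```

Note that no derivative is ever formed symbolically; each Taylor coefficient follows from the previous ones, which is what makes the method cheap for the reduced problems constructed below.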
Then a piecewise smooth solution of the terminal value reduced system is obtained by using DTM and imposing the continuity and smoothness conditions. The error estimate of the method is presented. The results show that the method is a reliable and convenient asymptotic semianalytical method for treating high-order singular perturbation problems with a discontinuous source term.

## 2. Differential Transform Method for ODE System

Let us describe the DTM for solving the following system of ODEs: (1) u1′(t)=f1(t,u1,u2,…,un), u2′(t)=f2(t,u1,u2,…,un), …, un′(t)=fn(t,u1,u2,…,un), subject to the initial conditions (2) ui(t0)=ci, i=1,2,…,n.

Let [t0,T] be the interval over which we want to find the solution of (1)-(2). In actual applications of the DTM, the Nth-order approximate solution of (1)-(2) can be expressed by the finite series (3) ui(t)=∑k=0N Ui(k)(t−t0)^k, t∈[t0,T], i=1,2,…,n, where (4) Ui(k)=(1/k!)[d^k ui(t)/dt^k]_(t=t0), i=1,2,…,n, which implies that ∑k=N+1∞ Ui(k)(t−t0)^k is negligibly small. Using some fundamental properties of DTM (Table 1), the ODE system (1)-(2) can be transformed into the following recurrence relations: (5) Ui(k+1)=Fi(k,U1,U2,…,Un)/(k+1), Ui(0)=ci, i=1,2,…,n, where Fi(k,U1,U2,…,Un) is the differential transform of the function fi(t,u1,u2,…,un), for i=1,2,…,n. Solving the recurrence relation (5), the differential transforms Ui(k), k>0, can be easily obtained.

Table 1: Some fundamental operations of DTM.

| Original function | Transformed function |
| --- | --- |
| u(t) = βv(t) ± w(t) | U(k) = βV(k) ± W(k) |
| u(t) = v(t)w(t) | U(k) = ∑l=0k V(l)W(k−l) |
| u(t) = d^m v(t)/dt^m | U(k) = ((k+m)!/k!) V(k+m) |
| u(t) = (β+t)^m | U(k) = H[m,k] (m!/(k!(m−k)!)) (β+t0)^(m−k), where H[m,k] = 1 if m−k ≥ 0 and 0 if m−k < 0 |
| u(t) = e^(λt) | U(k) = (λ^k/k!) e^(λt0) |
| u(t) = sin(ωt+β) | U(k) = (ω^k/k!) sin(ωt0+β+kπ/2) |
| u(t) = cos(ωt+β) | U(k) = (ω^k/k!) cos(ωt0+β+kπ/2) |

## 3.
Description of the Method Motivated by the works of [11, 13, 16], we, in the present paper, suggest an asymptotic semianalytic method which is DTM to develop piecewise approximate analytical solutions for the following class of SPBVPs.Third-Order SPBVP [13]. Find y∈C1(Ω-)∩C2(Ω)∩C3Ω-∪Ω+ such that (6)-εy′′′t+aty′′t+bty′t+ctyt=ht,t∈Ω-∪Ω+,y0=p,y′0=q,y′1=r,where a(t), b(t), and c(t) are sufficiently smooth functions on Ω- satisfying the following conditions:(7)at<0,bt≥0,0≥ct≥-γ,γ>0,α-θγ≥η>0,where θ  is  arbitrarily  close  to  1, for some η.Fourth-Order SPBVP [11]. Find y∈C2(Ω-)∩C3(Ω)∩C4Ω-∪Ω+ such that (8)-εyivt+aty′′′t+bty′′t-ctyt=-ht,t∈Ω-∪Ω+,y0=p,y1=q,y′′0=-r,y′′1=-s,where a(t), b(t), and c(t) are sufficiently smooth functions on Ω- satisfying the following conditions:(9)at<0,bt≥0,0≥ct≥-γ,γ>0,α-θγ≥η>0,where θis  arbitrarily  close  to  1,  for  someη. For both the problems defined above, Ω=(0,1), Ω-=(0,d), Ω+=(d,1), Ω-=Ω-∪Ω+, Ω-=Ω∖{d}, and 0<ε≪1. It is assumed that h(t) is sufficiently smooth on Ω- and its derivatives have discontinuity at the point d and the jump at d is given as h(d)=h(d+)-h(d-). ### 3.1. Zero-Order Asymptotic Expansion Approximations The SPBVP (6) can be transformed into an equivalent problem of the form(10)y1′t-y2t=0,t∈Ω∪1,-εy2′′t+aty2′t+bty2t+cty1t=ht,t∈Ω-,y10=p,y20=q,y21=r,where y1∈C1(Ω-) and y2∈C0(Ω-)∩C1(Ω)∩C2Ω-∪Ω+ [14].Similarly the SPBVP (8) can be transformed into(11)-y1′′t-y2t=0,t∈Ω∪1,-εy2′′t+aty2′t+bty2t+cty1t=ht,t∈Ω-,y10=p,y11=q,y20=r,y21=s,where y1∈C2(Ω-)∩C3(Ω)∩C4Ω-∪Ω+ and y2∈C0(Ω-)∩C1(Ω)∩C2Ω-∪Ω+ [11].Remark 1. 
Hereafter, only the above systems (10) and (11) are considered.Using some standard perturbation methods [11, 13, 16, 22] one can construct an asymptotic expansion for the solution of (10) and (11) as follows.Find a continuous functionu=(u1,u2)T of the terminal value reduced system of (10) such that(12)u1′t-u2t=0,atu2′t+btu2t+ctu1t=ht,u10=p,u1d-=u1d+,u2d-=u2d+,u21=r.That is, find a smooth function u1 on Ω- that satisfies the following equivalent reduced BVP:(13)atu1′′t+btu1′t+ctu1t=ht,t∈Ω-,u10=p,u1d-=u1d+,u1′d-=u1′d+,u1′1=r.Then find(14)u2t=u1′t.Define yas=(y1,as,y2,as)T on Ω- as(15)y1,ast=u1t+εa0q-u20eta0/ε+kt,t∈Ω-∪0,d,kεadet-dad/ε,t∈Ω+∪1,y2,ast=u2t+q-u20eta0/ε+k,t∈Ω-∪0,d,ket-dad/ε,t∈Ω+∪1,where k=-εu2′(d+)-u2′(d-)/a(d).Similarly one can construct an asymptotic expansion for the solution of (11). In fact, for this problem u=(u1,u2)T is the solution of the terminal value reduced system (16)-u1′′t-u2t=0,atu2′t+btu2t+ctu1t=ht,t∈Ω-∪Ω+,u10=p,u1d-=u1d+,u1′d-=u1′d+,u2d-=u2d+,u11=q,u21=s.That is, in particular, find a smooth function u1 on Ω- that satisfies the following equivalent reduced BVP:(17)atu1′′′t+btu1′′t-ctu1t=-ht,t∈Ω-∪Ω+,u10=p,u1d-=u1d+,u1′d-=u1′d+,u1′′d-=u1′′d+,u11=q,u1′′1=-s.Then find u2(t)=-u1′′(t).Defineyas=(y1,as,y2,as)T on Ω- as(18) y 1 , a s t = u 1 t + - k 1 ε 2 a 0 2 e a 0 t / ε , t ∈ Ω - ∪ 0 , d , - k 2 ε 2 a d 2 e t - d a d / ε , t ∈ Ω + ∪ 1 , y 2 , a s t = u 2 t + k 1 e a 0 t / ε , t ∈ Ω - ∪ 0 , d , k 2 e t - d a d / ε , t ∈ Ω + ∪ 1 ,where(19)k1=r-u20,k2=-εa0u2′d+-u2′d-+k1ea0d/ε.Theorem 2 (see [11, 13]). The zero-order asymptotic expansionyas=(y1,as,y2,as)T defined above for the solution y=(y1,y2)T of (10) and (11) satisfies the inequality(20)y-yas≤Cε.Now, in order to obtain piecewise analytical solutions of (10) and (11), we only need to obtain piecewise analytical solutions of the terminal value reduced systems (12) and (16), that is, the solution of equivalent reduced BVPs (13) and (17). ### 3.2. 
Piecewise Approximate Analytical Solutions The solutionu1(t) of BVP (13) can be represented as a piecewise solution form:(21)u1t=u1Lt,t∈Ω-u1Rt,t∈Ω+.Thus the BVP (13) is transformed into(22)atu1L′′t+btu1L′t+ctu1Lt=ht,u1L0=p,u1L′0=α1,t∈Ω-,atu1R′′t+btu1R′t+ctu1Rt=ht,u1R1=β1,u1R′1=r,t∈Ω+,with continuity and smoothness conditions u1L(d-)=u1R(d+), u1L′(d-)=u1R′(d+) and α1, β1 are unknown constants.ApplyingNth-order DTM on (22) results in the recurrence relations(23)∑l=0kAk-ll+2l+1U1Ll+2+∑l=0kBk-ll+1U1Ll+1+∑l=0kCk-lU1Ll=Hk,U1L0=p,U1L1=α1,∑l=0kAk-ll+2l+1U1Rl+2+∑l=0kBk-ll+1U1Rl+1+∑l=0kCk-lU1Rl=Hk,U1R0=β1,U1R1=r,where A(k), B(k), C(k), H(k), U1L(k), and U1R(k) are the differential transform of a(t), b(t), c(t), h(t), u1L(t), and u1R(t), respectively, and α1 and β1 values are determined from the transformed continuity and smoothness conditions:(24)∑k=0NU1Lkd-k=∑k=0NU1Rkd+-1k,∑k=0Nk+1U1Lk+1d-k=∑k=0Nk+1U1Rk+1d+-1k.The recurrence relations (23) with transformed conditions (24) represent a system of algebraic equations in the coefficients of the power series solution of the reduced BVP (13) and the unknowns α1 and β1. 
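To make the construction concrete, the sketch below solves the reduced BVP (13) for the constant-coefficient data of Example 1 in Section 4 (a = −2, b = 4, c = −2, d = 0.5, h = 0.7 on the left and −0.6 on the right, p = 1, r = 0). The affine-pair bookkeeping is our own illustration, not the paper's stated implementation: each DTM coefficient is tracked as a pair [constant, slope] in the piece's unknown (alpha1 on the left, beta1 on the right), so the matching conditions (24) reduce to a 2×2 linear system.

```python
import numpy as np

A0, B0, C0 = -2.0, 4.0, -2.0      # constant coefficients of Example 1
N, d = 12, 0.5                    # series order and discontinuity point

def dtm_piece(u0, u1, h0):
    # recurrence (23) for constant coefficients:
    # (k+2)(k+1) A0 U(k+2) = H(k) - (k+1) B0 U(k+1) - C0 U(k)
    U = [np.array(u0), np.array(u1)]
    for k in range(N - 1):
        Hk = np.array([h0, 0.0]) if k == 0 else np.zeros(2)
        U.append((Hk - (k + 1) * B0 * U[k + 1] - C0 * U[k])
                 / ((k + 2) * (k + 1) * A0))
    return U

UL = dtm_piece([1.0, 0.0], [0.0, 1.0], 0.7)   # series in t:     U(0)=p, U(1)=alpha1
UR = dtm_piece([0.0, 1.0], [0.0, 0.0], -0.6)  # series in (t-1): U(0)=beta1, U(1)=r=0

def val_der(U, x):
    # value and derivative of sum U(k) x^k (still affine in the unknown)
    v = sum(c * x ** k for k, c in enumerate(U))
    dv = sum(k * c * x ** (k - 1) for k, c in enumerate(U) if k >= 1)
    return v, dv

vL, dL = val_der(UL, d)           # left piece evaluated at t = d
vR, dR = val_der(UR, d - 1.0)     # right piece evaluated at t = d

# continuity and smoothness conditions (24): solve for (alpha1, beta1)
M = np.array([[vL[1], -vR[1]], [dL[1], -dR[1]]])
rhs = np.array([vR[0] - vL[0], dR[0] - dL[0]])
alpha1, beta1 = np.linalg.solve(M, rhs)
print(alpha1, beta1)
```

With N = 12 the computed values settle near 0.5764 and 1.3311, within about 10⁻³ of the 5th-order coefficients 0.575447 and 1.330163 reported in (37), consistent with the truncation errors shown in Table 2.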
Solving this algebraic system, the piecewise smooth approximate solution u~=(u~1(t),u~2(t))T of (13) is obtained and given by(25) u ~ 1 t = ∑ k = 0 N U 1 L k t k , t ∈ Ω - ∑ k = 0 N U 1 R k t - 1 k , t ∈ Ω + , u ~ 2 t = ∑ k = 0 N k + 1 U 1 L k + 1 t k , t ∈ Ω - ∑ k = 0 N k + 1 U 1 R k + 1 t - 1 k , t ∈ Ω + .And thus, the piecewise approximate analytical solution yap=(y1,ap,y2,ap)T of (10) is obtained and given by(26)y1,apt=u~1t+εa0q-u~20eta0/ε+kεt,t∈0,d,κε2adet-dad/ε,t∈d,1,y2,apt=u~2t+q-u~20eta0/ε+kε,t∈0,d,kεet-dad/ε,t∈d,1,where k=u~2′(d-)-u~2′(d+)/a(d).Similarly the reduced BVP (17) can be transformed into(27)atu1L′′′t+btu1L′′t-ctu1Lt=-ht,u1L0=p,u1L′0=α1,u1L′′0=α2,t∈Ω-,atu1R′′′t+btu1R′′t-ctu1Rt=-ht,u1R1=q,u1R′1=β1,u1R′′1=-s,t∈Ω+,with continuity and smoothness conditions u1L(d-)=u1R(d+), u1L′(d-)=u1R′(d+), u1L′′(d-)=u1R′′(d+) and α1, α2, β1 are unknown constants.ApplyingNth-order DTM on (27) results in the recurrence relations(28)∑l=0kAk-ll+3l+2l+1U1Ll+3+∑l=0kBk-ll+2l+1U1Ll+2-∑l=0kCk-lU1Ll=-Hk,U1L0=p,U1L1=α1,2U1L2=α2,∑l=0kAk-ll+3l+2l+1U1Rl+3+∑l=0kBk-ll+2l+1U1Rl+2-∑l=0kCk-lU1Rl=-Hk,U1R0=q,U1R1=β1,2U1R2=-s,where the unknown constants α1, α2, and β1 are determined from the transformed continuity and smoothness conditions:(29)∑k=0NU1Lkd-k=∑k=0NU1Rkd+-1k,∑k=0Nk+1U1Lk+1d-k=∑k=0Nk+1U1Rk+1d+-1k,∑k=0Nk+2k+1U1Lk+2d-k=∑k=0Nk+2k+1U1Rk+2d+-1k.And the piecewise approximate analytical solution yap=(y1,ap,y2,ap)T of (11) is obtained and given by(30) y 1 , a p t = u ~ 1 t + k 1 ε 2 a 0 2 e a 0 t / ε , t ∈ 0 , d , k 2 ε 2 a d 2 e t - d a d / ε , t ∈ d , 1 , y 2 , a p t = u ~ 2 t + k 1 e a 0 t / ε , t ∈ 0 , d , k 2 e t - d a d / ε , t ∈ d , 1 ,where(31)k1=r-u~20,k2=-εa0u~2′d--u~2′d++r-u~20ea0d/ε. ### 3.3. Error Estimate The error estimate of the present method has two sources: one from the asymptotic approximation and the other from the truncated series approximation by DTM.Theorem 3. Lety=(y1,y2)T be the solution of (10). 
Further let yap=(y1,ap,y2,ap)T be the approximate solution (26). Then (32) y−yap ≤ C(ε + 1/(N+1)!).

Proof. Since the DTM is a formalized modified version of the Taylor series method, we have the bounded error (33) u−u~ ≤ M/(N+1)!, where |u^(N+1)(ξ)| ≤ M for 0 ≤ ξ ≤ 1. From Theorem 2 and the above bound, we have (34) y−yap ≤ y−yas + yas−yap ≤ C1ε + M/(N+1)!. Since the singular perturbation parameter ε is extremely small, the present method works well for singular perturbation problems.

Remark 4. A similar statement is true for the solution of (11) and the approximate solution (30).
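The reason the zero-order expansion is accurate away from the layers is that the layer factor e^(a(0)t/ε) appearing in (15) and (18) decays extremely fast when a(0) < 0. A quick numerical check, with illustrative values of our own (a(0) = −2 as in Example 1, ε = 2⁻⁹):

```python
import math

eps, a0 = 2.0 ** -9, -2.0
# layer factor e^{a(0) t / eps}: O(1) inside the layer, negligible outside it
for t in (0.0, eps, 10 * eps, 0.1):
    print(t, math.exp(a0 * t / eps))
```

At t = 0 the factor equals 1; by t = 0.1 it is below 10⁻⁴⁰, so the correction only acts inside an O(ε) neighbourhood of the layer point.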
## 4. Illustrating Examples

In this section we will apply the method described in the previous section to find piecewise approximate analytical solutions for three SPBVPs with a discontinuous source term.

Example 1. Consider the third-order SPBVP from [13, 16] (35)-εy′′′t-2y′′t+4y′t-2yt=ht,y0=1,y′0=0,y′1=0,where(36)ht=0.7,0≤t≤0.5,-0.6,0.5<t≤1. Using the present method with 5th-order DTM, the piecewise analytical solution is given by (37)y1,apt=1+0.575447t-0.099553t2-0.162277t3-0.072842t4-0.021023t5+0.287723εe-2t/ε+ε20.966783-0.849226et,t∈0,0.5,1.330163-0.515081t-12-0.343388t-13-0.128770t-14-0.034339t-15+0.287723εe-2t/ε-ε240.966783-0.849226ee-2t-0.5/ε,t∈0.5,1.0,y2,apt=0.575447-0.199107t-0.486830t2-0.291369t3-0.105115t4-0.575447e-2t/ε+ε20.966783-0.849226e,t∈0,0.5,-1.030163t+1.030163-1.030163t-12-0.515081t-13-0.171694t-14-0.575447e-2t/ε+ε20.966783-0.849226ee-2t-0.5/ε,t∈0.5,1.0.

Example 2.
Consider the third-order SPBVP with variable coefficients from [13, 16]

$$-\varepsilon y'''(t)-2e^{t}y''(t)+\cos\left(\frac{\pi t}{4}\right)y'(t)-(1+t)\,y(t)=h(t),\qquad y(0)=0,\quad y'(0)=0,\quad y'(1)=1,\tag{38}$$

where

$$h(t)=\begin{cases}2t^{3}, & 0\le t\le 0.5,\\ 10t+1, & 0.5<t\le 1.\end{cases}\tag{39}$$

Using the present method with 5th-order DTM, the piecewise analytical solution is given by

$$y_{1,ap}(t)=\begin{cases}1.865844t+0.466461t^{2}-0.233230t^{3}-0.072568t^{4}+0.000470t^{5}\\ \quad+0.932922\,\varepsilon e^{-2t/\varepsilon}-0.513641\,\varepsilon t, & t\in[0,0.5],\\[1mm] 0.814311+t-1.280360(t-1)^{2}-0.069249(t-1)^{3}+0.181448(t-1)^{4}-0.052651(t-1)^{5}\\ \quad+0.932922\,\varepsilon e^{-2t/\varepsilon}+0.155770\,\varepsilon^{2}e^{-3.297442(t-0.5)/\varepsilon}, & t\in[0.5,1.0],\end{cases}\tag{40}$$

$$y_{2,ap}(t)=\begin{cases}1.865844+0.932922t-0.699692t^{2}-0.290271t^{3}+0.002350t^{4}\\ \quad-1.865844\,e^{-2t/\varepsilon}-0.513641\,\varepsilon, & t\in[0,0.5],\\[1mm] 3.560719-2.560719t-0.207747(t-1)^{2}+0.725794(t-1)^{3}-0.263254(t-1)^{4}\\ \quad-1.865844\,e^{-2t/\varepsilon}-0.513641\,\varepsilon e^{-3.297442(t-0.5)/\varepsilon}, & t\in[0.5,1.0].\end{cases}$$

Example 3. Consider the fourth-order SPBVP from [13, 16]

$$-\varepsilon y^{iv}(t)-4y'''(t)+4y''(t)=-h(t),\qquad t\in\Omega^{-}\cup\Omega^{+},\qquad y(0)=1,\quad y(1)=1,\quad y''(0)=-1,\quad y''(1)=-1,\tag{41}$$

where

$$h(t)=\begin{cases}0.7, & 0\le t\le 0.5,\\ -0.6, & 0.5<t\le 1.\end{cases}\tag{42}$$

Using the present method with 5th-order DTM, the piecewise analytical solution is given by

$$y_{1,ap}(t)=\begin{cases}1+0.256751t-0.199842t^{2}-0.037447t^{3}-0.009362t^{4}-0.0018724t^{5}\\ \quad-0.037520\,\varepsilon^{2}e^{-4t/\varepsilon}, & t\in[0,0.5],\\[1mm] 1.353655-0.353655t-\frac{1}{2}(t-1)^{2}-\frac{23}{120}(t-1)^{3}-\frac{23}{480}(t-1)^{4}-\frac{23}{2400}(t-1)^{5}\\ \quad-\frac{1}{16}\left(2.5\times10^{-11}\varepsilon+0.600316\,e^{-2.0/\varepsilon}\right)\varepsilon^{2}e^{-4(t-0.5)/\varepsilon}, & t\in[0.5,1.0],\end{cases}\tag{43}$$

$$y_{2,ap}(t)=\begin{cases}0.399683+0.224683t+0.112342t^{2}+0.037447t^{3}+0.600316\,e^{-4t/\varepsilon}, & t\in[0,0.5],\\[1mm] -\frac{3}{20}+\frac{23}{20}t+\frac{23}{40}(t-1)^{2}+\frac{23}{120}(t-1)^{3}\\ \quad+\left(2.5\times10^{-11}\varepsilon+0.600316\,e^{-2.0/\varepsilon}\right)e^{-4(t-0.5)/\varepsilon}, & t\in[0.5,1.0].\end{cases}$$

The numerical solution for each example is presented over the whole problem domain, as shown in Figures 1–3. The corresponding maximum pointwise errors are taken to be

$$E_{\varepsilon}^{N}=\max_{t_{i}\in[0,1]}\left|y_{ap}^{N}(t_{i})-y_{ap}^{50}(t_{i})\right|,\qquad E^{N}=\max_{\varepsilon}E_{\varepsilon}^{N},\tag{44}$$

where $y_{ap}^{N}(t_{i})$ is the approximate solution obtained using $N$th-order DTM over a uniform mesh $t_{i}=ih$, $t_{i}\in[0,1]$, $h=10^{-3}$, $i=0,1,2,\ldots$, and $y_{ap}^{50}(t_{i})$ is our numerical reference solution obtained using DTM with order $N=50$. The computed maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the above solved BVPs are given in Tables 2–7.
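The error measure (44) is easy to reproduce in a few lines. The sketch below is an illustration under our own stand-in, not the paper's code: the truncated Taylor series of $e^{t}$ plays the role of the $N$th-order DTM approximation, compared against an order-50 reference on the same uniform mesh $h=10^{-3}$; the error falls rapidly as $N$ grows, as in Tables 2–7.

```python
from math import exp

def series_approx(N, t):
    """Truncated Taylor series of exp(t) up to order N, standing in for
    an N-th order DTM approximation y_ap^N."""
    term, total = 1.0, 1.0
    for k in range(1, N + 1):
        term *= t / k
        total += term
    return total

def max_pointwise_error(N, N_ref=50, h=1e-3):
    """Error measure of (44): max_i |y_ap^N(t_i) - y_ap^{N_ref}(t_i)|
    over the uniform mesh t_i = i*h on [0, 1]."""
    n = int(round(1.0 / h))
    return max(abs(series_approx(N, i * h) - series_approx(N_ref, i * h))
               for i in range(n + 1))

for N in (4, 6, 8, 10):
    print(N, max_pointwise_error(N))
```

The factorial decay of the truncation bound (33) is what drives the steep drop of each row of the tables as the order increases.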
The numerical results in Tables 2–7 agree with the theoretical ones presented in this paper: the obtained solutions and their derivatives converge rapidly to the reference solutions as the order of the DTM increases.

Table 2. Maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the solution $y_{1,ap}$ of Example 1.

| ε | N = 4 | N = 6 | N = 8 | N = 10 |
| --- | --- | --- | --- | --- |
| 2^−3 | 2.9574e−3 | 6.5563e−5 | 5.7278e−7 | 6.3740e−9 |
| 2^−9 | 2.9573e−3 | 3.0803e−5 | 1.6200e−7 | 4.0000e−9 |
| 2^−15 | 2.9573e−3 | 3.0803e−5 | 1.6200e−7 | 4.0000e−9 |
| 2^−21 | 2.9573e−3 | 3.0803e−5 | 1.6200e−7 | 4.0000e−9 |
| E^N | 2.9574e−3 | 6.5563e−5 | 5.7278e−7 | 6.3740e−9 |

Table 3. Maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the first derivative solution $y_{2,ap}$ of Example 1.

| ε | N = 4 | N = 6 | N = 8 | N = 10 |
| --- | --- | --- | --- | --- |
| 2^−3 | 8.8781e−3 | 1.6938e−4 | 1.3570e−6 | 8.6900e−9 |
| 2^−9 | 1.0321e−2 | 1.3075e−4 | 7.8084e−7 | 2.3233e−9 |
| 2^−15 | 1.0444e−2 | 1.3312e−4 | 8.0028e−7 | 2.6769e−9 |
| 2^−21 | 1.0444e−2 | 1.3316e−4 | 8.0067e−7 | 2.4481e−9 |
| E^N | 1.0444e−2 | 1.6938e−4 | 8.0067e−7 | 8.6900e−9 |

Table 4. Maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the solution $y_{1,ap}$ of Example 2.

| ε | N = 4 | N = 6 | N = 8 | N = 10 |
| --- | --- | --- | --- | --- |
| 2^−3 | 1.2547e−2 | 1.0369e−3 | 1.3608e−5 | 1.5365e−6 |
| 2^−9 | 1.2547e−2 | 1.0077e−3 | 1.3488e−5 | 6.8490e−7 |
| 2^−15 | 1.2547e−2 | 1.0077e−3 | 1.3488e−5 | 6.8100e−7 |
| 2^−21 | 1.2547e−2 | 1.0077e−3 | 1.3488e−5 | 6.8100e−7 |
| E^N | 1.2547e−2 | 1.0369e−3 | 1.3488e−5 | 1.5365e−6 |

Table 5. Maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the first derivative solution $y_{2,ap}$ of Example 2.

| ε | N = 4 | N = 6 | N = 8 | N = 10 |
| --- | --- | --- | --- | --- |
| 2^−3 | 1.8191e−2 | 2.2563e−3 | 2.9397e−5 | 3.2970e−6 |
| 2^−9 | 2.1534e−2 | 2.0165e−3 | 2.6724e−5 | 1.5660e−6 |
| 2^−15 | 2.1583e−2 | 2.0124e−3 | 2.6679e−5 | 1.5390e−6 |
| 2^−21 | 2.1584e−2 | 2.0123e−3 | 2.6678e−5 | 1.5390e−6 |
| E^N | 2.1584e−2 | 2.2563e−3 | 2.9397e−5 | 3.2970e−6 |

Table 6. Maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the solution $y_{1,ap}$ of Example 3.
| ε | N = 4 | N = 6 | N = 8 | N = 10 |
| --- | --- | --- | --- | --- |
| 2^−3 | 1.8918e−3 | 2.3001e−5 | 1.3700e−7 | 3.3000e−9 |
| 2^−9 | 1.8918e−3 | 2.3001e−5 | 1.3700e−7 | 3.3000e−9 |
| 2^−15 | 1.8918e−3 | 2.3001e−5 | 1.3700e−7 | 3.3000e−9 |
| 2^−21 | 1.8918e−3 | 2.3001e−5 | 1.3700e−7 | 3.3000e−9 |
| E^N | 1.8918e−3 | 2.3001e−5 | 1.3700e−7 | 3.3000e−9 |

Table 7. Maximum pointwise errors $E_{\varepsilon}^{N}$ and $E^{N}$ for the second derivative solution $y_{2,ap}$ of Example 3.

| ε | N = 4 | N = 6 | N = 8 | N = 10 |
| --- | --- | --- | --- | --- |
| 2^−3 | 2.1578e−2 | 2.8775e−4 | 1.7839e−6 | 6.5000e−9 |
| 2^−9 | 2.1578e−2 | 2.8775e−4 | 1.7839e−6 | 6.5000e−9 |
| 2^−15 | 2.1578e−2 | 2.8775e−4 | 1.7839e−6 | 6.5000e−9 |
| 2^−21 | 2.1578e−2 | 2.8775e−4 | 1.7839e−6 | 6.5000e−9 |
| E^N | 2.1578e−2 | 2.8775e−4 | 1.7839e−6 | 6.5000e−9 |

Figure 1. Graphs of the approximate solution $y_{1,ap}$ and its first derivative $y_{2,ap}$ for Example 1 at ε = 2^−9 and N = 5.

Figure 2. Graphs of the approximate solution $y_{1,ap}$ and its first derivative $y_{2,ap}$ for Example 2 at ε = 2^−9 and N = 5.

Figure 3. Graphs of the approximate solution $y_{1,ap}$ and the second derivative $y_{2,ap}$ for Example 3 at ε = 2^−9 and N = 5.

## 5. Conclusion

We have presented a new reliable algorithm to develop piecewise approximate analytical solutions of third- and fourth-order convection-diffusion SPBVPs with a discontinuous source term. The algorithm is based on constructing a zero-order asymptotic expansion of the solution and on the DTM, which provides the solutions in terms of convergent series with easily computable components. The original problem is transformed into a weakly coupled system of ODEs, and a zero-order asymptotic expansion for the solution of the transformed system is constructed. For simplicity, the resulting terminal-value reduced system is replaced by its equivalent reduced BVP with suitable continuity and smoothness conditions. Then a piecewise smooth solution of the reduced BVP is obtained by using the DTM and imposing the continuity and smoothness conditions.
The error estimate of the method is presented and shows that the method attains high-order convergence for small values of the singular perturbation parameter. We have applied the method to three SPBVPs, and the piecewise analytical solution is presented for each one over the whole problem domain. The numerical results confirm that the obtained solutions and their derivatives converge rapidly to the reference solutions as the order of the DTM increases. The results show that the method is a reliable and convenient asymptotic semianalytical numerical method for treating high-order SPBVPs with a discontinuous source term. The method is based on a straightforward procedure, suitable for engineers.

---

*Source: 1015634-2016-11-15.xml*
2016
# VO2 Kinetics during Moderate Effort in Muscles of Different Masses and Training Level **Authors:** Omri Inbar; Marcello Faina; Sabrina Demarie; Brian J. Whipp **Journal:** ISRN Physiology (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/101565 --- ## Abstract Purpose. To examine the relative importance of central or peripheral factors in the on-transient VO2 response dynamics to exercise with “trained” and relatively “untrained” muscles. Methods. Seven professional road cyclists and seven elite kayak paddlers volunteered to participate in this study. Each completed two bouts of constant-load “square-wave” rest-to-exercise transition cycling and arm-cranking exercise at a power output of 50–60% of the mode-specific VO2peak, presented in a randomized order. Results. In the cyclists, the mean response time (MRT) as well as the phase II VO2 time constant (τ2) was significantly slower in the untrained compared with the trained muscles. The opposite was the case in the kayakers. With respect to the relatively untrained muscle groups, while both demonstrated faster VO2 kinetics than normal (moderately fit) subjects, the kayakers evidenced faster VO2 kinetics than the cyclists. This suggests that there is a greater stabilizing-counterforce involvement of the legs in the task of kayaking than of the arms for cycling. Conclusions. The results of the present study provide no support for the “transfer” of a training effect onto the VO2 on-transient response for moderate exercise, but rather support earlier reports demonstrating that peripheral effects may be important in dictating these kinetics. --- ## Body ## 1. Introduction The time course of the pulmonary oxygen uptake (VO2) response to constant-load exercise of moderate intensity can be characterized by two transient phases.
In Phase I, the initial, usually rapid, increase in VO2 is mediated by an increase in cardiac output, or more properly pulmonary blood flow, whilst the gas contents of the mixed-venous blood perfusing the lungs remain similar to those at rest. Phase II transition is triggered by the gas contents of the blood perfusing the lungs being altered by the influence of active muscle metabolism; it therefore represents the blood transport delay between the active muscles and the lungs. During Phase II VO2 reflects the decreasing mixed venous O2 content supplementing the continuing increase in pulmonary blood flow. This is characterized by a monoexponential rise in VO2 up to the asymptotic or steady-state level (Phase III), the time course of which closely reflects that of the increased muscle oxygen consumption [1, 2]. However, if the work rate is appreciably above the individual’s lactate threshold (LT), VO2 may not reach a steady state, associated with a continued slower rise in VO2 (slow component; VO2slow) of a delayed onset. Opinions are divided over whether VO2 kinetics is limited by the rate of O2 delivery to the working muscle [3] or by peripheral factors such as oxidative enzyme activity within the muscle mitochondria or the rate at which carbohydrates are processed into the mitochondria at the pyruvate-acetyl CoA site, that is, rate limitation of O2 utilization by working muscles despite adequate O2 delivery [4, 5].It has been repeatedly shown that single-legged training (relatively small muscle mass) causes significant local (peripheral) changes (such as concentration of high energy phosphate compounds, ratio of ATP to ADP, and inorganic phosphate), with only minor alterations of the cardiovascular (central) system [6]. It has also been acknowledged [6] that, in order to induce contralateral training modifications (cross-training), larger muscle mass that produces both peripheral and central adaptations should be involved. 
Indeed, several reports (e.g., [7]) have demonstrated that arm training did not produce significant alterations in heart rate, stroke volume, or peripheral blood flow, either at rest or during exercise performed with the nontrained muscles (legs). After leg training, however, the increase in the centrally mediated variables was approximately the same in the trained (legs) and the nontrained muscles (arms) [6, 8]. It is indeed widely accepted that there is more of a transfer effect on central hemodynamics after training with large muscle groups than after training with small muscle groups. It is, therefore, suggested that arm muscles have a greater potential for local (peripheral) rather than centrally mediated improvement in function and that central circulatory changes occur in proportion to the muscle mass used during the training. In most of the studies that have examined the transfer-of-training phenomenon, the conclusions have been based on changes occurring in the maximal aerobic power (VO2max) and its determinants. However, in recent years, the on-transient VO2 kinetics during exercise has been considered to be a valid indicator of the integrated cardiovascular, respiratory, and muscular systems' response to meet the increased metabolic demand of the exercise [8, 9]. VO2 kinetics has also been shown to be faster in relatively fit individuals and to be speeded by training, both in normal subjects and in those with cardiovascular and/or pulmonary disease [10, 11]. However, there are only a limited number of studies where VO2 kinetics has been applied to trained athletes [12, 13].The metabolic and physiological responses to arm cranking differ markedly from those of leg cycling (see [14, 15] for reviews).
At the same absolute power output, arm exercise results in higher rates of VO2, carbon dioxide output (VCO2), ventilation (Ve), and heart rate (HR), and greater increases in core temperature (Tre), plasma epinephrine, and blood lactate, than does leg exercise [14, 15]. In untrained individuals, VO2peak during arm cranking is approximately 60–70% of their leg-cycling VO2peak [15]. When the physiological and metabolic responses to arm exercise are expressed as a percentage of the mode-specific VO2peak, the differences between arm and leg exercise become less pronounced [15, 16]. The above-mentioned differences, coupled with established findings that arm cranking results in an increased recruitment of type II muscle fibers [15, 17] and that type II muscle fibers have significantly lower metabolic efficiency than type I fibers [1], explain, at least partially, why mechanical efficiency is lower in arm cranking than in leg cycling [14, 15, 17]. There is evidence that arm cranking results in slower Phase II VO2 kinetics than leg cycling at similar absolute [8] and relative (to mode-specific maximal load) power outputs [17, 18]. Furthermore, arm muscles have been shown to have a lower capillary-to-muscle-fiber ratio and a reduced total capillary cross-sectional area, and may develop intramuscular pressures during exercise that exceed blood perfusion pressure, when compared with leg muscles [14, 15]. It is also well documented that the proportion of type II muscle fibers is significantly higher in the muscles of the upper body compared to those of the lower body [19]. The reduced relative perfusion of arm muscle fibers, combined with findings that type II muscle fibers have slower VO2 kinetics than type I fibers, could result in slower active-muscle oxygen consumption kinetics, thereby slowing Phase II VO2 kinetics. Several other studies have already compared the on- and off-transient VO2 responses of arms and legs [8, 17].
Our present study, however, addresses the issue of the relative importance of central or peripheral factors in the on-transient VO2 kinetics to exercise with “trained” and “untrained” muscles in elite competitive athletes specializing in sport disciplines that require intensive and long-term training, predominantly with their arm muscles (kayakers) or leg muscles (cyclists).

## 2. Material and Methods

Seven professional road cyclists and seven elite flat-water kayak paddlers volunteered to participate in this study during the maintenance phase of their normal training, after giving their written informed consent. All procedures were conducted in accordance with ethical standards of the Institutional Committee of the Italian National Olympic Committee and with the Helsinki Declaration of 1975. Table 1 lists their physical characteristics. Each had trained and competed extensively at national and international levels for 5 to 10 years. Their training regimen included largely intensive and long-term aerobic activities, predominantly with their arm (kayakers) or leg (cyclists) muscles. Subjects came to the laboratory on four occasions to perform arm- and leg-cranking exercise studies. Each test was scheduled at a similar time of day in order to minimize the effect of diurnal biological fluctuation.

Table 1. Physical characteristics of subjects by group.

| | Age (yr) | Height (cm) | Weight (kg) |
| --- | --- | --- | --- |
| Cyclists | 24.0 ± 3.7 | 175.0 ± 6.7 | 65.9 ± 5.9 |
| Kayakers | 22.0 ± 2.8 | 180.6 ± 5.5 | 79.7 ± 7.9 |

*Bold letters denote significant difference between groups (P<0.05).

### 2.1. Measurement of Arm-Cranking and Leg-Cycling Peak Oxygen Uptake (VO2peak) and Gas Exchange Threshold (GET)

During the first two visits to the laboratory each subject performed two incremental exercise tests to the limit of tolerance in order to determine the arm-cranking and leg-cycling GET and VO2peak. For the lower limbs, all athletes were tested on a cycle ergometer (Ergoline, Germany).
For the upper limbs, the cyclists used a standard arm-cranking ergometer (Technogym, Top.XT, Italy); the kayakers used an arm paddling ergometer (Technogym, K-Race, Italy), mimicking the actual arm movement for which the kayakers' arm muscles were trained. The subjects were seated upright such that the crank axis of the ergometer was aligned with the glenohumeral joint. The height of the seat was adjusted to allow for a slight bend in the elbow when the crank handles were at their greatest distance from the subject. Additionally, the legs were not braced and the feet were placed on a footrest, mimicking the actual position in the kayak. The subjects were encouraged to use only their arms and shoulders to perform the exercise, whereas the use of lower back and legs was discouraged. For the leg-cycling test, the seat height was adjusted such that the legs were in slight flexion (170°) at the nadir of the down stroke, while the handlebars were set according to the individual preferences. Handlebar arm-pull was discouraged during leg cycling. During all tests the crank/cycle cadence was strictly kept between 90 and 100 strokes/revolutions per minute (SPM/RPM), respectively (typical rhythm for both activities), despite the fact that the ergometers provided speed-independent power. All three ergometers were calibrated prior to the beginning of the study using a dynamic calibration rig (Cerini, Italy). After a 15-minute standardized warm-up, consisting of either pedaling or cranking at 60 RPM or SPM at a work rate of 100 and 50 W, respectively, and following a 5-min rest, the subject then commenced the leg-cycling or arm-cranking task. The power output was increased progressively every minute from an initial work rate for the cyclists of 100 and 50 W for the legs and arms, respectively, and 75 and 100 W for the kayakers, in order to bring the subject to the limit of tolerance in 8–12 minutes.
This was achieved by increasing the power output in increments of 25 watts·min−1 for leg cycling and 20 watts·min−1 for arm cranking until the subject was unable to keep the pedaling (or stroke) rate above 50 per minute. During both incremental exercise tests, HR, VO2, VCO2, and Ve were measured breath by breath via standard open-circuit spirometry techniques using a computerized metabolic cart (Quark b2, Cosmed, Italy). Daily calibration of volume (with a 3-L syringe) and pretest calibration of the carbon dioxide and oxygen gas analyzers (with precision gas mixtures) were carried out. Heart rate was continuously monitored by means of a telemetric system (Polar, Electro, Finland). Peak oxygen uptake (VO2max) was defined as the highest average VO2 during a 30-second period of the last 90 seconds of the test. The criteria for the noninvasive determination of the gas exchange threshold (GET) were as follows. (1) The modified V-slope method [20], in which VCO2 is plotted against VO2. The GET was defined as the last value prior to the departure of the VCO2-versus-VO2 slope from linearity. (2) A systematic increase in the ventilatory equivalent for O2 (Ve/VO2) without an increase in the ventilatory equivalent for CO2 (Ve/VCO2), when plotted against exercise time [20]. The GET was determined by inspection in a blinded manner by two investigators. A third investigator was consulted to adjudicate between the two when the two investigators did not agree on threshold placement.

### 2.2. Oxygen Kinetics during Moderate-Intensity Constant-Load Exercise

On the two following visits, each subject performed one arm-cranking and one leg-cycling 4-5-minute constant-load test, in a randomized order. Following 10 minutes at rest, the work rate, equal to that which elicited 50% of VO2max during the incremental test, was applied instantaneously (from absolute rest) without prior warning given to the subject.
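The V-slope criterion described above can be sketched as a two-segment least-squares breakpoint search. The code below is a simplified illustration only; the synthetic data and the brute-force split search are our assumptions, not the authors' detection procedure, which relied on blinded visual inspection.

```python
def linfit(xs, ys):
    """Least-squares line fit; returns (intercept, slope, residual SS)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    return intercept, slope, rss

def vslope_breakpoint(vo2, vco2):
    """Return the VO2 value at which splitting the VCO2-vs-VO2 plot into
    two straight segments minimizes the total squared error: a crude
    V-slope estimate of the gas exchange threshold."""
    best_rss, best_vo2 = None, None
    for i in range(3, len(vo2) - 3):        # at least 3 points per segment
        rss = linfit(vo2[:i], vco2[:i])[2] + linfit(vo2[i:], vco2[i:])[2]
        if best_rss is None or rss < best_rss:
            best_rss, best_vo2 = rss, vo2[i]
    return best_vo2

# Synthetic ramp-test data: VCO2 rises with slope 1.0 below a threshold at
# VO2 = 2.5 L/min and slope 1.6 above it (illustrative numbers only).
vo2 = [round(1.0 + 0.1 * i, 1) for i in range(31)]   # 1.0 ... 4.0 L/min
vco2 = [v if v <= 2.5 else 2.5 + 1.6 * (v - 2.5) for v in vo2]
print(vslope_breakpoint(vo2, vco2))
```

Real breath-by-breath data are noisy, which is why the paper used two blinded readers and an adjudicator rather than an automated fit.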
During both tests the crank/cycle cadence was strictly kept between 90 and 100 SPM and RPM, respectively. Gas exchange was measured breath-by-breath using the same apparatus as for the incremental tests. Heart rate was monitored and recorded continuously during all constant-load exercise tests by means of a telemetric system (Polar, Electro, Finland). ### 2.3. Calculation of Oxygen Uptake Kinetics Individual responses during the rest-to-exercise transitions were linearly interpolated to give 1-s values. For each subject and each exercise protocol, data were time-aligned to the start of exercise, superimposed, and averaged to reduce the breath-to-breath noise and enhance the underlying physiological response characteristics. The baseline VO2 (VO2bs) was defined as the average VO2 measured during the last two minutes before the start of exercise (rest period). The VO2 mean response time (MRT) was fitted by combining the first and the second exponential terms. The MRT was then used to indicate the overall rate of change of the VO2 toward its new steady state. The MRT for a single-term exponential model is equivalent to τ+TD and therefore provides response information including not only the time constant (τ) but also the time delay (TD). At the MRT, this response has attained 63% of its final value. To estimate the phase II time constant for the VO2 kinetics (τ2) we used a nonlinear least-squares monoexponential fit to the data as previously described [17, 21]. However, in order to maximize the amount of transient data available for the characterization (an important determinant of the goodness of fit [22]) we chose to discard the first 15 sec rather than the more common 20 sec—reasoning that the more rapid cardiac output kinetics in our fit subjects would reduce the limb-to-lung transit time and hence the duration of Phase I.VO2 kinetics tends to be slower at higher work rates even when the work rates are not associated with a sustained increase in blood lactate [23]. 
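The processing chain of this subsection (1-s interpolated samples, a delayed monoexponential phase II model, the 63% MRT criterion, and the oxygen-deficit normalization of equation (1)) can be sketched as follows. All numerical values are invented for illustration; this is not the authors' fitting code, and a simple threshold crossing stands in for their nonlinear least-squares fit.

```python
from math import exp

def vo2_response(t, vo2_bs, A, td, tau):
    """Delayed monoexponential phase II model:
    VO2(t) = VO2bs + A * (1 - exp(-(t - td)/tau)) for t > td."""
    if t <= td:
        return vo2_bs
    return vo2_bs + A * (1.0 - exp(-(t - td) / tau))

# Noiseless 1-s samples of a rest-to-exercise transition (illustrative
# values: 0.45 L/min baseline, 1.5 L/min amplitude, 15-s delay, 17-s tau).
vo2_bs, A, td, tau = 0.45, 1.5, 15.0, 17.0
samples = [vo2_response(float(t), vo2_bs, A, td, tau) for t in range(301)]

# MRT: time at which the response attains 63% of its amplitude;
# for a delayed monoexponential this approximates tau + td.
target = vo2_bs + 0.63 * A
mrt = next(t for t, v in enumerate(samples) if v >= target)
print(mrt, tau + td)

# Relative oxygen deficit, as in equation (1): O2D/W = G x MRT(sec),
# with G the gain in mL O2/min per W (illustrative value).
G = 14.4
print(G * mrt)
```

With noisy breath-by-breath data, ensemble averaging of repeated transitions (as the authors describe) is what makes either the threshold estimate or a formal exponential fit stable.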
Therefore, and in order to facilitate comparison across subjects exercising at different absolute work rates, the relative gain of the response (G = A/work rate) and the exercise-specific relative oxygen deficit were computed using the following equation:

$$\mathrm{O_2D/W} = G\,(\mathrm{VO_2/W}) \times \mathrm{MRT\,(sec)} \quad [\mathrm{mL\,O_2/min/W}].\tag{1}$$

### 2.4. Data Analysis

Group data are reported as means and standard deviations. A two-way ANOVA with repeated measures for training status (trained or nontrained) and for muscle group (upper or lower extremity) (independent variables) was used to determine differences and relationships in and among the various dependent (O2 kinetics parameters) and independent parameters, between arm cranking and leg cycling and between the trained and nontrained muscles. Tukey's post hoc test was utilized to determine where significant differences existed. Statistical significance was accepted at P < 0.05.

## 3. Results

### 3.1. Peak and Related Values

Table 2 presents data obtained during and at the end of all incremental and submaximal constant-load tests performed with the trained and untrained muscle groups.

Table 2. Peak and submaximal responses to arm and leg exercise by each group (mean ± SD).

| | Kayakers Leg^a | Kayakers Arm^b | Cyclists Leg^c | Cyclists Arm^d |
| --- | --- | --- | --- | --- |
| Peak work rate (W) | 298 ± 42^cd | 279 ± 20^cd | 390 ± 52^abd | 157 ± 17^abc |
| Peak VO2 (mL/min) | 4268 ± 656^cd | 4087 ± 499^cd | 4921 ± 380^abd | 3147 ± 436^abc |
| Peak HR (b/min) | 183 ± 3 | 183 ± 8 | 192 ± 11^d | 178 ± 11^c |
| VO2ss (mL/min) | 2015 ± 368^cd | 1943 ± 404^cd | 2952 ± 195^abd | 1501 ± 384^abc |
| Work rate_ss (W) | 108 ± 31^cd | 106 ± 15^cd | 170 ± 16^abd | 69 ± 12^abc |
| Work rate_ss (% peak)** | 39 ± 3 | 40 ± 4 | 44 ± 4 | 43 ± 3 |
| VO2ss (% VO2peak)** | 47 ± 6^c | 47 ± 6^c | 58 ± 4^abd | 50 ± 8^c |
| VO2 at GET (L/min) | 2.84 ± 0.39^c | 3.31 ± 0.25^cd | 3.86 ± 0.33^abc | 2.50 ± 0.33^abc |
| VO2ss/VO2 at GET (%) | 72.3 ± 21^bd | 62.5 ± 15^ac | 74.1 ± 11^bd | 61.6 ± 17^ac |
| HRss (% peak)** | 66 ± 7 | 63 ± 6 | 67 ± 5 | 62 ± 6 |

Like letters denote significant difference (P<0.05). **Values are percentages of their respective peak values.
Peak work rate: work rate achieved at exhaustion; Peak VO2: rate of oxygen uptake at exhaustion; Peak HR: heart rate at exhaustion; VO2ss: rate of oxygen uptake at steady state; Work rate_ss: work rate at steady state; VO2 at GET: rate of oxygen uptake at the gas exchange threshold; VO2ss/VO2 at GET: ratio of oxygen uptake at steady state to oxygen uptake at the gas exchange threshold; HRss: heart rate at steady state.

#### 3.1.1. Cyclists

As expected for cyclists, all peak mechanical and cardiovascular-related (central) responses were significantly higher in the trained (leg) than in the nontrained (arm) muscles. The exception was the peak respiratory exchange ratio (RER), which was similar in the two muscle groups (1.16 ± 0.05 versus 1.16 ± 0.06) and of a magnitude consistent with maximal effort in both. The arms-to-legs ratio of VO2peak in this group was 64% (Table 2).

#### 3.1.2. Kayakers

For these athletes the results were considerably different, demonstrating similar peak responses (mechanical and physiological) in both the trained and the nontrained muscles (Table 2). The arms-to-legs ratio of VO2peak in this group was 96%.

Also presented in Table 2 are values representing relative physiologic stress and strain during the constant submaximal-load exercise challenges. Work rates, VO2, and HR achieved during the steady-state phase of the respective submaximal constant-load exercise, in percent of their respective peak values, as well as the ratio between the VO2 measured during the constant-load exercise challenges and the VO2 at the muscle-group-specific GET, were not significantly different within or between groups (Table 2).

### 3.2. VO2 Kinetics

Figures 1–4 compare the groups' mean Phase II VO2 response kinetics (excluding Phase I in each response) during the transition from rest to a constant moderate exercise level, between and within groups and muscles, along with the best exponential fit to each mean response. Visual inspection of these plots reveals that phase II of the VO2 response rose in a biphasic fashion toward phase III (the exercise steady-state level). It seems that the relative load selected for this study (50–60% of the mode-specific VO2peak) was not only physiologically similar in relative terms (see Table 2), but also sufficiently low, for both the lower- and upper-body musculatures, for attainment of a steady-state VO2 (after 2-3 min) without development of a VO2slow component, in the trained and untrained muscles alike.

Figure 1. Average phase II of the VO2 response during the transition from rest to moderate exercise using untrained arms (⋄) and trained legs (▴) by the cyclists.

Figure 2. Average phase II of the VO2 response during the transition from rest to moderate exercise using trained arms (⋄) and untrained legs (▴) by the kayakers.

Figure 3. Average phase II of the VO2 response during the transition from rest to moderate exercise using lower-limb muscles by the cyclists (trained, ▴) and the kayakers (untrained, ⋄).

Figure 4. Average phase II of the VO2 response during the transition from rest to moderate exercise using upper-limb muscles by the cyclists (untrained, ▴) and the kayakers (trained, ⋄).

A more quantitative assessment of the relative speed of the VO2 response as a function of muscle group and training status is presented in Table 3 (means ± SD).

Table 3. Parameters of the oxygen uptake response during moderate exercise as a function of exercise modality (muscle group involved) (mean ± SD).
| Variable | Kayakers, legs^a | Kayakers, arms^b | Cyclists, leg^c | Cyclists, arm^d |
| --- | --- | --- | --- | --- |
| VO2bs (L/min) | 0.46 ± 0.04 | 0.46 ± 0.09 | 0.41 ± 0.07 | 0.40 ± 0.10 |
| VO2ss (L/min) | 1.99 ± 0.34^cd | 1.94 ± 0.40^cd | 2.85 ± 0.33^abd | 1.50 ± 0.38^ac |
| A (L/min) | 1.54 ± 0.34^cd | 1.45 ± 0.36^cd | 2.45 ± 0.38^abd | 1.03 ± 0.26^abc |
| G (mL O2/min/W) | 14.35 ± 1.11 | 13.6 ± 2.21 | 14.38 ± 0.84 | 14.5 ± 3.83 |
| τ2 (sec) | 16.50 ± 2.6^bd | 18.81 ± 3.7^ac | 14.90 ± 2.7^bd | 19.70 ± 3.6^ac |
| MRT (sec) | 22.62 ± 3.1^bcd | 26.73 ± 8.9^ad | 25.76 ± 4.9^d | 40.90 ± 5.6^abc |
| O2D (mL) | 34.6 ± 11.9^c | 39.4 ± 11.5^c | 61.2 ± 17.4^abd | 40.7 ± 15.3^c |
| Relative O2D (mL O2/min/W) | 323.2 ± 97^d | 363.2 ± 92^d | 370.3 ± 90^d | 594.5 ± 155^abc |

Like letters denote significant difference (P < 0.05). VO2bs: average value over the two minutes of resting baseline; VO2ss: rate of oxygen uptake at the steady-state level; A: asymptotic amplitude of the exponential term; τ2: time constant of the primary phase; MRT: mean VO2 response time; O2D: calculated oxygen deficit; G: gain of the VO2 response relative to work rate; Relative O2D: O2 deficit normalized to work rate.

#### 3.2.1. Within-Group Comparisons: Trained versus Untrained Muscles

*(1) Cyclists.* The onset of lower- and upper-extremity exercise at 50 to 60% of the mode-specific VO2max was associated with significantly higher amplitude-related values (VO2ss and A) and with faster overall (MRT) and phase II (τ2) response times in the trained muscles (legs) than in the nontrained muscles (arms) (Figure 1 and Table 3). It should be pointed out that, when normalized for the work rate attained by each muscle group (170 versus 79 W for the legs and arms, resp.), the "relative" rise (amplitude) in VO2 per unit load (G) was similar in the two muscle groups. In contrast, the relative oxygen deficit (O2D/W) remained significantly larger in the untrained (arm) muscles than in the trained (leg) muscles (594.5 versus 370.3 mL/min/W) (Table 3).

*(2) Kayakers.* Although arm training (kayakers) did not confer superiority in any of the upper-body VO2 amplitude- or response-related parameters at the onset of below-threshold square-wave exercise, such training brought the arms' VO2 response-related parameters to a level approaching that of the legs (Table 3).

#### 3.2.2. Between-Groups Comparisons

*Trained Muscles (Legs (Cyclists) versus Arms (Kayakers)).* As expected, and partially owing to the differences in muscle mass and consequently in work rate, VO2ss and A were significantly higher in the trained lower limbs than in the trained upper limbs (Figure 3 and Table 3). Similarly, the phase II time constant (τ2) was faster in the trained lower than in the trained upper limbs. Nevertheless, the overall VO2 transient during the square-wave exercise (MRT) and the relative O2D did not differ significantly between the large and the relatively small trained-muscle groups (Table 3).

*Nontrained Muscles (Legs (Kayakers) versus Arms (Cyclists)).* Except for VO2bs and G, all other VO2 kinetic parameters (A, MRT, and τ2) were higher (or faster) in the untrained lower limbs than in the untrained upper limbs (Table 3). Similarly, the relative O2D was significantly smaller when exercising with the legs than with the arms (323.2 versus 594.5 mL O2/min/W, resp.).

*Trained versus Nontrained Muscles. (1) Lower Limbs.* Whereas load- or muscle mass-associated variables (VO2ss and A) were significantly higher in the trained than in the nontrained legs, the variables related to the rate of the VO2 response during moderate constant-load exercise (τ2 and MRT) showed no significant differences between the trained and the untrained legs (Table 3). A similar trend was evident in the load-normalized rise in oxygen uptake (G) and the oxygen deficit (O2D), both statistically similar in the trained and the nontrained leg muscles (Table 3).

*(2) Upper Body.*
Except for τ2 (statistically similar in the trained and untrained upper-body muscles), and unlike in the lower limbs, the trained arms demonstrated significantly higher (A, VO2ss) and faster (MRT) O2 kinetics-related values than the untrained arms (Figure 4 and Table 3). The relative O2D was significantly smaller in the trained muscles than in the untrained upper-body muscles (Table 3).

## 4. Discussion

To our knowledge, this is the first study to consider the kinetics of the VO2 response to dynamic muscular exercise in highly trained athletes who compete in events predominantly utilizing the lower or upper extremities, that is, cyclists and kayakers.

As expected, both VO2peak and peak power output were higher for the trained than for the untrained muscles in each group. However, to make a valid comparison of VO2 kinetics across exercise modes, we chose to normalize the exercise intensity to 50–60% of the mode-specific VO2peak (i.e., below the GET in these trained subjects).
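The kinetic parameters compared throughout (baseline VO2, amplitude A, time constant τ2, and MRT from Table 3) come from the standard monoexponential description of phase II. A minimal sketch in Python, using the cyclists' leg group means from Table 3 for illustration; the MRT-based oxygen-deficit formula shown is one common approximation, not necessarily the computation used in the paper:

```python
import math

def vo2_phase2(t, vo2_bs, amplitude, tau, delay=0.0):
    """Monoexponential phase II model of pulmonary VO2 (L/min):
    VO2(t) = VO2_baseline + A * (1 - exp(-(t - delay) / tau)) for t >= delay."""
    if t < delay:
        return vo2_bs
    return vo2_bs + amplitude * (1.0 - math.exp(-(t - delay) / tau))

# Illustrative group means for the cyclists' trained legs (Table 3):
vo2_bs, amplitude, tau = 0.41, 2.45, 14.9   # L/min, L/min, s

# After ~4 time constants the response is within ~2% of steady state,
# approaching the asymptote of 0.41 + 2.45 = 2.86 L/min.
vo2_60s = vo2_phase2(60.0, vo2_bs, amplitude, tau)
print(f"VO2 at t = 60 s: {vo2_60s:.2f} L/min")

# One common approximation: O2 deficit = MRT * steady-state amplitude.
mrt = 25.8                               # s (Table 3, cyclists' legs)
o2_deficit_l = mrt * amplitude / 60.0    # amplitude in L/min, MRT in s -> L of O2
print(f"Approximate O2 deficit: {o2_deficit_l * 1000:.0f} mL")
```

Because τ2 appears in the exponent, a smaller τ2 (faster kinetics) shrinks the area between the steady-state demand and the actual VO2 trajectory, which is why faster time constants go hand in hand with smaller oxygen deficits in Table 3.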
In this exercise intensity domain, it is believed that a metabolic inertia within the muscle cells themselves is the principal limitation to the acceleration of oxidative metabolism after the onset of exercise [4, 24, 25]. Hence, despite differences in the absolute VO2, the relative intensity of each square-wave transition was successfully matched across exercise modes: the %ΔVO2, the % of the mode-specific peak power output, and the % of the mode-specific maximal heart rate were not significantly different (Table 2). Therefore, our study allows a comparison of the fundamental components of the VO2 kinetics between cycling and arm cranking within the same intensity domain.

Unsurprisingly, the VO2 response to moderate exercise in the cyclists revealed different patterns for the trained (legs) and the untrained (arms) muscles (Figure 1; Table 3). The higher absolute work rate in the trained-muscle group naturally resulted in a higher steady-state amplitude of the VO2 response than in the untrained-muscle group. However, the phase II time constant (a functional correlate of the muscle VO2 time constant) [2, 9, 24], as well as the mean VO2 response time (MRT), which additionally reflects the utilization of oxygen from the oxygen stores [26], was significantly faster in the trained than in the untrained muscles. Consequently, the oxygen deficit per unit power output (O2D/W) was significantly larger in the tests with the untrained muscles than with the trained muscles (Table 3).

While the VO2 kinetics of the cyclists' arms (τ2 = 19.7 sec; MRT = 40.9 sec) were slower than those of their legs (τ2 = 14.9 sec; MRT = 25.8 sec), they were appreciably faster than those previously reported for arm-cranking exercise in normal untrained males (60–80 s) [8, 18]. In fact, the VO2 time constant for the cyclists' arms was similar to, and sometimes even faster than, those previously reported for normal nontrained legs (30–40 s) [8, 27].
This suggests that the muscles used by the cyclists for the arm-cranking exercise ought not to be considered "untrained": they reflect the additional compensatory component arising from dynamic stabilization of the body during cycling and even from periods of active "pulling" on the handlebars. Support for this contention comes from studies by Baker et al. [28, 29], who reported that the power generated during sprint cycling was significantly higher in a protocol that allowed gripping of the handlebars than in one that did not. Their results demonstrated that the arms and the upper body are involved in stabilizing the entire body so that the lower limbs can exert force downwards onto the pedals to generate mechanical power, and that the contribution toward power generation of muscle groups not directly involved in sprint cycling cannot be discounted.

Alternatively, of course, a "transfer effect" on the VO2 kinetics may have been contributory, resulting from an increased amount of blood, and therefore oxygen, available to the cyclists' arms as a result of a "central" training effect consequent to the leg training. For this to be contributory, however, the VO2 kinetics would need to be limited by oxygen delivery at this work intensity, and there is no convincing evidence that training-induced increases in maximum VO2, and hence maximum cardiac output, alter the steady-state cardiac output response to a moderate-intensity work rate, at least for leg exercise (i.e., [26]). Furthermore, the VO2 time constant for moderate exercise has been shown not to be speeded by experimentally induced increases in muscle blood flow [4], by beginning the exercise when blood flow remains high following a prior bout of higher-intensity exercise [30], or even by increased inspired O2 fractions [31].
Hence, any improvement in central indices of cardiovascular function is unlikely to be contributory.

The VO2 response to moderate exercise in the kayakers, in contrast, revealed similar patterns (amplitude, kinetics, and O2D) for the arm and the leg exercise, both in absolute and in relative terms (Figure 2; Table 3). These results suggest either that the long-term intensive training of the relatively small muscle mass of the arms did not cause any appreciable cross-training effect on the VO2 kinetics and/or that there is a significant leg contribution to kayaking [32]. The fact that the values of the VO2 time constant (16.5 sec) and MRT (22.6 sec) for leg exercise in the kayakers are appreciably faster than those for normal untrained subjects during cycle ergometry [2, 8] suggests that the latter explanation is more likely.

Our finding that the VO2 kinetics of the kayakers' arms are not significantly different from those of the cyclists' arms, despite the "central" capacity (as reflected by the leg VO2peak) being appreciably lower in the kayakers, suggests that the VO2 kinetics at the onset of subthreshold square-wave exercise depend primarily on peripheral factors (muscle mass, distribution of muscle fiber type, number of mitochondria, activity levels of oxidative enzymes, and possibly muscle vascularization) and not on central factors (cardiac output, pulmonary ventilation, etc.). These results are in line with previous reports suggesting that the on-transient VO2 kinetics for moderate square-wave exercise mainly reflect, and are dictated by, peripheral (local) rather than central attributes [4, 5, 33].

While the phase II time constants (reflective of the kinetics of muscle oxygen utilization, that is, [1, 24]) for the exercise involving the trained-muscle groups were both fast relative to normal subjects, the value in the cyclists (14.9 sec) was even (and significantly) faster than that in the kayakers (18.8 sec).
However, the mean ratio of leg to arm τ2 in our highly trained subjects was 81 ± 5%, appreciably higher than the respective ratio of 50–60% observed in healthy untrained subjects [8, 17, 27]. That is, while both muscle groups were evidently "highly trained," the cyclists were trained for longer-duration exercise (i.e., hours), whereas the kayakers' training program, preparing for all-out races lasting 2–7 minutes, included both aerobic- and anaerobic-type activities (including resistance training) [32]. Furthermore, the energy demand on the kayakers, during both competition and training, was frequently in excess of VO2peak, which was not the case for the cyclists. These factors may have contributed to even greater improvements in aerobic enzymatic function and capillarity, thought to be important contributors to VO2 kinetics [5, 24].

With respect to the relatively untrained-muscle groups, the kayakers evidenced faster VO2 kinetics, with respect to both τ2 and MRT, than the cyclists (Table 3). This suggests that, while both muscle groups may be considered relatively trained [29, 32] with respect to normal moderately fit subjects, there is a greater stabilizing-counterforce involvement of the legs in the task of kayaking [32] than of the arms in cycling.

Another interesting and possibly surprising finding of the present study was the similarity of the VO2 response kinetics of the trained (cyclists) and untrained (kayakers) leg muscles. This finding is not only surprising but also contrary to several previous reports demonstrating a significant speeding of these kinetics following endurance training [13, 34]. One possible explanation for this unexpected outcome is that the speeding of the VO2 kinetics with endurance training does not increase pari passu with the increase of VO2peak, but rather effectively attains a plateau.
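The legs-to-arms ratios discussed here can be checked approximately from the Table 3 group means; a minimal sketch (the paper's figures, e.g., the pooled 81 ± 5% τ2 ratio, were presumably averaged per subject, so values recomputed from group means are close to, but not exactly, the reported ones):

```python
# Group-mean time constants from Table 3 (seconds).
tau2 = {"kayakers": {"legs": 16.50, "arms": 18.81},
        "cyclists": {"legs": 14.90, "arms": 19.70}}
mrt  = {"kayakers": {"legs": 22.62, "arms": 26.73},
        "cyclists": {"legs": 25.76, "arms": 40.90}}

def legs_to_arms_pct(table, group):
    # A ratio near 100% means the arms respond almost as fast as the legs.
    return 100.0 * table[group]["legs"] / table[group]["arms"]

for group in ("kayakers", "cyclists"):
    print(f"{group}: tau2 legs/arms = {legs_to_arms_pct(tau2, group):.1f}%, "
          f"MRT legs/arms = {legs_to_arms_pct(mrt, group):.1f}%")

# Pooled mean of the two groups' tau2 ratios, cf. the reported 81 +/- 5%:
pooled = (legs_to_arms_pct(tau2, "kayakers")
          + legs_to_arms_pct(tau2, "cyclists")) / 2
print(f"pooled tau2 legs/arms ratio: {pooled:.1f}%")
```

From the group means the kayakers' τ2 ratio comes out near 88% and the cyclists' near 76%, with a pooled mean near 82%, consistent with the per-subject figures quoted in the text.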
We speculate that this "asymptotic" level of the phase II time constant is relatively "easy" to reach, since even the relatively mild "whole-body" training (including the leg muscles) of our kayaker group appeared sufficient for their leg VO2 kinetics to attain this assumed critical limiting level. This suggestion is further supported by the similar MRT in the trained and untrained leg muscles, despite significant differences in the respective VO2peak values (73.2 versus 53.4 mL/kg/min). Furthermore, when comparing the ratio of the legs' τ2 to that of the arms (τ2 legs/τ2 arms) between the two elite athlete groups, it becomes evident that the kayakers' arm-muscle "machinery" (as gauged by τ2) is significantly closer to that of their legs (88.3 versus 73.4% in the kayakers and cyclists, resp.). It is clear that this proximity is not due to a relatively slow leg τ2 in the kayakers, as the latter is as fast in the kayakers as in the cyclists (see Table 3). Such a high ratio implies that (a) arm training, as used by our kayakers, provides a stronger stimulus for improving the VO2 response kinetics during a constant moderate exercise task, and (b) to achieve a high level in kayak competition, one needs to bring arm muscle function to a level very close to that of the legs. It should be pointed out that the difference in the MRT legs-to-arms ratio between the two groups was even greater (86.7 versus 63.2% for the kayakers and cyclists, resp.).

*Limitations of Study Design.* Probably the most powerful way to test the hypothesis of this study would have been a "classic" training study (pre- versus posttraining comparisons). However, for legal and logistical reasons, such a "classic" approach would not permit a training regimen as long and intensive as that pursued by our subjects.
Further, the design of this study does not allow us to exclude the possibility that genetic predisposition influenced the observed results and hence the final conclusions of the study. Also, in the present study we used different testing modes of arm exercise in the two groups. The rationale for doing so was to allow the subjects in each group to perform, with their trained muscles, exercise similar to that which they were most accustomed to and trained for.

Notwithstanding the above-mentioned limitations, the results of the present study provide no support for the "transfer" of a training effect onto the VO2 on-transient response for moderate exercise, but rather support earlier reports demonstrating that peripheral (local), and not central (hemodynamic), effects may be important in dictating these kinetics. As a consequence, we suggest that predominantly local and/or specific training is required to speed the muscle O2 consumption response to moderate exercise, thereby reducing the associated oxygen deficit and hence the reliance on stored energy resources (predominantly phosphocreatine and O2) and on anaerobic lactate production. Finally, and in line with the above-mentioned limitations, further efforts should be made to validate the study's findings.

---
**Title:** VO2 Kinetics during Moderate Effort in Muscles of Different Masses and Training Level
**Authors:** Omri Inbar; Marcello Faina; Sabrina Demarie; Brian J. Whipp
**Journal:** ISRN Physiology (2013)
**Category:** Medical & Health Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101565
---

## Abstract

*Purpose.* To examine the relative importance of central or peripheral factors in the on-transient VO2 response dynamics to exercise with "trained" and relatively "untrained" muscles. *Methods.* Seven professional road cyclists and seven elite kayak paddlers volunteered to participate in this study. Each completed two bouts of constant-load "square-wave" rest-to-exercise transition, cycling and arm cranking, at a power output of 50–60% of the mode-specific VO2peak, presented in randomized order. *Results.* In the cyclists, the mean response time (MRT) as well as the phase II VO2 time constant (τ2) was significantly slower in the untrained than in the trained muscles. The opposite was the case in the kayakers. With respect to the relatively untrained muscle groups, while both demonstrated faster VO2 kinetics than normal (moderately fit) subjects, the kayakers evidenced faster VO2 kinetics than the cyclists. This suggests that there is a greater stabilizing-counterforce involvement of the legs in the task of kayaking than of the arms in cycling. *Conclusions.* The results of the present study provide no support for the "transfer" of a training effect onto the VO2 on-transient response for moderate exercise, but rather support earlier reports demonstrating that peripheral effects may be important in dictating these kinetics.

---

## Body

## 1. Introduction

The time course of the pulmonary oxygen uptake (VO2) response to constant-load exercise of moderate intensity can be characterized by two transient phases. In phase I, the initial, usually rapid, increase in VO2 is mediated by an increase in cardiac output, or more properly pulmonary blood flow, whilst the gas contents of the mixed-venous blood perfusing the lungs remain similar to those at rest.
Phase II transition is triggered by the gas contents of the blood perfusing the lungs being altered by the influence of active muscle metabolism; it therefore represents the blood transport delay between the active muscles and the lungs. During Phase II, VO2 reflects the decreasing mixed-venous O2 content supplementing the continuing increase in pulmonary blood flow. This is characterized by a monoexponential rise in VO2 up to the asymptotic or steady-state level (Phase III), the time course of which closely reflects that of the increased muscle oxygen consumption [1, 2]. However, if the work rate is appreciably above the individual’s lactate threshold (LT), VO2 may not reach a steady state, being associated with a continued slower rise in VO2 (slow component; VO2slow) of delayed onset. Opinions are divided over whether VO2 kinetics is limited by the rate of O2 delivery to the working muscle [3] or by peripheral factors such as oxidative enzyme activity within the muscle mitochondria or the rate at which carbohydrates are processed into the mitochondria at the pyruvate-acetyl CoA site, that is, rate limitation of O2 utilization by the working muscles despite adequate O2 delivery [4, 5].

It has been repeatedly shown that single-legged training (relatively small muscle mass) causes significant local (peripheral) changes (such as the concentration of high-energy phosphate compounds, the ratio of ATP to ADP, and inorganic phosphate), with only minor alterations of the cardiovascular (central) system [6]. It has also been acknowledged [6] that, in order to induce contralateral training modifications (cross-training), a larger muscle mass that produces both peripheral and central adaptations should be involved. Indeed, several reports (e.g., [7]) have demonstrated that arm training did not produce significant alterations in heart rate, stroke volume, or peripheral blood flow, either at rest or during exercise performed with the nontrained muscles (legs).
After leg training, however, the increase in the centrally mediated variables was approximately the same in the trained (legs) and the nontrained muscles (arms) [6, 8]. It is indeed widely accepted that there is more of a transfer effect on the central hemodynamics after training with large muscle groups, compared with training with small muscle groups. It is, therefore, suggested that arm muscles have a greater potential for local (peripheral) rather than centrally mediated improvement in function and that central circulatory changes occur in proportion to the muscle mass used during the training. In most of the studies that have examined the transfer-of-training phenomenon, the conclusions have been based on changes occurring in the maximal aerobic power (VO2max) and its determinants. However, in recent years, the on-transient VO2 kinetics during exercise has been considered to be a valid indicator of the integrated cardiovascular, respiratory, and muscular systems’ response to meet the increased metabolic demand of the exercise [8, 9]. VO2 kinetics has also been shown to be faster in relatively fit individuals and to be speeded by training, both in normal subjects and in those with cardiovascular and/or pulmonary disease [10, 11]. However, there are only a limited number of studies where VO2 kinetics has been applied to trained athletes [12, 13].

The metabolic and physiological responses to arm cranking differ markedly from those of leg cycling (see [14, 15] for reviews). At the same absolute power output, arm exercise results in higher rates of VO2, carbon dioxide output (VCO2), ventilation (Ve), and heart rate (HR), and greater increases in core temperature (Tre), plasma epinephrine, and blood lactate, than does leg exercise [14, 15]. In untrained individuals VO2peak during arm cranking is approximately 60–70% of their leg-cycling VO2peak [15].
When the physiological and metabolic responses to arm exercise are expressed as a percentage of the mode-specific VO2peak, the differences between arm and leg exercise become less pronounced [15, 16]. The above-mentioned differences, coupled with established records indicating that arm cranking results in an increased recruitment of type II muscle fibers [15, 17] and that type II muscle fibers have significantly lower metabolic efficiency than type I fibers [1], explain, at least partially, why mechanical efficiency is lower in arm cranking than in leg cycling [14, 15, 17].

There is evidence that arm cranking results in slower Phase II VO2 kinetics than leg cycling at similar absolute [8] and relative (to mode-specific maximal load) power outputs [17, 18]. Furthermore, arm muscles have been shown to have a lower capillary-to-muscle fiber ratio and a reduced total capillary cross-sectional area, and may induce intramuscular pressures during exercise that exceed blood perfusion pressure, when compared with leg muscles [14, 15]. It is also well documented that the proportion of type II muscle fibers is significantly higher in the muscles of the upper body compared to those of the lower body [19]. The reduced relative perfusion of arm muscle fibers, combined with findings that type II muscle fibers have slower VO2 kinetics than type I fibers, could result in slower active-muscle oxygen consumption kinetics, thereby slowing Phase II VO2 kinetics. Several other studies have already compared the on- and off-transient VO2 responses of arms and legs [8, 17]. Our present study, however, addresses the issue of the relative importance of central or peripheral factors in the on-transient VO2 kinetics to exercise with “trained” and “untrained” muscles in elite competitive athletes specializing in sport disciplines that require intensive and long-term training, predominantly with their arm muscles (kayakers) or leg muscles (cyclists). ## 2.
Material and Methods

Seven professional road cyclists and seven elite flat-water kayak paddlers volunteered to participate in this study during the maintenance phase of their normal training, after giving their written informed consent. All procedures were conducted in accordance with the ethical standards of the Institutional Committee of the Italian National Olympic Committee and with the Helsinki Declaration of 1975. Table 1 lists their physical characteristics. Each had trained and competed extensively at national and international levels for 5 to 10 years. Their training regimen included largely intensive and long-term aerobic activities, predominantly with their arm (kayakers) or leg (cyclists) muscles. Subjects came to the laboratory on four occasions to perform arm- and leg-cranking exercise studies. Each test was scheduled at a similar time of day in order to minimize the effect of diurnal biological fluctuation.

Table 1. Physical characteristics of subjects by group.

| Group | Age (years) | Height (cm) | Weight (kg) |
| --- | --- | --- | --- |
| Cyclists | 24.0 ± 3.7 | 175.0 ± 6.7 | 65.9 ± 5.9 |
| Kayakers | 22.0 ± 2.8 | 180.6 ± 5.5 | 79.7 ± 7.9 |

*Bold letters denote significant difference between groups (P<0.05).

### 2.1. Measurement of Arm-Cranking and Leg-Cycling Peak Oxygen Uptake (VO2peak) and Gas Exchange Threshold (GET)

During the first two visits to the laboratory each subject performed two incremental exercise tests to the limit of tolerance in order to determine the arm-cranking and leg-cycling GET and VO2peak. For the lower limbs, all athletes were tested on a cycle ergometer (Ergoline, Germany). For the upper limbs, the cyclists used a standard arm-cranking ergometer (Technogym, Top.XT, Italy); the kayakers used an arm paddling ergometer (Technogym, K-Race, Italy), mimicking the actual arm movement for which the kayakers’ arm muscles were trained. The subjects were seated upright such that the crank axis of the ergometer was aligned with the glenohumeral joint.
The height of the seat was adjusted to allow for a slight bend in the elbow when the crank handles were at their greatest distance from the subject. Additionally, the legs were not braced and the feet were placed on a footrest, mimicking the actual position in the kayak. The subjects were encouraged to use only their arms and shoulders to perform the exercise, whereas the use of the lower back and legs was discouraged. For the leg-cycling test, the seat height was adjusted such that the legs were in slight flexion (170°) at the nadir of the down stroke, while the handlebars were set according to individual preferences. Handlebar arm-pull was discouraged during leg cycling. During all tests the crank/cycle cadence was strictly kept between 90 and 100 strokes/revolutions per minute (SPM/RPM), respectively (a typical rhythm for both activities), despite the fact that the ergometers provided speed-independent power. All three ergometers were calibrated prior to the beginning of the study using a dynamic calibration rig (Cerini, Italy).

After a 15-minute standardized warm-up, consisting of either pedaling or cranking at 60 RPM or SPM at a work rate of 100 and 50 W, respectively, and following a 5-min rest, the subject then commenced the leg-cycling or arm-cranking task. The power output was increased progressively every minute from an initial work rate for the cyclists of 100 and 50 W for the legs and arms, respectively, and of 75 and 100 W for the kayakers, in order to bring the subject to the limit of tolerance in 8–12 minutes. This was achieved by increasing the power output in increments of 25 watts·min−1 for leg cycling and 20 watts·min−1 for arm cranking until the subject was unable to keep the pedaling (or stroke) rate above 50 per minute.

During both incremental exercise tests, HR, VO2, VCO2, and Ve were measured breath by breath via standard open-circuit spirometry techniques using a computerized metabolic cart (Quark b2, Cosmed, Italy).
Daily calibration of volume (with a 3-L syringe) and pretest calibration of the carbon dioxide and oxygen gas analyzers (with precision gas mixtures) were carried out. Heart rate was continuously monitored by means of a telemetric system (Polar Electro, Finland).

Peak oxygen uptake (VO2max) was defined as the highest average VO2 during a 30-second period of the last 90 seconds of the test. The criteria for the noninvasive determination of the gas exchange threshold (GET) were as follows. (1) The modified V-slope method [20], in which VCO2 is plotted against VO2; the GET was defined as the last value prior to the departure of the VCO2-versus-VO2 slope from linearity. (2) A systematic increase in the ventilatory equivalent for O2 (Ve/VO2) without an increase in the ventilatory equivalent for CO2 (Ve/VCO2), when plotted against exercise time [20]. The GET was determined by inspection in a blinded manner by two investigators. A third investigator was consulted to adjudicate between the two when they did not agree on threshold placement.

### 2.2. Oxygen Kinetics during Moderate-Intensity Constant-Load Exercise

On the two following visits, each subject performed one arm-cranking and one leg-cycling 4-5-minute constant-load test, in a randomized order. Following 10 minutes at rest, the work rate, equal to that which elicited 50% of VO2max during the incremental test, was applied instantaneously (from absolute rest) without prior warning given to the subject. During both tests the crank/cycle cadence was strictly kept between 90 and 100 SPM and RPM, respectively. Gas exchange was measured breath by breath using the same apparatus as for the incremental tests. Heart rate was monitored and recorded continuously during all constant-load exercise tests by means of a telemetric system (Polar Electro, Finland).

### 2.3. Calculation of Oxygen Uptake Kinetics

Individual responses during the rest-to-exercise transitions were linearly interpolated to give 1-s values.
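As an illustrative aside (not from the original study, which placed the GET by blinded visual inspection of two investigators), the modified V-slope criterion described in Section 2.1, that is, the point where the VCO2-versus-VO2 plot departs from its initial linearity, can be approximated numerically as a two-segment least-squares fit. The sketch below uses synthetic data; the function names and the grid-search approach are assumptions, not the authors' procedure:

```python
# Minimal sketch (not the authors' code): estimate the gas exchange
# threshold (GET) from VCO2-vs-VO2 data by fitting two straight
# segments and choosing the split with the smallest combined error.

def linfit(xs, ys):
    """Ordinary least-squares line fit; returns (intercept, slope, SSE)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    return intercept, slope, sse

def vslope_get(vo2, vco2, min_pts=3):
    """Return the last VO2 value before the VCO2-vs-VO2 slope departs
    from linearity (grid search over two-segment breakpoints)."""
    best_k, best_sse = None, None
    for k in range(min_pts, len(vo2) - min_pts + 1):
        sse = linfit(vo2[:k], vco2[:k])[2] + linfit(vo2[k:], vco2[k:])[2]
        if best_sse is None or sse < best_sse:
            best_k, best_sse = k, sse
    return vo2[best_k - 1]

# Synthetic example: slope 0.9 below a kink at VO2 = 2.1 L/min, 1.2 above.
vo2 = [1.0 + 0.2 * i for i in range(11)]   # 1.0 ... 3.0 L/min
vco2 = [0.9 * v if v < 2.1 else 0.9 * 2.1 + 1.2 * (v - 2.1) for v in vo2]
get = vslope_get(vo2, vco2)                # last VO2 on the linear segment
```

An automated breakpoint of this kind could only serve as a cross-check on the investigators' visual placement, not as a replacement for it.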
For each subject and each exercise protocol, data were time-aligned to the start of exercise, superimposed, and averaged to reduce the breath-to-breath noise and enhance the underlying physiological response characteristics. The baseline VO2 (VO2bs) was defined as the average VO2 measured during the last two minutes before the start of exercise (rest period). The VO2 mean response time (MRT) was fitted by combining the first and the second exponential terms. The MRT was then used to indicate the overall rate of change of the VO2 toward its new steady state. The MRT for a single-term exponential model is equivalent to τ + TD and therefore provides response information including not only the time constant (τ) but also the time delay (TD). At the MRT, this response has attained 63% of its final value. To estimate the Phase II time constant of the VO2 kinetics (τ2) we used a nonlinear least-squares monoexponential fit to the data, as previously described [17, 21]. However, in order to maximize the amount of transient data available for the characterization (an important determinant of the goodness of fit [22]), we chose to discard the first 15 sec rather than the more common 20 sec, reasoning that the more rapid cardiac output kinetics in our fit subjects would reduce the limb-to-lung transit time and hence the duration of Phase I.

VO2 kinetics tends to be slower at higher work rates even when the work rates are not associated with a sustained increase in blood lactate [23]. Therefore, and in order to facilitate comparison across subjects exercising at different absolute work rates, the relative gain of the response (G = A/work rate) and the exercise-specific relative oxygen deficit were computed using the following equation:

O2D/W = G (VO2/W) × MRT (sec) = mL O2/min/W. (1)

### 2.4. Data Analysis

Group data are reported as means and standard deviation.
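As an illustrative aside (not part of the original analysis), the relations defined in Section 2.3 (MRT = τ + TD, G = A/work rate, and the relative oxygen deficit of equation (1)) can be made concrete with a short Python sketch; all numerical values and names below are hypothetical:

```python
import math

def vo2_model(t, vo2_bs, amp, tau, td):
    """Single-term exponential VO2 response: baseline until the time
    delay TD, then a rise of amplitude `amp` with time constant `tau`."""
    if t <= td:
        return vo2_bs
    return vo2_bs + amp * (1.0 - math.exp(-(t - td) / tau))

# Hypothetical values for one rest-to-exercise transition (assumptions).
vo2_bs = 0.41    # baseline VO2, L/min
amp = 2.45       # asymptotic amplitude A, L/min
tau = 14.9       # Phase II time constant, s
td = 10.9        # time delay, s
work = 170.0     # constant work rate, W

mrt = tau + td               # mean response time: MRT = tau + TD, s
gain = amp * 1000.0 / work   # G = A / work rate, mL O2/min/W
rel_o2d = gain * mrt         # relative O2 deficit, per equation (1)

# At t = TD + tau the single-exponential response has covered
# 1 - 1/e, about 63%, of its amplitude, as stated in the text.
covered = (vo2_model(td + tau, vo2_bs, amp, tau, td) - vo2_bs) / amp
```

With these numbers, G is about 14.4 mL O2/min/W and MRT about 25.8 s, so their product, the work-rate-normalized oxygen deficit, is about 372, the same arithmetic that links the G, MRT, and relative O2D values reported in the Results.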
A two-way ANOVA with repeated measures for training status (trained or nontrained) and for muscle group (upper or lower extremity) (independent variables) was used to determine differences and relationships in and among the various dependent (O2 kinetics parameters) and independent parameters between arm cranking and leg cycling and between the trained and nontrained muscles. Tukey’s post hoc test was utilized to determine where significant differences existed. Statistical significance was accepted at P<0.05. ## 3. Results

### 3.1.
Peak and Related Values

Table 2 presents data obtained during and at the end of all incremental and submaximal constant-load tests performed with the trained and untrained muscle groups.

Table 2. Peak and submaximal responses to arm and leg exercise by each group (mean ± SD).

| | Kayakers: Leg^a | Kayakers: Arm^b | Cyclists: Leg^c | Cyclists: Arm^d |
| --- | --- | --- | --- | --- |
| Peak work rate (W) | 298 ± 42^cd | 279 ± 20^cd | 390 ± 52^abd | 157 ± 17^abc |
| Peak VO2 (mL/min) | 4268 ± 656^cd | 4087 ± 499^cd | 4921 ± 380^abd | 3147 ± 436^abc |
| Peak HR (b/min) | 183 ± 3 | 183 ± 8 | 192 ± 11^d | 178 ± 11^c |
| VO2ss (mL/min) | 2015 ± 368^cd | 1943 ± 404^cd | 2952 ± 195^abd | 1501 ± 384^abc |
| Work rate_ss (W) | 108 ± 31^cd | 106 ± 15^cd | 170 ± 16^abd | 69 ± 12^abc |
| Work rate_ss (% VO2peak)** | 39 ± 3 | 40 ± 4 | 44 ± 4 | 43 ± 3 |
| VO2ss (% VO2peak)** | 47 ± 6^c | 47 ± 6^c | 58 ± 4^abd | 50 ± 8^c |
| VO2 at GET (L/min) | 2.84 ± 0.39^c | 3.31 ± 0.25^cd | 3.86 ± 0.33^abd | 2.50 ± 0.33^abc |
| VO2ss/VO2 at GET (%) | 72.3 ± 21^bd | 62.5 ± 15^ac | 74.1 ± 11^bd | 61.6 ± 17^ac |
| HRss (% VO2peak)** | 66 ± 7 | 63 ± 6 | 67 ± 5 | 62 ± 6 |

Like letters denote significant difference (P<0.05). **Values are percentages of their respective peak values. Peak work rate: work rate achieved at exhaustion; Peak VO2: rate of oxygen uptake at exhaustion; Peak HR: heart rate at exhaustion; VO2ss: rate of oxygen uptake at steady-state level; Work rate_ss: work rate at steady-state level; VO2 at GET: rate of oxygen uptake at the gas exchange threshold; VO2ss/VO2 at GET: ratio of oxygen uptake at steady-state level to oxygen uptake at the gas exchange threshold; HRss: heart rate at steady-state level.

#### 3.1.1. Cyclists

As expected for cyclists, all peak mechanical and cardiovascular-related responses (central) were significantly higher in the trained (legs) compared with the nontrained (arms) muscles. The exception was the peak respiratory exchange ratio (RER) values, which were similar in the two muscle groups (1.16 ± 0.05 versus 1.16 ± 0.06) and of a magnitude consistent with maximal effort in both. The arms-to-legs ratio of VO2peak in this group was 64% (Table 2).

#### 3.1.2.
Kayakers

For these athletes the results were considerably different, demonstrating similar peak responses (mechanical and physiological) in both the trained and the nontrained muscles (Table 2). The arms-to-legs ratio of VO2peak in this group was 96%.

Also presented in Table 2 are values representing relative physiologic stress and strain during the constant submaximal-load exercise challenges. Work rates, VO2, and HR achieved during the steady-state phase of the respective submaximal constant-load exercise, in percent of their respective peak values, as well as the ratio between the measured VO2 during the constant-load exercise challenges and the respective VO2 of the muscle group-specific GETs, were not significantly different within and between groups (Table 2).

### 3.2. VO2 Kinetics

Figures 1–4 compare the groups’ mean Phase II of the VO2 response kinetics (excluding Phase I in each response), during the transition from rest to a constant moderate exercise level, between and within groups and muscles, along with the best exponential fit to each mean response. Visual inspection of these plots reveals that phase II of the VO2 response rose in biphasic fashion toward phase III (the exercise steady-state levels).
It seems that the relative load selected for this study (50–60% mode-specific VO2peak) was not only physiologically similar (in relative terms) (see Table 2), but also sufficiently low for both the lower- and upper-body musculatures for the attainment of a steady-state VO2 (after 2-3 min) without the development of a VO2slow component, in the trained and untrained muscles alike.

Figure 1. Average phase II of the VO2 response during transition from rest to moderate exercise using untrained arms (⋄) and trained legs (▴) by the cyclists.

Figure 2. Average phase II of the VO2 response during transition from rest to moderate exercise using trained arms (⋄) and untrained legs (▴) by the kayakers.

Figure 3. Average phase II of the VO2 response during transition from rest to moderate exercise using lower-limb muscles by the cyclists (trained) (▴) and the kayakers (untrained) (⋄).

Figure 4. Average phase II of the VO2 response during transition from rest to moderate exercise using upper-limb muscles by the cyclists (untrained) (▴) and the kayakers (trained) (⋄).

A more quantitative assessment of the relative speed of the VO2 response as a function of muscle group and training status is presented in Table 3 (means ± SD).

Table 3. Parameters of the oxygen uptake response during moderate exercise as a function of exercise modality (muscle group involved) (mean ± SD).

| Variable | Kayakers: Leg^a | Kayakers: Arm^b | Cyclists: Leg^c | Cyclists: Arm^d |
| --- | --- | --- | --- | --- |
| VO2bs (L/min) | 0.46 ± 0.04 | 0.46 ± 0.09 | 0.41 ± 0.07 | 0.40 ± 0.10 |
| VO2ss (L/min) | 1.99 ± 0.34^cd | 1.94 ± 0.40^cd | 2.85 ± 0.33^abd | 1.50 ± 0.38^ac |
| A (L/min) | 1.54 ± 0.34^cd | 1.45 ± 0.36^cd | 2.45 ± 0.38^abd | 1.03 ± 0.26^abc |
| G (mL O2/min/W) | 14.35 ± 1.11 | 13.6 ± 2.21 | 14.38 ± 0.84 | 14.5 ± 3.83 |
| τ2 (sec) | 16.50 ± 2.6^bd | 18.81 ± 3.7^ac | 14.90 ± 2.7^bd | 19.70 ± 3.6^ac |
| MRT (sec) | 22.62 ± 3.1^bcd | 26.73 ± 8.9^ad | 25.76 ± 4.9^d | 40.90 ± 5.6^abc |
| O2D (mL) | 34.6 ± 11.9^c | 39.4 ± 11.5^c | 61.2 ± 17.4^abd | 40.7 ± 15.3^c |
| Relative O2D (mL O2/min/W) | 323.2 ± 97^d | 363.2 ± 92^d | 370.3 ± 90^d | 594.5 ± 155^abc |

Like letters denote significant difference (P<0.05). VO2bs: average value over the two min of resting baseline; VO2ss: rate of oxygen uptake at steady-state level; A: the asymptotic amplitude of the exponential term; τ2: time constant of the primary phase; MRT: mean VO2 response time; O2D: calculated oxygen deficit; G: relative (to work rate) gain of the VO2 response; Relative O2D: O2 deficit normalized to work rate.

#### 3.2.1. Within-Group Comparisons

Trained versus Untrained Muscles. (1) Cyclists. Onset of lower- and upper-extremity exercise at 50 to 60% of the mode-specific VO2max was associated with significantly higher amplitude-related values (VO2ss and A), and with faster overall (MRT) and Phase II (τ2) response times, in the trained muscles (legs) compared with the nontrained muscles (arms) (Figure 1 and Table 3). It should be pointed out that when normalized for the work rate attained by each muscle group (170 versus 69 W for the legs and arms, resp.), the “relative” rise (amplitude) in VO2 per unit load (G) was similar in the two muscle groups. In contrast, the relative oxygen deficit (O2D/W) remained significantly larger in the untrained (arm) muscles compared with the trained (leg) muscles (594.5 versus 370.3 mL/min/W) (Table 3).

(2) Kayakers. Although arm training (kayakers) did not bring about superiority in any of the upper-body musculature VO2 amplitude- or response-related parameters at the onset of a below-threshold square-wave exercise, such training promoted the VO2 response-related parameters to a level approaching that of their legs (Table 3).

#### 3.2.2. Between-Groups Comparisons

Trained Muscles (Leg (Cyclists) versus Arm (Kayakers) Muscles). As expected, and partially due to the differences in muscle mass and consequently in work rate, VO2ss and A were significantly higher in the trained lower limbs than in the trained upper limbs (Figure 3 and Table 3). Similarly, the Phase II time constant (τ2) was faster in the trained lower than the trained upper limbs.
Nevertheless, the overall VO2 transient during the square wave exercise (MRT) and the relative O2D did not differ significantly between the large and the relatively small trained-muscle groups (Table 3).Nontrained Muscles (Leg (Kayakers) versus Arm (Cyclists) Muscles).Except for VO2bs and G, all other VO2 response kinetic parameters (A, MRT, and τ2) were higher (or faster) in the untrained lower limbs compared with the untrained upper limbs (Table 3). Similarly, relative O2D showed significantly smaller volume when exercising with the legs compared with arm exercise (323.2 versus 594.5 mL O2/min/W, resp.).Trained versus Nontrained Muscles(1) Lower Limbs.Whereas load- or muscle mass-associated variables (VO2ss and A) were significantly higher in the trained legs compared with the nontrained legs, variables related to the rate of VO2 response during moderate constant load exercise (τ2 and MRT) showed no significant differences between the trained and the untrained legs (Table 3). Similar trend was also evident in the load-normalized rise in oxygen uptake (G) and oxygen deficit (O2D) being statistically similar in the trained and the nontrained leg muscles (Table 3).(2) Upper Body. Except for the τ2 (statistically similar in the trained and untrained upper body muscles), and unlike in the lower limbs, the trained arms demonstrated significantly higher (A, VO2ss) and faster (MRT) O2 kinetics-related values than the untrained arms (Figure 4 and Table 3). Relative O2D showed significantly smaller volume in the trained muscles compared with that of the untrained upper body muscles (Table 3). ## 3.1. Peak and Related Values Table2 presents data obtained during and at the end of all incremental and submaximal constant-load tests performed with the trained and untrained-muscle groups.Table 2 Peak and sub-maximal responses to arm and leg exercise by each group (mean ± SD). 
Kayakers Cyclists Lega Armb Legc Armd Peak work rate (W) 298 ± 42cd 279 ± 20cd 390 ± 52abd 157 ± 17abc Peak VO2 (mL/min) 4268± 656cd 4087 ± 499cd 4921 ± 380abd 3147 ± 436abc Peak HR (b/min) 183 ± 3 183 ± 8 192 ± 11d 178 ± 11c VO 2ss (mL/min) 2015 ± 368cd 1943 ± 404cd 2952 ± 195abd 1501 ± 384abc Work ratess (W) 108 ± 31cd 106 ± 15cd 170 ± 16abd 69 ± 12abc Work ratess (% VO2peak)** 39 ± 3 40 ± 4 44 ± 4 43 ± 3 VO 2ss (% VO2peak)** 47 ± 6c 47 ± 6c 58 ± 4abd 50 ± 8c VO2 at GET (L/min) 2.84 ± 0.39c 3.31 ± 0.25cd 3.86 ± 0.33abc 2.50 ± 0.33abc VO 2ss/VO2 at GET (%) 72.3 ± 21bd 62.5 ± 15ac 74.1 ± 11bd 61.6 ± 17ac HRss (% VO2peak)** 66 ± 7 63 ± 6 67 ± 5 62 ± 6 Like letters denote significant difference (P<0.05). **Values are percentages of their respective peak values. Peak work rate: work rate achieved at exhaustion; Peak VO2: rate of oxygen uptake at exhaustion; Peak HR: heart rate at exhaustion; VO2ss: rate of oxygen uptake at steady state level; Work ratess: work rate at steady state level; VO2 at GET: rate of oxygen uptake at the gas exchange threshold; VO2ss/VO2 at GET: ratio of oxygen uptake at steady state level to oxygen uptake at the gas exchange threshold; HRss: heart rate at steady state level. ### 3.1.1. Cyclists As expected for cyclists, all peak mechanical and cardiovascular-related responses (central) were significantly higher in the trained (legs) compared with the nontrained (arms) muscles. The exception was the peak respiratory exchange ratio values (RERs), which were similar in the two muscle groups (1.16±0.05 versus 1.16±0.06), and of a magnitude, which was consistent with maximal effort in both. The arms-to-legs ratio of VO2peak in this group was 64% (Table 2). ### 3.1.2. Kayakers For these athletes the results were considerably different, demonstrating similar peak responses (mechanical and physiological) in both the trained and the nontrained muscles (Table2). 
The arms-to-legs ratio of VO2peak in this group was 96%. Also presented in Table 2 are values representing relative physiologic stress and strain during the constant submaximal load exercise challenges. Work rates, VO2, and HR achieved during the steady-state phase of the respective submaximal constant-load exercise, in percent of their respective peak values, as well as the ratio between the measured VO2 during the constant-load exercise challenges and the respective VO2 of the muscle group-specific GETs, were not significantly different within or between groups (Table 2).

## 3.2. VO2 Kinetics

Figures 1–4 compare the groups' mean phase II of the VO2 response kinetics (excluding phase I in each response) during the transition from rest to a constant moderate exercise level, between and within groups and muscles, along with the best exponential fit to each mean response. Visual inspection of these plots reveals that phase II of the VO2 response rose in biphasic fashion toward phase III (the exercise steady-state level). It seems that the relative load selected for this study (50–60% of mode-specific VO2peak) was not only physiologically similar (in relative terms) (see Table 2), but also sufficiently low for both the lower and upper body musculatures to attain a steady-state VO2 (after 2-3 min) without the development of a VO2 slow component, in the trained and untrained muscles alike.

Figure 1 Average phase II of the VO2 response during transition from rest to moderate exercise using untrained arms (⋄) and trained legs (▴) by the cyclists.

Figure 2 Average phase II of the VO2 response during transition from rest to moderate exercise using trained arms (⋄) and untrained legs (▴) by the kayakers.

Figure 3 Average phase II of the VO2 response during transition from rest to moderate exercise using lower limb muscles by the cyclists (trained, ▴) and the kayakers (untrained, ⋄).

Figure 4 Average phase II of the VO2 response during transition from rest to moderate exercise using upper limb muscles by the cyclists (untrained, ▴) and the kayakers (trained, ⋄).

A more quantitative assessment of the relative speed of the VO2 response as a function of muscle group and training status is presented in Table 3 (means ± SD).

Table 3 Parameters of oxygen uptake response during moderate exercise as a function of exercise modality (muscle group involved) (mean ± SD).
| Variable | Kayakers: Legs^a | Kayakers: Arms^b | Cyclists: Leg^c | Cyclists: Arm^d |
|---|---|---|---|---|
| VO2bs (L/min) | 0.46 ± 0.04 | 0.46 ± 0.09 | 0.41 ± 0.07 | 0.40 ± 0.10 |
| VO2ss (L/min) | 1.99 ± 0.34^cd | 1.94 ± 0.40^cd | 2.85 ± 0.33^abd | 1.50 ± 0.38^ac |
| A (L/min) | 1.54 ± 0.34^cd | 1.45 ± 0.36^cd | 2.45 ± 0.38^abd | 1.03 ± 0.26^abc |
| G (mL O2/min/W) | 14.35 ± 1.11 | 13.6 ± 2.21 | 14.38 ± 0.84 | 14.5 ± 3.83 |
| τ2 (sec) | 16.50 ± 2.6^bd | 18.81 ± 3.7^ac | 14.90 ± 2.7^bd | 19.70 ± 3.6^ac |
| MRT (sec) | 22.62 ± 3.1^bcd | 26.73 ± 8.9^ad | 25.76 ± 4.9^d | 40.90 ± 5.6^abc |
| O2D (mL) | 34.6 ± 11.9^c | 39.4 ± 11.5^c | 61.2 ± 17.4^abd | 40.7 ± 15.3^c |
| Relative O2D (mL O2/min/W) | 323.2 ± 97^d | 363.2 ± 92^d | 370.3 ± 90^d | 594.5 ± 155^abc |

Like letters denote significant difference (P < 0.05). VO2bs: average value over the two min of resting baseline; VO2ss: rate of oxygen uptake at steady-state level; A: asymptotic amplitude of the exponential term; τ2: time constant of the primary phase; MRT: mean VO2 response time; O2D: calculated oxygen deficit; G: relative (to work rate) gain of the VO2 response; Relative O2D: O2 deficit normalized to work rate.

### 3.2.1. Within-Group Comparisons

Trained versus Untrained Muscles. (1) Cyclists. Onset of lower and upper extremity exercise at 50 to 60% of the mode-specific VO2max was associated with significantly higher amplitude-related values (VO2ss and A), and with faster overall (MRT) and phase II (τ2) response times, in the trained muscles (legs) compared with the nontrained muscles (arms) (Figure 1 and Table 3). It should be pointed out that, when normalized for the work rate attained by each muscle group (170 versus 79 W for the legs and arms, resp.), the "relative" rise (amplitude) in VO2 per unit load (G) was similar in the two muscle groups. In contrast, the relative oxygen deficit (O2D/W) remained significantly larger in the untrained (arm) muscles compared with the trained (leg) muscles (594.5 versus 370.3 mL/min/W) (Table 3).

(2) Kayakers.
Although arm training (kayakers) did not bring about superiority in any of the upper body musculature's VO2 amplitude- or response-related parameters at the onset of a below-threshold square-wave exercise, such training promoted the VO2 response-related parameters to a level approaching that of their legs (Table 3).

### 3.2.2. Between-Groups Comparisons

Trained Muscles (Leg (Cyclists) versus Arm (Kayakers) Muscles). As expected, and partially due to the differences in muscle mass and consequently in work rate, VO2ss and A were significantly higher in the trained lower limbs than in the trained upper limbs (Figure 3 and Table 3). Similarly, the phase II time constant (τ2) was faster in the trained lower than in the trained upper limbs. Nevertheless, the overall VO2 transient during the square-wave exercise (MRT) and the relative O2D did not differ significantly between the large and the relatively small trained-muscle groups (Table 3).

Nontrained Muscles (Leg (Kayakers) versus Arm (Cyclists) Muscles). Except for VO2bs and G, all other VO2 response kinetic parameters (A, MRT, and τ2) were higher (or faster) in the untrained lower limbs compared with the untrained upper limbs (Table 3). Similarly, the relative O2D was significantly smaller when exercising with the legs than with the arms (323.2 versus 594.5 mL O2/min/W, resp.).

Trained versus Nontrained Muscles. (1) Lower Limbs. Whereas load- or muscle mass-associated variables (VO2ss and A) were significantly higher in the trained legs compared with the nontrained legs, variables related to the rate of the VO2 response during moderate constant-load exercise (τ2 and MRT) showed no significant differences between the trained and the untrained legs (Table 3). A similar trend was also evident in the load-normalized rise in oxygen uptake (G) and oxygen deficit (O2D), both being statistically similar in the trained and the nontrained leg muscles (Table 3).

(2) Upper Body.
Except for τ2 (statistically similar in the trained and untrained upper body muscles), and unlike in the lower limbs, the trained arms demonstrated significantly higher (A, VO2ss) and faster (MRT) O2 kinetics-related values than the untrained arms (Figure 4 and Table 3). The relative O2D was significantly smaller in the trained muscles than in the untrained upper body muscles (Table 3).

## 4. Discussion

To our knowledge, this is the first study to consider the kinetics of the VO2 response to dynamic muscular exercise in highly trained athletes who compete in events predominantly utilizing the lower or upper extremities, that is, cyclists and kayakers.

As expected, both VO2peak and peak power output were higher for the trained than for the untrained muscles in each group. However, to make a valid comparison of VO2 kinetics across exercise modes, we chose to normalize the exercise intensity to 50–60% of the mode-specific VO2peak (i.e., below the GET in these trained subjects). In this exercise intensity domain, it is believed that a metabolic inertia within the muscle cells themselves is the principal limitation to the acceleration of oxidative metabolism after the onset of exercise [4, 24, 25]. Hence, despite differences in the absolute VO2, the relative intensity of each square-wave transition was successfully matched across exercise modes: the % ΔVO2, the % of the mode-specific peak power output, and the % of the mode-specific maximal heart rate were not significantly different (Table 2). Therefore, our study allows a comparison of the fundamental components of the VO2 kinetics between cycling and arm cranking within the same intensity domain.

Unsurprisingly, the VO2 response to moderate exercise in the cyclists revealed different patterns for the trained (legs) and the untrained (arms) muscles (Figure 1; Table 3). The higher absolute work rate in the trained-muscle group naturally resulted in a higher steady-state amplitude of the VO2 response than for the untrained-muscle group. However, the phase II time constant (a functional correlate of the muscle VO2 time constant) [2, 9, 24], as well as the VO2 mean response time (MRT), reflecting, in addition, the utilization of oxygen from the oxygen stores [26], was significantly faster in the trained than in the untrained muscles.
Consequently, the oxygen deficit per unit power output (O2D/W) was significantly larger in the tests with the untrained muscles than with the trained muscles (Table 3).

While the VO2 kinetics for the cyclists' arms (τ2 = 19.7 sec; MRT = 40.9 sec) were slower than those of their legs (τ2 = 14.9 sec; MRT = 25.8 sec), they were appreciably faster than those previously reported for arm-cranking exercise in normal untrained males (60–80 s) [8, 18]. In fact, the VO2 time constant for the cyclists' arms was similar to, and sometimes even faster than, those previously reported for normal nontrained legs (30–40 s) [8, 27]. This suggests that the muscles used by the cyclists for the arm-cranking exercise ought not to be considered "untrained," reflecting the additional compensatory component arising from dynamic stabilization of the body during cycling and even periods of active "pulling" on the handlebars. Support for the above contention comes from studies by Baker et al. [28, 29], who reported that the power generated during sprint cycling was significantly higher in a protocol that allowed gripping of the handlebars than in one that did not. Their results demonstrated that the arms and the upper body were involved in stabilizing the entire body so that the lower limbs could exert forces downward onto the pedals to generate mechanical power, and that the contribution of muscle groups not directly involved in power generation during sprint cycling cannot be discounted.

Alternatively, of course, a "transfer effect" on the VO2 kinetics may have been contributory, resulting from an increased amount of blood, and therefore oxygen, available to the cyclists' arms as a result of a "central" training effect consequent to the leg training. However, for this to be contributory, the VO2 kinetics would need to be limited by oxygen delivery at this work intensity.
However, there is no convincing evidence that increases in maximum VO2 and, hence, maximum cardiac output induced by training alter the steady-state cardiac output response to a moderate-intensity work rate, at least for leg exercise (e.g., [26]). Furthermore, the VO2 time constant for moderate exercise has been demonstrated not to be speeded by experimentally induced increases in muscle blood flow [4], by beginning the exercise when blood flow has remained high following a prior bout of higher-intensity exercise [30], or even by increased inspired O2 fractions [31]. Hence, any improvement in central indices of cardiovascular function is unlikely to be contributory.

The VO2 response to moderate exercise in the kayakers, in contrast, revealed similar patterns (amplitude, kinetics, and O2D) for the arm and the leg exercise, both in absolute and relative terms (Figure 2; Table 3). These results suggest either that the long-term intensive training of the relatively small muscle mass of the arms did not cause any appreciable cross-training effect on the VO2 kinetics and/or that there is a significant leg contribution to kayaking [32]. The fact that the values for the VO2 time constant (16.5 sec) and MRT (22.6 sec) for leg exercise in the kayakers are appreciably faster than for normal untrained cycle ergometry [2, 8] suggests that the latter explanation is more likely.

Our finding that the VO2 kinetics for the kayakers' arms are not significantly different from those of the cyclists' arms, despite the "central" capacity (as reflected by the leg VO2peak) being appreciably lower in the kayakers, suggests that the VO2 kinetics at the onset of subthreshold square-wave exercise depend primarily on peripheral factors (muscle mass, distribution of muscle fiber type, number of mitochondria, activity levels of oxidative enzymes, and possibly muscle vascularization) and not on central factors (cardiac output, pulmonary ventilation, etc.).
These results are in line with previous reports suggesting that the on-transient VO2 kinetics for moderate square-wave exercise are mainly reflective of, and dictated by, peripheral (local) rather than central attributes [4, 5, 33].

While the phase II time constants (reflective of the kinetics of muscle oxygen utilization, that is, [1, 24]) for the exercise involving the trained-muscle groups were both fast relative to normal subjects, the value in the cyclists (14.9 sec) was even (and significantly) faster than that in the kayakers (18.8 sec). However, the mean ratio of leg to arm τ2 in our highly trained subjects was 81 ± 5%, appreciably higher than the respective ratio of 50–60% observed in healthy untrained subjects [8, 17, 27]. That is, while both muscle groups were evidently "highly trained," the cyclists were trained for longer-duration exercise (i.e., hours), whereas the kayakers' training program, preparing for all-out races lasting 2–7 minutes, included both aerobic- and anaerobic-type activities (including resistance training) [32]. Furthermore, the energy demand on the kayakers, during both competition and training, was frequently in excess of VO2peak, which was not the case for the cyclists. These factors may have contributed to even greater improvements in aerobic enzymatic function and capillarity, thought to be important contributors to VO2 kinetics [5, 24].

With respect to the relatively untrained-muscle groups, the kayakers evidenced faster VO2 kinetics, with respect to both τ2 and MRT, than the cyclists (Table 3).
This suggests that, while both muscle groups may be considered relatively trained [29, 32] with respect to normal moderately fit subjects, there is a greater stabilizing-counterforce involvement of the legs in the task of kayaking [32] than of the arms in cycling.

Another interesting and possibly surprising finding of the present study was the similarity of the VO2 response kinetics for the trained (cyclists) and untrained (kayakers) leg muscles. This finding is not only surprising, but also contrary to several previous reports demonstrating a significant speeding of these kinetics following endurance training [13, 34]. One possible explanation for this unexpected outcome is that the speeding of the VO2 kinetics with endurance training does not increase pari passu with the increase in VO2peak, but rather effectively attains a plateau. We speculate that this "asymptotic" level of the phase II time constant is relatively "easy" to reach, since even the relatively mild "whole body" training (including the leg muscles) of our kayaker group appeared sufficient for their leg VO2 kinetics to attain this assumed critical limiting level. This suggestion is further supported by the similar MRT in the trained and untrained leg muscles, despite significant differences in the respective VO2peak values (73.2 versus 53.4 mL/kg/min). Furthermore, when comparing the ratio of the τ2 of the leg muscles to that of the arm muscles (τ2 legs/τ2 arms) between the two elite athlete groups, it becomes evident that the kayakers' arm muscle "machinery" (as determined by τ2) is significantly closer to that of their leg muscles (88.3 versus 73.4% in the kayakers and cyclists, resp.). It is clear that such proximity is not due to a relatively slow leg τ2 in the kayakers, as the latter is as fast in the kayakers as it is in the cyclists (see Table 3).
Such a high ratio implies that (a) arm training, as used by our kayakers, provides a stronger stimulus for improving the VO2 response kinetics during a constant moderate exercise task, and (b) to achieve a high level in kayak competition, one needs to promote one's arm muscle functioning to a level very close to that of the legs. It should be pointed out that the difference in the MRT legs-to-arms ratio between the two groups was even greater (86.7 versus 63.2% for the kayakers and cyclists, resp.).

Limitations of Study Design. Probably the most powerful way to test the hypothesis of this study would be to conduct a "classic" training study (pre- versus posttraining comparisons). However, legalistically and logistically, such a "classic" approach would not allow a training regimen as long and intensive as that pursued by our subjects. Further, the design of this study does not allow us to exclude the possibility that genetic predisposition influenced the observed results and hence the final conclusions of the study.

Also, in the present study we used differing testing modes of arm exercise between groups. The logic for using different exercise modalities to test and compare arm exercise between cyclists and kayakers was to allow subjects in each group to perform exercise (with their respective trained muscles) similar to that which they were most accustomed to and trained for.

Notwithstanding the above-mentioned limitations, the results of the present study provide no support for the "transfer" of a training effect onto the VO2 on-transient response for moderate exercise, but rather support earlier reports demonstrating that peripheral (local) and not central (hemodynamic) effects may be important in dictating these kinetics. As a consequence, we suggest that predominantly local and/or specific training is required to speed the muscle O2 consumption response to moderate exercise.
This consequently reduces the associated oxygen deficit and hence the reliance on stored energy resources (predominantly phosphocreatine and O2) and on anaerobic lactate production.

Finally, and in line with the above-mentioned limitations, further efforts should be made to validate the study's findings.

---
*Source: 101565-2012-12-31.xml*
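The kinetic analysis reported above (a mono-exponential fit to phase II of the VO2 response, yielding τ2, MRT, and an oxygen-deficit estimate) can be sketched numerically. This is a minimal illustration on synthetic data, not the authors' actual fitting procedure; the parameter values, the time-delay term, and the conventional O2-deficit approximation (MRT × steady-state amplitude) are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Mono-exponential phase II model (hypothetical form): baseline until a time
# delay td, then a rise of amplitude A with time constant tau.
def phase2(t, vo2_bs, A, tau, td):
    return np.where(t < td, vo2_bs,
                    vo2_bs + A * (1.0 - np.exp(-(t - td) / tau)))

# Synthetic breath-by-breath-like data, values loosely in the range of Table 3
rng = np.random.default_rng(0)
t = np.arange(0.0, 180.0, 2.0)                 # seconds from exercise onset
true_curve = phase2(t, 0.45, 2.4, 15.0, 12.0)  # L/min
y = true_curve + rng.normal(0.0, 0.05, t.size) # add measurement noise

popt, _ = curve_fit(phase2, t, y, p0=[0.5, 2.0, 20.0, 10.0])
vo2_bs, A, tau, td = popt

# Mean response time of a single exponential with delay, and the conventional
# O2-deficit approximation (amplitude in L/min times MRT in minutes)
mrt = td + tau                 # seconds
o2_deficit = A * mrt / 60.0    # liters
print(f"tau = {tau:.1f} s, MRT = {mrt:.1f} s, O2 deficit = {o2_deficit:.2f} L")
```

Fitting the group-mean response rather than individual breaths, as done in the study's figures, follows the same scheme with averaged data.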
2013
# Therapeutic Effect of P-Cymene on Lipid Profile, Liver Enzyme, and Akt/mTOR Pathway in Streptozotocin-Induced Diabetes Mellitus in Wistar Rats

**Authors:** Maryam Arabloei Sani; Parichehreh Yaghmaei; Zahra Hajebrahimi; Nasim Hayati Roodbari
**Journal:** Journal of Obesity (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1015669

---

## Abstract

Diabetes is a serious public health problem in low- and middle-income countries. There is a strong link between hyperglycemia, oxidative stress, inflammation, and the development of diabetes mellitus. PI3K/Akt/mTOR is the main insulin signaling pathway controlling lipid and glucose metabolism. P-cymene is an aromatic monoterpene with a wide range of therapeutic properties, including antioxidant and anti-inflammatory activity. In the present study, the antidiabetic effects of p-cymene were investigated. Diabetes was induced using streptozotocin in male Wistar rats. The effects of p-cymene and metformin were studied on levels of glucose (Glu), lipid profile, liver enzymes, oxidative stress, and the expression of Akt, phospho-Akt, and mTOR (mammalian target of rapamycin) proteins, using biochemical, histological, and immunohistochemical analyses. Data have shown that p-cymene can improve serum levels of Glu, total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and malondialdehyde (MDA), and the expression of mTOR, Akt, and phospho-Akt protein in diabetic animals. These results suggest that p-cymene has hypoglycemic, hypolipidemic, and antioxidant properties. It can regulate the Akt/mTOR pathway and reduce hepatic and pancreatic injury. It can be suggested for diabetes management, alone or simultaneously with metformin.

---

## Body

## 1. Background

Type 2 diabetes mellitus (T2DM), formerly called adult-onset diabetes, is the most common chronic metabolic disorder. It is characterized mainly by high glucose (Glu) concentrations in the blood, resulting from insulin resistance and/or relatively insufficient insulin secretion in peripheral tissues [1, 2]. Based on published reports, T2DM is the most frequent type of diabetes mellitus, accounting for 87% to 91% of all diabetes patients [3]. The World Health Organization estimated that 439 million people would have T2DM by the year 2030 [4]. As a multifactorial disease, T2DM development is caused by a combination of genetic and environmental factors, such as obesity, lack of physical activity, an unhealthy diet, stress, cigarette smoking, and excessive alcohol consumption [5].

Extremely high blood glucose concentrations in T2DM patients can lead to serious, potentially life-threatening vascular complications, including atherosclerosis, retinopathy, neuropathy, nephropathy, and amputations [6]. A growing body of studies has revealed an increase in biomarkers of oxidative stress in patients with T2DM, especially in subjects with diabetes complications such as micro- and macrovascular abnormalities [7–11]. As already mentioned, T2DM is a chronic metabolic disorder in which mitochondria play a key role as the most common source of ROS (reactive oxygen species) production. There is an important association between high blood glucose levels and the induction of oxidative stress and inflammation on the one hand, and the development of insulin resistance, T2DM, and its complications on the other. Hyperglycemia stimulates the production of ROS, the increased oxidative stress induces inflammation, and inflammation, in turn, increases the generation of ROS.
Although many features of type 2 diabetes mellitus are not yet clear, it has been shown that enhanced production of ROS and inflammatory mediators plays a central role in the development and progression of T2DM [12, 13]. Therefore, one strategy for T2DM therapy is controlling ROS production and using medication that improves insulin resistance.

There are a number of different types of diabetes drugs, some of which act in similar ways, such as improving insulin resistance, controlling glucose levels, and reducing oxidative stress and inflammation [14–16]. However, most of these antidiabetic drugs have limited efficacy and many undesirable side effects, such as drug resistance, weight gain, edema, and high rates of secondary failure [7, 10]. Therefore, there is a need for the further development of low-toxicity, effective, and economical antidiabetic agents, and for controlling T2DM complications, especially during long-term medication. In recent decades, the use of plant-based drugs to treat many diseases, instead of chemical agents, has become popular worldwide due to their minimal toxicity, easy access, cost-effectiveness, and ease of use [17, 18]. In this regard, hypoglycemic, antihyperlipidemic, antioxidant, and anti-inflammatory effects of many monoterpenes have been reported in several experimental studies [19–21]. Monoterpenes belong to the terpenoid group of secondary plant metabolites, which are synthesized through the isoprenoid pathway. P-cymene is an aromatic organic monoterpene isolated from more than 100 different medicinal plants. A wide range of therapeutic properties of p-cymene has been demonstrated, including antioxidant, anti-inflammatory, antinociceptive, anxiolytic, anticancer, and antimicrobial effects [22–24].

With respect to the above description, the present study aims to investigate the effects of p-cymene on the prevention and treatment of T2DM in a rat model of diabetes.
We found that p-cymene can improve serum levels of Glu, total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein (LDL), alkaline phosphatase (ALP), alanine aminotransferase (ALT or SGPT), aspartate aminotransferase (AST or SGOT), malondialdehyde (MDA), and very-low-density lipoproteins (VLDL) in diabetic rats. We also analyzed the expression of mTOR, Akt, and phosphorylated Akt (phospho-Akt) protein. The mechanistic target of rapamycin (mTOR), also known as the mammalian target of rapamycin, is a serine/threonine protein kinase in the phosphatidylinositol 3-kinase- (PI3K-) related kinase (PIKK) family. It is regulated by the nutrient state, glucose, amino acids, and growth factors, and it plays an important role in the regulation of cell growth, cell proliferation, cell survival, protein synthesis, and the metabolism of lipids and glucose [25]. Dysregulation of mTOR signaling is involved in several diseases, such as obesity, type 2 diabetes mellitus, cancer, and neurodegenerative disorders [25]. Experimental studies have suggested that obesity and overnutrition activate mTOR in various tissues, including human islets [26, 27]. Akt, or protein kinase B (PKB), is also a serine/threonine protein kinase that regulates cellular processes including glucose metabolism, cell survival, and cell proliferation. Some reports have demonstrated that the Akt signaling pathway is associated with the pathophysiological processes of diabetes mellitus and its complications [28].

## 2. Methods

### 2.1. Experimental Animals

Fifty-four male Wistar rats weighing between 200 and 250 g were purchased from the Razi laboratory animal center, Islamic Azad University, Science and Research Branch, Tehran, Iran.
Animals were kept 4 per cage in the animal house of the Science and Research Branch of Islamic Azad University, under standard laboratory conditions of controlled room temperature (22°C) and humidity (50 ± 10%), with 12-hour light and dark cycles before the experiment. Throughout the study period, the rats were allowed free access to water and standard chow. All efforts were made to avoid animal pain and suffering, in accordance with the guidelines for the Care and Use of Laboratory Animals (Committee for the Update of the Guide for the Care and Use of Laboratory Animals, 1996). Applications were approved by the Animal Care and Use Committee of Islamic Azad University, Science and Research Branch. The animals were familiarized with the laboratory conditions for one week prior to starting the procedures. Diabetes was induced by a single intraperitoneal (i.p.) injection of 55 mg/kg streptozotocin (STZ) (Sigma-Aldrich, USA) freshly dissolved in 0.1 M sodium citrate buffer, pH 4.5 [29]. After 48 hours, blood samples were collected from the rat tail vein, and whole blood sugar was measured using a glucometer (Cera Pet, South Korea). Animals with glucose levels greater than 300 mg/dl were considered diabetic and selected for further study. P-cymene was used at doses of 25, 50, and 100 mg/kg, according to previous studies [22, 30].

The animals were randomly divided into nine groups (n = 6) as follows: (1) the control group (C), fed a standard diet and water ad libitum and receiving no treatment; (2) the sham operation group (D), diabetic rats given a single i.p. injection of 55 mg/kg STZ; (3) the metformin group (Met), diabetic rats given oral metformin as a positive control (55 mg/kg, Osve Pharmaceutical Co, Iran) for 4 weeks; (4) control-25 (C25), control rats treated with p-cymene (25 mg/kg, Sigma Chemical Co, St. Louis, MO, USA) for 4 weeks; (5) control-50 (C50), control rats treated with p-cymene (50 mg/kg) for 4 weeks; (6) control-100 (C100), control rats treated with p-cymene (100 mg/kg) for 4 weeks; (7) diabet-25 (D25), diabetic rats treated with p-cymene (25 mg/kg) for 4 weeks; (8) diabet-50 (D50), diabetic rats treated with p-cymene (50 mg/kg) for 4 weeks; and (9) diabet-100 (D100), diabetic rats treated with p-cymene (100 mg/kg) for 4 weeks. The supplemented p-cymene was given by oral gavage. The body weights of the animals were recorded at the beginning of the experiment and at the end of the experimental period.

### 2.2. Blood Sampling and Biochemical Analysis

Serum samples and liver tissues were collected at the end of the 28th day. For serum collection, animals were anesthetized with ketamine (0.8 mg/kg, i.p.) and xylazine (0.5 mg/kg, i.p.). Blood samples were then collected from the left ventricle of the heart and kept at room temperature for 2 hours. Serum was obtained through centrifugation at 2500 g for 5 min and then maintained at −20°C until subsequent analysis. The concentrations of metabolic parameters such as TC, TG, HDL, LDL, ALP, ALT, AST, and VLDL were estimated by commercially available animal spectrophotometric assay kits (Pars Azmun Company, Tehran, Iran) according to the manufacturer's recommendations.

The method of Placer et al. was used to determine the amount of malondialdehyde (MDA) as an index of oxidative damage [31]. The assay is based on the reaction of MDA with thiobarbituric acid (TBA) and the production of a red compound with a maximum absorption peak at 532 nm. For this purpose, liver tissues were homogenized 1 : 10 with cold phosphate-buffered saline and centrifuged (14000 g, 15 min, 4°C). The resulting supernatants were used for the assay of MDA.
Briefly, 250 μl of homogenate was added to 500 μl of a solution containing 15% trichloroacetic acid, 0.375% thiobarbituric acid, and 0.25 N hydrochloric acid and placed in a boiling water bath for 10 min. The samples were cooled and centrifuged (3000 rpm, 10 min). Then, 200 μl of the resulting supernatant was removed and its absorbance quantified at 532 nm. Data were expressed as μmol/l.

### 2.3. Histological Procedures, Dithizone Staining, and Immunohistochemistry

All animals were anesthetized at the end of the 4th week, and pancreatic tissues were excised and processed by standard histological procedures. Samples were trimmed free of fat, preserved in 10% paraformaldehyde, and embedded in paraffin after standard processing of dehydration in increasing concentrations of ethanol and clearing in xylene. The paraffin blocks were then sectioned on a rotary microtome into 5-6 μm thick sections, which were mounted on glass slides for dithizone (DTZ) staining. Dithizone is a specific stain for beta-cells in pancreatic islets: it is a zinc-binding agent that selectively stains the islets' beta-cells, which contain large amounts of zinc ions, and therefore turns the islets red. Dithizone (Sigma-Aldrich) solution was prepared by dissolving 100 mg of dithizone in 10 ml dimethylsulfoxide (DMSO, Sigma-Aldrich). After 10 minutes, 40 ml PBS was added; the solution was filtered through a 0.2 μm filter and used to stain tissue at 37°C for 15 minutes, and the stained cells were observed under a light microscope [32].

Immunohistochemistry was performed according to the standard protocol described elsewhere [33–35]. Briefly, sections of pancreatic tissue were cut five microns thick and placed on APES ((3-aminopropyl)triethoxysilane)-coated microscope slides for 1 hour at 37°C.
Later, the tissues were deparaffinized with 2 changes of xylene, 5 minutes each, and rehydrated in 2 changes of 100% ethanol for 3 minutes each and in 95% and 80% ethanol for 1 minute each. Then, they were rinsed in distilled water for 5 minutes. For antigen retrieval, slides were boiled in antigen retrieval solution (TBS 1X, Sigma-T5912, 20 min) and washed with PBS (3 times, 5 min, Sigma-P4417). Slides were rinsed in 0.3% Triton for 30 minutes to permeabilize the cell membrane. Following a PBS wash, the slides were blocked with 10% normal goat serum for 30 minutes. For detection of Akt, phospho-Akt, and mTOR protein, slides were incubated overnight at 4°C with primary mouse monoclonal anti-Akt1 antibody (1 : 100; sc-5298, Santa Cruz Biotechnology, Inc., United States), primary rabbit anti-phospho-Akt antibody (1 : 100; anti-phospho-Akt (Ser473), #4060, Cell Signaling Technology, Danvers, MA, USA), and mouse monoclonal anti-mTOR antibody (1 : 100; sc-517464, Santa Cruz Biotechnology, Inc., United States), respectively. The next day, the sections were washed with PBS for 4 × 5 minutes. Slides were then incubated with secondary antibody for 90 minutes at 37°C in the dark (for Akt and mTOR: FITC goat anti-mouse IgG antibody, 1 : 150, ab6785, Abcam; for phospho-Akt: FITC goat anti-rabbit IgG (H + L) antibody, 1 : 150, orb688925, Biorbyt Ltd., USA). After four washes, DAPI (Sigma-D9542) was added to the slides and washed off with PBS after 20 min. Images were acquired with a Labomed microscope (USA). Quantification of the positive area was performed using ImageJ software (National Institutes of Health, USA; https://imagej.nih.gov/ij).

### 2.4. Statistical Analysis

Data are presented as means ± SEM. One-way ANOVA with Tukey's post hoc test was used for comparison among groups. All data were analyzed using IBM SPSS Statistics for Windows, version 20 (IBM Corp., Armonk, NY, USA). The charts were drawn using Microsoft Excel 2010. p<0.05 was set as significant.
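The positive-area quantification described above was performed in ImageJ; the measurement itself reduces to thresholding and computing an area fraction, which can be sketched in a few lines of NumPy. The synthetic image and the threshold of 128 below are illustrative assumptions, not values from the study:

```python
import numpy as np

def positive_area_fraction(image: np.ndarray, threshold: int) -> float:
    """Fraction of pixels brighter than the threshold (ImageJ-style area fraction)."""
    return float((image > threshold).mean())

# Synthetic 8-bit grayscale field: dark background with one bright "stained" patch
img = np.zeros((100, 100), dtype=np.uint8)
img[40:60, 40:60] = 200  # a 20 x 20 pixel positive region

print(positive_area_fraction(img, 128))  # 0.04 (4% of the field)
```

In practice the threshold would be chosen against negative-control slides rather than fixed a priori.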
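The analysis scheme above (one-way ANOVA followed by Tukey's post hoc test, values reported as mean ± SEM) was run in SPSS; the same pipeline can be sketched in Python with `scipy` and `statsmodels`. The group values below are randomly generated placeholders, not the study's data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical glucose readings (mg/dl), n = 6 rats per group -- invented
# placeholder numbers, not the study's measurements.
groups = {
    "C": rng.normal(156, 12, 6),    # control
    "D": rng.normal(317, 18, 6),    # STZ-diabetic sham
    "Met": rng.normal(161, 19, 6),  # diabetic + metformin
}

# Mean +/- SEM, as reported in the tables
for name, vals in groups.items():
    print(f"{name}: {vals.mean():.2f} +/- {stats.sem(vals):.2f}")

# Omnibus one-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")

# Tukey's HSD post hoc test identifies which pairs of groups differ
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

With group means this far apart the omnibus test is significant and Tukey's HSD flags the diabetic sham group against both others, mirroring the D-versus-C and D-versus-Met comparisons marked in the tables.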
## 3. Results

### 3.1. Biochemical Results

Table 1 summarizes the serum levels of Glu, TG, TC, LDL, HDL, and VLDL in all animal groups. As indicated in Table 1, injection of STZ significantly increased the serum levels of Glu, TG, TC, and VLDL in the sham group (D) in comparison to control animals. Administration of metformin or p-cymene improved serum Glu, TG, TC, and VLDL values (in the D25 and D100 groups for Glu; the D25, D50, and D100 groups for TG; the D25 and D50 groups for TC; and the D25, D50, and D100 groups for VLDL) when compared to the sham ones. Changes in Glu, TG, TC, and VLDL levels were most similar to the control group when diabetic animals were treated with p-cymene at doses of 25, 100, 25, and 25 or 50 mg/kg, respectively. Serum LDL levels increased in the sham operation group, but the difference between the control and sham groups was not significant (p≥0.05). Administration of metformin or p-cymene significantly decreased the blood level of LDL in the C25, C100, and D100 groups (p≤0.05) in comparison to the sham group. Serum HDL levels decreased in the sham operation group, but again the difference between the control and sham groups was not significant (p≥0.05). Administration of metformin or p-cymene significantly increased the blood level of HDL in all groups (p≤0.001) in comparison to the sham group, and in the C25, C50, and C100 groups (p≤0.001) when compared to control ones. P-cymene treatment had no effect on serum levels of Glu, TG, TC, and LDL in the C25, C50, and C100 groups in comparison to controls (except for Glu in the C50 group).
In contrast, p-cymene treatment increased the levels of VLDL in C25, C50, and C100 animals when compared to control rats, but these increases were significantly lower compared to the VLDL level in the sham group.

Table 1. Biochemical results of the diabetic rats and controls.

| Group | Glu (mg/dl) | TG (mg/dl) | TC (mg/dl) | LDL (mg/dl) | HDL (mg/dl) | VLDL (mg/dl) |
|---|---|---|---|---|---|---|
| Control (C) | 156.50 ± 11.80### | 79.83 ± 3.19+++# | 61.67 ± 2.22+## | 18.67 ± 2.22 | 34.33 ± 1.89 | 7.55 ± 0.58### |
| Sham (D) | 317.00 ± 18.01∗∗∗+++ | 101.33 ± 5.50∗+++ | 78.83 ± 3.18∗∗+++ | 22.67 ± 1.33++ | 28.50 ± 1.75+++ | 23.00 ± 1.53∗∗∗+++ |
| Met | 161.00 ± 18.88### | 36.67 ± 2.65∗∗∗### | 48.50 ± 3.92∗### | 15.50 ± 1.15## | 44.33 ± 2.50### | 7.62 ± 0.70### |
| C25 | 173.83 ± 16.09### | 81.33 ± 6.16+++ | 57.83 ± 2.61### | 16.17 ± 0.54# | 53.17 ± 4.80∗∗∗### | 14.60 ± 0.61∗∗∗###+++ |
| C50 | 232.00 ± 26.97 | 87.17 ± 2.68+++ | 71.50 ± 2.17+++ | 18.83 ± 1.25 | 60.83 ± 1.08∗∗∗###+++ | 16.33 ± 1.22∗∗∗###+++ |
| C100 | 191.17 ± 15.37### | 67.67 ± 5.23+++### | 66.50 ± 1.95+++ | 16.83 ± 1.30# | 52.5 ± 2.03∗∗∗### | 12.38 ± 0.73∗∗###++ |
| D25 | 207.67 ± 13.53## | 37.00 ± 2.66∗∗∗### | 58.00 ± 2.35### | 18.50 ± 1.26 | 44.0 ± 1.37## | 7.80 ± 0.60### |
| D50 | 233.00 ± 20.23 | 38.00 ± 2.79∗∗∗### | 65.83 ± 3.46#++ | 18.67 ± 0.33 | 47.67 ± 2.08∗∗### | 7.32 ± 0.52### |
| D100 | 205.50 ± 29.34## | 64.83 ± 8.09++### | 68.17 ± 2.83+++ | 17.00 ± 0.86# | 45.83 ± 2.52∗### | 13.03 ± 0.80∗∗###++ |

Values are presented as mean ± SEM (n = 6/group). ∗Statistically different from the control rats (p<0.05), ∗∗statistically different from the control rats (p<0.01), ∗∗∗statistically different from the control rats (p<0.001); #statistically different from the sham rats (p<0.05), ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001); +statistically different from the Met rats (p<0.05), ++statistically different from the Met rats (p<0.01), +++statistically different from the Met rats (p<0.001).
The sham (D): diabetic rats; Met: diabetic rats + metformin (55 mg/kg); C25: control rats + p-cymene (25 mg/kg); C50: control rats + p-cymene (50 mg/kg); C100: control rats + p-cymene (100 mg/kg); D25: diabetic rats + p-cymene (25 mg/kg); D50: diabetic rats + p-cymene (50 mg/kg); D100: diabetic rats + p-cymene (100 mg/kg). TG: triglycerides; TC: total cholesterol; LDL: low-density lipoprotein; HDL: high-density lipoprotein; VLDL: very low-density lipoprotein.

The serum levels of AST, ALT, and ALP are presented in Table 2. We observed that injection of STZ significantly increased (p≤0.001) the serum levels of AST, ALT, and ALP in the sham group in comparison to control animals. Administration of metformin or p-cymene improved the serum levels of these factors in diabetic rats (p≤0.001) when compared to the sham group (except for AST in the D100 group; p≥0.05). Changes in AST, ALT, and ALP levels were most similar to control animals when diabetic animals were treated with p-cymene at doses of 25, 25, and 50 mg/kg, respectively. P-cymene treatment had no effect (p≥0.05) on serum levels of AST and ALT in the C25, C50, and C100 groups in comparison to controls (except for AST in the C25 group; p≤0.01). In contrast, p-cymene treatment increased the levels of ALP in C25, C50, and C100 animals when compared to control rats (p≤0.001), but these increases were significantly lower compared to the ALP levels in the sham group (p≤0.001).

Table 2. Results of the AST, ALT, ALP, and MDA in diabetic rats and controls.
| Group | AST (mg/dl) | ALT (mg/dl) | ALP (mg/dl) | MDA (μmol/l) |
|---|---|---|---|---|
| Control (C) | 81.50 ± 2.53### | 66.17 ± 4.10### | 473.80 ± 26.11### | 5.17 ± 0.40### |
| Sham (D) | 110.67 ± 1.48∗∗∗+++ | 97.17 ± 1.17∗∗∗+++ | 1007.50 ± 16.01∗∗∗+++ | 10.17 ± 0.48∗∗∗+++ |
| Met | 92.00 ± 2.83### | 57.00 ± 2.84### | 391.67 ± 14.40### | 5.25 ± 0.48### |
| C25 | 95.25 ± 1.28∗∗### | 65.33 ± 3.38### | 777.75 ± 29.58∗∗∗##+++ | 5.83 ± 0.31### |
| C50 | 92.17 ± 2.77### | 72.83 ± 4.73### | 741.20 ± 27.94∗∗∗###+++ | 5.33 ± 0.21### |
| C100 | 91.33 ± 2.09### | 56.50 ± 4.51### | 730.50 ± 36.38∗∗∗###+++ | 5.00 ± 0.37### |
| D25 | 87.00 ± 2.32### | 66.00 ± 6.52### | 500.33 ± 52.85### | 6.67 ± 0.33### |
| D50 | 94.50 ± 1.80∗∗### | 59.00 ± 2.00### | 457.50 ± 48.08### | 7.00 ± 0.37∗###+ |
| D100 | 102.33 ± 2.08∗∗∗+ | 58.83 ± 1.14### | 523.20 ± 33.68### | 6.63 ± 0.33### |

Values are presented as mean ± SEM (n = 6/group). ∗∗Statistically different from the control rats (p<0.01), ∗∗∗statistically different from the control rats (p<0.001); ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001); +statistically different from the Met rats (p<0.05), +++statistically different from the Met rats (p<0.001). The sham (D): diabetic rats; Met: diabetic rats + metformin (55 mg/kg); C25: control rats + p-cymene (25 mg/kg); C50: control rats + p-cymene (50 mg/kg); C100: control rats + p-cymene (100 mg/kg); D25: diabetic rats + p-cymene (25 mg/kg); D50: diabetic rats + p-cymene (50 mg/kg); D100: diabetic rats + p-cymene (100 mg/kg). AST: aspartate aminotransferase; ALT: alanine aminotransferase; ALP: alkaline phosphatase; MDA: malondialdehyde.

The levels of MDA in liver tissues are presented in Table 2. Injection of STZ markedly increased (p≤0.001) the amount of MDA in the sham group in comparison to the control ones. Administration of metformin or p-cymene improved the amount of MDA in diabetic rats (p≤0.001) when compared to the sham group. No difference was observed among the D25, D50, and D100 groups.

### 3.2. Dithizone Staining of Pancreatic Tissues

The zinc-binding substance diphenylthiocarbazone (dithizone, DTZ) was used to stain pancreatic tissues. Previous studies have demonstrated that DTZ staining does not adversely affect islet function either in vitro or in vivo. Figure 1 shows the pancreatic tissues stained with the specific DTZ agent. The red (DTZ-positive) regions indicate the beta-cells in islets of Langerhans; islets showed normal morphology without significant alterations in the control, C25, C50, and C100 groups. DTZ staining showed loss of beta-cell mass in the Langerhans islets of the diabetic group (the sham or D). Administration of metformin or p-cymene improved the beta-cell mass of the Langerhans islets in diabetic rats (Met, D50, and D100) in a dose-dependent manner when compared to the sham group. Therefore, it seems that administration of p-cymene (100 mg/kg) may improve the changes of the Langerhans islets in diabetic rats.

Figure 1. Dithizone staining of pancreatic tissues in all animal groups. The red (DTZ-positive) regions indicate the beta-cells in islets of Langerhans. Administration of metformin or p-cymene markedly increased the beta-cell mass in the Met, D50, and D100 groups in a dose-dependent manner. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 100 μm.

### 3.3. Immunohistochemical Analysis of Akt, phospho-Akt, and mTOR

Figures 2–5 show the levels of Akt, phospho-Akt, and mTOR protein in the pancreas of rats in all animal groups.
As indicated in Figure 5, the levels of Akt, phospho-Akt, and mTOR protein in the control groups (control, C25, C50, and C100) were markedly higher than those in the diabetic groups (the sham, Met, D25, D50, and D100). Injection of STZ significantly reduced the expression of Akt, phospho-Akt, and mTOR proteins in the sham group in comparison to control animals. The data showed that the decrease in phospho-Akt in the diabetic group (the sham or D) was much greater than the decrease in Akt. Administration of metformin or p-cymene (100 mg/kg) increased the expression of Akt, phospho-Akt, and mTOR in the Met and D100 groups, respectively. The increase in phospho-Akt was much greater than the increase in Akt. There was no significant difference in Akt, phospho-Akt, and mTOR expression among the sham, D25, and D50 groups.

Figure 2. Fluorescence immunocytochemistry analysis of Akt. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 50 μm.

Figure 3. Fluorescence immunocytochemistry analysis of phospho-Akt. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg), and pAkt: phospho-Akt. Magnification 400x. Scale bar 100 μm.

Figure 4. Fluorescence immunocytochemistry analysis of mTOR.
C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 50 μm.

Figure 5. The expression of Akt, pAkt (phospho-Akt), and mTOR protein. ∗Statistically different from the control rats (p<0.05), ∗∗∗statistically different from the control rats (p<0.001), ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001), +statistically different from the Met rats (p<0.05), ++statistically different from the Met rats (p<0.01), +++statistically different from the Met rats (p<0.001). C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg).
It seems that the serum levels of LDL increased in the sham operation group but there was no significant difference between the control group and the sham group (p≥0.05). Administration of metformin or p-cymene significantly decreased the blood level of LDL in C25, C100, and D100 groups (p≤0.05) in comparison to the sham group. The serum levels of HDL decreased in the sham operation group but there was no significant difference between the control group and the sham group (p≥0.05). Administration of metformin or p-cymene significantly increased the blood level of HDL in all groups (p≤0.001) in comparison to the sham group, and in C25, C50, and C100 groups (p≤0.001) when compared to control ones. P-cymene treatment did not have any effects on serum levels of Glu, TG, TC, and LDL in C25, C50, and C100 groups in comparison to controls (exception for Glu in the C50 group). In contrast, p-cymene treatment increased the levels of VLDL in C25, C50, and C100 animals when compared to control rats, but these increases were significantly lower compared to the VLDL level in the sham group.Table 1 Biochemical results of the diabetes rats and controls. 
GroupGlu (mg/dl)TG (mg/dl)TC (mg/dl)LDL (mg/dl)HDL (mg/dl)VLDL (mg/dl)Control (C)156.50 ± 11.80###79.83 ± 3.19+++#61.67 ± 2.22+##18.67 ± 2.2234.33 ± 1.897.55 ± 0.58###Sham (D)317.00 ± 18.01∗∗∗+++101.33 ± 5.50∗+++78.83 ± 3.18∗∗+++22.67 ± 1.33++28.50 ± 1.75+++23.00 ± 1.53∗∗∗+++Met161.00 ± 18.88###36.67 ± 2.65∗∗∗###48.50 ± 3.92∗###15.50 ± 1.15##44.33 ± 2.50###7.62 ± 0.70###C25173.83 ± 16.09###81.33 ± 6.16+++57.83 ± 2.61###16.17 ± 0.54#53.17 ± 4.80∗∗∗###14.60 ± 0.61∗∗∗###+++C50232.00 ± 26.9787.17 ± 2.68+++71.50 ± 2.17+++18.83 ± 1.2560.83 ± 1.08∗∗∗###+++16.33 ± 1.22∗∗∗###+++C100191.17 ± 15.37###67.67 ± 5.23+++###66.50 ± 1.95+++16.83 ± 1.30#52.5 ± 2.03∗∗∗###12.38 ± 0.73∗∗###++D25207.67 ± 13.53##37.00 ± 2.66∗∗∗###58.00 ± 2.35###18.50 ± 1.2644.0 ± 1.37##7.80 ± 0.60###D50233.00 ± 20.2338.00 ± 2.79∗∗∗###65.83 ± 3.46#++18.67 ± 0.3347.67 ± 2.08∗∗###7.32 ± 0.52###D100205.50 ± 29.34##64.83 ± 8.09++###68.17 ± 2.83+++17.00 ± 0.86#45.83 ± 2.52∗###13.03 ± 0.80∗∗###++Values are presented as mean ± SEM(n = 6/each group).∗Statistically different from the control rats (p<0.05), ∗∗statistically different from the control rats (c), ∗∗∗statistically different from the control rats (p<0.001), #statistically different from the sham rats (p<0.05), ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001), +Statistically different from the Met rats (p<0.05), ++statistically different from the Met rats (p<0.01), +++statistically different from the Met rats (p<0.001). The sham (D): diabetic rats, MET: diabetic rats + metformin(55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg). 
TG: Triglyceride, TC: Total Cholesterol, LDL: low-density lipoprotein, HDL: high-density lipoprotein, VLDL: very low-density lipoproteins.The serum levels of AST, ALT, and ALP are presented in Table2. We observed that injection of STZ significantly increased (p≤0.001) the serum level of AST, ALT, and ALP in the sham group in comparison to control animals. Administration of metformin or p-cymene improved the serum level of these factors in diabetic rats (p≤0.001) when compared to the sham group (except for AST in the D100 group; p≥0.05). Changes in AST, ALT, and ALP levels were most similar to control animals when treating diabetic animals with p-cymene at the dose of 25, 25, and 50 mg/kg, respectively. P-cymene treatment did not have any effects (p≥0.05) on serum levels of AST and ALT in C25, C50, and C100 groups in comparison to controls (except for AST in C25 group; p≤0.01). In contrast, p-cymene treatment increased the levels of ALP in C25, C50, and C100 animals when compared to control rats (p≤0.001), but these increases were significantly lower compared to the ALP levels in the sham group (p≤0.001).Table 2 Results of the AST, ALT, ALP, and MDA in diabetes rats and controls. GroupAST (mg/dl)ALT (mg/dl)ALP (mg/dl)MDA (μmol/l)Control (C)81.50 ± 2.53###66.17 ± 4.10###473.80 ± 26.11###5.17 ± 0.40###Sham (D)110.67 ± 1.48∗∗∗+++97.17 ± 1.17∗∗∗+++1007.50 ± 16.01∗∗∗+++10.17 ± 0.48∗∗∗+++Met92.00 ± 2.83###57.00 ± 2.84###391.67 ± 14.40###5.25 ± 0.48###C2595.25 ± 1.28∗∗###65.33 ± 3.38###777.75 ± 29.58∗∗∗##+++5.83 ± 0.31###C5092.17 ± 2.77###72.83 ± 4.73###741.20 ± 27.94∗∗∗###+++5.33 ± 0.21###C10091.33 ± 2.09###56.50 ± 4.51###730.50 ± 36.38∗∗∗###+++5.00 ± 0.37###D2587.00 ± 2.32###66.00 ± 6.52###500.33 ± 52.85###6.67 ± 0.33###D5094.50 ± 1.80∗∗###59.00 ± 2.00###457.50 ± 48.08###7.00 ± 0.37∗###+D100102.33 ± 2.08∗∗∗+58.83 ± 1.14###523.20 ± 33.68###6.63 ± 0.33###Values are presented as mean ± SEM (n = 6/each group). 
∗∗Statistically different from the control rats (p<0.01), ∗∗∗statistically different from the control rats (p<0.001); ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001); +Statistically different from the Met rats (p<0.05), +++statistically different from the Met rats (p<0.001). The sham (D): diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg). AST: aspartate aminotransferase, ALT: alanine aminotransferase, ALP: alkaline phosphatase, MDA: malondialdehyde.The levels of MDA in liver tissues are presented in Table2. Injection of STZ markedly increased (p≤0.001) the amount of MDA in the sham group in comparison to the control ones. Administration of metformin or p-cymene improved the amount of MDA in diabetic rats (p≤0.001) when compared to the sham group. No difference was observed between the three groups of D25, D50, and D100. ## 3.2. Dithizone Staining of Pancreatic Tissues Zinc-binding substance diphenylthiocarbazone (dithizone or DTZ) was used to stain pancreatic tissues. Previous studies have demonstrated that DTZ staining does not adversely affect islet function eitherin vitro or in vivo. Figure 1 shows the pancreatic tissues stained with the specific DTZ agent. The red or DTZ positive regions indicate the B-cells in islets of Langerhans with normal morphology and without significantly altering islets in the control, C25, C50, and C100 groups. DTZ staining showed loss of beta-cell mass of the Langerhans islets in the diabetic group (the sham or D). Administration of metformin or p-cymene improved the beta-cell mass of the Langerhans islets in diabetic rats (Met, D50, and D100) in a dose-dependent manner when compared to the sham group. 
Therefore, it seems that administration of p-cymene (100 mg/kg) may improve the changes of Langerhans islets in diabetic rats.Figure 1 Dithizone staining of pancreatic tissues in all animal groups. The red or DTZ positive regions indicate theB-cells in islets of Langerhans. Administration of metformin or p-cymene obviously increased the B-cell mass in Met, D50, and D100 groups in a dose-dependent manner. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x Scale bar 100 μm. ## 3.3. Immunohistochemical Analysis of Akt, phospho-Akt, and mTOR Figures2–5 show the level of Akt, phospho-Akt, and mTOR protein in the pancreas of rats in all animal groups, respectively. As indicated in Figure 5, the level of Akt, phospho-Akt, and mTOR protein in the control groups (control, C25, C50, and C100) was markedly higher than that in the diabetic groups (the sham, Met, D25, D50, and D100). Injection of STZ significantly reduced the expression of Akt, phospho-Akt, and mTOR proteins in the sham group in comparison to control animals. Data showed that the decrease in phospho-Akt in the diabetic group (the sham or D) was much greater than the decrease in Akt. Administration of metformin or p-cymene (100 mg/kg) increased Akt’s, phospho-Akt’s, and mTOR’s expression in Met and D100 groups, respectively. The increase in phospho-Akt was much greater than the increase in Akt. There was no significant difference in Akt’s, phospho-Akt’s, and mTOR’s expression between the sham, D25, and D50 groups.Figure 2 Fluorescence immunocytochemistry analysis of Akt. 
C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 50 μm.

Figure 3 Fluorescence immunocytochemistry analysis of phospho-Akt. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg), and pAkt: phospho-Akt. Magnification 400x. Scale bar 100 μm.

Figure 4 Fluorescence immunocytochemistry analysis of mTOR. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 50 μm.

Figure 5 The expression of Akt, pAkt (phospho-Akt), and mTOR protein. ∗Statistically different from the control rats (p<0.05), ∗∗∗statistically different from the control rats (p<0.001), ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001), +statistically different from the Met rats (p<0.05), ++statistically different from the Met rats (p<0.01), +++statistically different from the Met rats (p<0.001).
C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg).

## 4. Discussion

In the present study, we found that p-cymene ameliorates some pathophysiological features in a Wistar rat model of diabetes. The data have shown that p-cymene can improve serum levels of Glu, TC, TG, HDL-c, LDL, ALP, ALT, AST, MDA, and VLDL in diabetic rats. It also improved the expression of mTOR, Akt, and phospho-Akt protein in diabetic animals. Intraperitoneal injection of streptozotocin was used to develop experimental models of diabetes mellitus. Many studies have utilized streptozotocin for the induction of diabetes models in animals. This model is based on preferential necrosis of beta-cells by streptozotocin, resulting in hypoinsulinemia and hyperglycemia. Streptozotocin is an antimicrobial compound extracted from the bacterium Streptomyces achromogenes that selectively damages B-cells in pancreatic islets [36]. It is structurally similar to a glucose molecule; therefore, it can bind to the glucose transporter GLUT2 and enter the cell. Pancreatic beta-cells are rich in GLUT2 transporters, making them a selective target for streptozotocin [36]. In the present study, DTZ staining showed a loss of beta-cell mass in the Langerhans islets of the streptozotocin-induced diabetic group. Based on the obtained data, extensive pathological changes, such as hyperglycemia, hyperlipidemia, increased activities of liver enzymes, and increased oxidative stress, were observed in the diabetic group. The serum levels of TC, TG, and VLDL were increased in the diabetic rats in comparison to the control ones, indicating hyperlipidemia.
Hyperlipidemia is very common in subjects with diabetes mellitus and is one of the reasons for the high risk of coronary heart disease in these individuals [37]. Hyperlipidemia results from a decrease in the activity of lipoprotein lipases in patients with diabetes and hypoinsulinemia [38]. Increased AST, ALT, and ALP were also observed in the serum of the diabetic group in comparison to control rats. AST, ALT, and ALP act as markers of liver function, and their increase indicates liver injury [39]. Previous reports have shown that the serum levels of AST, ALT, and ALP increase in patients with diabetes, insulin resistance, and metabolic syndrome [40–42]. Because of the central role of the liver in glucose homeostasis, hypoinsulinemia affects the liver and leads to hepatic injury [43]. The mechanism by which streptozotocin induces liver damage is not well understood. Zafar et al. [44] showed elevated liver enzymes such as AST, ALT, and ALP following streptozotocin treatment. They also showed accumulation of lipid droplets, lymphocytic infiltration, and increased fibrous content in the liver of treated animals. They suggested that diabetic complications in the liver may be the result of changes in liver enzymes.

Analysis of liver tissues showed an increase in oxidative stress in STZ-induced diabetic rats. Many studies have revealed an increase in biomarkers of oxidative stress in patients with diabetes [7–11]. Diabetes is a chronic metabolic disorder in which mitochondria have a key role as the most common source of ROS production. There is an important association between high levels of glucose in the blood and the induction of oxidative stress [12, 13]. Therefore, one strategy for T2DM therapy is controlling ROS production. MDA is a known oxidative stress biomarker that results from lipid peroxidation of polyunsaturated fatty acids by ROS [45].
In the current study, injection of STZ markedly increased the amount of MDA in the sham rats, which indicated oxidative stress in the diabetic animals.

The expression of mTOR, Akt, and phospho-Akt proteins was also determined in the diabetic animals. The PI3K/Akt/mTOR pathway is crucial in the regulation of signal transduction and many cellular mechanisms, including survival, proliferation, growth, metabolism, angiogenesis, and metastasis, in both normal and pathological conditions. Dysregulation of this pathway is associated with many human disorders, including diabetes [46]. Activation of PI3K results in the recruitment of Akt to the cell membrane, its phosphorylation, and its activation. Phosphorylated Akt mediates the phosphorylation and activation of mTOR, which subsequently regulates growth and the metabolism of glucose and lipids [47, 48]. Among the various factors identified to enhance the PI3K/Akt pathway, insulin is a crucial activator [49]. Therefore, changes in insulin can affect Akt activity. Both decreased and increased Akt function have been reported in diabetes mellitus [28]. In the present study, the expression of Akt, phospho-Akt, and mTOR proteins was decreased in the pancreatic tissues of diabetic animals, which is in line with the results of Bathina and Das [50]. Bathina and Das examined the alteration of the PI3K/Akt/mTOR pathway in the brain of rats with streptozotocin-induced type 2 diabetes. They showed that oxidative stress and apoptosis of pancreatic beta-cells can be increased following treatment with streptozotocin. They also showed that streptozotocin can reduce the expression of phosphorylated Akt and phosphorylated mTOR in treated animals [50]. In the present work, the decrease in phospho-Akt in the diabetic group (the sham or D) was much greater than the decrease in Akt.
Therefore, streptozotocin may block signal transduction via dysregulation of the PI3K/Akt/mTOR pathway, which is subsequently associated with the pathophysiological processes of diabetes mellitus and its complications.

Here, we investigated the effects of p-cymene on the treatment of T2DM in a rat model of diabetes, and we compared its effects with metformin. P-cymene [1-methyl-4-(1-methylethyl)-benzene] is an aromatic organic monoterpene isolated from more than 100 various medicinal plants, which belong to the Thymus genus. A wide range of pharmaceutical properties of p-cymene has been demonstrated, including antioxidant, anti-inflammatory, antinociceptive, anxiolytic, anticancer, and antimicrobial effects [22–24]. Metformin is a first-line medicine to control high blood glucose in patients with type 2 diabetes mellitus. It reduces the amount of glucose absorbed from food and the amount of glucose produced by the liver. It also increases the response of the body to insulin [51]. In the present study, p-cymene was used at three doses: 25, 50, and 100 mg/kg. Administration of p-cymene in the current study ameliorated the adverse features induced by streptozotocin. P-cymene treatment resulted in a significant decrease in serum glucose levels of diabetic rats in a similar way to metformin, which is in line with the results of Lotfi et al. [30]. They showed that p-cymene and metformin, alone or in combination, can decrease blood glucose in mice fed a high-fat diet [30]. Similar results were reported in studies by Ghazanfar et al. [52] and Bayramoglu et al. [53], who suggested antidiabetic and blood-glucose-lowering properties of Artemisia amygdalina extracts and oregano oil in STZ-induced diabetic rats [52, 53]. One of the active components in these extracts is p-cymene. We also observed that glucose levels were increased by p-cymene treatment in the control rats, although overall not significantly.
The only significant effect on glucose levels in control rats was observed with 50 mg/kg p-cymene. A similar result was observed in the D50 group. Although p-cymene decreased the amount of glucose in the diabetic groups, this reduction was much greater in the D25 and D100 groups than in the D50 group. It seems that low and high doses of p-cymene can improve the glucose level in streptozotocin-induced diabetic rats, whereas the intermediate dose (50 mg/kg) has the opposite effect on the blood glucose level.

The lipid profile was also influenced by the p-cymene treatment. Data analyses have shown a positive effect of p-cymene on lipid factors, including TG, TC, and VLDL, in diabetic animals, which is in line with the results of Lotfi et al. [30], Ghazanfar et al. [52], and Bayramoglu et al. [53]. Changes in TG, TC, and VLDL levels were most similar to the control group when treating diabetic animals with p-cymene at doses of 100, 25, and 25 or 50 mg/kg, respectively. However, administration of p-cymene improved the serum values of TG, TC, and VLDL in all diabetic rats (except for TC in the D100 group) when compared to the sham ones. Therefore, it seems that p-cymene at doses of 25 and 50 mg/kg is much more effective than p-cymene at a dose of 100 mg/kg. Streptozotocin had no effect on blood levels of HDL; however, p-cymene increased blood HDL in all treated animals (C25, C50, C100, D50, and D100 groups). With respect to the effect of p-cymene on the lipid profile, since a decrease was observed in TG, TC, and VLDL in the treated groups in comparison to the sham group, and an increase was observed in HDL in the treated animals in comparison to the control ones, the hypolipidemic potential of p-cymene cannot be ruled out, but further experiments in other suitable animal models of diabetes are needed.

The liver enzyme profile was also influenced by p-cymene administration in a similar way to metformin.
Administration of metformin or p-cymene markedly decreased the levels of AST, ALT, and ALP in the blood of streptozotocin-induced diabetic animals (D25, D50, and D100 groups), except for AST in the D100 group. Therefore, it seems that p-cymene can prevent beta-cell destruction and can reduce liver injury in streptozotocin-induced diabetic rats. Similar results were also observed in studies by Lotfi et al. [30], Ghazanfar et al. [52], and Bayramoglu et al. [53]. Changes in AST, ALT, and ALP levels were most similar to control animals when treating diabetic animals with p-cymene at doses of 25, 25, and 50 mg/kg, respectively. Therefore, it seems that p-cymene at doses of 25 and 50 mg/kg is much more effective than p-cymene at a dose of 100 mg/kg. ALP levels were increased by p-cymene treatment in the control rats (C25, C50, and C100 groups), although this increase was much lower than in the sham rats. This observation may limit the administration of p-cymene in a healthy population, and additional studies are needed to fully explain it.

P-cymene treatment also decreased oxidative stress in diabetic animals. Administration of p-cymene significantly decreased MDA levels in the diabetic groups, indicating the antioxidant properties of p-cymene. The antioxidant activity of p-cymene was suggested by Oliveira et al. [22]. No difference was observed among the three doses of 25, 50, and 100 mg/kg of p-cymene.

The effect of p-cymene on the expression of Akt, phospho-Akt, and mTOR protein was also examined. Our experiments have shown a positive but small effect of metformin or p-cymene on the expression of Akt, phospho-Akt, and mTOR protein in streptozotocin-induced diabetic animals. In fact, the expression of Akt, phospho-Akt, and mTOR protein decreased in diabetic animals, but after metformin or p-cymene treatment, the amount of these proteins increased moderately. The increase in phospho-Akt was much greater than the increase in Akt.
In addition, this increase was much greater at the dose of 100 mg/kg of p-cymene, suggesting that p-cymene affects the Akt/mTOR signaling pathway in a dose-dependent manner. No difference was observed between the two doses of 25 and 50 mg/kg of p-cymene.

## 5. Conclusions

Overall, this study showed that hyperglycemia, hyperlipidemia, liver injury, oxidative stress, and suppression of the Akt/mTOR signaling pathway occur in streptozotocin-induced diabetic rats. Administration of p-cymene significantly prevented the progression of diabetes. It probably has promising antidiabetic potential, can reduce liver injury and oxidative stress, and can improve the Akt/mTOR signaling pathway. According to the results, p-cymene may be suggested for the control of diabetes in diabetic individuals. However, the effective dose, period of treatment, and interactions with other supplements must be investigated. Further studies are required to investigate the mechanism responsible for the antidiabetic characteristics of p-cymene. The antidiabetic effects of p-cymene are comparable with those of metformin, and it may be used as an adjunct treatment for diabetic patients.

---

*Source: 1015669-2022-04-26.xml*
# Therapeutic Effect of P-Cymene on Lipid Profile, Liver Enzyme, and Akt/mTOR Pathway in Streptozotocin-Induced Diabetes Mellitus in Wistar Rats

**Authors:** Maryam Arabloei Sani; Parichehreh Yaghmaei; Zahra Hajebrahimi; Nasim Hayati Roodbari

**Journal:** Journal of Obesity (2022)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1015669
---

## Abstract

Diabetes is a serious public health problem in low- and middle-income countries. There is a strong link between hyperglycemia, oxidative stress, inflammation, and the development of diabetes mellitus. PI3K/Akt/mTOR is the main signaling pathway of insulin for controlling lipid and glucose metabolism. P-cymene is an aromatic monoterpene with a wide range of therapeutic properties, including antioxidant and anti-inflammatory activity. In the present study, the antidiabetic effects of p-cymene were investigated. Diabetes was induced using streptozotocin in male Wistar rats. The effects of p-cymene and metformin were studied on levels of glucose (Glu), lipid profile, liver enzymes, oxidative stress, and the expression of Akt, phospho-Akt, and mTOR (mammalian target of rapamycin) proteins, using biochemical, histological, and immunohistochemical analyses. The data have shown that p-cymene can improve serum levels of Glu, total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), malondialdehyde (MDA), and the expression of mTOR, Akt, and phospho-Akt protein in diabetic animals. These results suggest that p-cymene has hypoglycemic, hypolipidemic, and antioxidant properties. It can regulate the Akt/mTOR pathway and reduce hepatic and pancreatic injury. It can be suggested for diabetes management, alone or together with metformin.

---

## Body

## 1. Background

Type 2 diabetes mellitus (T2DM), formerly called adult-onset diabetes, is the most common chronic metabolic disorder. It is characterized mainly by high glucose (Glu) concentrations in the blood, resulting from insulin resistance and/or relatively insufficient insulin secretion in peripheral tissues [1, 2].
Based on the reports, T2DM is the most frequent type of diabetes mellitus, accounting for 87% to 91% of all diabetes patients [3]. The World Health Organization estimated that 439 million people would have T2DM by the year 2030 [4]. As a multifactorial disease, T2DM development is caused by a combination of genetic and environmental factors, such as obesity, lack of physical activity, unhealthy diet, stress, cigarette smoking, and excessive alcohol consumption [5].

Extremely high blood glucose concentrations in people with T2DM can lead to serious, potentially life-threatening vascular complications, including atherosclerosis, retinopathy, neuropathy, nephropathy, and amputations [6]. A growing body of studies has revealed an increase in biomarkers of oxidative stress in patients with T2DM, especially in subjects with diabetes complications including micro- and macrovascular abnormalities [7–11]. As already mentioned, T2DM is a chronic metabolic disorder in which mitochondria have a key role as the most common source of ROS (reactive oxygen species) production. There is an important association between high levels of glucose in the blood and the induction of oxidative stress and inflammation on the one hand, and the development of insulin resistance, T2DM, and its complications on the other. Hyperglycemia stimulates the production of ROS, the increased oxidative stress induces inflammation, and inflammation, in turn, increases the generation of ROS. Although many features of type 2 diabetes mellitus are not yet clear, it has been revealed that enhanced production of ROS and inflammatory mediators has a central role in the development and progression of T2DM [12, 13].
Therefore, one strategy for T2DM therapy is controlling ROS production and using medication that improves insulin resistance.

There are a number of different types of diabetes drugs, some with similar modes of action, such as improving insulin resistance, controlling glucose levels, and reducing oxidative stress and inflammation [14–16]. However, most of these antidiabetic drugs have limited efficacy and many undesirable side effects, such as drug resistance, weight gain, edema, and high rates of secondary failure [7, 10]. Therefore, there is a need for further development of low-toxicity, effective, and economical antidiabetic agents for controlling T2DM and its complications, especially in long-term medication. In recent decades, the use of plant-based drugs instead of chemical agents has become popular worldwide for treating many diseases, due to their minimal toxicity, easy access, cost-effectiveness, and ease of use [17, 18]. In this regard, hypoglycemic, antihyperlipidemic, antioxidant, and anti-inflammatory effects of many monoterpenes have been reported in several experimental studies [19–21]. Monoterpenes belong to the terpenoid group of secondary plant metabolites, which are synthesized through the isoprenoid pathway. P-cymene is an aromatic organic monoterpene isolated from more than 100 various medicinal plants. A wide range of therapeutic properties of p-cymene has been demonstrated, including antioxidant, anti-inflammatory, antinociceptive, anxiolytic, anticancer, and antimicrobial effects [22–24].

With respect to the above description, the present study aims to investigate the effects of p-cymene on the prevention and treatment of T2DM in a rat model of diabetes.
We found that p-cymene can improve serum levels of Glu, total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein (LDL), alkaline phosphatase (ALP), alanine aminotransferase (ALT or SGPT), aspartate aminotransferase (AST or SGOT), malondialdehyde (MDA), and very-low-density lipoproteins (VLDL) in diabetic rats. We also analyzed the expression of mTOR, Akt, and phosphorylated Akt (phospho-Akt) protein. The mechanistic target of rapamycin (mTOR), also known as the mammalian target of rapamycin, is a serine/threonine protein kinase in the phosphatidylinositol 3-kinase- (PI3K-) related kinase (PIKK) family. It is regulated by the nutrient state, glucose, amino acids, and growth factors, and it plays an important role in the regulation of cell growth, cell proliferation, cell survival, protein synthesis, and the metabolism of lipids and glucose [25]. Dysregulation of mTOR signaling is involved in several diseases such as obesity, type 2 diabetes mellitus, cancer, and neurodegenerative disorders [25]. Experimental studies have suggested that obesity and overnutrition activate mTOR in various tissues, including human islets [26, 27]. Akt, or protein kinase B (PKB), is also a serine/threonine protein kinase that regulates cellular processes including glucose metabolism, cell survival, and cell proliferation. Some reports have demonstrated that the Akt signaling pathway is associated with the pathophysiological processes of diabetes mellitus and its complications [28].

## 2. Methods

### 2.1. Experimental Animals

Fifty-four male Wistar rats weighing between 200 and 250 g were purchased from Razi laboratory animal, Islamic Azad University, Science and Research Branch, Tehran, Iran.
Animals were kept 4 per cage in the animal house of the Science and Research Branch of Islamic Azad University, under standard laboratory conditions of controlled room temperature (22°C) and humidity (50 ± 10%) with 12-hour light and dark cycles before the experiment. Throughout the study period, the rats were allowed free access to water and standard chow. All efforts were made to avoid animal pain and suffering, in accordance with the guidelines for the Care and Use of Laboratory Animals (Committee for the Update of the Guide for the Care and Use of Laboratory Animals, 1996). Applications were approved by the Animal Care and Use Committee of Islamic Azad University, Science and Research Branch. The animals were familiarized with the laboratory conditions for one week prior to starting the procedures. Diabetes was induced by a single intraperitoneal (i.p.) injection of 55 mg/kg streptozotocin (STZ) (Sigma-Aldrich, USA) freshly dissolved in 0.1 M sodium citrate buffer, pH 4.5 [29]. After 48 hours, blood samples were collected from the rat tail vein, and whole blood sugar was measured using a glucometer (Cera pet, South Korea). Animals with glucose levels greater than 300 mg/dl were considered diabetic and selected for further study. P-cymene was used at doses of 25, 50, and 100 mg/kg, according to previous studies [22, 30].

Animals were randomly divided into nine groups (n = 6) as follows: (1) the control group (C), fed a standard diet and water ad libitum and receiving no treatment; (2) the sham operation group (D), diabetic rats with a single i.p. injection of 55 mg/kg STZ; (3) the metformin group (Met): diabetic rats with oral administration of metformin as a positive control (55 mg/kg, Osve Pharmaceutical Co, Iran) for 4 weeks; (4) control-25 (C25): control rats treated with p-cymene (25 mg/kg, Sigma Chemical Co, St.
Louis, MO, USA) for 4 weeks; (5) control-50 (C50): control rats treated with p-cymene (50 mg/kg) for 4 weeks; (6) control-100 (C100): control rats treated with p-cymene (100 mg/kg) for 4 weeks; (7) diabet-25 (D25): diabetic rats treated with p-cymene (25 mg/kg) for 4 weeks; (8) diabet-50 (D50): diabetic rats treated with p-cymene (50 mg/kg) for 4 weeks; and (9) diabet-100 (D100): diabetic rats treated with p-cymene (100 mg/kg) for 4 weeks. The supplemented p-cymene was given by oral gavage. The body weights of the animals were recorded at the beginning of the experiment and at the end of the experimental period.

### 2.2. Blood Sampling and Biochemical Analysis

Serum samples and liver tissues were collected at the end of the 28th day. For serum collection, animals were anesthetized with ketamine (0.8 mg/kg, i.p.) and xylazine (0.5 mg/kg, i.p.). Blood samples were then collected from the left ventricle of the heart and kept at room temperature for 2 hours. Serum was obtained through centrifugation at 2500 g for 5 min and then maintained at −20°C until subsequent analysis. The concentrations of metabolic parameters such as TC, TG, HDL, LDL, ALP, ALT, AST, and VLDL were estimated by commercially available animal spectrophotometric assay kits (Pars Azmun Company, Tehran, Iran) according to the manufacturer's recommendations.

The method of Placer et al. was used to determine the amount of malondialdehyde (MDA) as an index of oxidative damage [31]. The assay is based on the reaction of MDA with thiobarbituric acid (TBA) and the production of a red compound with a maximum absorption peak at 532 nm. For this purpose, liver tissues were homogenized 1 : 10 with cold phosphate-buffered saline and centrifuged (14000 g, 15 min, 4°C). The resulting supernatants were used for the assay of MDA.
Briefly, 250 μl of homogenate was added to 500 μl of a solution containing 15% trichloroacetic acid, 0.375% thiobarbituric acid, and 0.25 N hydrochloric acid and placed in a boiling water bath for 10 min. The samples were cooled and centrifuged (3000 rpm, 10 min). Then, 200 μl of the resulting supernatant was removed and quantified at 532 nm. Data were expressed as μmol/l.

### 2.3. Histological Procedures, Dithizone Staining, and Immunohistochemistry

All animals were anesthetized at the end of the 4th week, and pancreatic tissues were excised and processed by standard histological procedures. Samples were trimmed free of fat, preserved in 10% paraformaldehyde, and then embedded in paraffin after standard processing of dehydration in increasing concentrations of ethanol and clearing in xylene. Paraffin samples were then sectioned on a rotary microtome to obtain 5-6 μm thick sections, which were mounted on glass slides for dithizone (DTZ) staining. Dithizone is a specific stain for B-cells in pancreatic islets. It is a zinc-binding agent that selectively stains the islets' beta-cells and therefore turns the islets red; beta-cells contain large amounts of zinc ions. Dithizone (Sigma-Aldrich) solution was prepared by dissolving 100 mg of dithizone in 10 mL dimethylsulfoxide (DMSO, Sigma-Aldrich). After 10 minutes, 40 ml PBS was added, and the solution was filtered through a 0.2 μm filter and used to stain tissue at 37°C for 15 minutes; the stained cells were observed under a light microscope [32].

Immunohistochemistry was done according to the standard protocol, which has been described elsewhere [33–35]. Briefly, sections of pancreatic tissue were cut five microns thick and placed on APES ((3-aminopropyl) triethoxysilane) coated microscopic slides for 1 hour at 37°C.
Later, the tissues were deparaffinized with 2 changes of xylene, 5 minutes each, and rehydrated in 2 changes of 100% ethanol for 3 minutes each, then 95% and 80% ethanol for 1 minute each. They were then rinsed in distilled water for 5 minutes. For antigen retrieval, slides were boiled in antigen retrieval solution (TBS 1X, Sigma-T5912, 20 min) and washed with PBS (3 times, 5 min, Sigma-P4417). Slides were rinsed in 0.3% Triton for 30 minutes to permeabilize the cell membrane. Following a PBS wash, the slides were blocked with 10% normal goat serum for 30 minutes. For detection of Akt, phospho-Akt, and mTOR protein, slides were incubated with primary mouse monoclonal anti-Akt1 antibody (1 : 100; sc-5298, Santa Cruz Biotechnology, Inc., United States), primary rabbit anti-phospho-Akt antibody (1 : 100; anti-phospho-Akt (Ser473), #4060, Cell Signaling Technology, Danvers, MA, USA), and mouse monoclonal anti-mTOR antibody (1 : 100; sc-517464, Santa Cruz Biotechnology, Inc., United States) at 4°C overnight, respectively. The next day, the sections were washed with PBS for 4 × 5 minutes. Slides were then incubated with secondary antibody for 90 minutes at 37°C in the dark (for Akt and mTOR: FITC Goat Anti-Mouse (IgG) antibody, 1 : 150, ab6785, Abcam; for phospho-Akt: FITC Goat Anti-Rabbit IgG (H + L) antibody, 1 : 150, orb688925, Biorbyt Ltd., USA). After four washes, DAPI (Sigma-D9542) was added to the slides and washed off with PBS after 20 min. Images were observed using a Labomed microscope (USA). Quantification of the positive area was performed using ImageJ software (National Institutes of Health, USA; https://imagej.nih.gov/ij).

### 2.4. Statistical Analysis

The data are presented as means ± SEM. One-way ANOVA with Tukey's post hoc test was used for comparisons among groups. All data were analyzed using IBM SPSS Statistics for Windows, version 20 (IBM Corp., Armonk, NY, USA). The charts were drawn using Microsoft Excel 2010. p < 0.05 was set as significant.
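The group comparison in Section 2.4 rests on the one-way ANOVA F statistic, the ratio of between-group to within-group mean squares. The study used SPSS; the sketch below recomputes that statistic in plain Python on illustrative numbers (not data from the study) and omits Tukey's post hoc step:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square / within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: each group mean vs. the grand mean.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: each observation vs. its own group mean.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Three hypothetical groups of n = 3 each (not measurements from the paper):
f_stat = one_way_anova_f([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(round(f_stat, 2))  # 27.0
```

A large F relative to the F distribution with k − 1 and n − k degrees of freedom is what drives the p < 0.05 decision reported above.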
## 2.1. Experimental Animals Fifty-four male Wistar rats weighing between 200 and 250 g were purchased from Razi laboratory animal, Islamic Azad University, Science and Research Branch, Tehran, Iran. Animals were kept 4 per cage in the animal house of the Science and Research Branch of Islamic Azad University, under standard laboratory conditions of controlled room temperature (22°C) and humidity (50 ± 10%) with 12 hours of light and dark cycles before the experiment. Throughout the study period, the rat was allowed free access to water and standard chow. All efforts were made to avoid animal pain and suffering in accordance with the guidelines for the Care and Use of Laboratory Animals (Committee for the update of the guide for the Care and Use of Laboratory Animals, 1996). Applications were approved by the Animal Care and Use Committee of Islamic Azad University, Science and Research Branch. The animals were familiarized with the laboratory conditions for one week prior to starting the procedures. Diabetes was induced by a single intraperitoneal (i.p.) injection of 55 mg/kg streptozotocin (STZ) (Sigma-Aldrich, USA) freshly dissolved in 0.1 M sodium citrate buffer, pH 4.5 [29]. After 48 hours, blood samples were collected from the rat tail vein, and the amount of whole blood sugar was measured using a glucometer (Cera pet, South Korea). The animals with glucose levels greater than 300 mg/dl were considered diabetic and selected for further study. P-cymene was used at a dose of 25, 50, and 100, according to previous studies [22, 30].In fact, animals were randomly divided into nine groups (n = 6) as follows: (1) the control group (C) that were fed with a standard diet and water ad libitumand received no treatment; (2) the sham operation group (D) or diabetic rats with single i.p. 
injection of 55 mg/kg STZ; (3) Metformin group (Met): diabetic rats with oral administration of metformin as a positive control of diabetic rats (55 mg/kg, Osve Pharmaceutical Co, Iran) for 4 weeks; (4) control-25 (C25): control rats treated with p-cymene (25 mg/kg, Sigma Chemical Co, St. Louis, MO, USA) for 4 weeks; (5) control-50 (C50): control rats treated with p-cymene (50 mg/kg) for 4 weeks; (6) control-100 (C100): control rats treated with p-cymene (100 mg/kg) for 4 weeks; (7) diabet-25 (D25): diabetic rats treated with p-cymene (25 mg/kg) for 4 weeks; (8) diabet-50 (D50): diabetic rats treated with p-cymene (50 mg/kg) for 4 weeks; (9) diabet-100 (D100): diabetic rats treated with p-cymene (100 mg/kg) for 4 weeks. The supplemented p-cymene was given by oral gavage. The body weights of animals were recorded at the beginning of the experiment and at the end of the experimental period. ## 2.2. Blood Sampling and Biochemical Analysis Serum samples and liver tissues were collected at the end of the 28th day. For serum collecting, animals were anesthetized with ketamine (0.8 mg/kg, i.p.) and xylazine (0.5 mg/kg, i.p.). Blood samples were then collected from the left ventricle of the heart and kept at room temperature for 2 hours. Serum was obtained through centrifugation at2500g for 5 min and then maintained at −20°C until subsequent analysis. The concentration of metabolic parameters such as TC, TG, HDL, LDL, ALP, ALT, AST, and VLDL was estimated by commercially available animal spectrophotometric assay kits (Pars Azmun Company, Tehran, Iran) according to the manufacturer’s recommendations.The method of Placer et al. was used to determine the amount of malondialdehyde (MDA) as an index of oxidative damage [31]. The assay is based on the reaction of MDA with thiobarbituric acid (TBA) and the production of a red compound with a maximum absorption peak at 532 nm. 
For this purpose, liver tissues were homogenized 1:10 with cold phosphate-buffered saline and centrifuged (14000 g, 15 min, 4°C). The resulting supernatants were used for the MDA assay. Briefly, 250 μl of homogenate was added to 500 μl of a solution containing 15% trichloroacetic acid, 0.375% thiobarbituric acid, and 0.25 N hydrochloric acid and placed in a boiling water bath for 10 min. The samples were cooled and centrifuged (3000 rpm, 10 min). Then, 200 μl of the resulting supernatant was removed and quantified at 532 nm. Data were expressed as μmol/l.

## 2.3. Histological Procedures, Dithizone Staining, and Immunohistochemistry

All animals were anesthetized at the end of the 4th week, and pancreatic tissues were excised and processed by standard histological procedures. Samples were trimmed free of fat, preserved in 10% paraformaldehyde, and embedded in paraffin after standard processing of dehydration in increasing concentrations of ethanol and clearing in xylene. Paraffin samples were then sectioned on a rotary microtome to obtain 5-6 μm thick sections, which were mounted on glass slides for dithizone (DTZ) staining. Dithizone is a stain specific for beta-cells in pancreatic islets: it is a zinc-binding agent and, because beta-cells contain large amounts of zinc ions, it selectively stains them and turns the islets red. Dithizone (Sigma-Aldrich) solution was prepared by dissolving 100 mg of dithizone in 10 mL dimethylsulfoxide (DMSO, Sigma-Aldrich). After 10 minutes, 40 ml PBS was added; the solution was filtered through a 0.2 μm filter and used for staining tissue at 37°C for 15 minutes, and the stained cells were observed under a light microscope [32]. Immunohistochemistry was performed according to the standard protocol, described elsewhere [33–35].
Briefly, sections of pancreatic tissue were cut at five-micron thickness and placed on APES ((3-aminopropyl)triethoxysilane)-coated microscopic slides for 1 hour at 37°C. Later, the tissues were deparaffinized with 2 changes of xylene, 5 minutes each, and rehydrated in 2 changes of 100% ethanol for 3 minutes each, then 95% and 80% ethanol for 1 minute each. They were then rinsed in distilled water for 5 minutes. For antigen retrieval, slides were boiled in antigen retrieval solution (TBS 1X, Sigma-T5912, 20 min) and washed with PBS (3 times, 5 min, Sigma-P4417). Slides were rinsed in 0.3% Triton for 30 minutes to permeabilize the cell membrane. Following a PBS wash, the slides were blocked with 10% normal goat serum for 30 minutes. For detection of Akt, phospho-Akt, and mTOR protein, slides were incubated overnight at 4°C with primary mouse monoclonal anti-Akt1 antibody (1:100; sc-5298, Santa Cruz Biotechnology, Inc., United States), primary rabbit anti-phospho-Akt antibody (1:100; anti-phospho-Akt (Ser473), #4060, Cell Signaling Technology, Danvers, MA, USA), and mouse monoclonal anti-mTOR antibody (1:100; sc-517464, Santa Cruz Biotechnology, Inc., United States), respectively. The next day, the sections were washed with PBS for 4 × 5 minutes. Slides were then incubated with secondary antibody for 90 minutes at 37°C in the dark (for Akt and mTOR: FITC goat anti-mouse (IgG) antibody, 1:150, ab6785, Abcam; for phospho-Akt: FITC goat anti-rabbit IgG (H + L) antibody, 1:150, orb688925, Biorbyt Ltd., USA). After four washes, DAPI (Sigma-D9542) was added to the slides and washed off with PBS after 20 min. Images were observed using a Labomed microscope (USA). Quantification of the positive area was performed using ImageJ software (National Institute of Health, USA; https://imagej.nih.gov/ij).

## 2.4. Statistical Analysis

The data are presented as means ± SEM. One-way ANOVA with Tukey's post hoc test was used for comparison among groups.
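The comparison procedure described above (an omnibus one-way ANOVA followed by Tukey's pairwise post hoc test, run in SPSS in the study) can be sketched in Python. The three samples below are synthetic placeholder numbers loosely shaped like the group summary statistics, not the study's raw data; `scipy.stats.tukey_hsd` is available in recent SciPy versions.

```python
# Sketch of the statistical workflow: one-way ANOVA, then Tukey's HSD.
# The study used SPSS; scipy stands in here. Sample values are SYNTHETIC
# (n = 6 per group), drawn to resemble glucose-like readings in mg/dl.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
control = rng.normal(loc=156.5, scale=11.8, size=6)
sham    = rng.normal(loc=317.0, scale=18.0, size=6)
met     = rng.normal(loc=161.0, scale=18.9, size=6)

# Omnibus test: do the group means differ at all?
f_stat, p_anova = stats.f_oneway(control, sham, met)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

# Pairwise post hoc comparisons controlling the family-wise error rate
tukey = stats.tukey_hsd(control, sham, met)
p_control_vs_sham = tukey.pvalue[0, 1]
print(f"control vs sham (Tukey): p = {p_control_vs_sham:.2g}")
```

The omnibus test guards against inflating the type I error before the nine-group pairwise comparisons reported in the tables.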
All data were analyzed using IBM SPSS Statistics for Windows, version 20 (IBM Corp., Armonk, NY, USA). Charts were drawn using Microsoft Excel 2010. p < 0.05 was set as significant.

## 3. Results

### 3.1. Biochemical Results

Table 1 summarizes the serum levels of Glu, TG, TC, LDL, HDL, and VLDL in all animal groups. As indicated in Table 1, injection of STZ significantly increased the serum levels of Glu, TG, TC, and VLDL in the sham group (D) in comparison to control animals. Administration of metformin or p-cymene improved the serum values of Glu, TG, TC, and VLDL (in the D25 and D100 groups for Glu; the D25, D50, and D100 groups for TG; the D25 and D50 groups for TC; and the D25, D50, and D100 groups for VLDL) when compared to the sham ones. Changes in Glu, TG, TC, and VLDL levels were most similar to the control group when diabetic animals were treated with p-cymene at doses of 25, 100, 25, and 25 or 50 mg/kg, respectively. The serum levels of LDL appeared to increase in the sham operation group, but there was no significant difference between the control and sham groups (p≥0.05). Administration of metformin or p-cymene significantly decreased the blood level of LDL in the C25, C100, and D100 groups (p≤0.05) in comparison to the sham group. The serum levels of HDL decreased in the sham operation group, but there was no significant difference between the control and sham groups (p≥0.05). Administration of metformin or p-cymene significantly increased the blood level of HDL in all groups (p≤0.001) in comparison to the sham group, and in the C25, C50, and C100 groups (p≤0.001) when compared to control ones. P-cymene treatment did not have any effect on serum levels of Glu, TG, TC, and LDL in the C25, C50, and C100 groups in comparison to controls (except for Glu in the C50 group).
In contrast, p-cymene treatment increased the levels of VLDL in C25, C50, and C100 animals when compared to control rats, but these increases were significantly lower compared to the VLDL level in the sham group.

Table 1 Biochemical results of the diabetic rats and controls.

| Group | Glu (mg/dl) | TG (mg/dl) | TC (mg/dl) | LDL (mg/dl) | HDL (mg/dl) | VLDL (mg/dl) |
|---|---|---|---|---|---|---|
| Control (C) | 156.50 ± 11.80### | 79.83 ± 3.19+++# | 61.67 ± 2.22+## | 18.67 ± 2.22 | 34.33 ± 1.89 | 7.55 ± 0.58### |
| Sham (D) | 317.00 ± 18.01∗∗∗+++ | 101.33 ± 5.50∗+++ | 78.83 ± 3.18∗∗+++ | 22.67 ± 1.33++ | 28.50 ± 1.75+++ | 23.00 ± 1.53∗∗∗+++ |
| Met | 161.00 ± 18.88### | 36.67 ± 2.65∗∗∗### | 48.50 ± 3.92∗### | 15.50 ± 1.15## | 44.33 ± 2.50### | 7.62 ± 0.70### |
| C25 | 173.83 ± 16.09### | 81.33 ± 6.16+++ | 57.83 ± 2.61### | 16.17 ± 0.54# | 53.17 ± 4.80∗∗∗### | 14.60 ± 0.61∗∗∗###+++ |
| C50 | 232.00 ± 26.97 | 87.17 ± 2.68+++ | 71.50 ± 2.17+++ | 18.83 ± 1.25 | 60.83 ± 1.08∗∗∗###+++ | 16.33 ± 1.22∗∗∗###+++ |
| C100 | 191.17 ± 15.37### | 67.67 ± 5.23+++### | 66.50 ± 1.95+++ | 16.83 ± 1.30# | 52.5 ± 2.03∗∗∗### | 12.38 ± 0.73∗∗###++ |
| D25 | 207.67 ± 13.53## | 37.00 ± 2.66∗∗∗### | 58.00 ± 2.35### | 18.50 ± 1.26 | 44.0 ± 1.37## | 7.80 ± 0.60### |
| D50 | 233.00 ± 20.23 | 38.00 ± 2.79∗∗∗### | 65.83 ± 3.46#++ | 18.67 ± 0.33 | 47.67 ± 2.08∗∗### | 7.32 ± 0.52### |
| D100 | 205.50 ± 29.34## | 64.83 ± 8.09++### | 68.17 ± 2.83+++ | 17.00 ± 0.86# | 45.83 ± 2.52∗### | 13.03 ± 0.80∗∗###++ |

Values are presented as mean ± SEM (n = 6/each group). ∗Statistically different from the control rats (p<0.05), ∗∗statistically different from the control rats (p<0.01), ∗∗∗statistically different from the control rats (p<0.001); #statistically different from the sham rats (p<0.05), ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001); +statistically different from the Met rats (p<0.05), ++statistically different from the Met rats (p<0.01), +++statistically different from the Met rats (p<0.001).
The sham (D): diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg). TG: triglyceride, TC: total cholesterol, LDL: low-density lipoprotein, HDL: high-density lipoprotein, VLDL: very-low-density lipoprotein.

The serum levels of AST, ALT, and ALP are presented in Table 2. We observed that injection of STZ significantly increased (p≤0.001) the serum levels of AST, ALT, and ALP in the sham group in comparison to control animals. Administration of metformin or p-cymene improved the serum levels of these factors in diabetic rats (p≤0.001) when compared to the sham group (except for AST in the D100 group; p≥0.05). Changes in AST, ALT, and ALP levels were most similar to control animals when diabetic animals were treated with p-cymene at doses of 25, 25, and 50 mg/kg, respectively. P-cymene treatment did not have any effect (p≥0.05) on serum levels of AST and ALT in the C25, C50, and C100 groups in comparison to controls (except for AST in the C25 group; p≤0.01). In contrast, p-cymene treatment increased the levels of ALP in C25, C50, and C100 animals when compared to control rats (p≤0.001), but these increases were significantly lower compared to the ALP levels in the sham group (p≤0.001).

Table 2 Results of AST, ALT, ALP, and MDA in diabetic rats and controls.
| Group | AST (mg/dl) | ALT (mg/dl) | ALP (mg/dl) | MDA (μmol/l) |
|---|---|---|---|---|
| Control (C) | 81.50 ± 2.53### | 66.17 ± 4.10### | 473.80 ± 26.11### | 5.17 ± 0.40### |
| Sham (D) | 110.67 ± 1.48∗∗∗+++ | 97.17 ± 1.17∗∗∗+++ | 1007.50 ± 16.01∗∗∗+++ | 10.17 ± 0.48∗∗∗+++ |
| Met | 92.00 ± 2.83### | 57.00 ± 2.84### | 391.67 ± 14.40### | 5.25 ± 0.48### |
| C25 | 95.25 ± 1.28∗∗### | 65.33 ± 3.38### | 777.75 ± 29.58∗∗∗##+++ | 5.83 ± 0.31### |
| C50 | 92.17 ± 2.77### | 72.83 ± 4.73### | 741.20 ± 27.94∗∗∗###+++ | 5.33 ± 0.21### |
| C100 | 91.33 ± 2.09### | 56.50 ± 4.51### | 730.50 ± 36.38∗∗∗###+++ | 5.00 ± 0.37### |
| D25 | 87.00 ± 2.32### | 66.00 ± 6.52### | 500.33 ± 52.85### | 6.67 ± 0.33### |
| D50 | 94.50 ± 1.80∗∗### | 59.00 ± 2.00### | 457.50 ± 48.08### | 7.00 ± 0.37∗###+ |
| D100 | 102.33 ± 2.08∗∗∗+ | 58.83 ± 1.14### | 523.20 ± 33.68### | 6.63 ± 0.33### |

Values are presented as mean ± SEM (n = 6/each group). ∗∗Statistically different from the control rats (p<0.01), ∗∗∗statistically different from the control rats (p<0.001); ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001); +statistically different from the Met rats (p<0.05), +++statistically different from the Met rats (p<0.001). The sham (D): diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg). AST: aspartate aminotransferase, ALT: alanine aminotransferase, ALP: alkaline phosphatase, MDA: malondialdehyde.

The levels of MDA in liver tissues are presented in Table 2. Injection of STZ markedly increased (p≤0.001) the amount of MDA in the sham group in comparison to the control ones. Administration of metformin or p-cymene improved the amount of MDA in diabetic rats (p≤0.001) when compared to the sham group. No difference was observed between the three groups D25, D50, and D100.

### 3.2. Dithizone Staining of Pancreatic Tissues

The zinc-binding substance diphenylthiocarbazone (dithizone, DTZ) was used to stain pancreatic tissues. Previous studies have demonstrated that DTZ staining does not adversely affect islet function either in vitro or in vivo. Figure 1 shows the pancreatic tissues stained with DTZ. The red (DTZ-positive) regions indicate the beta-cells in the islets of Langerhans, with normal morphology and no significantly altered islets in the control, C25, C50, and C100 groups. DTZ staining showed loss of beta-cell mass in the Langerhans islets of the diabetic group (the sham or D). Administration of metformin or p-cymene improved the beta-cell mass of the Langerhans islets in diabetic rats (Met, D50, and D100) in a dose-dependent manner when compared to the sham group. Therefore, it seems that administration of p-cymene (100 mg/kg) may improve the changes of the Langerhans islets in diabetic rats.

Figure 1 Dithizone staining of pancreatic tissues in all animal groups. The red or DTZ-positive regions indicate the beta-cells in islets of Langerhans. Administration of metformin or p-cymene obviously increased the beta-cell mass in the Met, D50, and D100 groups in a dose-dependent manner. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 100 μm.

### 3.3. Immunohistochemical Analysis of Akt, phospho-Akt, and mTOR

Figures 2–5 show the levels of Akt, phospho-Akt, and mTOR protein in the pancreas of rats in all animal groups.
As indicated in Figure 5, the levels of Akt, phospho-Akt, and mTOR protein in the control groups (control, C25, C50, and C100) were markedly higher than those in the diabetic groups (the sham, Met, D25, D50, and D100). Injection of STZ significantly reduced the expression of Akt, phospho-Akt, and mTOR proteins in the sham group in comparison to control animals. The data showed that the decrease in phospho-Akt in the diabetic group (the sham or D) was much greater than the decrease in Akt. Administration of metformin or p-cymene (100 mg/kg) increased the expression of Akt, phospho-Akt, and mTOR in the Met and D100 groups, respectively. The increase in phospho-Akt was much greater than the increase in Akt. There was no significant difference in the expression of Akt, phospho-Akt, and mTOR between the sham, D25, and D50 groups.

Figure 2 Fluorescence immunocytochemistry analysis of Akt. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 50 μm.

Figure 3 Fluorescence immunocytochemistry analysis of phospho-Akt. C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg), and pAkt: phospho-Akt. Magnification 400x. Scale bar 100 μm.

Figure 4 Fluorescence immunocytochemistry analysis of mTOR.
C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), D100: diabetic rats + p-cymene (100 mg/kg). Magnification 400x. Scale bar 50 μm.

Figure 5 The expression of Akt, pAkt (phospho-Akt), and mTOR protein. ∗Statistically different from the control rats (p<0.05), ∗∗∗statistically different from the control rats (p<0.001), ##statistically different from the sham rats (p<0.01), ###statistically different from the sham rats (p<0.001), +statistically different from the Met rats (p<0.05), ++statistically different from the Met rats (p<0.01), +++statistically different from the Met rats (p<0.001).
C: control rats, D: the sham or diabetic rats, MET: diabetic rats + metformin (55 mg/kg), C25: control rats + p-cymene (25 mg/kg), C50: control rats + p-cymene (50 mg/kg), C100: control rats + p-cymene (100 mg/kg), D25: diabetic rats + p-cymene (25 mg/kg), D50: diabetic rats + p-cymene (50 mg/kg), and D100: diabetic rats + p-cymene (100 mg/kg).

## 4. Discussion

In the present study, we found that p-cymene ameliorates some pathophysiological features in a Wistar rat model of diabetes. The data showed that p-cymene can improve serum levels of Glu, TC, TG, HDL, LDL, ALP, ALT, AST, MDA, and VLDL in diabetic rats. It also improved the expression of mTOR, Akt, and phospho-Akt protein in diabetic animals. Intraperitoneal injection of streptozotocin was used to develop the experimental model of diabetes mellitus. Many studies have utilized streptozotocin for the induction of diabetes in animal models. This model is based on preferential necrosis of beta-cells by streptozotocin, resulting in hypoinsulinemia and hyperglycemia. Streptozotocin is an antimicrobial compound extracted from the bacterium Streptomyces achromogenes that selectively damages beta-cells in pancreatic islets [36]. It is structurally similar to a glucose molecule; therefore, it can bind to the glucose transporter 2 (GLUT2) and enter the cell. Pancreatic beta-cells are rich in GLUT2, making them a specific target for streptozotocin [36]. In the present study, DTZ staining showed a loss of beta-cell mass in the Langerhans islets of the streptozotocin-induced diabetic group. Based on the obtained data, extensive pathological changes, such as hyperglycemia, hyperlipidemia, increased activities of liver enzymes, and increased oxidative stress, were observed in the diabetic group. The serum levels of TC, TG, and VLDL were increased in the diabetic rats in comparison to control ones, indicating hyperlipidemia.
Hyperlipidemia is very common in subjects with diabetes mellitus and is one of the reasons for the high risk of coronary heart disease in these individuals [37]. Hyperlipidemia results from a decrease in the activity of lipoprotein lipases under diabetic and hypoinsulinemic conditions [38]. Increased AST, ALT, and ALP were also observed in the serum of the diabetic group in comparison to control rats. AST, ALT, and ALP act as markers of liver function, and their increase indicates liver injury [39]. Previous reports have shown that the serum levels of AST, ALT, and ALP increase in patients with diabetes, insulin resistance, and metabolic syndrome [40–42]. Because of the central role of the liver in glucose homeostasis, hypoinsulinemia affects the liver and leads to hepatic injury [43]. The mechanism by which streptozotocin induces liver damage is not well understood. Zafar et al. [44] showed elevated liver enzymes such as AST, ALT, and ALP following streptozotocin treatment. They also showed accumulation of lipid droplets, lymphocytic infiltration, and increased fibrous content in the liver of treated animals, and suggested that diabetic complications in the liver may result from changes in liver enzymes.

Analysis of liver tissues showed an increase in oxidative stress in STZ-induced diabetic rats. Many studies have revealed an increase in biomarkers of oxidative stress in patients with diabetes [7–11]. Diabetes is a chronic metabolic disorder in which mitochondria play a key role as the most common source of ROS production. There is an important association between high blood glucose levels and the induction of oxidative stress [12, 13]. Therefore, one strategy for T2DM therapy is controlling ROS production. MDA is a known oxidative stress biomarker that results from lipid peroxidation of polyunsaturated fatty acids by ROS [45].
In the current study, injection of STZ markedly increased the amount of MDA in the sham rats, indicating oxidative stress in diabetic animals.

The expression of mTOR, Akt, and phospho-Akt proteins was also determined in the diabetic animals. The PI3K/Akt/mTOR pathway is crucial in the regulation of signal transduction and many cellular mechanisms, including survival, proliferation, growth, metabolism, angiogenesis, and metastasis, in both normal and pathological conditions. Dysregulation of this pathway is associated with many human disorders, including diabetes [46]. Activation of PI3K results in recruitment of Akt to the cell membrane, its phosphorylation, and its activation. Phosphorylated Akt mediates the phosphorylation and activation of mTOR, which subsequently regulates growth and the metabolism of glucose and lipids [47, 48]. Among the various factors identified to enhance the PI3K/Akt pathway, insulin is a crucial activator [49]. Therefore, changes in insulin can affect Akt activity. Both decreased and increased Akt function have been reported in diabetes mellitus [28]. In the present study, the expression of Akt, phospho-Akt, and mTOR proteins was decreased in the pancreatic tissues of diabetic animals, which is in line with the results of Bathina and Das [50], who examined the alteration of the PI3K/Akt/mTOR pathway in the brain of streptozotocin-induced type 2 diabetic rats. They showed that oxidative stress and apoptosis of pancreatic beta-cells can be increased following treatment with streptozotocin, and that streptozotocin can reduce the expression of phosphorylated Akt and phosphorylated mTOR in treated animals [50]. In the present work, the decrease in phospho-Akt in the diabetic group (the sham or D) was much greater than the decrease in Akt.
Therefore, streptozotocin may block signal transduction via dysregulation of the PI3K/Akt/mTOR pathway, which is subsequently associated with the pathophysiological processes of diabetes mellitus and its complications.

Here, we investigated the effects of p-cymene on the treatment of T2DM in a rat model of diabetes and compared its effects with metformin. P-cymene [1-methyl-4-(1-methylethyl)-benzene] is an aromatic organic monoterpene isolated from more than 100 medicinal plants belonging to the Thymus genus. A wide range of pharmaceutical properties of p-cymene has been demonstrated, including antioxidant, anti-inflammatory, antinociceptive, anxiolytic, anticancer, and antimicrobial effects [22–24]. Metformin is a first-line medicine to control high blood glucose in patients with type 2 diabetes mellitus. It reduces the amount of glucose absorbed from food and the amount of glucose produced by the liver, and it increases the response of the body to insulin [51]. In the present study, p-cymene was used at three doses: 25, 50, and 100 mg/kg. Administration of p-cymene ameliorated the adverse features induced by streptozotocin. P-cymene treatment resulted in a significant decrease in serum glucose levels of diabetic rats, similar to metformin, which is in line with the results of Lotfi et al. [30], who showed that p-cymene and metformin, alone or in combination, can decrease blood glucose levels in mice fed a high-fat diet [30]. Similar results were reported by Ghazanfar et al. [52] and Bayramoglu et al. [53], who suggested antidiabetic and blood-glucose-lowering properties of Artemisia amygdalina extracts and oregano oil in STZ-induced diabetic rats [52, 53]; one of the active components in these extracts is p-cymene. We also observed that the glucose level was increased by p-cymene treatment in the control rats, although overall not significantly.
The only significant effect on glucose levels in control rats was observed with 50 mg/kg p-cymene. A similar result was observed in the D50 group. Although p-cymene decreased the amount of glucose in the diabetic groups, this reduction was much greater in the D25 and D100 groups than in the D50 group. It seems that low and high doses of p-cymene can improve the glucose level in streptozotocin-induced diabetic rats, whereas the median dose (50 mg/kg) has the opposite effect on the blood glucose level.

The lipid profile was also influenced by p-cymene treatment. Data analyses showed a positive effect of p-cymene on lipid factors including TG, TC, and VLDL in diabetic animals, in line with the results of Lotfi et al. [30], Ghazanfar et al. [52], and Bayramoglu et al. [53]. Changes in TG, TC, and VLDL levels were most similar to the control group when diabetic animals were treated with p-cymene at doses of 100, 25, and 25 or 50 mg/kg, respectively. However, administration of p-cymene improved serum TG, TC, and VLDL values in all diabetic rats (except for TC in the D100 group) when compared to the sham ones. Therefore, it seems that p-cymene at doses of 25 and 50 mg/kg is much more effective than at 100 mg/kg. Streptozotocin had no effect on blood levels of HDL; however, p-cymene increased blood HDL in all treated animals (C25, C50, C100, D50, and D100 groups). With respect to the effect of p-cymene on the lipid profile, since a decrease was observed in TG, TC, and VLDL in treated groups in comparison to the sham group, and an increase was observed in HDL in treated animals in comparison to control ones, the hypolipidemic potential of p-cymene cannot be ruled out, but further experiments with other suitable animal models of diabetes are needed.

The liver enzyme profile was also influenced by p-cymene administration in a similar way to metformin.
Administration of metformin or p-cymene markedly decreased the levels of AST, ALT, and ALP in the blood of streptozotocin-induced diabetic animals (D25, D50, and D100 groups), except for AST in the D100 group. Therefore, it seems that p-cymene can prevent beta-cell destruction and reduce liver injury in streptozotocin-induced diabetic rats. Similar results were observed in studies by Lotfi et al. [30], Ghazanfar et al. [52], and Bayramoglu et al. [53]. Changes in AST, ALT, and ALP levels were most similar to control animals when diabetic animals were treated with p-cymene at doses of 25, 25, and 50 mg/kg, respectively. Therefore, it seems that p-cymene at doses of 25 and 50 mg/kg is much more effective than at 100 mg/kg. The ALP level was increased by p-cymene treatment in the control rats (C25, C50, and C100 groups), although this increase was much lower than in the sham rats. This observation may limit the administration of p-cymene in a healthy population, and additional studies are needed to fully explain it.

P-cymene treatment also decreased oxidative stress in diabetic animals. Administration of p-cymene significantly decreased MDA levels in the diabetic groups, indicating antioxidant properties of p-cymene; the antioxidant activity of p-cymene was also suggested by Oliveira et al. [22]. No difference was observed between the three doses of 25, 50, and 100 mg/kg.

The effect of p-cymene on the expression of Akt, phospho-Akt, and mTOR proteins was also examined. Our experiments showed a positive but small effect of metformin or p-cymene on the expression of these proteins in streptozotocin-induced diabetic animals. The expression of Akt, phospho-Akt, and mTOR decreased in diabetic animals, but after metformin or p-cymene treatment, the amounts of these proteins increased moderately. The increase in phospho-Akt was much greater than the increase in Akt.
In addition, this increase was much greater at the dose of 100 mg/kg of p-cymene, suggesting that p-cymene affects the Akt/mTOR signaling pathway in a dose-dependent manner. No difference was observed between the doses of 25 and 50 mg/kg.

## 5. Conclusions

Overall, this study showed that hyperglycemia, hyperlipidemia, liver injury, oxidative stress, and suppression of the Akt/mTOR signaling pathway occur in streptozotocin-induced diabetic rats. Administration of p-cymene significantly prevented the progression of diabetes. It probably has promising antidiabetic potential, can reduce liver injury and oxidative stress, and can improve the Akt/mTOR signaling pathway. According to these results, p-cymene may be suggested for the control of diabetes in diabetic individuals. However, the effective dose, period of treatment, and interaction with other supplements must be investigated, and further studies are required to elucidate the mechanism responsible for the antidiabetic characteristics of p-cymene. The antidiabetic effects of p-cymene are comparable with metformin, and it may be used as an adjunct treatment for diabetic patients.

---

*Source: 1015669-2022-04-26.xml*
2022
# Numerical Simulation of Projectile Oblique Impact on Microspacecraft Structure

**Authors:** Zhiyuan Zhang; Runqiang Chi; Baojun Pang; Gongshun Guan
**Journal:** International Journal of Aerospace Engineering (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1015674

---

## Abstract

In the present study, the microspacecraft bulkhead was reduced to a double honeycomb panel, and oblique hypervelocity impact of a projectile on the double honeycomb panel was simulated. The distribution of the debris cloud and the damage of the honeycomb sandwich panel were investigated for incident angles of 60°, 45°, and 30°. The results showed that as the incident angle decreased, the debris cloud spread over a gradually wider region, while the maximum perforation size of the rear face sheet first increased with decreasing incident angle and then decreased. On the other hand, the damage area and damage degree of the front face sheet of the second honeycomb panel layer increased with decreasing incident angle. Finally, the critical angles of the front and rear face sheets of the honeycomb sandwich panel were obtained under oblique hypervelocity impact.

---

## Body

## 1. Introduction

A large amount of space debris has accumulated in near-Earth space as human space activities have increased, posing a serious threat to the safe operation of spacecraft in orbit [1, 2]. The honeycomb sandwich panel, made up of face sheets bonded to a honeycomb core, is commonly used as a structural material for spacecraft bulkheads [3, 4]. Space debris, with an average speed of 10 km/s in Earth orbit, can easily penetrate a honeycomb sandwich panel [1, 3].
As for the spacecraft bulkhead, the honeycomb sandwich panel is the first structure to suffer the impact of space debris [5, 6].

Many studies have analyzed the damage characteristics induced by space debris impact on a single-layer honeycomb sandwich panel [7] through experiments and numerical simulations, considering the effects of projectile size [8], impact velocity [8, 9], materials [10], and the collision limit [11]. However, hypervelocity impact on double honeycomb sandwich panels, which differ from two honeycomb sandwich panel layers bonded together [5], has not yet been reported. If one side of the spacecraft bulkhead is penetrated, the internal equipment and the other side of the bulkhead can also be damaged owing to the high speed and large kinetic energy of the space debris. A study of the effects of high-speed impact on the front and rear bulkheads is therefore necessary.

In view of space debris impact on both sides of the spacecraft bulkhead, the spacecraft was simplified to a double honeycomb sandwich panel structure, which was used to study the breakup of the projectile and the damage of the honeycomb sandwich panel under oblique impact. Finally, the critical incident angle of the single honeycomb sandwich panel is calculated in this paper.

## 2. Model of Simulation

Figure 1 shows the double honeycomb sandwich panel structure of the simplified model of the spacecraft. The distance between the two honeycomb sandwich panel layers is 500 mm, and the space debris is simplified as a spherical aluminum alloy projectile. The projectile is modeled with Smoothed Particle Hydrodynamics (SPH), which avoids the excessive mesh deformation the projectile would undergo in a grid-based model. The finite element method (FE) is used for the honeycomb sandwich panel, with shell elements for the honeycomb core. The SPH particle size is set to 0.2 mm. The grid sizes near the impact point and elsewhere are set to 0.3 mm and 0.8 mm, respectively.
Each honeycomb core cell is matched to the corresponding grid on the panel. Using the model in Figure 2, the nodes of the honeycomb core and the panel grids are merged. The projectile impact on the honeycomb sandwich panel at high speed was simulated using AUTODYN 15.0, and the simulation parameters are shown in Table 1. Geometric strain was used as the erosion model, with the value set to 2. In this model, D is the projectile diameter, h is the thickness of the honeycomb sandwich panel face sheet, L is the side length of the hexagonal honeycomb core, H is the height of the honeycomb core, and t is the thickness of the cellular wall. The model parameters of the honeycomb sandwich panel in this paper were based on data from a microspacecraft project, applicable to communication, ground remote sensing, interplanetary exploration, scientific research, and so forth.

Table 1 Description of the simulation model.

| | Material | Dimension/mm | EOS | Strength | Failure |
| --- | --- | --- | --- | --- | --- |
| Projectile | Al 2017 | Sphere, D = 5 | Shock | Johnson-Cook | Principal stress |
| Face sheet | Al 5A06 | h = 0.8 | Shock | Johnson-Cook | Plastic strain |
| Honeycomb core | Al 5A06 | L = 4, H = 20, t = 0.025 | Linear | Johnson-Cook | Plastic strain |

Figure 1 Double honeycomb sandwich panel structure diagram.

Figure 2 Honeycomb panel model.

The Johnson-Cook model can describe the nonlinear process of high-speed impact and can be written as

(1) σ = (A + Bε^n)(1 + C ln ε̇∗)(1 − T∗^m)

in which A, B, n, C, and m are material constants, ε̇∗ is the ratio of the strain rate to the reference strain rate, T∗ = (T − Tr)/(Tm − Tr), Tr is room temperature (300 K), and Tm is the melting point of the material. The material parameters used in the numerical simulation are shown in Table 2.

Table 2 Parameters of materials [12].

| | A/MPa | B/MPa | n | C | m | Tm/K | Density/g·cm⁻³ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Al 5A06 | 235.4 | 622.3 | 0.58 | 0.0174 | 1.05 | 853 | 2.64 |
| Al 2017 | 249.9 | 426.0 | 0.34 | 0.015 | 1.0 | 775 | 2.79 |

We have validated the accuracy of the simulation model by experiment, as shown in Figure 3 [13] and Table 3.
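As a sanity check on the Table 2 parameters, the Johnson-Cook flow stress in (1) can be evaluated directly. The sketch below (in Python; the function and argument names are chosen for illustration, not taken from AUTODYN) computes σ for Al 5A06 at room temperature and the reference strain rate, where the rate and thermal factors both reduce to 1.

```python
import math

def johnson_cook_stress(strain, rate_ratio, T, A, B, n, C, m,
                        Tr=300.0, Tm=853.0):
    """Johnson-Cook flow stress, Eq. (1):
    sigma = (A + B*eps^n) * (1 + C*ln(eps_dot*)) * (1 - T*^m).
    Valid for Tr <= T <= Tm (T* in [0, 1])."""
    T_star = (T - Tr) / (Tm - Tr)  # homologous temperature
    return (A + B * strain ** n) * (1.0 + C * math.log(rate_ratio)) * (1.0 - T_star ** m)

# Al 5A06 at 10% plastic strain, reference strain rate, room temperature:
# thermal and rate factors are 1, so sigma = 235.4 + 622.3 * 0.1**0.58 (MPa)
sigma = johnson_cook_stress(0.1, 1.0, 300.0,
                            A=235.4, B=622.3, n=0.58, C=0.0174, m=1.05, Tm=853.0)
```

At these reference conditions the result is roughly 399 MPa, i.e. only the strain-hardening term contributes beyond the yield constant A.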
The parameters of the projectile and the honeycomb sandwich panel were as shown in Table 1. The impact velocity of the projectile was 1.915 km/s, and the perforation diameter of the rear face sheet was about 14 mm in both the experimental and simulation results. The other experiments are shown in Table 3. The experimental results coincide with the simulation results, which verifies the correctness of the simulation model.

Table 3 Experiment and simulation results for the rear face sheet of the honeycomb sandwich panel.

| | Projectile diameter/mm | Velocity/km·s⁻¹ | Experiment result/mm | Simulation result/mm | Error |
| --- | --- | --- | --- | --- | --- |
| Test 1 | 5 | 1.915 | 14.1 | 15.3 | 7.8% |
| Test 2 | 8 | 1.7 | 22.5 | 24.6 | 8.5% |
| Test 3 | 5 | 3.065 | 25.4 | 23.9 | 5.9% |

Figure 3 Experiment and simulation results [12]. (a) Experimental result (b) Simulation result

## 3. Projectile Oblique Impact on Structure of Double Honeycomb Sandwich Panel

### 3.1. The Configuration of Debris Cloud

A half-symmetry model was adopted, including 145164 solid elements, 239056 shell elements, and 4072 SPH particles. The projectile velocity is set to 3.07 km/s. Speeds of 3 km/s and below were studied because they fall within the speed range of orbital debris, and speeds less than or equal to 3 km/s are an important part of the ballistic limit. Every SPH particle carries physical quantities such as mass and velocity, so the distribution of the debris cloud can be described by analyzing the projectile mass fraction of the debris cloud. The distribution of the debris cloud after the projectile impacted the first layer of the honeycomb sandwich panel is shown in Figure 4. In Figure 4(a), the debris cloud distribution is divided into three regions. Region I is the part above the front face sheet of the honeycomb sandwich panel, that is, the backwash debris cloud. Region II is the interior of the honeycomb sandwich panel, consisting of the radially moving part of the projectile debris cloud and the backwash debris cloud formed under the influence of the rear face sheet.
Region III is the part below the rear face sheet, that is, the portion of the debris cloud that passed through the rear face sheet.

Figure 4 The distribution of the debris cloud at 9.5 × 10⁻² ms. (a) The division of debris cloud distribution area (b) α = 60° (c) α = 45° (d) α = 30°

Figures 4(b), 4(c), and 4(d) show the distribution of the projectile debris cloud when the impact angle was 60°, 45°, and 30°, respectively. The debris cloud distribution was obviously influenced by the impact angle. In order to analyze the distribution of the debris cloud accurately, the projectile mass fraction of the debris cloud in the different distribution regions was obtained, as shown in Table 4. From Figure 4 and Table 4, it can be seen that Region III increased gradually with decreasing impact angle. The projectile debris cloud was mainly distributed in Region III when the impact angle was 30°, and the main movement direction of the debris cloud was then vertical.

Table 4 Projectile mass fraction of debris cloud.

| Incident angle | Region I | Region II | Region III |
| --- | --- | --- | --- |
| 60° | 18.33% | 33.33% | 48.33% |
| 45° | 4.76% | 23.67% | 71.67% |
| 30° | 1.67% | 18.33% | 80.00% |

### 3.2. Damage of the First Layer of Double Honeycomb Sandwich Panel

The damage of the panel and honeycomb core after the projectile impacted the first layer of the double honeycomb sandwich panel is shown in Figure 5. The debris cloud of the projectile entering the honeycomb sandwich panel can be divided into two parts: the “main part,” between the two straight lines shown in Figure 5, and the “secondary part,” between the two curves excluding the main part. The secondary part of the debris cloud did not have enough kinetic energy to pass through the panel; it only expanded inside the honeycomb core after the backwash and then penetrated into the cellular walls.
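The region-wise mass fractions of Table 4 come from tallying SPH particle masses by region. A minimal sketch of that bookkeeping, assuming particles have already been labeled by region from their positions (the input format here is illustrative, not the simulation's actual output):

```python
from collections import defaultdict

def region_mass_fractions(particles):
    """Sum SPH particle mass per region and normalize by total projectile mass.
    `particles` is an iterable of (mass, region_label) pairs."""
    totals = defaultdict(float)
    for mass, region in particles:
        totals[region] += mass
    total = sum(totals.values())
    return {region: m / total for region, m in totals.items()}

# toy example: six equal-mass particles, half of them below the rear face sheet
fractions = region_mass_fractions(
    [(1.0, "I"), (1.0, "II"), (1.0, "II"),
     (1.0, "III"), (1.0, "III"), (1.0, "III")]
)
```

The fractions always sum to 1, which is a useful consistency check (the 45° row of Table 4 sums to 100.1%, presumably a rounding artifact).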
Since penetrating a cellular wall consumes kinetic energy, the energy of the debris cloud decreased until, after passing through a certain number of cellular walls, it could no longer penetrate the honeycomb core.

Figure 5 Damage of the first layer of honeycomb sandwich panel. (a) Coordinate system (b) α = 60° (c) α = 45° (d) α = 30°

A coordinate system was defined to describe the damage degree of the honeycomb panel (Figure 5(a)), with the coordinate origin able to move along the x-axis. With decreasing projectile impact angle, the projectile velocity along the x-axis decreased accordingly, and the projectile mass fraction of the expanding debris cloud along the x-axis was also reduced. As a result, the damage area of the honeycomb core caused by the debris cloud decreased. It can be seen from Figure 6 that the expansion scope of the debris cloud was reduced with decreasing impact angle. When the impact angle was 30° (Figure 5(d)), the damage range of the honeycomb core was reduced significantly compared with 60° (Figure 5(b)) and 45° (Figure 5(c)). Thus, there is an obvious relationship between the damage of the honeycomb sandwich panel and the impact angle.

Figure 6 Second layer of double honeycomb sandwich panels (incidence angle is 45°). (a) The configuration of debris cloud (b) Damage of the second layer of double honeycomb sandwich panel

Different impact angles lead to different perforation shapes and sizes of the rear face sheet, as shown in Table 5. The perforation shape can be described approximately by an ellipse. The perforation was more oval at an impact angle of 60° than at 45° and 30°; as the impact angle decreases, the ellipse degenerates toward a circle.

Table 5 The perforation shapes of the front and rear face sheets at incident angles of 60°, 45°, and 30° (images not reproduced).

As shown in Table 5, the largest perforation size of the front face sheet decreases with decreasing impact angle.
Meanwhile, the largest perforation size of the rear face sheet first increased and then decreased. The largest perforation of the front face sheet was slightly greater than that of the rear face sheet when the impact angle was 60°, but the former was significantly smaller than the latter at 45° and 30°. This is because the z-axis velocity component was larger at smaller impact angles, so more debris cloud particles impacted the rear face sheet, resulting in greater damage there.

### 3.3. Damage of the Second Layer of Double Honeycomb Sandwich Panel

In the double honeycomb sandwich panel structure, the projectile broke up after it penetrated the first layer of the honeycomb sandwich panel and then continued to move in the form of a debris cloud until it impacted the second layer. The shape of the debris cloud was related to the incident angle of the projectile, including whether the projectile broke partly or completely, and the damage degree of the rear face sheet was associated with the shape of the projectile debris cloud. Because of their small size and low velocity, the fragments of the front face sheet of the first layer were insignificant, so the effect of face sheet fragments on the damage is ignored in this paper.

The debris cloud formed after the projectile penetrated the first layer of the honeycomb sandwich panel impacted the second layer, located 500 mm from the first. When the impact angle was 60°, the projectile mass fraction of the debris cloud that penetrated the first layer was small and the projectile was fully broken (Figure 4(b)), so the kinetic energy of the debris cloud was too low to cause obvious damage to the second honeycomb sandwich panel.
Therefore, damage of the second layer of the honeycomb sandwich panel was studied only for impact angles of 45° and 30°, as shown in Figures 6 and 7.

Figure 7 Second layer of honeycomb sandwich panels (incidence angle is 30°). (a) The configuration of debris cloud (b) Damage of the second layer of honeycomb sandwich panel

Figure 4(c) shows the form of the projectile debris cloud after the projectile had penetrated the first layer of the honeycomb sandwich panel but before it impacted the second layer, for an impact angle of 45°. Large unbroken pieces at the front of the debris cloud were the main cause of the perforation of the front face sheet of the second layer. Most of the scattered debris cloud could not penetrate the front face sheet of the second layer but splashed after impacting it, as shown in Figure 6(a). Figure 6(b) shows the damage of the second layer of the honeycomb sandwich panel. From the two perforations in the front face sheet, it can be seen that two pieces of debris impacted the front face sheet and punched through it; the smaller piece broke up fully after passing through the front face sheet, so it could not damage the rear face sheet again. The larger pieces of debris impacted the rear face sheet after penetrating the front face sheet, but by then the kinetic energy of the debris cloud was too small to penetrate the rear face sheet, and only craters were formed in it.

The distribution of the debris cloud and the damage after the projectile impacted the second layer of the double honeycomb sandwich panel at an impact angle of 30° are shown in Figure 7. There are three obvious perforations on the rear face sheet.
The projectile was further broken when debris cloud penetrated the second layer of honeycomb sandwich panel so that the rear face sheet without obvious damage due to the velocity component of z-axis at the impact angle of 30° is larger than 45°.According to the analysis of the damage of projectile impact double honeycomb sandwich panel, it can be found that the impact damage was closely related to the oblique impact angle. The projectile broken degree, the debris cloud distribution, and the damage on each layer of honeycomb sandwich panel were different under different impact angles. Ruining of the first layer of honeycomb sandwich panel was the most serious when projectile oblique impact double honeycomb sandwich panel structure. Debris cloud forms were uncommon after the projectile penetrated the first layer of honeycomb sandwich panel at different impact angles, resulting the different damage shapes on the second layer of honeycomb sandwich panel. Incident angle of space debris impact the spacecraft is also different because the space debris and spacecraft orbit were different in the real space environment. It is terrible that the internal equipment of spacecraft and the other side of the bulkhead will be damaged if the space debris penetrated one side of the bulkhead. Critical penetration angle of the projectile oblique impact single honeycomb sandwich panel was studied in this paper based on using single honeycomb sandwich panel to simulate one side of the spacecraft bulkhead. ## 3.1. The Configuration of Debris Cloud Half symmetry model was adopted, including 145164 solid elements, 239056 shell elements, and 4072 SPH particles. The projectile velocity is set to be 3.07 km/s. 3 km/s (or lower) was studied because it is in the extent of speed of orbital debris, and less than or equal to 3 km/s is an important part of the ballistic limit. Every SPH particle contains various kinds of physical quantities, such as mass and speed, and so forth. 
The distribution of debris cloud can be described by the analysis of the projectile mass fraction of debris cloud. The distribution of debris cloud after the projectile impacted the first layer of honeycomb sandwich panel was shown in Figure4. In Figure 4(a), the debris cloud distribution area was divided into three regions. Region I was the part above the front face sheet of honeycomb sandwich panel, which was the backwash debris cloud. Region II was the interior of the honeycomb sandwich panel, which was made of the radial motion parts of projectile debris cloud and the backwash debris cloud which formed due to influence of the rear face sheet. Region III was the part below rear face sheet, which was the debris cloud part which go through the rear face sheet.Figure 4 The distribution of the debris cloud in9.5E-002 ms. (a) The division of debris cloud distribution area (b) α = 60 ° (c) α = 45 ° (d) α = 30 °Figures4(b), 4(c), and 4(d) show the distribution of debris cloud of projectile when the impact angle was 60°, 45°, and 30°, respectively. The debris cloud distribution was obviously influenced by the impact angles. In order to analyze the distribution of debris cloud accurately, the projectile mass fraction of the debris cloud in different debris cloud distribution areas was obtained as shown in Table 4. From Figure 4 and Table 4, it can be seen that Region III increased gradually with the decrease of the impact angle. The projectile debris cloud was mainly distributed in Region III when the impact angle was 30°, and the main movement direction of the debris cloud was in the vertical direction.Table 4 Projectile mass fraction of debris cloud. Incident angle Region I Region II Region III 60° 18.33% 33.33% 48.33% 45° 4.76% 23.67% 71.67% 30° 1.67% 18.33% 80.00% ## 3.2. 
Damage of the First Layer of Double Honeycomb Sandwich Panel Damage of the panel and honeycomb core after the projectile impact the first layer of double honeycomb sandwich panel was formed as shown in Figure5. The debris cloud of projectile enters the honeycomb sandwich panel inside can be divided into two parts, one was the “main part” which was between the two straight lines as shown in Figure 5, and the other part was the “secondary part” which was between the two curves except “main part”; this part of debris cloud did not have enough kinetic energy to get through the panel but only expanded inside the honeycomb core after the backwash and then penetrated into the cellular wall. Since penetrating the cellular wall will consume the kinetic energy of debris cloud, the energy of the debris cloud decreased so that it cannot penetrate into the honeycomb core again after penetrating a certain number of the cellular walls.Figure 5 Damage of the first layer of honeycomb sandwich panel. (a) Coordinate system (b) α = 60 ° (c) α = 45 ° (d) α = 30 °A coordinate system was given to describe the damage degree of honeycomb panel (Figure5(a)), and the coordinate origin can move along x-axis. With the decrease of the projectile impact angle, the projectile velocity along x-axis was reduced accordingly. The projectile mass fraction of inflation debris cloud along x-axis was also reduced. As a result, damage area of the honeycomb core caused by debris cloud was decreased. It can be seen from Figure 6 that the expansion scope of debris cloud was reduced with the decrease of the impact angle. When the impact angle was 30° (Figure 5(d)), the damage range of the honeycomb core was reduced significantly compared with 60° (Figure 5(b)) and 45° (Figure 5(c)). Thus, there are obvious relationships between the damage of honeycomb sandwich panel and the impact angle.Figure 6 Second layer of double honeycomb sandwich panels (incidence angle is 45°). 
(a) The configuration of debris cloud (b) Damage of the second layer of double honeycomb sandwich panelDifferent impact angles lead to various perforation shape and size of rear face sheet as shown in Table5. It can be found that the perforation shape can be described by an ellipse approximately. The perforation shape was more likely an oval shape when the impact angle was 60° compared with that of 45° and 30°; this was induced by the degeneration of elliptic equations.Table 5 The perforation shape of panel. Incident angle 60° 45° 30° Front face sheet Rear face sheetAs shown in Table5, the largest perforation size of the front face sheet reduces with the decrease of the impact angle. Meanwhile, the largest perforation size of the rear face sheet was increased firstly and then decreased. The largest perforation size of the front face sheet was slightly greater than the rear face sheet when the impact angle was 60°, but the former is significantly smaller than the latter when the impact angle was 45° and 30°. This is because the velocity component of z-axis was larger when the impact angle was smaller and more debris cloud particles impact the rear face sheet, resulting in the greater damage in the rear face sheet. ## 3.3. Damage of the Second Layer of Double Honeycomb Sandwich Panel For the double honeycomb sandwich panel structure, the projectile was broken after it penetrated the first layer of the honeycomb sandwich panel, and then the projectile continued to move in the form of debris cloud until it impacted the second layer of honeycomb sandwich panel. The shape of debris cloud was related to the incident angle of the projectile, including the projectile broken partly or completely. The damage degree of rear face sheet was associated with the shape of projectile debris cloud. 
Because of the size and velocity, the nonsignificant fragment of front face sheet of the first layer of honeycomb sandwich panel was observed, so we ignored the effect of fragment of face sheet on damage in this paper.The debris cloud, which formed after the projectile penetrated into the first layer of the honeycomb sandwich panel, impacted the second layer of honeycomb sandwich panel, where distance to the first layer is 500 mm. The projectile mass fraction of debris cloud which penetrated the first layer of the honeycomb sandwich panel was not much and broken fully (Figure4(b)); the kinetic energy of debris cloud was too small to induce obvious damage on the honeycomb sandwich panel plate when impact angle was 60°. Therefore, damage of the second layer of honeycomb sandwich panel was studied only under the case of the impact angles of 45° and 30° in this paper, as shown in Figures 6 and 7.Figure 7 Second layer of honeycomb sandwich panels (incidence angle is 30°). (a) The configuration of debris cloud (b) Damage of the second layer of honeycomb sandwich panelFigure4(c) shows the projectile debris cloud form on the situation that the projectile has penetrated the first layer of honeycomb sandwich panel but did not impact the second layer when the impact angle was 45°. Big pieces did not break in the front of debris cloud, which are the main factors that lead to the perforation on front face sheet and then impacted the second layer of double honeycomb sandwich panel. Most of the scattered debris cloud cannot penetrate the front face sheet of second layer of double honeycomb sandwich panel, but splash after impacted the front face sheet as shown in Figure 6(a). Figure 6(b) shows the damage of the second layer of honeycomb sandwich panel. 
According to the two perforations in front face sheet, we can know that there were two pieces of debris that impacted the front face sheet and formed punches and smaller piece of debris fully broken after getting through the front face sheet so that it cannot cause damage to the rear face sheet again. Large pieces of debris impact the rear face sheet after penetrating the front face sheet. The kinetic energy of debris cloud was too small to penetrate the back panel at that moment and only craters in the rear face sheet were formed.Distribution and damage of debris cloud after the projectile impact the second layer of honeycomb sandwich panel of double when impact angle was 30° as shown in Figure7. There are three obvious perforations on the rear face sheet. The projectile was further broken when debris cloud penetrated the second layer of honeycomb sandwich panel so that the rear face sheet without obvious damage due to the velocity component of z-axis at the impact angle of 30° is larger than 45°.According to the analysis of the damage of projectile impact double honeycomb sandwich panel, it can be found that the impact damage was closely related to the oblique impact angle. The projectile broken degree, the debris cloud distribution, and the damage on each layer of honeycomb sandwich panel were different under different impact angles. Ruining of the first layer of honeycomb sandwich panel was the most serious when projectile oblique impact double honeycomb sandwich panel structure. Debris cloud forms were uncommon after the projectile penetrated the first layer of honeycomb sandwich panel at different impact angles, resulting the different damage shapes on the second layer of honeycomb sandwich panel. Incident angle of space debris impact the spacecraft is also different because the space debris and spacecraft orbit were different in the real space environment. 
If space debris penetrates one side of the bulkhead, the internal equipment of the spacecraft and the opposite side of the bulkhead can also be damaged. The critical penetration angle of a projectile obliquely impacting a single honeycomb sandwich panel was therefore studied, using the single panel to represent one side of the spacecraft bulkhead.

## 4. Critical Penetration Angle of Projectile Oblique Impact on Honeycomb Sandwich Panel

### 4.1. Critical Penetration Angle of the Front Face Sheet

According to the study of the double honeycomb sandwich panel structure, a projectile that penetrates the first honeycomb sandwich panel layer causes more or less damage to the second layer. Moreover, a single honeycomb sandwich panel is one part of the double honeycomb sandwich panel structure, so the critical penetration angle of a projectile obliquely impacting a single honeycomb sandwich panel was studied first.

The critical penetration state lies on the boundary between penetration and non-penetration. The critical penetration angle of the front face sheet of the honeycomb sandwich panel was found through a large number of simulation runs based on a dichotomy (bisection) over the impact angle. Taking an impact speed of 3 km/s as an example: at an impact angle of 87° the front face sheet was not penetrated, as shown in Figure 8(a); the projectile was not broken and only underwent plastic deformation, and impact craters formed on the front face sheet without punching through it. At an impact angle of 85° the front face sheet was penetrated, as shown in Figure 8(b); local plastic deformation and breakup of the projectile occurred, and holes appeared in the front face sheet. The critical penetration angle of the front face sheet was therefore taken as 86°.

Figure 8 Critical penetration of the front face sheet. (a) Impact angle is 87° (b) Impact angle is 85°

It can be seen from Figure 9 that the projectile was not broken when the impact angle was greater than the critical penetration angle of the front face sheet. Therefore, the critical penetration angle of the front face sheet is also the critical value for projectile breakup. Let C denote the set of impact angles at which the projectile is not broken (see (2)) and C̄ the set of angles at which it is broken (see (3)), where θ is the projectile incident angle and θ̂ is the critical penetration angle of the front face sheet:

(2) C = {θ ∣ θ̂ < θ ≤ 90°}
(3) C̄ = {θ ∣ 0° ≤ θ ≤ θ̂}.

Figure 9 Critical penetration of the rear face sheet. (a) Impact angle is 72° (b) Impact angle is 70°

### 4.2. Critical Penetration Angle of the Rear Face Sheet

At an impact speed of 3 km/s, the rear face sheet of the honeycomb sandwich panel was not penetrated at an impact angle of 72°, as shown in Figures 9(a) and 10(a). The projectile was broken after impacting the front face sheet; part of the debris cloud splashed back and then moved along the horizontal direction, while another part moved into the honeycomb core layer. The lateral motion of the debris cloud led to rupture and collapse of the honeycomb core layer.

Figure 10 Damage of rear face sheet with critical perforation. (a) Impact angle is 72° (b) Impact angle is 70°

At an impact angle of 70°, the rear face sheet was perforated, as shown in Figures 9(b) and 10(b). Compared with the 72° case, the number of SPH particles entering the honeycomb core and the vertical component of the projectile velocity both increased. The damage to the rear face sheet therefore increased, and critical perforation of the rear face sheet occurred at the incidence angle of 70°.
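The dichotomy-based search for a critical angle described above can be sketched as a bisection over the impact angle. This is a minimal illustration, not the paper's code: `is_perforated` is a hypothetical stand-in for a full simulation run (in the paper each evaluation is an AUTODYN simulation), assumed to flip exactly once over the angle range:

```python
def find_critical_angle(is_perforated, lo=0.0, hi=90.0, tol=1.0):
    """Bisection for the critical penetration angle.

    Assumes the sheet is perforated at small angles and not at large ones,
    so there is a single threshold angle in [lo, hi].
    """
    # invariant: perforated at `lo`, not perforated at `hi`
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_perforated(mid):
            lo = mid      # still perforated: threshold is above mid
        else:
            hi = mid      # not perforated: threshold is at or below mid
    return 0.5 * (lo + hi)

# toy stand-in: suppose the front face sheet is perforated below 86 degrees
critical = find_critical_angle(lambda angle: angle < 86.0)
```

For the front face sheet, angles above the returned threshold fall in the set C of (2) (projectile not broken) and angles below it in C̄ of (3).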
Therefore, the critical perforation angle of the rear face sheet was taken as 71°.

The ability of a projectile to penetrate the honeycomb sandwich panel differs with the impact speed and the projectile diameter. In order to study the critical perforation angles of the front and rear face sheets of the honeycomb sandwich panel under oblique impact at different impact speeds and diameters, impacts were simulated over the speed range of 1.8~3.4 km/s with projectile diameters of 3 mm, 4 mm, 5 mm, and 6 mm, as shown in Figure 11.

Figure 11 Critical perforation angles under various diameter and velocity of projectile.

The simulation results were compared with the ballistic limit of a sandwich panel in order to verify the accuracy of the simulation. Equation (4) is an example of the generic form of the ballistic limit of a sandwich panel [14]; the particle diameter at the ballistic limit was calculated for a projectile diameter of 3 mm, as shown in Table 6, and the simulation results showed good agreement with the calculated results:

(4) d_c(v_n) = [((t_w/K3S)(τ/40000)^0.5 + t_b) / (0.6 cos θ · v_n^0.677 · ρ_p^0.5)]^0.947.

In (4), d_c is the particle diameter at the ballistic limit, t_w is the thickness of the rear wall (cm), K3S is a baseline coefficient (1.4 [15]), τ is the rear wall yield stress, t_b is the thickness of the bumper, θ is the incidence angle, v_n is the normal component of velocity, and ρ_p is the projectile density.

Table 6 Particle diameter at ballistic limit (the particle diameter of simulation was 3 mm).

| Velocity/km·s⁻¹ | 1.8 | 2 | 2.2 | 2.4 | 2.6 | 2.8 | 3 |
|---|---|---|---|---|---|---|---|
| Calculation results of (4)/mm | 3.53 | 3.36 | 3.34 | 3.22 | 3.07 | 2.92 | 2.79 |

It can be seen that the critical perforation angles of the front and rear face sheets increase slowly, or remain unchanged, with increasing impact velocity. The critical perforation angle of the rear face sheet increases dramatically with increasing diameter, especially from 3 mm to 4 mm, while that of the front face sheet increases only slowly.
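Equation (4) can be evaluated numerically. The sketch below is an illustration only: it assumes a consistent unit set (thicknesses in cm, τ in psi, ρ_p in g/cm³) and uses hypothetical wall parameters, since the paper does not list the exact values behind Table 6; it only reproduces the decreasing trend of d_c with velocity seen there.

```python
import math

def ballistic_limit_diameter(v_n, theta_deg, *, t_w, t_b, tau, rho_p, K3S=1.4):
    """Generic sandwich-panel ballistic limit, (4):
    d_c = [((t_w/K3S)*(tau/40000)^0.5 + t_b) / (0.6*cos(theta)*v_n^0.677*rho_p^0.5)]^0.947
    """
    num = (t_w / K3S) * (tau / 40000.0) ** 0.5 + t_b
    den = 0.6 * math.cos(math.radians(theta_deg)) * v_n ** 0.677 * rho_p ** 0.5
    return (num / den) ** 0.947

# hypothetical panel/projectile parameters, for illustration only
panel = dict(t_w=0.08, t_b=0.08, tau=40000.0, rho_p=2.79)

d_18 = ballistic_limit_diameter(1.8, 0.0, **panel)  # critical diameter at 1.8 km/s
d_30 = ballistic_limit_diameter(3.0, 0.0, **panel)  # critical diameter at 3.0 km/s
# d_c decreases with the normal velocity component, matching the trend in Table 6
```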
## 5. Conclusions

The distribution of the debris cloud and the damage of the double honeycomb sandwich panel under oblique hypervelocity impact were simulated, and the critical penetration angles of the front and rear face sheets of the honeycomb sandwich panel were obtained.
The conclusions are as follows.

The mass fractions of the projectile debris cloud that back-splashed and that stayed inside the first layer of the honeycomb sandwich panel after crossing the honeycomb core decrease with decreasing impact angle, while the damage to the first layer gradually increases. Accordingly, the largest perforation size of the front face sheet decreased, and the largest perforation size of the rear face sheet first increased and then decreased.

The projectile mass fraction of the debris cloud that passed through the first layer of the honeycomb sandwich panel was too small to damage the second layer at large impact angles, but it increased gradually as the impact angle decreased, so the damage to the front face sheet of the second honeycomb panel layer became more and more obvious; the damage degree of the second layer was related to the degree of projectile breakup after penetrating the first layer. The projectile does not break when the impact angle is greater than the critical perforation angle of the front face sheet, but it breaks progressively as the angle falls below this critical value.

The critical perforation angles of the front and rear face sheets increase slowly, or remain unchanged, with increasing impact velocity. The critical perforation angle of the rear face sheet increases dramatically with increasing projectile diameter, especially from 3 mm to 4 mm, while that of the front face sheet increases only slowly.

For the oblique hypervelocity impact of a projectile on the double honeycomb panel, only the distribution of the projectile debris cloud and the damage of the honeycomb panel at a single velocity (3.07 km/s) and three incidence angles (60°, 45°, and 30°) were studied here. Future studies are therefore worthwhile to explore this subject further: a wider range of impact speeds should be investigated, especially higher velocities (about 10 km/s); different projectile geometries, which affect the degree of projectile fragmentation and in turn the damage to the honeycomb panel, should also be studied; and the critical penetration angle for a projectile obliquely impacting the double honeycomb sandwich panel is expected to be obtained through further experiments.

---

*Source: 1015674-2017-04-16.xml*
# Numerical Simulation of Projectile Oblique Impact on Microspacecraft Structure

**Authors:** Zhiyuan Zhang; Runqiang Chi; Baojun Pang; Gongshun Guan

**Journal:** International Journal of Aerospace Engineering (2017)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2017/1015674
---

## Abstract

In the present study, the microspacecraft bulkhead was simplified to a double honeycomb sandwich panel, and the oblique hypervelocity impact of a projectile on the double honeycomb panel was simulated. The distribution of the debris cloud and the damage of the honeycomb sandwich panels were investigated for incident angles of 60°, 45°, and 30°. The results showed that as the incident angle decreased, the distribution of the debris cloud gradually widened, while the maximum perforation size of the rear face sheet first increased with decreasing incident angle and then decreased. On the other hand, the damage area and the damage degree of the front face sheet of the second honeycomb panel layer increased with decreasing incident angle. Finally, the critical angles of the front and rear face sheets of the honeycomb sandwich panel under oblique hypervelocity impact were obtained.

---

## Body

## 1. Introduction

A large amount of space debris has accumulated in near-Earth space with the increase of human space activities, posing a serious threat to the safe operation of spacecraft in orbit [1, 2]. The honeycomb sandwich panel, made up of face sheets and a honeycomb core bonded together, is commonly used as a structural material for spacecraft bulkheads [3, 4]. Space debris, with an average speed of 10 km/s in Earth orbit, can easily penetrate a honeycomb sandwich panel [1, 3], and the honeycomb sandwich panel of the spacecraft bulkhead is the first structure to suffer the impact of space debris [5, 6]. Many studies have analyzed the damage characteristics induced by space debris impact on single-layer honeycomb sandwich panels [7] through experiments and numerical simulations, considering the effects of size [8], impact velocity [8, 9], materials [10], and the collision limit [11].
However, hypervelocity impact on double honeycomb sandwich panels, which differ from two honeycomb sandwich panel layers bonded together [5], has not yet been reported. If one side of the spacecraft bulkhead is penetrated, the equipment inside and the other side of the bulkhead can also be damaged because of the high speed and large kinetic energy of the space debris. The effects of high-speed impact on the front and rear bulkheads therefore need to be studied.

In view of space debris impact on both sides of the spacecraft bulkhead, the spacecraft was simplified to a double honeycomb sandwich panel structure, which was used to study the breakup of the projectile and the damage of the honeycomb sandwich panel under oblique impact. Finally, the critical incident angle of the single honeycomb sandwich panel is calculated in this paper.

## 2. Model of Simulation

Figure 1 shows the double honeycomb sandwich panel structure of the simplified spacecraft model. The distance between the two honeycomb sandwich panel layers is 500 mm, and the space debris is simplified as a spherical aluminum alloy projectile. Smoothed Particle Hydrodynamics (SPH) is used for the aluminum alloy projectile, which avoids the problem of excessive mesh deformation of the projectile. The finite element (FE) method is used for the honeycomb sandwich panel, with shell elements for the honeycomb core. The SPH particle size is set to 0.2 mm; the grid sizes near the impact point and away from it are set to 0.3 mm and 0.8 mm, respectively. Each honeycomb core cell is matched to the corresponding grid on the panel, and using the model in Figure 2, the nodes of the honeycomb core and panel grids are joined. The process of the projectile impacting the honeycomb sandwich panel at high speed was simulated using AUTODYN 15.0, and the simulation parameters are shown in Table 1. Geometric strain was used as the erosion model, with the value set to 2.

In this model, D is the projectile diameter, h is the face sheet thickness of the honeycomb sandwich panel, L is the side length of the hexagonal honeycomb core cell, H is the height of the honeycomb core, and t is the thickness of the cell wall. The model parameters of the honeycomb sandwich panel in this paper were based on data from a microspacecraft project, applicable to communication, ground remote sensing, interplanetary exploration, scientific research, and so forth.

Table 1 Description of the simulation model.

| | Material | Dimension/mm | EOS | Strength | Failure |
|---|---|---|---|---|---|
| Projectile | Al 2017 | Sphere, D = 5 | Shock | Johnson-Cook | Principal stress |
| Face sheet | Al 5A06 | h = 0.8 | Shock | Johnson-Cook | Plastic strain |
| Honeycomb core | Al 5A06 | L = 4, H = 20, t = 0.025 | Linear | Johnson-Cook | Plastic strain |

Figure 1 Double honeycomb sandwich panel structure diagram.

Figure 2 Honeycomb panel model.

The Johnson-Cook model can describe the nonlinear process of high-speed impact and can be written as

(1) σ = (A + Bε^n)(1 + C ln ε̇*)(1 − T*^m)

in which A, B, n, C, and m are material constants, ε̇* is the ratio of the strain rate to the reference strain rate, T* = (T − Tr)/(Tm − Tr), Tr is room temperature (300 K), and Tm is the melting point of the material. The material parameters used in the numerical simulation are shown in Table 2.

Table 2 Parameters of materials [12].

| | A/MPa | B/MPa | n | C | m | Tm/K | Density/g·cm⁻³ |
|---|---|---|---|---|---|---|---|
| Al 5A06 | 235.4 | 622.3 | 0.58 | 0.0174 | 1.05 | 853 | 2.64 |
| Al 2017 | 249.9 | 426.0 | 0.34 | 0.015 | 1.0 | 775 | 2.79 |

We validated the accuracy of the simulation model against experiments, as shown in Figure 3 [13] and Table 3. The parameters of the projectile and the honeycomb sandwich panel were as given in Table 1, and the impact velocity of the projectile was 1.915 km/s; the perforation diameter of the rear face sheet was about 14 mm in the experiment and about 15 mm in the simulation. The other experiments are listed in Table 3.
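As an illustration, the Johnson-Cook flow stress in (1) can be evaluated with the Table 2 parameters. This is a minimal sketch: the reference strain rate and the example strain, strain rate, and temperature below are assumed values for illustration, not taken from the paper.

```python
import math

def johnson_cook_stress(strain, strain_rate, T, *, A, B, n, C, m,
                        Tm, Tr=300.0, ref_strain_rate=1.0):
    """Johnson-Cook flow stress, (1): (A + B*eps^n)(1 + C*ln(eps_dot*))(1 - T*^m)."""
    T_star = (T - Tr) / (Tm - Tr)                 # homologous temperature
    rate_ratio = strain_rate / ref_strain_rate    # eps_dot* in (1)
    return (A + B * strain**n) * (1.0 + C * math.log(rate_ratio)) * (1.0 - T_star**m)

# Al 5A06 parameters from Table 2 (stresses in MPa)
al5a06 = dict(A=235.4, B=622.3, n=0.58, C=0.0174, m=1.05, Tm=853.0)

# Flow stress at 10% plastic strain, a strain rate of 1e5 s^-1, and 400 K
sigma = johnson_cook_stress(0.10, 1e5, 400.0, **al5a06)
```

At room temperature (T = Tr) the thermal-softening factor equals 1 and the expression reduces to the strain-hardening and strain-rate terms.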
The experimental results agree with the simulation results, which verifies the correctness of the simulation model.

Table 3 Experiment and simulation results of rear face sheet of honeycomb sandwich panel.

| | Projectile diameter/mm | Velocity/km·s⁻¹ | Experiment result/mm | Simulation result/mm | Error |
|---|---|---|---|---|---|
| Test 1 | 5 | 1.915 | 14.1 | 15.3 | 7.8% |
| Test 2 | 8 | 1.7 | 22.5 | 24.6 | 8.5% |
| Test 3 | 5 | 3.065 | 25.4 | 23.9 | 5.9% |

Figure 3 Experiment and simulation results [12]. (a) Experimental result (b) Simulation result

## 3. Projectile Oblique Impact on Structure of Double Honeycomb Sandwich Panel

### 3.1. The Configuration of Debris Cloud

A half-symmetry model was adopted, including 145164 solid elements, 239056 shell elements, and 4072 SPH particles. The projectile velocity was set to 3.07 km/s; speeds of about 3 km/s and below were studied because they lie within the speed range of orbital debris, and the range at or below 3 km/s is an important part of the ballistic limit. Every SPH particle carries physical quantities such as mass and velocity, so the distribution of the debris cloud can be described by analyzing the projectile mass fraction of the debris cloud. The distribution of the debris cloud after the projectile impacted the first layer of the honeycomb sandwich panel is shown in Figure 4. In Figure 4(a), the debris cloud distribution area is divided into three regions: Region I is the part above the front face sheet of the honeycomb sandwich panel, the back-splashed debris cloud; Region II is the interior of the honeycomb sandwich panel, consisting of the radially moving part of the projectile debris cloud and the debris cloud splashed back from the rear face sheet; and Region III is the part below the rear face sheet, the debris cloud that passed through the rear face sheet.

Figure 4 The distribution of the debris cloud at 9.5E-002 ms.
(a) The division of debris cloud distribution area (b) α = 60° (c) α = 45° (d) α = 30°

Figures 4(b), 4(c), and 4(d) show the distribution of the projectile debris cloud for impact angles of 60°, 45°, and 30°, respectively. The debris cloud distribution is obviously influenced by the impact angle. To analyze the distribution accurately, the projectile mass fraction of the debris cloud in each distribution region was obtained, as shown in Table 4. From Figure 4 and Table 4 it can be seen that the fraction in Region III increases gradually as the impact angle decreases. When the impact angle was 30°, the projectile debris cloud was mainly distributed in Region III, and the main movement direction of the debris cloud was vertical.

Table 4 Projectile mass fraction of debris cloud.

| Incident angle | Region I | Region II | Region III |
|---|---|---|---|
| 60° | 18.33% | 33.33% | 48.33% |
| 45° | 4.76% | 23.67% | 71.67% |
| 30° | 1.67% | 18.33% | 80.00% |

### 3.2. Damage of the First Layer of Double Honeycomb Sandwich Panel

The damage of the face sheets and honeycomb core after the projectile impacted the first layer of the double honeycomb sandwich panel is shown in Figure 5. The projectile debris cloud that enters the honeycomb sandwich panel can be divided into two parts: the "main part", which lies between the two straight lines shown in Figure 5, and the "secondary part", which lies between the two curves outside the main part. The secondary part of the debris cloud did not have enough kinetic energy to pass through the panel; it only expanded inside the honeycomb core after splashing back and then penetrated the cell walls. Since penetrating a cell wall consumes kinetic energy, the energy of the debris cloud decreased until, after penetrating a certain number of cell walls, it could no longer penetrate further into the honeycomb core.

Figure 5 Damage of the first layer of honeycomb sandwich panel. (a) Coordinate system (b) α = 60° (c) α = 45° (d) α = 30°

A coordinate system is used to describe the damage degree of the honeycomb panel (Figure 5(a)); the coordinate origin can move along the x-axis. As the projectile impact angle decreases, the projectile velocity component along the x-axis decreases accordingly, and the projectile mass fraction of the expanding debris cloud along the x-axis also decreases. As a result, the damage area of the honeycomb core caused by the debris cloud decreases. It can be seen from Figure 5 that the expansion range of the debris cloud is reduced with decreasing impact angle: at an impact angle of 30° (Figure 5(d)), the damage range of the honeycomb core is reduced significantly compared with 60° (Figure 5(b)) and 45° (Figure 5(c)). Thus, there is an obvious relationship between the damage of the honeycomb sandwich panel and the impact angle.

Figure 6 Second layer of double honeycomb sandwich panels (incidence angle is 45°). (a) The configuration of debris cloud (b) Damage of the second layer of double honeycomb sandwich panel

Different impact angles lead to different perforation shapes and sizes in the face sheets, as shown in Table 5. The perforation shape can be described approximately by an ellipse; the perforation was more elongated at an impact angle of 60° than at 45° and 30°, where the ellipse degenerates toward a circle.

Table 5 The perforation shape of panel (images of the front and rear face sheet perforations at incident angles of 60°, 45°, and 30°).

As shown in Table 5, the largest perforation size of the front face sheet decreases with decreasing impact angle, while the largest perforation size of the rear face sheet first increases and then decreases. The largest perforation of the front face sheet was slightly greater than that of the rear face sheet at an impact angle of 60°, but significantly smaller at 45° and 30°.
This is because the velocity component of z-axis was larger when the impact angle was smaller and more debris cloud particles impact the rear face sheet, resulting in the greater damage in the rear face sheet. ### 3.3. Damage of the Second Layer of Double Honeycomb Sandwich Panel For the double honeycomb sandwich panel structure, the projectile was broken after it penetrated the first layer of the honeycomb sandwich panel, and then the projectile continued to move in the form of debris cloud until it impacted the second layer of honeycomb sandwich panel. The shape of debris cloud was related to the incident angle of the projectile, including the projectile broken partly or completely. The damage degree of rear face sheet was associated with the shape of projectile debris cloud. Because of the size and velocity, the nonsignificant fragment of front face sheet of the first layer of honeycomb sandwich panel was observed, so we ignored the effect of fragment of face sheet on damage in this paper.The debris cloud, which formed after the projectile penetrated into the first layer of the honeycomb sandwich panel, impacted the second layer of honeycomb sandwich panel, where distance to the first layer is 500 mm. The projectile mass fraction of debris cloud which penetrated the first layer of the honeycomb sandwich panel was not much and broken fully (Figure4(b)); the kinetic energy of debris cloud was too small to induce obvious damage on the honeycomb sandwich panel plate when impact angle was 60°. Therefore, damage of the second layer of honeycomb sandwich panel was studied only under the case of the impact angles of 45° and 30° in this paper, as shown in Figures 6 and 7.Figure 7 Second layer of honeycomb sandwich panels (incidence angle is 30°). 
(a) The configuration of debris cloud (b) Damage of the second layer of honeycomb sandwich panelFigure4(c) shows the projectile debris cloud form on the situation that the projectile has penetrated the first layer of honeycomb sandwich panel but did not impact the second layer when the impact angle was 45°. Big pieces did not break in the front of debris cloud, which are the main factors that lead to the perforation on front face sheet and then impacted the second layer of double honeycomb sandwich panel. Most of the scattered debris cloud cannot penetrate the front face sheet of second layer of double honeycomb sandwich panel, but splash after impacted the front face sheet as shown in Figure 6(a). Figure 6(b) shows the damage of the second layer of honeycomb sandwich panel. According to the two perforations in front face sheet, we can know that there were two pieces of debris that impacted the front face sheet and formed punches and smaller piece of debris fully broken after getting through the front face sheet so that it cannot cause damage to the rear face sheet again. Large pieces of debris impact the rear face sheet after penetrating the front face sheet. The kinetic energy of debris cloud was too small to penetrate the back panel at that moment and only craters in the rear face sheet were formed.Distribution and damage of debris cloud after the projectile impact the second layer of honeycomb sandwich panel of double when impact angle was 30° as shown in Figure7. There are three obvious perforations on the rear face sheet. The projectile was further broken when debris cloud penetrated the second layer of honeycomb sandwich panel so that the rear face sheet without obvious damage due to the velocity component of z-axis at the impact angle of 30° is larger than 45°.According to the analysis of the damage of projectile impact double honeycomb sandwich panel, it can be found that the impact damage was closely related to the oblique impact angle. 
The projectile broken degree, the debris cloud distribution, and the damage on each layer of honeycomb sandwich panel were different under different impact angles. Ruining of the first layer of honeycomb sandwich panel was the most serious when projectile oblique impact double honeycomb sandwich panel structure. Debris cloud forms were uncommon after the projectile penetrated the first layer of honeycomb sandwich panel at different impact angles, resulting the different damage shapes on the second layer of honeycomb sandwich panel. Incident angle of space debris impact the spacecraft is also different because the space debris and spacecraft orbit were different in the real space environment. It is terrible that the internal equipment of spacecraft and the other side of the bulkhead will be damaged if the space debris penetrated one side of the bulkhead. Critical penetration angle of the projectile oblique impact single honeycomb sandwich panel was studied in this paper based on using single honeycomb sandwich panel to simulate one side of the spacecraft bulkhead. ## 3.1. The Configuration of Debris Cloud Half symmetry model was adopted, including 145164 solid elements, 239056 shell elements, and 4072 SPH particles. The projectile velocity is set to be 3.07 km/s. 3 km/s (or lower) was studied because it is in the extent of speed of orbital debris, and less than or equal to 3 km/s is an important part of the ballistic limit. Every SPH particle contains various kinds of physical quantities, such as mass and speed, and so forth. The distribution of debris cloud can be described by the analysis of the projectile mass fraction of debris cloud. The distribution of debris cloud after the projectile impacted the first layer of honeycomb sandwich panel was shown in Figure4. In Figure 4(a), the debris cloud distribution area was divided into three regions. Region I was the part above the front face sheet of honeycomb sandwich panel, which was the backwash debris cloud. 
Region II was the interior of the honeycomb sandwich panel, which was made of the radial motion parts of projectile debris cloud and the backwash debris cloud which formed due to influence of the rear face sheet. Region III was the part below rear face sheet, which was the debris cloud part which go through the rear face sheet.Figure 4 The distribution of the debris cloud in9.5E-002 ms. (a) The division of debris cloud distribution area (b) α = 60 ° (c) α = 45 ° (d) α = 30 °Figures4(b), 4(c), and 4(d) show the distribution of debris cloud of projectile when the impact angle was 60°, 45°, and 30°, respectively. The debris cloud distribution was obviously influenced by the impact angles. In order to analyze the distribution of debris cloud accurately, the projectile mass fraction of the debris cloud in different debris cloud distribution areas was obtained as shown in Table 4. From Figure 4 and Table 4, it can be seen that Region III increased gradually with the decrease of the impact angle. The projectile debris cloud was mainly distributed in Region III when the impact angle was 30°, and the main movement direction of the debris cloud was in the vertical direction.Table 4 Projectile mass fraction of debris cloud. Incident angle Region I Region II Region III 60° 18.33% 33.33% 48.33% 45° 4.76% 23.67% 71.67% 30° 1.67% 18.33% 80.00% ## 3.2. Damage of the First Layer of Double Honeycomb Sandwich Panel Damage of the panel and honeycomb core after the projectile impact the first layer of double honeycomb sandwich panel was formed as shown in Figure5. 
The projectile debris cloud that entered the honeycomb sandwich panel can be divided into two parts: the "main part", lying between the two straight lines in Figure 5, and the "secondary part", lying between the two curves outside the main part. The secondary part did not have enough kinetic energy to pass through the rear face sheet; it only expanded inside the honeycomb core after the backwash and penetrated into the cell walls. Since penetrating a cell wall consumes kinetic energy, the energy of this debris decreased until, after passing through a certain number of cell walls, it could no longer penetrate the honeycomb core.

Figure 5: Damage of the first layer of honeycomb sandwich panel. (a) Coordinate system; (b) α = 60°; (c) α = 45°; (d) α = 30°.

A coordinate system was defined to describe the damage to the honeycomb panel (Figure 5(a)); its origin can move along the x-axis. As the impact angle decreased, the projectile velocity component along the x-axis decreased accordingly, and so did the projectile mass fraction of the expanding debris cloud along the x-axis. As a result, the area of honeycomb core damage caused by the debris cloud decreased. It can be seen from Figure 6 that the expansion scope of the debris cloud was reduced with decreasing impact angle. At an impact angle of 30° (Figure 5(d)), the damage range of the honeycomb core was reduced significantly compared with 60° (Figure 5(b)) and 45° (Figure 5(c)). There is thus a clear relationship between the damage of the honeycomb sandwich panel and the impact angle.

Figure 6: Second layer of double honeycomb sandwich panels (incidence angle 45°). (a) The configuration of the debris cloud; (b) damage of the second layer of the double honeycomb sandwich panel.

Different impact angles led to different perforation shapes and sizes in the rear face sheet, as shown in Table 5.
It can be seen that the perforation shape can be approximated by an ellipse. The perforation was closer to an elongated oval at an impact angle of 60° than at 45° and 30°; this is a consequence of the degeneration of the elliptic equations.

Table 5: The perforation shapes of the front and rear face sheets at incident angles of 60°, 45°, and 30°.

As shown in Table 5, the largest perforation size of the front face sheet decreases with decreasing impact angle, while the largest perforation size of the rear face sheet first increases and then decreases. At 60°, the largest perforation of the front face sheet was slightly greater than that of the rear face sheet, but at 45° and 30° the former was significantly smaller than the latter. This is because the z-axis velocity component is larger at smaller impact angles, so more debris cloud particles strike the rear face sheet and cause greater damage there.

## 3.3. Damage of the Second Layer of Double Honeycomb Sandwich Panel

In the double honeycomb sandwich panel structure, the projectile was broken after penetrating the first layer and then continued to move as a debris cloud until it impacted the second layer. The shape of the debris cloud, including whether the projectile was broken partly or completely, depended on the incident angle of the projectile, and the damage to the rear face sheet was associated with the shape of the projectile debris cloud.
Because of their small size and low velocity, the fragments of the front face sheet of the first layer had no significant effect, so their contribution to the damage was ignored in this paper. The debris cloud formed after the projectile penetrated the first layer impacted the second layer of the honeycomb sandwich panel, located 500 mm away. At an impact angle of 60°, the projectile mass fraction of the debris cloud that penetrated the first layer was small and fully broken (Figure 4(b)), and its kinetic energy was too small to cause obvious damage to the second panel. Therefore, damage of the second layer was studied only for impact angles of 45° and 30°, as shown in Figures 6 and 7.

Figure 7: Second layer of honeycomb sandwich panels (incidence angle 30°). (a) The configuration of the debris cloud; (b) damage of the second layer of the honeycomb sandwich panel.

Figure 4(c) shows the projectile debris cloud at 45°, after it had penetrated the first layer but before it impacted the second. Unbroken large pieces at the front of the debris cloud were the main cause of the perforations in the front face sheet of the second layer. Most of the scattered debris cloud could not penetrate the front face sheet of the second layer but splashed after impacting it, as shown in Figure 6(a). Figure 6(b) shows the damage of the second layer of the honeycomb sandwich panel.
From the two perforations in the front face sheet it can be inferred that two pieces of debris impacted the front face sheet and punched through it; the smaller piece broke up fully after passing through the front face sheet and therefore could cause no further damage to the rear face sheet. The larger pieces struck the rear face sheet after penetrating the front one, but by then the kinetic energy of the debris was too small to penetrate it, and only craters were formed in the rear face sheet.

The distribution of the debris cloud and the damage after the projectile impacted the second layer of the double honeycomb sandwich panel at an impact angle of 30° are shown in Figure 7. Three obvious perforations were formed in the front face sheet. Because the z-axis velocity component at 30° is larger than at 45°, the projectile was broken further as the debris cloud penetrated the second layer, so the rear face sheet showed no obvious damage.

From the analysis of the damage caused by the projectile impacting the double honeycomb sandwich panel, it can be found that the impact damage was closely related to the oblique impact angle. The degree of projectile breakup, the debris cloud distribution, and the damage on each layer of the honeycomb sandwich panel all differed with the impact angle. Damage to the first layer was the most serious when the projectile obliquely impacted the double honeycomb sandwich panel structure. The debris cloud took different forms after the projectile penetrated the first layer at different impact angles, resulting in different damage shapes on the second layer. In the real space environment, the incident angle at which space debris impacts a spacecraft also varies, because the orbits of the debris and of the spacecraft differ.
If space debris penetrates one side of the bulkhead, the internal equipment of the spacecraft and the other side of the bulkhead may be severely damaged. The critical penetration angle of a projectile obliquely impacting a single honeycomb sandwich panel was therefore studied, using the single panel to represent one side of the spacecraft bulkhead.

## 4. Critical Penetration Angle of Projectile Oblique Impact on Honeycomb Sandwich Panel

### 4.1. Critical Penetration Angle of the Front Face Sheet

According to the study of the double honeycomb sandwich panel structure, a projectile that penetrates the first layer causes more or less damage on the second layer. Moreover, a single honeycomb sandwich panel is one component of the double structure, so the critical penetration angle of a projectile obliquely impacting a single honeycomb sandwich panel was studied first.

The critical penetration state lies on the boundary between penetration and non-penetration. The critical penetration angle of the front face sheet of a honeycomb sandwich panel was found through a large number of simulation runs organized as a dichotomy (bisection) search. Taking an impact speed of 3 km/s as an example: at 87° the front face sheet was not penetrated, as shown in Figure 8(a); the projectile was not broken, only plastically deformed, and impact craters were formed on the front face sheet without punching through it. At 85°, as shown in Figure 8(b), local plastic deformation and breakup of the projectile occurred and holes appeared in the front face sheet. The critical penetration angle of the front face sheet was therefore 86°.

Figure 8: Critical penetration of the front face sheet.
(a) Impact angle 87°; (b) impact angle 85°.

As can be seen from Figure 9, the projectile was not broken when the impact angle was greater than the critical penetration angle of the front face sheet; this critical angle is therefore also the critical value for projectile breakup. In the following, C denotes the set of impact angles at which the projectile is not broken (see (2)), C̄ the set at which it is broken (see (3)), θ the projectile incident angle, and θ̂ the critical penetration angle of the front face sheet:

(2) C = {θ ∣ θ̂ < θ ≤ 90°}

(3) C̄ = {θ ∣ 0° ≤ θ ≤ θ̂}.

Figure 9: Critical penetration of the rear face sheet. (a) Impact angle 72°; (b) impact angle 70°.

### 4.2. Critical Penetration Angle of the Rear Face Sheet

At an impact speed of 3 km/s, the rear face sheet of the honeycomb sandwich panel was not perforated at an impact angle of 72°, as shown in Figures 9(a) and 10(a). The projectile was broken after impacting the front face sheet; part of the debris cloud splashed back and moved in the horizontal direction, while another part moved into the honeycomb core. The lateral motion of the debris cloud led to rupture and collapse of the honeycomb core.

Figure 10: Damage of the rear face sheet at critical perforation. (a) Impact angle 72°; (b) impact angle 70°.

At 70°, as shown in Figures 9(b) and 10(b), the number of SPH particles entering the honeycomb core and the vertical component of the projectile velocity both increased compared with 72°. Damage to the rear face sheet increased accordingly, and critical perforation of the rear face sheet occurred at this angle.
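The dichotomy search used to bracket the critical angles can be sketched as follows. This is not the authors' code: `penetrates` is a hypothetical stand-in for one full SPH/FE simulation run, assumed monotone (smaller incidence angles penetrate, larger ones do not).

```python
# Sketch of the dichotomy (bisection) search for the critical penetration
# angle. Each call to `penetrates` would, in practice, be a complete
# hydrocode simulation at that impact angle.
def critical_angle(penetrates, lo=0.0, hi=90.0, tol=1.0):
    """Bisect on the impact angle (degrees).

    Invariant assumed: penetrates(lo) is True and penetrates(hi) is False,
    i.e. the sheet is perforated at shallow angles but not at steep ones.
    Returns the midpoint of the final bracket, accurate to about tol/2.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if penetrates(mid):
            lo = mid       # still perforates: critical angle is higher
        else:
            hi = mid       # no perforation: critical angle is lower
    return 0.5 * (lo + hi)

# Toy stand-in: suppose the sheet is perforated below 86 degrees, matching
# the front-face-sheet result in Section 4.1.
theta = critical_angle(lambda a: a < 86.0, lo=80.0, hi=90.0, tol=0.5)
```

Each bisection step halves the bracket, so reaching a 1° resolution from a 90° range needs about seven simulation runs per configuration.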
The critical perforation angle of the rear face sheet was therefore 71°. The ability of a projectile to penetrate the honeycomb sandwich panel differs with its impact speed and diameter. To study the critical perforation angles of the front and rear face sheets under oblique impact at different speeds and diameters, impacts were simulated over the speed range 1.8–3.4 km/s for projectile diameters of 3 mm, 4 mm, 5 mm, and 6 mm, as shown in Figure 11.

Figure 11: Critical perforation angles for various projectile diameters and velocities.

To verify the accuracy of the simulation, the results were compared with the ballistic limit of a sandwich panel. Equation (4) is an example of the generic form of the ballistic limit of a sandwich panel [14]. The particle diameter at the ballistic limit was calculated for a 3 mm projectile diameter, as shown in Table 6; the simulation results showed good agreement with the calculated values:

(4) d_c(v_n) = [((t_w/K_{3S}) (τ/40000)^0.5 + t_b) / (0.6 cos θ · v_n^0.677 · ρ_p^0.5)]^0.947.

In (4), d_c is the particle diameter at the ballistic limit, t_w is the thickness of the rear wall (cm), K_{3S} is a baseline coefficient (1.4 [15]), τ is the rear wall yield stress, t_b is the thickness of the bumper, θ is the incidence angle, v_n is the normal component of velocity, and ρ_p is the projectile density.

Table 6: Particle diameter at the ballistic limit (the particle diameter in the simulation was 3 mm).

| Velocity (km/s) | 1.8 | 2 | 2.2 | 2.4 | 2.6 | 2.8 | 3 |
|---|---|---|---|---|---|---|---|
| Calculated d_c from (4) (mm) | 3.53 | 3.36 | 3.34 | 3.22 | 3.07 | 2.92 | 2.79 |

It can be seen that the critical perforation angles of the front and rear face sheets increase slowly, or remain unchanged, with increasing impact velocity. The critical perforation angle of the rear face sheet increases dramatically with increasing diameter, especially from 3 mm to 4 mm, whereas that of the front face sheet increases only slowly.
## 5. Conclusions

The distribution of the debris cloud and the damage of the double honeycomb sandwich panel under oblique hypervelocity impact were simulated, and the critical penetration angles of the front and rear face sheets of the honeycomb sandwich panel were obtained.
The conclusions are as follows.

(1) The projectile mass fraction of the debris cloud that back-splashed and stayed inside the first layer of the honeycomb sandwich panel, spreading across the honeycomb core, decreased with decreasing impact angle; nevertheless, the damage to the first layer gradually increased. Correspondingly, the largest perforation size of the front face sheet decreased, while the largest perforation size of the rear face sheet first increased and then decreased.

(2) When the impact angle was large, the projectile mass fraction of the debris cloud passing through the first layer was too small to damage the second layer; it increased gradually as the impact angle decreased, so the damage to the front face sheet of the second layer became more and more obvious. The damage degree of the second layer was related to the crushing degree of the projectile after it penetrated the first layer. The projectile does not break when the impact angle is greater than the critical perforation angle of the front face sheet, but breaks progressively when the impact angle is below it.

(3) The critical perforation angles of the front and rear face sheets increase slowly, or remain unchanged, with increasing impact velocity. The critical perforation angle of the rear face sheet increases dramatically with increasing diameter, especially from 3 mm to 4 mm, whereas that of the front face sheet increases only slowly.

This study of oblique hypervelocity impact on a double honeycomb panel examined the distribution of the projectile debris cloud and the damage to the honeycomb panel for a single velocity (3.07 km/s) and three incidence angles (60°, 45°, and 30°).
Future studies are therefore worthwhile: a wider range of speeds should be investigated, especially higher velocities (about 10 km/s); different geometric models, which affect the fragmentation degree of the projectile and in turn the damage degree of the honeycomb panel, should be studied; and further experiments are expected to yield the critical penetration angle for projectile oblique impact on a double honeycomb sandwich panel.

---

*Source: 1015674-2017-04-16.xml*
2017
# Potential Novel Biomarkers of Obstructive Nephropathy in Children with Hydronephrosis **Authors:** Beata Bieniaś; Przemysław Sikora **Journal:** Disease Markers (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1015726 --- ## Abstract Obstructive nephropathy (ON) secondary to the congenital hydronephrosis (HN) is one of the most common causes of chronic kidney disease in children. Neither currently used imaging techniques nor conventional laboratory parameters are sufficient to assess the onset and outcome of this condition; hence, there is a need to prove the usefulness of newly discovered biomarkers of kidney injury in this respect. The purpose of the study was to assess the urinary excretion of alpha-GST, pi-GST, NGAL, and KIM-1 and the serum level of NGAL in children with congenital unilateral hydronephrosis secondary to ureteropelvic junction obstruction. The results were evaluated in relation to severity of HN, the presence of ON, relative function of an obstructed kidney, and the presence of proteinuria. The study comprised 45 children with HN of different grades and 21 healthy controls. Urinary and serum concentrations of biomarkers were measured using specific ELISA kits. Urinary biomarker excretions were expressed as a biomarker/creatinine (Cr) ratio. Patients with the highest grades of HN showed significantly increased values of all measured biomarkers, whereas those with the lowest grades of HN displayed only significant elevation of urinary alpha-GST and the serum NGAL. Urinary NGAL positively correlated with percentage loss of relative function of an obstructed kidney in renal scintigraphy. In patients with proteinuria, significantly higher urinary alpha-GST excretion was revealed as compared to those without this symptom. The ROC curve analysis showed the best diagnostic profile for urinary alpha-GST/Cr and NGAL/Cr ratios in the detection of ON. 
In conclusion, the results of the study showed that urinary alpha-GST and NGAL are promising biomarkers of ON. The ambiguous results for the remaining biomarkers, i.e., urinary pi-GST and KIM-1 and the serum NGAL level, may be related to the relatively small study group; their utility in the early diagnosis of ON should be reevaluated.

---

## Body

## 1. Introduction

Obstructive nephropathy (ON) is a chronic inflammatory process characterized by renal scarring resulting from obstructive uropathy (hydronephrosis). Scarring of an obstructed kidney may lead to impairment of its function. ON secondary to congenital hydronephrosis (HN) is one of the most common causes of chronic kidney disease (CKD) in children [1–3]. Ureteropelvic junction obstruction (UPJO) has been identified as the main cause of significant HN [3].

The etiopathogenesis of ON is complex, but primary and secondary injuries to the renal tubular epithelial cells are believed to be especially important [4]. They lead to tubulointerstitial inflammation, tubular atrophy, and fibrosis. Unfortunately, neither currently used imaging techniques nor conventional laboratory parameters are sufficient to assess the onset and outcome of this condition. In recent years, several biomarkers of tubulointerstitial fibrosis have been discovered and studied in different renal diseases. Some of them, like neutrophil gelatinase-associated lipocalin (NGAL) and kidney injury molecule-1 (KIM-1), have been tested with uncertain results in patients with ON, whereas other biomarkers, like glutathione S-transferases (GSTs), are still awaiting evaluation. To provide new insight into this issue, we studied the usefulness of GSTs, NGAL, and KIM-1 as potential biomarkers of ON.

## 2. Purpose of the Study

The purpose of the study was to assess the urinary excretion of alpha-GST, pi-GST, NGAL, and KIM-1 and the serum level of NGAL in children with congenital unilateral hydronephrosis secondary to UPJO.
These biomarkers were evaluated in relation to the severity of HN, the presence of ON, the relative function of the obstructed kidney, and the presence of proteinuria.

## 3. Patients, Material, and Methods

Baseline characteristics of patients and controls are presented in Table 1. The study comprised 45 children (31 boys and 14 girls) aged 2–17 years (median 11.0 years) with congenital unilateral HN due to UPJO, diagnosed and treated in the Department of Pediatric Nephrology, Children's University Hospital in Lublin, Poland. In 25 children, the HN was diagnosed prenatally. The patients were divided into three subgroups A–C according to the Onen HN ultrasound grading system [5]: stage 1, dilatation of the renal pelvis alone; stage 2, stage 1 plus caliceal dilatation; stage 3, stage 2 plus mild-to-moderate (<1/2) renal parenchymal loss; and stage 4, stage 3 plus severe (>1/2) renal parenchymal loss (a cyst-like kidney with no visually significant renal parenchyma). 25/45 (55.6%) children with HN grades 3 and 4 were classified into group A, 11/45 (24.4%) with HN grade 2 into group B, and 9/45 (20%) with HN grade 1 into group C. To detect ON, defined as renal parenchymal defects with decreased relative function of the obstructed kidney, dynamic renal scintigraphy using technetium-99m-L,L-ethylenedicysteine was performed. 28/45 (62.2%) patients, predominantly from groups A and B (21 and 7, respectively), showed features of ON, with the relative function of the obstructed kidney impaired to 15–35%.

Table 1: Characteristics of study and control groups.
| Parameter | Study group, median (range) or number | Control group |
|---|---|---|
| Number of patients | 45 | 21 |
| Gender (male/female) | 31/14 | 16/5 |
| Age (years) | 11.0 (2–17) | 12.3 (3–17) |
| GFR (ml/min/1.73 m²) | 126.3 (97–162) | 139 (102–145) |
| Number of patients in groups (A/B/C) | 25/11/9 | — |
| Number of patients with obstructive nephropathy | 28/45 (62.2%) | — |
| Number of patients with proteinuria | 10/45 (22.2%) | — |
| Protein/creatinine ratio (mg/mg) | 0.24 (0.21–0.4) | 0.09 (0–0.15) |

10/45 (22.2%) patients from group A, including 7 with ON, showed pathological proteinuria (urinary protein/creatinine ratio 0.21–0.4 mg/mg, median 0.24 mg/mg). All children had a normal estimated glomerular filtration rate (eGFR) > 90 ml/min/1.73 m², calculated by the Schwartz formula: 0.55 × body height (cm)/serum creatinine level (mg/dl) [6]. Twenty-one age- and sex-matched healthy children served as controls; they had been referred to the outpatient clinic of the Children's University Hospital with suspicion of renal diseases that were subsequently excluded.

To evaluate the designed laboratory parameters, midstream first-morning urine and serum samples were collected from each study participant on the same day. Serum and urinary creatinine concentrations were determined by Jaffe's test, and standard laboratory techniques were used to assess the magnitude of proteinuria. Urinary alpha-GST (USCNK, China), urinary pi-GST (Immundiagnostik AG, Germany), urinary KIM-1, and urinary and serum NGAL (USCNK, China) concentrations were measured using specific enzyme-linked immunosorbent assay (ELISA) kits after prior preparation of the urine and serum samples following the manufacturers' instructions. Urinary biomarker excretions were expressed as a biomarker/creatinine ratio in nanograms per milligram of creatinine (ng/mg Cr).
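The two normalizations used above, the Schwartz eGFR estimate and the creatinine-normalized biomarker ratio, can be sketched as follows. This is an illustrative sketch, not the study's code; the unit convention for urinary creatinine (mg/dl, converted to mg/ml) is an assumption.

```python
# Sketch: Schwartz eGFR and biomarker/creatinine normalization, as described
# in the Methods. Example values are invented for illustration.
def egfr_schwartz(height_cm: float, serum_cr_mg_dl: float) -> float:
    """Schwartz formula: eGFR (ml/min/1.73 m^2) = 0.55 * height / serum Cr."""
    return 0.55 * height_cm / serum_cr_mg_dl

def biomarker_cr_ratio(biomarker_ng_ml: float, urine_cr_mg_dl: float) -> float:
    """Biomarker/creatinine ratio (ng/mg Cr), with urinary creatinine
    converted from mg/dl to mg/ml (divide by 100)."""
    return biomarker_ng_ml / (urine_cr_mg_dl / 100.0)

# A 140 cm child with serum creatinine 0.6 mg/dl: eGFR within the study range.
egfr = egfr_schwartz(140.0, 0.6)
# 1.5 ng/ml urinary NGAL against 100 mg/dl urinary creatinine: 1.5 ng/mg Cr.
ratio = biomarker_cr_ratio(1.5, 100.0)
```

Normalizing to creatinine corrects for urine dilution, which is why the study reports all urinary biomarkers as ng/mg Cr rather than raw concentrations.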
Urinary protein excretion was expressed as a protein/creatinine ratio in milligrams per milligram of creatinine (mg/mg Cr). The differences in urinary alpha-GST, pi-GST, and KIM-1 and in urinary and serum NGAL between patients with ON and patients with HN without ON were assessed; additionally, the measured biomarkers were compared between patients with and without proteinuria.

The statistical analysis was performed using STATISTICA 12.5. Differences between groups were assessed using the nonparametric Mann-Whitney U test, and correlation coefficients were calculated using the Spearman test. A p value ≤ 0.05 was considered significant. The clinical utility and significance of the measured parameters as biomarkers of ON were evaluated by ROC curve analyses.

## 4. Ethics Statement

The study was approved by the Ethics Committee of the Medical University of Lublin. Informed consent was obtained from all individual participants included in the study, either from the patients or from their parents or legal guardians.

## 5. Results

### 5.1. Biomarker Measurements

The results for the assessed biomarkers in the study groups as compared to controls are presented in Tables 2–4.

Table 2: Urinary alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr excretion and serum NGAL levels in study group A and the control group.

| | Study group A, median (range) | Control group, median (range) | Statistical analysis |
|---|---|---|---|
| Alpha-GST/Cr (ng/mg) | 4.51 (0.54–17.3) | 1.11 (0.26–3.5) | p = 0.0001 |
| pi-GST/Cr (ng/mg) | 30.4 (17.5–24.9) | 14.6 (7.4–28.5) | p = 0.03 |
| uNGAL/Cr (ng/mg) | 1.73 (0.17–10.0) | 0.83 (0.04–9.5) | p = 0.02 |
| sNGAL (ng/ml) | 59.9 (45.2–85.3) | 4.8 (2.1–10.4) | p = 0.00003 |
| KIM-1/Cr (ng/mg) | 2.4 (0.2–5.1) | 0.28 (0.06–1.06) | p = 0.02 |

uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.

Table 3: Urinary alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr excretion and serum NGAL levels in study group B and the control group.
| | Study group B, median (range) | Control group, median (range) | Statistical analysis |
|---|---|---|---|
| Alpha-GST/Cr (ng/mg) | 6.17 (0.8–11.74) | 1.11 (0.26–3.5) | p = 0.008 |
| pi-GST/Cr (ng/mg) | 17.9 (7.8–20.83) | 14.6 (7.4–28.5) | NS |
| uNGAL/Cr (ng/mg) | 1.41 (0.3–5.2) | 0.83 (0.04–9.5) | NS |
| sNGAL (ng/ml) | 58.5 (13.4–73.8) | 4.8 (2.1–10.4) | p = 0.001 |
| KIM-1/Cr (ng/mg) | 0.58 (0.2–0.96) | 0.28 (0.06–1.06) | p = 0.02 |

uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.

Table 4: Urinary alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr excretion and serum NGAL levels in study group C and the control group.

| | Study group C, median (range) | Control group, median (range) | Statistical analysis |
|---|---|---|---|
| Alpha-GST/Cr (ng/mg) | 4.73 (0.81–11.7) | 1.11 (0.26–3.5) | p = 0.008 |
| pi-GST/Cr (ng/mg) | 16.3 (7.6–23.1) | 14.6 (7.4–28.5) | NS |
| uNGAL/Cr (ng/mg) | 0.51 (0.07–9.7) | 0.83 (0.04–9.5) | NS |
| sNGAL (ng/ml) | 33.6 (8.3–68.4) | 4.8 (2.1–10.4) | p = 0.001 |
| KIM-1/Cr (ng/mg) | 0.68 (0.2–1.3) | 0.28 (0.06–1.06) | NS |

uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.

In group A, the median urinary alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr ratios and the serum NGAL level were all significantly higher than in controls (p < 0.05) (Table 2). In group B, the median urinary alpha-GST/Cr and KIM-1/Cr ratios and the serum NGAL level were significantly higher than in controls (p < 0.05), with no significant differences in the urinary pi-GST/Cr and NGAL/Cr ratios (Table 3). In group C, significant elevation of the median urinary alpha-GST/Cr ratio and of the serum NGAL level was found (p < 0.05); the urinary pi-GST/Cr, NGAL/Cr, and KIM-1/Cr ratios did not differ significantly from controls (Table 4).

Patients with ON had significantly higher median urinary alpha-GST/Cr and NGAL/Cr ratios than those without ON (p = 0.03 and p = 0.01, respectively) (Figure 1).
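The group comparisons behind these p values use the nonparametric Mann-Whitney U test named in the Methods. A pure-stdlib sketch (normal approximation, no tie correction; the study itself used STATISTICA, and the data below are invented for illustration):

```python
# Sketch of the Mann-Whitney U test with the normal approximation.
import math

def mann_whitney_u(x, y):
    """Return (U1, two-sided p) for samples x and y."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = {}
    j = 0
    while j < len(pooled):  # assign average ranks over tied values
        k = j
        while k + 1 < len(pooled) and pooled[k + 1][0] == pooled[j][0]:
            k += 1
        avg = (j + k) / 2 + 1
        for m in range(j, k + 1):
            ranks[pooled[m][1]] = avg
        j = k + 1
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[i] for i in range(n1))       # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

# Invented toy data: well-separated "ON" vs. "HN without ON" biomarker ratios.
u, p = mann_whitney_u([4.5, 5.1, 6.2, 4.8, 5.5], [1.1, 0.9, 1.4, 1.2, 0.8])
```

In practice `scipy.stats.mannwhitneyu` would be used instead; the hand-rolled version is shown only to make the rank-based logic explicit.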
There were no significant differences in the urinary pi-GST/Cr and KIM-1/Cr ratios or in the serum NGAL level.

Figure 1: Urinary alpha-GST (a), pi-GST (b), NGAL (c, d), and KIM-1 (e) ratios in patients with obstructive nephropathy (ON), patients with hydronephrosis without obstructive nephropathy (HN without ON), and controls.

In addition, the urinary NGAL/Cr ratio correlated positively with the percentage loss of relative function of the obstructed kidney (r = 0.5, p < 0.05) (Figure 2). The latter did not correlate significantly with the urinary alpha-GST/Cr, pi-GST/Cr, or KIM-1/Cr ratios or with the serum NGAL level.

Figure 2: Correlation between the urinary NGAL/Cr ratio and percentage loss of relative function of an obstructed kidney in dynamic renal scintigraphy.

In patients with proteinuria, only the median urinary alpha-GST/Cr ratio was significantly higher than in those without this symptom (p = 0.02) (Figure 3).

Figure 3: Urinary alpha-GST/Cr ratio in hydronephrotic patients (HN) with and without proteinuria.

### 5.2. ROC Analysis

The analysis showed the best diagnostic profiles in the detection of ON for the urinary alpha-GST/Cr ratio (area under the curve (AUC) of 0.75, optimal cut-off value of 0.098 ng/mg, sensitivity 84.6%, specificity 69.2%) and the urinary NGAL/Cr ratio (AUC of 0.805, optimal cut-off value of 0.08 ng/mg, sensitivity 78.6%, specificity 58.3%). The AUC for the serum NGAL level was 0.6, with an optimal cut-off value of 0.145 ng/ml, sensitivity of 100%, and specificity of 12.5%.
The urinary pi-GST/Cr ratio was characterized by an AUC of 0.574, an optimal cut-off value of 0.103 ng/mg Cr, sensitivity of 92.3%, and specificity of 7.7%, and the urinary KIM-1/Cr ratio by an AUC of 0.487, an optimal cut-off value of 0.119 ng/mg Cr, sensitivity of 88.9%, and specificity of 16.7% (Figures 4–8).

Figure 4: ROC analysis for the urinary alpha-GST/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with ON vs. controls: AUC of 0.902, optimal cut-off value of 0.046 ng/mg Cr, sensitivity of 81.8%, specificity of 84.6%. (b) Patients with ON vs. patients with hydronephrosis without ON (HN without ON): AUC of 0.750, optimal cut-off value of 0.098 ng/mg Cr, sensitivity of 84.6%, specificity of 69.2%.

Figure 5: ROC analysis for the urinary pi-GST/Cr ratio in the detection of ON. (a) Patients with ON vs. controls: AUC of 0.3, optimal cut-off value of 0.082 ng/mg Cr, sensitivity of 92.3%, specificity of <1%. (b) Patients with ON vs. patients with HN without ON: AUC of 0.574, optimal cut-off value of 0.103 ng/mg Cr, sensitivity of 92.3%, specificity of 7.7%.

Figure 6: ROC analysis for the serum NGAL level in the detection of ON. (a) Patients with ON vs. controls: AUC of 1.0, optimal cut-off value of 0.0 ng/ml, sensitivity of 100%, specificity of 100%. (b) Patients with ON vs. patients with HN without ON: AUC of 0.6, optimal cut-off value of 0.145 ng/ml, sensitivity of 100%, specificity of 12.5%.

Figure 7: ROC analysis for the urinary NGAL/Cr ratio in the detection of ON. (a) Patients with ON vs. controls.
AUC of 0.663, an optimal cut-off value of 0.091 ng/mg Cr, sensitivity of 39.3%, specificity of 58.3%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON): AUC of 0.805, an optimal cut-off value of 0.079 ng/mg Cr, sensitivity of 78.6%, specificity of 58.3%.

Figure 8: ROC analysis for the urinary KIM-1/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with ON vs. controls: AUC of 0.653, optimal cut-off value of 0.084 ng/mg Cr, sensitivity of 55.6%, specificity of 69.6%. (b) Patients with ON vs. patients with hydronephrosis without ON: AUC of 0.487, optimal cut-off value of 0.119 ng/mg Cr, sensitivity of 88.9%, specificity of 16.7%.

## 6. Discussion

GST is a cytosolic enzyme whose isoforms alpha and pi (alpha-GST, pi-GST) are typical of the human kidney [7]. Alpha-GST is expressed in proximal tubular epithelial cells, whereas pi-GST is characteristic of distal tubular epithelial cells. Both isoforms are excessively released from injured tubular epithelial cells into the urine and were recently proposed as promising biomarkers of tubulointerstitial fibrosis in patients with proteinuric kidney disease [8, 9]. To the best of our knowledge, no studies on urinary GST excretion and its usefulness as a biomarker of ON in patients with HN have been reported so far. In our study, all patients with HN were characterized by significantly higher urinary alpha-GST excretion in comparison with controls. In addition, urinary alpha-GST excretion was significantly higher in children with HN and ON as compared to those without ON. Similarly, our patients with proteinuria displayed significantly higher urinary alpha-GST excretion than those without this symptom.
The ROC curve analysis showed a very good diagnostic profile for the urinary alpha-GST/Cr ratio in the detection of ON.

In our study, urinary pi-GST excretion was significantly higher in patients with grades 3 and 4 HN as compared to controls, but the ROC curve analysis did not confirm the clinical utility of the urinary pi-GST/Cr ratio in the detection of ON.

NGAL is considered to be another sensitive and early biomarker of tubulointerstitial fibrosis. It is a small (25 kD) protein, locally synthesized in renal tubular epithelial cells [10] and released into urine. Initially, NGAL was thought to be a biomarker of acute kidney injury (AKI) [11], but recent studies demonstrated its increased urinary excretion also in patients with various chronic nephropathies. Furthermore, a correlation between urinary NGAL excretion and the severity of local inflammation and kidney function was observed [12–15]. NGAL may also be released into the circulation from damaged renal tubular epithelial cells. Its elevated serum level is suggested to be a risk factor for CKD progression [16–19]. Our study also showed that urinary NGAL excretion may be a potential biomarker of ON in patients with HN due to UPJO [18–22]. We found that urinary NGAL excretion was significantly higher in patients with HN and ON as compared to patients with HN without ON. In our study, urinary NGAL excretion positively correlated with the deterioration of relative function of an obstructed kidney in dynamic renal scintigraphy. Moreover, similar to the urinary alpha-GST/Cr ratio, the ROC curve analysis showed a very good diagnostic profile for the urinary NGAL/Cr ratio in the detection of ON. Interestingly, Gerber et al. [23] did not find significant differences in urinary NGAL excretion between patients with UPJO and controls.
Nevertheless, their results might have been influenced by the relatively small number of cases.

In our study, in contrast to urinary NGAL excretion, the serum NGAL level was significantly higher in all hydronephrotic children in comparison with controls. Unfortunately, in the ROC curve analysis, the diagnostic profile for the serum NGAL level in the detection of ON was not as good as those for the urinary NGAL/Cr and alpha-GST/Cr ratios.

The transmembrane renal tubular epithelial cell glycoprotein KIM-1 is the most recently recognized biomarker of tubulointerstitial fibrosis. Its physiological role is still unclear, but it is markedly upregulated in proximal tubular epithelial cells in experimental and clinical conditions associated with kidney damage [24]. Its elevated urinary excretion was observed both in AKI [25, 26] and in various chronic kidney diseases [12, 27, 28]. KIM-1 was also suggested to be an indicator of the conversion of AKI to CKD [29]. Vaidya et al. [28] showed that urinary KIM-1 excretion reflected the severity of tubulointerstitial fibrosis better than the traditional biomarker N-acetyl-β-D-glucosaminidase.

The usefulness of urinary KIM-1 excretion as a new early biomarker of ON in patients with HN was reported by several recent studies [21, 22, 25, 29, 30]. However, in the study by Noyan et al. [20], hydronephrotic children with kidney dysfunction were not characterized by increased urinary KIM-1 excretion. Similarly, Gerber et al. [23] did not observe elevated urinary KIM-1 excretion in patients with UPJO. In our study, significantly higher urinary KIM-1 excretion was noted in patients with grades 2–4 HN but not in those with ON or in those with proteinuria. The ROC curve analysis did not show a sufficient diagnostic profile for the urinary KIM-1/Cr ratio in the detection of ON.

## 7. Conclusions

Our results suggest that urinary alpha-GST and NGAL are promising biomarkers of ON.
It is conceivable that the ambiguous results regarding the remaining biomarkers, i.e., urinary pi-GST and KIM-1 and the serum NGAL level, may be related to the relatively small study group. Therefore, their utility in an early diagnosis of ON should be reevaluated by more extended investigations.

---

*Source: 1015726-2018-09-13.xml*
# Potential Novel Biomarkers of Obstructive Nephropathy in Children with Hydronephrosis

**Authors:** Beata Bieniaś; Przemysław Sikora

**Journal:** Disease Markers (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1015726
---

## Abstract

Obstructive nephropathy (ON) secondary to congenital hydronephrosis (HN) is one of the most common causes of chronic kidney disease in children. Neither currently used imaging techniques nor conventional laboratory parameters are sufficient to assess the onset and outcome of this condition; hence, there is a need to prove the usefulness of newly discovered biomarkers of kidney injury in this respect. The purpose of the study was to assess the urinary excretion of alpha-GST, pi-GST, NGAL, and KIM-1 and the serum level of NGAL in children with congenital unilateral hydronephrosis secondary to ureteropelvic junction obstruction. The results were evaluated in relation to the severity of HN, the presence of ON, the relative function of an obstructed kidney, and the presence of proteinuria. The study comprised 45 children with HN of different grades and 21 healthy controls. Urinary and serum concentrations of biomarkers were measured using specific ELISA kits. Urinary biomarker excretions were expressed as a biomarker/creatinine (Cr) ratio. Patients with the highest grades of HN showed significantly increased values of all measured biomarkers, whereas those with the lowest grades of HN displayed only significant elevation of urinary alpha-GST and serum NGAL. Urinary NGAL positively correlated with the percentage loss of relative function of an obstructed kidney in renal scintigraphy. In patients with proteinuria, significantly higher urinary alpha-GST excretion was revealed as compared to those without this symptom. The ROC curve analysis showed the best diagnostic profile for the urinary alpha-GST/Cr and NGAL/Cr ratios in the detection of ON. In conclusion, the results of the study showed that urinary alpha-GST and NGAL are promising biomarkers of ON. The ambiguous results of the remaining biomarkers, i.e., urinary pi-GST and KIM-1 and the serum NGAL level, may be related to the relatively small study group. Their utility in an early diagnosis of ON should be reevaluated.
---

## Body

## 1. Introduction

Obstructive nephropathy (ON) is a chronic inflammatory process characterized by renal scarring resulting from obstructive uropathy (hydronephrosis). Scarring of an obstructed kidney may lead to impairment of its function.

ON secondary to congenital hydronephrosis (HN) is one of the most common causes of chronic kidney disease (CKD) in children [1–3]. Ureteropelvic junction obstruction (UPJO) has been revealed as the main cause of significant HN [3].

The etiopathogenesis of ON is complex, but the primary and secondary injuries to the renal tubular epithelial cells are believed to be especially important [4]. They lead to tubulointerstitial inflammation, tubular atrophy, and fibrosis. Unfortunately, neither currently used imaging techniques nor conventional laboratory parameters are sufficient to assess the onset and outcome of this condition. In recent years, several biomarkers of tubulointerstitial fibrosis have been discovered and studied in different renal diseases. Some of them, like neutrophil gelatinase-associated lipocalin (NGAL) and kidney injury molecule-1 (KIM-1), have been tested with uncertain results in patients with ON, whereas other biomarkers, like glutathione S-transferases (GSTs), are still waiting for evaluation.

To provide a new insight into this issue, we studied the usefulness of GSTs, NGAL, and KIM-1 as potential biomarkers of ON.

## 2. Purpose of the Study

The purpose of the study was to assess the urinary excretion of alpha-GST, pi-GST, NGAL, and KIM-1 and the serum level of NGAL in children with congenital unilateral hydronephrosis secondary to UPJO. These biomarkers were evaluated in relation to the severity of HN, the presence of ON, the relative function of an obstructed kidney, and the presence of proteinuria.

## 3. Patients, Material, and Methods

Baseline characteristics of patients and controls are presented in Table 1.
The study comprised 45 children (31 boys and 14 girls) aged 2–17 years (median = 11.0 years) with congenital unilateral HN due to UPJO, diagnosed and treated in the Department of Pediatric Nephrology, Children’s University Hospital in Lublin, Poland. In 25 children, the HN was diagnosed prenatally. The patients were divided into three subgroups A–C according to the Onen HN ultrasound grading system [5]:

- stage 1: dilatation of the renal pelvis alone;
- stage 2: stage 1 plus caliceal dilatation;
- stage 3: stage 2 plus <1/2 (mild-to-moderate) renal parenchymal loss;
- stage 4: stage 3 plus >1/2 (severe) renal parenchymal loss (cyst-like kidney with no visually significant renal parenchyma).

25/45 (55.6%) children with HN grades 3 and 4 were classified into group A, 11/45 (24.4%) with HN grade 2 into group B, and 9/45 (20%) with HN grade 1 into group C. To detect ON, defined as renal parenchymal defects with decreased relative function of an obstructed kidney, a dynamic renal scintigraphy using technetium-99m-L,L-ethylenedicysteine was performed. 28/45 (62.2%) patients, predominantly from groups A and B (21 and 7, respectively), showed features of ON with impaired relative function of an obstructed kidney from 15 to 35%.

Table 1 Characteristics of study and control groups.
| Parameter | Study group median (range) & number of patients | Control group |
|---|---|---|
| Number of patients | 45 | 21 |
| Gender (male/female) | 31/14 | 16/5 |
| Age (years) | 11.0 (2–17) | 12.3 (3–17) |
| GFR (ml/min/1.73 m²) | 126.3 (97–162) | 139 (102–145) |
| Number of patients in groups (A/B/C) | 25/11/9 | — |
| Number of patients with obstructive nephropathy | 28/45 (62.2%) | — |
| Number of patients with proteinuria | 10/45 (22.2%) | — |
| Protein/creatinine ratio (mg/mg) | 0.24 (0.21–0.4) | 0.09 (0–0.15) |

10/45 (22.2%) patients from group A, including 7 with ON, disclosed pathological proteinuria (urinary protein/creatinine ratio: 0.21–0.4 mg/mg, median: 0.24 mg/mg).

All children had a normal estimated glomerular filtration rate (eGFR) > 90 ml/min/1.73 m², calculated by the Schwartz formula: 0.55 × body height (cm)/serum creatinine level (mg/dl) [6]. The age- and sex-matched 21 healthy children were controls. They were referred to the outpatient clinic, Children’s University Hospital, with suspicion of renal diseases that were excluded.

To evaluate the designed laboratory parameters, midstream first morning urine and serum samples were collected from each study participant on the same day. Serum and urinary creatinine concentrations were determined by Jaffe’s test. Standard laboratory techniques were used to assess the magnitude of proteinuria.

Urinary alpha-GST (USCNK, China), urinary pi-GST (Immundiagnostik AG, Germany), urinary KIM-1, and urinary and serum NGAL (USCNK, China) concentrations were measured using specific enzyme-linked immunosorbent assay (ELISA) kits after prior preparation of urine and serum samples following the manufacturers’ instructions.

The urinary biomarker excretions were expressed as a biomarker/creatinine ratio in nanograms per milligram of creatinine (ng/mg Cr).
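The Schwartz eGFR estimate and the creatinine normalization described above are simple arithmetic; a minimal sketch (the numeric values in the example are hypothetical, not study data):

```python
def schwartz_egfr(height_cm: float, serum_creatinine_mg_dl: float) -> float:
    """Estimated GFR (ml/min/1.73 m^2) by the Schwartz formula used in the paper:
    0.55 x body height (cm) / serum creatinine (mg/dl)."""
    return 0.55 * height_cm / serum_creatinine_mg_dl

def biomarker_per_cr(biomarker_ng_ml: float, urine_creatinine_mg_ml: float) -> float:
    """Urinary biomarker excretion normalized to urinary creatinine (ng/mg Cr),
    which corrects spot-urine concentrations for urine dilution."""
    return biomarker_ng_ml / urine_creatinine_mg_ml

# Hypothetical child: 140 cm tall, serum creatinine 0.6 mg/dl
egfr = schwartz_egfr(140, 0.6)  # ~128 ml/min/1.73 m^2, above the 90 threshold for normal
ratio = biomarker_per_cr(biomarker_ng_ml=1.8, urine_creatinine_mg_ml=1.2)  # 1.5 ng/mg Cr
```

Dividing by urinary creatinine is what lets spot samples be compared across patients, since creatinine excretion is roughly constant per unit muscle mass.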
The urinary protein excretion was expressed as a protein/creatinine ratio in milligrams per milligram of creatinine (mg/mg Cr).

The differences in urinary alpha-GST, urinary pi-GST, urinary KIM-1, and urinary and serum NGAL between patients with ON and patients with HN without ON were assessed. Additionally, a comparison of the measured biomarkers between patients with proteinuria and patients with HN without proteinuria was performed.

The statistical analysis was performed using STATISTICA 12.5. Differences between groups were assessed using a nonparametric Mann-Whitney U test, and correlation coefficients were calculated using a Spearman test. A p value ≤ 0.05 was considered significant.

The evaluation of the clinical utility and significance of the measured parameters as biomarkers of ON was performed by ROC curve analyses.

## 4. Ethics Statement

The study was approved by the Ethics Committee of the Medical University of Lublin.

Informed consent was obtained from all individual participants included in the study, either the patients or the parents or legal guardians.

## 5. Results

### 5.1. Biomarker Measurements

The results of the assessed biomarkers in the study groups as compared to controls are presented in Tables 2–4.

Table 2 The results of the urinary excretion of alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr and serum NGAL levels in the study group A and control group.

| Parameter | Study group A median (range) | Control group median (range) | Statistical analysis |
|---|---|---|---|
| Alpha-GST/Cr (ng/mg) | 4.51 (0.54–17.3) | 1.11 (0.26–3.5) | p=0.0001 |
| pi-GST/Cr (ng/mg) | 30.4 (17.5–24.9) | 14.6 (7.4–28.5) | p=0.03 |
| uNGAL/Cr (ng/mg) | 1.73 (0.17–10.0) | 0.83 (0.04–9.5) | p=0.02 |
| sNGAL (ng/ml) | 59.9 (45.2–85.3) | 4.8 (2.1–10.4) | p=0.00003 |
| KIM-1/Cr (ng/mg) | 2.4 (0.2–5.1) | 0.28 (0.06–1.06) | p=0.02 |

uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.

Table 3 The results of the urinary excretion of alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr and serum NGAL levels in the study group B and control group.
| Parameter | Study group B median (range) | Control group median (range) | Statistical analysis |
|---|---|---|---|
| Alpha-GST/Cr (ng/mg) | 6.17 (0.8–11.74) | 1.11 (0.26–3.5) | p=0.008 |
| pi-GST/Cr (ng/mg) | 17.9 (7.8–20.83) | 14.6 (7.4–28.5) | NS |
| uNGAL/Cr (ng/mg) | 1.41 (0.3–5.2) | 0.83 (0.04–9.5) | NS |
| sNGAL (ng/ml) | 58.5 (13.4–73.8) | 4.8 (2.1–10.4) | p=0.001 |
| KIM-1/Cr (ng/mg) | 0.58 (0.2–0.96) | 0.28 (0.06–1.06) | p=0.02 |

uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.

Table 4 The results of the urinary excretion of alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr and serum NGAL levels in the study group C and control group.

| Parameter | Study group C median (range) | Control group median (range) | Statistical analysis |
|---|---|---|---|
| Alpha-GST/Cr (ng/mg) | 4.73 (0.81–11.7) | 1.11 (0.26–3.5) | p=0.008 |
| pi-GST/Cr (ng/mg) | 16.3 (7.6–23.1) | 14.6 (7.4–28.5) | NS |
| uNGAL/Cr (ng/mg) | 0.51 (0.07–9.7) | 0.83 (0.04–9.5) | NS |
| sNGAL (ng/ml) | 33.6 (8.3–68.4) | 4.8 (2.1–10.4) | p=0.001 |
| KIM-1/Cr (ng/mg) | 0.68 (0.2–1.3) | 0.28 (0.06–1.06) | NS |

uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.

In group A, the median urinary alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr ratios and the serum NGAL level were significantly higher than those in controls (p<0.05) (Table 2).

In group B, the median urinary alpha-GST/Cr and KIM-1/Cr ratios and the serum NGAL level were significantly higher in comparison with controls (p<0.05). No significant differences in urinary pi-GST/Cr and NGAL/Cr ratios between patients from group B and controls were observed (Table 3).

In group C, significant elevation of the median urinary alpha-GST/Cr ratio and serum NGAL level was found (p<0.05). In comparison with controls, there were no significant differences in urinary pi-GST/Cr, NGAL/Cr, and KIM-1/Cr ratios (Table 4).

Patients with ON had significantly increased median urinary alpha-GST/Cr and NGAL/Cr ratios in comparison with those without ON (p=0.03 and p=0.01, respectively) (Figure 1).
There were no significant differences in urinary pi-GST/Cr and KIM-1/Cr ratios or the serum NGAL level.

Figure 1: Urinary alpha-GST (a), pi-GST (b), NGAL (c, d), and KIM-1 (e) ratios in patients with obstructive nephropathy (ON), patients with hydronephrosis without obstructive nephropathy (HN without ON), and controls.

In addition, there was a positive correlation between the urinary NGAL/Cr ratio and the percentage loss of relative function of an obstructed kidney (r=0.5, p<0.05) (Figure 2). The latter did not significantly correlate with urinary alpha-GST/Cr, pi-GST/Cr, and KIM-1/Cr ratios or the serum NGAL level.

Figure 2: Correlation between the urinary NGAL/Cr ratio and the percentage loss of relative function of an obstructed kidney in dynamic renal scintigraphy.

In patients with proteinuria, only the median urinary alpha-GST/Cr ratio was significantly higher as compared to those without this symptom (p=0.02) (Figure 3).

Figure 3: Urinary alpha-GST/Cr ratio in hydronephrotic patients (HN) with and without proteinuria.

### 5.2. ROC Analysis

The analysis showed the best diagnostic profiles in the detection of ON for the urinary alpha-GST/Cr ratio (area under the curve (AUC) of 0.75, optimal cut-off value of 0.098 ng/mg, with sensitivity and specificity of 84.6% and 69.2%, respectively) and the urinary NGAL/Cr ratio (AUC of 0.805, optimal cut-off value of 0.08 ng/mg, with sensitivity and specificity of 78.6% and 58.3%, respectively). The AUC for the serum NGAL level was 0.6, with an optimal cut-off value of 0.145 ng/ml, sensitivity of 100%, and specificity of 12.5%.
The urinary pi-GST/Cr ratio was characterized by an AUC of 0.574, an optimal cut-off value of 0.103 ng/mg Cr, sensitivity of 92.3%, and specificity of 7.7%, and the urinary KIM-1/Cr ratio by an AUC of 0.487, an optimal cut-off value of 0.119 ng/mg Cr, sensitivity of 88.9%, and specificity of 16.7% (Figures 4–8).

Figure 4: ROC analysis for the urinary alpha-GST/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with ON vs. controls: AUC of 0.902, optimal cut-off value of 0.046 ng/mg Cr, sensitivity of 81.8%, specificity of 84.6%. (b) Patients with ON vs. patients with hydronephrosis without ON (HN without ON): AUC of 0.750, optimal cut-off value of 0.098 ng/mg Cr, sensitivity of 84.6%, specificity of 69.2%.

Figure 5: ROC analysis for the urinary pi-GST/Cr ratio in the detection of ON. (a) Patients with ON vs. controls: AUC of 0.3, optimal cut-off value of 0.082 ng/mg Cr, sensitivity of 92.3%, specificity of <1%. (b) Patients with ON vs. patients with HN without ON: AUC of 0.574, optimal cut-off value of 0.103 ng/mg Cr, sensitivity of 92.3%, specificity of 7.7%.

Figure 6: ROC analysis for the serum NGAL level in the detection of ON. (a) Patients with ON vs. controls: AUC of 1.0, optimal cut-off value of 0.0 ng/ml, sensitivity of 100%, specificity of 100%. (b) Patients with ON vs. patients with HN without ON: AUC of 0.6, optimal cut-off value of 0.145 ng/ml, sensitivity of 100%, specificity of 12.5%.

Figure 7: ROC analysis for the urinary NGAL/Cr ratio in the detection of ON. (a) Patients with ON vs. controls.
AUC of 0.663, an optimal cut-off value of 0.091 ng/mg Cr, sensitivity of 39.3%, specificity of 58.3%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON). AUC of 0.805, an optimal cut-off value of 0.079 ng/mg Cr, sensitivity of 78.6%, specificity of 58.3%. (a) (b)Figure 8 ROC analysis for the urinary KIM-1/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with obstructive nephropathy (ON) vs. controls. AUC of 0.653, an optimal cut-off value of 0.084 ng/mg Cr, sensitivity of 55.6%, specificity of 69.6%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON). AUC of 0.487, an optimal cut-off value of 0.119 ng/mg Cr, sensitivity of 88.9%, specificity of 16.7%. (a) (b) ## 5.1. Biomarker Measurements The results of assessed biomarkers in the study groups as compared to controls are presented in Tables2–4.Table 2 The results of the urinary excretion of alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr and serum NGAL levels in the study group A and control group. Study group A median (range) Control group median (range) Statistical analysis Alpha-GST/Cr (ng/mg) 4.51 (0.54–17.3) 1.11 (0.26–3.5) p=0.0001 pi-GST/Cr (ng/mg) 30.4 (17.5–24.9) 14.6 (7.4–28.5) p=0.03 uNGAL/Cr (ng/mg) 1.73 (0.17–10.0) 0.83 (0.04–9.5) p=0.02 sNGAL (ng/ml) 59.9 (45.2–85.3) 4.8 (2.1–10.4) p=0.00003 KIM-1/Cr (ng/mg) 2.4 (0.2–5.1) 0.28 (0.06–1.06) p=0.02 uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.Table 3 The results of the urinary excretion of alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr and serum NGAL levels in the study group B and control group. 
Study group B median (range) Control group median (range) Statistical analysis Alpha-GST/Cr (ng/mg) 6.17 (0.8–11.74) 1.11 (0.26–3.5) p=0.008 pi-GST/Cr (ng/mg) 17.9 (7.8–20.83) 14.6 (7.4–28.5) NS uNGAL/Cr (ng/mg) 1.41 (0.3–5.2) 0.83 (0.04–9.5) NS sNGAL (ng/ml) 58.5 (13.4–73.8) 4.8 (2.1–10.4) p=0.001 KIM-1/Cr (ng/mg) 0.58 (0.2–0.96) 0.28 (0.06–1.06) p=0.02 uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.Table 4 The results of the urinary excretion of alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr and serum NGAL levels in the study group C and control group. Study group C median (range) Control group median (range) Statistical analysis Alpha-GST/Cr (ng/mg) 4.73 (0.81–11.7) 1.11 (0.26–3.5) p=0.008 pi-GST/Cr (ng/mg) 16.3 (7.6–23.1) 14.6 (7.4–28.5) NS uNGAL/Cr (ng/mg) 0.51 (0.07–9.7) 0.83 (0.04–9.5) NS sNGAL (ng/ml) 33.6 (8.3–68.4) 4.8 (2.1–10.4) p=0.001 KIM-1/Cr (ng/mg) 0.68 (0.2–1.3) 0.28 (0.06–1.06) NS uNGAL: urinary NGAL; sNGAL: serum NGAL; Cr: creatinine.In the group A, median urinary alpha-GST/Cr, pi-GST/Cr, NGAL/Cr, and KIM-1/Cr ratios and the serum NGAL level were significantly higher than those in controls (p<0.05) (Table 2).In the group B, median urinary alpha-GST/Cr and KIM-1/Cr ratios and the serum NGAL level were significantly higher in comparison with controls (p<0.05). No significant differences of urinary pi-GST/Cr and NGAL/Cr ratios between patients from the group B and controls were observed (Table 3).In the group C, significant elevation of the median urinary alpha-GST/Cr ratio and serum NGAL level was found (p<0.05). In comparison with controls, there were no significant differences of urinary pi-GST/Cr, NGAL/Cr, and KIM/Cr ratios (Table 4).Patients with ON had significantly increased median urinary alpha-GST/Cr and NGAL/Cr ratios in comparison with those without ON (p=0.03 and p=0.01, respectively) (Figure 1). 
There were no significant differences in the urinary pi-GST/Cr and KIM-1/Cr ratios or the serum NGAL level.

Figure 1. Urinary alpha-GST (a), pi-GST (b), NGAL (c, d), and KIM-1 (e) ratios in patients with obstructive nephropathy (ON), patients with hydronephrosis without obstructive nephropathy (HN without ON), and controls.

In addition, there was a positive correlation between the urinary NGAL/Cr ratio and the percentage loss of relative function of the obstructed kidney (r=0.5, p<0.05) (Figure 2). The latter did not correlate significantly with the urinary alpha-GST/Cr, pi-GST/Cr, and KIM-1/Cr ratios or the serum NGAL level.

Figure 2. Correlation between the urinary NGAL/Cr ratio and the percentage loss of relative function of the obstructed kidney in dynamic renal scintigraphy.

In patients with proteinuria, only the median urinary alpha-GST/Cr ratio was significantly higher as compared to those without this symptom (p=0.02) (Figure 3).

Figure 3. Urinary alpha-GST/Cr ratio in hydronephrotic patients (HN) with and without proteinuria.

## 5.2. ROC Analysis

The analysis showed the best diagnostic profiles in the detection of ON for the urinary alpha-GST/Cr ratio (area under the curve (AUC) of 0.75, an optimal cut-off value of 0.098 ng/mg with sensitivity and specificity of 84.6% and 69.2%, respectively) and the urinary NGAL/Cr ratio (AUC of 0.805, an optimal cut-off value of 0.08 ng/mg with sensitivity and specificity of 78.6% and 58.3%, respectively). The AUC for the serum NGAL level was 0.6, with an optimal cut-off value of 0.145 ng/ml, sensitivity of 100%, and specificity of 12.5%.
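To make these figures concrete, the sensitivity, specificity, AUC, and optimal cut-off reported here can be derived from marker values and disease labels. The following is a minimal illustrative sketch in plain Python, using hypothetical marker values (not the study data) and the common Youden-index criterion for the cut-off; the function names are our own, and the study itself may have used different statistical software:

```python
def roc_points(values, labels):
    """Sensitivity and specificity at each candidate cut-off.

    values: biomarker measurements (e.g. urinary marker/Cr ratios);
    labels: 1 for diseased (ON), 0 for controls. Higher value = positive test.
    """
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    points = []
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in pos) / len(pos)   # true-positive rate
        spec = sum(v < cut for v in neg) / len(neg)    # true-negative rate
        points.append((cut, sens, spec))
    return points

def auc(values, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    diseased subject scores higher than a random control (ties count half)."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(values, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1.
    On ties in J, the lowest cut-off is returned."""
    return max(roc_points(values, labels), key=lambda t: t[1] + t[2] - 1)

# Hypothetical marker values (NOT the study data)
vals = [0.05, 0.07, 0.09, 0.12, 0.30, 0.04, 0.06, 0.08, 0.10, 0.02]
labs = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
cut, sens, spec = youden_cutoff(vals, labs)
print(f"AUC={auc(vals, labs):.2f}, cut-off={cut}, sens={sens:.0%}, spec={spec:.0%}")
# AUC=0.76, cut-off=0.05, sens=100%, spec=40%
```

An AUC near 0.5, as seen for the urinary KIM-1/Cr ratio (0.487), means the marker ranks diseased and non-diseased subjects essentially at chance, which is why no useful cut-off exists regardless of the sensitivity/specificity pair quoted.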
The urinary pi-GST/Cr ratio was characterized by an AUC of 0.574, an optimal cut-off value of 0.103 ng/mg Cr, sensitivity of 92.3%, and specificity of 7.7%, and the urinary KIM-1/Cr ratio by an AUC of 0.487, an optimal cut-off value of 0.119 ng/mg Cr, sensitivity of 88.9%, and specificity of 16.7% (Figures 4–8).

Figure 4. ROC analysis for the urinary alpha-GST/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with obstructive nephropathy (ON) vs. controls: AUC of 0.902, an optimal cut-off value of 0.046 ng/mg Cr, sensitivity of 81.8%, specificity of 84.6%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON): AUC of 0.750, an optimal cut-off value of 0.098 ng/mg Cr, sensitivity of 84.6%, specificity of 69.2%.

Figure 5. ROC analysis for the urinary pi-GST/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with obstructive nephropathy (ON) vs. controls: AUC of 0.3, an optimal cut-off value of 0.082 ng/mg Cr, sensitivity of 92.3%, specificity of <1%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON): AUC of 0.574, an optimal cut-off value of 0.103 ng/mg Cr, sensitivity of 92.3%, specificity of 7.7%.

Figure 6. ROC analysis for the serum NGAL level in the detection of obstructive nephropathy (ON). (a) Patients with obstructive nephropathy (ON) vs. controls: AUC of 1.0, an optimal cut-off value of 0.0 ng/ml, sensitivity of 100%, specificity of 100%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON): AUC of 0.6, an optimal cut-off value of 0.145 ng/ml, sensitivity of 100%, specificity of 12.5%.

Figure 7. ROC analysis for the urinary NGAL/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with obstructive nephropathy (ON) vs. controls.
AUC of 0.663, an optimal cut-off value of 0.091 ng/mg Cr, sensitivity of 39.3%, specificity of 58.3%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON): AUC of 0.805, an optimal cut-off value of 0.079 ng/mg Cr, sensitivity of 78.6%, specificity of 58.3%.

Figure 8. ROC analysis for the urinary KIM-1/Cr ratio in the detection of obstructive nephropathy (ON). (a) Patients with obstructive nephropathy (ON) vs. controls: AUC of 0.653, an optimal cut-off value of 0.084 ng/mg Cr, sensitivity of 55.6%, specificity of 69.6%. (b) Patients with obstructive nephropathy (ON) vs. patients with hydronephrosis without obstructive nephropathy (HN without ON): AUC of 0.487, an optimal cut-off value of 0.119 ng/mg Cr, sensitivity of 88.9%, specificity of 16.7%.

## 6. Discussion

GST is a cytosolic enzyme. Its isoforms alpha and pi (alpha-GST, pi-GST) are typical of the human kidney [7]. Alpha-GST is expressed in proximal tubular epithelial cells, whereas pi-GST is characteristic of distal tubular epithelial cells. Both isoforms of GST are excessively released from injured tubular epithelial cells into the urine, and they were recently proposed as promising biomarkers of tubulointerstitial fibrosis in patients with proteinuric kidney disease [8, 9]. To the best of our knowledge, no studies on urinary GST excretion and its usefulness as a biomarker of ON in patients with HN have been reported so far. In our study, all patients with HN were characterized by significantly higher urinary alpha-GST excretion in comparison with controls. In addition, urinary alpha-GST excretion was significantly higher in children with HN and ON as compared to those without ON. Similarly, our patients with proteinuria displayed significantly higher urinary alpha-GST excretion than those without this symptom.
The ROC curve analysis showed a very good diagnostic profile for the urinary alpha-GST/Cr ratio in the detection of ON.

In our study, urinary pi-GST excretion was significantly higher in patients with grades 3 and 4 HN as compared to controls, but the ROC curve analysis did not confirm the clinical utility of the urinary pi-GST/Cr ratio in the detection of ON.

NGAL is considered to be another sensitive and early biomarker of tubulointerstitial fibrosis. It is a small (25 kD) protein, locally synthesized in renal tubular epithelial cells [10] and released into urine. Initially, NGAL was thought to be a biomarker of acute kidney injury (AKI) [11], but recent studies demonstrated its increased urinary excretion also in patients with various chronic nephropathies. Furthermore, a correlation between urinary NGAL excretion and the severity of local inflammation and kidney function was observed [12–15]. NGAL may also be released into the circulation from damaged renal tubular epithelial cells. Its elevated serum level is suggested to be a risk factor for CKD progression [16–19]. Our study also showed that urinary NGAL excretion may be a potential biomarker of ON in patients with HN due to UPJO [18–22]. We found that urinary NGAL excretion was significantly higher in patients with HN and ON as compared to patients with HN without ON. In our study, urinary NGAL excretion positively correlated with the deterioration of relative function of the obstructed kidney in dynamic renal scintigraphy. Moreover, similar to the urinary alpha-GST/Cr ratio, the ROC curve analysis showed a very good diagnostic profile for the urinary NGAL/Cr ratio in the detection of ON. Interestingly, Gerber et al. [23] did not find significant differences in urinary NGAL excretion between patients with UPJO and controls.
Nevertheless, their results might have been influenced by the relatively small number of cases.

In our study, in contrast to urinary NGAL excretion, the serum NGAL level was significantly higher in all hydronephrotic children in comparison with controls. Unfortunately, in the ROC curve analysis, the diagnostic profile for the serum NGAL level in the detection of ON was not as good as those for the urinary NGAL/Cr and urinary alpha-GST/Cr ratios.

The transmembrane renal tubular epithelial cell glycoprotein KIM-1 is the most recently recognized biomarker of tubulointerstitial fibrosis. Its physiological role is still unclear, but it is markedly upregulated in proximal tubular epithelial cells in experimental and clinical conditions associated with kidney damage [24]. Its elevated urinary excretion was observed in both AKI [25, 26] and various chronic kidney diseases [12, 27, 28]. KIM-1 was also suggested to be an indicator of the conversion of AKI to CKD [29]. Vaidya et al. [28] showed that urinary KIM-1 excretion reflected the severity of tubulointerstitial fibrosis better than a traditional biomarker, N-acetyl-β-D-glucosaminidase.

The usefulness of urinary KIM-1 excretion as a new early biomarker of ON in patients with HN was reported by several recent studies [21, 22, 25, 29, 30]. However, in the study by Noyan et al. [20], hydronephrotic children with kidney dysfunction were not characterized by increased urinary KIM-1 excretion. Similarly, Gerber et al. [23] did not observe elevated urinary KIM-1 excretion in patients with UPJO. In our study, significantly higher urinary KIM-1 excretion was noted in patients with grades 2–4 HN but not in those with ON or in those with proteinuria. The ROC curve analysis did not show a sufficient diagnostic profile for the urinary KIM-1/Cr ratio in the detection of ON.

## 7. Conclusions

Our results suggest that urinary alpha-GST and NGAL are promising biomarkers of ON.
It is conceivable that the ambiguous results regarding the remaining biomarkers, i.e., urinary pi-GST and KIM-1 and the serum NGAL level, may be related to the relatively small study group. Therefore, their utility in the early diagnosis of ON should be reevaluated in more extensive investigations.

---

*Source: 1015726-2018-09-13.xml*
# Atrial Repolarization Waves (Ta) Mimicking Inferior Wall ST Segment Elevation Myocardial Infarction in a Patient with Ectopic Atrial Rhythm

**Authors:** Janaki Rami Reddy Manne
**Journal:** Case Reports in Medicine (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1015730

---

## Abstract

We present a case of atrial repolarization waves from an ectopic atrial rhythm mimicking inferior ST segment elevation myocardial infarction in a 78-year-old male patient who presented with left sided chest wall and shoulder pain. His ischemic workup was negative, and the ST elevations completely resolved upon the resumption of sinus rhythm before discharge.

---

## Body

## 1. Introduction

Clinical conditions other than myocardial ischemia can affect the ST segment, resulting in either ST segment elevation or depression. Pseudoinfarct patterns on the electrocardiogram mimicking myocardial infarction are seen in clinical conditions such as pericarditis, myocarditis, ventricular aneurysm, takotsubo cardiomyopathy, early repolarization, and cardiac memory. Atrial repolarization waves can simulate myocardial ischemia by causing ST segment elevation or depression depending upon the site of origin of the atrial impulse. Awareness and identification of pseudoinfarct patterns on ECGs are important to avoid unnecessary diagnostic interventions and treatment.

## 2. Case Presentation

A man in his late 70s presented with a two-week history of constant nonexertional left sided chest pain and neck pain. His history included dyslipidemia, type 2 diabetes mellitus, and coronary artery disease that had been treated with right coronary artery stenting 10 years ago. On arrival, he had a blood pressure of 132/58 mm Hg, a heart rate of 50 bpm, and an oxygen saturation of 98% on room air. He received aspirin, sublingual nitroglycerin, and intravenous morphine in the emergency room, which improved his chest pain.
Physical examination was unremarkable except for reproducible left sided chest wall and neck pain. A 12-lead ECG obtained on admission (Figure 1) shows bradycardia with a slow ventricular rate of 50 beats/min. The P waves are inverted in leads II, III, aVF, and V4–V6 and upright in lead aVR, suggesting an ectopic atrial rhythm. The QRS duration is 90 ms, and the QT and QTc intervals are 402 and 391 ms, respectively. Prominent positive atrial repolarization waves (Ta) are seen after the QRS complexes in leads II, III, and aVF, giving rise to ST segment elevation mimicking ST elevation myocardial infarction.

Figure 1. Admission electrocardiogram showing negative P waves and ST segment elevation in leads II, III, and aVF.

Serial highly sensitive cardiac troponin I levels were less than 16 ng/L (reference range 0–45 ng/L). All the laboratory data, including the inflammatory markers and electrolytes, were within normal limits. A repeat ECG (Figure 2) obtained 10 minutes after his initial presentation showed resumption of sinus rhythm and complete resolution of ST segment elevation in the inferior leads.

Figure 2. A repeat ECG showing resumption of sinus P waves and resolution of ST segment elevations in the inferior leads.

The transthoracic echocardiogram showed normal biventricular function without any regional wall motion abnormalities. A regadenoson myocardial perfusion imaging study was negative for any reversible ischemia. The patient’s chest wall and neck pain significantly improved with tramadol, and he was discharged home with a diagnosis of musculoskeletal chest wall pain.

## 3. Discussion

The atrial repolarization wave (Ta wave) is usually not perceptible on the ECG, as it has a low magnitude of 100–200 microvolts and is usually concealed by the ensuing QRS complex [1].
Occasionally, Ta waves are seen as shallow negative deflections right after the P wave in conditions with a prolonged PR interval, but they are best seen in patients with complete heart block, when the Ta waves and QRS complexes are uncoupled [2]. In contrast to the QRS complex and T wave, which under normal conditions have the same polarity, the polarity of the P wave is always opposite to that of the Ta wave in all leads [2]. The duration of the Ta wave (average duration of 323 ± 56 ms) is generally 2–3 times that of the P wave (average duration of 124 ± 16 ms) [2].

Identifying the discernible Ta wave and its location is relevant, as it has some important clinical and diagnostic implications. In conditions with a short PR interval, such as sinus tachycardia, the Ta wave can blend into the ST segment and cause ST segment depression mimicking myocardial ischemia. The Ta wave voltage of an inverted or retrograde P wave is always larger than that of a sinus P wave [3]. In a low atrial rhythm, atrial activation initiates from an ectopic focus rather than the sinoatrial node and spreads from below upwards in the atria. The retrograde conduction to the sinoatrial node causes the inverted P waves in the inferior leads, while the anterograde conduction through the atrioventricular node results in normal QRS complexes. Hence, retrograde activation of the atrium or an ectopic rhythm originating in the low atrium results in a negative P wave in the inferior leads and consequently induces a positive Ta wave. This combination of negative P waves and exaggerated positive Ta waves extending into the ST segment in the inferior leads can simulate an acute ST elevation myocardial infarction, as seen in our patient [4].
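As background to the interval measurements reported in the case (QT 402 ms, QTc 391 ms at 50 bpm), the rate-corrected QT is conventionally derived from the measured QT and RR intervals with a correction formula. The following is a minimal sketch of two common formulas; ECG machines differ in which formula and which RR averaging they apply, so a hand calculation need not reproduce the machine-reported QTc exactly:

```python
import math

def qtc(qt_ms: float, heart_rate_bpm: float, formula: str = "bazett") -> float:
    """Rate-corrected QT interval in ms from QT and heart rate.

    Implements two widely used corrections. Commercial ECG machines use
    varying formulas and RR measurements, so their reported QTc (391 ms
    in this case) may differ from these textbook calculations.
    """
    rr_s = 60.0 / heart_rate_bpm          # RR interval in seconds
    if formula == "bazett":               # QTc = QT / sqrt(RR)
        return qt_ms / math.sqrt(rr_s)
    if formula == "fridericia":           # QTc = QT / RR^(1/3)
        return qt_ms / rr_s ** (1.0 / 3.0)
    raise ValueError(f"unknown formula: {formula}")

# Case values: QT 402 ms at 50 bpm (RR = 1.2 s)
print(f"Bazett QTc: {qtc(402, 50):.0f} ms")                  # 367 ms
print(f"Fridericia QTc: {qtc(402, 50, 'fridericia'):.0f} ms") # 378 ms
```

Both corrections shorten the measured QT at this bradycardic rate (RR > 1 s), consistent with the reported QTc being below the QT.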
The early repolarization pattern and atrial escape rhythm that are seen in association with increased vagal tone [5] can give rise to similar pseudo-ST elevation changes [6]; however, in our case no early repolarization changes (ST elevation at the J point and terminal QRS slurring/notching) were seen on the admission or baseline electrocardiogram recordings. Exaggerated Ta waves can also cause a false-positive response during a treadmill stress test. The exercise-induced tachycardia increases the magnitude of the P and Ta waves and shortens the PR segment, thus shifting the atrial repolarization wave into the ST segment. In leads with prominent and upright P waves, this results in marked depression of the PR segment and the ST segment during exercise, resembling myocardial ischemia [7].

As noted here, pseudoinfarct patterns with ST segment elevation or depression mimicking myocardial infarction are seen in other clinical conditions [8, 9]. Careful attention to the ST-T and QRS morphology, the leads involved, and the clinical setting in which the ST segment changes occur is often helpful in differentiating these pseudoinfarct patterns from myocardial ischemia. For example, in the presence of P-R depression or a prominent atrial repolarization wave, measuring the ST segment deviation relative to the TP segment results in an inaccurate measurement. To avoid errors, it should be measured in relation to the end of the PR segment, not the TP segment [10]. However, all these conditions, including atrial repolarization, are diagnoses of exclusion, and in some cases appropriate diagnostic testing may be necessary to exclude myocardial ischemia before establishing a definitive diagnosis.

Our case illustrates a key concept: conditions other than myocardial infarction can cause ST segment elevation. In this patient, positive Ta waves generated by the ectopic atrial rhythm resulted in erroneous ST elevation in the inferior leads, mimicking acute myocardial infarction.
Misinterpretation of this ECG finding could have resulted in unnecessary treatment and cardiac catheterization.

---

*Source: 1015730-2018-01-18.xml*
# Targeting Nrf2-Mediated Oxidative Stress Response in Traumatic Brain Injury: Therapeutic Perspectives of Phytochemicals

**Authors:** An-Guo Wu; Yuan-Yuan Yong; Yi-Ru Pan; Li Zhang; Jian-Ming Wu; Yue Zhang; Yong Tang; Jing Wei; Lu Yu; Betty Yuen-Kwan Law; Chong-Lin Yu; Jian Liu; Cai Lan; Ru-Xiang Xu; Xiao-Gang Zhou; Da-Lian Qin
**Journal:** Oxidative Medicine and Cellular Longevity (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1015791

---

## Abstract

Traumatic brain injury (TBI), known as mechanical damage to the brain, seriously impairs normal brain function. Its clinical symptoms manifest as behavioral impairment, cognitive decline, communication difficulties, etc. The pathophysiological mechanisms of TBI are complex and involve inflammatory response, oxidative stress, mitochondrial dysfunction, blood-brain barrier (BBB) disruption, and so on. Among them, oxidative stress, one of the important mechanisms, occurs at the beginning of TBI and accompanies the whole process. Most importantly, excessive oxidative stress causes BBB disruption and damages lipids, proteins, and DNA, leading to lipid peroxidation, damage to nuclear and mitochondrial DNA, neuronal apoptosis, and neuroinflammatory response. Transcription factor NF-E2 related factor 2 (Nrf2), a basic leucine zipper protein, plays an important role in the regulation of antioxidant proteins, such as heme oxygenase-1 (HO-1), NAD(P)H Quinone Dehydrogenase 1 (NQO1), and glutathione peroxidase (GPx), to protect against oxidative stress, neuroinflammation, and neuronal apoptosis. Recently, emerging evidence indicated that knockout (KO) of Nrf2 aggravates the pathology of TBI, while treatment with Nrf2 activators inhibits neuronal apoptosis and neuroinflammatory responses by reducing oxidative damage.
Phytochemicals from fruits, vegetables, grains, and other medical herbs have been demonstrated to activate the Nrf2 signaling pathway and exert neuroprotective effects in TBI. In this review, we emphasize the contributive role of oxidative stress in the pathology of TBI and the protective mechanism of the Nrf2-mediated oxidative stress response for the treatment of TBI. In addition, we summarize the research advances of phytochemicals, including polyphenols, terpenoids, natural pigments, and others, in the activation of Nrf2 signaling and their potential therapies for TBI. Although there is still limited clinical evidence for these natural Nrf2 activators, we believe that the combined use of phytochemicals such as Nrf2 activators with gene and stem cell therapy will be a promising therapeutic strategy for TBI in the future.

---

## Body

## 1. Introduction

Traumatic brain injury (TBI) refers to damage to brain structure and function caused by mechanical and external forces, and it comprises two stages: primary injury and secondary injury [1]. It is a global neurological disease and the leading cause of death and disability in the population under 40 years of age [2]. Current clinical treatments for TBI mainly include interventions such as hyperventilation, hypertonic therapy, hypothermia therapy, surgical treatment, drug therapy, hyperbaric oxygen therapy, and rehabilitation therapy [3, 4]. In the past few decades, the interventions that have had the greatest impact, reducing the mortality rate of severe TBI severalfold, are immediate surgical intervention and follow-up care by specialist intensive care physicians [5].
Post-traumatic intracranial hypertension (ICH) makes patient care more complicated, but new data show that hypertonic therapy, the early use of hypertonic solutions such as mannitol and hypertonic saline (HTS) for ICH after severe TBI, can reduce the burden of ICH and improve survival and functional outcomes [6]. Hypothermia therapy can reduce the effects of TBI through a variety of possible mechanisms, including lowering intracranial pressure (ICP), innate inflammation, and the brain metabolic rate. However, the results of the randomized POLAR clinical trial showed that early preventive hypothermia did not improve the neurological outcome at 6 months in patients with severe TBI [7]. Therefore, the effectiveness of hypothermia for TBI remains to be discussed. Surgical treatments include decompressive craniectomy (DC), a method that removes a large part of the top of the skull to reduce ICP and its subsequent harmful sequelae. However, the treatment effects for TBI are not satisfactory [8]. Many patients have a poor prognosis and are left with serious disabilities that require lifelong care [9]. In addition, chemicals including corticosteroids, progesterone, erythropoietin, amantadine, tranexamic acid, citicoline, and recombinant interleukin-1 receptor (IL-1R) antagonist are used for the treatment of TBI [2]. However, these drugs are less safe, may not work well, or may lead to unfavorable physiological conditions [10]. Recently, many studies have begun to investigate the possibility of using natural compounds with high safety as therapeutic interventions after TBI [11]. The latest evidence indicates that phytochemicals, including quercetin, curcumin, formononetin, and catechin, exert neuroprotective effects in TBI and other brain diseases via attenuating oxidative stress [12].
As is well known, transcription factor NF-E2 related factor 2 (Nrf2) plays an important role in the regulation of heme oxygenase-1 (HO-1), NAD(P)H Quinone Dehydrogenase 1 (NQO1), glutathione peroxidase (GPx), and other antioxidant proteins, which ameliorates oxidative damage in TBI [13]. In this review, we discuss the critical role of oxidative stress in the pathology of TBI and the regulation of the Nrf2-mediated oxidative stress response in TBI. In addition, we summarize the research advances of phytochemicals, including polyphenols, terpenoids, natural pigments, and others, in the activation of Nrf2 signaling and their potential therapies for TBI in vivo and in vitro. Finally, we hope this review sheds light on the treatment of TBI using phytochemicals as Nrf2 activators. Moreover, the combined use of phytochemicals such as Nrf2 activators with gene and stem cell therapy will be a promising strategy for the treatment of TBI.

## 2. TBI

TBI, also known as acquired intracranial injury, is caused by an external force, including a blow, bump, or jolt to the head, a sudden and serious impact of the head against an object, or the deep piercing of an object through the skull into the brain tissue [14]. According to data from the Centers for Disease Control and Prevention (CDC) of the United States (U.S.), the most common causes mainly include violence, transportation accidents, construction, and sports. In addition, there are about 288,000 hospitalizations for TBI a year, and males account for 78.8% of them [15, 16]. Usually, older adults (>75 years) have the highest rates of TBI. Therefore, TBI brings serious economic and emotional burdens to families and society [17].

TBI is classified in various ways, including by type, severity, location, mechanism of injury, and the physiological response to injury [18].
In general, the Glasgow Coma Scale (GCS) score and neurobehavioral deficits are extensively used, and TBI is classified into mild, moderate, and severe types [19]. The clinical symptoms of TBI depend greatly on the severity of the brain injury and mainly include perceptual loss, cognitive decline, communication difficulties, behavioral impairment, affective changes, and others [20] (Figure 1). The pathophysiology of TBI includes the primary injury, which is directly caused by physical forces, and the secondary injury, referring to the further damage of tissue and cells in the brain [21]. The physical forces on the brain cause both focal and diffuse injuries. Emerging evidence indicates that patients who suffer from moderate or severe TBI have focal and diffuse injuries simultaneously [22]. Most seriously, secondary brain injury follows, owing to the biochemical, cellular, and physiological events set in motion by the primary brain injury [23]. Mechanistic studies demonstrate that several factors, including inflammation, oxidative stress, mitochondrial dysfunction, BBB disruption, DNA damage, glutamate excitotoxicity, complement activation, and neurotrophic impairment, are involved in the pathology and progression of TBI [24] (Figure 1). Currently, a growing body of studies shows that an increasing number of abnormal proteins or molecules serve as biomarkers closely associated with TBI, which helps to better understand its mechanism [25]. For example, the levels of early structural damage biomarkers, including S100B protein in cerebrospinal fluid or blood, glial fibrillary acidic protein, ubiquitin carboxyl-terminal hydrolase L1, and tau, help to determine whether a head scan is required after TBI [26].

Figure 1. The clinical symptoms and molecular mechanisms of TBI, an injury to the brain caused by an external force.
The clinical symptoms of TBI mainly manifest as perceptual loss, cognitive decline, communication difficulties, behavioral impairment, and affective changes. The molecular mechanisms of TBI include inflammation, oxidative stress, mitochondrial dysfunction, blood-brain barrier (BBB) disruption, DNA damage, glutamate excitotoxicity, complement activation, and neurotrophic impairment.

At present, the therapeutic strategies for TBI include hyperbaric oxygen therapy (HBOT), hyperventilation and hypertonic therapy, noninvasive brain stimulation, drug therapy, and biological therapy [27]. Most importantly, the combinational use of novel biological reagents (genes and stem cells) and pharmacological intervention preparations can decrease the complications and mortality of TBI [28]. Stem cell therapy includes the transplantation of regenerative stem cells and the induction of endogenous stem cell activation through pharmacological or environmental stimuli. Several studies have shown that some drugs can not only improve the survival rate of stem cells but also enhance their efficacy. For example, the intravenous injection of mesenchymal stem cells (MSCs) and atorvastatin or simvastatin 24 hours after TBI could improve the recovery of the modified neurological severity score (mNSS) [29]. In addition, the administration of calpain inhibitors 30 minutes after TBI followed by the transplantation of MSCs 24 hours after TBI could reduce the proinflammatory cytokines around the lesions, increase the survival rate of MSCs, and improve the mNSS [30]. Moreover, pretreatment with minocycline for 24 hours could protect transplanted stem cells from ischemia-reperfusion injury by inducing Nrf2 nuclear translocation and increasing the expression of downstream proteins [31].
Therefore, the in-depth clarification of the mechanism of TBI and the adoption of targeted methods for precise intervention will help the recovery of post-traumatic neurological function, further prevent the occurrence and development of complications, and ultimately open up a new way for the effective treatment of TBI [32].

## 3. The Role of Oxidative Stress in TBI

Oxidative stress occurs owing to an imbalance of free radicals and antioxidants in the body, which leads to cell and tissue damage [32]. Therefore, oxidative stress plays a critical role in the development of diseases. Diet, lifestyle, and environmental factors such as pollution and radiation contribute to the induction of oxidative stress, resulting in the excessive generation of free radicals [33]. In general, free radicals, including superoxide, hydroxyl radicals, and nitric oxide radicals, are molecules with one or more unpaired electrons [34]. It is well known that oxidative stress is implicated in the pathogenesis of various diseases, such as atherosclerosis, hypertension, diabetes mellitus, ischemic disease, neurodegeneration, and other central nervous system (CNS)-related diseases [35, 36]. Although many free radicals are generated during normal metabolic processes, the body's cells can produce antioxidants to neutralize them and maintain a balance between antioxidants and free radicals [37]. A large body of evidence indicates that overgenerated free radicals attack biological molecules, such as lipids, proteins, and DNA, ultimately breaking this balance and resulting in long-term oxidative stress [38]. However, oxidative stress also plays a useful role in some cases, such as physiologic adaptation and the modulation of intracellular signal transduction [39]. Thus, a more accurate definition of oxidative stress may be a state in which the oxidation system exceeds the antioxidant system owing to an imbalance between them.
At present, biomarkers of oxidative stress, which are used to evaluate the pathological conditions of diseases and the efficacy of drugs, are attracting increasing interest [40]. For example, lipid peroxides, 4-hydroxynonenal (4-HNE), and malondialdehyde (MDA) are indicators of oxidative damage to lipids [41]. Thymine glycol (TG) and 8-oxoguanine (8-oxoG) are biomarkers of oxidative damage to DNA [42]. In addition, a variety of proteins and amino acids, including carbonyl protein, dehydrovaline, nitrotyrosine, and hydroxyleucine, are oxidized and generate several products that are recognized as biomarkers of oxidative stress. Among them, lipid peroxides, some of the most important biomarkers, are measured clinically. Furthermore, oxidative stress plays a pivotal role in the regulation of signal transduction, including the activation of protein kinases and transcription factors, which affect many biological processes such as apoptosis, the inflammatory response, and cell differentiation [43]. For example, transcription factors including nuclear factor κB (NF-κB) and activator protein-1 (AP-1) sense oxidative stress via oxidation-reduction cycling [44]. In addition, the generation of reactive oxygen species leads to the activation of NF-κB, resulting in proinflammatory responses in various diseases such as neurodegenerative diseases, spinal cord injury, and TBI [45]. Therefore, oxidative stress is one of the important mechanisms implicated in the pathology of CNS-related diseases.

Although the initial brain insult of TBI is an acute and irreversible primary damage to the parenchyma, the ensuing secondary brain injury, progressing slowly over months to years, seriously affects the treatment and prognosis of TBI [46]. Therefore, therapeutic interventions during secondary brain injury are essential.
To date, many hallmarks have been identified during delayed secondary CNS damage, mainly including mitochondrial dysfunction, Wallerian degeneration of axons, excitotoxicity, oxidative stress, and eventually neuronal death and overactivation of glial cells [24]. Recently, emerging evidence indicates that oxidative stress plays an important role in the development and pathogenesis of TBI [46]. In general, oxidative stress results from or is accompanied by other molecular mechanisms, such as mitochondrial dysfunction, activation of neuroexcitation pathways, and activated neutrophils [47]. Kontos et al. first reported that superoxide radicals are immediately increased in brain microvessels after injury in a fluid percussion TBI model, while scavengers of oxygen radicals, including superoxide dismutase (SOD) and catalase, significantly decrease the level of superoxide radicals and partly reverse the injury of the brain [48]. During the first minutes or hours after brain injury, a large number of superoxide radicals are generated owing to the enzymatic reaction or autoxidation of biogenic amine neurotransmitters, the arachidonic acid cascade, damaged mitochondria, and oxidized extravasated hemoglobin [49]. Soon afterwards, microglia become overactivated and neutrophils and macrophages infiltrate the tissue, which also contributes to the production of superoxide radicals [50, 51]. In addition, iron overload and its resultant generation of hydroxyl radicals and lipid peroxidation induce oxidative stress and neuronal ferroptosis, which significantly aggravate the pathogenesis of TBI in several respects, such as cerebral blood flow, brain plasticity, and the promotion of immunosuppression [52]. In this review, we focus on research advances regarding the role of oxidative stress in TBI. At neutral pH, the iron in plasma is bound to the transferrin protein in the form of Fe3+, and it can also be sequestered intracellularly by ferritin, an iron storage protein.
Thus, iron in the brain is maintained at a relatively low level under normal conditions. However, the pH decreases in the brain after TBI, which is accompanied by the release of iron from both transferrin and ferritin. Then, the excessive levels of active iron catalyze oxygen radical reactions and induce oxidative damage and ferroptosis [53]. Additionally, hemoglobin is the second source of catalytically active iron after mechanical trauma of the brain [54]. Iron is released from hemoglobin owing to the stimulation of hydrogen peroxide (H2O2) or lipid hydroperoxides, and the level of iron can be further increased as the pH decreases to 6.5 or even below [55]. Therefore, targeting iron levels with iron chelators may be a promising strategy for the treatment of TBI. For example, deferoxamine (DFO), a potent chelator of iron, can attenuate iron-induced long-term neurotoxicity and improve the spatial learning and memory deficits of TBI rats [56]. Moreover, nitric oxide (NO) is involved in the cascade of injury triggered by TBI. The activity of nitric oxide synthase (NOS), which contributes to the generation of NO, increases with the accumulation of Ca2+ during TBI secondary injury. Then, NO reacts with the free radical superoxide to generate the "reactive nitrogen species" peroxynitrite (PN), detected in the forms of 3-nitrotyrosine (3-NT) and 4-HNE, which are found in the ipsilateral cortex and hippocampus of TBI animal models [24]. For example, N(omega)-nitro-L-arginine methyl ester (L-NAME), a NO-synthase inhibitor, was reported to attenuate neurological impairment in TBI and reduce the formation of 3-NT and the number of 3-NT-positive neurons [14]. Therefore, targeting the inhibition of oxidative stress in the brain is a promising strategy for the treatment of TBI.

## 4. Nrf2 Signaling-Mediated Oxidative Stress Response

In 1995, Itoh et al. first discovered and reported that Nrf2 was the homolog of the hematopoietic transcription factor p45 NF-E2 [57].
To date, a total of six members, including NF-E2, Nrf1, Nrf2, Nrf3, Bach1, and Bach2, have been identified in the Cap "n" Collar (CNC) family [58]. Among them, Nrf2 is a conserved basic leucine zipper (bZIP) transcription factor. The literature reports that Nrf2 possesses seven highly conserved functional domains, from Nrf2-ECH homology 1 (Neh1) to Neh7, which are identified in multiple species including humans, mice, and chicken (Figure 2) [59]. Of these domains, Neh2, located at the N-terminus of Nrf2, possesses seven lysine residues and the ETGE and DLG motifs, which are responsible for ubiquitin conjugation and for the binding of Nrf2 to its cytosolic repressor Keap1 at the Kelch domain, thereby facilitating Cullin 3 (Cul3)-dependent E3 ubiquitination and proteasomal degradation [60]. The Neh4 and Neh5 domains, rich in acidic residues, act as transactivation domains that bind to cAMP response element-binding protein (CREB), which regulates the transactivation of Nrf2. Neh7 is a domain that interacts with the retinoic X receptor α (RXRα), which can inhibit CNC-bZIP factors and the transcription of Nrf2 target genes. Neh6 has two motifs, DSGIS and DSAPGS, which are recognized by β-transducing repeat-containing protein (β-TrCP), functioning as a substrate receptor for the Cul3-Rbx1/Roc1 ubiquitin ligase complex [61]. DSGIS is modulated by glycogen synthase kinase-3 (GSK-3) activity and enables β-TrCP to ubiquitinate Nrf2 [62]. The Neh1 domain has a CNC-bZIP DNA-binding motif, which allows Nrf2 to dimerize with small Maf proteins including MAFF, MAFG, and MAFK [63]. The Neh3 domain, at the C-terminus of the Nrf2 protein, interacts with chromo-ATPase/helicase DNA-binding protein 6 (CHD6), which is known as an Nrf2 transcriptional co-activator [64]. In addition, Neh3 also plays a role in the regulation of Nrf2 protein stability.

Figure 2. Structures of the Nrf2 and Keap1 protein domains.
(a) Nrf2 consists of 589 amino acids and has seven evolutionarily highly conserved domains (Neh1-7). Neh1 contains a bZIP motif, is responsible for DNA recognition, and mediates dimerization with the small MAF (sMAF) proteins. Neh6 acts as a degron to mediate the degradation of Nrf2 in the nucleus. Neh4 and Neh5 are transactivation domains. Neh2 contains the ETGE and DLG motifs, which are required for the binding of Nrf2 to Keap1. Neh7 is a domain that interacts with RXRα to inhibit CNC-bZIP factors and the transcription of genes. Neh3 interacts with CHD6. (b) Keap1 consists of 624 amino acids and has five domains. The BTB domain, together with the N-terminal region (NTR) of the IVR, mediates the homodimerization of Keap1 and its binding to Cul3. The Kelch domain and the C-terminal region (CTR) mediate the interaction with the Neh2 domain of Nrf2 at the ETGE and DLG motifs.

Under normal conditions, Nrf2 is kept in the cytoplasm by a cluster of proteins including Keap1 and Cul3, and then undergoes degradation via the ubiquitin-proteasome system (UPS) [65]. In brief, Cul3 ubiquitinates Nrf2, with Keap1 acting as a substrate adaptor to facilitate the reaction. Then, Nrf2 is transported to the proteasome for degradation and recycling, and the half-life of Nrf2 is only about 20 minutes. Under conditions of oxidative stress or treatment with Nrf2 activators, the Keap1-Cul3 ubiquitination system is disrupted. Then, Nrf2 translocates from the cytoplasm into the nucleus and forms a heterodimer with one of the sMAF proteins, which binds to the ARE and initiates the transcription of many antioxidative genes, including HO-1, glutamate-cysteine ligase catalytic subunit (GCLC), SOD, and NQO1 (Figure 3).
Emerging evidence indicates that Nrf2 is the most important protein inducing the expression of various genes to counter oxidative stress or activate the antioxidant response, which protects against cell damage and death triggered by various stimuli, including environmental factors such as pollution, lifestyle factors such as smoking or exercise, and others. A growing body of evidence shows that Nrf2 plays multiple roles in the regulation of oxidative stress, inflammation, metabolism, autophagy, mitochondrial physiology, and other biological processes [64]. It has been reported that Nrf2-KO mice are susceptible to diseases associated with oxidative damage [66]. Therefore, Nrf2 plays a critical role in cell defense and the regulation of cell survival in various diseases such as TBI.

Figure 3. The regulation of the Nrf2 signaling pathway in TBI. Under basal conditions (a), Keap1 functions as a substrate adaptor protein for Cul3 to mediate the degradation of Nrf2 via the UPS pathway. Under Nrf2 activation (b), the stress condition or treatment with Nrf2 activators induces the dissociation of Nrf2 from Keap1, leading to the accumulation of Nrf2 in the cytoplasm and its nuclear translocation. Then, Nrf2 binds to sMAF and the ARE to regulate the expression of its downstream targets, including HO-1, NQO1, GST, GSH-Px, GCLC, and SOD. Consequently, oxidative damage, inflammation, neuronal apoptosis, and mitochondrial dysfunction are inhibited.

## 5. The Potential Therapy of Phytochemicals as Nrf2 Activators in TBI

Because the primary injuries in TBI commonly result in acute physical damage and irreversible neuronal death, therapies mainly aim at stabilizing the injury site and preventing secondary damage. As described above, the secondary damage of TBI is induced by various risks, such as oxidative stress, and develops progressively.
To date, multiple therapeutic approaches have been developed, including the inhibition of excitotoxicity by glutamate receptor antagonists such as dexanabinol, the improvement of mitochondrial dysfunction using neuroprotective agents such as cyclosporine A, and the inhibition of axonal degeneration by calpain inhibitors such as MDL 28170 [67]. Emerging evidence indicates that oxidative stress is not only part of the pathogenesis of TBI but also an initiator and promoter of excitotoxicity, mitochondrial dysfunction, neuroinflammation, and other risks. Nrf2 plays a protective role in TBI by fighting against oxidative damage and the inflammatory response [68], while the genetic deletion of Nrf2 delays the recovery of post-TBI motor and cognitive function [69]. Therefore, the discovery of Nrf2 activators to alleviate oxidative damage is a promising therapeutic strategy for TBI [70]. Recently, many phytochemicals isolated from natural plants, such as fruits, vegetables, grains, and other medicinal herbs, have been reported to activate the Nrf2 signaling pathway and exert neuroprotective effects in TBI [71]. In general, these natural phytochemicals as Nrf2 activators are used for the alleviation of the secondary damage of TBI. In this review, we summarize the research advances of phytochemicals, including polyphenols, terpenoids, natural pigments, and others, in the activation of Nrf2 signaling and their potential therapies for TBI during secondary injury (Table 1).

Table 1. Phytochemicals from various plants possess multiple pharmacological effects via antioxidant mechanisms in various in vitro and in vivo models of TBI.
| Class | Phytochemical | Plant sources | Models | Pharmacological effects | Detected markers | Antioxidant mechanism | Ref |
|---|---|---|---|---|---|---|---|
| Polyphenols | Quercetin | Onions, tomatoes, etc. | Weight drop-induced TBI mice/rats | Improved behavioral function, neuronal viability, and mitochondrial function; reduced brain edema and microgliosis, oxidative damage and nitrosative stress, neuronal apoptosis, and inflammatory response | Motor coordination; latency period; NSS; brain water content; MDA; SOD; catalase; GPx; lipid peroxidation; neuronal morphology; cytochrome c; Bax; MMP; ATP; Iba-1; TNF-α; iNOS; cNOS; IL-1β; Nrf2; HO-1 | Nrf2 pathway | [74–77] |
| Polyphenols | Curcumin | Curcuma longa | FPI-induced TBI rats; Feeney or weight drop-induced TBI WT, Nrf2-KO, or TLR4-KO mice; LPS-induced microglia or neuron-microglia co-culture | Improved cognitive function; reduced axonal injury, neuronal apoptosis, inflammatory response, and oxidative damage | NSS; brain water content; Tuj1; H&E, Nissl, Congo red, silver, TUNEL, MPO, and FJC staining; caspase 3; Bcl-2; NeuN/BrdU double labeling; Iba-1; GFAP; TNF-α; IL-6; IL-1β; MCP-1; RANTES; CD11B; DCX; TLR4; MyD88; NF-κB; IκB; AQP4; Nrf2; HO-1; NQO1; PERK; eIF2α; ATF4; CHOP; GSK3β; p-tau; β-APP; NF-H | Nrf2 pathway; PERK/Nrf2 pathway | [80, 81, 86–88] |
| Polyphenols | Formononetin | Red clover | Weight drop-induced TBI rats | Reduced brain edema, pathological lesions, inflammatory response, and oxidative damage | NSS; brain water content; H&E and Nissl staining; neuronal ultrastructural organization; SOD; GPx; MDA; TNF-α; IL-6; COX-2; IL-10; Nrf2 | Nrf2 pathway | [91, 95] |
| Polyphenols | Baicalin | Scutellaria baicalensis | Weight drop-induced TBI rats | Improved behavioral function and neuronal survival; reduced brain edema, oxidative damage, BBB disruption, and mitochondrial apoptosis | NSS; brain water content; grip test score; EB leakage; Nissl and TUNEL staining; cleaved caspase 3; Bcl-2; cytochrome c; p53; SOD; MDA; GPx; NeuN; Nrf2; HO-1; NQO1; AMPK; mTOR; LC3; Beclin-1; p62 | Akt/Nrf2 pathway | [96, 97] |
| Polyphenols | Catechin | Cocoa, tea, grapes, etc. | CCI- or weight drop-induced TBI rats | Improved long-term neurological outcomes, neuronal survival, and white matter recovery; reduced brain edema, brain lesion volume, neurodegeneration, inflammatory response, BBB disruption, neutrophil infiltration, and oxidative damage | NSS; brain water content; brain infarct volume; forelimb score; hindlimb score; latency; quadrant time; EB extravasation; ZO-1; Occludin; TNF-α; IL-1β; IL-6; iNOS; arginase; TUNEL, PI, FJB, Cresyl violet, and MPO staining; myelin; caspase 3; caspase 8; Bcl-2; Bax; BDNF; ROS; MMP-2; MMP-9; Nrf2; Keap1; SOD1; HO-1; NQO1; NF-κB | Nrf2-dependent and Nrf2-independent pathways | [103, 104] |
| Polyphenols | Fisetin | Cotinus coggygria, onions, cucumbers, etc. | Weight drop-induced TBI mice | Improved neurological function; reduced cerebral edema, brain lesion, oxidative damage, and BBB disruption | NSS; brain water content; grip score; EB extravasation; lesion volume; MDA; GPx; Nissl and TUNEL staining; caspase 3; Bcl-2; Bax; Nrf2; HO-1; NQO1; TLR4; NF-κB; NeuN; TNF-α; IL-1β; IL-6; MMP-9; ZO-1; EB leakage | Nrf2-ARE signaling pathway | [109, 110] |
| Polyphenols | Luteolin | Carrots, green tea, celery, etc. | Marmarou's weight drop-induced TBI mice/rats; scratch injury-induced TBI primary neurons | Improved motor performance and learning and memory; reduced cerebral edema, apoptosis index, and oxidative damage | Latency time; brain water content; grip score; MDA; GPx; catalase; SOD; TUNEL, H&E, Cresyl violet, and TB staining; ROS; LDH release assay; Nrf2; HO-1; NQO1 | Nrf2-ARE signaling pathway | [116, 117] |
| Polyphenols | Isoliquiritigenin | Sinofranchetia chinensis, Glycyrrhiza uralensis, and Dalbergia odorifera | CCI-induced TBI mice/rats; OGD-induced SH-SY5Y cells | Improved motor performance, cognitive function, and cell viability; reduced cerebral edema, neuronal apoptosis, inflammatory response, BBB damage, and oxidative damage | Garcia neuroscore; MWM test; beam-balance latency; beam-walk latency; brain water content; contusion volume; EB extravasation; apoptosis rate; MDA; GPx; SOD; H2O2; H&E and Nissl staining; GFAP; NFL; AQP4; caspase 3; Bcl-2; Bcl-xL; Bax; Nrf2; HO-1; NQO1; TNF-α; INF-γ; IL-1β; IL-6; IL-10; Iba-1; CD68; AKT; GSK3β; p120; Occludin; NF-κB; IκB; CCK-8 assay | Nrf2-ARE signaling pathway | [121–123] |
| Polyphenols | Tannic acid | Green and black tea, nuts, fruits, and vegetables | CCI-induced TBI mice/rats; OGD-induced SH-SY5Y cells | Improved behavioral performance; reduced cerebral edema, neuronal apoptosis, inflammatory response, and oxidative damage | Grip test score; rotarod test; beam balance; brain water content; GSH; LPO; GST; GPx; CAT; SOD; Nissl staining; caspase 3; Bcl-2; Bax; PARP; Nrf2; PGC-1α; Tfam; HO-1; NQO1; TNF-α; IL-1β; 4-HNE; GFAP | PGC-1α/Nrf2/HO-1 pathway | [133] |
| Polyphenols | Ellagic acid | Various berries, walnuts, and nuts | Experimental diffuse TBI rats; CCl4-induced brain injury rats | Improved memory, hippocampal electrophysiology, and long-term potentiation deficit; reduced neuronal apoptosis, inflammatory response, oxidative damage, and BBB disruption | Initial latency; step-through latency; EB leakage; NSS; MDA; GSH; CAT; caspase 3; Bcl-2; NF-κB; PARP; Nrf2; COX-2; VEGF; TNF-α; IL-1β; IL-6 | Nrf2 signaling pathway | [134, 135] |
| Polyphenols | Breviscapine | Erigeron | Weight drop- or CCI-induced TBI rats | Improved neurobehavior; reduced neuronal apoptosis, inflammatory response, and oxidative damage | NSS; TUNEL staining; MDA; GSH; CAT; caspase 3; Bcl-2; Bax; IL-6; Nrf2; HO-1; NQO1; GSK3β; SYP | Nrf2 signaling pathway | [138–140] |
| Terpenoids | Asiatic acid | Centella asiatica | CCI-induced TBI rats | Improved neurological deficits; inhibited brain edema, neuronal apoptosis, and oxidative damage | NSS; brain water content; TUNEL staining; MDA; 4-HNE; 8-OHdG; Nrf2; HO-1 | Nrf2 signaling pathway | [149] |
| Terpenoids | Aucubin | Eucommia ulmoides | Weight drop-induced TBI mice; H2O2-induced primary cortical neurons | Improved neurological deficits and cognitive function; reduced brain edema, neuronal apoptosis and loss, inflammatory response, and oxidative damage | NSS; brain water content; TUNEL and Nissl staining; MWM test; Bcl-2; Bax; CC3; MAP2; MMP-9; MDA; SOD; GSH; GPx; 8-OHdG; NeuN; Iba-1; HMGB1; TLR4; MyD88; NF-κB; iNOS; COX-2; IL-1β; Nrf2; HO-1; NQO1 | Nrf2 signaling pathway | [155] |
| Terpenoids | Ursolic acid | Apples, bilberries, lavender, hawthorn, etc. | Weight drop-induced TBI mice | Improved neurobehavioral and mitochondrial function; reduced brain edema, oxidative damage, and neuronal cytoskeletal degradation | NSS; brain water content; TUNEL and Nissl staining; MDA; SOD; GPx; AKT; 4-HNE; 3-NT; ADP rate; succinate rate; spectrin; Nrf2; HO-1; NQO1 | AKT/Nrf2 signaling pathway | [158] |
| Terpenoids | Carnosic acid | Rosmarinus officinalis and Salvia officinalis | CCI-induced acute post-TBI mice | Improved motor and cognitive function and neuronal viability; reduced brain edema, neuronal apoptosis and loss, inflammatory response, and oxidative damage | Duration of apnea; mitochondrial respiration; Barnes maze test; novel object recognition (NOR) task; GFAP; Iba-1; NeuN; MAP2; vGlut1; HO-1 | Nrf2-ARE signaling pathway | [157–159] |
| Natural pigments | Fucoxanthin | Brown seaweeds | Weight drop-induced TBI mice; scratch injury-induced TBI primary cortical neurons | Improved neurobehavioral function and neuronal viability; reduced brain edema, neuronal apoptosis, and oxidative damage | NSS; grip test score; brain water content; lesion volume; TUNEL staining; caspase 3; PARP; cytochrome c; MDA; GPx; ROS; LC3; NeuN; p62; Nrf2; HO-1; NQO1 | Nrf2-ARE and Nrf2-autophagy pathways | [170] |
| Natural pigments | β-Carotene | Fungi, plants, and fruits | Weight drop-induced TBI mice | Improved neurological function; reduced brain edema, BBB disruption, neuronal apoptosis, and oxidative damage | Neurological deficit score; wire hanging; brain water content; EB extravasation; MDA; SOD; NeuN; Nissl and TUNEL staining; caspase 3; Bcl-2; Keap1; Nrf2; HO-1; NQO1 | Keap1-Nrf2 signaling pathway | [174] |
| Natural pigments | Astaxanthin | Salmon, rainbow trout, shrimp, and lobster | CCI- or weight drop-induced TBI mice; H2O2-induced primary cortical neurons | Improved neurological, motor, and cognitive function; reduced brain edema, BBB disruption, neuronal apoptosis, and oxidative damage | NSS; rotarod test time; neurological deficit scores; rotarod performance; beam walking score; wire hanging test; MWM test; brain water content; 8-OHdG; immobility time; latency to immobility; SOD1; MDA; H2O2; GSH; ROS; CC3; Nissl, Cresyl violet, and TUNEL staining; Prx2; SIRT1; ASK1; p38; NeuN; Bax; Bcl-2; caspase 3; Nrf2; HO-1; NQO1 | Nrf2 signaling pathway; SIRT1/Nrf2/Prx2/ASK1/p38 signaling pathway | [172, 173] |
| Natural pigments | Lutein | Calendula officinalis, spinach, and Brassica oleracea | CCI-induced sTBI mice; H2O2-induced primary cortical neurons | Improved motor and cognitive function; reduced brain edema, contusion volume, inflammatory response, and oxidative damage | Forelimb reaching test; immobility time; latency to immobility; brain water content; 8-OHdG; TNF-α; IL-1β; IL-6; MCP-1; ROS; SOD; GSH; ICAM-1; COX-2; NF-κB; ET-1; MDA; H2O2; CC3; Nissl, Cresyl violet, and TUNEL staining; Prx2; SIRT1; ASK1; p38; NeuN; Bax; Bcl-2; caspase 3; Nrf2; HO-1; NQO1 | ICAM-1/Nrf2 signaling pathway | [176, 177] |
| Others | Sodium aescinate | Aesculus chinensis Bunge and chestnut | Weight drop-induced TBI mice; scratch injury-induced primary cortical neurons | Improved neurological function; reduced brain edema, inflammatory response, and oxidative damage | NSS; brain water content; lesion volume; MDA; GPx; Nissl and TUNEL staining; Bax; Bcl-2; cytochrome c; caspase 3; cell survival; ROS; Nrf2; HO-1; NQO1 | Nrf2-ARE pathway | [187] |
| Others | Melatonin | Plants, animals, fungi, and bacteria | Marmarou's weight drop-induced TBI mice | Reduced brain edema, neuronal degeneration and apoptosis, and oxidative damage | Brain water content; MDA; 3-NT; GPx; SOD; FJC staining; NeuN; Beclin-1; Nrf2; HO-1; NQO1 | Nrf2-ARE pathway | [193] |
| Others | Sinomenine | Sinomenium acutum (Thunb.) Rehd. et Wils. and Sinomenium acutum var. cinereum Rehd. et Wils. | Marmarou's weight drop-induced TBI mice | Improved motor performance; reduced brain edema, neuronal apoptosis, and oxidative damage | Grip test score; brain water content; NeuN and TUNEL staining; Bcl-2; caspase 3; MDA; GPx; SOD; Nrf2; HO-1; NQO1 | Nrf2-ARE pathway | [196] |
| Others | Sulforaphane | Vegetables, including cabbage, broccoli, and cauliflower | CCI-induced TBI mice | Improved motor performance and cognitive function; reduced brain edema, BBB permeability, mitochondrial dysfunction, and oxidative damage | MWM test; EB extravasation; brain water content; Occludin; Claudin-5; RECA-1; vWF; EBA; ZO-1; AQP4; GPx; GSTα3; 4-HNE; ADP rate; succinate rate; NeuN; Nrf2; HO-1; NQO1 | Nrf2 signaling pathway | [158, 196–198] |

### 5.1. Quercetin

Quercetin, a flavonoid, is commonly found in dietary plants, including vegetables and fruits such as onions, tomatoes, soy, and beans [72]. Emerging evidence indicates that quercetin exerts a variety of pharmacological effects, mainly involving antioxidation, anti-inflammation, antivirus, anticancer, neuroprotection, and cardiovascular protection [73]. It is known that the inflammatory response promotes oxidative damage in TBI [74]. In weight drop injury (WDI)-induced TBI mice, quercetin was reported to significantly inhibit neuroinflammation-mediated oxidative stress and histological alterations, as demonstrated by the decreased lipid peroxidation and increased activities of SOD, catalase, and GPx [75]. Meanwhile, quercetin could significantly reduce the brain water content and improve the neurobehavioral status, which is closely associated with the activation of the Nrf2/HO-1 pathway [74]. The impairment of mitochondrial function leads to an increase in reactive oxygen species (ROS) production and damages mitochondrial proteins, DNA, and lipids [72].
Quercetin was reported to significantly inhibit mitochondrial damage in TBI male Institute of Cancer Research (ICR) mice, as evidenced by the decreased expression of Bax and increased levels of cytochrome c in mitochondria, the increased mitochondrial SOD and decreased mitochondrial MDA content, and the recovery of the mitochondrial membrane potential (MMP) and intracellular ATP content. The mechanistic study demonstrated that quercetin promoted the translocation of Nrf2 from the cytoplasm to the nucleus, suggesting that quercetin exerts neuroprotective effects in TBI mice by maintaining mitochondrial homeostasis through the activation of the Nrf2 signaling pathway [76]. In moderate TBI rats, quercetin inhibited oxidative-nitrosative stress by reducing the activity of NOS, including inducible nitric oxide synthase (iNOS) and constitutive nitric oxide synthase (cNOS), as well as the concentration of thiobarbituric acid (TBA)-reactive lipid peroxidation products in the cerebral hemisphere and periodontal tissues [77]. Therefore, quercetin exerts neuroprotective effects in TBI via multiple biological activities, including the inhibition of oxidative damage, nitrosative stress, and the inflammatory response, as well as the improvement of mitochondrial dysfunction and neuronal function, through the Nrf2 signaling pathway.

### 5.2. Curcumin

Curcumin, a polyphenol isolated from Curcuma longa rhizomes, has been reported to possess multiple biological activities, including antioxidative, anti-inflammatory, and anticancer effects [78]. Most importantly, curcumin has also been demonstrated to cross the BBB and exert neuroprotection in various neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS), via the inhibition of neuronal death and neuroinflammation [79]. In addition, emerging evidence indicates that curcumin has protective effects in TBI and activates the Nrf2 signaling pathway in vivo and in vitro [78, 80–82].
In mild fluid percussion injury (FPI)-induced TBI rats, curcumin significantly attenuated oxidative damage by decreasing oxidized protein levels and reversing the reduction in the levels of brain-derived neurotrophic factor (BDNF), synapsin I, and cyclic AMP (cAMP) response element-binding protein (CREB) [81]. Meanwhile, curcumin improved the cognitive and behavioral function of TBI rats [81, 83–85]. In addition, the intraperitoneal administration of curcumin could improve neurobehavioral function and decrease the brain water content in Feeney or Marmarou’s weight drop-induced TBI mice. Furthermore, curcumin reduced oxidative stress in the ipsilateral cortex by decreasing the level of MDA and increasing the levels of SOD and GPx, and it promoted neuronal regeneration and inhibited neuronal apoptosis [80, 85]. Moreover, curcumin inhibited the neuroinflammatory response, as demonstrated by the decreased number of myeloperoxidase (MPO)-positive cells and the decreased levels of proinflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin 6 (IL-6), and interleukin-1beta (IL-1β) [80]. The mechanistic study found that curcumin promoted the nuclear translocation of Nrf2 and increased the expression of downstream genes, including HO-1, NQO1, and GCLC, while the neuroprotective effects of curcumin, including antioxidation, antiapoptosis, and anti-inflammation, were attenuated in Nrf2-KO mice after TBI [80]. In addition, the anti-inflammatory effect of curcumin in TBI was also regulated by the TLR4/MyD88/NF-κB signaling pathway [86] and aquaporin-4 (AQP4) [87]. Diffuse axonal injury (DAI), a type of TBI, is recognized as an important cause of long-term motor and cognitive problems, and curcumin could ameliorate axonal injury and neuronal degeneration in rats after DAI.
In addition, curcumin overcame endoplasmic reticulum (ER) stress by strengthening the unfolded protein response (UPR) and reducing the levels of plasma tau, β-APP, and NF-H. The mechanistic study revealed that curcumin activated the PERK/Nrf2 signaling pathway [88]. Most importantly, the combined use of curcumin and candesartan, an angiotensin II receptor blocker used for the treatment of hypertension, showed better antioxidative, antiapoptotic, and anti-inflammatory effects than curcumin or candesartan alone [89]. In addition, tetrahydrocurcumin, a metabolite of curcumin, could also alleviate brain edema, reduce neuronal apoptosis, and improve neurobehavioral function via the Nrf2 signaling pathway in weight drop-induced TBI mice [90]. Taken together, curcumin and its metabolites are promising candidates for the treatment of TBI. ### 5.3. Formononetin Formononetin, an O-methylated isoflavone phytoestrogen, is commonly found in plants such as red clover [91]. Accumulating studies show that formononetin has various biological activities, including the improvement of blood microcirculation and anticancer and antioxidative effects [92]. In addition, formononetin exhibits neuroprotection in AD, PD, spinal cord injury, and TBI [93, 94]. It has been reported that the administration of formononetin could decrease the neurological score and brain water content of TBI rats [91]. In addition, HE staining images showed that formononetin attenuated the edema and necrosis in the lesioned zones of the brain and increased the number of neural cells. At the same time, oxidative stress was significantly reversed by formononetin, as indicated by the increased SOD and GPx activity and decreased MDA content. The inflammatory cytokines TNF-α and IL-6, as well as the mRNA level of cyclooxygenase-2 (COX-2), were also reduced by formononetin.
The mechanistic study revealed that formononetin increased the protein expression of Nrf2 [95]. Furthermore, the same research team found that microRNA-155 (miR-155) is involved in the neuroprotection of formononetin in TBI. Pretreatment with formononetin significantly increased the expression of miR-155 and HO-1, accompanied by the downregulation of BACH1 [91]. All evidence suggests that formononetin provides neuroprotection in TBI via the Nrf2/HO-1 signaling pathway. ### 5.4. Baicalin Baicalin, known as 7-D-glucuronic acid-5,6-dihydroxyflavone, is a major flavone found in the radix of Scutellaria baicalensis [96]. Emerging evidence indicates that baicalin can cross the BBB and exert neuroprotective effects in various CNS-related diseases, including AD, cerebral ischemia, spinal cord injury, and TBI [97]. In addition, baicalin was reported to activate the Nrf2 signaling pathway and attenuate subarachnoid hemorrhagic brain injury [98]. In weight drop-induced TBI mice, baicalin significantly reduced the neurological severity score (NSS) and the brain water content and inhibited neuronal apoptosis, as evidenced by the decreased number of terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive neurons, the decreased Bax/Bcl-2 ratio, and the reduced cleavage of caspase 3. Meanwhile, baicalin attenuated oxidative damage by decreasing MDA levels and increasing GPx and SOD activity and expression. The mechanistic study found that baicalin increased the expression of Nrf2, promoted the nuclear translocation of Nrf2, and upregulated the mRNA and protein expression of HO-1 and NQO1, while treatment with LY294002, a PI3K inhibitor, reversed the effects of baicalin on antiapoptosis, antioxidation, and activation of the Nrf2 signaling pathway, suggesting that baicalin exerts neuroprotective effects via the Akt/Nrf2 pathway in TBI [96]. Autophagy is known to play a protective role in neurodegenerative diseases.
Furthermore, the same research team found that baicalin induced autophagy, alleviated BBB disruption, and inhibited neuronal apoptosis in mice after TBI, while co-treatment with 3-MA, an autophagy inhibitor, partly abolished the neuroprotective effect of baicalin. Therefore, baicalin provides a beneficial effect via the Nrf2-regulated antioxidative pathway and autophagy induction. ### 5.5. Catechin Catechin is a flavan-3-ol and belongs to the natural polyphenols [99]. It is a plant secondary metabolite and a potent antioxidant [100]. Structurally, it has four diastereoisomers: two with trans configuration, (+)- and (-)-catechin, and two with cis configuration, (+)- and (-)-epicatechin [101]. They are commonly found in foods and fruits such as cocoa, tea, and grapes. The pharmacological activity of catechin mainly involves antioxidative, anti-inflammatory, antifungal, antidiabetic, antibacterial, and antitumor effects [102]. In addition, catechin also exhibits neuroprotective effects in CCI-induced TBI rats by inhibiting the disruption of the BBB and excessive inflammatory responses [103]. The expression of junction proteins associated with BBB integrity, including occludin and zonula occludens protein-1 (ZO-1), was increased, while the levels of proinflammatory mediators, including IL-1β, iNOS, and IL-6, were decreased by catechin. At the same time, catechin significantly alleviated brain damage, as revealed by the decrease in brain water content and brain infarction volume, and improved motor and cognitive deficits [103]. In addition, catechin inhibited cell apoptosis and induced neurotrophic factors in rats after TBI [104]. In CCI-induced TBI mice, the administration of epicatechin significantly attenuated neutrophil infiltration and oxidative damage. Specifically, epicatechin could reduce lesion volume, edema, and cell death, as well as improve neurological function, cognitive performance, and depression-like behaviors.
In addition, epicatechin decreased white matter injury, HO-1 expression, and the deposition of ferric iron. The mechanistic study found that epicatechin decreased Keap1 expression while increasing the nuclear translocation of Nrf2. Meanwhile, epicatechin reduced the activity of matrix metallopeptidase 9 (MMP9) and increased the expression of SOD1 and NQO1 [102]. Therefore, epicatechin exerts neuroprotective effects in TBI mice via modulating the Nrf2-regulated oxidative stress response and inhibiting iron deposition. ### 5.6. Fisetin Fisetin, also known as 3,3′,4′,7-tetrahydroxyflavone, is a flavonol compound that was first extracted from Cotinus coggygria by Jacob Schmid in 1886 [105], and its structure was elucidated by Josef Herzig in 1891. In addition, fisetin is also found in many vegetables and fruits, such as onions, cucumbers, persimmons, strawberries, and apples [106]. Emerging evidence indicates that fisetin, a potent antioxidant, possesses multiple biological activities, including anti-inflammatory, antiviral, anticarcinogenic, and other effects [107]. Fisetin also exhibits neuroprotective effects against oxidative stress in AD, PD, and other diseases [108]. In addition, fisetin showed protective effects in weight drop-induced TBI mice, as shown by the decreased NSS, brain water content, Evans blue (EB) extravasation, and lesion volume of brain tissue, as well as the increased grip test score. Meanwhile, the MDA level was decreased and GPx activity was increased by fisetin, suggesting that fisetin provides a neuroprotective effect via suppressing TBI-induced oxidative stress [109]. In addition, Nissl staining of the neuronal cell outline and structure showed that fisetin improved neuronal viability, while neuronal apoptosis was inhibited by fisetin, as demonstrated by the decreased TUNEL signals, the reduced Bax/Bcl-2 ratio, and the reduced protein expression of cleaved caspase-3.
The mechanistic study demonstrated that fisetin promoted the nuclear translocation of Nrf2 and increased the expression of HO-1 and NQO1, while the KO of Nrf2 abrogated the neuroprotective effects of fisetin, including antioxidation and antiapoptosis [109]. Moreover, fisetin was reported to exert anti-inflammatory effects in TBI mice via the TLR4/NF-κB pathway, and the levels of TNF-α, IL-1β, and IL-6 were significantly decreased. Meanwhile, the BBB disruption of TBI mice was attenuated by fisetin [110]. Therefore, fisetin exerts neuroprotective effects in TBI via the Nrf2-regulated oxidative stress response and the NF-κB-mediated inflammatory signaling pathway. ### 5.7. Luteolin Luteolin, a flavonoid, is abundant in fruits and vegetables such as carrots, green tea, and celery [111]. Emerging evidence indicates that luteolin has a wide variety of biological activities, including antioxidative and anti-inflammatory effects [112, 113]. In addition, several studies have demonstrated the neuroprotective effect of luteolin in multiple in vivo and in vitro models [114, 115]. For example, luteolin could recover motor performance and reduce post-traumatic cerebral edema in weight drop-induced TBI mice. Oxidative damage was reduced by luteolin, as demonstrated by the decrease in MDA levels and the increase in GPx activity in the ipsilateral cortex. The mechanistic study found that luteolin promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 [116]. In addition, luteolin significantly improved learning and memory impairment in rats after TBI, which was closely associated with the attenuation of oxidative damage, indicated by the decreased MDA level and increased SOD and CAT activity [117, 118]. Therefore, the Nrf2-regulated oxidative stress response plays an important role in the action of luteolin against TBI. ### 5.8. Isoliquiritigenin Isoliquiritigenin, a chalcone compound, is often found in plants including Sinofranchetia chinensis, Glycyrrhiza uralensis, and Dalbergia odorifera [119]. Isoliquiritigenin has been reported to attenuate oxidative damage, inhibit the inflammatory response, and suppress tumor growth [120]. In addition, isoliquiritigenin activates the Nrf2 signaling pathway to exert antioxidative and anti-inflammatory effects in multiple cellular and animal models. Isoliquiritigenin also exerts a neuroprotective effect in CCI-induced TBI mice via the Nrf2-ARE signaling pathway [121]. For example, isoliquiritigenin increased the Garcia Neuro score and decreased the brain water content, the expression of aquaporin 4 (AQP4), and EB leakage. Glial activation, indicated by GFAP expression, was inhibited, and neuronal viability, shown by neurofilament light (NFL) expression, was increased by isoliquiritigenin. In addition, isoliquiritigenin increased the number of Nissl staining-positive neurons and inhibited neuronal apoptosis, as evidenced by the decreased expression of cleaved caspase-3. Furthermore, oxidative damage was ameliorated by isoliquiritigenin, as shown by the increased GPx activity and SOD levels and the decreased H2O2 concentration and MDA levels. However, the KO of Nrf2 significantly attenuated the neuroprotective effect of isoliquiritigenin in mice after TBI. The mechanistic study demonstrated that isoliquiritigenin increased the nuclear translocation of Nrf2 and the protein and mRNA expression of NQO1 and HO-1. In an in vitro study, isoliquiritigenin also activated the Nrf2-ARE signaling pathway and increased cell viability in oxygen and glucose deprivation (OGD)-induced SH-SY5Y cells.
In addition, isoliquiritigenin inhibited shear stress-induced cell apoptosis in SH-SY5Y cells, and it suppressed the inflammatory response and inhibited neuronal apoptosis in CCI-induced TBI mice or rats via the PI3K/AKT/GSK-3β/NF-κB signaling pathway [122, 123]. Moreover, isoliquiritigenin protected against BBB damage in mice after TBI via inhibiting the PI3K/AKT/GSK-3β pathway [123]. Therefore, isoliquiritigenin may be a promising agent for the treatment of TBI via the inhibition of oxidative stress, the inflammatory response, and BBB disruption. ### 5.9. Tannic Acid Tannic acid, a natural polyphenol, is commonly found in green and black teas as well as nuts, fruits, and vegetables [124]. Emerging evidence indicates that tannic acid possesses multiple biological activities, such as antioxidative, anti-inflammatory, antiviral, and antiapoptotic effects [125–127]. In addition, tannic acid exhibits neuroprotective effects, as shown by the improvement of behavioral deficits and the inhibition of neurodegeneration [128]. Recently, tannic acid has been shown to ameliorate the oxidative damage and behavioral impairments of mice after TBI [128]. For example, tannic acid significantly increased the grip test score and the motor coordination time and decreased the stay time in the balance test. In addition, tannic acid inhibited neuronal damage and reduced the brain water content of TBI mice. A further study found that tannic acid could attenuate oxidative stress, as evidenced by increased glutathione (GSH) levels, 1-chloro-2,4-dinitrobenzene (CDNB) conjugation, NADPH oxidation, and H2O2 consumption. In addition, apoptosis-related proteins, including cleaved caspase-3 and poly(ADP-ribose) polymerase (PARP), as well as the Bax/Bcl-2 ratio, were significantly reduced by tannic acid. Meanwhile, the inflammatory response, indicated by the increased levels of TNF-α and IL-1β and GFAP immunofluorescence intensity, was also suppressed.
The mechanistic study demonstrated that tannic acid increased the protein expression of Nrf2, PGC-1α, Tfam, and HO-1. Therefore, tannic acid exerts a neuroprotective effect in TBI via activating the PGC-1α/Nrf2/HO-1 signaling pathway. ### 5.10. Ellagic Acid Ellagic acid, a natural polyphenol, is commonly found in various berries, such as blueberries, strawberries, and blackberries, as well as in walnuts and other nuts [129]. Several studies show that ellagic acid exerts multiple biological activities, including anti-inflammatory, antioxidative, antifibrosis, antidepressant, and neuroprotective effects [130]. In addition, ellagic acid also exhibits protective effects in various brain injuries, such as neonatal hypoxic brain injury, cerebral ischemia/reperfusion injury, carbon tetrachloride (CCl4)-induced brain damage, and TBI [131–133]. Here, we summarize the neuroprotective effect of ellagic acid in TBI and its mechanism of action. In experimental diffuse TBI rats, treatment with ellagic acid significantly improved memory and hippocampal electrophysiology deficits [134]. Meanwhile, the inflammatory responses indicated by the elevated TNF-α, IL-1β, and IL-6 levels were reduced by ellagic acid [134, 135]. In addition, ellagic acid could also decrease the BBB permeability of mice after TBI [135]. In CCl4-induced brain injury rats, ellagic acid decreased MDA levels and increased GSH content and CAT activity. The mechanistic study demonstrated that ellagic acid inhibited the protein expression of NF-κB and COX-2 while increasing the protein expression of Nrf2 [133]. Therefore, ellagic acid exerts an antioxidative effect via activating the Nrf2 pathway and exhibits anti-inflammatory effects via inhibiting the NF-κB pathway in TBI. ### 5.11. Breviscapine Breviscapine is an aglycone flavonoid isolated from Erigeron plants [136].
Modern pharmacological studies indicate that breviscapine can dilate blood vessels and improve microcirculation, suggesting its potential therapeutic role in cardiovascular and CNS-related diseases [137]. In addition, breviscapine, acting as a scavenger of oxygen free radicals, has been demonstrated to improve ATPase and SOD activity. Recently, breviscapine has also been reported to improve neurobehavior and decrease neuronal apoptosis in TBI mice, which is closely associated with the translocation of Nrf2 from the cytoplasm into the nucleus and the subsequent upregulation of Nrf2 downstream factors such as HO-1 and NQO1 [138]. In addition, the inhibition of glycogen synthase kinase-3β (GSK-3β) and IL-6 by breviscapine is associated with its neuroprotective effect in TBI [139, 140]. Therefore, breviscapine exerts neuroprotective effects in TBI via antioxidative, antiapoptotic, and anti-inflammatory actions. ### 5.12. Asiatic Acid Asiatic acid, a pentacyclic triterpene, is isolated from natural plants such as Centella asiatica [141]. Studies have shown that asiatic acid exhibits potent anti-inflammatory and antioxidative properties, which contribute to its protective effects in spinal cord injury, ischemic stroke, cardiac hypertrophy, liver injury, and lung injury through multiple mechanisms [142]. For example, the administration of asiatic acid could increase the Basso, Beattie, and Bresnahan scores and the inclined plane test score in spinal cord injury (SCI) rats. Meanwhile, asiatic acid inhibited the inflammatory response by reducing the levels of IL-1β, IL-18, IL-6, and TNF-α, and it counteracted oxidative stress by decreasing ROS, H2O2, and MDA levels while increasing SOD activity and glutathione production. The underlying mechanisms include the activation of Nrf2/HO-1 and the inhibition of the NLRP3 inflammasome pathway. In addition, asiatic acid could alleviate tert-butyl hydroperoxide (tBHP)-induced oxidative stress in HepG2 cells.
The researchers found that asiatic acid significantly inhibited tBHP-induced cytotoxicity, apoptosis, and the generation of ROS, which was attributed to the activation of the Keap1/Nrf2/ARE signaling pathway and the upregulation of downstream genes including HO-1, NQO1, and GCLC [143]. In a CCI-induced TBI model, the administration of asiatic acid significantly improved neurological deficits and reduced brain edema. Meanwhile, asiatic acid counteracted oxidative damage, as evidenced by the reduced levels of MDA, 4-HNE, and 8-hydroxy-2′-deoxyguanosine (8-OHdG). The mechanistic study further found that asiatic acid could increase the mRNA and protein expression of Nrf2 and HO-1 [144]. Taken together, asiatic acid improves neurological deficits in TBI via activating the Nrf2/HO-1 signaling pathway. ### 5.13. Aucubin Aucubin, an iridoid glycoside isolated from natural plants such as Eucommia ulmoides [145], is reported to have several pharmacological effects, including antioxidation, antifibrosis, antiageing, and anti-inflammation [145–147]. Recently, emerging evidence indicates that aucubin exerts neuroprotective effects via antioxidation and anti-inflammation [148]. In addition, aucubin also inhibited lipid accumulation and attenuated oxidative stress via activating the Nrf2/HO-1 and AMP-activated protein kinase (AMPK) signaling pathways [147]. Moreover, aucubin inhibited lipopolysaccharide (LPS)-induced acute pulmonary injury through the regulation of the Nrf2 and AMPK pathways [149]. In H2O2-treated primary cortical neurons and a weight drop-induced TBI mouse model, aucubin was found to significantly decrease the excessive generation of ROS and inhibit neuronal apoptosis. In addition, aucubin could reduce brain edema, improve cognitive function, decrease neural apoptosis and loss of neurons, attenuate oxidative stress, and suppress the inflammatory response in the cortex of TBI mice.
The mechanistic study demonstrated that aucubin activated the Nrf2-ARE signaling pathway and upregulated the expression of HO-1 and NQO1, while the neuroprotective effect of aucubin was abolished in Nrf2-KO mice after TBI [150]. Therefore, aucubin provides a protective effect in TBI via activating the Nrf2 signaling pathway. ### 5.14. Ursolic Acid Ursolic acid, a pentacyclic triterpenoid compound, is widely found in various fruits and vegetables, such as apples, bilberries, lavender, and hawthorn [151]. Ursolic acid has been reported to possess multiple pharmacological effects, including anti-inflammatory, antioxidative, antifungal, antibacterial, and neuroprotective properties [152]. In addition, ursolic acid activates the Nrf2/ARE signaling pathway to exert a protective effect in cerebral ischemia, liver fibrosis, and TBI [153]. In weight drop-induced TBI mice, the administration of ursolic acid could improve neurobehavioral functions and reduce the cerebral edema of mice after TBI. In addition, ursolic acid inhibited neuronal apoptosis, as shown by Nissl staining and TUNEL staining. Meanwhile, ursolic acid ameliorated oxidative stress by increasing SOD and GPx activity and decreasing MDA levels. The mechanistic study demonstrated that ursolic acid promoted the nuclear translocation of Nrf2 and increased the levels of downstream targets including HO-1 and NQO1, while the KO of Nrf2 could partly abolish the protective effect of ursolic acid in TBI [153]. Therefore, ursolic acid exerts a neuroprotective effect in TBI at least partly via activating the Nrf2 signaling pathway. ### 5.15. Carnosic Acid Carnosic acid, a natural benzenediol abietane diterpene, is found in Rosmarinus officinalis and Salvia officinalis [154]. Carnosic acid and carnosol are the two major antioxidants in Rosmarinus officinalis [155].
Emerging evidence indicates that carnosic acid is a potent activator of Nrf2 and exerts a neuroprotective effect in various neurodegenerative diseases [156]. In CCI-induced acute post-TBI mice, carnosic acid could reduce TBI-induced oxidative damage by decreasing the levels of 4-HNE and 3-NE in brain tissues. A further study demonstrated that carnosic acid maintained mitochondrial respiratory function and attenuated oxidative damage by reducing the amount of 4-HNE bound to cortical mitochondria [157, 158]. In addition, carnosic acid showed a potent neuroprotective effect in repetitive mild TBI (rmTBI), as evidenced by the significant improvement of motor and cognitive performance. Meanwhile, the expression of GFAP and Iba1 was inhibited, suggesting that carnosic acid inhibited neuroinflammation in TBI [159]. Therefore, carnosic acid exerts a neuroprotective effect via inhibiting mitochondrial oxidative damage in TBI through the Nrf2-ARE signaling pathway. ### 5.16. Fucoxanthin Fucoxanthin, a carotenoid isolated from natural sources such as seaweeds and microalgae, is considered a potent antioxidant [160]. Several studies show that fucoxanthin exerts various pharmacological activities, such as antioxidative, anti-inflammatory, anticancer, and health-protective effects [161]. In addition, fucoxanthin exerts anti-inflammatory effects in LPS-induced BV-2 microglial cells via activating the Nrf2/HO-1 signaling pathway [162] and inhibits the overactivation of the NLRP3 inflammasome via the NF-κB signaling pathway in bone marrow-derived immune cells and astrocytes [163]. In mouse hepatic BNL CL.2 cells, fucoxanthin was reported to upregulate the mRNA and protein expression of HO-1 and NQO1 via increasing the phosphorylation of ERK and p38 and activating the Nrf2/ARE pathway, which contributes to its antioxidant activity [164].
Recently, it has been reported that the neuroprotective effect of fucoxanthin in TBI mice is regulated via the Nrf2-ARE and Nrf2-autophagy pathways [165]. In this study, the researchers found that fucoxanthin alleviated neurological deficits, cerebral edema, brain lesions, and neuronal apoptosis of TBI mice. In addition, fucoxanthin significantly decreased the generation of MDA and increased the activity of GPx, suggesting its antioxidative effect in TBI. Furthermore, in vitro experiments revealed that fucoxanthin could improve neuronal survival and reduce the production of ROS in primary cultured neurons. A further mechanistic study revealed that fucoxanthin activated the Nrf2-ARE pathway and autophagy in vivo and in vitro, while fucoxanthin failed to activate autophagy and exert a neuroprotective effect in Nrf2−/− mice after TBI. Therefore, fucoxanthin activates the Nrf2 signaling pathway and induces autophagy to exert a neuroprotective effect in TBI. ### 5.17. β-Carotene β-Carotene, abundant in fungi, plants, and fruits, is a member of the carotenes and belongs to the terpenoids [166]. Accumulating studies indicate that β-carotene, acting as an antioxidant, has potential therapeutic effects in various diseases, such as cardiovascular disease, cancer, and neurodegenerative diseases [167, 168]. Meanwhile, the neuroprotective effect of β-carotene was also reported in CCI-induced TBI mice: the administration of β-carotene significantly improved neurological function and reduced brain edema, as evidenced by the decreased neurological deficit score and brain water content and the increased wire-hanging time of mice after TBI. In addition, β-carotene could maintain BBB integrity, as indicated by decreased EB extravasation, and ameliorate oxidative stress, as shown by the increased SOD levels and decreased MDA levels. The mechanistic study demonstrated that β-carotene activated the Keap1-Nrf2 signaling pathway and promoted the expression of HO-1 and NQO1 [169].
Therefore, β-carotene provides a neuroprotective effect in TBI via inhibiting oxidative stress through the Nrf2 pathway. ### 5.18. Astaxanthin Astaxanthin is a carotenoid commonly found in certain marine organisms, such as salmon, rainbow trout, shrimp, and lobster [170]. Emerging evidence indicates that astaxanthin exhibits multiple biological activities, including antiageing, anticancer, heart protection, and neuroprotection [171]. Recently, astaxanthin was reported to provide neuroprotection in CCI-induced TBI mice, decreasing the NSS score and immobility time and increasing the rotarod time and latency to immobility. In addition, astaxanthin increased SOD1 levels and inhibited the protein expression of cleaved caspase 3 and the number of TUNEL-positive cells, suggesting that astaxanthin exerted antioxidative and antiapoptotic effects. The mechanistic study demonstrated that astaxanthin increased the protein and mRNA expression of Nrf2, HO-1, NQO1, and SOD1 [172]. Moreover, in weight drop-induced TBI mice, astaxanthin significantly reduced brain edema and improved behavioral functions, including neurological scores, rotarod performance, beam walking performance, and falling latency during the hanging test. In addition, astaxanthin improved neuronal survival, as indicated by Nissl staining. Furthermore, astaxanthin exerted an antioxidative effect by increasing SOD1 protein expression and inhibited neuronal apoptosis by reducing the level of cleaved caspase 3 and the number of TUNEL-positive cells. The mechanistic study revealed that astaxanthin promoted the activation of the Nrf2 signaling pathway, as demonstrated by the increased mRNA levels and protein expression of Nrf2, HO-1, and NQO1, while the inhibition of Prx2 or SIRT1 reversed the antioxidative and antiapoptotic effects of astaxanthin. Therefore, astaxanthin activated the SIRT1/Nrf2/Prx2/ASK1 signaling pathway in TBI.
Moreover, astaxanthin also provided a neuroprotective effect in H2O2-induced primary cortical neurons by reducing oxidative damage and inhibiting apoptosis via the SIRT1/Nrf2/Prx2/ASK1/p38 signaling pathway [173]. Therefore, astaxanthin exerts neuroprotective effects, including antioxidation and antiapoptosis, via activating the Nrf2 signaling pathway in TBI. ### 5.19. Lutein Lutein, a natural carotenoid, is commonly found in a variety of flowers, vegetables, and fruits, such as Calendula officinalis, spinach, and Brassica oleracea [174]. Accumulating studies demonstrate that lutein is a potent antioxidant and exhibits benefits in various diseases, including ischemia/reperfusion injury, diabetic retinopathy, heart disease, AD, and TBI [175]. In severe TBI rats, the administration of lutein significantly attenuated the inhibition of skilled motor function and reversed the increase in contusion volume of TBI rats. In addition, lutein suppressed the inflammatory response by decreasing the levels of TNF-α, IL-1β, IL-6, and monocyte chemoattractant protein-1 (MCP-1). Meanwhile, lutein decreased ROS production and increased SOD activity and GSH levels, suggesting that lutein attenuated TBI-induced oxidative damage. Moreover, the mechanistic study found that lutein inhibited the protein expression of intercellular adhesion molecule-1 (ICAM-1), COX-2, and NF-κB, while increasing the protein expression of ET-1 and Nrf2. Therefore, the neuroprotective effect of lutein in TBI may be regulated via the NF-κB/ICAM-1/Nrf2 signaling pathway [176]. It is known that zeaxanthin and lutein are isomers with identical chemical formulas. Recently, it was reported that lutein/zeaxanthin exerted a neuroprotective effect in TBI mice induced by a liquid nitrogen-cooled copper probe, and brain infarct volume and brain swelling were markedly reduced by lutein/zeaxanthin.
The protein expression of Growth-Associated Protein 43 (GAP43), ICAM, neural cell adhesion molecule (NCAM), brain-derived neurotrophic factor (BDNF), and Nrf2 were increased, while the protein expression of GFAP, IL-1β, IL-6, and NF-κB was inhibited by lutein/zeaxanthin [177]. Therefore, lutein/zeaxanthin presents antioxidative and anti-inflammatory effects via the Nrf2 and NF-κB signaling pathways. ### 5.20. Sodium Aescinate Sodium aescinate (SA) is a mixture of triterpene saponins isolated from the seeds ofAesculus chinensis Bunge and chestnut [178]. Amounting studies show that SA exerts anti-inflammatory, anticancer, and antioxidative effects [179–181]. In addition, SA has been reported to exhibit neuroprotective effect in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD mice and mutant huntingtin (mHTT) overexpressing HT22 cells [182, 183]. A recent study reported that SA could attenuate brain injury in weight drop-induced TBI mice [182]. The intraperitoneal administration of SA significantly decreased NSS, brain water content, and lesion volume of mice after TBI. A further study found that SA suppressed TBI-induced oxidative stress as evidenced by the decreased MDA levels and increased GPx activity. The Nissl staining images displayed that SA increased the viability of neurons, and the TUNEL staining showed that SA inhibited neuronal apoptosis. Meanwhile, SA decreased the ratio of Bax/Bcl-2 and the cleaved form of caspase-3, while increasing the release of cytochrome c from mitochondria into the cytoplasm. The mechanistic study demonstrated that SA promoted the translocation of Nrf2 from the cytoplasm into the nuclear and subsequently increased the expression of HO-1 and NQO1. Moreover, the neuroprotective effect and mechanism of action of SA have been confirmed in scratch injury-induced TBI primary neurons and Nrf2-KO mice after TBI. Therefore, SA exerts a neuroprotective effect in TBI via activating the Nrf2 signaling pathway. ### 5.21. 
Melatonin, commonly found in plants, animals, fungi, and bacteria, plays an important role in the regulation of the biological clock [184]. Melatonin as a dietary supplement is widely used to treat insomnia. Emerging evidence indicates that melatonin exerts neuroprotection in various diseases, including brain injury, spinal cord injury, and cerebral ischemia [185]. In addition, melatonin is demonstrated to be a potent antioxidant with the ability to reduce oxidative stress, inhibit the inflammatory response, and attenuate neuronal apoptosis [186]. In craniocerebral trauma, melatonin showed a neuroprotective effect due to its antioxidative and anti-inflammatory activities and its inhibitory effect on the activation of adhesion molecules [187]. In Marmarou's weight drop-induced TBI mice, melatonin significantly inhibited neuronal degeneration and reduced cerebral edema in the brain. Meanwhile, melatonin also attenuated the oxidative stress induced by TBI as evidenced by the decreased MDA levels and 3-NE expression, as well as increased GPx and SOD levels. The mechanistic study demonstrated that melatonin increased the nuclear translocation of Nrf2 and promoted the protein expression and mRNA levels of HO-1 and NQO1, while the KO of Nrf2 could partly reverse the neuroprotective effects of melatonin, including antioxidation, inhibition of neuronal degeneration, and alleviation of cerebral edema in mice after TBI. Therefore, melatonin provides a neuroprotective effect in TBI via the Nrf2-ARE signaling pathway [188]. Due to the complex pathophysiology of TBI, the combinational use of melatonin and minocycline, a bacteriostatic agent reported to inhibit neuroinflammation, did not exhibit a better neuroprotective effect than either agent alone; dosing and/or administration issues may account for this result [189]. Therefore, the optimal combination should be explored for the treatment of TBI.

## 5.22. Sinomenine
Sinomenine is an alkaloid isolated from the roots of climbing plants including Sinomenium acutum (Thunb.) Rehd. et Wils. and Sinomenium acutum var. cinereum Rehd. et Wils [190]. Sinomenine has been demonstrated to exhibit antihypertensive and anti-inflammatory effects and is commonly used to treat various forms of acute and chronic arthritis, rheumatism, and rheumatoid arthritis (RA). In addition, sinomenine provides a neuroprotective effect in Marmarou's weight drop-induced TBI mice. The administration of sinomenine significantly increased the grip test score and decreased brain water content. In addition, neuronal viability was increased by sinomenine, as shown by the increased NeuN-positive neurons and decreased TUNEL-positive neurons. Meanwhile, sinomenine increased Bcl-2 protein expression and decreased cleaved caspase-3 expression. Furthermore, sinomenine attenuated oxidative stress by decreasing MDA levels and increasing SOD and GPx activity. The mechanistic study revealed that sinomenine promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 in mice after TBI [191]. Therefore, sinomenine, acting as a potent anti-inflammatory agent, provides antiapoptotic and antioxidative effects in TBI via the Nrf2-ARE signaling pathway.

## 5.23. Sulforaphane

Sulforaphane, an isothiocyanate, is commonly found in certain kinds of vegetables, including cabbage, broccoli, and cauliflower [192]. Emerging evidence indicates that sulforaphane is widely used to treat prostate cancer, autism, asthma, and many other diseases [193–195]. In addition, sulforaphane also showed a neuroprotective effect in TBI. For example, sulforaphane decreased BBB permeability in CCI-induced TBI rats as evidenced by the decreased EB extravasation and relative fluorescence intensity of fluorescein [196].
Meanwhile, the loss of tight junction proteins (TJs), including occludin and claudin-5, was attenuated by sulforaphane. The mechanistic study found that sulforaphane increased the mRNA levels of Nrf2-driven genes, including GST-alpha3 (GSTα3), GPx, and HO-1, and enhanced the enzymatic activity of NQO1 in the brain and brain microvessels of TBI mice, suggesting that sulforaphane activated the Nrf2-ARE signaling pathway to protect BBB integrity. Furthermore, sulforaphane could reduce brain edema as evidenced by the decrease in brain water content, which was closely associated with the attenuation of AQP4 loss in the injury core and the further increase of AQP4 level in the penumbra region [197]. Moreover, the Morris water maze (MWM) test showed that sulforaphane improved spatial memory and spatial working memory. Meanwhile, TBI-induced oxidative damage was significantly attenuated by sulforaphane as demonstrated by the reduced 4-HNE levels [198]. In addition, sulforaphane also attenuated 4-HNE-induced dysfunction in isolated cortical mitochondria [158]. Taken together, sulforaphane provides a neuroprotective effect in TBI via the activation of the Nrf2-ARE signaling pathway.

## 5.1. Quercetin

Quercetin, a flavonoid, is commonly found in dietary plants, including vegetables and fruits such as onions, tomatoes, soy, and beans [72]. Emerging evidence indicates quercetin exerts a variety of pharmacological effects, mainly involving antioxidation, anti-inflammation, antivirus, anticancer, neuroprotection, and cardiovascular protection [73]. It is known that the inflammatory response promotes oxidative damage in TBI [74]. In weight drop injury (WDI)-induced TBI mice, quercetin was reported to significantly inhibit neuroinflammation-mediated oxidative stress and histological alterations, as demonstrated by the decreased lipid peroxidation and increased activities of SOD, catalase, and GPx [75].
Meanwhile, quercetin could significantly reduce the brain water content and improve the neurobehavioral status, which is closely associated with the activation of the Nrf2/HO-1 pathway [74]. The impairment of mitochondrial function leads to an increase in reactive oxygen species (ROS) production and damages mitochondrial proteins, DNA, and lipids [72]. Quercetin was reported to significantly inhibit mitochondrial damage in TBI male Institute of Cancer Research (ICR) mice as evidenced by the decreased expression of Bax and increased levels of cytochrome c in mitochondria, the increased mitochondrial SOD and decreased mitochondrial MDA content, and the recovery of mitochondrial membrane potential (MMP) and intracellular ATP content. The mechanistic study demonstrated that quercetin promoted the translocation of Nrf2 from the cytoplasm to the nucleus, suggesting that quercetin exerts neuroprotective effects in TBI mice via maintaining mitochondrial homeostasis through the activation of the Nrf2 signaling pathway [76]. In moderate TBI rats, quercetin inhibited oxidative-nitrosative stress by reducing the activity of NOS, including inducible nitric oxide synthase (iNOS) and constitutive nitric oxide synthase (cNOS), as well as the concentration of thiobarbituric acid (TBA)-reactive lipid peroxidation products in the cerebral hemisphere and periodontal tissues [77]. Therefore, quercetin exerts neuroprotective effects in TBI via multiple biological activities, including inhibition of oxidative damage, nitrosative stress, and the inflammatory response, as well as the improvement of mitochondrial dysfunction and neuronal function, through the Nrf2 signaling pathway.

## 5.2. Curcumin

Curcumin, a polyphenol isolated from Curcuma longa rhizomes, has been reported to possess multiple biological activities, including antioxidative, anti-inflammatory, and anticancer effects [78].
Most importantly, curcumin is also demonstrated to cross the BBB and exert neuroprotection in various neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS), via the inhibition of neuronal death and neuroinflammation [79]. In addition, emerging evidence indicates that curcumin presents protective effects in TBI and activates the Nrf2 signaling pathway in vivo and in vitro [78, 80–82]. In mild fluid percussion injury (FPI)-induced TBI rats, curcumin significantly attenuated oxidative damage by decreasing the oxidized protein levels and reversing the reduction in the levels of brain-derived neurotrophic factor (BDNF), synapsin I, and cyclic AMP (cAMP)-response element-binding protein 1 (CREB) [81]. Meanwhile, curcumin improved the cognitive and behavioral function of TBI rats [81, 83–85]. In addition, the intraperitoneal administration of curcumin could improve the neurobehavioral function and decrease the brain water content in Feeney or Marmarou's weight drop-induced TBI mice. Furthermore, curcumin reduced the oxidative stress in the ipsilateral cortex by decreasing the level of MDA and increasing the levels of SOD and GPx, as well as promoted neuronal regeneration and inhibited neuronal apoptosis [80, 85]. Moreover, curcumin inhibited the neuroinflammatory response as demonstrated by the decreased number of myeloperoxidase (MPO)-positive cells and decreased levels of cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin 6 (IL-6), and interleukin-1beta (IL-1β) [80]. The mechanistic study found that curcumin promoted the nuclear translocation of Nrf2 and increased the expression of downstream genes, including HO-1, NQO1, and GCLC, while the neuroprotective effects of curcumin, including antioxidation, antiapoptosis, and anti-inflammation, were attenuated in Nrf2-KO mice after TBI [80].
In addition, the anti-inflammatory effect of curcumin in TBI was also regulated by the TLR4/MyD88/NF-κB signaling pathway [86] and aquaporin-4 (AQP4) [87]. Diffuse axonal injury (DAI), a type of TBI, is recognized as an important cause of long-term motor and cognitive problems, while curcumin could ameliorate axonal injury and neuronal degeneration of rats after DAI. In addition, curcumin overcame endoplasmic reticulum (ER) stress by strengthening the unfolded protein response (UPR) process and reducing the levels of plasma tau, β-APP, and NF-H. The mechanistic study revealed that curcumin activated the PERK/Nrf2 signaling pathway [88]. Most importantly, the combinational use of curcumin and candesartan, an angiotensin II receptor blocker used for the treatment of hypertension, showed better antioxidative, antiapoptotic, and anti-inflammatory effects than curcumin or candesartan alone [89]. In addition, tetrahydrocurcumin, a metabolite of curcumin, could also alleviate brain edema, reduce neuronal apoptosis, and improve neurobehavioral function via the Nrf2 signaling pathway in weight drop-induced TBI mice [90]. Taken together, curcumin and its metabolites are useful for the treatment of TBI.

## 5.3. Formononetin

Formononetin, an O-methylated isoflavone phytoestrogen, is commonly found in plants such as red clover [91]. Accumulating studies show that formononetin has various biological activities, including the improvement of blood microcirculation and anticancer and antioxidative effects [92]. In addition, formononetin exhibits neuroprotection in AD, PD, spinal cord injury, and TBI [93, 94]. It has been reported that the administration of formononetin could decrease the neurological score and brain water content of TBI rats [91]. In addition, the HE staining images showed that formononetin attenuated the edema and necrosis in the lesioned zones of the brain and increased the number of neural cells.
At the same time, oxidative stress was significantly reversed by formononetin, as indicated by the increased SOD and GPx activity and decreased MDA content. The inflammatory cytokines, including TNF-α and IL-6, as well as the mRNA level of cyclooxygenase-2 (COX-2), were also reduced by formononetin. The mechanistic study revealed that formononetin increased the protein expression of Nrf2 [95]. Furthermore, the same research team found that microRNA-155 (miR-155) is involved in the neuroprotection of formononetin in TBI. The pretreatment of formononetin significantly increased the expression of miR-155 and HO-1, accompanied by the downregulation of BACH1 [91]. All evidence suggests that formononetin provides neuroprotection in TBI via the Nrf2/HO-1 signaling pathway.

## 5.4. Baicalin

Baicalin, known as 7-D-glucuronic acid-5,6-dihydroxyflavone, is a major flavone found in the radix of Scutellaria baicalensis [96]. Emerging evidence indicates that baicalin can cross the BBB and exert neuroprotective effects in various CNS-related diseases, including AD, cerebral ischemia, spinal cord injury, and TBI [97]. In addition, baicalin was reported to activate the Nrf2 signaling pathway and attenuate subarachnoid hemorrhagic brain injury [98]. In weight drop-induced TBI mice, baicalin significantly reduced the neurological severity score (NSS) and the brain water content, and inhibited neuronal apoptosis as evidenced by the decreased terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive neurons, Bax/Bcl-2 ratio, and cleavage of caspase-3. Meanwhile, baicalin attenuated oxidative damage by decreasing MDA levels and increasing GPx and SOD activity and expression.
The mechanistic study found that baicalin increased the expression of Nrf2 and promoted its nuclear translocation, and upregulated the mRNA and protein expression of HO-1 and NQO1, while treatment with the PI3K inhibitor LY294002 reversed the effects of baicalin on antiapoptosis, antioxidation, and activation of the Nrf2 signaling pathway, suggesting that baicalin exerts neuroprotective effects via the Akt/Nrf2 pathway in TBI [96]. It is known that autophagy plays a protective role in neurodegenerative diseases. Furthermore, the same research team found that baicalin induced autophagy, alleviated BBB disruption, and inhibited neuronal apoptosis in mice after TBI, while co-treatment with the autophagy inhibitor 3-MA partly abolished the neuroprotective effect of baicalin. Therefore, baicalin provides a beneficial effect via the Nrf2-regulated antioxidative pathway and autophagy induction.

## 5.5. Catechin

Catechin is a flavan-3-ol and belongs to a type of natural polyphenols [99]. It is a plant secondary metabolite and a potent antioxidant [100]. Structurally, it has four diastereoisomers: two isomers with trans configuration, (+)- and (−)-catechin, and two isomers with cis configuration, (+)- and (−)-epicatechin [101]. They are commonly found in food and fruits, such as cocoa, tea, and grapes. The pharmacological activity of catechin mainly involves antioxidative, anti-inflammatory, antifungal, antidiabetic, antibacterial, and antitumor effects [102]. In addition, catechin also exhibits neuroprotective effects in CCI-induced TBI rats by inhibiting the disruption of the BBB and excessive inflammatory responses [103]. The expression of junction proteins associated with BBB integrity, including occludin and zonula occludens protein-1 (ZO-1), was increased, while the levels of proinflammatory mediators, including IL-1β, iNOS, and IL-6, were decreased by catechin.
At the same time, catechin significantly alleviated the brain damage, as revealed by the decrease in brain water content and brain infarction volume, and improved motor and cognitive deficits [103]. In addition, catechin inhibited cell apoptosis and induced neurotrophic factors in rats after TBI [104]. In CCI-induced TBI mice, the administration of epicatechin significantly attenuated neutrophil infiltration and oxidative damage. Specifically, epicatechin could reduce lesion volume, edema, and cell death, as well as improve neurological function, cognitive performance, and depression-like behaviors. In addition, epicatechin decreased white matter injury, HO-1 expression, and the deposition of ferric iron. The mechanistic study found that epicatechin decreased Keap1 expression while increasing the nuclear translocation of Nrf2. Meanwhile, epicatechin reduced the activity of matrix metallopeptidase 9 (MMP9) and increased the expression of SOD1 and NQO1 [102]. Therefore, epicatechin exerts neuroprotective effects in TBI mice via modulating the Nrf2-regulated oxidative stress response and inhibiting iron deposition.

## 5.6. Fisetin

Fisetin, also known as 3,3′,4′,7-tetrahydroxyflavone, is a flavonol compound first extracted from Cotinus coggygria by Jacob Schmid in 1886 [105], and its structure was elucidated by Josef Herzig in 1891. In addition, fisetin is also found in many vegetables and fruits, such as onions, cucumbers, persimmon, strawberries, and apples [106]. Emerging evidence indicates that fisetin, acting as a potent antioxidant, possesses multiple biological activities, including anti-inflammatory, antiviral, anticarcinogenic, and other effects [107]. Fisetin also presents neuroprotective effects via antioxidative activity in AD, PD, etc. [108].
In addition, fisetin also showed protective effects in weight drop-induced TBI mice, as shown by the decreased NSS, brain water content, Evans blue (EB) extravasation, and lesion volume of brain tissue, as well as the increased grip test score. Meanwhile, the MDA level was decreased and GPx activity was increased by fisetin, suggesting that fisetin provides a neuroprotective effect via suppressing TBI-induced oxidative stress [109]. In addition, Nissl staining of neuronal outline and structure showed that fisetin improved neuronal viability, while neuronal apoptosis was inhibited by fisetin, as demonstrated by the decreased TUNEL signals, reduced Bax/Bcl-2 ratio, and reduced cleaved caspase-3 expression. The mechanistic study demonstrated that fisetin promoted Nrf2 nuclear translocation and increased the expression of HO-1 and NQO1, while the KO of Nrf2 abrogated the neuroprotective effects of fisetin, including antioxidation and antiapoptosis [109]. Moreover, fisetin was reported to exert anti-inflammatory effects in TBI mice via the TLR4/NF-κB pathway, and the levels of TNF-α, IL-1β, and IL-6 were significantly decreased. Meanwhile, the BBB disruption of TBI mice was attenuated by fisetin [110]. Therefore, fisetin exerts neuroprotective effects in TBI via the Nrf2-regulated oxidative stress response and the NF-κB-mediated inflammatory signaling pathway.

## 5.7. Luteolin

Luteolin, a flavonoid, is abundant in fruits and vegetables such as carrots, green tea, and celery [111]. Emerging evidence indicates luteolin has a wide variety of biological activities, including antioxidative and anti-inflammatory effects [112, 113]. In addition, several studies have demonstrated the neuroprotective effect of luteolin in multiple in vivo and in vitro models [114, 115]. For example, luteolin could recover motor performance and reduce post-traumatic cerebral edema in weight drop-induced TBI mice.
The oxidative damage was reduced by luteolin, as demonstrated by the decrease in MDA levels and the increase in GPx activity in the ipsilateral cortex. The mechanistic study found that luteolin promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 [116]. In addition, luteolin significantly improved learning and memory impairment in rats after TBI, which was closely associated with the attenuation of oxidative damage, indicated by the decreased MDA level and increased SOD and CAT activity [117, 118]. Therefore, the Nrf2-regulated oxidative stress response plays an important role in the action of luteolin against TBI.

## 5.8. Isoliquiritigenin

Isoliquiritigenin, a chalcone compound, is often found in plants including Sinofranchetia chinensis, Glycyrrhiza uralensis, and Dalbergia odorifera [119]. Isoliquiritigenin has been reported to attenuate oxidative damage, inhibit the inflammatory response, and suppress tumor growth [120]. In addition, isoliquiritigenin activates the Nrf2 signaling pathway to exert antioxidative and anti-inflammatory effects in multiple cellular and animal models. Isoliquiritigenin also exerts a neuroprotective effect in CCI-induced TBI mice via the Nrf2-ARE signaling pathway [121]. For example, isoliquiritigenin increased the Garcia Neuro score and decreased the brain water content, as well as the expression of aquaporin 4 (AQP4) and EB leakage. Glial activation, indicated by GFAP expression, was inhibited, and neuronal viability, indicated by neurofilament light (NFL) expression, was increased by isoliquiritigenin. In addition, isoliquiritigenin increased the number of Nissl staining-positive neurons and inhibited neuronal apoptosis as evidenced by the decreased expression of cleaved caspase-3. Furthermore, oxidative damage was ameliorated by isoliquiritigenin, as shown by the increased GPx activity and SOD levels and the decreased H2O2 concentration and MDA levels.
However, the KO of Nrf2 significantly attenuated the neuroprotective effect of isoliquiritigenin in mice after TBI. The mechanistic study demonstrated that isoliquiritigenin increased the nuclear translocation of Nrf2 and the protein and mRNA expression of NQO1 and HO-1. In an in vitro study, isoliquiritigenin also activated the Nrf2-ARE signaling pathway and increased cell viability in oxygen and glucose deprivation (OGD)-induced SH-SY5Y cells. In addition, isoliquiritigenin inhibited shear stress-induced cell apoptosis in SH-SY5Y cells, as well as suppressed the inflammatory response and inhibited neuronal apoptosis in CCI-induced TBI mice or rats via the PI3K/AKT/GSK-3β/NF-κB signaling pathway [122, 123]. Moreover, isoliquiritigenin protected against BBB damage in mice after TBI via inhibiting the PI3K/AKT/GSK-3β pathway [123]. Therefore, isoliquiritigenin may be a promising agent for the treatment of TBI via the inhibition of oxidative stress, the inflammatory response, and BBB disruption.

## 5.9. Tannic Acid

Tannic acid, a natural polyphenol, is commonly found in green and black teas as well as nuts, fruits, and vegetables [124]. Emerging evidence indicates that tannic acid possesses multiple biological activities, such as antioxidative, anti-inflammatory, antiviral, and antiapoptotic effects [125–127]. In addition, tannic acid exhibits neuroprotective effects, as shown by the improvement of behavioral deficits and the inhibition of neurodegeneration [128]. Recently, tannic acid has been shown to ameliorate the oxidative damage and behavioral impairments of mice after TBI [128]. For example, tannic acid significantly increased the score in the grip test and the motor coordination time, as well as decreased the stay time in the balance test. In addition, tannic acid inhibited neuronal damage and reduced the brain water content of TBI mice.
A further study found that tannic acid could attenuate oxidative stress as evidenced by the increased glutathione (GSH) levels, 1-chloro-2,4-dinitrobenzene (CDNB) conjugation, NADPH oxidation, and H2O2 consumption. In addition, apoptosis-related proteins, including cleaved caspase-3 and poly(ADP-ribose) polymerase (PARP), as well as the Bax/Bcl-2 ratio, were significantly reduced by tannic acid. Meanwhile, the inflammatory response, indicated by the increased levels of TNF-α and IL-1β and GFAP immunofluorescence intensity, was also suppressed. The mechanistic study demonstrated that tannic acid increased the protein expression of Nrf2, PGC-1α, Tfam, and HO-1. Therefore, tannic acid exerts a neuroprotective effect in TBI via activating the PGC-1α/Nrf2/HO-1 signaling pathway.

## 5.10. Ellagic Acid

Ellagic acid, a natural polyphenol, is commonly found in various berries, such as blueberries, strawberries, and blackberries, as well as in walnuts and other nuts [129]. Several studies show that ellagic acid exerts multiple biological activities, including anti-inflammatory, antioxidative, antifibrotic, antidepressant, and neuroprotective effects [130]. In addition, ellagic acid also exhibits protective effects in various brain injuries, such as neonatal hypoxic brain injury, cerebral ischemia/reperfusion injury, carbon tetrachloride (CCl4)-induced brain damage, and TBI [131–133]. Here, we summarize the neuroprotective effect of ellagic acid in TBI and its mechanism of action. In experimental diffuse TBI rats, treatment with ellagic acid significantly improved memory and hippocampal electrophysiological deficits [134]. Meanwhile, the inflammatory responses, indicated by the elevated TNF-α, IL-1β, and IL-6 levels, were reduced by ellagic acid [134, 135]. In addition, ellagic acid could also decrease the BBB permeability of mice after TBI [135]. In CCl4-induced brain injury rats, ellagic acid decreased MDA levels and increased GSH content and CAT activity.
The mechanistic study demonstrated that ellagic acid inhibited the protein expression of NF-κB and COX-2 while increasing the protein expression of Nrf2 [133]. Therefore, ellagic acid exerts an antioxidative effect via activating the Nrf2 pathway and exhibits anti-inflammatory effects via inhibiting the NF-κB pathway in TBI.

## 5.11. Breviscapine

Breviscapine is an aglycone flavonoid isolated from Erigeron plants [136]. Modern pharmacological studies indicate that breviscapine can dilate blood vessels and improve microcirculation, suggesting its potential therapeutic role in cardiovascular and CNS-related diseases [137]. In addition, breviscapine, acting as a scavenger of oxygen free radicals, is demonstrated to improve ATPase and SOD activity. Recently, breviscapine has also been reported to improve neurobehavior and decrease neuronal apoptosis in TBI mice, which is closely associated with the translocation of Nrf2 from the cytoplasm into the nucleus and the subsequent upregulation of Nrf2 downstream factors such as HO-1 and NQO1 [138]. In addition, the inhibition of glycogen synthase kinase-3β (GSK-3β) and IL-6 by breviscapine is associated with its neuroprotective effect in TBI [139, 140]. Therefore, breviscapine exerts neuroprotective effects in TBI via antioxidative, antiapoptotic, and anti-inflammatory actions.

## 5.12. Asiatic Acid

Asiatic acid, a pentacyclic triterpene, is isolated from natural plants such as Centella asiatica [141]. Studies have shown that asiatic acid exhibits potent anti-inflammatory and antioxidative properties, which contribute to its protective effects in spinal cord injury, ischemic stroke, cardiac hypertrophy, liver injury, and lung injury through multiple mechanisms [142]. For example, the administration of asiatic acid could increase the Basso, Beattie, and Bresnahan scores and the plane test score in spinal cord injury (SCI) rats.
Meanwhile, asiatic acid inhibited the inflammatory response by reducing the levels of IL-1β, IL-18, IL-6, and TNF-α, and counteracted oxidative stress by decreasing ROS, H2O2, and MDA levels while increasing SOD activity and glutathione production. The underlying mechanisms include the activation of Nrf2/HO-1 and the inhibition of the NLRP3 inflammasome pathway. In addition, asiatic acid could alleviate tert-butyl hydroperoxide (tBHP)-induced oxidative stress in HepG2 cells. The researchers found that asiatic acid significantly inhibited tBHP-induced cytotoxicity, apoptosis, and the generation of ROS, which was attributed to the activation of the Keap1/Nrf2/ARE signaling pathway and the upregulation of downstream genes including HO-1, NQO1, and GCLC [143]. In a CCI-induced TBI model, the administration of asiatic acid significantly improved neurological deficits and reduced brain edema. Meanwhile, asiatic acid counteracted oxidative damage as evidenced by the reduced levels of MDA, 4-HNE, and 8-hydroxy-2′-deoxyguanosine (8-OHdG). The mechanistic study further found that asiatic acid could increase the mRNA and protein expression of Nrf2 and HO-1 [144]. Taken together, asiatic acid improves neurological deficits in TBI via activating the Nrf2/HO-1 signaling pathway.

## 5.13. Aucubin

Aucubin, an iridoid glycoside isolated from natural plants such as Eucommia ulmoides [145], is reported to have several pharmacological effects, including antioxidative, antifibrotic, antiageing, and anti-inflammatory activities [145–147]. Recently, emerging evidence indicates that aucubin exerts neuroprotective effects via antioxidation and anti-inflammation [148]. In addition, aucubin also inhibited lipid accumulation and attenuated oxidative stress via activating the Nrf2/HO-1 and AMP-activated protein kinase (AMPK) signaling pathways [147]. Moreover, aucubin inhibited lipopolysaccharide (LPS)-induced acute pulmonary injury through the regulation of the Nrf2 and AMPK pathways [149].
In H2O2-induced primary cortical neurons and a weight drop-induced TBI mouse model, aucubin was found to significantly decrease the excessive generation of ROS and inhibit neuronal apoptosis. In addition, aucubin could reduce brain edema, improve cognitive function, decrease neural apoptosis and loss of neurons, attenuate oxidative stress, and suppress the inflammatory response in the cortex of TBI mice. The mechanistic study demonstrated that aucubin activated the Nrf2-ARE signaling pathway and upregulated the expression of HO-1 and NQO1, while the neuroprotective effect of aucubin was abolished in Nrf2-KO mice after TBI [150]. Therefore, aucubin provides a protective effect in TBI via activating the Nrf2 signaling pathway.

## 5.14. Ursolic Acid

Ursolic acid, a pentacyclic triterpenoid compound, is widely found in various fruits and vegetables, such as apples, bilberries, lavender, and hawthorn [151]. Ursolic acid has been reported to possess multiple pharmacological effects, including anti-inflammatory, antioxidative, antifungal, antibacterial, and neuroprotective properties [152]. In addition, ursolic acid activates the Nrf2/ARE signaling pathway to exert a protective effect in cerebral ischemia, liver fibrosis, and TBI [153]. In weight drop-induced TBI mice, the administration of ursolic acid could improve neurobehavioral functions and reduce cerebral edema. In addition, ursolic acid inhibited neuronal apoptosis, as shown by the Nissl staining images and TUNEL staining. Meanwhile, ursolic acid ameliorated oxidative stress by increasing SOD and GPx activity as well as decreasing MDA levels. The mechanistic study demonstrated that ursolic acid promoted the nuclear translocation of Nrf2 and increased the levels of downstream gene products, including HO-1 and NQO1, while the KO of Nrf2 could partly abolish the protective effect of ursolic acid in TBI [153].
Therefore, ursolic acid exerts a neuroprotective effect in TBI partly via activating the Nrf2 signaling pathway.

## 5.15. Carnosic Acid

Carnosic acid, a natural benzenediol abietane diterpene, is found in Rosmarinus officinalis and Salvia officinalis [154]. Carnosic acid and carnosol are the two major antioxidants in Rosmarinus officinalis [155]. Emerging evidence indicates that carnosic acid is a potent activator of Nrf2 and exerts a neuroprotective effect in various neurodegenerative diseases [156]. In CCI-induced acute post-TBI mice, carnosic acid could reduce TBI-induced oxidative damage by decreasing the levels of 4-HNE and 3-NE in the brain tissues. A further study demonstrated that carnosic acid maintained mitochondrial respiratory function and attenuated oxidative damage by reducing the amount of 4-HNE bound to cortical mitochondria [157, 158]. In addition, carnosic acid showed a potent neuroprotective effect in repetitive mild TBI (rmTBI), as evidenced by the significant improvement of motor and cognitive performance. Meanwhile, the expression of GFAP and Iba1 was inhibited, suggesting that carnosic acid inhibited neuroinflammation in TBI [159]. Therefore, carnosic acid exerts a neuroprotective effect via inhibiting mitochondrial oxidative damage in TBI through the Nrf2-ARE signaling pathway.

## 5.16. Fucoxanthin

Fucoxanthin, a carotenoid isolated from seaweeds and microalgae, is considered a potent antioxidant [160]. Several studies show that fucoxanthin exerts various pharmacological activities, such as antioxidative, anti-inflammatory, anticancer, and health-protective effects [161]. In addition, fucoxanthin exerts anti-inflammatory effects in LPS-induced BV-2 microglial cells via activating the Nrf2/HO-1 signaling pathway [162] and inhibits the overactivation of the NLRP3 inflammasome via the NF-κB signaling pathway in bone marrow-derived immune cells and astrocytes [163].
In mouse hepatic BNL CL.2 cells, fucoxanthin was reported to upregulate the mRNA and protein expression of HO-1 and NQO1 via increasing the phosphorylation of ERK and p38 and activating the Nrf2/ARE pathway, which contributes to its antioxidant activity [164]. Recently, it has been reported that the neuroprotective effect of fucoxanthin in TBI mice was regulated via the Nrf2-ARE and Nrf2-autophagy pathways [165]. In this study, the researchers found that fucoxanthin alleviated neurological deficits, cerebral edema, brain lesions, and neuronal apoptosis in TBI mice. In addition, fucoxanthin significantly decreased the generation of MDA and increased the activity of GPx, suggesting its antioxidative effect in TBI. Furthermore, in vitro experiments revealed that fucoxanthin could improve neuronal survival and reduce the production of ROS in primary cultured neurons. A further mechanistic study revealed that fucoxanthin activated the Nrf2-ARE pathway and autophagy in vivo and in vitro, while fucoxanthin failed to activate autophagy and exert a neuroprotective effect in Nrf2−/− mice after TBI. Therefore, fucoxanthin activates the Nrf2 signaling pathway and induces autophagy to exert a neuroprotective effect in TBI.

## 5.17. β-Carotene

β-Carotene, abundant in fungi, plants, and fruits, is a member of the carotenes and belongs to the terpenoids [166]. Accumulating studies indicate that β-carotene, acting as an antioxidant, has potential therapeutic effects in various diseases, such as cardiovascular disease, cancer, and neurodegenerative diseases [167, 168]. Meanwhile, the neuroprotective effect of β-carotene was also reported in CCI-induced TBI mice; the administration of β-carotene significantly improved neurological function and reduced brain edema, as evidenced by the decreased neurological deficit score and brain water content and the increased wire hanging time of mice after TBI.
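Brain water content, the edema readout used here and in several of the following studies, is conventionally obtained with the wet-dry weight method. A minimal sketch of that calculation is shown below; the function name is illustrative, and the formula, (wet − dry)/wet × 100%, is the standard laboratory convention rather than a detail reported in this review.

```python
def brain_water_content(wet_weight_mg: float, dry_weight_mg: float) -> float:
    """Percent brain water content by the standard wet-dry weight method.

    The tissue sample is weighed fresh (wet) and again after drying to
    constant weight (dry); the water fraction is (wet - dry) / wet * 100.
    """
    if wet_weight_mg <= 0 or not 0 <= dry_weight_mg <= wet_weight_mg:
        raise ValueError("weights must satisfy 0 <= dry <= wet and wet > 0")
    return (wet_weight_mg - dry_weight_mg) / wet_weight_mg * 100.0
```

For example, a 100 mg cortical sample that dries down to 22 mg has a brain water content of 78%; edema in TBI models typically shows up as an increase of a few percentage points over sham animals.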
In addition, β-carotene could maintain BBB integrity, as indicated by EB extravasation, and ameliorate oxidative stress, as shown by the increased SOD level and decreased MDA levels. The mechanistic study demonstrated that β-carotene activated the Keap1-Nrf2 signaling pathway and promoted the expression of HO-1 and NQO1 [169]. Therefore, β-carotene provides a neuroprotective effect in TBI via inhibiting oxidative stress through the Nrf2 pathway.

## 5.18. Astaxanthin

Astaxanthin is a carotenoid and is commonly found in certain plants and animals, such as salmon, rainbow trout, shrimp, and lobster [170]. Emerging evidence indicates that astaxanthin exhibits multiple biological activities, including antiaging, anticancer, heart protection, and neuroprotection [171]. Recently, astaxanthin was reported to provide neuroprotection in CCI-induced TBI mice, for example, by decreasing the NSS score and immobility time and increasing the rotarod time and latency to immobility. In addition, astaxanthin increased SOD1 levels and inhibited the protein expression of cleaved caspase 3 and the number of TUNEL-positive cells, suggesting that astaxanthin exerted antioxidative and antiapoptotic effects. The mechanistic study demonstrated that astaxanthin increased the protein and mRNA expression of Nrf2, HO-1, NQO1, and SOD1 [172]. Moreover, in weight drop-induced TBI mice, astaxanthin significantly reduced brain edema and improved behavioral functions including neurological scores, rotarod performance, beam walking performance, and falling latency during the hanging test. In addition, astaxanthin improved neuronal survival, as indicated by Nissl staining. Furthermore, astaxanthin exerted an antioxidative effect by increasing SOD1 protein expression and inhibited neuronal apoptosis by reducing the level of cleaved caspase 3 and the number of TUNEL-positive cells.
The mechanistic study revealed that astaxanthin promoted the activation of the Nrf2 signaling pathway, as demonstrated by the increased mRNA levels and protein expression of Nrf2, HO-1, and NQO1, while the inhibition of Prx2 or SIRT1 reversed the antioxidative and antiapoptotic effects of astaxanthin. Therefore, astaxanthin activated the SIRT1/Nrf2/Prx2/ASK1 signaling pathway in TBI. Moreover, astaxanthin also provided a neuroprotective effect in H2O2-induced primary cortical neurons by reducing oxidative damage and inhibiting apoptosis via the SIRT1/Nrf2/Prx2/ASK1/p38 signaling pathway [173]. Therefore, astaxanthin exerts a neuroprotective effect, including antioxidation and antiapoptosis, via activating the Nrf2 signaling pathway in TBI.

## 5.19. Lutein

Lutein, a natural carotenoid, is commonly found in a variety of flowers, vegetables, and fruits, such as Calendula officinalis, spinach, and Brassica oleracea [174]. Accumulating studies demonstrate that lutein is a potent antioxidant and exhibits benefits in various diseases, including ischemia/reperfusion injury, diabetic retinopathy, heart disease, AD, and TBI [175]. In severe TBI rats, the administration of lutein significantly attenuated the impairment of skilled motor function and reversed the increase in contusion volume of TBI rats. In addition, lutein suppressed the inflammatory response by decreasing the levels of TNF-α, IL-1β, IL-6, and monocyte chemoattractant protein-1 (MCP-1). Meanwhile, lutein decreased ROS production and increased SOD and GSH activity, suggesting that lutein attenuated TBI-induced oxidative damage. Moreover, the mechanistic study found that lutein inhibited the protein expression of intercellular adhesion molecule-1 (ICAM-1), COX-2, and NF-κB, while increasing the protein expression of ET-1 and Nrf2. Therefore, the neuroprotective effect of lutein in TBI may be regulated via the NF-κB/ICAM-1/Nrf2 signaling pathway [176].
It is known that zeaxanthin and lutein are isomers and have identical chemical formulas. Recently, it was reported that lutein/zeaxanthin exerted a neuroprotective effect in TBI mice induced by a liquid nitrogen-cooled copper probe, and brain infarction and brain swelling were remarkably reduced by lutein/zeaxanthin. The protein expression of growth-associated protein 43 (GAP43), ICAM, neural cell adhesion molecule (NCAM), brain-derived neurotrophic factor (BDNF), and Nrf2 was increased, while the protein expression of GFAP, IL-1β, IL-6, and NF-κB was inhibited by lutein/zeaxanthin [177]. Therefore, lutein/zeaxanthin presents antioxidative and anti-inflammatory effects via the Nrf2 and NF-κB signaling pathways.

## 5.20. Sodium Aescinate

Sodium aescinate (SA) is a mixture of triterpene saponins isolated from the seeds of Aesculus chinensis Bunge and chestnut [178]. Mounting studies show that SA exerts anti-inflammatory, anticancer, and antioxidative effects [179–181]. In addition, SA has been reported to exhibit a neuroprotective effect in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD mice and mutant huntingtin (mHTT)-overexpressing HT22 cells [182, 183]. A recent study reported that SA could attenuate brain injury in weight drop-induced TBI mice [182]. The intraperitoneal administration of SA significantly decreased the NSS, brain water content, and lesion volume of mice after TBI. A further study found that SA suppressed TBI-induced oxidative stress, as evidenced by the decreased MDA levels and increased GPx activity. The Nissl staining images showed that SA increased the viability of neurons, and the TUNEL staining showed that SA inhibited neuronal apoptosis. Meanwhile, SA decreased the ratio of Bax/Bcl-2 and the cleaved form of caspase-3, while suppressing the release of cytochrome c from mitochondria into the cytoplasm.
The mechanistic study demonstrated that SA promoted the translocation of Nrf2 from the cytoplasm into the nucleus and subsequently increased the expression of HO-1 and NQO1. Moreover, the neuroprotective effect and mechanism of action of SA have been confirmed in scratch injury-induced TBI primary neurons and in Nrf2-KO mice after TBI. Therefore, SA exerts a neuroprotective effect in TBI via activating the Nrf2 signaling pathway.

## 5.21. Melatonin

Melatonin, commonly found in plants, animals, fungi, and bacteria, plays an important role in the regulation of the biological clock [184]. Melatonin, as a dietary supplement, is widely used to treat insomnia. Emerging evidence indicates that melatonin exerts neuroprotection in various diseases including brain injury, spinal cord injury, and cerebral ischemia [185]. In addition, melatonin is demonstrated to be a potent antioxidant with the ability to reduce oxidative stress, inhibit the inflammatory response, and attenuate neuronal apoptosis [186]. In craniocerebral trauma, melatonin showed a neuroprotective effect due to its antioxidative, anti-inflammatory, and inhibitory effects on the activation of adhesion molecules [187]. In Marmarou’s weight drop-induced TBI mice, melatonin significantly inhibited neuronal degeneration and reduced cerebral edema in the brain. Meanwhile, melatonin also attenuated the oxidative stress induced by TBI, as evidenced by the decreased MDA levels and 3-NE expression, as well as the increased GPx and SOD levels. The mechanistic study demonstrated that melatonin increased the nuclear translocation of Nrf2 and promoted the protein expression and mRNA levels of HO-1 and NQO1, while the KO of Nrf2 could partly reverse the neuroprotective effect of melatonin, including antioxidation, inhibition of neuronal degeneration, and alleviation of cerebral edema in mice after TBI. Therefore, melatonin provides a neuroprotective effect in TBI via the Nrf2-ARE signaling pathway [188].
Due to the complex pathophysiology of TBI, the combinational use of melatonin and minocycline, a bacteriostatic agent reported to inhibit neuroinflammation, did not exhibit a better neuroprotective effect than either agent alone; dosing and/or administration issues may account for this result [189]. Therefore, the optimal combination should be explored for the treatment of TBI.

## 5.22. Sinomenine

Sinomenine is an alkaloid compound isolated from the roots of climbing plants including Sinomenium acutum (Thunb.) Rehd. et Wils. and Sinomenium acutum var. cinereum Rehd. et Wils [190]. Sinomenine has been demonstrated to exhibit antihypertensive and anti-inflammatory effects and is commonly used to treat various forms of acute and chronic arthritis, rheumatism, and rheumatoid arthritis (RA). In addition, sinomenine provides a neuroprotective effect in Marmarou’s weight drop-induced TBI mice. The administration of sinomenine significantly increased the grip test score and decreased brain water content. In addition, neuronal viability was increased by sinomenine, as shown by the increased number of NeuN-positive neurons and the decreased number of TUNEL-positive neurons. Meanwhile, sinomenine increased Bcl-2 protein expression and decreased cleaved caspase-3 expression. Furthermore, sinomenine attenuated oxidative stress by decreasing MDA levels and increasing SOD and GPx activity. The mechanistic study revealed that sinomenine promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 in mice after TBI [191]. Therefore, sinomenine, acting as a potent anti-inflammatory agent, provides antiapoptotic and antioxidative effects in TBI via the Nrf2-ARE signaling pathway.

## 5.23. Sulforaphane

Sulforaphane, an isothiocyanate compound, is commonly found in certain kinds of vegetables, including cabbage, broccoli, and cauliflower [192].
Emerging evidence indicates that sulforaphane is widely used to treat prostate cancer, autism, asthma, and many other diseases [193–195]. In addition, sulforaphane also showed a neuroprotective effect in TBI. For example, sulforaphane decreased BBB permeability in CCI-induced TBI rats, as evidenced by the decreased EB extravasation and the relative fluorescence intensity of fluorescein [196]. Meanwhile, the loss of tight junction proteins (TJs) including occludin and claudin-5 was attenuated by sulforaphane. The mechanistic study found that sulforaphane increased the mRNA levels of Nrf2-driven genes including GST-alpha3 (GSTα3), GPx, and HO-1, as well as enhanced the enzymatic activity of NQO1 in the brain and brain microvessels of TBI animals, suggesting that sulforaphane activated the Nrf2-ARE signaling pathway to protect BBB integrity. Furthermore, sulforaphane could reduce brain edema, as evidenced by the decrease in brain water content, which was closely associated with the attenuation of AQP4 loss in the injury core and the further increase of the AQP4 level in the penumbra region [197]. Moreover, the Morris water maze (MWM) test showed that sulforaphane improved spatial memory and spatial working memory. Meanwhile, TBI-induced oxidative damage was significantly attenuated by sulforaphane, as demonstrated by the reduced 4-HNE levels [198]. In addition, sulforaphane also attenuated 4-HNE-induced dysfunction in isolated cortical mitochondria [158]. Taken together, sulforaphane provides a neuroprotective effect in TBI via the activation of the Nrf2-ARE signaling pathway.

## 6. Conclusions and Perspective

It is well known that TBI causes irreversible primary mechanical damage, followed by secondary injury. Studies have shown that multiple mechanisms contribute to the development of TBI during secondary injury, mainly including the inflammatory response, oxidative stress, mitochondrial dysfunction, and BBB disruption, among others.
Among them, oxidative stress leads to mitochondrial dysfunction, BBB disruption, and neuroinflammation; it therefore plays a central role in the pathogenesis of TBI. Nrf2 is a conserved bZIP transcription factor, and the activation of the Nrf2 signaling pathway protects against oxidative damage. Under stress conditions or treatment with Nrf2 activators, Nrf2 translocates from the cytoplasm into the nucleus, where it protects against oxidative damage via the ARE-mediated transcriptional activation of genes including HO-1, NQO1, and GCLC, thereby inhibiting mitochondrial dysfunction, apoptosis, inflammation, and oxidative damage. Therefore, targeting the activation of the Nrf2 signaling pathway is a promising therapeutic strategy for TBI. To date, an increasing number of Nrf2 activators have been reported to exert neuroprotective effects in various neurodegenerative diseases, cerebral ischemia, cerebral hemorrhage, and TBI [199, 200]. Phytochemicals are abundant in, and isolated from, fruits, vegetables, grains, and other medicinal herbs. In this review, polyphenols, terpenoids, natural pigments, and other phytochemicals were summarized. They exhibit potent neuroprotective effects, including the improvement of BBB integrity, recovery of neuronal viability, and inhibition of microglial overactivation via the Nrf2-mediated oxidative stress response (Figure 4).

Figure 4: The potential therapy of phytochemicals for TBI. Oxidative damage plays an important role in the pathology of TBI, including BBB disruption followed by neuronal death and microglial overactivation, while treatment with phytochemicals with antioxidative properties can improve BBB integrity, thereby recovering neuronal viability and inhibiting microglial overactivation.

Although a large number of studies have demonstrated the neuroprotective effects of most of these phytochemicals in in vivo and in vitro models of TBI, there is a lack of effective clinical application evidence.
In addition, little is known about the safety and pharmacokinetics of these phytochemicals. Therefore, more studies need to be performed to accelerate the process of phytochemicals entering the clinic. In the later period of TBI recovery, the selective permeability of the BBB also gradually recovers; at this stage, the BBB becomes a major obstacle that greatly limits the neuroprotective effect of drugs, while nanomaterial-based drug delivery can effectively improve the BBB permeability of drugs, bringing new hope for these phytochemicals. In addition, the combinational use of phytochemicals targeting multiple targets such as Nrf2, NF-κB, and NADPH oxidase-2 (NOX-2), together with gene and stem cell therapy, will be a promising strategy for the treatment of TBI.

---
# Targeting Nrf2-Mediated Oxidative Stress Response in Traumatic Brain Injury: Therapeutic Perspectives of Phytochemicals

**Authors:** An-Guo Wu; Yuan-Yuan Yong; Yi-Ru Pan; Li Zhang; Jian-Ming Wu; Yue Zhang; Yong Tang; Jing Wei; Lu Yu; Betty Yuen-Kwan Law; Chong-Lin Yu; Jian Liu; Cai Lan; Ru-Xiang Xu; Xiao-Gang Zhou; Da-Lian Qin

**Journal:** Oxidative Medicine and Cellular Longevity (2022)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1015791
---

## Abstract

Traumatic brain injury (TBI), known as mechanical damage to the brain, seriously impairs the normal function of the brain. Its clinical symptoms manifest as behavioral impairment, cognitive decline, communication difficulties, etc. The pathophysiological mechanisms of TBI are complex and involve the inflammatory response, oxidative stress, mitochondrial dysfunction, blood-brain barrier (BBB) disruption, and so on. Among them, oxidative stress, one of the important mechanisms, occurs at the beginning of and accompanies the whole process of TBI. Most importantly, excessive oxidative stress causes BBB disruption and brings injury to lipids, proteins, and DNA, leading to the generation of lipid peroxidation, damage of nuclear and mitochondrial DNA, neuronal apoptosis, and the neuroinflammatory response. Transcription factor NF-E2-related factor 2 (Nrf2), a basic leucine zipper protein, plays an important role in the regulation of antioxidant proteins, such as heme oxygenase-1 (HO-1), NAD(P)H Quinone Dehydrogenase 1 (NQO1), and glutathione peroxidase (GPx), to protect against oxidative stress, neuroinflammation, and neuronal apoptosis. Recently, emerging evidence indicated that the knockout (KO) of Nrf2 aggravates the pathology of TBI, while treatment with Nrf2 activators inhibits neuronal apoptosis and neuroinflammatory responses via reducing oxidative damage. Phytochemicals from fruits, vegetables, grains, and other medicinal herbs have been demonstrated to activate the Nrf2 signaling pathway and exert neuroprotective effects in TBI. In this review, we emphasized the contributory role of oxidative stress in the pathology of TBI and the protective mechanism of the Nrf2-mediated oxidative stress response for the treatment of TBI. In addition, we summarized the research advances of phytochemicals, including polyphenols, terpenoids, natural pigments, and other compounds, in the activation of Nrf2 signaling and their potential therapies for TBI.
Although there is still limited clinical application evidence for these natural Nrf2 activators, we believe that the combinational use of phytochemicals such as Nrf2 activators with gene and stem cell therapy will be a promising therapeutic strategy for TBI in the future.

---

## Body

## 1. Introduction

Traumatic brain injury (TBI) refers to damage to the brain structure and function caused by mechanical and external forces and includes two stages: primary injury and secondary injury [1]. It is a global neurological disease and is the biggest cause of death and disability in the population under 40 years of age [2]. The current clinical treatments for TBI mainly include interventional treatments such as hyperventilation, hypertonic therapy, hypothermia therapy, surgical treatment, drug therapy, hyperbaric oxygen therapy, and rehabilitation therapy [3, 4]. In the past few decades, the interventions that have had the greatest impact, reducing the mortality rate of severe TBI by several times, are immediate surgical intervention and follow-up care by specialist intensive care physicians [5]. Post-traumatic intracranial hypertension (ICH) makes patient care more complicated, but new data show that hypertonic therapy, that is, the use of hypertonic solutions such as mannitol and hypertonic saline (HTS) in the early treatment of ICH after severe TBI, can reduce the burden of ICH and improve survival and functional outcomes [6]. Hypothermia therapy can reduce the effects of TBI through a variety of possible mechanisms, including reducing intracranial pressure (ICP), innate inflammation, and the brain metabolic rate. However, the results of the randomized POLAR clinical trial showed that early preventive hypothermia did not improve the neurological outcome at 6 months in patients with severe TBI [7]. Therefore, the effectiveness of hypothermia for TBI remains to be discussed.
Surgical treatments include decompressive craniectomy (DC), a method that removes a large part of the top of the skull to reduce ICP and the subsequent harmful sequelae. However, the treatment effects for TBI are not satisfactory [8]. Many patients have a poor prognosis and will be left with serious disabilities and require lifelong care [9]. In addition, chemicals including corticosteroids, progesterone, erythropoietin, amantadine, tranexamic acid, citicoline, and recombinant interleukin-1 receptor (IL-1R) antagonist are used for the treatment of TBI [2]. However, these drugs are less safe, do not work well, or may lead to unfavorable physiological conditions [10]. Recently, many studies have begun to investigate the possibility of using natural compounds with high safety as therapeutic interventions after TBI [11]. The latest evidence indicates that phytochemicals, including quercetin, curcumin, formononetin, and catechin, exert neuroprotective effects in TBI and other brain diseases via attenuating oxidative stress [12]. It is known that transcription factor NF-E2-related factor 2 (Nrf2) plays an important role in the regulation of heme oxygenase-1 (HO-1), NAD(P)H Quinone Dehydrogenase 1 (NQO1), glutathione peroxidase (GPx), and other antioxidant proteins, which ameliorates oxidative damage in TBI [13]. In this review, we discussed the critical role of oxidative stress in the pathology of TBI and the regulation of the Nrf2-mediated oxidative stress response in TBI. In addition, we summarized the research advances of phytochemicals, including polyphenols, terpenoids, natural pigments, and other compounds, in the activation of Nrf2 signaling and their potential therapies for TBI in vivo and in vitro. Finally, we hope this review sheds light on the treatment of TBI using phytochemicals as Nrf2 activators.
Moreover, the combinational use of phytochemicals such as Nrf2 activators with gene and stem cell therapy will be a promising strategy for the treatment of TBI.

## 2. TBI

TBI, also known as acquired intracranial injury, is caused by an external force, such as a blow, bump, or jolt to the head, a sudden and serious hit of the head by an object, or the deep penetration of an object through the skull into the brain tissue [14]. According to the data from the Centers for Disease Control and Prevention (CDC) of the United States (U.S.), the most common causes mainly include violence, transportation accidents, construction, and sports. In addition, there are about 288,000 hospitalizations for TBI a year, and males account for 78.8% of them [15, 16]. Usually, older adults (>75 years) have the highest rates of TBI. Therefore, TBI brings serious economic and spiritual burdens to the family and society [17]. TBI is classified in various ways, including by type, severity, location, mechanism of injury, and the physiological response to injury [18]. In general, the Glasgow Coma Scale (GCS) score and neurobehavioral deficits are extensively used, and TBI is classified into mild, moderate, and severe types [19]. The clinical symptoms of TBI greatly depend on the severity of the brain injury and mainly include perceptual loss, cognitive decline, communication difficulties, behavioral impairment, affective changes, and so on [20] (Figure 1). The pathophysiology of TBI includes the primary injury, which is directly caused by physical forces, and the secondary injury, which refers to the further damage of tissue and cells in the brain [21]. The physical forces on the brain cause both focal and diffuse injuries. Emerging evidence indicates that patients who suffer from moderate or severe TBI are found to have focal and diffuse injuries simultaneously [22].
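The GCS-based severity bands mentioned above follow the standard clinical convention (mild 13–15, moderate 9–12, severe 3–8 on the total score range of 3–15). As a minimal sketch, the bands can be expressed as a small helper function; the function name is illustrative rather than taken from the paper.

```python
def classify_tbi_severity(gcs_total: int) -> str:
    """Map a total Glasgow Coma Scale score (3-15) to a TBI severity band.

    Uses the conventional clinical cutoffs: 13-15 mild, 9-12 moderate,
    3-8 severe.
    """
    if not 3 <= gcs_total <= 15:
        raise ValueError("GCS total score must lie between 3 and 15")
    if gcs_total >= 13:
        return "mild"
    if gcs_total >= 9:
        return "moderate"
    return "severe"
```

For example, a patient scoring 14 on admission would be triaged as mild, whereas a score of 7 falls into the severe band; in practice the GCS is combined with neuroimaging and other neurobehavioral assessments rather than used alone.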
Most seriously, secondary brain injury follows owing to the biochemical, cellular, and physiological events triggered by the primary brain injury [23]. Mechanistic studies demonstrate that several factors, including inflammation, oxidative stress, mitochondrial dysfunction, BBB disruption, DNA damage, glutamate excitotoxicity, complement activation, and neurotrophic impairment, are involved in the pathology and progression of TBI [24] (Figure 1). Currently, a growing body of studies shows that an increasing number of abnormal proteins or molecules serve as biomarkers closely associated with TBI, which helps to better understand the mechanism of TBI [25]. For example, the levels of early structural damage biomarkers, including S100B protein in cerebrospinal fluid or blood, glial fibrillary acidic protein, ubiquitin carboxyl-terminal hydrolase L1, and Tau, help to determine whether a head scan is required after TBI [26].

Figure 1: The clinical symptoms and molecular mechanism of TBI, an injury to the brain caused by an external force. The clinical symptoms of TBI mainly manifest as perceptual loss, cognitive decline, communication difficulties, behavioral impairment, and affective changes. The molecular mechanisms of TBI include inflammation, oxidative stress, mitochondrial dysfunction, blood-brain barrier (BBB) disruption, DNA damage, glutamate excitotoxicity, complement activation, and neurotrophic impairment.

At present, the therapeutic strategies for TBI include hyperbaric oxygen therapy (HBOT), hyperventilation and hypertonic therapy, noninvasive brain stimulation, drug therapy, and biological therapy [27]. Most importantly, the combinational use of novel biological reagents (genes and stem cells) and pharmacological intervention preparations can decrease the complications and mortality of TBI [28]. Stem cell therapy includes the transplantation of regenerative stem cells and the induction of endogenous stem cell activation through pharmacological or environmental stimuli.
Several studies have shown that some drugs can not only improve the survival rate of stem cells but also enhance their efficacy. For example, the intravenous injection of mesenchymal stem cells (MSCs) and atorvastatin or simvastatin 24 hours after TBI could improve the recovery of the modified neurological severity score (mNSS) [29]. In addition, the administration of calpain inhibitors 30 minutes after TBI followed by the transplantation of MSCs 24 hours after TBI could reduce the proinflammatory cytokines around the lesions, increase the survival rate of MSCs, and improve the mNSS [30]. Moreover, pretreatment with minocycline for 24 hours could protect transplanted stem cells from ischemia-reperfusion injury by inducing Nrf2 nuclear translocation and increasing the expression of downstream proteins [31]. Therefore, the in-depth clarification of the mechanism of TBI and the adoption of targeted methods for precise intervention will help the recovery of post-traumatic neurological function, further prevent the occurrence and development of complications, and ultimately open up a new way for the effective treatment of TBI [32].

## 3. The Role of Oxidative Stress in TBI

Oxidative stress occurs owing to an imbalance of free radicals and antioxidants in the body, which leads to cell and tissue damage [32]. Therefore, oxidative stress plays a critical role in the development of diseases. Diet, lifestyle, and environmental factors such as pollution and radiation contribute to the induction of oxidative stress, resulting in the excessive generation of free radicals [33]. In general, free radicals, including superoxide, hydroxyl radicals, and nitric oxide radicals, are molecules with one or more unpaired electrons [34].
It is well known that oxidative stress is implicated in the pathogenesis of various diseases, such as atherosclerosis, hypertension, diabetes mellitus, ischemic disease, neurodegeneration, and other central nervous system (CNS) related diseases [35, 36]. During normal metabolic processes, although many free radicals are generated, the body’s cells can produce antioxidants to neutralize these free radicals and maintain a balance between antioxidants and free radicals [37]. A large body of evidence indicates that overgenerated free radicals attack biological molecules, such as lipids, proteins, and DNA, ultimately breaking this balance and resulting in long-term oxidative stress [38]. However, oxidative stress also plays a useful role in some cases, such as physiologic adaptation and the modulation of intracellular signal transduction [39]. Thus, a more accurate definition of oxidative stress may be a state in which the oxidation system exceeds the antioxidant system owing to an imbalance between them. At present, the biomarkers of oxidative stress, which are used to evaluate the pathological conditions of diseases and the efficacy of drugs, are becoming popular and attract increasing interest [40]. For example, lipid peroxides, 4-hydroxynonenal (4-HNE), and malondialdehyde (MDA) are indicators of oxidative damage to lipids [41]. Thymine glycol (TG) and 8-oxoguanine (8-oxoG) are biomarkers of oxidative damage to DNA [42]. In addition, a variety of proteins and amino acids, including carbonyl protein, dehydrovaline, nitrotyrosine, and hydroxyleucine, are oxidized and generate several products that are recognized to be biomarkers of oxidative stress. Among them, lipid peroxide, one of the most important biomarkers, has been determined in clinical settings.
Furthermore, oxidative stress plays a pivotal role in the regulation of signaling transduction, including the activation of protein kinases and transcription factors, which affect many biological processes such as apoptosis, the inflammatory response, and cell differentiation [43]. For example, gene transcription factors, including nuclear factor κB (NF-κB) and activator protein-1 (AP-1), sense oxidative stress via oxidation-reduction cycling [44]. In addition, the generation of active oxygen species leads to the activation of NF-κB, resulting in proinflammatory responses in various diseases such as neurodegenerative diseases, spinal cord injury, and TBI [45]. Therefore, oxidative stress is one of the important mechanisms implicated in the pathology of CNS-related diseases. Although the initial brain insult of TBI is an acute and irreversible primary damage to the parenchyma, the ensuing secondary brain injury, progressing slowly over months to years, seriously affects the treatment and prognosis of TBI [46]. Therefore, therapeutic interventions during secondary brain injury are essential. To date, many hallmarks are exhibited during delayed secondary CNS damage, mainly including mitochondrial dysfunction, Wallerian degeneration of axons, excitotoxicity, oxidative stress, and eventually neuronal death and overactivation of glial cells [24]. Recently, emerging evidence indicates that oxidative stress plays an important role in the development and pathogenesis of TBI [46]. In general, oxidative stress results from or is accompanied by other molecular mechanisms, such as mitochondrial dysfunction, activation of neuroexcitation pathways, and activated neutrophils [47]. Kontos HA et al.
first reported that superoxide radicals are immediately increased in brain microvessels after injury in a fluid percussion TBI model, while scavengers of oxygen radicals, including superoxide dismutase (SOD) and catalase, significantly decrease the level of superoxide radicals and partly reverse the injury of the brain [48]. During the first minutes or hours after brain injury, a large number of superoxide radicals are generated owing to the enzymatic reaction or autoxidation of biogenic amine neurotransmitters, the arachidonic acid cascade, damaged mitochondria, and oxidized extravasated hemoglobin [49]. Soon afterwards, microglia are overactivated and neutrophils and macrophages infiltrate, which also contributes to the production of superoxide radicals [50, 51]. In addition, iron overload and the resultant generation of hydroxyl radicals and lipid peroxidation induce oxidative stress and neuronal ferroptosis, which significantly aggravate the pathogenesis of TBI in several respects, such as impairing cerebral blood flow and brain plasticity and promoting immunosuppression [52]. In this review, we focused on the research advances on the role of oxidative stress in TBI. At neutral pH, the iron in plasma is bound to the transferrin protein in the form of Fe3+, and it can also be sequestered intracellularly by ferritin, an iron storage protein. Thus, iron in the brain is maintained at a relatively low level under normal conditions. However, the pH value decreases in the brain after TBI, which is accompanied by the release of iron from both transferrin and ferritin. Then, the excessive levels of active iron catalyze the oxygen radical reaction and induce oxidative damage and ferroptosis [53]. Additionally, hemoglobin is the second source of catalytically active iron after the mechanical trauma of the brain [54].
Iron is released from hemoglobin upon stimulation by hydrogen peroxide (H2O2) or lipid hydroperoxides, and the level of iron can be further increased as the pH decreases to 6.5 or below [55]. Therefore, targeting iron levels with iron chelators may be a promising strategy for the treatment of TBI. For example, deferoxamine (DFO), a potent iron chelator, can attenuate iron-induced long-term neurotoxicity and improve the spatial learning and memory deficits of TBI rats [56]. Moreover, nitric oxide (NO) is involved in the injury cascade triggered by TBI. The activity of nitric oxide synthase (NOS), which generates NO, increases with the accumulation of Ca2+ during secondary injury in TBI. NO then reacts with the superoxide free radical to generate the reactive nitrogen species peroxynitrite (PN), whose footprints 3-nitrotyrosine (3-NT) and 4-HNE are found in the ipsilateral cortex and hippocampus of TBI animal models [24]. For example, N(omega)-nitro-L-arginine methyl ester (L-NAME), a NO-synthase inhibitor, was reported to attenuate neurological impairment in TBI and to reduce the formation of 3-NT and the number of 3-NT-positive neurons [14]. Therefore, targeting the inhibition of oxidative stress in the brain is a promising strategy for the treatment of TBI.

## 4. Nrf2 Signaling-Mediated Oxidative Stress Response

In 1995, Itoh et al. first discovered and reported that Nrf2 is a homolog of the hematopoietic transcription factor p45 NF-E2 [57]. To date, a total of six members, including NF-E2, Nrf1, Nrf2, Nrf3, Bach1, and Bach2, have been identified in the Cap "n" Collar (CNC) family [58]. Among them, Nrf2 is a conserved basic leucine zipper (bZIP) transcription factor. Nrf2 possesses seven highly conserved functional domains, from Nrf2-ECH homology 1 (Neh1) to Neh7, which have been identified in multiple species including humans, mice, and chicken (Figure 2) [59].
Of these domains, Neh2, located at the N-terminus of Nrf2, possesses seven lysine residues and the ETGE and DLG motifs, which are responsible for ubiquitin conjugation and for the binding of Nrf2 to the Kelch domain of its cytosolic repressor Keap1, thereby facilitating Cullin 3 (Cul3)-dependent E3 ubiquitination and proteasomal degradation [60]. The Neh4 and Neh5 domains, rich in acidic residues, act as transactivation domains that bind cAMP response element-binding protein (CREB) and regulate the transactivation of Nrf2. Neh7 is a domain that interacts with the retinoic X receptor α (RXRα), which can inhibit CNC-bZIP factors and the transcription of Nrf2 target genes. Neh6 contains two motifs, DSGIS and DSAPGS, recognized by β-transducing repeat-containing protein (β-TrCP), which functions as a substrate receptor for the Cul1-Rbx1/Roc1 ubiquitin ligase complex [61]. DSGIS is modulated by glycogen synthase kinase-3 (GSK-3) activity and enables β-TrCP to ubiquitinate Nrf2 [62]. The Neh1 domain contains the CNC-bZIP DNA-binding motif, which allows Nrf2 to dimerize with small Maf proteins including MAFF, MAFG, and MAFK [63]. The Neh3 domain, at the C-terminus of the Nrf2 protein, recruits chromo-ATPase/helicase DNA-binding protein 6 (CHD6), which is known as an Nrf2 transcriptional coactivator [64]. In addition, Neh3 also plays a role in the regulation of Nrf2 protein stability.

Figure 2 Structures of the Nrf2 and Keap1 protein domains. (a) Nrf2 consists of 589 amino acids and has seven evolutionarily highly conserved domains (Neh1-7). Neh1 contains a bZIP motif, is responsible for DNA recognition, and mediates dimerization with small MAF (sMAF) proteins. Neh6 acts as a degron to mediate the degradation of Nrf2 in the nucleus. Neh4 and Neh5 are transactivation domains. Neh2 contains the ETGE and DLG motifs, which are required for the binding of Nrf2 to Keap1.
Neh7 is a domain that interacts with RXRα to inhibit CNC-bZIP factors and the transcription of target genes. Neh3 recruits CHD6. (b) Keap1 consists of 624 amino acids and has five domains. The BTB domain, together with the N-terminal region (NTR) of the IVR, mediates the homodimerization of Keap1 and its binding to Cul3. The Kelch domain and the C-terminal region (CTR) mediate the interaction with the Neh2 domain of Nrf2 at the ETGE and DLG motifs.

Under normal conditions, Nrf2 is kept in the cytoplasm by a cluster of proteins including Keap1 and Cul3, and it then undergoes degradation via the ubiquitin-proteasome system (UPS) [65]. In brief, Cul3 ubiquitinates Nrf2, and Keap1 acts as a substrate adaptor to facilitate the reaction. Nrf2 is then transported to the proteasome for degradation and recycling, and the half-life of Nrf2 is only about 20 minutes. Under conditions of oxidative stress or treatment with Nrf2 activators, the Keap1-Cul3 ubiquitination system is disrupted. Nrf2 then translocates from the cytoplasm into the nucleus and forms a heterodimer with one of the sMAF proteins, which binds the ARE and initiates the transcription of many antioxidative genes, including HO-1, glutamate-cysteine ligase catalytic subunit (GCLC), SOD, and NQO1 (Figure 3). Emerging evidence indicates that Nrf2 is the most important transcription factor inducing the expression of genes that counter oxidative stress and activate the antioxidant response, protecting against cell damage and death triggered by various stimuli, including environmental factors such as pollution and lifestyle factors such as smoking or exercise. A growing body of evidence shows that Nrf2 plays multiple roles in the regulation of oxidative stress, inflammation, metabolism, autophagy, mitochondrial physiology, and other biological processes [64]. It has been reported that Nrf2-KO mice are susceptible to diseases associated with oxidative damage [66].
Therefore, Nrf2 plays a critical role in cell defense and the regulation of cell survival in various diseases such as TBI.

Figure 3 Regulation of the Nrf2 signaling pathway in TBI. Under basal conditions (a), Keap1 functions as a substrate adaptor protein for Cul3 to mediate the degradation of Nrf2 via the UPS pathway. Under Nrf2 activation (b), stress conditions or treatment with Nrf2 activators induce the dissociation of Nrf2 from Keap1, leading to the accumulation of Nrf2 in the cytoplasm and its nuclear translocation. Nrf2 then binds sMAF and the ARE to regulate the expression of its downstream targets, including HO-1, NQO1, GST, GSH-Px, GCLC, and SOD, thereby inhibiting oxidative damage, inflammation, neuronal apoptosis, and mitochondrial dysfunction.

## 5. The Potential Therapy of Phytochemicals as Nrf2 Activators in TBI

Because the primary injuries in TBI commonly result in acute physical damage and irreversible neuronal death, therapies mainly aim at stabilizing the injury site and preventing secondary damage. As described above, the secondary damage of TBI is induced by various risk factors such as oxidative stress and develops progressively. To date, multiple therapeutic approaches have been developed, including the inhibition of excitotoxicity by glutamate receptor antagonists such as dexanabinol, the improvement of mitochondrial dysfunction using neuroprotective agents such as cyclosporine A, and the inhibition of axonal degeneration by calpain inhibitors such as MDL 28170 [67]. Emerging evidence indicates that oxidative stress is not only one of the pathogenic mechanisms of TBI but also an initiator and promoter of excitotoxicity, mitochondrial dysfunction, neuroinflammation, and other risks. Nrf2 plays a protective role in TBI by counteracting oxidative damage and the inflammatory response [68], whereas genetic deletion of Nrf2 delays the recovery of post-TBI motor function and cognitive function [69].
Therefore, targeting the discovery of Nrf2 activators to alleviate oxidative damage is a promising therapeutic strategy for TBI [70]. Recently, many phytochemicals isolated from natural plants, including fruits, vegetables, grains, and medicinal herbs, have been reported to activate the Nrf2 signaling pathway and exert neuroprotective effects in TBI [71]. In general, these natural phytochemicals, acting as Nrf2 activators, are used to alleviate the secondary damage of TBI. In this review, we summarize the research advances of phytochemicals, including polyphenols, terpenoids, natural pigments, and others, in the activation of Nrf2 signaling and their potential therapeutic value for TBI during secondary injury (Table 1).

Table 1 Phytochemicals from various plants exert multiple pharmacological effects via antioxidant mechanisms in various in vitro and in vivo models of TBI.

| Class | Phytochemical | Plant sources | Models | Pharmacological effects | Detected markers | Antioxidant mechanism | Ref |
|---|---|---|---|---|---|---|---|
| Polyphenols | Quercetin | Onions, tomatoes, etc. | Weight drop-induced TBI mice/rats | Improved behavioral function, neuronal viability, and mitochondrial function; reduced brain edema, microgliosis, oxidative damage, nitrosative stress, neuronal apoptosis, and inflammatory response | Motor coordination; latency period; NSS; brain water content; MDA; SOD; catalase; GPx; lipid peroxidation; neuronal morphology; cytochrome c; Bax; MMP; ATP; Iba-1; TNF-α; iNOS; cNOS; IL-1β; Nrf2; HO-1 | Nrf2 pathway | [74–77] |
| Polyphenols | Curcumin | Curcuma longa | FPI-induced TBI rats; Feeney or weight drop-induced TBI WT, Nrf2-KO, or TLR4-KO mice; LPS-induced microglia or neuron-microglia co-culture | Improved cognitive function; reduced axonal injury, neuronal apoptosis, inflammatory response, and oxidative damage | NSS; brain water content; Tuj1; H&E, Nissl, Congo red, silver, TUNEL, MPO, and FJC staining; caspase 3; Bcl-2; NeuN/BrdU double labeling; Iba-1; GFAP; TNF-α; IL-6; IL-1β; MCP-1; RANTES; CD11B; DCX; TLR4; MyD88; NF-κB; IκB; AQP4; Nrf2; HO-1; NQO1; PERK; eIF2α; ATF4; CHOP; GSK3β; p-tau; β-APP; NF-H | Nrf2 pathway; PERK/Nrf2 pathway | [80, 81, 86–88] |
| Polyphenols | Formononetin | Red clover | Weight drop-induced TBI rats | Reduced brain edema, pathological lesions, inflammatory response, and oxidative damage | NSS; brain water content; H&E and Nissl staining; neuronal ultrastructure; SOD; GPx; MDA; TNF-α; IL-6; COX-2; IL-10; Nrf2 | Nrf2 pathway | [91, 95] |
| Polyphenols | Baicalin | Scutellaria baicalensis | Weight drop-induced TBI rats | Improved behavioral function and neuronal survival; reduced brain edema, oxidative damage, BBB disruption, and mitochondrial apoptosis | NSS; brain water content; EB leakage; Nissl and TUNEL staining; grip test score; cleaved caspase 3; Bcl-2; cytochrome c; p53; SOD; MDA; GPx; NeuN; Nrf2; HO-1; NQO1; AMPK; mTOR; LC3; Beclin-1; p62 | Akt/Nrf2 pathway | [96, 97] |
| Polyphenols | Catechin | Cocoa, tea, grapes, etc. | CCI- or weight drop-induced TBI rats | Improved long-term neurological outcomes, neuronal survival, and white matter recovery; reduced brain edema, brain lesion volume, neurodegeneration, inflammatory response, BBB disruption, neutrophil infiltration, and oxidative damage | NSS; brain water content; brain infarct volume; forelimb and hindlimb scores; latency; quadrant time; EB extravasation; ZO-1; occludin; TNF-α; IL-1β; IL-6; iNOS; arginase; TUNEL, PI, FJB, cresyl violet, and MPO staining; myelin; caspase 3; caspase 8; Bcl-2; Bax; BDNF; ROS; MMP-2; MMP-9; Nrf2; Keap1; SOD1; HO-1; NQO1; NF-κB | Nrf2-dependent and Nrf2-independent pathways | [103, 104] |
| Polyphenols | Fisetin | Cotinus coggygria, onions, cucumbers, etc. | Weight drop-induced TBI mice | Improved neurological function; reduced cerebral edema, brain lesion, oxidative damage, and BBB disruption | NSS; brain water content; grip score; EB extravasation; lesion volume; MDA; GPx; Nissl and TUNEL staining; caspase 3; Bcl-2; Bax; Nrf2; HO-1; NQO1; TLR4; NF-κB; NeuN; TNF-α; IL-1β; IL-6; MMP-9; ZO-1 | Nrf2-ARE signaling pathway | [109, 110] |
| Polyphenols | Luteolin | Carrots, green tea, celery, etc. | Marmarou's weight drop-induced TBI mice/rats; scratch injury-induced TBI primary neurons | Improved motor performance and learning and memory; reduced cerebral edema, apoptosis index, and oxidative damage | Latency time; brain water content; grip score; MDA; GPx; catalase; SOD; TUNEL, H&E, cresyl violet, and TB staining; ROS; LDH release assay; Nrf2; HO-1; NQO1 | Nrf2-ARE signaling pathway | [116, 117] |
| Polyphenols | Isoliquiritigenin | Sinofranchetia chinensis, Glycyrrhiza uralensis, and Dalbergia odorifera | CCI-induced TBI mice/rats; OGD-induced SH-SY5Y cells | Improved motor performance, cognitive function, and cell viability; reduced cerebral edema, neuronal apoptosis, inflammatory response, BBB damage, and oxidative damage | Garcia neuroscore; MWM test; beam-balance latency; beam-walk latency; brain water content; contusion volume; EB extravasation; apoptosis rate; MDA; GPx; SOD; H2O2; H&E and Nissl staining; GFAP; NFL; AQP4; caspase 3; Bcl-2; Bcl-xL; Bax; Nrf2; HO-1; NQO1; TNF-α; INF-γ; IL-1β; IL-6; IL-10; Iba-1; CD68; AKT; GSK3β; p120; occludin; NF-κB; IκB; CCK-8 assay | Nrf2-ARE signaling pathway | [121–123] |
| Polyphenols | Tannic acid | Green and black tea, nuts, fruits, and vegetables | CCI-induced TBI mice/rats; OGD-induced SH-SY5Y cells | Improved behavioral performance; reduced cerebral edema, neuronal apoptosis, inflammatory response, and oxidative damage | Grip test score; rotarod test; beam balance; brain water content; GSH; LPO; GST; GPx; CAT; SOD; Nissl staining; caspase 3; Bcl-2; Bax; PARP; Nrf2; PGC-1α; Tfam; HO-1; NQO1; TNF-α; IL-1β; 4-HNE; GFAP | PGC-1α/Nrf2/HO-1 pathway | [133] |
| Polyphenols | Ellagic acid | Various berries, walnuts, and nuts | Experimental diffuse TBI rats; CCl4-induced brain injury rats | Improved memory, hippocampal electrophysiology, and long-term potentiation deficits; reduced neuronal apoptosis, inflammatory response, oxidative damage, and BBB disruption | Initial latency; step-through latency; EB leakage; NSS; MDA; GSH; CAT; caspase 3; Bcl-2; NF-κB; PARP; Nrf2; COX-2; VEGF; TNF-α; IL-1β; IL-6 | Nrf2 signaling pathway | [134, 135] |
| Polyphenols | Breviscapine | Erigeron breviscapus | Weight drop- or CCI-induced TBI rats | Improved neurobehavior; reduced neuronal apoptosis, inflammatory response, and oxidative damage | NSS; TUNEL staining; MDA; GSH; CAT; caspase 3; Bcl-2; Bax; IL-6; Nrf2; HO-1; NQO1; GSK3β; SYP | Nrf2 signaling pathway | [138–140] |
| Terpenoids | Asiatic acid | Centella asiatica | CCI-induced TBI rats | Improved neurological deficits; inhibited brain edema, neuronal apoptosis, and oxidative damage | NSS; brain water content; TUNEL staining; MDA; 4-HNE; 8-OHdG; Nrf2; HO-1 | Nrf2 signaling pathway | [149] |
| Terpenoids | Aucubin | Eucommia ulmoides | Weight drop-induced TBI mice; H2O2-induced primary cortical neurons | Improved neurological deficits and cognitive function; reduced brain edema, neuronal apoptosis and loss, inflammatory response, and oxidative damage | NSS; brain water content; TUNEL and Nissl staining; MWM test; Bcl-2; Bax; CC3; MAP2; MMP-9; MDA; SOD; GSH; GPx; 8-OHdG; NeuN; Iba-1; HMGB1; TLR4; MyD88; NF-κB; iNOS; COX-2; IL-1β; Nrf2; HO-1; NQO1 | Nrf2 signaling pathway | [155] |
| Terpenoids | Ursolic acid | Apples, bilberries, lavender, hawthorn, etc. | Weight drop-induced TBI mice | Improved neurobehavioral and mitochondrial function; reduced brain edema, oxidative damage, and neuronal cytoskeletal degradation | NSS; brain water content; TUNEL and Nissl staining; MDA; SOD; GPx; AKT; 4-HNE; 3-NT; ADP rate; succinate rate; spectrin; Nrf2; HO-1; NQO1 | AKT/Nrf2 signaling pathway | [158] |
| Terpenoids | Carnosic acid | Rosmarinus officinalis and Salvia officinalis | CCI-induced acute post-TBI mice | Improved motor and cognitive function and neuronal viability; reduced brain edema, neuronal apoptosis and loss, inflammatory response, and oxidative damage | Duration of apnea; mitochondrial respiration; Barnes maze test; novel object recognition (NOR) task; GFAP; Iba-1; NeuN; MAP2; vGlut1; HO-1 | Nrf2-ARE signaling pathway | [157–159] |
| Natural pigments | Fucoxanthin | Brown seaweeds | Weight drop-induced TBI mice; scratch injury-induced TBI primary cortical neurons | Improved neurobehavioral function and neuronal viability; reduced brain edema, neuronal apoptosis, and oxidative damage | NSS; grip test score; brain water content; lesion volume; TUNEL staining; caspase 3; PARP; cytochrome c; MDA; GPx; ROS; LC3; NeuN; p62; Nrf2; HO-1; NQO1 | Nrf2-ARE and Nrf2-autophagy pathways | [170] |
| Natural pigments | β-Carotene | Fungi, plants, and fruits | Weight drop-induced TBI mice | Improved neurological function; reduced brain edema, BBB disruption, neuronal apoptosis, and oxidative damage | Neurological deficit score; wire hanging; brain water content; EB extravasation; MDA; SOD; NeuN; Nissl and TUNEL staining; caspase 3; Bcl-2; Keap1; Nrf2; HO-1; NQO1 | Keap1-Nrf2 signaling pathway | [174] |
| Natural pigments | Astaxanthin | Salmon, rainbow trout, shrimp, and lobster | CCI- or weight drop-induced TBI mice; H2O2-induced primary cortical neurons | Improved neurological, motor, and cognitive function; reduced brain edema, BBB disruption, neuronal apoptosis, and oxidative damage | NSS; rotarod test time; neurological deficit scores; beam walking score; wire hanging test; MWM test; brain water content; 8-OHdG; immobility time; latency to immobility; SOD1; MDA; H2O2; GSH; ROS; CC3; Nissl, cresyl violet, and TUNEL staining; Prx2; SIRT1; ASK1; p38; NeuN; Bax; Bcl-2; caspase 3; Nrf2; HO-1; NQO1 | Nrf2 signaling pathway; SIRT1/Nrf2/Prx2/ASK1/p38 signaling pathway | [172, 173] |
| Natural pigments | Lutein | Calendula officinalis, spinach, and Brassica oleracea | CCI-induced sTBI mice; H2O2-induced primary cortical neurons | Improved motor and cognitive function; reduced brain edema, contusion volume, inflammatory response, and oxidative damage | Forelimb reaching test; immobility time; latency to immobility; brain water content; 8-OHdG; TNF-α; IL-1β; IL-6; MCP-1; ROS; SOD; GSH; ICAM-1; COX-2; NF-κB; ET-1; MDA; H2O2; CC3; Nissl, cresyl violet, and TUNEL staining; Prx2; SIRT1; ASK1; p38; NeuN; Bax; Bcl-2; caspase 3; Nrf2; HO-1; NQO1 | ICAM-1/Nrf2 signaling pathway | [176, 177] |
| Others | Sodium aescinate | Aesculus chinensis Bunge and chestnut | Weight drop-induced TBI mice; scratch injury-induced primary cortical neurons | Improved neurological function; reduced brain edema, inflammatory response, and oxidative damage | NSS; brain water content; lesion volume; MDA; GPx; Nissl and TUNEL staining; Bax; Bcl-2; cytochrome c; caspase 3; cell survival; ROS; Nrf2; HO-1; NQO1 | Nrf2-ARE pathway | [187] |
| Others | Melatonin | Plants, animals, fungi, and bacteria | Marmarou's weight drop-induced TBI mice | Reduced brain edema, neuronal degeneration and apoptosis, and oxidative damage | Brain water content; MDA; 3-NT; GPx; SOD; FJC staining; NeuN; Beclin-1; Nrf2; HO-1; NQO1 | Nrf2-ARE pathway | [193] |
| Others | Sinomenine | Sinomenium acutum (Thunb.) Rehd. et Wils. and Sinomenium acutum var. cinereum Rehd. et Wils. | Marmarou's weight drop-induced TBI mice | Improved motor performance; reduced brain edema, neuronal apoptosis, and oxidative damage | Grip test score; brain water content; NeuN and TUNEL staining; Bcl-2; caspase 3; MDA; GPx; SOD; Nrf2; HO-1; NQO1 | Nrf2-ARE pathway | [196] |
| Others | Sulforaphane | Vegetables, including cabbage, broccoli, and cauliflower | CCI-induced TBI mice | Improved motor performance and cognitive function; reduced brain edema, BBB permeability, mitochondrial dysfunction, and oxidative damage | MWM test; EB extravasation; brain water content; occludin; claudin-5; RECA-1; vWF; EBA; ZO-1; AQP4; GPx; GSTα3; 4-HNE; ADP rate; succinate rate; NeuN; Nrf2; HO-1; NQO1 | Nrf2 signaling pathway | [158, 196–198] |

### 5.1. Quercetin

Quercetin, a flavonoid, is commonly found in dietary plants, including vegetables and fruits such as onions, tomatoes, soy, and beans [72]. Emerging evidence indicates that quercetin exerts a variety of pharmacological effects, mainly involving antioxidant, anti-inflammatory, antiviral, anticancer, neuroprotective, and cardiovascular-protective activities [73]. It is known that the inflammatory response promotes oxidative damage in TBI [74].
In weight drop injury (WDI)-induced TBI mice, quercetin was reported to significantly inhibit neuroinflammation-mediated oxidative stress and histological alterations, as demonstrated by decreased lipid peroxidation and increased activities of SOD, catalase, and GPx [75]. Meanwhile, quercetin significantly reduced the brain water content and improved neurobehavioral status, which is closely associated with activation of the Nrf2/HO-1 pathway [74]. The impairment of mitochondrial function increases reactive oxygen species (ROS) production and damages mitochondrial proteins, DNA, and lipids [72]. Quercetin was reported to significantly inhibit mitochondrial damage in TBI male Institute of Cancer Research (ICR) mice, as evidenced by decreased mitochondrial Bax expression and increased mitochondrial cytochrome c levels, increased mitochondrial SOD and decreased mitochondrial MDA content, and recovery of the mitochondrial membrane potential (MMP) and intracellular ATP content. The mechanistic study demonstrated that quercetin promoted the translocation of Nrf2 from the cytoplasm to the nucleus, suggesting that quercetin exerts neuroprotective effects in TBI mice by maintaining mitochondrial homeostasis through activation of the Nrf2 signaling pathway [76]. In moderate TBI rats, quercetin inhibited oxidative and nitrosative stress by reducing the activity of NOS isoforms, including inducible nitric oxide synthase (iNOS) and constitutive nitric oxide synthase (cNOS), as well as the concentration of thiobarbituric acid (TBA)-reactive lipid peroxidation products in the cerebral hemisphere and periodontal tissues [77]. Therefore, quercetin exerts neuroprotective effects in TBI through multiple biological activities, including inhibition of oxidative damage, nitrosative stress, and the inflammatory response, as well as improvement of mitochondrial dysfunction and neuronal function, via the Nrf2 signaling pathway.

### 5.2. Curcumin

Curcumin, a polyphenol isolated from Curcuma longa rhizomes, has been reported to possess multiple biological activities, including antioxidative, anti-inflammatory, and anticancer effects [78]. Most importantly, curcumin has also been demonstrated to cross the BBB and exert neuroprotection in various neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS), via the inhibition of neuronal death and neuroinflammation [79]. In addition, emerging evidence indicates that curcumin exerts protective effects in TBI and activates the Nrf2 signaling pathway in vivo and in vitro [78, 80–82]. In mild fluid percussion injury (FPI)-induced TBI rats, curcumin significantly attenuated oxidative damage by decreasing oxidized protein levels and reversing the reduction in the levels of brain-derived neurotrophic factor (BDNF), synapsin I, and cyclic AMP (cAMP)-response element-binding protein 1 (CREB) [81]. Meanwhile, curcumin improved the cognitive and behavioral function of TBI rats [81, 83–85]. In addition, intraperitoneal administration of curcumin improved neurobehavioral function and decreased the brain water content in Feeney or Marmarou's weight drop-induced TBI mice. Furthermore, curcumin reduced oxidative stress in the ipsilateral cortex by decreasing the level of MDA and increasing the levels of SOD and GPx, as well as promoting neuronal regeneration and inhibiting neuronal apoptosis [80, 85]. Moreover, curcumin inhibited the neuroinflammatory response, as demonstrated by the decreased number of myeloperoxidase (MPO)-positive cells and decreased levels of proinflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin 6 (IL-6), and interleukin-1beta (IL-1β) [80].
The mechanistic study found that curcumin promoted the nuclear translocation of Nrf2 and increased the expression of downstream genes, including HO-1, NQO1, and GCLC, while the neuroprotective effects of curcumin, including antioxidation, antiapoptosis, and anti-inflammation, were attenuated in Nrf2-KO mice after TBI [80]. In addition, the anti-inflammatory effect of curcumin in TBI is also regulated by the TLR4/MyD88/NF-κB signaling pathway [86] and aquaporin-4 (AQP4) [87]. Diffuse axonal injury (DAI), a type of TBI, is recognized as an important cause of long-term motor and cognitive problems, and curcumin could ameliorate axonal injury and neuronal degeneration in rats after DAI. In addition, curcumin counteracted endoplasmic reticulum (ER) stress by strengthening the unfolded protein response (UPR) and reducing the levels of plasma tau, β-APP, and NF-H. The mechanistic study revealed that curcumin activated the PERK/Nrf2 signaling pathway [88]. Most importantly, the combined use of curcumin and candesartan, an angiotensin II receptor blocker used for the treatment of hypertension, showed better antioxidative, antiapoptotic, and anti-inflammatory effects than curcumin or candesartan alone [89]. In addition, tetrahydrocurcumin, a metabolite of curcumin, could also alleviate brain edema, reduce neuronal apoptosis, and improve neurobehavioral function via the Nrf2 signaling pathway in weight drop-induced TBI mice [90]. Taken together, curcumin and its metabolites are promising for the treatment of TBI.

### 5.3. Formononetin

Formononetin, an O-methylated isoflavone phytoestrogen, is commonly found in plants such as red clover [91]. Accumulating studies show that formononetin has various biological activities, including improvement of blood microcirculation and anticancer and antioxidative effects [92]. In addition, formononetin exhibits neuroprotection in AD, PD, spinal cord injury, and TBI [93, 94].
It has been reported that the administration of formononetin could decrease the neurological score and brain water content of TBI rats [91]. In addition, H&E staining showed that formononetin attenuated edema and necrosis in the lesioned zones of the brain and increased the number of neural cells. At the same time, oxidative stress was significantly reversed by formononetin, as indicated by increased SOD and GPx activity and decreased MDA content. The inflammatory cytokines TNF-α and IL-6, as well as the mRNA level of cyclooxygenase-2 (COX-2), were also reduced by formononetin. The mechanistic study revealed that formononetin increased the protein expression of Nrf2 [95]. Furthermore, the same research team found that microRNA-155 (miR-155) is involved in the neuroprotection of formononetin in TBI: pretreatment with formononetin significantly increased the expression of miR-155 and HO-1, accompanied by the downregulation of BACH1 [91]. All of this evidence suggests that formononetin provides neuroprotection in TBI via the Nrf2/HO-1 signaling pathway.

### 5.4. Baicalin

Baicalin, known as 7-D-glucuronic acid-5,6-dihydroxyflavone, is a major flavone found in the radix of Scutellaria baicalensis [96]. Emerging evidence indicates that baicalin can cross the BBB and exert neuroprotective effects in various CNS-related diseases, including AD, cerebral ischemia, spinal cord injury, and TBI [97]. In addition, baicalin was reported to activate the Nrf2 signaling pathway and attenuate subarachnoid hemorrhagic brain injury [98]. In weight drop-induced TBI mice, baicalin significantly reduced the neurological severity score (NSS) and the brain water content, and inhibited neuronal apoptosis, as evidenced by a decreased number of terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive neurons, a decreased Bax/Bcl-2 ratio, and reduced cleavage of caspase 3.
Meanwhile, baicalin attenuated oxidative damage by decreasing MDA levels and increasing GPx and SOD activity and expression. The mechanistic study found that baicalin increased the expression of Nrf2, promoted its nuclear translocation, and upregulated the mRNA and protein expression of HO-1 and NQO1, while treatment with LY294002 reversed the antiapoptotic and antioxidative effects of baicalin and its activation of the Nrf2 signaling pathway, suggesting that baicalin exerts neuroprotective effects via the Akt/Nrf2 pathway in TBI [96]. Autophagy is known to act as a protective mechanism in neurodegenerative diseases. Furthermore, the same research team found that baicalin induced autophagy, alleviated BBB disruption, and inhibited neuronal apoptosis in mice after TBI, while co-treatment with 3-MA partly abolished the neuroprotective effect of baicalin. Therefore, baicalin provides a beneficial effect via the Nrf2-regulated antioxidative pathway and autophagy induction.

### 5.5. Catechin

Catechin is a flavan-3-ol and belongs to a class of natural polyphenols [99]. It is a plant secondary metabolite and a potent antioxidant [100]. Structurally, it has four diastereoisomers: two isomers with trans configuration, called (+)-catechin, and two isomers with cis configuration, called (-)-epicatechin [101]. They are commonly found in foods and fruits such as cocoa, tea, and grapes. The pharmacological activity of catechin mainly involves antioxidative, anti-inflammatory, antifungal, antidiabetic, antibacterial, and antitumor effects [102]. In addition, catechin also exhibits neuroprotective effects in CCI-induced TBI rats by inhibiting the disruption of the BBB and excessive inflammatory responses [103].
The expression of junction proteins associated with BBB integrity, including occludin and zonula occludens protein-1 (ZO-1), was increased, while the levels of proinflammatory mediators, including IL-1β, iNOS, and IL-6, were decreased by catechin. At the same time, catechin significantly alleviated brain damage, as revealed by decreases in brain water content and brain infarction volume, and improved motor and cognitive deficits [103]. In addition, catechin inhibited cell apoptosis and induced neurotrophic factors in rats after TBI [104]. In CCI-induced TBI mice, the administration of epicatechin significantly attenuated neutrophil infiltration and oxidative damage. Specifically, epicatechin reduced lesion volume, edema, and cell death, and improved neurological function, cognitive performance, and depression-like behaviors. In addition, epicatechin decreased white matter injury, HO-1 expression, and the deposition of ferric iron. The mechanistic study found that epicatechin decreased Keap1 expression while increasing the nuclear translocation of Nrf2. Meanwhile, epicatechin reduced the activity of matrix metallopeptidase 9 (MMP-9) and increased the expression of SOD1 and NQO1 [102]. Therefore, epicatechin exerts neuroprotective effects in TBI mice by modulating the Nrf2-regulated oxidative stress response and inhibiting iron deposition.

### 5.6. Fisetin

Fisetin, also known as 3,3′,4′,7-tetrahydroxyflavone, is a flavonol first extracted from Cotinus coggygria by Jacob Schmid in 1886 [105]; its structure was elucidated by Josef Herzig in 1891. In addition, fisetin is found in many vegetables and fruits, such as onions, cucumbers, persimmons, strawberries, and apples [106]. Emerging evidence indicates that fisetin, a potent antioxidant, possesses multiple biological activities, including anti-inflammatory, antiviral, anticarcinogenic, and other effects [107].
Fisetin also exerts neuroprotective effects through the inhibition of oxidative stress in AD, PD, and other conditions [108]. In addition, fisetin showed protective effects in weight drop-induced TBI mice, as shown by the decreased NSS, brain water content, Evans blue (EB) extravasation, and lesion volume of brain tissue, as well as the increased grip test score. Meanwhile, the MDA level was decreased and GPx activity was increased by fisetin, suggesting that fisetin provides a neuroprotective effect by suppressing TBI-induced oxidative stress [109]. In addition, Nissl staining of the neuronal outline and structure showed that fisetin improved neuronal viability, and neuronal apoptosis was inhibited by fisetin, as demonstrated by decreased TUNEL signals, a reduced Bax/Bcl-2 ratio, and reduced cleaved caspase-3. The mechanistic study demonstrated that fisetin promoted Nrf2 nuclear translocation and increased the expression of HO-1 and NQO1, while knockout of Nrf2 abrogated the antioxidative and antiapoptotic neuroprotective effects of fisetin [109]. Moreover, fisetin was reported to exert anti-inflammatory effects in TBI mice via the TLR4/NF-κB pathway, significantly decreasing the levels of TNF-α, IL-1β, and IL-6. Meanwhile, the BBB disruption of TBI mice was attenuated by fisetin [110]. Therefore, fisetin exerts neuroprotective effects in TBI via the Nrf2-regulated oxidative stress response and the NF-κB-mediated inflammatory signaling pathway.

### 5.7. Luteolin

Luteolin, a flavonoid, is abundant in fruits and vegetables such as carrots, green tea, and celery [111]. Emerging evidence indicates that luteolin has a wide variety of biological activities, including antioxidative and anti-inflammatory effects [112, 113]. In addition, several studies have demonstrated the neuroprotective effect of luteolin in multiple in vivo and in vitro models [114, 115].
For example, luteolin could recover motor performance and reduce post-traumatic cerebral edema in weight drop-induced TBI mice. The oxidative damage was reduced by luteolin, as demonstrated by the decrease in MDA levels and the increase in GPx activity in the ipsilateral cortex. The mechanistic study found that luteolin promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 [116]. In addition, luteolin significantly improved learning and memory impairment in rats after TBI, which was closely associated with the attenuation of oxidative damage, as indicated by the decreased MDA level and increased SOD and CAT activity [117, 118]. Therefore, the Nrf2-regulated oxidative stress response plays an important role in the action of luteolin against TBI.

### 5.8. Isoliquiritigenin

Isoliquiritigenin, a chalcone compound, is often found in plants including Sinofranchetia chinensis, Glycyrrhiza uralensis, and Dalbergia odorifera [119]. Isoliquiritigenin has been reported to attenuate oxidative damage, inhibit the inflammatory response, and suppress tumor growth [120]. In addition, isoliquiritigenin activates the Nrf2 signaling pathway to exert antioxidative and anti-inflammatory effects in multiple cellular and animal models. Isoliquiritigenin also exerts a neuroprotective effect in CCI-induced TBI mice via the Nrf2-ARE signaling pathway [121]. For example, isoliquiritigenin increased the Garcia Neuro score and decreased the brain water content, the expression of aquaporin 4 (AQP4), and EB leakage. Glial activation, indicated by GFAP expression, was inhibited, and neuronal viability, shown by neurofilament light (NFL) expression, was increased by isoliquiritigenin. In addition, isoliquiritigenin increased the number of Nissl staining-positive neurons and inhibited neuronal apoptosis, as evidenced by the decreased expression of cleaved caspase-3.
Furthermore, the oxidative damage was ameliorated by isoliquiritigenin, as shown by the increased GPx activity and SOD levels and the decreased H2O2 concentration and MDA levels. However, the KO of Nrf2 significantly attenuated the neuroprotective effect of isoliquiritigenin in mice after TBI. The mechanistic study demonstrated that isoliquiritigenin increased the nuclear translocation of Nrf2 and the protein and mRNA expression of NQO1 and HO-1. In an in vitro study, isoliquiritigenin also activated the Nrf2-ARE signaling pathway and increased cell viability in oxygen and glucose deprivation (OGD)-induced SH-SY5Y cells. In addition, isoliquiritigenin inhibited shear stress-induced apoptosis in SH-SY5Y cells, as well as suppressed the inflammatory response and inhibited neuronal apoptosis in CCI-induced TBI mice or rats via the PI3K/AKT/GSK-3β/NF-κB signaling pathway [122, 123]. Moreover, isoliquiritigenin protected against BBB damage in mice after TBI via inhibiting the PI3K/AKT/GSK-3β pathway [123]. Therefore, isoliquiritigenin may be a promising agent for the treatment of TBI via the inhibition of oxidative stress, the inflammatory response, and BBB disruption.

### 5.9. Tannic Acid

Tannic acid, a natural polyphenol, is commonly found in green and black teas as well as nuts, fruits, and vegetables [124]. Emerging evidence indicates that tannic acid possesses multiple biological activities such as antioxidative, anti-inflammatory, antiviral, and antiapoptotic effects [125–127]. In addition, tannic acid exhibits neuroprotective effects, as shown by the improvement of behavioral deficits and the inhibition of neurodegeneration [128]. Recently, tannic acid has been proven to ameliorate the oxidative damage and behavioral impairments of mice after TBI [128]. For example, tannic acid significantly increased the score in the grip test and the motor coordination time, and decreased the stay time in the balance test.
In addition, tannic acid inhibited neuronal damage and reduced the brain water content of TBI mice. A further study found that tannic acid could attenuate oxidative stress, as evidenced by increased glutathione (GSH) levels, 1-chloro-2,4-dinitrobenzene (CDNB) conjugation, NADPH oxidation, and H2O2 consumption. In addition, apoptosis-related proteins including cleaved caspase-3 and poly(ADP-ribose) polymerase (PARP), as well as the Bax/Bcl-2 ratio, were significantly inhibited by tannic acid. Meanwhile, the inflammatory response, indicated by the increased levels of TNF-α and IL-1β and GFAP immunofluorescence intensity, was also suppressed. The mechanistic study demonstrated that tannic acid increased the protein expression of Nrf2, PGC-1α, Tfam, and HO-1. Therefore, tannic acid exerts a neuroprotective effect in TBI via activating the PGC-1α/Nrf2/HO-1 signaling pathway.

### 5.10. Ellagic Acid

Ellagic acid, a natural polyphenol, is commonly found in various berries such as blueberries, strawberries, and blackberries, as well as in walnuts and other nuts [129]. Several studies show that ellagic acid exerts multiple biological activities, including anti-inflammatory, antioxidative, antifibrosis, antidepressant, and neuroprotective effects [130]. In addition, ellagic acid exhibits protective effects in various brain injuries such as neonatal hypoxic brain injury, cerebral ischemia/reperfusion injury, carbon tetrachloride (CCl4)-induced brain damage, and TBI [131–133]. Here, we summarize the neuroprotective effect of ellagic acid in TBI and its mechanism of action. In experimental diffuse TBI rats, treatment with ellagic acid significantly improved memory and hippocampal electrophysiology deficits [134]. Meanwhile, the inflammatory responses indicated by the elevated TNF-α, IL-1β, and IL-6 levels were reduced by ellagic acid [134, 135]. In addition, ellagic acid could also decrease the BBB permeability of mice after TBI [135].
In CCl4-induced brain injury rats, ellagic acid decreased MDA levels and increased GSH content and CAT activity. The mechanistic study demonstrated that ellagic acid inhibited the protein expression of NF-κB and COX-2 while increasing the protein expression of Nrf2 [133]. Therefore, ellagic acid exerts an antioxidative effect via activating the Nrf2 pathway and exhibits anti-inflammatory effects via inhibiting the NF-κB pathway in TBI.

### 5.11. Breviscapine

Breviscapine is a flavonoid aglycone isolated from Erigeron plants [136]. Modern pharmacological studies indicate that breviscapine can dilate blood vessels and improve microcirculation, suggesting its potential therapeutic role in cardiovascular and CNS-related diseases [137]. In addition, breviscapine, acting as a scavenger of oxygen free radicals, has been demonstrated to improve ATPase and SOD activity. Recently, breviscapine was also reported to improve neurobehavior and decrease neuronal apoptosis in TBI mice, which is closely associated with the translocation of Nrf2 from the cytoplasm into the nucleus and the subsequent upregulation of Nrf2 downstream factors such as HO-1 and NQO1 [138]. In addition, the inhibition of glycogen synthase kinase-3β (GSK-3β) and IL-6 by breviscapine is associated with its neuroprotective effect in TBI [139, 140]. Therefore, breviscapine exerts neuroprotective effects in TBI via antioxidative, antiapoptotic, and anti-inflammatory actions.

### 5.12. Asiatic Acid

Asiatic acid, a pentacyclic triterpene, is isolated from natural plants such as Centella asiatica [141]. Studies have shown that asiatic acid exhibits potent anti-inflammatory and antioxidative properties, which contribute to its protective effects in spinal cord injury, ischemic stroke, cardiac hypertrophy, liver injury, and lung injury through multiple mechanisms [142].
For example, the administration of asiatic acid could increase Basso, Beattie, Bresnahan scores and the plane test score in spinal cord injury (SCI) rats. Meanwhile, asiatic acid inhibited the inflammatory response by reducing the levels of IL-1β, IL-18, IL-6, and TNF-α, and counteracted oxidative stress by decreasing ROS, H2O2, and MDA levels while increasing SOD activity and glutathione production. The underlying mechanisms include the activation of Nrf2/HO-1 and the inhibition of the NLRP3 inflammasome pathway. In addition, asiatic acid could alleviate tert-butyl hydroperoxide (tBHP)-induced oxidative stress in HepG2 cells. The researchers found that asiatic acid significantly inhibited tBHP-induced cytotoxicity, apoptosis, and the generation of ROS, which was attributed to the activation of the Keap1/Nrf2/ARE signaling pathway and the upregulation of Nrf2 target genes including HO-1, NQO1, and GCLC [143]. In a CCI-induced TBI model, the administration of asiatic acid significantly improved neurological deficits and reduced brain edema. Meanwhile, asiatic acid counteracted oxidative damage, as evidenced by the reduced levels of MDA, 4-HNE, and 8-hydroxy-2′-deoxyguanosine (8-OHdG). The mechanistic study further found that asiatic acid could increase the mRNA and protein expression of Nrf2 and HO-1 [144]. Taken together, asiatic acid improves neurological deficits in TBI via activating the Nrf2/HO-1 signaling pathway.

### 5.13. Aucubin

Aucubin, an iridoid glycoside isolated from natural plants such as Eucommia ulmoides [145], is reported to have several pharmacological effects including antioxidation, antifibrosis, antiageing, and anti-inflammation [145–147]. Recently, emerging evidence indicates that aucubin exerts neuroprotective effects via antioxidation and anti-inflammation [148]. In addition, aucubin also inhibited lipid accumulation and attenuated oxidative stress via activating the Nrf2/HO-1 and AMP-activated protein kinase (AMPK) signaling pathways [147].
Moreover, aucubin inhibited lipopolysaccharide (LPS)-induced acute pulmonary injury through the regulation of the Nrf2 and AMPK pathways [149]. In H2O2-induced primary cortical neurons and a weight drop-induced TBI mouse model, aucubin was found to significantly decrease the excessive generation of ROS and inhibit neuronal apoptosis. In addition, aucubin could reduce brain edema, improve cognitive function, decrease neuronal apoptosis and loss of neurons, attenuate oxidative stress, and suppress the inflammatory response in the cortex of TBI mice. The mechanistic study demonstrated that aucubin activated the Nrf2-ARE signaling pathway and upregulated the expression of HO-1 and NQO1, while the neuroprotective effect of aucubin was abolished in Nrf2-KO mice after TBI [150]. Therefore, aucubin provides a protective effect in TBI via activating the Nrf2 signaling pathway.

### 5.14. Ursolic Acid

Ursolic acid, a pentacyclic triterpenoid compound, is widely found in various fruits and plants such as apples, bilberries, lavender, and hawthorn [151]. Ursolic acid has been reported to possess multiple pharmacological effects including anti-inflammatory, antioxidative, antifungal, antibacterial, and neuroprotective properties [152]. In addition, ursolic acid activates the Nrf2/ARE signaling pathway to exert a protective effect in cerebral ischemia, liver fibrosis, and TBI [153]. In weight drop-induced TBI mice, the administration of ursolic acid could improve neurobehavioral functions and reduce the cerebral edema of mice after TBI. In addition, ursolic acid inhibited neuronal apoptosis, as shown by Nissl and TUNEL staining. Meanwhile, ursolic acid ameliorated oxidative stress by increasing SOD and GPx activity as well as decreasing MDA levels.
The mechanistic study demonstrated that ursolic acid promoted the nuclear translocation of Nrf2 and increased the levels of its downstream targets HO-1 and NQO1, while the KO of Nrf2 could partly abolish the protective effect of ursolic acid in TBI [153]. Therefore, ursolic acid exerts a neuroprotective effect in TBI partly via activating the Nrf2 signaling pathway.

### 5.15. Carnosic Acid

Carnosic acid, a natural benzenediol abietane diterpene, is found in Rosmarinus officinalis and Salvia officinalis [154]. Carnosic acid and carnosol are the two major antioxidants in Rosmarinus officinalis [155]. Emerging evidence indicates that carnosic acid is a potent activator of Nrf2 and exerts a neuroprotective effect in various neurodegenerative diseases [156]. In CCI-induced acute post-TBI mice, carnosic acid could reduce TBI-induced oxidative damage by decreasing the levels of 4-HNE and 3-NE in brain tissues. A further study demonstrated that carnosic acid maintained mitochondrial respiratory function and attenuated oxidative damage by reducing the amount of 4-HNE bound to cortical mitochondria [157, 158]. In addition, carnosic acid showed a potent neuroprotective effect in repetitive mild TBI (rmTBI), as evidenced by the significant improvement of motor and cognitive performance. Meanwhile, the expression of GFAP and Iba1 was inhibited, suggesting that carnosic acid inhibited neuroinflammation in TBI [159]. Therefore, carnosic acid exerts a neuroprotective effect via inhibiting mitochondrial oxidative damage in TBI through the Nrf2-ARE signaling pathway.

### 5.16. Fucoxanthin

Fucoxanthin, a carotenoid isolated from natural sources such as seaweeds and microalgae, is considered a potent antioxidant [160]. Several studies show that fucoxanthin exerts various pharmacological activities such as antioxidative, anti-inflammatory, anticancer, and health protection effects [161].
In addition, fucoxanthin exerts anti-inflammatory effects in LPS-induced BV-2 microglial cells via activating the Nrf2/HO-1 signaling pathway [162] and inhibits the overactivation of the NLRP3 inflammasome via the NF-κB signaling pathway in bone marrow-derived immune cells and astrocytes [163]. In mouse hepatic BNL CL.2 cells, fucoxanthin was reported to upregulate the mRNA and protein expression of HO-1 and NQO1 via increasing the phosphorylation of ERK and p38 and activating the Nrf2/ARE pathway, which contributes to its antioxidant activity [164]. Recently, it has been reported that the neuroprotective effect of fucoxanthin in TBI mice is mediated via the Nrf2-ARE and Nrf2-autophagy pathways [165]. In this study, the researchers found that fucoxanthin alleviated neurological deficits, cerebral edema, brain lesions, and neuronal apoptosis of TBI mice. In addition, fucoxanthin significantly decreased the generation of MDA and increased the activity of GPx, suggesting its antioxidative effect in TBI. Furthermore, in vitro experiments revealed that fucoxanthin could improve neuronal survival and reduce the production of ROS in primary cultured neurons. A further mechanistic study revealed that fucoxanthin activated the Nrf2-ARE pathway and autophagy in vivo and in vitro, while fucoxanthin failed to activate autophagy and exert a neuroprotective effect in Nrf2−/− mice after TBI. Therefore, fucoxanthin activates the Nrf2 signaling pathway and induces autophagy to exert a neuroprotective effect in TBI.

### 5.17. β-Carotene

β-Carotene, abundant in fungi, plants, and fruits, is a member of the carotenes and belongs to the terpenoids [166]. Accumulating studies indicate that β-carotene, acting as an antioxidant, has potential therapeutic effects in various diseases, such as cardiovascular disease, cancer, and neurodegenerative diseases [167, 168].
The neuroprotective effect of β-carotene was also reported in CCI-induced TBI mice: the administration of β-carotene significantly improved neurological function and reduced brain edema, as evidenced by the decreased neurological deficit score and brain water content and the increased wire-hanging time of mice after TBI. In addition, β-carotene could maintain BBB permeability, as indicated by EB extravasation, and ameliorate oxidative stress, as shown by the increased SOD levels and decreased MDA levels. The mechanistic study demonstrated that β-carotene activated the Keap1-Nrf2 signaling pathway and promoted the expression of HO-1 and NQO1 [169]. Therefore, β-carotene provides a neuroprotective effect in TBI via inhibiting oxidative stress through the Nrf2 pathway.

### 5.18. Astaxanthin

Astaxanthin is a carotenoid commonly found in certain plants and animals, such as salmon, rainbow trout, shrimp, and lobster [170]. Emerging evidence indicates that astaxanthin exhibits multiple biological activities, including antiageing, anticancer, heart protection, and neuroprotection [171]. Recently, astaxanthin was reported to provide neuroprotection in CCI-induced TBI mice, such as decreasing the NSS score and immobility time and increasing the rotarod time and latency to immobility. In addition, astaxanthin increased SOD1 levels and inhibited the protein expression of cleaved caspase-3 and the number of TUNEL-positive cells, suggesting that astaxanthin exerted antioxidative and antiapoptotic effects. The mechanistic study demonstrated that astaxanthin increased the protein and mRNA expression of Nrf2, HO-1, NQO1, and SOD1 [172]. Moreover, in weight drop-induced TBI mice, astaxanthin significantly reduced brain edema and improved behavioral functions including neurological scores, rotarod performance, beam walking performance, and falling latency during the hanging test. In addition, astaxanthin improved neuronal survival, as indicated by Nissl staining.
Furthermore, astaxanthin exerted an antioxidative effect by increasing SOD1 protein expression and inhibited neuronal apoptosis by reducing the level of cleaved caspase-3 and the number of TUNEL-positive cells. The mechanistic study revealed that astaxanthin promoted the activation of the Nrf2 signaling pathway, as demonstrated by the increased mRNA levels and protein expression of Nrf2, HO-1, and NQO1, while the inhibition of Prx2 or SIRT1 reversed the antioxidative and antiapoptotic effects of astaxanthin. Therefore, astaxanthin activated the SIRT1/Nrf2/Prx2/ASK1 signaling pathway in TBI. Moreover, astaxanthin also provided a neuroprotective effect in H2O2-induced primary cortical neurons by reducing oxidative damage and inhibiting apoptosis via the SIRT1/Nrf2/Prx2/ASK1/p38 signaling pathway [173]. Therefore, astaxanthin exerts neuroprotective effects, including antioxidation and antiapoptosis, via activating the Nrf2 signaling pathway in TBI.

### 5.19. Lutein

Lutein, a natural carotenoid, is commonly found in a variety of flowers, vegetables, and fruits, such as Calendula officinalis, spinach, and Brassica oleracea [174]. Accumulating studies demonstrate that lutein is a potent antioxidant and exhibits benefits in various diseases, including ischemia/reperfusion injury, diabetic retinopathy, heart disease, AD, and TBI [175]. In severe TBI rats, the administration of lutein significantly attenuated the impairment of skilled motor function and reversed the increase in contusion volume of TBI rats. In addition, lutein suppressed the inflammatory response by decreasing the levels of TNF-α, IL-1β, IL-6, and monocyte chemoattractant protein-1 (MCP-1). Meanwhile, lutein decreased ROS production and increased SOD and GSH activity, suggesting that lutein attenuated TBI-induced oxidative damage.
Moreover, the mechanistic study found that lutein inhibited the protein expression of intercellular adhesion molecule-1 (ICAM-1), COX-2, and NF-κB, while increasing the protein expression of ET-1 and Nrf2. Therefore, the neuroprotective effect of lutein in TBI may be mediated via the NF-κB/ICAM-1/Nrf2 signaling pathway [176]. It is known that zeaxanthin and lutein are isomers with identical chemical formulas. Recently, it was reported that lutein/zeaxanthin exerted a neuroprotective effect in TBI mice induced by a liquid nitrogen-cooled copper probe, and brain infarct and brain swelling were markedly reduced by lutein/zeaxanthin. The protein expression of growth-associated protein 43 (GAP43), ICAM, neural cell adhesion molecule (NCAM), brain-derived neurotrophic factor (BDNF), and Nrf2 was increased, while the protein expression of GFAP, IL-1β, IL-6, and NF-κB was inhibited by lutein/zeaxanthin [177]. Therefore, lutein/zeaxanthin exerts antioxidative and anti-inflammatory effects via the Nrf2 and NF-κB signaling pathways.

### 5.20. Sodium Aescinate

Sodium aescinate (SA) is a mixture of triterpene saponins isolated from the seeds of Aesculus chinensis Bunge and chestnut [178]. Accumulating studies show that SA exerts anti-inflammatory, anticancer, and antioxidative effects [179–181]. In addition, SA has been reported to exhibit neuroprotective effects in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD mice and mutant huntingtin (mHTT)-overexpressing HT22 cells [182, 183]. A recent study reported that SA could attenuate brain injury in weight drop-induced TBI mice [182]. The intraperitoneal administration of SA significantly decreased the NSS, brain water content, and lesion volume of mice after TBI. A further study found that SA suppressed TBI-induced oxidative stress, as evidenced by the decreased MDA levels and increased GPx activity.
Nissl staining showed that SA increased the viability of neurons, and TUNEL staining showed that SA inhibited neuronal apoptosis. Meanwhile, SA decreased the Bax/Bcl-2 ratio, the cleaved form of caspase-3, and the release of cytochrome c from mitochondria into the cytoplasm. The mechanistic study demonstrated that SA promoted the translocation of Nrf2 from the cytoplasm into the nucleus and subsequently increased the expression of HO-1 and NQO1. Moreover, the neuroprotective effect and mechanism of action of SA have been confirmed in scratch injury-induced TBI primary neurons and in Nrf2-KO mice after TBI. Therefore, SA exerts a neuroprotective effect in TBI via activating the Nrf2 signaling pathway.

### 5.21. Melatonin

Melatonin, commonly found in plants, animals, fungi, and bacteria, plays an important role in the regulation of the biological clock [184]. Melatonin as a dietary supplement is widely used to treat insomnia. Emerging evidence indicates that melatonin exerts neuroprotection in various conditions including brain injury, spinal cord injury, and cerebral ischemia [185]. In addition, melatonin has been demonstrated to be a potent antioxidant with the ability to reduce oxidative stress, inhibit the inflammatory response, and attenuate neuronal apoptosis [186]. In craniocerebral trauma, melatonin showed a neuroprotective effect due to its antioxidative and anti-inflammatory actions and its inhibitory effect on the activation of adhesion molecules [187]. In Marmarou’s weight drop-induced TBI mice, melatonin significantly inhibited neuronal degeneration and reduced cerebral edema in the brain. Meanwhile, melatonin also attenuated the oxidative stress induced by TBI, as evidenced by the decreased MDA levels and 3-NE expression, as well as the increased GPx and SOD levels.
The mechanistic study demonstrated that melatonin increased the nuclear translocation of Nrf2 and promoted the protein expression and mRNA levels of HO-1 and NQO1, while the KO of Nrf2 could partly reverse the neuroprotective effects of melatonin, including antioxidation, inhibition of neuronal degeneration, and alleviation of cerebral edema in mice after TBI. Therefore, melatonin provides a neuroprotective effect in TBI via the Nrf2-ARE signaling pathway [188]. Due to the complex pathophysiology of TBI, the combinational use of melatonin and minocycline, a bacteriostatic agent reported to inhibit neuroinflammation, did not exhibit a better neuroprotective effect than either agent alone; dosing and/or administration issues may account for this result [189]. Therefore, the optimal combination should be explored for the treatment of TBI.

### 5.22. Sinomenine

Sinomenine is an alkaloid compound isolated from the roots of climbing plants including Sinomenium acutum (Thunb.) Rehd. et Wils. and Sinomenium acutum var. cinereum Rehd. et Wils [190]. Sinomenine has been demonstrated to exhibit antihypertensive and anti-inflammatory effects and is commonly used to treat various forms of acute and chronic arthritis, rheumatism, and rheumatoid arthritis (RA). In addition, sinomenine provides a neuroprotective effect in Marmarou’s weight drop-induced TBI mice. The administration of sinomenine significantly increased the grip test score and decreased brain water content. In addition, neuronal viability was increased by sinomenine, as shown by the increased number of NeuN-positive neurons and the decreased number of TUNEL-positive neurons. Meanwhile, sinomenine increased Bcl-2 protein expression and decreased cleaved caspase-3 expression. Furthermore, sinomenine attenuated oxidative stress by decreasing MDA levels and increasing SOD and GPx activity.
The mechanistic study revealed that sinomenine promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 in mice after TBI [191]. Therefore, sinomenine, acting as a potent anti-inflammatory agent, provides antiapoptotic and antioxidative effects in TBI via the Nrf2-ARE signaling pathway.

### 5.23. Sulforaphane

Sulforaphane, an isothiocyanate compound, is commonly found in certain kinds of vegetables, including cabbage, broccoli, and cauliflower [192]. Emerging evidence indicates that sulforaphane is widely used to treat prostate cancer, autism, asthma, and many other diseases [193–195]. In addition, sulforaphane has also shown a neuroprotective effect in TBI. For example, sulforaphane decreased BBB permeability in CCI-induced TBI rats, as evidenced by the decreased EB extravasation and relative fluorescence intensity of fluorescein [196]. Meanwhile, the loss of tight junction proteins (TJs) including occludin and claudin-5 was attenuated by sulforaphane. The mechanistic study found that sulforaphane increased the mRNA levels of Nrf2-driven genes including GST-alpha3 (GSTα3), GPx, and HO-1, as well as enhanced the enzymatic activity of NQO1 in the brain and brain microvessels of TBI mice, suggesting that sulforaphane activated the Nrf2-ARE signaling pathway to protect BBB integrity. Furthermore, sulforaphane could reduce brain edema, as evidenced by the decrease in brain water content, which was closely associated with the attenuation of AQP4 loss in the injury core and the further increase of the AQP4 level in the penumbra region [197]. Moreover, the Morris water maze (MWM) test showed that sulforaphane improved spatial memory and spatial working memory. Meanwhile, TBI-induced oxidative damage was significantly attenuated by sulforaphane, as demonstrated by the reduced 4-HNE levels [198]. In addition, sulforaphane also attenuated 4-HNE-induced dysfunction in isolated cortical mitochondria [158].
Taken together, sulforaphane provides a neuroprotective effect in TBI via the activation of the Nrf2-ARE signaling pathway.

### 5.1. Quercetin

Quercetin, a flavonoid, is commonly found in dietary plants including vegetables and fruits such as onions, tomatoes, soy, and beans [72]. Emerging evidence indicates that quercetin exerts a variety of pharmacological effects, mainly involving antioxidation, anti-inflammation, antivirus, anticancer, neuroprotection, and cardiovascular protection [73]. It is known that the inflammatory response promotes oxidative damage in TBI [74]. In weight drop injury (WDI)-induced TBI mice, quercetin was reported to significantly inhibit neuroinflammation-mediated oxidative stress and histological alterations, as demonstrated by the decreased lipid peroxidation and increased activities of SOD, catalase, and GPx [75]. Meanwhile, quercetin could significantly reduce the brain water content and improve the neurobehavioral status, which is closely associated with the activation of the Nrf2/HO-1 pathway [74]. The impairment of mitochondrial function leads to an increase in reactive oxygen species (ROS) production and damages mitochondrial proteins, DNA, and lipids [72]. Quercetin was reported to significantly inhibit mitochondrial damage in TBI male Institute of Cancer Research (ICR) mice, as evidenced by the decreased expression of Bax and increased levels of cytochrome c in mitochondria, the increased mitochondrial SOD and decreased mitochondrial MDA content, and the recovery of mitochondrial membrane potential (MMP) and intracellular ATP content. The mechanistic study demonstrated that quercetin promoted the translocation of Nrf2 from the cytoplasm to the nucleus, suggesting that quercetin exerts neuroprotective effects in TBI mice via maintaining mitochondrial homeostasis through the activation of the Nrf2 signaling pathway [76].
In moderate TBI rats, quercetin inhibited oxidative-nitrosative stress by reducing the activity of NOS isoforms, including inducible nitric oxide synthase (iNOS) and constitutive nitric oxide synthase (cNOS), as well as the concentration of thiobarbituric acid (TBA)-reactive lipid peroxidation products in the cerebral hemisphere and periodontal tissues [77]. Therefore, quercetin exerts neuroprotective effects in TBI via multiple biological activities, including the inhibition of oxidative damage, nitrosative stress, and the inflammatory response, as well as the improvement of mitochondrial dysfunction and neuronal function through the Nrf2 signaling pathway.

### 5.2. Curcumin

Curcumin, a polyphenol isolated from Curcuma longa rhizomes, has been reported to possess multiple biological activities, including antioxidative, anti-inflammatory, and anticancer effects [78]. Most importantly, curcumin has also been demonstrated to cross the BBB and exert neuroprotection in various neurodegenerative diseases, such as Alzheimer’s disease (AD), Parkinson’s disease (PD), and amyotrophic lateral sclerosis (ALS), via the inhibition of neuronal death and neuroinflammation [79]. In addition, emerging evidence indicates that curcumin exerts protective effects in TBI and activates the Nrf2 signaling pathway in vivo and in vitro [78, 80–82]. In mild fluid percussion injury (FPI)-induced TBI rats, curcumin significantly attenuated oxidative damage by decreasing oxidized protein levels and reversing the reduction in the levels of brain-derived neurotrophic factor (BDNF), synapsin I, and cyclic AMP (cAMP)-response element-binding protein 1 (CREB) [81]. Meanwhile, curcumin improved the cognitive and behavioral function of TBI rats [81, 83–85]. In addition, the intraperitoneal administration of curcumin could improve neurobehavioral function and decrease the brain water content in Feeney or Marmarou’s weight drop-induced TBI mice.
Furthermore, curcumin reduced oxidative stress in the ipsilateral cortex by decreasing the level of MDA and increasing the levels of SOD and GPx, as well as promoted neuronal regeneration and inhibited neuronal apoptosis [80, 85]. Moreover, curcumin inhibited the neuroinflammatory response, as demonstrated by the decreased number of myeloperoxidase (MPO)-positive cells and the decreased levels of cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin 6 (IL-6), and interleukin-1beta (IL-1β) [80]. The mechanistic study found that curcumin promoted the nuclear translocation of Nrf2 and increased the expression of downstream genes, including HO-1, NQO1, and GCLC, while the neuroprotective effects of curcumin, including antioxidation, antiapoptosis, and anti-inflammation, were attenuated in Nrf2-KO mice after TBI [80]. In addition, the anti-inflammatory effect of curcumin in TBI was also regulated by the TLR4/MyD88/NF-κB signaling pathway [86] and aquaporin-4 (AQP4) [87]. Diffuse axonal injury (DAI), a type of TBI, is recognized as an important cause of long-term motor and cognitive problems, while curcumin could ameliorate axonal injury and neuronal degeneration in rats after DAI. In addition, curcumin alleviated endoplasmic reticulum (ER) stress by strengthening the unfolded protein response (UPR) and reducing the levels of plasma tau, β-APP, and NF-H. The mechanistic study revealed that curcumin activated the PERK/Nrf2 signaling pathway [88]. Most importantly, the combinational use of curcumin and candesartan, an angiotensin II receptor blocker used for the treatment of hypertension, showed better antioxidative, antiapoptotic, and anti-inflammatory effects than curcumin or candesartan alone [89].
In addition, tetrahydrocurcumin, a metabolite of curcumin, could also alleviate brain edema, reduce neuronal apoptosis, and improve neurobehavioral function via the Nrf2 signaling pathway in weight drop-induced TBI mice [90]. Taken together, curcumin and its metabolites are useful for the treatment of TBI. ## 5.3. Formononetin Formononetin, an O-methylated isoflavone phytoestrogen, is commonly found in plants such as red clover [91]. Accumulating studies show that formononetin has various biological activities, including the improvement of blood microcirculation as well as anticancer and antioxidative effects [92]. In addition, formononetin exhibits neuroprotection in AD, PD, spinal cord injury, and TBI [93, 94]. It has been reported that the administration of formononetin could decrease the neurological score and cerebral water content of TBI rats [91]. In addition, HE staining images showed that formononetin attenuated edema and necrosis in the lesioned zones of the brain and increased the number of neural cells. At the same time, oxidative stress was significantly reversed by formononetin, as indicated by the increased enzymatic activity of SOD and GPx and the decreased MDA content. The inflammatory cytokines, including TNF-α and IL-6, as well as the mRNA level of cyclooxygenase-2 (COX-2), were also reduced by formononetin. The mechanistic study revealed that formononetin increased the protein expression of Nrf2 [95]. Furthermore, the same research team found that microRNA-155 (miR-155) is involved in the neuroprotection of formononetin in TBI. Pretreatment with formononetin significantly increased the expression of miR-155 and HO-1, which was accompanied by the downregulation of BACH1 [91]. All evidence suggests that formononetin provides neuroprotection in TBI via the Nrf2/HO-1 signaling pathway. ## 5.4.
Baicalin

Baicalin, also known as 7-D-glucuronic acid-5,6-dihydroxyflavone, is a major flavone found in the radix of Scutellaria baicalensis [96]. Emerging evidence indicates that baicalin can cross the BBB and exert neuroprotective effects in various CNS-related diseases, including AD, cerebral ischemia, spinal cord injury, and TBI [97]. In addition, baicalin was reported to activate the Nrf2 signaling pathway and attenuate subarachnoid hemorrhagic brain injury [98]. In weight drop-induced TBI mice, baicalin significantly reduced the neurological severity score (NSS) and the brain water content, and inhibited neuronal apoptosis, as evidenced by the decreased number of terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive neurons, the reduced Bax/Bcl-2 ratio, and the reduced cleavage of caspase 3. Meanwhile, baicalin attenuated oxidative damage by decreasing MDA levels and increasing GPx and SOD activity and expression. The mechanistic study found that baicalin increased the expression of Nrf2, promoted the nuclear translocation of Nrf2, and upregulated the mRNA and protein expression of HO-1 and NQO1, while treatment with LY294002, a PI3K inhibitor, reversed the effects of baicalin on antiapoptosis, antioxidation, and activation of the Nrf2 signaling pathway, suggesting that baicalin exerts neuroprotective effects via the Akt/Nrf2 pathway in TBI [96]. Autophagy is known to play a protective role in neurodegenerative diseases. Furthermore, the same research team found that baicalin induced autophagy, alleviated BBB disruption, and inhibited neuronal apoptosis in mice after TBI, while co-treatment with 3-MA, an autophagy inhibitor, partly abolished the neuroprotective effect of baicalin. Therefore, baicalin provides a beneficial effect via the Nrf2-regulated antioxidative pathway and autophagy induction. ## 5.5. Catechin Catechin is a flavan-3-ol and belongs to a class of natural polyphenols [99]. It is a plant secondary metabolite and a potent antioxidant [100].
Structurally, it has four diastereoisomers: two trans isomers, (+)-catechin and (−)-catechin, and two cis isomers, (+)-epicatechin and (−)-epicatechin [101]. They are commonly found in foods and fruits, such as cocoa, tea, and grapes. The pharmacological activity of catechin mainly involves antioxidative, anti-inflammatory, antifungal, antidiabetic, antibacterial, and antitumor effects [102]. In addition, catechin also exhibits neuroprotective effects in CCI-induced TBI rats by inhibiting the disruption of the BBB and excessive inflammatory responses [103]. The expression of junction proteins associated with BBB integrity, including occludin and zonula occludens protein-1 (ZO-1), was increased, while the levels of proinflammatory cytokines, including IL-1β, iNOS, and IL-6, were decreased by catechin. At the same time, catechin significantly alleviated brain damage, as revealed by the decrease in brain water content and brain infarction volume, and improved motor and cognitive deficits [103]. In addition, catechin inhibited cell apoptosis and induced neurotrophic factors in rats after TBI [104]. In CCI-induced TBI mice, the administration of epicatechin significantly attenuated neutrophil infiltration and oxidative damage. Specifically, epicatechin could reduce lesion volume, edema, and cell death, as well as improve neurological function, cognitive performance, and depression-like behaviors. In addition, epicatechin decreased white matter injury, HO-1 expression, and the deposition of ferric iron. The mechanistic study found that epicatechin decreased Keap1 expression while increasing the nuclear translocation of Nrf2. Meanwhile, epicatechin reduced the activity of matrix metallopeptidase 9 (MMP9) and increased the expression of SOD1 and NQO1 [102]. Therefore, epicatechin exerts neuroprotective effects in TBI mice via modulating the Nrf2-regulated oxidative stress response and inhibiting iron deposition. ## 5.6.
Fisetin

Fisetin, also known as 3,3′,4′,7-tetrahydroxyflavone, is a flavonol compound that was first extracted from Cotinus coggygria by Jacob Schmid in 1886 [105], and its structure was elucidated by Joseph Hergig in 1891. In addition, fisetin is also found in many vegetables and fruits, such as onions, cucumbers, persimmon, strawberries, and apples [106]. Emerging evidence indicates that fisetin, acting as a potent antioxidant, possesses multiple biological activities, including anti-inflammatory, antiviral, anticarcinogenic, and other effects [107]. Fisetin also presents neuroprotective effects via the suppression of oxidative stress in AD, PD, etc. [108]. In addition, fisetin also showed protective effects in weight drop-induced TBI mice, as shown by the decreased NSS, brain water content, Evans blue (EB) extravasation, and lesion volume of brain tissue, as well as the increased grip test score. Meanwhile, the MDA level was decreased and GPx activity was increased by fisetin, suggesting that fisetin provides a neuroprotective effect via suppressing TBI-induced oxidative stress [109]. In addition, the neuronal cell outline and structure stained by Nissl solution showed that fisetin improved neuronal viability, while neuronal apoptosis was inhibited by fisetin, as demonstrated by the decreased TUNEL signals, the reduced Bax/Bcl-2 ratio, and the reduced protein expression of cleaved caspase-3. The mechanistic study demonstrated that fisetin promoted Nrf2 nuclear translocation and increased the expression of HO-1 and NQO1, while the KO of Nrf2 abrogated the neuroprotective effects of fisetin, including antioxidation and antiapoptosis [109]. Moreover, fisetin was reported to exert anti-inflammatory effects in TBI mice via the TLR4/NF-κB pathway, and the levels of TNF-α, IL-1β, and IL-6 were significantly decreased. Meanwhile, the BBB disruption of TBI mice was attenuated by fisetin [110].
Therefore, fisetin exerts neuroprotective effects in TBI via the Nrf2-regulated oxidative stress response and the NF-κB-mediated inflammatory signaling pathway. ## 5.7. Luteolin Luteolin, a flavonoid, is abundant in fruits and vegetables such as carrots, green tea, and celery [111]. Emerging evidence indicates that luteolin has a wide variety of biological activities, including antioxidative and anti-inflammatory effects [112, 113]. In addition, several studies have demonstrated the neuroprotective effect of luteolin in multiple in vivo and in vitro models [114, 115]. For example, luteolin could recover motor performance and reduce post-traumatic cerebral edema in weight drop-induced TBI mice. Oxidative damage was reduced by luteolin, as demonstrated by the decrease in MDA levels and the increase in GPx activity in the ipsilateral cortex. The mechanistic study found that luteolin promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 [116]. In addition, luteolin significantly improved TBI-induced learning and memory impairment in rats, which was closely associated with the attenuation of oxidative damage, as indicated by the decreased MDA level and the increased SOD and CAT activity [117, 118]. Therefore, the Nrf2-regulated oxidative stress response plays an important role in the action of luteolin against TBI. ## 5.8. Isoliquiritigenin Isoliquiritigenin, a chalcone compound, is often found in plants including Sinofranchetia chinensis, Glycyrrhiza uralensis, and Dalbergia odorifera [119]. Isoliquiritigenin has been reported to attenuate oxidative damage, inhibit the inflammatory response, and suppress tumor growth [120]. In addition, isoliquiritigenin activates the Nrf2 signaling pathway to exert antioxidative and anti-inflammatory effects in multiple cellular and animal models. Isoliquiritigenin also exerts a neuroprotective effect in CCI-induced TBI mice via the Nrf2-ARE signaling pathway [121].
For example, isoliquiritigenin increased the Garcia neurological score and decreased the brain water content, aquaporin 4 (AQP4) expression, and EB leakage. Glial activation, indicated by GFAP expression, was inhibited, and neuronal viability, indicated by neurofilament light (NFL) expression, was increased by isoliquiritigenin. In addition, isoliquiritigenin increased the number of Nissl staining-positive neurons and inhibited neuronal apoptosis, as evidenced by the decreased expression of cleaved caspase-3. Furthermore, oxidative damage was ameliorated by isoliquiritigenin, as shown by the increased GPx activity and SOD levels and the decreased H2O2 concentration and MDA levels. However, the KO of Nrf2 significantly attenuated the neuroprotective effect of isoliquiritigenin in mice after TBI. The mechanistic study demonstrated that isoliquiritigenin increased the nuclear translocation of Nrf2 and the protein and mRNA expression of NQO1 and HO-1. In vitro, isoliquiritigenin also activated the Nrf2-ARE signaling pathway and increased cell viability in oxygen and glucose deprivation (OGD)-induced SH-SY5Y cells. In addition, isoliquiritigenin inhibited shear stress-induced apoptosis in SH-SY5Y cells, as well as suppressed the inflammatory response and inhibited neuronal apoptosis in CCI-induced TBI mice or rats via the PI3K/AKT/GSK-3β/NF-κB signaling pathway [122, 123]. Moreover, isoliquiritigenin protected against BBB damage in mice after TBI via inhibiting the PI3K/AKT/GSK-3β pathway [123]. Therefore, isoliquiritigenin may be a promising agent for the treatment of TBI via the inhibition of oxidative stress, the inflammatory response, and BBB disruption. ## 5.9. Tannic Acid Tannic acid, a natural polyphenol, is commonly found in green and black teas as well as nuts, fruits, and vegetables [124].
Emerging evidence indicates that tannic acid possesses multiple biological activities, such as antioxidative, anti-inflammatory, antiviral, and antiapoptotic effects [125–127]. In addition, tannic acid exhibits neuroprotective effects, as shown by the improvement of behavioral deficits and the inhibition of neurodegeneration [128]. Recently, tannic acid has been proven to ameliorate the oxidative damage and behavioral impairments of mice after TBI [128]. For example, tannic acid significantly increased the grip test score and the motor coordination time, as well as decreased the stay time in the balance test. In addition, tannic acid inhibited neuronal damage and reduced the brain water content of TBI mice. A further study found that tannic acid could attenuate oxidative stress, as evidenced by increased glutathione (GSH) levels, 1-chloro-2,4-dinitrobenzene (CDNB) conjugation, NADPH oxidation, and H2O2 consumption. In addition, apoptosis-related proteins, including cleaved caspase-3 and poly (ADP-ribose) polymerase (PARP), as well as the Bax/Bcl-2 ratio, were significantly inhibited by tannic acid. Meanwhile, the inflammatory response, indicated by the increased levels of TNF-α and IL-1β and GFAP immunofluorescence intensity, was also suppressed. The mechanistic study demonstrated that tannic acid increased the protein expression of Nrf2, PGC-1α, Tfam, and HO-1. Therefore, tannic acid exerts a neuroprotective effect in TBI via activating the PGC-1α/Nrf2/HO-1 signaling pathway. ## 5.10. Ellagic Acid Ellagic acid, a natural polyphenol, is commonly found in various berries such as blueberries, strawberries, and blackberries, as well as in walnuts and other nuts [129]. Several studies show that ellagic acid exerts multiple biological activities, including anti-inflammatory, antioxidative, antifibrotic, antidepressant, and neuroprotective effects [130].
In addition, ellagic acid also exhibits protective effects in various brain injuries, such as neonatal hypoxic brain injury, cerebral ischemia/reperfusion injury, carbon tetrachloride (CCl4)-induced brain damage, and TBI [131–133]. Here, we summarize the neuroprotective effect of ellagic acid in TBI and its mechanism of action. In experimental diffuse TBI rats, treatment with ellagic acid significantly ameliorated deficits in memory and hippocampal electrophysiology [134]. Meanwhile, the inflammatory responses, indicated by elevated TNF-α, IL-1β, and IL-6 levels, were reduced by ellagic acid [134, 135]. In addition, ellagic acid could also decrease the BBB permeability of mice after TBI [135]. In CCl4-induced brain injury rats, ellagic acid decreased MDA levels and increased GSH content and CAT activity. The mechanistic study demonstrated that ellagic acid inhibited the protein expression of NF-κB and COX-2 while increasing the protein expression of Nrf2 [133]. Therefore, ellagic acid exerts an antioxidative effect via activating the Nrf2 pathway and exhibits anti-inflammatory effects via inhibiting the NF-κB pathway in TBI. ## 5.11. Breviscapine Breviscapine is an aglycone flavonoid isolated from Erigeron plants [136]. Modern pharmacological studies indicate that breviscapine can dilate blood vessels to improve microcirculation, suggesting its potential therapeutic role in cardiovascular and CNS-related diseases [137]. In addition, breviscapine, acting as a scavenger of oxygen free radicals, has been demonstrated to improve ATPase and SOD activity. Recently, breviscapine has also been reported to improve neurobehavior and decrease neuronal apoptosis in TBI mice, which is closely associated with the translocation of Nrf2 from the cytoplasm into the nucleus and the subsequent upregulation of Nrf2 downstream factors such as HO-1 and NQO1 [138].
In addition, the inhibition of glycogen synthase kinase-3β (GSK-3β) and IL-6 by breviscapine is associated with its neuroprotective effect in TBI [139, 140]. Therefore, breviscapine exerts neuroprotective effects in TBI via antioxidative, antiapoptotic, and anti-inflammatory actions. ## 5.12. Asiatic Acid Asiatic acid, a pentacyclic triterpene, is isolated from natural plants such as Centella asiatica [141]. Studies have shown that asiatic acid exhibits potent anti-inflammatory and antioxidative properties, which contribute to its protective effects in spinal cord injury, ischemic stroke, cardiac hypertrophy, liver injury, and lung injury through multiple mechanisms [142]. For example, the administration of asiatic acid could increase the Basso, Beattie, and Bresnahan scores and the plane test score in spinal cord injury (SCI) rats. Meanwhile, asiatic acid inhibited the inflammatory response by reducing the levels of IL-1β, IL-18, IL-6, and TNF-α, and counteracted oxidative stress by decreasing ROS, H2O2, and MDA levels while increasing SOD activity and glutathione production. The underlying mechanisms include the activation of Nrf2/HO-1 and the inhibition of the NLRP3 inflammasome pathway. In addition, asiatic acid could alleviate tert-butyl hydroperoxide (tBHP)-induced oxidative stress in HepG2 cells. The researchers found that asiatic acid significantly inhibited tBHP-induced cytotoxicity, apoptosis, and the generation of ROS, which was attributed to the activation of the Keap1/Nrf2/ARE signaling pathway and the upregulation of downstream genes including HO-1, NQO1, and GCLC [143]. In a CCI-induced TBI model, the administration of asiatic acid significantly improved neurological deficits and reduced brain edema. Meanwhile, asiatic acid counteracted oxidative damage, as evidenced by the reduced levels of MDA, 4-HNE, and 8-hydroxy-2′-deoxyguanosine (8-OHdG).
The mechanistic study further found that asiatic acid could increase the mRNA and protein expression of Nrf2 and HO-1 [144]. Taken together, asiatic acid improves neurological deficits in TBI via activating the Nrf2/HO-1 signaling pathway. ## 5.13. Aucubin Aucubin, an iridoid glycoside isolated from natural plants such as Eucommia ulmoides [145], is reported to have several pharmacological effects, including antioxidative, antifibrotic, antiageing, and anti-inflammatory activities [145–147]. Recently, emerging evidence indicates that aucubin exerts neuroprotective effects via antioxidation and anti-inflammation [148]. In addition, aucubin also inhibited lipid accumulation and attenuated oxidative stress via activating the Nrf2/HO-1 and AMP-activated protein kinase (AMPK) signaling pathways [147]. Moreover, aucubin inhibited lipopolysaccharide (LPS)-induced acute pulmonary injury through the regulation of the Nrf2 and AMPK pathways [149]. In H2O2-induced primary cortical neurons and a weight drop-induced TBI mouse model, aucubin was found to significantly decrease the excessive generation of ROS and inhibit neuronal apoptosis. In addition, aucubin could reduce brain edema, improve cognitive function, decrease neuronal apoptosis and loss of neurons, attenuate oxidative stress, and suppress the inflammatory response in the cortex of TBI mice. The mechanistic study demonstrated that aucubin activated the Nrf2-ARE signaling pathway and upregulated the expression of HO-1 and NQO1, while the neuroprotective effect of aucubin was abolished in Nrf2-KO mice after TBI [150]. Therefore, aucubin provides a protective effect in TBI via activating the Nrf2 signaling pathway. ## 5.14. Ursolic Acid Ursolic acid, a pentacyclic triterpenoid compound, is widely found in various fruits and plants such as apples, bilberries, lavender, and hawthorn [151].
Ursolic acid has been reported to possess multiple pharmacological effects, including anti-inflammatory, antioxidative, antifungal, antibacterial, and neuroprotective properties [152]. In addition, ursolic acid activates the Nrf2/ARE signaling pathway to exert a protective effect in cerebral ischemia, liver fibrosis, and TBI [153]. In weight drop-induced TBI mice, the administration of ursolic acid could improve neurobehavioral functions and reduce cerebral edema. In addition, ursolic acid inhibited neuronal apoptosis, as shown by Nissl staining images and TUNEL staining. Meanwhile, ursolic acid ameliorated oxidative stress by increasing SOD and GPx activity as well as decreasing MDA levels. The mechanistic study demonstrated that ursolic acid promoted the nuclear translocation of Nrf2 and increased the levels of downstream genes including HO-1 and NQO1, while the KO of Nrf2 could partly abolish the protective effect of ursolic acid in TBI [153]. Therefore, ursolic acid exerts a neuroprotective effect in TBI partly via activating the Nrf2 signaling pathway. ## 5.15. Carnosic Acid Carnosic acid, a natural benzenediol abietane diterpene, is found in Rosmarinus officinalis and Salvia officinalis [154]. Carnosic acid and carnosol are two major antioxidants in Rosmarinus officinalis [155]. Emerging evidence indicates that carnosic acid is a potent activator of Nrf2 and exerts a neuroprotective effect in various neurodegenerative diseases [156]. In CCI-induced acute post-TBI mice, carnosic acid could reduce TBI-induced oxidative damage by decreasing the levels of 4-HNE and 3-NE in brain tissues. A further study demonstrated that carnosic acid maintained mitochondrial respiratory function and attenuated oxidative damage by reducing the amount of 4-HNE bound to cortical mitochondria [157, 158].
In addition, carnosic acid showed a potent neuroprotective effect in repetitive mild TBI (rmTBI), as evidenced by the significant improvement of motor and cognitive performance. Meanwhile, the expression of GFAP and Iba1 was inhibited, suggesting that carnosic acid inhibited neuroinflammation in TBI [159]. Therefore, carnosic acid exerts a neuroprotective effect via inhibiting mitochondrial oxidative damage in TBI through the Nrf2-ARE signaling pathway. ## 5.16. Fucoxanthin Fucoxanthin, a carotenoid isolated from natural sources such as seaweeds and microalgae, is considered a potent antioxidant [160]. Several studies show that fucoxanthin exerts various pharmacological activities, such as antioxidative, anti-inflammatory, anticancer, and health-protective effects [161]. In addition, fucoxanthin exerts anti-inflammatory effects in LPS-induced BV-2 microglial cells via activating the Nrf2/HO-1 signaling pathway [162] and inhibits the overactivation of the NLRP3 inflammasome via the NF-κB signaling pathway in bone marrow-derived immune cells and astrocytes [163]. In mouse hepatic BNL CL.2 cells, fucoxanthin was reported to upregulate the mRNA and protein expression of HO-1 and NQO1 via increasing the phosphorylation of ERK and p38 and activating the Nrf2/ARE pathway, which contributes to its antioxidant activity [164]. Recently, it has been reported that the neuroprotective effect of fucoxanthin in TBI mice is regulated via the Nrf2-ARE and Nrf2-autophagy pathways [165]. In this study, the researchers found that fucoxanthin alleviated neurological deficits, cerebral edema, brain lesions, and neuronal apoptosis in TBI mice. In addition, fucoxanthin significantly decreased the generation of MDA and increased the activity of GPx, suggesting its antioxidative effect in TBI. Furthermore, in vitro experiments revealed that fucoxanthin could improve neuronal survival and reduce the production of ROS in primary cultured neurons.
A further mechanistic study revealed that fucoxanthin activated the Nrf2-ARE pathway and autophagy in vivo and in vitro, while fucoxanthin failed to activate autophagy and exert a neuroprotective effect in Nrf2−/− mice after TBI. Therefore, fucoxanthin activates the Nrf2 signaling pathway and induces autophagy to exert a neuroprotective effect in TBI. ## 5.17. β-Carotene β-Carotene, abundant in fungi, plants, and fruits, is a member of the carotenes and belongs to the terpenoids [166]. Accumulating studies indicate that β-carotene, acting as an antioxidant, has potential therapeutic effects in various diseases, such as cardiovascular disease, cancer, and neurodegenerative diseases [167, 168]. Meanwhile, the neuroprotective effect of β-carotene was also reported in CCI-induced TBI mice; the administration of β-carotene significantly improved neurological function and reduced brain edema, as evidenced by the decreased neurological deficit score and brain water content and the increased wire hanging time of mice after TBI. In addition, β-carotene could maintain BBB integrity, as indicated by reduced EB extravasation, and ameliorate oxidative stress, as shown by the increased SOD levels and decreased MDA levels. The mechanistic study demonstrated that β-carotene activated the Keap1-Nrf2 signaling pathway and promoted the expression of HO-1 and NQO1 [169]. Therefore, β-carotene provides a neuroprotective effect in TBI via inhibiting oxidative stress through the Nrf2 pathway. ## 5.18. Astaxanthin Astaxanthin is a carotenoid commonly found in certain plants and animals, such as salmon, rainbow trout, shrimp, and lobster [170]. Emerging evidence indicates that astaxanthin exhibits multiple biological activities, including antiageing, anticancer, heart-protective, and neuroprotective effects [171]. Recently, astaxanthin was reported to provide neuroprotection in CCI-induced TBI mice, as shown by the decreased NSS score and immobility time and the increased rotarod time and latency to immobility.
In addition, astaxanthin increased SOD1 levels and inhibited the protein expression of cleaved caspase 3 and the number of TUNEL-positive cells, suggesting that astaxanthin exerted antioxidative and antiapoptotic effects. The mechanistic study demonstrated that astaxanthin increased the protein and mRNA expression of Nrf2, HO-1, NQO1, and SOD1 [172]. Moreover, in weight drop-induced TBI mice, astaxanthin significantly reduced brain edema and improved behavioral functions, including neurological scores, rotarod performance, beam walking performance, and falling latency during the hanging test. In addition, astaxanthin improved neuronal survival, as indicated by Nissl staining. Furthermore, astaxanthin exerted an antioxidative effect by increasing SOD1 protein expression and inhibited neuronal apoptosis by reducing the level of cleaved caspase 3 and the number of TUNEL-positive cells. The mechanistic study revealed that astaxanthin promoted the activation of the Nrf2 signaling pathway, as demonstrated by the increased mRNA levels and protein expression of Nrf2, HO-1, and NQO1, while the inhibition of Prx2 or SIRT1 reversed the antioxidative and antiapoptotic effects of astaxanthin. Therefore, astaxanthin activated the SIRT1/Nrf2/Prx2/ASK1 signaling pathway in TBI. Moreover, astaxanthin also provided a neuroprotective effect in H2O2-induced primary cortical neurons by reducing oxidative damage and inhibiting apoptosis via the SIRT1/Nrf2/Prx2/ASK1/p38 signaling pathway [173]. Therefore, astaxanthin exerts neuroprotective effects, including antioxidation and antiapoptosis, via activating the Nrf2 signaling pathway in TBI. ## 5.19. Lutein Lutein, a natural carotenoid, is commonly found in a variety of flowers, vegetables, and fruits, such as Calendula officinalis, spinach, and Brassica oleracea [174].
Accumulating studies demonstrate that lutein is a potent antioxidant and exhibits benefits in various diseases, including ischemia/reperfusion injury, diabetic retinopathy, heart disease, AD, and TBI [175]. In severe TBI rats, the administration of lutein significantly attenuated the impairment of skilled motor function and reversed the increase in contusion volume of TBI rats. In addition, lutein suppressed the inflammatory response by decreasing the levels of TNF-α, IL-1β, IL-6, and monocyte chemoattractant protein-1 (MCP-1). Meanwhile, lutein decreased ROS production and increased SOD and GSH activity, suggesting that lutein attenuated TBI-induced oxidative damage. Moreover, the mechanistic study found that lutein inhibited the protein expression of intercellular adhesion molecule-1 (ICAM-1), COX-2, and NF-κB, while increasing the protein expression of ET-1 and Nrf2. Therefore, the neuroprotective effect of lutein in TBI may be regulated via the NF-κB/ICAM-1/Nrf2 signaling pathway [176]. It is known that zeaxanthin and lutein are isomers with identical chemical formulas. Recently, it was reported that lutein/zeaxanthin exerted a neuroprotective effect in TBI mice induced by a liquid nitrogen-cooled copper probe, and brain infarct and brain swelling were remarkably reduced by lutein/zeaxanthin. The protein expression of growth-associated protein 43 (GAP43), ICAM, neural cell adhesion molecule (NCAM), brain-derived neurotrophic factor (BDNF), and Nrf2 was increased, while the protein expression of GFAP, IL-1β, IL-6, and NF-κB was inhibited by lutein/zeaxanthin [177]. Therefore, lutein/zeaxanthin presents antioxidative and anti-inflammatory effects via the Nrf2 and NF-κB signaling pathways. ## 5.20. Sodium Aescinate Sodium aescinate (SA) is a mixture of triterpene saponins isolated from the seeds of Aesculus chinensis Bunge and chestnut [178]. Accumulating studies show that SA exerts anti-inflammatory, anticancer, and antioxidative effects [179–181].
In addition, SA has been reported to exhibit neuroprotective effects in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD mice and mutant huntingtin (mHTT)-overexpressing HT22 cells [182, 183]. A recent study reported that SA could attenuate brain injury in weight drop-induced TBI mice [182]. The intraperitoneal administration of SA significantly decreased the NSS, brain water content, and lesion volume of mice after TBI. A further study found that SA suppressed TBI-induced oxidative stress, as evidenced by the decreased MDA levels and increased GPx activity. Nissl staining images showed that SA increased the viability of neurons, and TUNEL staining showed that SA inhibited neuronal apoptosis. Meanwhile, SA decreased the Bax/Bcl-2 ratio and the cleaved form of caspase-3, while inhibiting the release of cytochrome c from mitochondria into the cytoplasm. The mechanistic study demonstrated that SA promoted the translocation of Nrf2 from the cytoplasm into the nucleus and subsequently increased the expression of HO-1 and NQO1. Moreover, the neuroprotective effect and mechanism of action of SA have been confirmed in scratch injury-induced TBI primary neurons and in Nrf2-KO mice after TBI. Therefore, SA exerts a neuroprotective effect in TBI via activating the Nrf2 signaling pathway. ## 5.21. Melatonin Melatonin, commonly found in plants, animals, fungi, and bacteria, plays an important role in the regulation of the biological clock [184]. Melatonin as a dietary supplement is widely used to treat insomnia. Emerging evidence indicates that melatonin exerts neuroprotection in various diseases, including brain injury, spinal cord injury, and cerebral ischemia [185]. In addition, melatonin has been demonstrated to be a potent antioxidant with the ability to reduce oxidative stress, inhibit the inflammatory response, and attenuate neuronal apoptosis [186].
In craniocerebral trauma, melatonin showed a neuroprotective effect due to its antioxidative and anti-inflammatory actions and its inhibitory effects on the activation of adhesion molecules [187]. In Marmarou’s weight drop-induced TBI mice, melatonin significantly inhibited neuronal degeneration and reduced cerebral edema. Meanwhile, melatonin also attenuated the oxidative stress induced by TBI, as evidenced by the decreased MDA levels and 3-NE expression, as well as the increased GPx and SOD levels. The mechanistic study demonstrated that melatonin increased the nuclear translocation of Nrf2 and promoted the protein expression and mRNA levels of HO-1 and NQO1, while the KO of Nrf2 could partly reverse the neuroprotective effects of melatonin, including antioxidation, inhibition of neuronal degeneration, and alleviation of cerebral edema, in mice after TBI. Therefore, melatonin provides a neuroprotective effect in TBI via the Nrf2-ARE signaling pathway [188]. Due to the complex pathophysiology of TBI, the combinational use of melatonin and minocycline, a bacteriostatic agent reported to inhibit neuroinflammation, did not exhibit a better neuroprotective effect than either agent alone. Dosing and/or administration issues may account for this result [189]. Therefore, the optimal combination should be explored for the treatment of TBI. ## 5.22. Sinomenine Sinomenine is an alkaloid compound isolated from the roots of climbing plants including Sinomenium acutum (Thunb.) Rehd. et Wils. and Sinomenium acutum var. cinereum Rehd. et Wils [190]. Sinomenine has been demonstrated to exhibit antihypertensive and anti-inflammatory effects and is commonly used to treat various forms of acute and chronic arthritis, rheumatism, and rheumatoid arthritis (RA). In addition, sinomenine provides a neuroprotective effect in Marmarou’s weight drop-induced TBI mice. The administration of sinomenine significantly increased the grip test score and decreased brain water content.
In addition, neuronal viability was increased by sinomenine, as shown by increased NeuN-positive neurons and decreased TUNEL-positive neurons. Meanwhile, sinomenine increased Bcl-2 protein expression and decreased cleaved caspase-3 expression. Furthermore, sinomenine attenuated oxidative stress by decreasing MDA levels and increasing SOD and GPx activity. The mechanistic study revealed that sinomenine promoted the nuclear translocation of Nrf2 and increased the mRNA and protein expression of HO-1 and NQO1 in mice after TBI [191]. Therefore, sinomenine, acting as a potent anti-inflammatory agent, provides antiapoptotic and antioxidative effects in TBI via the Nrf2-ARE signaling pathway. ## 5.23. Sulforaphane Sulforaphane, an isothiocyanate, is commonly found in certain vegetables, including cabbage, broccoli, and cauliflower [192]. Emerging evidence indicates that sulforaphane has been studied in the treatment of prostate cancer, autism, asthma, and many other diseases [193–195]. In addition, sulforaphane has also shown a neuroprotective effect in TBI. For example, sulforaphane decreased BBB permeability in CCI-induced TBI rats, as evidenced by decreased EB extravasation and relative fluorescence intensity of fluorescein [196]. Meanwhile, the loss of tight junction proteins (TJs), including occludin and claudin-5, was attenuated by sulforaphane. The mechanistic study found that sulforaphane increased the mRNA levels of Nrf2-driven genes, including GST-alpha3 (GSTα3), GPx, and HO-1, and enhanced the enzymatic activity of NQO1 in the brain and brain microvessels of TBI mice, suggesting that sulforaphane activates the Nrf2-ARE signaling pathway to protect BBB integrity. Furthermore, sulforaphane reduced brain edema, as evidenced by a decrease in brain water content, which was closely associated with attenuation of both the AQP4 loss in the injury core and the further increase of the AQP4 level in the penumbra region [197].
Moreover, the Morris water maze (MWM) test showed that sulforaphane improved spatial memory and spatial working memory. Meanwhile, TBI-induced oxidative damage was significantly attenuated by sulforaphane, as demonstrated by reduced 4-HNE levels [198]. In addition, sulforaphane also attenuated 4-HNE-induced dysfunction in isolated cortical mitochondria [158]. Taken together, sulforaphane provides a neuroprotective effect in TBI via activation of the Nrf2-ARE signaling pathway. ## 6. Conclusions and Perspective TBI causes irreversible primary mechanical damage, followed by secondary injury. Studies have shown that multiple mechanisms contribute to the development of TBI during secondary injury, mainly including the inflammatory response, oxidative stress, mitochondrial dysfunction, and BBB disruption. Among them, oxidative stress leads to mitochondrial dysfunction, BBB disruption, and neuroinflammation; it therefore plays a central role in the pathogenesis of TBI. Nrf2 is a conserved bZIP transcription factor, and activation of the Nrf2 signaling pathway protects against oxidative damage. Under stress conditions or upon treatment with Nrf2 activators, Nrf2 translocates from the cytoplasm into the nucleus, where it protects against oxidative damage via the ARE-mediated transcriptional activation of genes including HO-1, NQO1, and GCLC, thereby inhibiting mitochondrial dysfunction, apoptosis, inflammation, and oxidative damage. Therefore, targeting the activation of the Nrf2 signaling pathway is a promising therapeutic strategy for TBI. To date, an increasing number of Nrf2 activators have been reported to exert neuroprotective effects in various neurodegenerative diseases, cerebral ischemia, cerebral hemorrhage, and TBI [199, 200]. Phytochemicals are abundant in, and isolated from, fruits, vegetables, grains, and other medicinal herbs. In this review, polyphenols, terpenoids, natural pigments, and other phytochemicals were summarized.
They exhibit potent neuroprotective effects, including improvement of BBB integrity, recovery of neuronal viability, and inhibition of microglial overactivation, via the Nrf2-mediated oxidative stress response (Figure 4).

Figure 4. The potential therapy of phytochemicals for TBI. Oxidative damage plays an important role in the pathology of TBI, including BBB disruption followed by neuronal death and microglial overactivation, whereas treatment with phytochemicals with antioxidative properties can improve BBB integrity and thereby recover neuronal viability and inhibit microglial overactivation.

Although a large number of studies have demonstrated the neuroprotective effect of most of these phytochemicals in in vivo and in vitro models of TBI, effective clinical evidence is still lacking. In addition, little is known about the safety and pharmacokinetics of these phytochemicals. Therefore, further studies are needed to accelerate the translation of phytochemicals into the clinic. In the later period of TBI recovery, the selective permeability of the BBB also gradually recovers; at this stage, the BBB becomes a major obstacle that greatly limits the neuroprotective effect of drugs, and nanomaterial-based drug delivery, which effectively improves the BBB permeability of drugs, brings new hope for these phytochemicals. In addition, the combinational use of phytochemicals targeting multiple targets such as Nrf2, NF-κB, and NADPH oxidase-2 (NOX-2), together with gene and stem cell therapy, will be a promising strategy for the treatment of TBI. --- *Source: 1015791-2022-04-04.xml*
# Severity of Lower Urinary Tract Symptoms among Middle Aged and Elderly Nigerian Men: Impact on Quality of Life **Authors:** Patrick Temi Adegun; Philip Babatunde Adebayo; Peter Olufemi Areo **Journal:** Advances in Urology (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1015796 --- ## Abstract Objectives. To compare the severity of LUTS among middle aged and elderly Nigerian men and to determine the influence of LUTS severity on QoL. Methods. This cross-sectional study was conducted among new patients presenting with LUTS attending the Urology clinic between 2011 and 2015. Assessment of symptoms was based on the IPSS and bother score completed by the eligible subjects on the day of their clinic visit. Results. Four hundred patients were studied, comprising 229 middle aged and 171 elderly men. The median (IQR) IPSS scores for men <65 years and those ≥65 years were 14.0 (16.0) and 19.0 (15.0), respectively (p < 0.001). Mild LUTS was significantly associated with best, good, and poor quality of life, while moderate LUTS was associated with poor QoL. Severe LUTS was significantly associated with all categories of QoL (best to worst). Among the cohort of subjects with poor QoL, elderly patients had a significantly higher median IPSS score (p < 0.05). Conclusions. There is no level of severity of LUTS at which patients’ QoL is not impaired, although mild symptomatology may be associated with better QoL and severe symptomatology with poor QoL. Careful attention to QoL may help identify patients who require early and prompt treatment irrespective of the IPSS. --- ## Body ## 1. Introduction The Olmsted County study, one of the largest longitudinal studies conducted in America, investigated age as one of many sociodemographic characteristics that may predict the incidence of LUTS/BPH. Age was also reported to be one of the most reliable risk factors for the progression of LUTS/BPH.
Its influence is greater than those of other sociodemographic characteristics [1]. Aging has been found to be associated with the development of various, sometimes distressing, symptoms in different organ systems of the body, including the genitourinary tract [2]. Besides, the European and Korean EPIC studies have estimated that about two-thirds of middle aged men have LUTS, whereas other researchers have reported that the conditions leading to LUTS are among the most prevalent diseases of the elderly, with serious impairment of quality of life (QoL) [1, 3–15]. As men age, there is an increasing prevalence of Lower Urinary Tract Symptoms (LUTS), but no clear difference has been shown in the impact of LUTS on QoL between middle aged and elderly men [16]. Therefore, there is a need to differentiate the effects of the severity of LUTS on the quality of life of middle aged and elderly men. Comparison of the impact of LUTS in these men is important for improving the management of LUTS, since some of these men are still of working age and the breadwinners for their families. The objective of this study was to compare the severity of LUTS among middle aged and elderly Nigerian men attending the Urology clinic at Ekiti State University Teaching Hospital, South Western Nigeria. We also aimed to determine the influence of LUTS severity on the QoL of these men. ## 2. Methods ### 2.1. Settings This was a comparative cross-sectional study carried out on all new patients who presented to the Urology clinic of Ekiti State University Teaching Hospital, Ado-Ekiti, South Western Nigeria, from July 1, 2011, to June 30, 2015. ### 2.2. Selection Criteria and Data Collection The inclusion criteria were male sex; age 40 years and above; voluntary participation; and understanding and signing the consent form.
The exclusion criteria were previous open prostatectomy; acute diseases such as sepsis syndrome, cardiovascular events, and trauma; surgeries or hospitalizations during the preceding month; and uncompensated chronic diseases. A questionnaire containing sociodemographics and concurrent medical conditions, including history of alcohol ingestion, and the 8-item International Prostate Symptom Score (IPSS) (the English version was used because the official language of Nigeria is English) was completed by the eligible subjects on the day of their clinic visit. The IPSS has been a reliable and widely used instrument since 1991; it is based on the answers to seven questions concerning urinary symptoms and one question concerning quality of life [17]. Each question concerning urinary symptoms allows the patient to choose one of six answers indicating increasing severity of the particular symptom. The answers are assigned points from 0 to 5, so the total score ranges from 0 to 35 (asymptomatic to very symptomatic). The questions refer to the following urinary symptoms: (i) incomplete emptying, (ii) frequency, (iii) intermittency, (iv) urgency, (v) weak stream, (vi) straining, and (vii) nocturia. Question 8 measures the patient’s perceived quality of life (QoL) and comprises seven answers scored from 0 to 6. The quality of life, or level of satisfaction with LUTS, was represented by seven grades: “no problem” (0 points = very satisfied), “I’m all right” (1 point), “somewhat satisfied” (2 points), “half satisfied, half dissatisfied” (3 points), “somewhat dissatisfied” (4 points), “distressed” (5 points), and “I can’t stand it” (6 points = very dissatisfied). Self-administration of the questionnaire was preferred, but face-to-face interviews were conducted whenever participants presented with visual deficits, illiteracy, or semi-illiteracy that would preclude proper completion of the questionnaire.
Thirteen (13) participants had their questionnaires interviewer-administered. Trained medical staff conducted the interviews in private rooms, and each lasted an average of 30 minutes. ### 2.3. Ethical Issues The study was conducted in accordance with the Declaration of Helsinki (as revised in Edinburgh, 2000). Written informed consent was obtained from all participants before participation in the study. Ethical approval was obtained from the Ethical Research Committee of the Ekiti State University Teaching Hospital, Ado-Ekiti, Nigeria. ### 2.4. Statistical Analysis The subjects’ demographic and clinical variables were summarized and presented as frequencies and percentages for categorical variables, while numerical data were summarized as means and standard deviations when normally distributed and as medians with interquartile ranges (IQR) when skewed. IPSS was categorized into mild (0–7), moderate (8–19), and severe (20–35) symptoms, while quality of life (QoL) defined by the bother score (BS) was categorized as best for BS = 0-1, good for BS = 2-3, poor for BS = 4-5, and worst for BS = 6. The chi-squared test was used to test differences between the middle aged (<65 years) and elderly (≥65 years) men. Skewed continuous variables were summarized as median (interquartile range), and the Mann-Whitney U nonparametric test was used to test differences in median values between middle aged and elderly subjects. The chi-squared test was also used to analyse the relationship between the quality of life categories (bother score) and the severity of LUTS (defined by the IPSS). To identify the significant groups, the chi-squared test was followed by multiple pairwise comparisons with adjustment of the p values. All statistical analyses were performed using SPSS version 20.0 (SPSS, Chicago, Illinois).
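The scoring and categorization rules described above can be sketched in a short Python snippet. This is an illustrative sketch, not code from the study: the function names are ours, and the reading of the post hoc threshold as a Bonferroni correction over the 12 pairwise comparisons (3 severity levels × 4 QoL bands) is an assumption consistent with 0.05/12 ≈ 0.0042.

```python
# Illustrative sketch of the IPSS scoring and categorization rules in the text.
# Function names are ours, not from the paper.

def ipss_total(item_scores):
    """Sum the seven IPSS symptom items, each scored 0-5 (total 0-35)."""
    if len(item_scores) != 7 or any(not 0 <= s <= 5 for s in item_scores):
        raise ValueError("IPSS has 7 items, each scored 0-5")
    return sum(item_scores)

def ipss_severity(total):
    """Categorize total IPSS as in the study: 0-7 mild, 8-19 moderate, 20-35 severe."""
    if not 0 <= total <= 35:
        raise ValueError("total IPSS must be 0-35")
    if total <= 7:
        return "mild"
    if total <= 19:
        return "moderate"
    return "severe"

def qol_category(bother_score):
    """Map the question 8 bother score (0-6) to the study's QoL bands."""
    bands = {0: "best", 1: "best", 2: "good", 3: "good",
             4: "poor", 5: "poor", 6: "worst"}
    return bands[bother_score]

# Assumption: the adjusted threshold of 0.0042 used for the post hoc pairwise
# comparisons is consistent with a Bonferroni correction over 12 comparisons
# (3 severity levels x 4 QoL bands): 0.05 / 12 ~= 0.0042.
ADJUSTED_ALPHA = 0.05 / 12
```

A patient scoring, say, 3 on each of the seven items would have `ipss_total` of 21 and fall in the "severe" band under these cut-offs.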
A p value < 0.05 was considered statistically significant, while an adjusted p value of <0.0042 was considered significant for the post hoc pairwise comparisons. ## 3. Results Table 1 shows that the elderly men were either married or widowers, whereas single and divorced men were found only among the middle aged; this difference was statistically significant. In addition, the majority of the elderly men were retired. While the majority of the middle aged cohort were overweight, normal-weight and obese men were more common among the elderly; this difference was also significant. Among the comorbidities studied, systemic hypertension and diabetes mellitus were more prevalent in the elderly, while alcohol consumption was more prevalent in the middle aged men. Table 2 shows that the severity of LUTS was significantly worse among subjects 65 years and above (p < 0.05). Table 1 Demographic characteristics of the study population.
| Variable | Age < 65 years; n = 229 (%) | Age ≥ 65 years; n = 171 (%) | Test statistic | p value |
|---|---|---|---|---|
| **Marital status** | | | 35.36 | <0.001 |
| Single | 23 (10) | 0 (0) | | |
| Married | 193 (84.3) | 166 (97.1) | | |
| Divorced | 13 (5.7) | 0 (0) | | |
| Widower | 0 (0) | 5 (2.9) | | |
| **Occupation** | | | 77.62 | <0.001 |
| Public servant | 106 (46.3) | 19 (11.1) | | |
| Business | 31 (13.5) | 45 (26.3) | | |
| Retired | 39 (17.0) | 79 (46.2) | | |
| Others | 53 (23.1) | 28 (16.4) | | |
| **BMI** | | | 11.21 | 0.011 |
| Underweight | 0 (0) | 1 (0.6) | | |
| Normal | 67 (29.3) | 71 (41.5) | | |
| Overweight | 126 (55.0) | 67 (39.2) | | |
| Obese | 36 (15.7) | 32 (18.7) | | |
| **Comorbidities** | | | | |
| Hypertension | 120 (52.4) | 115 (67.3) | 8.91 | 0.003 |
| Diabetes mellitus | 18 (7.9) | 20 (11.7) | 1.67 | 0.196 |
| Alcohol | 91 (39.7) | 36 (21.1) | 11.05 | 0.001 |
| **Ultrasound findings** | | | 11.95 | 0.003 |
| Normal size | 52 (22.7) | 2 (1.2) | | |
| Enlarged and benign | 161 (70.3) | 160 (93.6) | | |
| Suspected cancer | 16 (7.0) | 9 (5.3) | | |
| **DRE** | | | 18.50 | <0.001 |
| Normal sized prostate | 35 (15.3) | 9 (5.3) | | |
| Enlarged and benign | 178 (77.7) | 161 (94.2) | | |
| Suspicious lesion | 16 (7.0) | 1 (0.6) | | |
| **Diagnosis** | | | 5.46 | 0.141 |
| BPH | 181 (79.0) | 141 (82.5) | | |
| CaP | 23 (10.0) | 22 (12.9) | | |
| Urethral stricture | 6 (2.6) | 2 (1.2) | | |
| Others, e.g., OAB | 19 (8.3) | 6 (3.5) | | |

CaP = cancer of the prostate; OAB = overactive bladder.

Table 2 IPSS, voiding, and storage subscores by age group.

| Variable | Age < 65 years, median (IQR) | Age ≥ 65 years, median (IQR) | Mann-Whitney U | p value |
|---|---|---|---|---|
| IPSS | 14.0 (16.0) | 19.0 (15.0) | 15435.0 | <0.001 |
| Voiding symptoms | 7.0 (12.0) | 9.0 (10.0) | 16041.0 | 0.002 |
| Storage symptoms | 7.0 (7.0) | 9.0 (5.0) | 15804.5 | 0.001 |

Table 3 shows the quality of life of patients with LUTS in relation to the severity of the symptoms, with the IPSS score categorized. There was a statistically significant relationship between the severity of LUTS and the QoL of the subjects. Table 3 Quality of life of the subjects in relation to the severity of LUTS symptoms.
| IPSS severity | Best QoL, n (%) | Good QoL, n (%) | Poor QoL, n (%) | Worst QoL, n (%) | Chi-squared test | p value |
|---|---|---|---|---|---|---|
| Mildly symptomatic | 26 (68.4) | 42 (55.3) | 12 (4.3) | 0 (0) | 175.75 | <0.001 |
| Moderately symptomatic | 10 (26.3) | 21 (27.6) | 136 (49.3) | 0 (0) | | |
| Severely symptomatic | 2 (5.3) | 13 (17.1) | 128 (46.4) | 10 (100) | | |
| Total | 36 (100) | 76 (100) | 276 (100) | 10 (100) | | |

Table 4 shows the results of post hoc multiple pairwise comparisons of the categories of contingency Table 3 after the chi-squared test rejected the null hypothesis of equality of proportions of subjects across the cells. It shows the pairwise proportions among the multiple cross-classifications that led to the statistical significance observed in Table 3, after adjusting the p value to 0.0042. Mild symptomatology was significantly associated with best, good, and poor quality of life, while moderate symptomatology was associated with poor QoL. Severe symptomatology was significantly associated with all categories of QoL (best to worst). Table 4 Pairwise comparison of IPSS severity and quality of life of patients with LUTS.

| Pair compared | Chi-squared test | p value |
|---|---|---|
| Mildly symptomatic versus best QoL | 60.84 | <0.001* |
| Mildly symptomatic versus good QoL | 72.25 | <0.001* |
| Mildly symptomatic versus poor QoL | 136.89 | <0.001* |
| Mildly symptomatic versus worst QoL | 2.56 | 0.109 |
| Moderately symptomatic versus best QoL | 4.00 | 0.046 |
| Moderately symptomatic versus good QoL | 7.84 | 0.005 |
| Moderately symptomatic versus poor QoL | 21.16 | <0.001* |
| Moderately symptomatic versus worst QoL | 7.29 | 0.007 |
| Severely symptomatic versus best QoL | 19.36 | <0.001* |
| Severely symptomatic versus good QoL | 17.64 | <0.001* |
| Severely symptomatic versus poor QoL | 25.00 | <0.001* |
| Severely symptomatic versus worst QoL | 16.81 | <0.001* |

Adjusted p value = 0.0042; *statistically significant.

Figure 1 shows the IPSS scores among the subjects with different perceived QoL according to the age group.
Among the cohort with best quality of life, the elderly patients had a statistically significantly higher median IPSS score (p < 0.05). No statistically significant difference existed between the median IPSS scores of the cohort who had good QoL (p > 0.05). Among the cohort of subjects with poor QoL, elderly patients had a statistically significantly higher median IPSS score (p < 0.05). Figure 1 Quality of life versus age in years. ## 4. Discussion We sought to compare the severity of symptoms and their impact on the QoL of middle aged and elderly Nigerian men. The results showed that QoL was affected by every category of symptoms (mild to severe) across the age groups. This study demonstrated that even mild symptomatology on the IPSS could be associated with poor quality of life, whereas some severe symptomatology was associated with good QoL (p < 0.05). Therefore, patients’ health-seeking behaviour might have been influenced by their QoL rather than by the severity of the IPSS score. This finding agrees with the report of Finkelstein et al., who found that an individual’s perceived impact on health-related quality of life might be a determinant for patients to seek medical advice [18]. In addition, the study showed that the severity of the IPSS increases with age (p < 0.001), similar to the findings of Engström et al., who reported that the severity of symptoms increases with age [19]. More importantly, among the cohort with best quality of life, the elderly patients had a statistically significantly higher median IPSS score (p < 0.05). This might be due to the high prevalence of hypertension in the elderly, which could increase the IPSS, coupled with the higher prevalence of alcohol consumption in the middle aged, which might lower their IPSS. However, a longitudinal study on the volume of alcohol consumption in this environment is necessary to corroborate this assertion.
This is in line with Suh et al. and Lu and Mo, who reported a similar protective effect of light-to-moderate alcohol consumption on the IPSS [20, 21]. Furthermore, this study showed that elderly men had a higher frequency of poor QoL (p < 0.001). This is similar to the findings of Welch et al., who reported that men with moderate and severe LUTS identified in a large US cohort had poorer health status in several important quality of life dimensions [14]. It is important that these elderly men are adequately assessed for appropriate therapy. Surgical treatments have been strongly favoured for moderately to severely symptomatic patients, with watchful waiting or conservative measures indicated for patients with mild complaints [22–24]. Recognition of significant deterioration of QoL among mild, moderate, and severe LUTS patients is evidence of the need for treatment (as opposed to watchful and conservative approaches) and a justification for early treatment irrespective of the IPSS score. This is in consonance with Finkelstein et al., who noted that the IPSS quality of life question had a larger effect size than all the other measures, suggesting that this single-item measure may have high sensitivity in differentiating subgroups [18]. ## 5. Conclusion From the foregoing, it might be better to use the QoL, rather than the IPSS score alone, as the determinant of the choice of treatment, for prompt treatment of LUTS and improved clinical outcome. ## 6. Limitation of the Study Criticisms have been made regarding the poor standardization of QoL scales and the frequent inappropriate use of the term QoL. The use of a one-item scale to assess general QoL (the IPSS-QoL question, called the “bother score”) and the misinterpretation of QoL as synonymous with symptom control, perceived general health, or functional status are the most frequent reasons for such criticisms. Besides, because this study was hospital-based, it may not be a true random sample of men with LUTS in the general population.
It is difficult to eliminate selection bias. Furthermore, we did not evaluate depressive symptomatology and other psychosocial contributors to QoL perception. However, the measure of QoL in this study is a disease-specific instrument with psychometric properties well suited to measuring QoL as it relates to urinary symptoms. --- *Source: 1015796-2016-06-19.xml*
--- ## Abstract Objectives. To compare the severity of LUTS among middle aged and elderly Nigerian men and determine the influence of LUTS severity on QoL.Methods. This cross-sectional study was conducted among new patients presenting with LUTS attending Urology clinic between 2011 and 2015. Assessment of symptoms was based on IPSS and bother score completed by the eligible subjects on the same day of their clinic visits.Results. Four hundred patients were studied comprising 229 middle aged and 171 elderly men. Interquartile range (IQR) of IPSS scores for men <65 years and those ≥65 years was 14.0 (16.0) and 19 (15.0), respectively (p < 0.001). Mild LUTS was significantly associated with best, good, and poor quality of life while moderate LUTS was associated with poor QoL. Severe LUTS was significantly associated with all the categories of QoL (Best-Worst). Among the cohort of subjects with poor QoL, elderly patients had a significantly higher median IPSS score (p < 0.05).Conclusions. There is no level of severity of LUTS in which patients’ QoL is not impaired although mild symptomatology may be associated with better QoL and severe symptomatology with poor QoL. Careful attention to QoL may help identify patients who require early and prompt treatment irrespective of the IPSS. --- ## Body ## 1. Introduction Olmsted County study, one of the largest longitudinal studies conducted in America, investigated age as one of many sociodemographic characteristics that may predict the incidence of LUTS/BPH. Also, age was reported to be one of the most reliable risk factors for the progression of LUTS/BPH. 
Its influence is greater than those of other sociodemographic characteristics [1].Aging as a process has been discovered to be associated with the development of various, sometimes distressing, symptoms of different organ systems in the body including the genitourinary tract [2].Besides, European and Korean EPIC studies have estimated that about 2/3rd of LUTS is present in the middle aged men whereas other researchers reported that the conditions leading to LUTS are among the most prevalent diseases of the elderly, with serious impairment of quality of life [QoL] [1, 3–15].However, as men age, there is an increasing prevalence of Lower Urinary Tract Symptoms (LUTS) but no clear difference in the impact of LUTS on the QoL between the middle aged and the elderly men [16].Therefore, there is the need to differentiate between the effects of severity of LUTS on the quality of life of middle aged and the elderly men. Comparison of impact of LUTS in these men is important to improve the management of LUTS in men, some of whom are still in the working class and the bread winners for their families.The objective of this study was to compare the severity of LUTS among middle aged and elderly Nigerian men attending the Urology clinic at Ekiti State University Teaching Hospital, South Western Nigeria. We also aimed to determine the influence of LUTS severity on the QoL of these men. ## 2. Methods ### 2.1. Settings This was a comparative cross-sectional study carried out on all new patients who presented to the Urology clinic of Ekiti State University Teaching Hospital, Ado-Ekiti, South Western Nigerian from July 1, 2011, to June 30, 2015. ### 2.2. Selection Criteria and Data Collection The inclusion criteria were male sex; 40 years aged and above; voluntary participation; and understanding and signing the consent form. 
The exclusion criteria were previous open prostatectomy; acute diseases such as sepsis syndrome, cardiovascular events, and trauma; surgeries or hospitalizations during the preceding month; and uncompensated chronic diseases.A questionnaire containing sociodemographics and concurrent medical conditions including history of alcohol ingestion and 8-item International Prostate Symptoms Score (IPSS) (English version was used because official national language in Nigeria is English) was completed by the eligible subjects on the same day of their clinic visits. The IPSS is a reliable and widely used instrument since 1991, and it is based on the answers to seven questions concerning urinary symptoms and one question concerning quality of life [17]. Each question concerning urinary symptoms allows the patient to choose one out of six answers indicating increasing severity of the particular symptom. The answers are assigned points from 0 to 5. The total score ranges from 0 to 35 (asymptomatic to very symptomatic). The questions refer to the following urinary symptoms: (i) incomplete emptying, (ii) frequency, (iii) intermittency, (iv) urgency, (v) weak stream, (vi) straining, and (vii) nocturia. Question  8 measures patient’s perceived quality of life (QoL) which comprises seven answers from 0 to 6. The quality of life or level of satisfaction of LUTS of patients was represented by seven grades: “no problem” (0 point  =  very satisfied), “I’m all right” (1 point), “somewhat satisfied” (2 points), “half satisfied, half dissatisfied” (3 points), “somewhat dissatisfied” (4 points), “distressed” (5 points), and “I can’t stand it” (6 points  = very dissatisfied).Self-administration of questionnaires was preferred, but face-to-face interviews were conducted whenever the participants presented with visual deficits, illiteracy, or semi-illiteracy that would preclude them from proper completion of the questionnaire. 
Thirteen (13) participants had their questionnaires interviewer-administered. Trained medical staff conducted the interviews in private rooms, and each lasted an average of 30 minutes.

### 2.3. Ethical Issues

The study was conducted in accordance with the Declaration of Helsinki (as revised in Edinburgh, 2000). Written informed consent was obtained from all participants before participation in the study. Ethical approval was obtained from the Ethical Research Committee of the Ekiti State University Teaching Hospital, Ado-Ekiti, Nigeria.

### 2.4. Statistical Analysis

The subjects' demographic and clinical variables were summarized and presented as frequencies and percentages for categorical variables, while numerical data were summarized as means and standard deviations when normally distributed and as medians with interquartile ranges (IQR) when skewed. IPSS was categorized into mild (0–7), moderate (8–19), and severe (20–35) symptoms, while quality of life (QoL), defined by the bother score (BS), was categorized as best for BS = 0-1, good for BS = 2-3, poor for BS = 4-5, and worst for BS = 6. The chi-squared test was used to test differences between the middle aged (<65 years) and elderly men (≥65 years). Skewed continuous variables were summarized as medians (interquartile ranges), and the Mann-Whitney U nonparametric test was used to test differences in median values between middle aged and elderly subjects. The chi-squared test was used to analyse the relationship between quality of life categories (bother score) and severity of LUTS (defined by IPSS). To detect the significant group, the chi-squared test was followed by multiple pairwise comparison tests with adjustment of the p values. All statistical analyses were performed using SPSS version 20.0 (SPSS, Chicago, Illinois).
A p value < 0.05 was considered statistically significant, while an adjusted p value of <0.0042 was considered significant for the post hoc pairwise comparisons.

## 3. Results

Table 1 shows that the elderly men were either married or widowers, whereas single and divorced men were found only among the middle aged; this difference was statistically significant. In addition, the majority of the elderly men were retired. While the majority of the middle aged cohort were overweight, normal weight and obese men were more common among the elderly; this difference was also significant. Among the comorbidities studied, systemic hypertension and diabetes mellitus were more prevalent in the elderly, while alcohol consumption was more prevalent in the middle aged men. Table 2 shows that the severity of LUTS was significantly worse among subjects aged 65 years and above (p < 0.05).

Table 1 Demographic characteristics of the study population.
| Variable | Age < 65 years, n = 229 (%) | Age ≥ 65 years, n = 171 (%) | Test statistic | p value |
| --- | --- | --- | --- | --- |
| **Marital status** | | | 35.36 | <0.001 |
| Single | 23 (10) | 0 (0) | | |
| Married | 193 (84.3) | 166 (97.1) | | |
| Divorced | 13 (5.7) | 0 (0) | | |
| Widower | 0 (0) | 5 (2.9) | | |
| **Occupation** | | | 77.62 | <0.001 |
| Public servant | 106 (46.3) | 19 (11.1) | | |
| Business | 31 (13.5) | 45 (26.3) | | |
| Retired | 39 (17.0) | 79 (46.2) | | |
| Others | 53 (23.1) | 28 (16.4) | | |
| **BMI** | | | 11.21 | 0.011 |
| Underweight | 0 (0) | 1 (0.6) | | |
| Normal | 67 (29.3) | 71 (41.5) | | |
| Overweight | 126 (55.0) | 67 (39.2) | | |
| Obese | 36 (15.7) | 32 (18.7) | | |
| **Comorbidities** | | | | |
| Hypertension | 120 (52.4) | 115 (67.3) | 8.91 | 0.003 |
| Diabetes mellitus | 18 (7.9) | 20 (11.7) | 1.67 | 0.196 |
| Alcohol | 91 (39.7) | 36 (21.1) | 11.05 | 0.001 |
| **Ultrasound findings** | | | 11.95 | 0.003 |
| Normal size | 52 (22.7) | 2 (1.2) | | |
| Enlarged and benign | 161 (70.3) | 160 (93.6) | | |
| Suspected cancer | 16 (7.0) | 9 (5.3) | | |
| **DRE** | | | 18.50 | <0.001 |
| Normal sized prostate | 35 (15.3) | 9 (5.3) | | |
| Enlarged and benign | 178 (77.7) | 161 (94.2) | | |
| Suspicious lesion | 16 (7.0) | 1 (0.6) | | |
| **Diagnosis** | | | 5.46 | 0.141 |
| BPH | 181 (79.0) | 141 (82.5) | | |
| CaP | 23 (10.0) | 22 (12.9) | | |
| Urethral stricture | 6 (2.6) | 2 (1.2) | | |
| Others, e.g., OAB | 19 (8.3) | 6 (3.5) | | |

CaP = cancer of the prostate; OAB = overactive bladder.

Table 2 IPSS, voiding, and storage subscores by age group.

| Variable | Age < 65 years, Median (IQR) | Age ≥ 65 years, Median (IQR) | Mann-Whitney U test | p value |
| --- | --- | --- | --- | --- |
| IPSS | 14.0 (16.0) | 19.0 (15.0) | 15435.0 | <0.001 |
| Voiding symptoms | 7.0 (12.0) | 9.0 (10.0) | 16041.0 | 0.002 |
| Storage symptoms | 7.0 (7.0) | 9.0 (5.0) | 15804.5 | 0.001 |

Table 3 shows the quality of life of patients with LUTS in relation to the severity of the symptoms, with the IPSS score categorized. There was a statistically significant relationship between the severity of LUTS and the QoL of the subjects.

Table 3 Quality of life of the subjects in relation to the severity of LUTS symptoms.
| IPSS severity | Best QoL, n (%) | Good QoL, n (%) | Poor QoL, n (%) | Worst QoL, n (%) | Chi-squared test | p value |
| --- | --- | --- | --- | --- | --- | --- |
| Mildly symptomatic | 26 (68.4) | 42 (55.3) | 12 (4.3) | 0 (0) | 175.75 | <0.001 |
| Moderately symptomatic | 10 (26.3) | 21 (27.6) | 136 (49.3) | 0 (0) | | |
| Severely symptomatic | 2 (5.3) | 13 (17.1) | 128 (46.4) | 10 (100) | | |
| Total | 36 (100) | 76 (100) | 276 (100) | 10 (100) | | |

Table 4 shows the results of the post hoc multiple pairwise comparisons of the categories of contingency Table 3, performed after the chi-squared test rejected the null hypothesis of equal proportions of subjects across the cells. The table shows the pairwise proportions among the multiple cross-classifications that led to the statistical significance observed in Table 3, after adjusting the p value threshold to 0.0042. Mild symptomatology was significantly associated with best, good, and poor quality of life, while moderate symptomatology was associated with poor QoL. Severe symptomatology was significantly associated with all categories of QoL (best to worst).

Table 4 Pairwise comparison of IPSS severity and quality of life of patients with LUTS.

| Pair compared | Chi-squared test | p value |
| --- | --- | --- |
| Mildly symptomatic versus best QoL | 60.84 | <0.001∗ |
| Mildly symptomatic versus good QoL | 72.25 | <0.001∗ |
| Mildly symptomatic versus poor QoL | 136.89 | <0.001∗ |
| Mildly symptomatic versus worst QoL | 2.56 | 0.109 |
| Moderately symptomatic versus best QoL | 4.00 | 0.046 |
| Moderately symptomatic versus good QoL | 7.84 | 0.005 |
| Moderately symptomatic versus poor QoL | 21.16 | <0.001∗ |
| Moderately symptomatic versus worst QoL | 7.29 | 0.007 |
| Severely symptomatic versus best QoL | 19.36 | <0.001∗ |
| Severely symptomatic versus good QoL | 17.64 | <0.001∗ |
| Severely symptomatic versus poor QoL | 25.00 | <0.001∗ |
| Severely symptomatic versus worst QoL | 16.81 | <0.001∗ |

Adjusted p value = 0.0042. ∗Statistically significant.

Figure 1 shows the IPSS scores among the subjects with different perceived QoL according to the age group.
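The adjusted significance threshold of 0.0042 used for the post hoc pairwise comparisons is consistent with a Bonferroni correction of α = 0.05 over a family of 12 tests (3 IPSS severity levels × 4 QoL categories). The text does not state this derivation explicitly, so the following minimal sketch is an inference:

```python
# Bonferroni-adjusted threshold for the post hoc pairwise comparisons.
# The 12-test family (3 IPSS severity levels x 4 QoL categories) is an
# inference from Table 4, not stated explicitly in the text.
alpha = 0.05
n_tests = 3 * 4
adjusted_alpha = alpha / n_tests
print(round(adjusted_alpha, 4))   # 0.0042
```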
Among the cohort with the best quality of life, the elderly patients had a statistically significantly higher median IPSS score (p < 0.05). No statistical difference existed between the median IPSS scores of the cohort who had good QoL (p > 0.05). Among the cohort of subjects with poor QoL, the elderly patients had a statistically significantly higher median IPSS score (p < 0.05).

Figure 1 Quality of life versus age in years.

## 4. Discussion

We sought to compare the severity of LUTS, and its impact on QoL, between middle aged and elderly Nigerian men. The results showed that QoL was affected by every category of symptoms (mild to severe) across the ages. This study demonstrated that even mild IPSS symptomatology could be associated with poor quality of life, whereas some severe symptomatology was associated with good QoL (p < 0.05). Patients' health seeking behaviour might therefore have been influenced by their QoL rather than by the severity of the IPSS score. This finding agrees with the report of Finkelstein et al., who found that an individual's perceived impact on health-related quality of life might be a determinant for patients to seek medical advice [18]. In addition, the study also showed that the severity of IPSS increases with age (p < 0.001). This is similar to the findings of Engström et al., who reported that the severity of symptoms increases with age [19]. More importantly, among the cohort with the best quality of life, the elderly patients had a statistically significantly higher median IPSS score (p < 0.05). This might be due to the high prevalence of hypertension in the elderly, which could increase IPSS, coupled with the higher prevalence of alcohol consumption in the middle aged, which might lower their IPSS. However, a longitudinal study on the volume of alcohol consumption in this environment is necessary to corroborate this assertion.
This is in line with Suh et al. and Lu and Mo, who reported a similar protective effect of light to moderate alcohol consumption on IPSS [20, 21]. Furthermore, this study showed that elderly men had a higher frequency of poor QoL (p < 0.001). This is similar to the findings of Welch et al., who reported that men with moderate and severe LUTS identified in a large US cohort had poorer health status in several important quality of life dimensions [14]. It is important that these elderly men be adequately assessed for appropriate therapy. Surgical treatment has been strongly favoured for moderately to severely symptomatic patients, with watchful waiting or conservative measures indicated for patients with mild complaints [22–24]. Recognition of significant deterioration of QoL among patients with mild, moderate, and severe LUTS is evidence of the need for treatment (as opposed to watchful and conservative approaches) and a justification for early treatment irrespective of the IPSS score. This is in consonance with Finkelstein et al., who noted that the IPSS quality of life question had a larger effect size than all the other measures, suggesting that this single-item measure may be highly sensitive in differentiating subgroups [18].

## 5. Conclusion

From the foregoing, it might be better to use QoL, rather than the IPSS score alone, as the determinant of the choice of treatment, for prompt treatment of LUTS and improved clinical outcome.

## 6. Limitation of the Study

Criticisms have been made regarding the poor standardization of QoL scales and the frequent inappropriate use of the term QoL. The use of a one-item scale to assess general QoL (the IPSS-QoL question, called the "bother score") and the misinterpretation of QoL as synonymous with symptom control, perceived general health, or functional status are the most frequent reasons for such criticism. Besides, because this study was hospital-based, it may not be a true random sample of men with LUTS in the general population.
It is difficult to eliminate selection bias. Furthermore, we did not evaluate depressive symptomatology and other psychosocial contributors to QoL perception. However, the measure of QoL used in this study is a disease-specific instrument with psychometric properties well fitted to measuring QoL as it relates to urinary symptoms.

---
# An Estimation of Angular Coordinates and Angular Distance of Two Moving Objects by Using the Monopulse, MUSIC, and Root-MUSIC Methods

**Authors:** Wojciech Rosloniec
**Journal:** ISRN Signal Processing (2011)
**Publisher:** International Scholarly Research Network
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2011/101582

---

## Abstract

The paper presents results of extensive simulations carried out in order to assess the precision and angular resolution of subspace methods in a real radar system. It has been assumed that such a system uses a 32-element uniform linear array (ULA) and radiates only 3 bursts of 8 pulses each in a given direction. In order to avoid blind Doppler frequencies, the pulse repetition interval (PRI) is different in each burst. It has been shown that changing the PRI is not only necessary to avoid blind Doppler frequencies but also makes it possible to avoid false values of angular coordinates when two objects are visible in the same beam, in the same range gate, and their echoes attain maximal values in the same Doppler filter. It has also been shown that the precision and angular resolution of both the MUSIC and root-MUSIC methods can be improved by appropriate preprocessing of the signal samples used by these methods.

---

## Body

## 1. Introduction

The paper studies the problem of detection and estimation of angular coordinates of moving objects by means of the MUSIC and root-MUSIC methods. MUSIC and root-MUSIC were invented almost 30 years ago [1, 2]; however, application of these methods in a real radar system has become possible only recently, thanks to progress in the field of active electronically scanned arrays, digital beamforming, and multiprocessor systems containing clusters of high-speed general-purpose PowerPC processors and FPGA devices connected by means of a high-speed serial bus such as RapidIO.
An important reason stimulating studies on these methods was the difficulty of effectively estimating the angular coordinates of closely spaced moving objects by using the monopulse methods [3–5]. For example, it is not possible to separate individual objects, even with different radial speeds, if they are illuminated by the same antenna beam and are in the same range gate. More appropriate for solving problems of this kind are superresolution methods, represented here by MUSIC and root-MUSIC [6, 7]. In application to the estimation of angular coordinates of moving objects, these methods use the spatial correlation (covariance) matrix R formulated on the basis of complex samples of the received signals. As a rule, these samples are obtained at the outputs of the individual matched filters (MF) of the array antenna unit, which has a functional scheme similar to that shown in Figure 1.

Figure 1 Functional diagram of the array antenna unit. A/D: analog-digital converter, QPD: quadrature phase detector, and MF: matched filter (digital).

All computer simulations presented in this paper have been performed under the assumption that this linear equidistant array antenna contains M = 32 identical receiving elements spaced by the distance d = 0.7λ0, where λ0 is the length of the received wave.

This 32-element array antenna has been used to create a new 24-element array. Samples xi(n), where 1 ≤ i ≤ 32, of the signals received by the real, 32-element array antenna are grouped in the following way: (1) xk(e)(n) = (1/9)∑l=08 wl·xk+l(n), for 1 ≤ k ≤ 24, where wl = exp[jl(2πd/λ)sin(θmax)] is the lth coefficient of the vector w shaping the directive gain characteristics of the antenna elements belonging to the 24-element array, and θmax determines the direction of maximum directivity of each antenna element. This kind of grouping is often called initial preprocessing.
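The sliding-window grouping of (1) can be sketched in a few lines of numpy. This is a minimal sketch, not the paper's code: the function name, the 1/9 averaging normalization, and the default θmax = 0 are illustrative assumptions.

```python
import numpy as np

def group_subarrays(x, L=9, d_over_lambda=0.7, theta_max=0.0):
    """Form the equivalent-array samples x_k^(e)(n) from the physical
    element samples x_i(n): each equivalent element combines a sliding
    window of L elements weighted by w_l = exp[j*l*(2*pi*d/lambda)*sin(theta_max)].
    The 1/L normalization and the theta_max default are assumptions."""
    M = len(x)
    K = M - L + 1                       # 32 - 9 + 1 = 24 equivalent elements
    l = np.arange(L)
    w = np.exp(1j * l * 2 * np.pi * d_over_lambda * np.sin(theta_max))
    return np.array([(w * x[k:k + L]).sum() / L for k in range(K)])
```

For a 32-element input this yields 24 samples per snapshot, matching the equivalent array used throughout the rest of the paper.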
Samples xk(e)(n), where 1 ≤ k ≤ 24, corresponding to the individual elements of the equivalent 24-element array, are subsequently processed using the MUSIC and root-MUSIC methods.

It has been assumed that the antenna array described above radiates 24 pulses, grouped into three 8-pulse bursts differing in repetition time (frequency), as shown in Figure 2.

Figure 2 Sequence of pulses used in radars which employ MTD processing.

The signals (radar echoes) received at the outputs of the elements of the array are used for digital beamforming, s(n) = wsumH·x(n), and are processed later according to the MTD (moving target detection) technique [8]. Echo signals s(n) received after each pulse are subjected to coherent integration using predefined sets of weight coefficients wk: (2) sdoppl(b) = ∑k=18 wk·s(b+kB), where b is the number of the range gate determining the distance between the object and the radar, B is the number of range gates in each scan, and k is the number of the pulse belonging to a given pulse sequence called the burst. This integration is called Doppler filtration, because it makes it possible to distinguish between the echo signals from objects moving with different radial speeds. The signal samples from the individual array elements, weighted according to (1), form the vector x(e) that is the basis for calculation of the following estimate of the correlation matrix R: (3) R̂ = (1/N)∑n=1N x(e)x(e)H, where x(e)H is the Hermitian conjugate of the vector x(e). In practical considerations, the factor 1/N appearing in (3) is usually neglected, because it has no influence on the values of the eigenvectors used, but only sets the scale of the related eigenvalues of the matrix R [2, 9].

## 2. The Multiple Signal Classification (MUSIC) Method

As mentioned in the introduction, the MUSIC method belongs to the group of subspace methods in which the spectral functions are determined on the basis of the eigenvectors of the spatial correlation matrix R.
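The sample estimate (3) of this correlation matrix can be sketched as follows (a hypothetical helper, with the N snapshot vectors stored as rows):

```python
import numpy as np

def correlation_estimate(X):
    """Sample spatial correlation matrix per eq. (3):
    R_hat = (1/N) * sum_n x(n) x(n)^H.
    X holds the N snapshot vectors x^(e)(n) as rows (shape N x K)."""
    N = X.shape[0]
    # (x x^H)[i, j] = x_i * conj(x_j); summing over the N snapshots
    # is the single matrix product X^T conj(X).
    return X.T @ X.conj() / N
```

The diagonal loading used later in (25) amounts to adding `delta * np.eye(K)` to the returned matrix before eigendecomposition.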
Similarly, the matrix R is formulated on the basis of complex samples of the received signals [1, 6, 10]. In order to explain the essence of this method in the radar application, assume that the N-element antenna array receives P echo signals, reflected from P objects, where P < N. At a given moment, to each echo of the radiated pulse one can assign the vector of complex samples of the signals received at that moment by the individual elements of the array antenna; see Figure 1. Assume that the signal s1(t) = Acos(ωt + ϕk), represented in further considerations by its complex amplitude s1 = Aexp(jϕk), arrives at the first element of this array from the direction θk, where 1 ≤ k ≤ P. It is easy to prove on the basis of Figure 1 that the complex amplitude of the signal coming from the same direction θk to the ith element of the array can be written as si = s1exp[−jβd(i−1)sin(θk)], where 2 ≤ i ≤ N, β = 2πf/c is the propagation constant, c ≈ 3·10^8 m/s is the speed of light, and f denotes the frequency of the narrowband received signal. The complex samples of this signal, appearing at the outputs of the individual matched filters MF, will be shifted in phase in a similar way. The set of these samples constitutes an N-dimensional column vector (4) xk = [x1, x2, x3, …, xN]T = [1, e−jβdsin(θk), e−j2βdsin(θk), …, e−j(N−1)βdsin(θk)]T s1(θk) = v(θk)s1(θk), where (5) v(θk) = [1, e−jβdsin(θk), e−j2βdsin(θk), …, e−j(N−1)βdsin(θk)]T, 1 ≤ k ≤ P, is the so-called array steering vector [10]. Each element of the array shown in Figure 1 simultaneously receives the signals from all directions θk, where 1 ≤ k ≤ P, and a noise signal with variance σn2.
It means that the resulting column vector of the complex samples of all P signals and noise, before further digital processing, can be written as (6) x = ∑k=1P xk + n = ∑k=1P v(θk)s1(θk) + n = Vs + n, where (7) V = [v(θ1), v(θ2), v(θ3), …, v(θP)] is the N×P matrix whose columns are the steering vectors (5), s = [s1(θ1), s1(θ2), s1(θ3), …, s1(θP)]T is the P-element column vector representing the signals received by the first element of the array, and n is the N-element column vector representing the received noise signals. The correlation matrix R of the samples of signals obtained at the outputs of the antenna system (see Figure 1) can be written as a sum involving the correlation matrix of the desirable signals Rs and the noise correlation matrix Rn [1, 10], namely, (8) R = E{xxH} = VRsVH + Rn = VRsVH + σn2I, where the operator E{·} denotes the expected value, I is the N×N identity matrix, and xH is the Hermitian conjugate of the vector x. In other words, in this case, the vector xH is the N-element row vector whose elements are the complex conjugates of the corresponding elements of the column vector x. Similarly, the matrix VH is the Hermitian conjugate of the matrix V. According to [1], the correlation matrix of the desirable signals (9) Rs = E{ssH} = diag{σ12, σ22, σ32, …, σP2} is a P×P diagonal matrix if the received desirable signals are uncorrelated. Under the assumption P < N, the matrix VRsVH is singular, which means that (10) det[VRsVH] = det[R − σn2I] = 0. It follows from (10) that σn2 is an eigenvalue of the matrix R [9]. The subspace in which the desirable signals are not defined has dimension N−P. Hence, σn2 is an (N−P)-fold eigenvalue of the matrix R.
The matrices R and VRsVH are nonnegative definite, and therefore the matrix R also has P other eigenvalues λk satisfying the condition λk > σn2 > 0, where 1 ≤ k ≤ P. The eigenvectors qk assigned to these eigenvalues are mutually orthogonal [9, 10]. According to the general formulation of the matrix eigenvalue problem, (R − λkI)qk = 0, we can write (11) Rqk = [VRsVH + σn2I]qk = λkqk for 1 ≤ k ≤ N, where λk > σn2 > 0 if 1 ≤ k ≤ P and λk = σn2 if P+1 ≤ k ≤ N. It follows from (11) that (12) [VRsVH]qk = (λk − σn2)qk for 1 ≤ k ≤ P, and [VRsVH]qk = 0 for P+1 ≤ k ≤ N. Relation (12) shows that the N-dimensional space of signals and noise can be divided into two mutually orthogonal subspaces, that is, the subspace of signals Qs ≡ [q1, q2, q3, …, qP] and the subspace of noise Qn ≡ [qP+1, qP+2, qP+3, …, qN]. According to this partition, the correlation matrix R can be written as the following sum: (13) R = ∑k=1P (λk − σn2)qkqkH + ∑k=1N σn2qkqkH, where λk denotes the eigenvalue corresponding to the vector qk and σn2 is the variance of the white noise received from the individual antenna elements. After appropriate grouping of the terms of sum (13), we obtain the relation in which one can distinguish the P eigenvectors representing the desirable signals and the N−P eigenvectors belonging to the noise subspace: (14) R = ∑k=1P λkqkqkH + ∑k=P+1N σn2qkqkH. Each of the vectors (4) containing complex samples of a desirable signal belongs to the signal subspace Qs, and therefore it can be written as a sum of the eigenvectors defined in this subspace, namely, (15) xk = v(θk)s1(θk) = ∑m=1P bmqm, where bm is a coefficient of suitable value. It should be pointed out that each component qm of vector (15) is orthogonal to an arbitrary eigenvector from the noise subspace Qn. Consequently, the whole vector (15) is orthogonal to each such noise eigenvector qm. This unique property can be expressed as follows: (16) xkHqm = [v(θk)s1(θk)]Hqm = s1H(θk)[v(θk)]Hqm = 0 for 1 ≤ k ≤ P < m ≤ N.
Applying equation (16) to all eigenvectors of the noise subspace Qn, we find that the dot product of the vector v(θk), representing the signal received from direction θk, with the sum of the eigenvectors from the noise subspace Qn will also take a value close to zero. In the ideal case (exactly defined correlation matrix R, exactly evaluated eigenvalues and their corresponding eigenvectors, and precise partition into the subspaces of signal and noise), we have (17) vH(θk)∑m=P+1N qm = 0. Using the property described by (17), the following estimate of the spectral power density of the signal can be formulated: (18) P̂(θ) = 1/∑k=P+1N |vH(θ)qk|2 = 1/∑k=P+1N vH(θ)qkqkHv(θ). This estimate is usually called the spectrum of the MUSIC method [1, 6]. Placing all eigenvectors qk from the noise subspace into the columns of the matrix QN, spectrum (18) can be written in the equivalent, simpler form (19) P̂(θ) = 1/[vH(θ)QNQNHv(θ)]. The function P̂(θ) attains local maximum values for the angles θk determining the directions of arrival of the signals being received.

## 3. The Root-MUSIC Method

Determination of the angular positions θk on the basis of spectrum (18) or (19) requires performing calculations for a great number of discrete values of the angle θ and then determining all of its maxima in the given, relatively large scanning range θmin ≤ θ ≤ θmax. This task is especially laborious and time consuming when an angular resolution of the order of one tenth or one hundredth of a degree is required. Therefore, in order to reduce the amount of calculation, a modified version of the MUSIC method, called root-MUSIC, has been developed. In this improved version, the problem of evaluating the local maxima of function (19) is replaced by the problem of finding the roots of the polynomial vH(θ)QNQNHv(θ). Estimated values of the angular coordinates of objects are evaluated for an assumed number of roots, which should be equal to twice the number of received desirable signals.
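Before turning to the rooting step, the grid-scanned spectrum (19) itself can be sketched in numpy. This is a minimal sketch, not the paper's implementation; the function name and scan grid are illustrative, and it relies on `numpy.linalg.eigh` returning eigenvalues in ascending order, so the K − P smallest eigenvectors span the noise subspace.

```python
import numpy as np

def music_spectrum(R, P, d_over_lambda, thetas):
    """MUSIC pseudospectrum per eq. (19):
    P_hat(theta) = 1 / (v^H(theta) Qn Qn^H v(theta)),
    where Qn holds the K - P noise-subspace eigenvectors of R."""
    K = R.shape[0]
    _, Q = np.linalg.eigh(R)            # eigenvalues in ascending order
    Qn = Q[:, :K - P]                   # noise subspace
    G = Qn @ Qn.conj().T
    beta_d = 2 * np.pi * d_over_lambda
    k = np.arange(K)
    out = []
    for th in thetas:
        v = np.exp(-1j * beta_d * k * np.sin(th))   # steering vector (5)
        out.append(1.0 / np.real(v.conj() @ G @ v))
    return np.array(out)
```

Local maxima of the returned array indicate the estimated directions of arrival.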
This number is usually determined by means of special criteria, among which the best known are AIC (Akaike information criterion) and MDL (minimum description length) [11]. The denominator of function (19) is in general a polynomial, which can be written as (20) vH(θ)QNQNHv(θ) = vH(θ)Pv(θ) = C(z) = ∑n=−N+1N−1 cnzn, where P = QNQNH and z = exp[−jβdsin(θ)]. According to this definition, (21) P = QNQNH is an N×N Hermitian matrix with elements pkl. The coefficients cn of polynomial (20) can be determined by summing the elements pkl of the matrix P placed on its nth diagonal, namely, (22) cn = (1/N)∑k−l=n pkl. According to this formula, (23) c0 = (p11 + p22 + p33 + ⋯ + pN−1,N−1 + pN,N)/N, c1 = (p12 + p23 + p34 + ⋯ + pN−2,N−1 + pN−1,N)/N, c2 = (p13 + p24 + p35 + ⋯ + pN−3,N−1 + pN−2,N)/N, …, cN−1 = p1,N/N. As was mentioned earlier, P is a Hermitian matrix, and therefore the coefficients of polynomial (20) with indexes −n and n are mutually conjugate, so c−n = cn* for 1 ≤ n ≤ N−1. The equation C(z) = 0 has 2N−2 roots, and to each root zn there corresponds another root 1/zn*. These roots are most frequently evaluated by means of the companion matrix method [12, 13]. It follows from the literature that this method ensures sufficient accuracy of calculation for all roots, even when the polynomial C(z) is of relatively high degree, for instance 2N−2 = 62. Due to this valuable property, the companion matrix method has been implemented in the computational environment MATLAB. Estimates of the P angular positions of the objects being detected are evaluated on the basis of the 2P roots situated nearest to the unit circle on the complex plane z = Re(z) + jIm(z), namely, on the basis of the P pairs (zn, 1/zn*). Of course, each pair chosen in this manner determines only one location. With negligibly small noise power, σn2 ≈ 0, the roots lie exactly on the unit circle mentioned above.
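The coefficient construction, rooting, and root-selection steps just described can be sketched as follows. This is an illustrative sketch, not the paper's code: `numpy.roots` is used for the rooting (internally it is the same companion-matrix eigenvalue method the paper mentions), the constant 1/N scale factor of (22) is omitted since it does not move the roots, and the keep-the-inner-root selection detail is an assumption.

```python
import numpy as np

def root_music(R, P, d_over_lambda):
    """root-MUSIC sketch: build C(z) from the diagonal sums of
    Pn = Qn Qn^H, root it, keep the P roots inside and closest to the
    unit circle, and map each root z to an angle via
    theta = arcsin(-arg(z) / (beta * d))."""
    K = R.shape[0]
    _, Q = np.linalg.eigh(R)            # eigenvalues in ascending order
    Qn = Q[:, :K - P]                   # noise subspace
    Pn = Qn @ Qn.conj().T
    # coefficient of z^n is the sum of the n-th diagonal of Pn
    c = np.array([np.trace(Pn, offset=n) for n in range(-(K - 1), K)])
    z = np.roots(c[::-1])               # np.roots wants highest power first
    z = z[np.abs(z) < 1.0]              # keep one root of each reciprocal pair
    z = z[np.argsort(np.abs(np.abs(z) - 1.0))][:P]
    beta_d = 2.0 * np.pi * d_over_lambda
    return np.sort(np.arcsin(-np.angle(z) / beta_d))
```

Unlike the grid scan of (19), the angular accuracy here is limited only by the rooting accuracy, not by the scan step.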
From the substitution z = exp[−jβdsin(θ)] introduced above, it follows that the estimates of the angular positions θ̂n are (24) θ̂n = arcsin[−(1/βd)arg(zn)], where 1 ≤ n ≤ P, β = 2π/λ, and d is the antenna element spacing.

## 4. Application of the MUSIC and Root-MUSIC Methods to the Estimation of Angular Coordinates of Moving Objects

As was already mentioned in the introduction, exact estimation of the angular coordinates of moving objects by means of monopulse methods is in many cases impossible because of their limited angular resolution. For this reason, they cannot distinguish objects illuminated by the same antenna beam and situated at the same slant distance. In order to compare the angular resolution of the monopulse, MUSIC, and root-MUSIC methods, some computer simulations have been carried out. In these simulations, the angular coordinates of two objects (planes) moving with the speed v = 100 m/s have been evaluated. It has also been assumed that both planes are moving at 45° with respect to the north direction.

The position of the first plane is determined by the constant angle θ1 = −14.1°, while the second plane changes its position θ2 gradually from −14.1° to −10.1°, preserving its course and speed. In other words, the angular separation Δθ = |θ1 − θ2| between these planes (objects) changes in the range Δθ = 0°–4°. The limited number of pulses radiated by the radar in a given direction, and consequently the limited number of signal vectors x(e), makes the estimate of the correlation matrix R̂ inaccurate, and it has to be modified using the diagonal loading technique before eigendecomposition and estimation of the angular coordinates by means of the subspace methods: (25) R̂ = (1/N)∑n=1N x(e)x(e)H + δ·I, where δ = 4σn2 is the loading factor.

The value of the mean pulse repetition interval (mean PRI) has been set to Tp = Tp-mean = 2 ms.
The real PRI in the ith burst satisfies 0.9·Tp-mean ≤ Tpi ≤ Tp-mean and is modified in each burst in order to avoid blind Doppler frequencies (Tp1 = 1.0·Tp-mean, Tp2 = 0.95·Tp-mean, Tp3 = 0.9·Tp-mean).

The mean square errors of the estimates of the angular coordinates of both objects obtained by means of the monopulse method are illustrated in Figure 3.

Figure 3 Mean square errors of estimates of angular coordinates of two objects determined by means of the amplitude monopulse method.

As can be seen, the second plane, by changing its angular position, causes estimation errors in the angular coordinates of the first one. This effect can be eliminated to some extent by means of additional Doppler filtration. This filtration makes it possible to attenuate the echo signals coming from the second plane when both signals have substantially different Doppler frequency shifts normalized with respect to the pulse repetition frequency (PRF) Fp. The change of angular position of the second plane causes an observable change of its radial speed, understood as the plane's speed component with respect to the radar station, and consequently causes a change of its Doppler frequency shift. The results of the simulations have shown that for relatively small velocities (v = 100 m/s), the maximal difference of Doppler frequencies Δfd = fd1 − fd2 is comparable with the −3 dB bandwidth of the Doppler filter, B−3dB = 60 Hz, and is smaller than its −18 dB bandwidth, B−18dB = 120 Hz. The Doppler filters have their sidelobe level located at −18 dB. Therefore, if the echo of the first object attains its maximal value in the nth Doppler filter, the Doppler frequency of the echo of the second object is not located in the stopband of this filter and cannot be sufficiently attenuated. The difference of Doppler frequencies Δfd is proportional to the speed of both objects, and at some point the echoes of these objects can be separated using Doppler filtration.
Unfortunately, at higher speeds (v=1000 m/s) these echoes may have similar normalized Doppler frequencies fd-norm=fd1%Fp=fd2%Fp despite the fact that their Doppler frequencies are different: fd1=fd-norm+n·Fp, fd2=fd-norm+m·Fp. This effect causes errors in the estimation of the angular coordinates of the objects being detected. The disadvantageous effect under discussion is well illustrated by the simulation results shown in Figure 4. These simulations have been performed for the planes whose speed has been increased to v=1000 m/s. The pulse repetition interval (PRI) has also been increased to Tp=4 ms in order to lower the PRF and increase the number of similar normalized Doppler frequencies. Figure 4 Mean square errors of estimates of angular coordinates of two objects determined by means of the amplitude monopulse method and Doppler filtration (v=1000 m/s, Tp=4 ms). From the results presented above, it follows that additional Doppler filtration is not a universal solution ensuring proper distinction of the moving objects at all times. The signal-to-noise ratio S/Nnoise, given in Figures 3 and 4, determines the value of this parameter at the outputs of the matched filters MF; see Figure 1. The power ratio of the desired signal and the jamming signal, S/Njamm, is calculated at the inputs of the pulse compression blocks. This ratio has been defined before digital compression, because the noise signal can be differently correlated with the radiated LFM pulse. The second, very important advantage of the MUSIC and root-MUSIC methods is their sufficient immunity to relatively high-power jamming signals. In order to confirm this conclusion, an additional strong narrowband signal, situated in the direction defined by θjamm=1.4°, has been introduced. The surface power density of this signal is 60 dB greater than the corresponding surface power density of both desirable radar signals.
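The aliasing relation fd-norm=fd%Fp described above is easy to reproduce numerically; the Doppler shifts below are illustrative values, not taken from the simulations:

```python
# Two targets with different true Doppler shifts can share one
# normalized Doppler frequency f_norm = f_d mod F_p (PRF aliasing).
def normalized_doppler(fd_hz, prf_hz):
    return fd_hz % prf_hz

prf = 250.0                  # F_p for T_p = 4 ms
fd1, fd2 = 1300.0, 1550.0    # illustrative true Doppler shifts, Hz
print(normalized_doppler(fd1, prf))   # 50.0
print(normalized_doppler(fd2, prf))   # 50.0 -> indistinguishable

# A changed PRI (staggered PRF) separates them again:
prf2 = 1.0 / 0.0038          # T_p = 3.8 ms  ->  F_p ≈ 263.16 Hz
print(normalized_doppler(fd1, prf2))  # ≈ 247.4
print(normalized_doppler(fd2, prf2))  # ≈ 234.2
```

This is exactly why the staggered-PRI bursts of relation (26) restore the distinction that a single PRF loses.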
In this situation, radar systems with receiving antennas not adapted to the interferences will be jammed, and eventual detections may have improper estimates of angular coordinates. Mean square errors of such improper estimates of angular coordinates are illustrated in Figure 5. Of course, the negative influence of a single strong jamming signal can be decreased by using a receiving antenna in the form of a multielement array, which can be adapted to this undesirable signal, [14, 15]. Figure 5 Mean square errors of angular coordinates of two objects obtained by using the monopulse method and Doppler filtration in the presence of the jamming signal (v=100 m/s). In other words, the jamming signal should be attenuated by the array antenna to the highest possible degree. The simulation results presented in Figure 6 confirm the effectiveness of this approach in solving the radiolocation problem under consideration. Figure 6 Mean square errors of angular coordinates of two objects obtained using the monopulse method with Doppler filtration in the presence of the jamming signal attenuated by the adaptive receiving antenna, (v=100 m/s, Ljam=98.6 dB). Next, the same radiolocation problem has been solved by using the MUSIC and root-MUSIC methods.
Figures 7 and 9 show the mean square errors of estimates obtained by means of the MUSIC method for S/Nnoise=0 dB and S/Nnoise=30 dB in the presence of the jamming signal determined above. Figure 7 Mean square errors of angular coordinates of two objects obtained using the MUSIC method in the presence of the jamming signal, and S/Nnoise=0 dB (v=100 m/s). Similarly, the corresponding mean square errors of estimates evaluated using the root-MUSIC method for S/Nnoise=0 dB, S/Nnoise=30 dB, and the same jamming signal are shown in Figures 8 and 10. Figure 8 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method in the presence of the jamming signal and S/Nnoise=0 dB (v=100 m/s). Figure 9 Mean square errors of angular coordinates of two objects obtained using the MUSIC method in the presence of the jamming signal, and S/Nnoise=30 dB (v=100 m/s). Figure 10 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method in the presence of the jamming signal and S/Nnoise=30 dB (v=100 m/s). The values of angular resolution evaluated for these methods are given correspondingly in Tables 1 and 2.

Table 1 Angular resolution evaluated for MUSIC method.

| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.45 | 0.188 | 0 | no jamming |
| 2.05 | 0.094 | 0 | −30 |
| 1.35 | 0.170 | 0 | −60 |
| 1.0 | 0.145 | 6 | no jamming |
| 1.575 | 0.064 | 6 | −30 |
| 1.025 | 0.106 | 6 | −60 |
| 0.725 | 0.105 | 10 | no jamming |
| 1.275 | 0.059 | 10 | −30 |
| 0.75 | 0.121 | 10 | −60 |
| 0.475 | 0.048 | 20 | no jamming |
| 0.5 | 0.213 | 20 | −30 |
| 0.5 | 0.041 | 20 | −60 |
| 0.3 | 0.033 | 30 | no jamming |
| 0.275 | 0.147 | 30 | −30 |
| 0.3 | 0.034 | 30 | −60 |

Table 2 Angular resolution evaluated for root-MUSIC method.
| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.1 | 0.202 | 0 | no jamming |
| 1.1 | 0.208 | 0 | −30 |
| 1.075 | 0.206 | 0 | −60 |
| 0.45 | 0.180 | 6 | no jamming |
| 0.475 | 0.201 | 6 | −30 |
| 0.5 | 0.210 | 6 | −60 |
| 0.325 | 0.206 | 10 | no jamming |
| 0.325 | 0.215 | 10 | −30 |
| 0.35 | 0.195 | 10 | −60 |
| 0.2 | 0.220 | 20 | no jamming |
| 0.225 | 0.133 | 20 | −30 |
| 0.25 | 0.137 | 20 | −60 |
| 0.125 | 0.188 | 30 | no jamming |
| 0.15 | 0.113 | 30 | −30 |
| 0.125 | 0.152 | 30 | −60 |

For comparison, Tables 3 and 4 contain the values of angular resolution obtained when the MUSIC and root-MUSIC methods determine angular coordinates on the basis of the signal samples received by the real 32-element antenna array.

Table 3 Angular resolution evaluated for MUSIC method (32-element array).

| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.7 | 0.139 | 0 | no jamming |
| 2.125 | 0.080 | 0 | −30 |
| 2.05 | 0.107 | 0 | −60 |
| 1.1 | 0.134 | 6 | no jamming |
| 1.15 | 0.122 | 6 | −30 |
| 1.2 | 0.111 | 6 | −60 |
| 0.9 | 0.121 | 10 | no jamming |
| 0.95 | 0.109 | 10 | −30 |
| 0.925 | 0.096 | 10 | −60 |
| 0.475 | 0.072 | 20 | no jamming |
| 0.475 | 0.071 | 20 | −30 |
| 0.5 | 0.060 | 20 | −60 |
| 0.35 | 0.038 | 30 | no jamming |
| 0.325 | 0.050 | 30 | −30 |
| 0.35 | 0.045 | 30 | −60 |

Table 4 Angular resolution evaluated for root-MUSIC method (32-element array).

| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.7 | 0.098 | 0 | no jamming |
| 2.125 | 0.067 | 0 | −30 |
| 2.05 | 0.081 | 0 | −60 |
| 0.85 | 0.114 | 6 | no jamming |
| 0.875 | 0.117 | 6 | −30 |
| 0.875 | 0.097 | 6 | −60 |
| 0.475 | 0.094 | 10 | no jamming |
| 0.45 | 0.131 | 10 | −30 |
| 0.475 | 0.104 | 10 | −60 |
| 0.275 | 0.123 | 20 | no jamming |
| 0.275 | 0.130 | 20 | −30 |
| 0.275 | 0.124 | 20 | −60 |
| 0.175 | 0.091 | 30 | no jamming |
| 0.175 | 0.094 | 30 | −30 |
| 0.175 | 0.093 | 30 | −60 |

In other words, in this process the stage of initial preprocessing, see the introduction, has been omitted. Comparing the results given in Tables 1 and 2, as well as in Tables 3 and 4, we can see that the proposed initial preprocessing, see relation (1), significantly improves precision and angular resolution. According to these results, the use of initial preprocessing permits reducing the S/Nnoise ratio by about 6 dB while conserving the necessary precision and resolution.
Thus, the error magnitudes given in Tables 1 and 2 seem to be acceptable for most similar radiolocation problems encountered in practice. As mentioned at the beginning of the paper, all simulations have been carried out assuming that the PRI changes in each burst. The following values of PRI have been used: (26) Tp1=1.00·Tp-mean, Tp2=0.95·Tp-mean, Tp3=0.90·Tp-mean. Estimation of the correlation matrix on the basis of signal vectors received for 3 different intervals allows the MUSIC and root-MUSIC methods to estimate the correct values of angular coordinates despite the fact that the signals could have the same or very close normalized Doppler frequency fd-norm=fd%Fp and be correlated for a given PRI. This technique is well known in the radar literature, but it has mainly been used to avoid blind speeds. The presented simulations show that it also helps to mitigate the problem of estimation of angular coordinates of closely spaced, highly correlated signals. When objects are moving with very high velocities (v=1000 m/s), their angular coordinates can be estimated falsely when the emitted signals have the same PRI in each burst, as shown in Figures 11 and 12. Figure 11 Values of normalized Doppler frequencies of two objects moving with speed v=1000 m/s (Tp=2 ms). Figure 12 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method (S/Nnoise=0 dB, v=1000 m/s, Tp=2 ms). This effect is especially apparent when the PRF is relatively small, for instance when Fp=250 Hz (Tp=4 ms).
The values of normalized Doppler frequencies and mean square errors of angular coordinates obtained for this case are illustrated in Figures 13 and 14. Figure 13 Values of normalized Doppler frequencies of two objects moving with speed v=1000 m/s (Tp=4 ms). Figure 14 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method (S/Nnoise=0 dB, v=1000 m/s, Tp=4 ms). A further study of this problem and a comparison of this approach with well-known signal decorrelation techniques, such as spatial smoothing or redundancy averaging, is beyond the scope of this paper. ## 5. Conclusions The radiolocation problem defined at the beginning of the introduction and in Section 4 has been solved in turn by different methods, that is, the amplitude monopulse method, the amplitude monopulse method aided by coherent Doppler filtration, MUSIC, and root-MUSIC. The diagrams shown in Figures 3–10 illustrate the results of computer simulations obtained by means of these methods. Thus, on the basis of the diagrams shown in Figure 3, we can deduce that the traditional amplitude monopulse method is inadequate for this purpose. Some improvement can be obtained using additional coherent Doppler filtration. Unfortunately, this solution is not always effective, because false estimates can appear when the echo signals after Doppler filtration [8] attain similar values in the same frequency channel or have similar normalized Doppler frequencies. This effect is illustrated in the diagram shown in Figure 4. Partial elimination of this undesirable effect is possible using multiburst radar signals with variable pulse repetition time (frequency), that is, similar to that shown in Figure 2. The MUSIC and root-MUSIC methods proved to be the most effective and sufficiently precise for these applications; see Tables 1 and 2.
This conclusion is well justified by the corresponding simulation results illustrated in Figures 7 and 10. The radiolocation problem under consideration becomes especially difficult to solve after introducing strong narrowband jamming interference. In the paper, it has been assumed that this jamming signal arrives from the direction θjamm=1.4°, and its power density is 60 dB greater than the corresponding surface power density of both desirable radar signals. The influence of this jamming signal on the results of similar simulations performed using the MUSIC and root-MUSIC methods is illustrated by the diagrams shown in Figures 7 and 10. The presented results confirm that the MUSIC and root-MUSIC methods are suitable for the effective solution of radiolocation problems similar to that discussed here. Moreover, they show that these methods can be treated as reliable for radar signals whose signal-to-noise power ratio is not less than the signal-to-noise ratio at which the signal surpasses a detection threshold. It has also been confirmed that the proposed initial preprocessing, see (1), makes possible a significant improvement of the precision and angular resolution of the MUSIC and root-MUSIC methods. Consequently, they can be applied to relatively weak radar signals, for which S/Nnoise≈0 dB. All simulation results presented in this paper have been obtained under the assumption that the number P of objects being detected is known exactly. This number is required for the appropriate partition of the space, spanned by the correlation matrix R, into the useful signal subspace Qs and the noise subspace Qn. Of course, it is not easy to determine this number for the majority of quasireal radiolocation scenarios, and for this reason, it can be a source of potential errors. Thus, an additional algorithm determining this number with the highest possible precision, according to the AIC or MDL criterion, is required. ## 6.
Summary The main subject of consideration is the MUSIC and root-MUSIC methods used for estimation of the angular coordinates (directions of arrival) and angular distance of two moving objects in the presence of an uncorrelated noise signal and an external, relatively strong narrowband jamming interference. A 32-element uniform linear array (ULA) is used as the receiving antenna. Extensive computer simulations have been carried out to demonstrate the sufficient accuracy and good spatial resolution properties of these methods in the scenario defined above. It is also shown that by using the proposed initial preprocessing, we can increase the accuracy and angular resolution of the methods under discussion. Most of the simulation results, presented mainly in graphical form, have been compared with the corresponding simulation results obtained by using the traditional amplitude monopulse method and the amplitude monopulse method aided by coherent Doppler filtration. --- *Source: 101582-2011-06-08.xml*
# An Estimation of Angular Coordinates and Angular Distance of Two Moving Objects

**Authors:** Wojciech Rosloniec

**Journal:** ISRN Signal Processing (2011)

**Publisher:** International Scholarly Research Network

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.5402/2011/101582
--- ## Abstract The paper presents the results of extensive simulations carried out in order to assess the precision and angular resolution of subspace methods in a real radar system. It has been assumed that such a system uses a 32-element uniform linear array (ULA) and radiates only 3 bursts consisting of 8 pulses in a given direction. In order to avoid blind Doppler frequencies, the pulse repetition interval (PRI) is different in each burst. It has been shown that the change of PRI is not only necessary to avoid blind Doppler frequencies but also makes it possible to avoid false values of angular coordinates when two objects are visible in the same beam, in the same range gate, and their echoes attain maximal values in the same Doppler filter. It has also been shown that the precision and angular resolution of both the MUSIC and root-MUSIC methods can be improved by appropriate preprocessing of the signal samples used by these methods. --- ## Body ## 1. Introduction The paper studies the problem of detection and estimation of angular coordinates of moving objects by means of the MUSIC and root-MUSIC methods. MUSIC and root-MUSIC were invented almost 30 years ago [1, 2]; however, application of these methods in real radar systems has become possible only recently thanks to progress in the field of active electronically scanned arrays, digital beamforming, and multiprocessor systems containing clusters of high-speed general-purpose PowerPC processors and FPGA devices connected by means of a high-speed serial bus such as RapidIO. An important reason stimulating the studies of these methods was the difficulty of effective estimation of the angular coordinates of closely spaced moving objects by using the monopulse methods, [3–5]. For example, it is not possible to separate individual objects, even with different radial speeds, if they are illuminated by the same antenna beam and are in the same range gate.
More appropriate for solving problems of this kind are superresolution methods, represented here by MUSIC and root-MUSIC, [6, 7]. In application to estimation of angular coordinates of moving objects, these methods use the spatial correlation (covariance) matrix R formulated on the basis of complex samples of received signals. As a rule, these samples are obtained at the outputs of the individual matched filters (MF) of the array antenna unit, which has a functional scheme similar to that shown in Figure 1. Figure 1 Functional diagram of the array antenna unit. A/D: analog-digital converter, QPD: quadrature phase detector, and MF: matched filter (digital). All computer simulations presented in this paper have been performed under the assumption that this linear equidistant array antenna contains M=32 identical receiving elements spaced by the distance d=0.7λ0, where λ0 is the wavelength of the received wave. This 32-element array antenna has been used to create a new 24-element array. Samples xi(n), where 1≤i≤32, of the signals received by the real, 32-element array antenna are grouped in the following way: (1) xk(e)(n)=(1/9)∑l=0..8 wl xk+l(n), for 1≤k≤24, where wl=exp[jl(2πd/λ)sin(θmax)] is the lth coefficient of the vector w shaping the directive gain characteristics of the antenna elements belonging to the 24-element array, and θmax determines the direction of maximum directivity of each antenna element. This kind of grouping is often called initial preprocessing.
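Relation (1), which slides a 9-element weighted group across the 32 real elements (32 − 9 + 1 = 24 outputs), can be sketched as follows; the snapshot values and θmax below are illustrative assumptions:

```python
import numpy as np

def preprocess(x, d_over_lambda=0.7, theta_max_deg=0.0):
    """Group the 32 real-array samples x_i(n) into 24 'equivalent'
    samples x_k^(e)(n) according to relation (1)."""
    M, L = 32, 9                           # 32 elements, groups of 9 -> 24 outputs
    l = np.arange(L)
    w = np.exp(1j * l * 2 * np.pi * d_over_lambda
               * np.sin(np.radians(theta_max_deg)))
    return np.array([w @ x[k:k + L] / L for k in range(M - L + 1)])

x = np.ones(32, dtype=complex)             # illustrative snapshot
xe = preprocess(x)
print(xe.shape)                            # (24,)
```

With θmax=0 all weights equal 1 and each equivalent element is simply the average of 9 neighbours; a nonzero θmax steers the directivity of the equivalent elements.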
Samples xk(e)(n), where 1≤k≤24, corresponding to individual elements of the equivalent, 24-element array are subsequently processed using the MUSIC and root-MUSIC methods. It has been assumed that the antenna array described above radiates 24 pulses, grouped into three 8-pulse bursts differing in the repetition time (frequency), as shown in Figure 2. Figure 2 Sequence of pulses used in radars which employ MTD processing. The signals (radar echoes) received at the outputs of the elements of the array are used for digital beamforming, s(n)=wsumH·x(n), and are processed later according to the MTD (moving target detection) technique, [8]. Echo signals s(n) received after each pulse are subjected to coherent integration using predefined sets of weight coefficients wk: (2) sdoppl(b)=∑k=1..8 wk s(b+kB), where b is the number of the range gate determining the distance between the object and the radar, B is the number of range gates in each scan, and k is the number of the pulse belonging to a given pulse sequence called the burst. This integration is called Doppler filtration, because it permits distinguishing between the echo signals from objects moving with different radial speeds. The signal samples from the individual array elements, weighted according to (1), form the vector x(e) that is the basis for calculation of the following estimate: (3) R̂=(1/N)∑n=1..N x(e)x(e)H, of the correlation matrix R, where x(e)H is the Hermitian conjugate of the vector x(e). In practical considerations, the factor 1/N appearing in (3) is usually neglected, because it has no influence on the used eigenvectors but only scales the related eigenvalues of the matrix R [2, 9]. ## 2. The Multiple Signal Classification (MUSIC) Method As mentioned in the introduction, the MUSIC method belongs to the group of subspace methods in which the spectral functions are determined on the basis of the eigenvectors of the spatial correlation matrix R.
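The coherent integration (2) above can be sketched as follows; the weights wk=exp(-j2πf·k), i.e. an unwindowed 8-point DFT bin, are an assumption for illustration (the paper does not specify its filter-bank coefficients):

```python
import numpy as np

def doppler_filter(s, b, B, f_norm):
    """Coherently integrate the 8 pulse returns of range gate b, as in (2).
    Assumed weights w_k = exp(-j*2*pi*f_norm*k) select one normalized
    Doppler frequency (an unwindowed 8-point DFT bin)."""
    k = np.arange(1, 9)
    w = np.exp(-2j * np.pi * f_norm * k)
    return np.sum(w * s[b + k * B])

# toy echo at gate b with normalized Doppler 0.25
B, b = 100, 5
s = np.zeros(1000, dtype=complex)
k = np.arange(1, 9)
s[b + k * B] = np.exp(2j * np.pi * 0.25 * k)
print(abs(doppler_filter(s, b, B, 0.25)))   # 8.0 (matched Doppler bin)
print(abs(doppler_filter(s, b, B, 0.0)))    # ≈ 0 (mismatched bin)
```

The matched bin gains a factor of 8 in amplitude while orthogonal Doppler bins are rejected, which is exactly the radial-speed discrimination described in the text.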
The matrix R is formulated on the basis of complex samples of the received signals, [1, 6, 10]. In order to explain the essence of this method in the radar application, assume that the N-element antenna array receives P echo signals, reflected from P objects, where P<N. In a given moment, to each echo of the radiated pulse one can assign the vector of complex samples of the signals received in this moment by the individual elements of the array antenna; see Figure 1. Therefore, we assume that the signal s1(t)=Acos(ωt+ϕk), represented in further considerations by its complex amplitude s1=Aexp(jϕk), arrives at the first element of this array from the direction θk, where 1≤k≤P. It is easy to prove on the basis of Figure 1 that the complex amplitude of the signal coming from the same direction θk to the ith element of the array can be written as si=s1exp[-jβd(i-1)sin(θk)], where 2≤i≤N, β=2πf/c is the propagation constant, c≈3·10^8 m/s is the speed of light, and f denotes the frequency of the narrowband received signal. The complex samples of this signal, appearing at the outputs of the individual matched filters MF, will be shifted in phase in a similar way. The set of these samples constitutes an N-dimensional column vector (4) xk=[x1, x2, x3, …, xN]T=[1, e-jβdsin(θk), e-j2βdsin(θk), …, e-j(N-1)βdsin(θk)]T s1(θk)=ν(θk)s1(θk), where (5) ν(θk)=[1, e-jβdsin(θk), e-j2βdsin(θk), …, e-j(N-1)βdsin(θk)]T, 1≤k≤P, is the so-called array steering vector, [10]. Each element of the array shown in Figure 1 simultaneously receives the signals from all directions θk, where 1≤k≤P, and a noise signal with variance σn2.
It means that the resulting column vector of complex samples of all P signals and noise, before further digital processing, can be written as (6) x=∑k=1..P xk+n=∑k=1..P ν(θk)s1(θk)+n=Vs+n, where (7) V=[ν(θ1), ν(θ2), ν(θ3), …, ν(θP)] is the N×P matrix whose columns are the steering vectors (5), s=[s1(θ1),s1(θ2),s1(θ3),…,s1(θP)]T is the P-element column vector representing the signals received by the first element of the array, and n is the N-element column vector representing the received noise signals. The correlation matrix R of the samples of the signals obtained at the outputs of the antenna system, see Figure 1, can be written in the form of a sum, using the correlation matrix of the desirable signals Rs and the noise correlation matrix Rn, [1, 10], namely, (8) R=E{xxH}=VRsVH+Rn=VRsVH+σn2I, where the operator E{} denotes the expected value, I is the N×N identity matrix, and xH is the Hermitian conjugate of the vector x. In other words, in this case, the vector xH is the N-element row vector whose elements are the complex conjugates of the corresponding elements of the column vector x. Similarly, the matrix VH is the Hermitian conjugate (conjugate transpose) of the matrix V. According to [1], the correlation matrix of the desirable signals (9) Rs=E{ssH}=diag{σ12,σ22,σ32,…,σP2} is a diagonal P×P matrix if the received desirable signals are not correlated. Under the assumption P<N, the matrix VRsVH is singular, which means that (10) det[VRsVH]=det[R-σn2I]=0. It follows from (10) that σn2 is an eigenvalue of the matrix R, [9]. The subspace in which the desirable signals are not defined has dimension N-P. Hence, σn2 is an (N-P)-fold eigenvalue of the matrix R.
The matrices R and VRsVH are nonnegative definite, and therefore, the matrix R also has P other eigenvalues λk, satisfying the condition λk>σn2>0, where 1≤k≤P. The eigenvectors qk assigned to these eigenvalues are mutually orthogonal [9, 10]. According to the general definition of the matrix eigenvalue problem, (R-λkI)qk=0, we can write (11) Rqk=[VRsVH+σn2I]qk=λkqk for 1≤k≤N, where λk>σn2>0 if 1≤k≤P and λk=σn2 if P+1≤k≤N. It follows from (11) that (12) [VRsVH]qk={(λk-σn2)qk, 1≤k≤P; 0, P+1≤k≤N}. Relation (12) shows that the N-dimensional space of signals and noise can be divided into two mutually orthogonal subspaces, that is, the subspace of signals Qs≡[q1,q2,q3,…,qP] and the subspace of noise Qn≡[qP+1,qP+2,qP+3,…,qN]. According to this partition, the correlation matrix R can be written as the following sum: (13) R=∑k=1..P(λk-σn2)qkqkH+∑k=1..N σn2qkqkH, where λk denotes the eigenvalue corresponding to the vector qk and σn2 is the variance of the white noise received from the individual antenna elements. After appropriate grouping of the factors of the sum (13), we obtain the relation in which one can distinguish P eigenvectors representing the desirable signals and N-P eigenvectors belonging to the noise subspace: (14) R=∑k=1..P λkqkqkH+∑k=P+1..N σn2qkqkH. Each of the vectors (4) containing complex samples of a desirable signal belongs to the signal subspace Qs, and therefore, it can be written in the form of a sum of eigenvectors defined in this subspace, namely, (15) xk=ν(θk)s1(θk)=∑i=1..P biqi, where bi is a coefficient of suitable value. It should be pointed out that each component qi of the vector (15) is orthogonal with respect to an arbitrary eigenvector qm from the noise subspace Qn. Consequently, the whole vector (15) is orthogonal to qm. This unique property can be expressed as follows: (16) xkHqm=[ν(θk)s1(θk)]Hqm=s1H(θk)[ν(θk)]Hqm=0 for 1≤k≤P<m≤N.
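The eigenvalue structure implied by (11), with σn² appearing N−P times, can be checked numerically; the values N=8, P=1, σn²=0.1, the 10° source direction, and half-wavelength spacing (βd=π) are illustrative assumptions:

```python
import numpy as np

# R = V Rs V^H + sigma_n^2 I for one source (P = 1) at 10 degrees,
# half-wavelength spacing (beta*d = pi); N = 8 elements.
N, P, s2 = 8, 1, 0.1
v = np.exp(-1j * np.pi * np.arange(N) * np.sin(np.radians(10.0)))
R = np.outer(v, v.conj()) + s2 * np.eye(N)
lam = np.linalg.eigvalsh(R)              # ascending order
print(np.allclose(lam[: N - P], s2))     # True: (N-P)-fold noise eigenvalue
print(np.isclose(lam[-1], N + s2))       # True: signal eigenvalue N + sigma^2
```

The N−P smallest eigenvalues all equal the noise variance, and the eigenvectors attached to them span the noise subspace Qn used in (16).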
Applying equation (16) to all eigenvectors of the noise subspace Qn, we find that the dot product of the vector ν(θk), representing the signal received from the direction θk, and the sum of the eigenvectors from the noise subspace Qn will also take a value close to zero. In the ideal case (exactly defined correlation matrix R, exactly evaluated eigenvalues and their corresponding eigenvectors, and precise partition onto the vectors from the subspaces of signal and noise), we have (17) νH(θk)∑m=P+1..N qm=0. Using the property described by (17), the following estimate of the spectral power density of the signal can be formulated: (18) P̂(θ)=1/∑k=P+1..N|νH(θ)qk|2=1/∑k=P+1..N νH(θ)qkqkHν(θ). This estimate is usually called the spectrum of the MUSIC method, [1, 6]. Placing all eigenvectors qk from the noise subspace into the columns of the matrix QN, spectrum (18) can be written in the equivalent, simpler form (19) P̂(θ)=1/[νH(θ)QNQNHν(θ)]. The function P̂(θ) attains local maximum values for angles θk determining the directions of arrival of the signals being received. ## 3. The Root-MUSIC Method Determination of angular positions θk on the basis of spectrum (18) or (19) requires performing calculations for a great number of discrete values of the angle θ and then determining all its maximum values in the given, relatively large scanning range θmin≤θ≤θmax. This task is especially laborious and time consuming when an angular resolution of the order of one-tenth or one-hundredth of a degree is required. Therefore, in order to reduce the amount of calculations, a modified version of the MUSIC method, called root-MUSIC, has been elaborated. In this improved version, the problem of evaluation of the local maximum values of function (19) is replaced by the problem of finding the roots of the polynomial νH(θ)QNQNHν(θ), from which the angles θk are recovered. Estimated values of angular coordinates of objects can be evaluated for the assumed number of roots, which should be equal to the number of received desirable signals multiplied by 2.
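A compact sketch of the pseudospectrum (19); the single-source correlation matrix, noise variance, scan grid, and half-wavelength spacing (βd=π) are illustrative assumptions:

```python
import numpy as np

def music_spectrum(R, P, thetas_deg, beta_d=np.pi):
    """Pseudospectrum (19): P_hat(theta) = 1 / (v^H Qn Qn^H v)."""
    N = R.shape[0]
    _, Q = np.linalg.eigh(R)             # eigenvalues in ascending order
    Qn = Q[:, : N - P]                   # noise-subspace eigenvectors
    out = []
    for th in np.radians(thetas_deg):
        v = np.exp(-1j * beta_d * np.arange(N) * np.sin(th))
        a = Qn.conj().T @ v
        out.append(1.0 / np.real(a.conj() @ a))
    return np.array(out)

# one source at 10 degrees, N = 8 elements, noise variance 0.1
N = 8
v = np.exp(-1j * np.pi * np.arange(N) * np.sin(np.radians(10.0)))
R = np.outer(v, v.conj()) + 0.1 * np.eye(N)
grid = np.arange(-30.0, 30.5, 0.5)
theta_hat = grid[np.argmax(music_spectrum(R, 1, grid))]
print(theta_hat)                         # 10.0
```

The grid search over θ is exactly the laborious step that root-MUSIC, described next, replaces by polynomial rooting.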
This number is usually determined by means of special criteria, among which the best known are AIC (Akaike information criterion) and MDL (minimum description length), [11]. The denominator of function (19) is in general a polynomial, which can be written as (20) νH(θ)QNQNHν(θ)=νH(θ)Pν(θ)=C(z)=∑n=-N+1..N-1 cnzn, where P=QNQNH and z=exp[-jβdsin(θ)]. According to this definition, (21) P=QNQNH=[pkl], 1≤k,l≤N, is a Hermitian matrix of degree N. The coefficients cn of the polynomial (20) can be determined by summing the elements pkl of the matrix P placed on its nth diagonal, namely, (22) cn=(1/N)∑l-k=n pkl. According to this formula, (23) c0=(p11+p22+p33+⋯+pN-1,N-1+pN,N)/N, c1=(p12+p23+p34+⋯+pN-2,N-1+pN-1,N)/N, c2=(p13+p24+p35+⋯+pN-3,N-1+pN-2,N)/N, …, cN-1=(p1,N)/N. As was mentioned earlier, P is a Hermitian matrix, and therefore, the coefficients of polynomial (20) with indexes -n and n are mutually conjugate, so c-n=cn* for 1≤n≤N-1. After multiplication by zN-1, the equation C(z)=0 becomes a polynomial equation of degree 2(N-1) with 2(N-1) roots, and to each root zn there corresponds another root 1/zn*. These roots are most frequently evaluated by means of the companion matrix method, [12, 13]. It follows from the literature that this method ensures sufficient accuracy of calculations for all roots, even when the polynomial C(z)=0 is of relatively high degree, for instance, 2(N-1)=62. Due to this valuable property, the companion matrix method has been implemented in the computational environment MATLAB. Estimates of the P angular positions of the objects being detected are evaluated on the basis of the 2P roots situated nearest to the unit circle on the complex plane z=Re(z)+jIm(z), namely, on the basis of P pairs (zn, 1/zn*). Of course, each pair chosen in this manner determines only one location. With negligibly small power of the noise, σn2≈0, the roots lie exactly on the unit circle mentioned above.
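The coefficient construction (22)-(23) and the root selection can be sketched as follows; the single-source setup and βd=π are illustrative assumptions, and `np.roots` internally uses the companion matrix method mentioned above:

```python
import numpy as np

def root_music_coeffs(Qn):
    """Coefficients of z^(N-1)*C(z): each c_n sums the nth diagonal of
    P = Qn Qn^H and divides by N, as in (22)-(23); descending powers."""
    Pm = Qn @ Qn.conj().T
    N = Pm.shape[0]
    return np.array([np.trace(Pm, offset=n) / N for n in range(N - 1, -N, -1)])

# single source at 10 degrees, N = 8 elements, beta*d = pi
N, P = 8, 1
v = np.exp(-1j * np.pi * np.arange(N) * np.sin(np.radians(10.0)))
R = np.outer(v, v.conj()) + 0.1 * np.eye(N)
_, Q = np.linalg.eigh(R)
roots = np.roots(root_music_coeffs(Q[:, : N - P]))
z = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][: 2 * P]  # nearest unit circle
theta_hat = np.degrees(np.arcsin(-np.angle(z[0]) / np.pi))   # relation (24)
print(round(theta_hat, 3))               # ≈ 10.0
```

The 2P near-unit-circle roots form the conjugate-reciprocal pairs (zn, 1/zn*) described in the text, so either member of a pair yields the same angle estimate through (24).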
From the substitution z=exp[-jβdsin(θ)] introduced above, it follows that estimates of angular positions θ̂n are (24)θ̂n=arcsin⁡[-1βdarg⁡(zn)], where 1≤n≤P,β=2π/λ and d is the antenna element spacing. ## 4. Application of Music and Root-Music Methods to the Estimation of Angular Coordinates of Moving Objects As was already mentioned in the introduction, exact estimation of the angular coordinates of moving objects by means of monopulse methods is in many cases impossible because of their limited angular resolution. For this reason, they cannot distinguish the objects illuminated by the same antenna beam and situated at the same slant distance. In order to compare angular resolution of the monopulse, MUSIC and root-MUSIC methods, some computer simulations have been carried out. In these simulations, angular coordinates of two objects (planes) moving with the speed ofv=100m/s have been evaluated. It has been assumed also that both planes are moving at 45° with respect to the north direction.The position of the first plane is determined by constant angleθ1=-14.1°, while the second plane changes its position θ2 gradually from -14.1° to -10.1° preserving the course and speed. In other words, angular separation Δθ=|θ1-θ2| between these planes (objects) changes in the range of Δθ=0°÷4°. A limited number of pulses radiated by the radar in given direction and consequently limited number of signals vectors x(e) cause that estimate of correlation matrix R̂ is inaccurate and have to be modified using diagonal loading technique before eigendecomposition and estimation of angular coordinates by means of subspace methods (25)R̂=1N∑n=1Nx(e)x(e)H+δ⋅I, where δ=4σn2- loading factor.The value of mean pulse repetition interval (mean PRI) has been set toTp=Tp-mean=2ms. 
The real PRI in ith burst is equal to 0.9·Tp-mean≤Tpi≤Tp-mean and is modified in each burst in order to avoid blind Doppler frequencies (Tp1=1.0·Tp-mean,Tp2=0.95·Tp-mean,Tp2=0.9·Tp-mean).The mean square errors of estimates of angular coordinates of both objects obtained by means of the monopulse method are illustrated in Figure3.Figure 3 Mean square errors of estimates of angular coordinates of two objects determined by means of the amplitude monopulse method.As it is seen, the second plane changing its angular position causes estimation errors of angular coordinates of the first of them. This effect can be eliminated in some extent by means of the additional Doppler filtration. This filtration allows to attenuate echo signals coming from the second plane when both signals have substantially different Doppler frequency shifts normalized with respect to pulse repetition frequency (PRF)Fp. The change of angular position of the second plane causes observable change of its radial speed, understood as the plane speed component with respect to the radar station and consequently causes the change of its Doppler frequency shift. The results of simulations have shown that for relatively small velocities (v=100m/s), the maximal difference of Doppler frequencies Δfd=fd1-fd2 is comparable with −3 dB bandwith of Doppler filter B-3dB=60Hz and is smaller than its −18 dB bandwith B-18dB=120Hz. The Doppler filters have the sidelobes level located at −18 dB. Therefore, if echo of the first object attain its maximal value in nth Doppler filter, Doppler frequency of echo of the second object is not located in the stopband of this filter and cannot be sufficiently attenuated. The difference of Doppler frequencies Δfd is proportional to the speed of both objects, and at some point, echoes of these objects can be separated using Doppler filtration. 
Unfortunately, at higher speeds (v=1000m/s) these echos may have similar normalized Doppler frequenciesfd-norm=fd1%Fp=fd2%Fp despite the fact that their Doppler frequencies are different fd1=fd-norm+n·Fp,fd2=fd-norm+m·Fp. This effect causes errors of estimation of angular coordinates of the objects being detected.The disadvantageous effect under discussion is well illustrated by the simulation results shown in Figure4. These simulations have been performed for the planes, the speed of which has been increased to v=1000m/s. The pulse repetition interval (PRI) has been also increased to Tp=4ms in order to lower the PRF and increase number of similar normalized Doppler frequencies.Figure 4 Mean square errors of estimates of angular coordinates of two objects determined by means of the amplitude monopulse method and Doppler filtration (v=1000m/s, Tp=4ms).From the results presented above, it follows that the additional Doppler filtration is not an universal solution ensuring proper distinction of the moving objects at all times.The signal to noise ratioS/Nnoise, given in Figures 3 and 4, determines the value of this parameter on the outputs of matched filters MF, see Figure 1. The power ratio of desired signal and jamming signal S/Njamm is calculated for the inputs of pulse compression blocks. This ratio has been defined before digital compression, because the noise signal can be differently correlated to the radiated LFM pulse. The second, very important advantage of MUSIC and root-MUSIC methods is their sufficient immunity to relatively high power jamming signals. In order to confirm this conclusion, the additional narrowband strong signal, situated in the direction defined by θjamm=1.4° has been introduced. The surface power density of this signal is 60 dB greater than the corresponding surface power density of both desirable radar signals. 
In this situation, radar systems with receiving antennas not adapted to the interferences will be jammed, and eventual detections may have improper estimates of angular coordinates. Mean square errors of such improper estimates of angular coordinates are illustrated in Figure 5. Of course, the negative influence of a single strong jamming signal can be decreased by using a receiving antenna in the form of a multielement array, which can be adapted to this undesirable signal [14, 15].

Figure 5 Mean square errors of angular coordinates of two objects obtained by using the monopulse method and Doppler filtration in the presence of the jamming signal (v = 100 m/s).

In other words, the jamming signal should be attenuated by the array antenna to the highest possible degree. The simulation results presented in Figure 6 confirm the effectiveness of this approach to the radiolocation problem under consideration.

Figure 6 Mean square errors of angular coordinates of two objects obtained using the monopulse method with Doppler filtration in the presence of the jamming signal attenuated by the adaptive receiving antenna (v = 100 m/s, Ljam = 98.6 dB).

Next, the same radiolocation problem has been solved by using the MUSIC and root-MUSIC methods. 
Figures 7 and 9 show mean square errors of estimates obtained by means of the MUSIC method for S/Nnoise = 0 dB and S/Nnoise = 30 dB in the presence of the jamming signal defined above.

Figure 7 Mean square errors of angular coordinates of two objects obtained using the MUSIC method in the presence of the jamming signal, S/Nnoise = 0 dB (v = 100 m/s).

Similarly, the corresponding mean square errors of estimates evaluated using the root-MUSIC method for S/Nnoise = 0 dB, S/Nnoise = 30 dB, and the same jamming signal are shown in Figures 8 and 10.

Figure 8 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method in the presence of the jamming signal, S/Nnoise = 0 dB (v = 100 m/s).

Figure 9 Mean square errors of angular coordinates of two objects obtained using the MUSIC method in the presence of the jamming signal, S/Nnoise = 30 dB (v = 100 m/s).

Figure 10 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method in the presence of the jamming signal, S/Nnoise = 30 dB (v = 100 m/s).

The values of angular resolution evaluated for these methods are given correspondingly in Tables 1 and 2.

Table 1 Angular resolution evaluated for MUSIC method.

| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.45 | 0.188 | 0 | no jamming |
| 2.05 | 0.094 | 0 | −30 |
| 1.35 | 0.170 | 0 | −60 |
| 1.0 | 0.145 | 6 | no jamming |
| 1.575 | 0.064 | 6 | −30 |
| 1.025 | 0.106 | 6 | −60 |
| 0.725 | 0.105 | 10 | no jamming |
| 1.275 | 0.059 | 10 | −30 |
| 0.75 | 0.121 | 10 | −60 |
| 0.475 | 0.048 | 20 | no jamming |
| 0.5 | 0.213 | 20 | −30 |
| 0.5 | 0.041 | 20 | −60 |
| 0.3 | 0.033 | 30 | no jamming |
| 0.275 | 0.147 | 30 | −30 |
| 0.3 | 0.034 | 30 | −60 |

Table 2 Angular resolution evaluated for root-MUSIC method. 
| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.1 | 0.202 | 0 | no jamming |
| 1.1 | 0.208 | 0 | −30 |
| 1.075 | 0.206 | 0 | −60 |
| 0.45 | 0.180 | 6 | no jamming |
| 0.475 | 0.201 | 6 | −30 |
| 0.5 | 0.210 | 6 | −60 |
| 0.325 | 0.206 | 10 | no jamming |
| 0.325 | 0.215 | 10 | −30 |
| 0.35 | 0.195 | 10 | −60 |
| 0.2 | 0.220 | 20 | no jamming |
| 0.225 | 0.133 | 20 | −30 |
| 0.25 | 0.137 | 20 | −60 |
| 0.125 | 0.188 | 30 | no jamming |
| 0.15 | 0.113 | 30 | −30 |
| 0.125 | 0.152 | 30 | −60 |

For comparison, Tables 3 and 4 contain the values of angular resolution obtained when the MUSIC and root-MUSIC methods determine angular coordinates on the basis of signal samples received by the real 32-element antenna array.

Table 3 Angular resolution evaluated for MUSIC method (32-element array).

| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.7 | 0.139 | 0 | no jamming |
| 2.125 | 0.080 | 0 | −30 |
| 2.05 | 0.107 | 0 | −60 |
| 1.1 | 0.134 | 6 | no jamming |
| 1.15 | 0.122 | 6 | −30 |
| 1.2 | 0.111 | 6 | −60 |
| 0.9 | 0.121 | 10 | no jamming |
| 0.95 | 0.109 | 10 | −30 |
| 0.925 | 0.096 | 10 | −60 |
| 0.475 | 0.072 | 20 | no jamming |
| 0.475 | 0.071 | 20 | −30 |
| 0.5 | 0.060 | 20 | −60 |
| 0.35 | 0.038 | 30 | no jamming |
| 0.325 | 0.050 | 30 | −30 |
| 0.35 | 0.045 | 30 | −60 |

Table 4 Angular resolution evaluated for root-MUSIC method (32-element array).

| Angular resolution, degrees (estimation error < 0.23°) | Maximum value of estimation error, degrees | S/Nnoise, dB | S/Njamm, dB |
|---|---|---|---|
| 1.7 | 0.098 | 0 | no jamming |
| 2.125 | 0.067 | 0 | −30 |
| 2.05 | 0.081 | 0 | −60 |
| 0.85 | 0.114 | 6 | no jamming |
| 0.875 | 0.117 | 6 | −30 |
| 0.875 | 0.097 | 6 | −60 |
| 0.475 | 0.094 | 10 | no jamming |
| 0.45 | 0.131 | 10 | −30 |
| 0.475 | 0.104 | 10 | −60 |
| 0.275 | 0.123 | 20 | no jamming |
| 0.275 | 0.130 | 20 | −30 |
| 0.275 | 0.124 | 20 | −60 |
| 0.175 | 0.091 | 30 | no jamming |
| 0.175 | 0.094 | 30 | −30 |
| 0.175 | 0.093 | 30 | −60 |

In other words, in this process the stage of initial preprocessing (see the introduction) has been omitted. Comparing the results given in Tables 1 and 2, as well as in Tables 3 and 4, we can see that the proposed initial preprocessing, see relation (1), gives a significant improvement of precision and angular resolution. According to these results, the use of initial preprocessing permits the S/Nnoise ratio to be reduced by about 6 dB while conserving the necessary precision and resolution. 
Thus, the error magnitudes given in Tables 1 and 2 seem to be acceptable for most similar radiolocation problems encountered in practice. As mentioned at the beginning of the paper, all simulations have been carried out assuming that the PRI changes in each burst. The following values of PRI have been used:

(26) Tp1 = 1.00·Tp-mean, Tp2 = 0.95·Tp-mean, Tp3 = 0.90·Tp-mean.

Estimation of the correlation matrix on the basis of signal vectors received for 3 different intervals allows the MUSIC and root-MUSIC methods to estimate the correct values of angular coordinates despite the fact that the signals could have the same or very close normalized Doppler frequency, fd-norm = fd % Fp, and be correlated for a given PRI. This technique is well known in the radar literature, but it has mainly been used to avoid blind speeds. The presented simulations show that it also helps to mitigate the problem of estimation of the angular coordinates of closely spaced, highly correlated signals. When objects are moving with very high velocities (v = 1000 m/s), the estimates of their angular coordinates can be false when the emitted signals have the same PRI in each burst, as shown in Figures 11 and 12.

Figure 11 Values of normalized Doppler frequencies of two objects moving with speed v = 1000 m/s (Tp = 2 ms).

Figure 12 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method (S/Nnoise = 0 dB, v = 1000 m/s, Tp = 2 ms).

This effect is especially apparent when the PRF is relatively small, for instance when Fp = 250 Hz (Tp = 4 ms). 
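Why the stagger in (26) helps can be sketched numerically: a pair of true Doppler shifts that alias onto the same normalized frequency for one PRI separate once the PRI (and hence Fp = 1/Tp) changes from burst to burst. The Doppler values below are hypothetical assumptions for illustration; only the stagger factors and Tp-mean = 4 ms follow the text.

```python
# Sketch of PRI stagger: the same two true Doppler shifts that coincide
# modulo Fp for the first burst separate for the staggered bursts.
# fd1 and fd2 are hypothetical.

def normalized_doppler(fd_hz: float, prf_hz: float) -> float:
    """Fold a true Doppler shift into the unambiguous interval [0, PRF)."""
    return fd_hz % prf_hz

Tp_mean = 4e-3                    # mean PRI, s
stagger = [1.00, 0.95, 0.90]      # burst-to-burst PRI factors, as in (26)
fd1, fd2 = 1060.0, 1310.0         # hypothetical true Doppler shifts, Hz

for k in stagger:
    Fp = 1.0 / (k * Tp_mean)      # PRF for this burst
    print(k,
          round(normalized_doppler(fd1, Fp), 1),
          round(normalized_doppler(fd2, Fp), 1))
```

For k = 1.00 the two normalized frequencies coincide (both 60.0 Hz), while for k = 0.95 and k = 0.90 they differ, so combining correlation-matrix estimates over the three bursts decorrelates the echoes.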
The values of normalized Doppler frequencies and the mean square errors of angular coordinates obtained for this case are illustrated in Figures 13 and 14.

Figure 13 Values of normalized Doppler frequencies of two objects moving with speed v = 1000 m/s (Tp = 4 ms).

Figure 14 Mean square errors of angular coordinates of two objects obtained using the root-MUSIC method (S/Nnoise = 0 dB, v = 1000 m/s, Tp = 4 ms).

Further study of this problem and comparison of this approach with well-known signal decorrelation techniques, such as spatial smoothing or redundancy averaging, is beyond the scope of this paper.

## 5. Conclusions

The radiolocation problem defined at the beginning of the introduction and in Section 4 has been subsequently solved by different methods, that is, the amplitude monopulse method, the amplitude monopulse method aided by coherent Doppler filtration, MUSIC, and root-MUSIC. The diagrams shown in Figures 3 through 10 illustrate the results of the computer simulations obtained by means of these methods. On the basis of the diagrams shown in Figure 3, we can deduce that the traditional amplitude monopulse method is inadequate for this purpose. Some improvement can be obtained using additional coherent Doppler filtration. Unfortunately, this solution is not always effective, because false estimates can appear in cases when echo signals after the Doppler filtration [8] attain similar values in the same frequency channel or have similar normalized Doppler frequencies. This effect is illustrated by the diagram shown in Figure 4. Partial elimination of this undesirable effect is possible using multiburst radar signals with variable pulse repetition time (frequency), similar to that shown in Figure 2. The most effective and sufficiently precise methods for these applications (see Tables 1 and 2) proved to be MUSIC and root-MUSIC. 
This conclusion is well justified by the corresponding simulation results illustrated in Figures 7 and 10. The radiolocation problem under consideration becomes especially difficult to solve after introducing strong narrowband jamming interference. In the paper, it has been assumed that this jamming signal arrives from the direction θjamm = 1.4° and that its power density is 60 dB greater than the corresponding surface power density of both desirable radar signals. The influence of this jamming signal on the results of similar simulations performed using the MUSIC and root-MUSIC methods is illustrated by the diagrams shown in Figures 7 and 10. The presented results confirm that the MUSIC and root-MUSIC methods are suitable for the effective solution of radiolocation problems similar to that discussed here. Moreover, they show that these methods can be treated as reliable for radar signals whose signal-to-noise power ratio is not less than the signal-to-noise ratio at which the signal surpasses the detection threshold. It has also been confirmed that the proposed initial preprocessing, see (1), makes possible a significant improvement of the precision and angular resolution of the MUSIC and root-MUSIC methods. Consequently, they can be applied to relatively weak radar signals, for which S/Nnoise ≈ 0 dB. All simulation results presented in this paper have been obtained under the assumption that the number P of objects being detected is known exactly. This number is required for the appropriate partition of the space spanned by the correlation matrix R into the useful signal subspace Qs and the noise subspace Qn. Of course, it is not easy to determine this number for the majority of quasireal radiolocation scenarios, and for this reason, it can be a source of potential errors. Thus, an additional algorithm determining this number with the highest possible precision, according to the AIC or MDL criterion, is required.
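The MDL step mentioned above can be sketched as follows. This is a generic minimum description length (Wax-Kailath form) estimator of the source count P from the eigenvalues of the sample correlation matrix R, not the authors' implementation; the eigenvalues and snapshot count below are hypothetical.

```python
# Minimal MDL sketch: choose the P for which the M - P smallest eigenvalues
# of R are best modeled as equal (a flat noise floor), with an MDL penalty.
import math

def mdl_source_count(eigvals, n_snapshots):
    """Return the P minimizing the MDL criterion (Wax & Kailath form)."""
    lam = sorted(eigvals, reverse=True)
    M = len(lam)
    best_p, best_cost = 0, float("inf")
    for p in range(M):
        noise = lam[p:]
        geo = math.exp(sum(math.log(x) for x in noise) / len(noise))
        arith = sum(noise) / len(noise)
        # Log-likelihood term (geometric vs arithmetic mean of the assumed
        # noise eigenvalues) plus the MDL penalty on free parameters.
        cost = (-n_snapshots * (M - p) * math.log(geo / arith)
                + 0.5 * p * (2 * M - p) * math.log(n_snapshots))
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p

# Two strong signal eigenvalues above a flat noise floor -> P = 2.
eigs = [9.0, 6.5, 1.02, 0.99, 1.01, 0.98, 1.0, 1.0]
print(mdl_source_count(eigs, n_snapshots=200))   # 2
```

With well-separated signal eigenvalues, MDL and AIC typically agree; MDL is consistent as the snapshot count grows, which is why it is often preferred in this role.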
## 6. Summary

The main subject of consideration is the MUSIC and root-MUSIC methods used for estimation of the angular coordinates (directions of arrival) and the angular distance of two moving objects in the presence of an uncorrelated noise signal and an external, relatively strong narrowband jamming interference. A 32-element uniform linear array (ULA) is used as the receiving antenna. Extensive computer simulations have been carried out to demonstrate the sufficient accuracy and good spatial resolution properties of these methods in the scenario defined above. It is also shown that using the proposed initial preprocessing, we can increase the accuracy and angular resolution of the methods under discussion. Most of the simulation results, presented mainly in graphical form, have been compared with the corresponding simulation results obtained by using the traditional amplitude monopulse method and the amplitude monopulse method aided by coherent Doppler filtration. --- *Source: 101582-2011-06-08.xml*
2011
# Synthesis and Characterization of ZnO-ZrO2 Nanocomposites for Photocatalytic Degradation and Mineralization of Phenol

**Authors:** M. C. Uribe López; M. A. Alvarez Lemus; M. C. Hidalgo; R. López González; P. Quintana Owen; S. Oros-Ruiz; S. A. Uribe López; J. Acosta
**Journal:** Journal of Nanomaterials (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1015876

---

## Abstract

ZnO-ZrO2 nanocomposites were synthesized through the sol-gel method using zinc (II) acetylacetonate and different ZnO contents (13, 25, 50, and 75% mol). The synthesis process strongly influenced the nanocomposite properties, especially their structural composition. The obtained ZnO-ZrO2 nanomaterials presented a tetragonal crystalline structure for zirconia, whereas a hexagonal one was formed for ZnO. Raman spectroscopy and XRD patterns confirmed the formation of tetragonal zirconia, whereas inhibition of the monoclinic structure was observed. Addition of ZnO affected the pore size distribution of the composite, and the measured specific surface areas ranged from 10 m2/g (pure ZnO) to 46 m2/g (pristine ZrO2). The Eg values of ZrO2 were modified by ZnO addition, since the values calculated using the Kubelka-Munk function varied from 4.73 to 3.76 eV. The morphology and size of the nanomaterials, investigated by electron microscopy, showed the formation of nanorods for ZnO with sizes ranging from 50 nm to 300 nm, while zirconia was formed by smaller particles (less than 50 nm). The main advantage of using the nanocomposite for the photocatalytic degradation of phenol was the mineralization degree, since the 75ZnO-ZrO2 nanocomposite surpassed the mineralization reached by pure ZnO and also inhibited the formation of undesirable intermediates.

---

## Body

## 1. Introduction

Zirconium oxide (ZrO2), known as zirconia, is an interesting material due to its application in various heterogeneous photochemical reactions. 
ZrO2 is an n-type semiconductor with a wide band gap energy between 5.0 and 5.5 eV [1]. Because of this, ZrO2 requires UV-C light (<280 nm) to be excited and generate electron-hole pairs [2]. A strategy to overcome this is doping ZrO2 with different transition metal ions or coupling it with other metal oxides with dissimilar band edges [3]. Composites made of two metal oxides have attracted much attention in many studies because they possess improved physicochemical properties compared with the pure oxides. Usually, composites enhance photocatalytic activity [4, 5], produce new crystallographic phases with quite different properties from the original oxides, create defect energy levels in the band gap region [6], change the surface characteristics of the individual oxides due to the formation of new sites at the interface between the components [7], and also increase the stability of a photoactive crystalline phase [8]. In order to enhance the optical properties of ZrO2, several semiconductors such as SiO2, TiO2, ZnO, WO3, and NiO have been coupled to ZrO2. Together with TiO2, zinc oxide is one of the most investigated n-type semiconductor materials due to its low cost, easy fabrication, wide band gap, and photocatalytic activity for degrading several organic pollutants into less harmful products [9]. The main advantage of ZnO is that it absorbs a larger fraction of the solar spectrum than TiO2 [10]. The band gap of ZnO is ~3.37 eV, and its exciton-binding energy is about 60 meV [11]. Many reports have been published on the good physicochemical properties conferred by the use of ZnO in composites. For instance, a composite made of nanostructured transparent conducting metal oxides (TCMOs) such as ZnO/NiO proved an excellent candidate for acetone sensing [12]. Nanocomposites of Zn(1−x)MgxO/graphene showed an excellent performance in removing methylene blue dye under natural sunlight illumination [13]. 
TiO2/ZnO nanocomposites with different contents of ZnO showed an improvement in the degradation of the organic dyes brilliant green and methylene blue under solar light irradiation [14]. A ZnO/TiO2 photocatalyst exhibited much higher photocatalytic activity than pure TiO2, ZnO, and P-25 in the degradation of 4-chlorophenol under low UV irradiation [15]. Composites of ZnO/Ag2CO3/Ag2O demonstrated a potential effect in the photodegradation of phenol under visible light irradiation due to facilitated charge transfer and suppressed recombination of photogenerated electrons and holes [16]. Recently, composites of ZrO2 with ZnO have attracted much attention because of their excellent properties as semiconductor materials, especially for the degradation of recalcitrant organic pollutants. The enhancement in photocatalytic activity of ZnO-ZrO2 composites has been associated with changes in their structural, textural, and optical properties, such as surface area, particle size, formation of a specific crystalline phase, and low band gap energy [4, 17, 18]. In addition, improved electron-hole pair separation enhances the photocatalytic efficiency. Under illumination, both semiconductors of the nanocomposite are simultaneously excited; the electrons transfer to the lower-lying conduction band of one semiconductor, while holes move to the less anodic valence band. Sherly et al. [19] attributed the efficiency of the Zn2Zr (ZnO and ZrO2 in 2 : 1 ratio) photocatalyst in the degradation of 2,4-dichlorophenol to its good stability and the efficient separation of photogenerated electron-hole pairs. Aghabeygi and Khademi-Shamami [4] stated that the good performance of the 1 : 2 molar ratio ZrO2 : ZnO photocatalyst in the degradation of Congo Red dye could be due to a decrease in the rate of hole-electron pair recombination when the excitation takes place with energy lower than Eg. 
Besides, they proposed that ZnO could increase the concentration of free electrons in the CB of ZrO2 by reducing charge recombination in the process of electron transport. Gurushantha et al. [20] demonstrated a photocatalytic enhancement in the ZrO2/ZnO (1 : 2) nanocomposite for the degradation of acid orange 8 dye under UV light irradiation (254 nm). They observed that the reduction of the energy gap, the increase of the density of states, and the stability of the composite increased the photocatalytic efficiency. In this work, we investigated the effect of ZnO on the photocatalytic properties of ZnO-ZrO2 nanocomposites obtained by the sol-gel method in the photodegradation of phenol in water under UV-A irradiation.

## 2. Materials and Methods

### 2.1. Reagents

Zirconium (IV) butoxide (80 wt. % in 1-butanol), phenol (ReagentPlus ≥99%), and zinc acetylacetonate hydrate were purchased from Sigma-Aldrich; hydrochloric acid (36.5–38%) was obtained from Civeq (México). In all cases, deionized water was used.

### 2.2. Synthesis of ZrO2

81.2 mmol of zirconium (IV) butoxide were added dropwise to 48.9 mL of a deionized water and ethanol mixture (1 : 8) preheated at 70°C. Before addition of the alkoxide, the pH was adjusted to 3 with hydrochloric acid (2.5 M). The white suspension was kept at 70°C with continuous stirring and reflux for 24 h. The gel was dried at 70°C for 8 h afterwards. Finally, the obtained powder was ground and then calcined at 500°C for 4 h.

### 2.3. Synthesis of ZnO

11.38 mmol of zinc acetylacetonate hydrate (powder, Sigma-Aldrich) were added into 50 mL of ethanol (96%, Civeq) previously heated at 70°C and adjusted to pH 3 with hydrochloric acid (2.5 M; 36.5–38%, Civeq) for 30 min. The suspension was stirred for 4 h at 70°C and then aged for 24 h under continuous agitation. Later, the resulting gel was washed several times with ethanol and deionized water. 
Finally, the white powders were dried at 70°C for 6 h, ground, and then calcined at 500°C for 4 h.

### 2.4. Synthesis of ZnO-ZrO2 Nanocomposites

Different molar percentages (13%, 25%, 50%, and 75%) of ZnO were incorporated into ZrO2, and the samples were named 13ZnO-ZrO2, 25ZnO-ZrO2, 50ZnO-ZrO2, and 75ZnO-ZrO2. The photocatalysts were prepared as follows: the appropriate amount of zinc acetylacetonate hydrate was dissolved in 50 mL of ethanol previously heated at 70°C and adjusted to pH 3 (HCl, 2.5 M). ZrO2 sols were prepared separately as previously described, but just before the addition of half of the total amount of alkoxide was completed, the corresponding ZnO sol was incorporated into the mixture, followed by dropwise addition of the rest of the zirconium alkoxide. The mixture was kept under vigorous stirring and reflux at 70°C for 24 h. The obtained gels were washed several times with ethanol and deionized water, then dried, ground, and calcined at 500°C for 4 h. The proposed reactions are presented in Supplementary Materials (Figures S1, S2, and S3).

### 2.5. Characterization of the Nanocomposites

X-ray diffraction (XRD) patterns were obtained on a Bruker D8 Advance diffractometer using CuKα radiation (1.5418 Å) in the 2θ scan range of 10–90°. The average crystallite size of the samples was estimated using the Debye-Scherrer equation (equation (1)). 
(1) D = 0.89λ / (β cos θ),

where λ is the wavelength of the CuKα radiation, β is the peak width at half maximum (in radians), and θ is the diffraction angle. To calculate the percentage of monoclinic and tetragonal phases of pure ZrO2, we used the monoclinic phase fraction Xm and the following equation described by Garvie and Nicholson [21]:

(2) Xm = [Im(−111) + Im(111)] / [Im(−111) + Im(111) + It(101)] × 100,

where Im and It represent the integral intensities of the monoclinic (111) and (−111) and the tetragonal (101) peaks. Raman spectroscopy was performed with an XploRA PLUS Raman system (HORIBA) with a CCD detector, an optical microscope (Olympus BX), and a solid-state laser (532 nm/25 mW). Fourier transform infrared (FT-IR) spectra were collected on a Shimadzu IRAffinity-1 spectrophotometer. The powders (<5 wt%) were pressed together with KBr (J.T.Baker, infrared grade) into 100 mg wafers. Diffuse reflectance UV-Vis spectroscopy (UV-Vis/DRS) was carried out on a Shimadzu UV-2600 spectrophotometer equipped with an integrating sphere accessory and BaSO4 as reference (99% reflectance). The band-gap (Eg) values were calculated using the Kubelka-Munk function, F(R), by the construction of a Tauc plot: (F(R)·hν)^2 or (F(R)·hν)^1/2 versus energy (eV), for a direct and an indirect allowed transition, respectively. The BET specific surface areas (SBET) and pore volumes (BJH method) of the samples were determined from N2 adsorption-desorption isotherms at 77 K using a Quantachrome Autosorb 3B instrument. Degasification of the samples was performed at 100°C for 12 h. The surface morphology of the materials was analyzed by field emission scanning electron microscopy (FESEM) using a Hitachi S-4800 microscope, whereas high-resolution transmission electron microscopy (HRTEM) was performed in a JEOL JSM-2100 electron microscope operated at 200 kV, with 0.19 nm resolution.

### 2.6. Photocatalytic Activity

Synthesized particles were tested in the photodegradation of phenol. 
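Equations (1) and (2) above are straightforward to evaluate numerically. In this sketch only the Scherrer constant 0.89 and λ = 1.5418 Å come from the text; the FWHM and the integral intensities are hypothetical inputs for illustration.

```python
# Numerical sketch of equations (1) and (2): Debye-Scherrer crystallite size
# and the Garvie-Nicholson monoclinic fraction. Input values are hypothetical.
import math

CU_KALPHA_NM = 0.15418   # CuKα wavelength, nm (1.5418 Å)

def scherrer_size_nm(beta_deg: float, two_theta_deg: float,
                     wavelength_nm: float = CU_KALPHA_NM) -> float:
    """Crystallite size D = 0.89·λ / (β·cosθ), with β the FWHM in radians."""
    beta = math.radians(beta_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return 0.89 * wavelength_nm / (beta * math.cos(theta))

def monoclinic_fraction(i_m_m111: float, i_m_111: float,
                        i_t_101: float) -> float:
    """Xm (%) from integral intensities of m(−111), m(111), and t(101) peaks."""
    num = i_m_m111 + i_m_111
    return 100.0 * num / (num + i_t_101)

# Hypothetical FWHM of 0.6° for the t(101) reflection at 2θ = 30.2°.
print(round(scherrer_size_nm(0.6, 30.2), 1))              # 13.6
# Hypothetical integral intensities giving a monoclinic-rich mixture.
print(round(monoclinic_fraction(300.0, 280.0, 400.0), 1)) # 59.2
```

Note that β must be the instrument-corrected FWHM in radians; feeding the raw width in degrees is a common source of order-of-magnitude errors in reported crystallite sizes.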
The photocatalytic study was carried out in a 250 mL Pyrex reactor covered with a UV-transparent Plexiglas (absorption at 250 nm); the intensity of the radiation over the suspension was 90 W/m2. For each test, 200 mg of photocatalyst were suspended in 200 mL of phenol solution (50 ppm). The suspension was magnetically stirred in the dark with an oxygen flow of 20 L/h until adsorption-desorption equilibrium was reached (ca. 20 min). Then the suspension was illuminated with an Osram Ultra-Vitalux lamp (300 W, UV-A, λ = 365 nm). Aliquots of 3 mL were taken and filtered (Millipore Millex-HV 0.45 μm) for further analysis. Variations in phenol concentration were tracked by high-performance liquid chromatography (HPLC) using an Agilent Technologies 1200 chromatograph equipped with a UV-Vis detector and an Eclipse XDB-C18 column (5 μm, 4.6 mm × 150 mm). The mobile phase was water/methanol (65 : 35) at a flow rate of 0.8 mL/min. Mineralization of phenol was measured as the total organic carbon (TOC) in a Shimadzu 5000 TOC analyzer. The percentage of mineralization was estimated using equation (3):

(3) %Mineralization = (1 − TOCfinal/TOCinitial) × 100,

where TOCinitial and TOCfinal are the total organic carbon concentrations in the media before and after the photocatalytic reaction, respectively.

## 3. Results

### 3.1. X-Ray Diffraction Analysis

X-ray patterns of the photocatalysts are depicted in Figure 1(a). All the diffractograms of the samples containing ZnO exhibited sharp and strong peaks at 31.70°, 34.30°, and 36.20° (2θ), which correspond to the (100), (002), and (101) reflections, respectively, and agree with the characteristic peaks of the ZnO wurtzite-type hexagonal crystalline structure (JCPDS 36-1451). The high intensity of the (101) peak suggests anisotropic growth and orientation of the crystals [22, 23].

Figure 1 (a) XRD pattern of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 calcined at 500°C. 
t = tetragonal and m = monoclinic ZrO2; w = wurtzite ZnO. (b) XRD patterns of the samples in the 71–77° region, used to identify the tetragonal doublets of ZrO2. (a) (b)

On the other hand, pristine ZrO2 showed broad peaks located at 28.20°, 30.20°, and 31.50° (2θ). The peak centered at 30.20° (101) is characteristic of the tetragonal crystalline phase (t-ZrO2) according to the JCPDS 79-1771 card, whereas those at 28.20° (−111) and 31.5° (111) are representative of the monoclinic phase (m-ZrO2, JCPDS 37-1484). These results indicate a mixture of tetragonal and monoclinic crystalline phases, which is commonly observed for ZrO2 materials calcined at similar temperatures [24, 25]. When ZnO was added to ZrO2, the peaks corresponding to the monoclinic phase were not observed, indicating inhibition of this phase. As the content of ZnO increases, the (101) reflection observed at 30.20° (2θ) appeared slightly shifted towards 30.38°, except for the 50ZnO-ZrO2 sample; therefore, the peaks observed for all the ZnO-ZrO2 materials are assigned to the tetragonal phase. To distinguish between the diffraction patterns of the cubic and tetragonal phases of ZrO2, the 2θ region at 71–77° was carefully examined. The asymmetric doublets at ~74° indicated the formation of tetragonal ZrO2 [26–28]. Figure 1(b) shows the tetragonal doublets for all ZnO-ZrO2 materials. Crystallite sizes and phase percentages for pure ZnO, pure ZrO2, and the ZnO-ZrO2 nanocomposites were determined using the Debye-Scherrer equation and the Garvie and Nicholson method. Because the monoclinic phase was not observed in the ZnO-ZrO2 composites, we considered only the integral intensities of the tetragonal and wurtzite peaks of ZrO2 and ZnO, respectively. The obtained values are presented in Table 1.

Table 1 Crystallite size and percentage of phase content of nanocomposites obtained from the Debye-Scherrer equation and the Garvie and Nicholson method, respectively. m represents the monoclinic and t the tetragonal phase of ZrO2. 
w indicates the wurtzite structure of ZnO.

| Sample | Crystallite size (nm) | h k l | Phase content (%) |
|---|---|---|---|
| ZnO | 33.2 | (101) | 100% |
| ZrO2 | 14.3 | (101) m | 59.7% |
|  | 15.4 | (−111) t | 40.3% |
| 13ZnO-ZrO2 | 18.2 | (101) t | 100% |
|  | — | — | — |
| 25ZnO-ZrO2 | 14.5 | (101) t | 88.0% |
|  | 44.7 | (101) w | 12.0% |
| 50ZnO-ZrO2 | 14.4 | (101) t | 82.0% |
|  | 48.3 | (101) w | 18.0% |
| 75ZnO-ZrO2 | 13.8 | (101) t | 60.9% |
|  | 44.6 | (101) w | 39.1% |

### 3.2. Raman Spectroscopy

Group theory predicts six Raman-active vibration modes for tetragonal ZrO2 (A1g + 2B1g + 3Eg) and 18 for monoclinic ZrO2 (9Ag + 9Bg), whereas for ZnO there are 4 Raman-active modes, although splitting of the E2 modes into longitudinal optical (LO) and transversal optical (TO) components gives rise to 6 active modes. In Figure 2, the Raman spectra of the photocatalysts are presented. For ZnO, two peaks can be clearly observed, the first at 99 cm−1 and the second at 434 cm−1, corresponding to the E2 mode characteristic of the wurtzite-type structure; two additional weak bands at 326 and 380 cm−1 were also observed, which are related to a 2-phonon process and the A1(TO) mode, respectively [29]. The ZrO2 spectrum showed several peaks located at 100, 176, 217, 306, 333, 380, 474, 501, 534, 553, 613, and 638 cm−1, which are very close to those reported for the monoclinic phase. The peaks attributed to the tetragonal structure were located at 142 and 265 cm−1, whereas two additional bands reported for this structure, around 318 and 461 cm−1, seemed to be overlapped with monoclinic signals; the peaks between 640 and 641 cm−1 are shared by the monoclinic and tetragonal structures [30, 31]. Since the cubic structure of zirconia usually exhibits one strong peak around 617 cm−1, the absence of this signal indicates a mixture of only two structures (monoclinic and tetragonal) in pristine ZrO2. For the ZnO-ZrO2 composites, the Raman spectra did not show sharp peaks. 
As the content of ZnO increased, we observed the broadening of the peaks that correspond to tetragonal structure modes; this broadening is related to the decrease in the crystallite size of this crystalline phase, usually due to phonons associated with the nanosized particles. On the other hand, the absence of representative signals for monoclinic structure in both Raman spectra and XRD patterns leads us to conclude that ZnO inhibited the formation of this structure in ZnO-ZrO2 composites. Additionally, Rietveld refinement of 13ZnO-ZrO2 was performed (Supplementary Materials, Figure S4 and Table S1). This analysis confirmed the absence of both solid solution and cubic crystalline phase formation; the estimated percentages of each crystalline structure are shown in Table 1.Figure 2 Raman spectra of ZnO-ZrO2 nanocomposites. (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2. m indicates monoclinic and t tetragonal structures for ZrO2 whereas z indicates wurtzite crystalline structure of ZnO. ### 3.3. FTIR Analysis Figure3 shows the spectra of pure ZnO, ZrO2, and ZnO-ZrO2 composites with different content of ZnO. FTIR spectra of all materials presented wide bands at 3410–3450 cm−1 which correspond to O-H stretching vibrations of physical adsorbed water on the catalyst surface [32]. Compared to the ZrO2 band, a shift of the O-H band to lower frequencies occurs as the percentage of ZnO increases in the ZnO-ZrO2 composites. Pure ZnO spectrum showed an intense band centered at 423 cm−1. This band is characteristic for Zn-O vibrations [23, 32]. Two intense bands appeared at 744 cm−1 and 576 cm−1 have been associated with vibrations of Zr-O in monoclinic structure. An additional band located at 498 cm−1 was also present in ZrO2 spectrum; this signal corresponds to Zr-O-Zr vibrations in tetragonal structure [33–35], which appears slightly shifted for all ZnO-ZrO2 nanocomposites. 
This behavior can be attributed to the addition of divalent oxides like ZnO (Zn+2) to ZrO2. Also, the incorporation of these oxides may produce a lattice deformation on the crystalline structure, with subsequent modification on the force constants of Zr–O and related bonds [33].Figure 3 (a) FT-IR full spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites with different content of ZnO. (b) FTIR region from 1300 to 400 cm−1. (a) (b) ### 3.4. Specific Surface Area Figure4 shows the N2 adsorption-desorption isotherms of the nanocomposites as well as their corresponding pore size distribution (insets). The isotherms for all the samples presented type IV(a) shape according to IUPAC classification [36] which corresponds to mesoporous structures where capillary condensation takes place and is accompanied by hysteresis. The adsorbed volume in all cases is relatively low which explains the observed values for specific surface area. It has been reported that ZnO usually exhibits poor BET surface areas ranging from 1 to 15 m2/g when no additives are used to improve this property. In our samples, pure ZnO showed a SBET=10m2/g while ZrO2 exhibited 46 m2/g.Figure 4 N2 adsorption-desorption BET isotherms of (a) pure oxides and (b) different ZnO-ZrO2 composites; the insets represent BJH pore size distribution from desorption branches. (a) (b)The isotherms of both ZnO and ZrO2 pure oxides showed a narrow hysteresis loop, which for ZrO2 starts at 0.4 (P/P0) and for ZnO this occurs at higher relative pressures (0.8) (Figure 4(a)). When composites were analyzed, hysteresis loop for all the composition was slightly broader than pure oxides, indicating changes in porosity (Figure 4(b)). ZnO showed H3-type hysteresis loop at high relative pressure. 
However, ZrO2 and all ZnO-ZrO2 composites exhibited H2-type hysteresis which is associated with the presence of bottle-shaped mesopores that can be explained as a consequence of the interconnectivity of pores [1]. The specific surface area decreased from 46 to 8 m2/g when 25% of ZnO was incorporated, and for 75ZnO-ZrO2, SBET value was 36 m2/g, and its N2 isotherm showed a broader hysteresis loop than the observed pure oxides. Additional effect of ZnO incorporation was observed in BJH pore size distribution: pristine ZrO2 showed a wide pore size distribution from meso- to macropores, whereas ZnO presented pores around 30 nm and also macroporosity; when both oxides were coupled, pore size distribution for all composites showed unimodal distribution with very close pore size average. ### 3.5. Diffuse Reflectance UV-Vis Spectroscopy The absorption spectra of the oxides are depicted in Figure5(a). The Uv-Vis spectrum of ZnO shows an absorption edge at 370 nm, which is in good agreement with literature [37]. This characteristic band can be assigned to the intrinsic band-gap absorption of ZnO due to the electron transitions from the valence band to the conduction band (O2p → Zn3d) [38]. ZrO2 spectrum showed a small absorption band at 208 nm and another large band at 228 nm, which appeared at lower wavelengths than the characteristic bands reported for ZrO2, usually observed ~240 nm [39]. These bands correspond to the presence of Zr species as the tetrahedral Zr+4, and it is electronically produced by the charge transfer transition of the valence band O2p electron to the conduction band Zr4d (O2 → Zr4+) level upon UV light excitation [40]. There was no other absorption band observed in the UV-Vis region. It has been reported that the identification of ZrO2 phases can be possible by using UV range of DRS. The two bands observed in pure ZrO2 are typically observed since ZrO2 has two band-to-band transitions. 
According to Sahu and Rao [41], the first transition corresponds to values around 5.17–5.2 eV that can be associated with m-ZrO2. In the ZrO2 here prepared, we observed by DRX that monoclinic and tetragonal phases are presented in pure oxide. The band at high energy cannot be observed when the content of ZnO increases, not only due to the disappearance of monoclinic phase but also for the incorporation of ZnO affecting the shape of the spectrum. The spectrum of pure ZrO2 also depicts a small shoulder with an onset at 310 nm that can be attributed to t-ZrO2 crystalline structure [42]. The shape as well as the intensity of this band changes as the content of ZnO increases and slightly shifts towards lower energy region, due to the effect of ZnO.Figure 5 (a) UV-Vis/DR absorption spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites; (b) Kubelka-Munk function of (A) ZrO2, (B) 13ZnO-ZrO2, (C) 25ZnO-ZrO2, (D) 50ZnO-ZrO2, and (E) 75ZnO-ZrO2 composites. (I) represents the first and (II) the second edge of the samples considered for Eg estimations. (a) (b)The addition of ZnO modified the absorption edge of ZrO2 towards lower values. A red shift was observed in the last band of all ZnO-ZrO2 materials towards longer wavelength region, due to the introduction of energy levels in the interband gap [17]. Kubelka-Munk function was used to estimate the band gap of the nanocomposites (Figure 5(b)). The band gap values of pure ZnO and ZrO2 were evaluated by using n=2 (direct transitions) and n=1/2 (indirect transitions), respectively, and their values are summarized in Table S2 (Supplementary Materials).For ZnO-ZrO2 materials, it can be noticed the presence of two edges instead of only one; the bandgap was estimated using n=1/2. 
The obtained value for ZnO was 3.26 eV (Supplementary Materials, Figure S5) whereas 4.53 and 5.06 eV were calculated for ZrO2 (for the two edges observed in the corresponding spectrum), these values changed to 4.73, 4.35, 3.76, and 4.16 eV by the addition of different contents of ZnO (indicated as I in Figure 5(b)), being 50ZnO-ZrO2 the one with the lowest value. With regard to the low-energy shoulder observed in ZnO-ZrO2 spectra (indicated as II in Figure 5(b)), the calculated values were 3.07, 3.10, 3.15, and 3.16 eV for the composites with 13, 25, 50, and 75% of ZnO, respectively, allowing nanocomposites to be excited at lower energy. ### 3.6. Electron Microscopy Morphology of the nanocomposites was investigated using electron microscopy, and the images are shown in Figure6. We observed that ZnO nanoparticles are rod-like shaped with sizes ranging from 100 to 300 nm producing agglomerates. On the other hand, ZrO2 is made of smaller particles nonuniform in shape and size. In 13ZnO-ZrO2 nanocomposite, we observed that ZnO particles change in shape whereas some ZrO2 agglomerates grew up to 1 μm but also smaller quasispherical particles around 300 nm were observed. The most significant change was exhibited by 75ZnO-ZrO2 nanocomposite, since agglomerates of particles with smaller sizes were obtained.Figure 6 FESEM (a) and the corresponding HRTEM (b) micrographs of pure oxides and ZnO-ZrO2 composites. (a) (b) ### 3.7. Photocatalytic Test Previous to the test, adsorption-desorption equilibrium was reached after stirring the suspensions for 20 minutes in the dark, of which in general all the composites presented low adsorption of the pollutant. The results of photocatalytic degradation are shown in Figure7(a). Here, we observed that pure ZrO2 degraded only 5% of phenol even after 120 min; when ZnO was incorporated, a slight increase in photodegradation with 13 and 25% of ZnO was observed, but they barely degraded around 15% of the pollutant. 
By increasing the ZnO content up to 50% mol, 47% of phenol was degraded, whereas 71% of degradation was achieved using 75ZnO-ZrO2 composite. For the prepared ZnO, 74% of degradation was obtained during the same time. The kinetic parameters were also calculated assuming pseudofirst order kinetics (Table 2) where we observed that t1/2 for 75ZnO-ZrO2 composite was very close to that calculated for ZnO.Figure 7 (a) Degradation curves of phenol by ZnO, ZrO2, and different ZnO-ZrO2 composites (experimental conditions: phenol = 50 ppm, volume of phenol = 200 mL, and catalyst dosage = 200 mg). (b) TOC results obtained after 120 min of illumination in the UV. (a) (b)Table 2 Kinetic parameters estimated from pseudofirst order kinetics. Photocatalyst k (min−1) R2 t1/2 (min) ZrO2 0.7×10−3 0.9585 1155 13ZnO-ZrO2 1.3×10−3 0.9193 533 25ZnO-ZrO2 1.5×10−3 0.9808 462 50ZnO-ZrO2 5.3×10−3 0.9939 130 75ZnO-ZrO2 10.3×10−3 0.9964 67 ZnO 11.2×10−3 0.9810 62It is well known that ZnO can be activated under UV-A light and generally exhibits good photodegradation rates, but it is also needed to assess the mineralization of the pollutant. For this purpose, TOC analysis was performed and the results are tabulated in Figure7(b). Pure ZrO2 mineralized 2.5% of phenol, but the mineralization increases with increasing ZnO content, which stabilizes tetragonal crystalline phases of ZrO2. Although ZnO reached 74% degradation, the mineralization of the pollutant was 40%, while 51% of mineralization was achieved with 75ZnOZrO2. Tetragonal ZrO2 has been reported as the most active polymorph of ZrO2 which also shows high selectivity in catalytic reactions. These results showed that at this concentration, ZnO-ZrO2 composite improved the mineralization of phenol when compared to pure ZnO. 
## 3.1. X-Ray Diffraction Analysis

X-ray patterns of the photocatalysts are depicted in Figure 1(a). All the diffractograms of the samples containing ZnO exhibited sharp and strong peaks at 31.70°, 34.30°, and 36.20° (2θ), which correspond to the (100), (002), and (101) reflections, respectively, and agree with the characteristic peaks of the ZnO wurtzite-type hexagonal crystalline structure (JCPDS 36-1451). The high intensity of the (101) peak suggests anisotropic growth and orientation of the crystals [22, 23].

Figure 1 (a) XRD patterns of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 calcined at 500°C. t = tetragonal and m = monoclinic ZrO2; w = wurtzite ZnO.
(b) XRD patterns of the samples in the 71–77° region, used to identify the tetragonal doublets of ZrO2.

On the other hand, pristine ZrO2 showed broad peaks located at 28.20°, 30.20°, and 31.50° (2θ). The peak centered at 30.20° (101) is characteristic of the tetragonal crystalline phase (t-ZrO2) according to the JCPDS 79-1771 card, whereas those at 28.20° (−111) and 31.50° (111) are representative of the monoclinic phase (m-ZrO2, JCPDS 37-1484). These results suggest a mixture of tetragonal and monoclinic crystalline phases, which is commonly observed in ZrO2 materials calcined at similar temperatures [24, 25].

When ZnO was added to ZrO2, the peaks corresponding to the monoclinic phase were not observed, indicating inhibition of that phase. As the content of ZnO increases, the (101) reflection observed at 30.20° (2θ) appeared slightly shifted towards 30.38°, except for the 50ZnO-ZrO2 sample; therefore, the peaks observed for all the ZnO-ZrO2 materials are assigned to the tetragonal phase. To distinguish between the diffraction patterns of the cubic and tetragonal phases of ZrO2, the 2θ region at 71–77° was carefully examined. The asymmetric doublets at ~74° indicated the formation of tetragonal ZrO2 [26–28]. Figure 1(b) shows the tetragonal doublets for all ZnO-ZrO2 materials. Crystallite sizes and phase percentages for pure ZnO, pure ZrO2, and the ZnO-ZrO2 nanocomposites were determined using the Debye-Scherrer equation and the Garvie-Nicholson method. Because the monoclinic phase was not observed in the ZnO-ZrO2 composites, we considered only the integral intensities of the tetragonal and wurtzite peaks of ZrO2 and ZnO, respectively. The obtained values are presented in Table 1.

Table 1 Crystallite size and phase content of the nanocomposites obtained from the Debye-Scherrer equation and the Garvie-Nicholson method, respectively. m represents the monoclinic and t the tetragonal phase of ZrO2.
Sample Crystallite size (nm) h k l Phase content (%) ZnO 33.2 (101) 100% ZrO2 14.3 (101)m 59.7% 15.4 (−111)t 40.3% 13ZnO-ZrO2 18.2 (101)t 100% — — 25ZnO-ZrO2 14.5 (101)t 88.0% 44.7 (101)w 12.0% 50ZnO-ZrO2 14.4 (101)t 82.0% 48.3 (101)w 18.0% 75ZnO-ZrO2 13.8 (101)t 60.9% 44.6 (101)w 39.1% ## 3.2. Raman Spectroscopy The theory of groups predicts six Raman-active modes of vibrations for tetragonal (A1g+2B1g+3Eg) and 18 for monoclinic (9Ag+9Bg) of ZrO2, whereas for ZnO there are 4 Raman-active modes, although splitting of E2 modes into longitudinal optical (LO) and transversal optical (TO) gives place to 6 active modes. In Figure 2, Raman spectra of the photocatalysts are presented. For ZnO, two peaks can be clearly observed, the first one at 99 cm−1 and the second at 434 cm−1, corresponding to E2 mode characteristic of wurtzite-type structure; two additional weak bands at 326 and 380 cm−1 were also observed which are related to 2-phonon and A1(TO) mode, respectively [29]. The ZrO2 spectrum showed several peaks located at 100, 176, 217, 306, 333, 380, 474, 501, 534, 553, 613, and 638 cm−1, which are very close to those reported for monoclinic phase. The peaks attributed to tetragonal structure were located at 142 and 265 cm−1 whereas two additional bands reported for this structure around 318 and 461 cm−1 seemed to be overlapped with monoclinic signals; the peaks between 640 and 641 cm−1 are shared by monoclinic and tetragonal structures [30, 31]. Since cubic structure of zirconia usually exhibits one strong peak around 617 cm1, the absence of this signal indicates the presence of a mixture of only two structures (monoclinic and tetragonal) in pristine ZrO2. For the ZnO-ZrO2 composites, Raman spectra did not show sharp peaks. 
As the content of ZnO increased, we observed broadening of the peaks corresponding to tetragonal-structure modes; this broadening is related to the decrease in crystallite size of this phase, commonly attributed to phonons associated with the nanosized particles. On the other hand, the absence of representative monoclinic signals in both the Raman spectra and the XRD patterns leads us to conclude that ZnO inhibited the formation of this structure in the ZnO-ZrO2 composites. Additionally, Rietveld refinement of 13ZnO-ZrO2 was performed (Supplementary Materials, Figure S4 and Table S1). This analysis confirmed the absence of both solid solution and cubic crystalline phase formation; the estimated percentages of each crystalline structure are shown in Table 1.

Figure 2 Raman spectra of ZnO-ZrO2 nanocomposites. (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2. m indicates monoclinic and t tetragonal ZrO2 structures, whereas z indicates the wurtzite crystalline structure of ZnO.

## 3.3. FTIR Analysis

Figure 3 shows the spectra of pure ZnO, ZrO2, and the ZnO-ZrO2 composites with different ZnO contents. The FTIR spectra of all materials presented wide bands at 3410–3450 cm−1, which correspond to O-H stretching vibrations of physically adsorbed water on the catalyst surface [32]. Compared to the ZrO2 band, the O-H band shifts to lower frequencies as the percentage of ZnO in the ZnO-ZrO2 composites increases. The pure ZnO spectrum showed an intense band centered at 423 cm−1, characteristic of Zn-O vibrations [23, 32]. Two intense bands at 744 cm−1 and 576 cm−1 have been associated with Zr-O vibrations in the monoclinic structure. An additional band located at 498 cm−1 was also present in the ZrO2 spectrum; this signal corresponds to Zr-O-Zr vibrations in the tetragonal structure [33–35] and appears slightly shifted for all ZnO-ZrO2 nanocomposites.
This behavior can be attributed to the addition of divalent oxides such as ZnO (Zn2+) to ZrO2. The incorporation of these oxides may also produce a lattice deformation of the crystalline structure, with subsequent modification of the force constants of Zr-O and related bonds [33].

Figure 3 (a) Full FT-IR spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites with different ZnO contents. (b) FTIR region from 1300 to 400 cm−1.

## 3.4. Specific Surface Area

Figure 4 shows the N2 adsorption-desorption isotherms of the nanocomposites as well as their corresponding pore size distributions (insets). The isotherms of all the samples presented a type IV(a) shape according to the IUPAC classification [36], which corresponds to mesoporous structures where capillary condensation takes place, accompanied by hysteresis. The adsorbed volume in all cases is relatively low, which explains the observed specific surface area values. It has been reported that ZnO usually exhibits poor BET surface areas, ranging from 1 to 15 m2/g, when no additives are used to improve this property. In our samples, pure ZnO showed SBET = 10 m2/g while ZrO2 exhibited 46 m2/g.

Figure 4 N2 adsorption-desorption BET isotherms of (a) pure oxides and (b) different ZnO-ZrO2 composites; the insets represent BJH pore size distributions from the desorption branches.

The isotherms of both pure oxides, ZnO and ZrO2, showed a narrow hysteresis loop, which starts at 0.4 (P/P0) for ZrO2 and at a higher relative pressure (0.8) for ZnO (Figure 4(a)). For the composites, the hysteresis loop at all compositions was slightly broader than for the pure oxides, indicating changes in porosity (Figure 4(b)). ZnO showed an H3-type hysteresis loop at high relative pressure.
However, ZrO2 and all ZnO-ZrO2 composites exhibited H2-type hysteresis, which is associated with the presence of bottle-shaped mesopores and can be explained as a consequence of pore interconnectivity [1]. The specific surface area decreased from 46 to 8 m2/g when 25% of ZnO was incorporated, whereas for 75ZnO-ZrO2 the SBET value was 36 m2/g and its N2 isotherm showed a broader hysteresis loop than those observed for the pure oxides. An additional effect of ZnO incorporation was observed in the BJH pore size distribution: pristine ZrO2 showed a wide pore size distribution from meso- to macropores, whereas ZnO presented pores around 30 nm together with macroporosity; when both oxides were coupled, all composites showed a unimodal pore size distribution with very similar average pore sizes.

## 3.5. Diffuse Reflectance UV-Vis Spectroscopy

The absorption spectra of the oxides are depicted in Figure 5(a). The UV-Vis spectrum of ZnO shows an absorption edge at 370 nm, in good agreement with the literature [37]. This characteristic band can be assigned to the intrinsic band-gap absorption of ZnO due to electron transitions from the valence band to the conduction band (O2p → Zn3d) [38]. The ZrO2 spectrum showed a small absorption band at 208 nm and a larger band at 228 nm, which appeared at lower wavelengths than the characteristic bands reported for ZrO2, usually observed at ~240 nm [39]. These bands correspond to the presence of Zr species as tetrahedral Zr4+ and are electronically produced by the charge transfer transition of valence band O2p electrons to the conduction band Zr4d level (O2− → Zr4+) upon UV light excitation [40]. No other absorption band was observed in the UV-Vis region. It has been reported that the identification of ZrO2 phases is possible using the UV range of DRS. The two bands observed in pure ZrO2 are typically observed since ZrO2 has two band-to-band transitions.
According to Sahu and Rao [41], the first transition corresponds to values around 5.17–5.2 eV and can be associated with m-ZrO2. In the ZrO2 prepared here, we observed by XRD that monoclinic and tetragonal phases are present in the pure oxide. The high-energy band cannot be observed when the content of ZnO increases, not only because of the disappearance of the monoclinic phase but also because the incorporation of ZnO affects the shape of the spectrum. The spectrum of pure ZrO2 also depicts a small shoulder with an onset at 310 nm that can be attributed to the t-ZrO2 crystalline structure [42]. The shape as well as the intensity of this band changes as the content of ZnO increases and shifts slightly towards the lower-energy region, due to the effect of ZnO.

Figure 5 (a) UV-Vis/DR absorption spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites; (b) Kubelka-Munk function of (A) ZrO2, (B) 13ZnO-ZrO2, (C) 25ZnO-ZrO2, (D) 50ZnO-ZrO2, and (E) 75ZnO-ZrO2 composites. (I) represents the first and (II) the second edge of the samples considered for Eg estimations.

The addition of ZnO shifted the absorption edge of ZrO2 towards lower energies. A red shift towards the longer-wavelength region was observed in the last band of all ZnO-ZrO2 materials, due to the introduction of energy levels in the interband gap [17]. The Kubelka-Munk function was used to estimate the band gaps of the nanocomposites (Figure 5(b)). The band gap values of pure ZnO and ZrO2 were evaluated using n = 2 (direct transitions) and n = 1/2 (indirect transitions), respectively, and their values are summarized in Table S2 (Supplementary Materials). For the ZnO-ZrO2 materials, two edges instead of only one can be noticed; the band gap was estimated using n = 1/2.
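As a sketch of how such band gaps are extracted, the Kubelka-Munk function F(R) = (1 − R)²/(2R) is computed from the diffuse reflectance, (F(R)·hν)^n is plotted against photon energy, and the linear edge is extrapolated to zero. The example below uses synthetic data with a known 3.2 eV gap, not the measured spectra; the fitting window and edge shape are assumptions for illustration.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R) from diffuse reflectance (0-1)."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

def tauc_band_gap(energy_ev, f_r, n=0.5, fit_window=None):
    """Estimate Eg by extrapolating the linear part of (F(R)*E)^n vs E to zero.

    n = 2 for direct and n = 1/2 for indirect transitions (the convention
    used in the text). fit_window optionally restricts the fit to (lo, hi) eV.
    """
    e = np.asarray(energy_ev, dtype=float)
    y = (np.asarray(f_r, dtype=float) * e) ** n
    if fit_window is not None:
        lo, hi = fit_window
        mask = (e >= lo) & (e <= hi)
        e, y = e[mask], y[mask]
    slope, intercept = np.polyfit(e, y, 1)  # straight-line fit of the edge
    return -intercept / slope               # x-intercept = band gap estimate

# Synthetic absorption edge with a known 3.2 eV gap (illustrative only):
E = np.linspace(3.3, 3.8, 30)
y_lin = 2.0 * (E - 3.2)        # (F(R)*E)^(1/2) rises linearly above the gap
f_r = y_lin ** 2 / E           # invert to a synthetic F(R)
print(round(tauc_band_gap(E, f_r, n=0.5), 2))  # recovers ~3.2
```

On real spectra the fit window must be chosen over the linear portion of the edge, which is where most of the judgment in these Eg estimates lies.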
The obtained value for ZnO was 3.26 eV (Supplementary Materials, Figure S5), whereas 4.53 and 5.06 eV were calculated for ZrO2 (for the two edges observed in the corresponding spectrum); these values changed to 4.73, 4.35, 3.76, and 4.16 eV upon the addition of different contents of ZnO (indicated as I in Figure 5(b)), with 50ZnO-ZrO2 showing the lowest value. With regard to the low-energy shoulder observed in the ZnO-ZrO2 spectra (indicated as II in Figure 5(b)), the calculated values were 3.07, 3.10, 3.15, and 3.16 eV for the composites with 13, 25, 50, and 75% of ZnO, respectively, allowing the nanocomposites to be excited at lower energy.

## 3.6. Electron Microscopy

The morphology of the nanocomposites was investigated using electron microscopy, and the images are shown in Figure 6. We observed that the ZnO nanoparticles are rod-shaped, with sizes ranging from 100 to 300 nm, and form agglomerates. On the other hand, ZrO2 consists of smaller particles nonuniform in shape and size. In the 13ZnO-ZrO2 nanocomposite, we observed that the ZnO particles change in shape, whereas some ZrO2 agglomerates grew up to 1 μm, although smaller quasispherical particles around 300 nm were also observed. The most significant change was exhibited by the 75ZnO-ZrO2 nanocomposite, since agglomerates of particles with smaller sizes were obtained.

Figure 6 FESEM (a) and corresponding HRTEM (b) micrographs of the pure oxides and ZnO-ZrO2 composites.

## 3.7. Photocatalytic Test

Prior to the test, adsorption-desorption equilibrium was reached after stirring the suspensions for 20 minutes in the dark; in general, all the composites presented low adsorption of the pollutant. The results of the photocatalytic degradation are shown in Figure 7(a). Pure ZrO2 degraded only 5% of the phenol even after 120 min; when ZnO was incorporated, a slight increase in photodegradation was observed with 13 and 25% of ZnO, but these composites barely degraded around 15% of the pollutant.
By increasing the ZnO content up to 50 mol%, 47% of the phenol was degraded, whereas 71% degradation was achieved using the 75ZnO-ZrO2 composite. For the prepared ZnO, 74% degradation was obtained in the same time. The kinetic parameters were also calculated assuming pseudo-first-order kinetics (Table 2), where we observed that t1/2 for the 75ZnO-ZrO2 composite was very close to that calculated for ZnO.

Figure 7 (a) Degradation curves of phenol by ZnO, ZrO2, and different ZnO-ZrO2 composites (experimental conditions: phenol = 50 ppm, volume of phenol solution = 200 mL, and catalyst dosage = 200 mg). (b) TOC results obtained after 120 min of UV illumination.

Table 2 Kinetic parameters estimated from pseudo-first-order kinetics.

| Photocatalyst | k (min−1) | R2 | t1/2 (min) |
|---|---|---|---|
| ZrO2 | 0.7×10−3 | 0.9585 | 1155 |
| 13ZnO-ZrO2 | 1.3×10−3 | 0.9193 | 533 |
| 25ZnO-ZrO2 | 1.5×10−3 | 0.9808 | 462 |
| 50ZnO-ZrO2 | 5.3×10−3 | 0.9939 | 130 |
| 75ZnO-ZrO2 | 10.3×10−3 | 0.9964 | 67 |
| ZnO | 11.2×10−3 | 0.9810 | 62 |

It is well known that ZnO can be activated under UV-A light and generally exhibits good photodegradation rates, but the mineralization of the pollutant also needs to be assessed. For this purpose, TOC analysis was performed, and the results are shown in Figure 7(b). Pure ZrO2 mineralized 2.5% of the phenol, but mineralization increases with increasing ZnO content, which stabilizes the tetragonal crystalline phase of ZrO2. Although ZnO reached 74% degradation, mineralization of the pollutant was 40%, while 51% mineralization was achieved with 75ZnO-ZrO2. Tetragonal ZrO2 has been reported as the most active ZrO2 polymorph, which also shows high selectivity in catalytic reactions. These results show that, at this concentration, the ZnO-ZrO2 composite improved the mineralization of phenol compared to pure ZnO.
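The quantities in Table 2 follow from the standard pseudo-first-order relations ln(C0/C) = kt and t1/2 = ln 2/k. A minimal Python check, using the tabulated k for the 75ZnO-ZrO2 composite:

```python
import math

def rate_constant(c0, c_t, t_min):
    """Pseudo-first-order rate constant from ln(C0/C) = k*t."""
    return math.log(c0 / c_t) / t_min

def half_life(k):
    """t_1/2 = ln(2) / k for first-order kinetics."""
    return math.log(2.0) / k

def conversion(k, t_min):
    """Fraction of pollutant degraded after t minutes: 1 - exp(-k*t)."""
    return 1.0 - math.exp(-k * t_min)

# Tabulated rate constant for 75ZnO-ZrO2 (Table 2):
k = 10.3e-3  # min^-1
print(round(half_life(k)))              # ~67 min, as in Table 2
print(round(100 * conversion(k, 120)))  # ~71% after 120 min, matching the text
```

This self-consistency between k, t1/2, and the reported 71% degradation at 120 min is what the pseudo-first-order assumption implies.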
Besides, it was also confirmed that intermediates such as catechol, resorcinol, and hydroquinone were not generated during the photodegradation with 75ZnO-ZrO2.

According to the obtained results, a schematic representation of phenol degradation is shown in Figure 8. When the ZnO-ZrO2 composites were excited by UV-A irradiation (365 nm), electrons migrate from the valence band (VB) to the conduction band (CB) of ZnO, leading to the formation of electron/hole (e−/h+) pairs. Since the energy levels of ZnO fit well within the band gap of ZrO2, the electrons from the CB of ZrO2 can easily be transferred to the CB of ZnO; conversely, the holes migrate from the VB of ZnO to the VB of ZrO2, and thereby electron-hole pair recombination may be decreased in the ZnO-ZrO2 composites. These e− and h+ react with water and oxygen to produce hydroxyl (OH⋅) radicals, which are very reactive and can easily oxidize phenol to CO2 and water. As a result, we obtained an enhancement of the photocatalytic performance for phenol degradation by the composite with the highest ZnO percentage [8, 19, 20, 43].

Figure 8 Schematic representation of the photocatalytic mechanism of electron-hole pair separation in ZnO-ZrO2 composites for the degradation of phenol.

## 4. Discussion

It is well known that the catalytic and photocatalytic performance of ZrO2 can be affected by its crystalline phases. Although there are some reports on the catalytic activity of ZnO-ZrO2 materials, deep structural studies have not been performed so far. The fact that the tetragonal and cubic phases exhibit similar XRD patterns makes it difficult to discern these structures based only on X-ray diffraction analysis, and Rietveld refinement or Raman spectroscopy are good tools for resolving this ambiguity. In this work, we observed that by adding ZnO only the t-ZrO2 phase was obtained, a conclusion also supported by the Raman results.
In all composites, t-ZrO2 was detected, as well as the zincite (wurtzite-type) structure for ZnO. Formation of the nanocomposite influenced the crystallite size of both ZnO and ZrO2, increasing for ZnO but decreasing for t-ZrO2 as a function of the ZnO-ZrO2 ratio. The absence of a solid solution could be explained not only by the differences in valence between the ions, which depends on the amount of Zn2+ species [44], but also by the synthesis procedure reported here. Lattice parameters of the 13ZnO-ZrO2 nanocomposite are provided in the Supplementary Materials. We observed small changes compared to pure zirconia; this can be attributed to the presence of the divalent oxide, since incorporation of this type of oxide causes lattice deformation of the crystalline structure of ZrO2, with subsequent modification of the force constants of Zr-O and related bonds [33].

Since the band gap is an important feature for photoactivity, many attempts to improve this property have been made. By coupling ZnO and ZrO2, radiation of lower energy can be absorbed by the ZnO-ZrO2 composites. From the calculated Eg values, it can be assumed that the energy levels of both the valence band (VB) and the conduction band (CB) of ZnO fit within the band gap of ZrO2. When the electrons are excited, most of the electrons from the conduction band of ZrO2 can easily be transferred to the conduction band of ZnO, and thus the effective band gap may be decreased, indicating that the ZnO-ZrO2 nanoparticles have a suitable band gap to generate excited electron/hole pairs [17], allowing the use of simulated solar radiation.

Both oxides, ZnO and ZrO2, are well known for their photocatalytic properties, but one of their limitations is the need for UV light for their activation, especially ZrO2, which usually requires UV radiation due to its wide band gap. Here, we investigated the effect of ZnO on the photocatalytic performance of ZrO2 under simulated solar light.
Since the use of pure ZnO leads to the formation of several compounds during the photoreaction, it is remarkable that the 75ZnO-ZrO2 nanocomposite led to a reaction without formation of any intermediates, which represents the main advantage of using ZnO-ZrO2.

## 5. Conclusions

So far, ZnO-ZrO2 materials have been reported for several catalytic reactions with enhanced performance compared to the pristine oxides. Recently, this type of composite has also been studied as a photocatalyst with promising results, but a full understanding of its properties in relation to composition is needed. In this work, we prepared ZnO-ZrO2 composites using zinc (II) acetylacetonate and zirconium n-butoxide as raw materials. We observed that the synthesis procedure strongly affected the stabilization of zirconia polymorphs, where ZnO plays an important role in inhibiting the ZrO2 monoclinic structure and stabilizing the tetragonal phase. By coupling ZnO to ZrO2, we observed significant changes in the absorption behavior of ZrO2, shifting its absorption edges in the UV region toward lower energies. The pore distribution of the composite was strongly changed by the interaction of both oxides, leading to a larger amount of mesopores than observed for the uncoupled oxides. Tests revealed that the 75ZnO-ZrO2 composite exhibited good performance in the degradation of phenol under simulated solar radiation, improved the mineralization reached by pure ZnO and ZrO2, and inhibited the formation of undesirable intermediates usually obtained during photocatalytic degradation of phenol.

---

*Source: 1015876-2019-01-10.xml*
# Synthesis and Characterization of ZnO-ZrO2 Nanocomposites for Photocatalytic Degradation and Mineralization of Phenol

**Authors:** M. C. Uribe López; M. A. Alvarez Lemus; M. C. Hidalgo; R. López González; P. Quintana Owen; S. Oros-Ruiz; S. A. Uribe López; J. Acosta

**Journal:** Journal of Nanomaterials (2019)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2019/1015876
---

## Abstract

ZnO-ZrO2 nanocomposites with different ZnO contents (13, 25, 50, and 75% mol) were synthesized from zinc (II) acetylacetonate through the sol-gel method. The synthesis process strongly influenced the nanocomposite properties, especially their structural composition. The obtained ZnO-ZrO2 nanomaterials presented a tetragonal crystalline structure for zirconia, whereas a hexagonal one was formed in ZnO. Raman spectroscopy and XRD patterns confirmed the formation of tetragonal zirconia, whereas inhibition of the monoclinic structure was observed. Addition of ZnO affected the pore size distribution of the composite, and the measured specific surface areas ranged from 10 m2/g (pure ZnO) to 46 m2/g (pristine ZrO2). Eg values of ZrO2 were modified by ZnO addition, since the values calculated using the Kubelka-Munk function varied from 4.73 to 3.76 eV. The morphology and size of the nanomaterials, investigated by electron microscopy, showed the formation of nanorods for ZnO with sizes ranging from 50 nm to 300 nm, while zirconia was formed by smaller particles (less than 50 nm). The main advantage of using the nanocomposite for photocatalytic degradation of phenol was the mineralization degree, since the 75ZnO-ZrO2 nanocomposite surpassed the mineralization reached by pure ZnO and also inhibited the formation of undesirable intermediates.

---

## Body

## 1. Introduction

Zirconium oxide (ZrO2), known as zirconia, is an interesting material due to its application in various photochemical heterogeneous reactions. ZrO2 is an n-type semiconductor with a wide band gap energy between 5.0 and 5.5 eV [1]. Because of this, ZrO2 requires UV-C light (<280 nm) to be excited and generate electron-hole pairs [2]. A strategy to overcome this is to dope ZrO2 with different transition metal ions or to couple it with other metal oxides with a dissimilar band edge [3].
Composites made of two metal oxides have attracted much attention in many studies because they possess improved physicochemical properties compared to the pure oxides. Usually, composites enhance photocatalytic activity [4, 5], produce new crystallographic phases with quite different properties than the original oxides, create defect energy levels in the band gap region [6], change the surface characteristics of the individual oxides due to the formation of new sites at the interface between the components [7], and also increase the stability of a photoactive crystalline phase [8]. In order to enhance the optical properties of ZrO2, several semiconductors such as SiO2, TiO2, ZnO, WO3, and NiO have been coupled to it. Together with TiO2, zinc oxide is one of the most investigated n-type semiconductor materials due to its low cost, easy fabrication, wide band gap, and photocatalytic activity for degrading several organic pollutants into less harmful products [9]. The main advantage of ZnO is that it absorbs a larger fraction of the solar spectrum than TiO2 [10]. The band gap of ZnO is ~3.37 eV, and its exciton-binding energy is about 60 meV [11]. Many reports have been published about the good physicochemical properties conferred by the use of ZnO in composites. For instance, a composite made of nanostructured transparent conducting metal oxides (TCMOs), such as ZnO/NiO, proved to be an excellent candidate for acetone sensing [12]. Nanocomposites of Zn(1−x)MgxO/graphene showed an excellent performance in removing methylene blue dye under natural sunlight illumination [13]. TiO2/ZnO nanocomposites with different contents of ZnO showed an improvement in the degradation of the organic dyes brilliant green and methylene blue under solar light irradiation [14]. A ZnO/TiO2 photocatalyst exhibited much higher photocatalytic activity than pure TiO2, ZnO, and P-25 in the degradation of 4-chlorophenol under low UV irradiation [15].
Composites of ZnO/Ag2CO3/Ag2O demonstrated a potential effect in the photodegradation of phenol under visible light irradiation due to facilitated charge transfer and suppressed recombination of photogenerated electrons and holes [16].

Recently, composites of ZrO2 with ZnO have attracted much attention because of their excellent properties as a semiconductor material, especially for the degradation reactions of recalcitrant organic pollutants. The enhancement in photocatalytic activity of ZnO-ZrO2 composites has been associated with changes in their structural, textural, and optical properties, such as surface area, particle size, formation of a specific crystalline phase, and low band gap energy [4, 17, 18]. In addition, improved electron-hole pair separation enhances the photocatalytic efficiency. Under illumination, both semiconductors of the nanocomposite are simultaneously excited, and the electrons move to the lower-lying conduction band of one semiconductor, while holes move to the less anodic valence band. Sherly et al. [19] attributed the efficiency of the Zn2Zr (ZnO and ZrO2 in 2 : 1 ratio) photocatalyst in the degradation of 2,4-dichlorophenol to its good stability and the efficient separation of photogenerated electron-hole pairs. Aghabeygi and Khademi-Shamami [4] stated that the good performance of the 1 : 2 molar ratio ZrO2 : ZnO photocatalyst in the degradation of Congo Red dye could be due to the decrease in the rate of hole-electron pair recombination when the excitation takes place with energy lower than Eg. Besides, they proposed that ZnO could increase the concentration of free electrons in the CB of ZrO2 by reducing charge recombination in the process of electron transport. Gurushantha et al. [20] demonstrated a photocatalytic enhancement in the ZrO2/ZnO (1 : 2) nanocomposite for the degradation of acid orange 8 dye under UV light irradiation (254 nm).
They observed that the reduction of the energy gap, the increase of the density of states, and the stability of the composite increased the photocatalytic efficiency.

In this work, we investigated the effect of ZnO on the photocatalytic properties of ZnO-ZrO2 nanocomposites obtained by the sol-gel method in the photodegradation of phenol in water under UV-A irradiation.

## 2. Materials and Methods

### 2.1. Reagents

Zirconium (IV) butoxide (80 wt.% in 1-butanol), phenol (ReagentPlus ≥99%), and zinc acetylacetonate hydrate were purchased from Sigma-Aldrich; hydrochloric acid (36.5–38%) was obtained from Civeq (México). In all cases, deionized water was used.

### 2.2. Synthesis of ZrO2

81.2 mmol of zirconium (IV) butoxide were added dropwise to 48.9 mL of a deionized water and ethanol mixture (1 : 8) preheated at 70°C. Before addition of the alkoxide, the pH was adjusted to 3 with hydrochloric acid (2.5 M). The white suspension was kept at 70°C, with continuous stirring and reflux, for 24 h. The gel was then dried at 70°C for 8 h. Finally, the obtained powder was ground and calcined at 500°C for 4 h.

### 2.3. Synthesis of ZnO

11.38 mmol of zinc acetylacetonate hydrate (powder, Sigma-Aldrich) were added over 30 min into 50 mL of ethanol (96%, Civeq) previously heated at 70°C and adjusted to pH 3 with hydrochloric acid (2.5 M) (36.5–38%, Civeq). The suspension was stirred for 4 h at 70°C and then aged for 24 h under continuous agitation. Later, the resulting gel was washed several times with ethanol and deionized water. Finally, the white powders were dried at 70°C for 6 h, ground, and then calcined at 500°C for 4 h.

### 2.4. Synthesis of ZnO-ZrO2 Nanocomposites

Different molar percentages (13%, 25%, 50%, and 75%) of ZnO were incorporated into ZrO2, and the samples were named 13ZnO-ZrO2, 25ZnO-ZrO2, 50ZnO-ZrO2, and 75ZnO-ZrO2.
The photocatalysts were prepared as follows: the appropriate amount of zinc acetylacetonate hydrate was dissolved into 50 mL of ethanol previously heated at 70°C and adjusted to pH 3 (HCl, 2.5 M). ZrO2 sols were prepared separately as previously described, but just before the addition of half of the total amount of alkoxide was completed, the corresponding ZnO sol was incorporated into the mixture, followed by dropwise addition of the rest of the zirconium alkoxide. The mixture was kept under vigorous stirring and reflux at 70°C for 24 h. The obtained gels were washed several times with ethanol and deionized water, then dried, ground, and calcined at 500°C for 4 h. The proposed reactions are presented in Supplementary Materials (Figures S1, S2, and S3).

### 2.5. Characterization of the Nanocomposites

X-ray diffraction (XRD) patterns were obtained on a Bruker D8 Advance diffractometer using CuKα radiation (1.5418 Å) in the 2θ scan range of 10–90°. The average crystallite size of the samples was estimated using the Debye-Scherrer equation:

D = 0.89λ / (β cos θ), (1)

where λ is the wavelength of CuKα radiation, β is the peak width at half maximum, and θ is the diffraction angle.

To calculate the percentage of monoclinic and tetragonal phases of pure ZrO2, we used the monoclinic phase fraction Xm and the following equation described by Garvie and Nicholson [21]:

Xm = [Im(−111) + Im(111)] / [Im(−111) + Im(111) + It(101)] × 100, (2)

where Im and It represent the integral intensities of the monoclinic (111) and (−111) and tetragonal (101) peaks.

Raman spectroscopy was performed with an XploRA PLUS Raman system (HORIBA) with a CCD detector, an optical microscope (Olympus BX), and a solid-state laser (532 nm/25 mW). Fourier transform infrared (FT-IR) spectra were collected on a Shimadzu IRAffinity-1 spectrophotometer.
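As a quick numerical companion to equations (1) and (2), the sketch below computes a crystallite size from a peak position and width, and a monoclinic fraction from integral intensities. The peak values are illustrative placeholders, not data from this work:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15418, k=0.89):
    """Equation (1): D = K*lambda / (beta * cos(theta)), in nm.
    beta (the FWHM) must be converted from degrees to radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

def monoclinic_fraction(i_m_neg111, i_m_111, i_t_101):
    """Equation (2): Garvie-Nicholson monoclinic phase fraction (%)
    from the integral intensities of m(-111), m(111) and t(101) peaks."""
    return 100.0 * (i_m_neg111 + i_m_111) / (i_m_neg111 + i_m_111 + i_t_101)

# Hypothetical values: t-ZrO2 (101) reflection at 30.2 deg with 0.6 deg FWHM,
# and arbitrary integral intensities for the three ZrO2 peaks.
d = scherrer_size(30.2, 0.6)                 # ~13.6 nm
xm = monoclinic_fraction(120.0, 95.0, 145.0)  # ~59.7 %
```

Note that β enters in radians even though diffractometers report peak widths in degrees; forgetting the conversion inflates D by a factor of ~57.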
The powders (<5 wt%) were pressed into 100 mg wafers together with KBr (J.T. Baker, infrared grade). Diffuse reflectance UV-Vis spectroscopy (UV-Vis/DRS) was carried out on a Shimadzu UV-2600 spectrophotometer equipped with an integrating sphere accessory and BaSO4 as reference (99% reflectance). The band-gap (Eg) values were calculated using the Kubelka-Munk function, F(R), by the construction of a Tauc plot: (F(R)·hν)^2 or (F(R)·hν)^(1/2) versus energy (eV), for a direct and an indirect allowed transition, respectively. The BET specific surface areas (SBET) and pore volumes (BJH method) of the samples were determined by N2 adsorption-desorption isotherms at 77 K using a Quantachrome Autosorb 3B instrument. Degasification of the samples was performed at 100°C for 12 h. Surface morphology of the materials was analyzed by field emission scanning electron microscopy (FESEM) using a Hitachi S-4800 microscope, whereas high-resolution transmission electron microscopy (HRTEM) was performed in a JEOL JSM-2100 electron microscope operated at 200 kV, with a 0.19 nm resolution.

### 2.6. Photocatalytic Activity

The synthesized particles were tested in the photodegradation of phenol. The photocatalytic study was carried out in a 250 mL Pyrex reactor covered with UV-transparent Plexiglas (absorption at 250 nm); the intensity of the radiation over the suspension was 90 W/m2. For each test, 200 mg of photocatalyst were suspended in 200 mL of phenol solution (50 ppm). The suspension was magnetically stirred in the dark with an oxygen flow of 20 L/h until adsorption-desorption equilibrium was reached (ca. 20 min). Then the suspension was illuminated with an Osram Ultra-Vitalux lamp (300 W, UV-A, λ = 365 nm). Aliquots of 3 mL were taken and filtered (Millipore Millex-HV 0.45 μm) for further analysis.
Variations in phenol concentration were tracked by high-performance liquid chromatography (HPLC) using an Agilent Technologies 1200 chromatograph equipped with a UV-Vis detector and an Eclipse XDB-C18 column (5 μm, 4.6 mm × 150 mm). The mobile phase was water/methanol (65 : 35) at a flow rate of 0.8 mL/min. Mineralization of phenol was measured via the total organic carbon (TOC) content in a Shimadzu 5000 TOC analyzer. The percentage of mineralization was estimated using equation (3):

% Mineralization = (1 − TOCfinal/TOCinitial) × 100, (3)

where TOCinitial and TOCfinal are the total organic carbon concentrations in the media before and after the photocatalytic reaction, respectively.
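Equation (3) is a one-line computation; a minimal sketch, with hypothetical TOC values chosen only for illustration:

```python
def mineralization_percent(toc_initial, toc_final):
    """Equation (3): % mineralization = (1 - TOC_final/TOC_initial) * 100."""
    return (1.0 - toc_final / toc_initial) * 100.0

# Illustrative numbers: an initial TOC of 50 ppm reduced to 24.5 ppm
# after illumination corresponds to 51% mineralization.
pct = mineralization_percent(50.0, 24.5)
```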
## 3. Results

### 3.1. X-Ray Diffraction Analysis

X-ray patterns of the photocatalysts are depicted in Figure 1(a). All the diffractograms of the samples containing ZnO exhibited sharp and strong peaks at 31.70°, 34.30°, and 36.20° (2θ), which correspond to the (100), (002), and (101) reflections, respectively, and agree with the characteristic peaks of the ZnO wurtzite-type hexagonal crystalline structure (JCPDS 36-1451). The high intensity of the (101) peak suggests anisotropic growth and orientation of the crystals [22, 23].

Figure 1 (a) XRD patterns of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 calcined at 500°C. t = tetragonal and m = monoclinic ZrO2; w = wurtzite ZnO. (b) XRD patterns of the samples evaluated in the region of 71–77° to identify the tetragonal doublets of ZrO2.

On the other hand, pristine ZrO2 showed broad peaks located at 28.20°, 30.20°, and 31.50° (2θ). The peak centered at 30.20° (101) is characteristic of the tetragonal crystalline phase (t-ZrO2) according to the JCPDS 79-1771 card, whereas those at 28.20° (−111) and 31.5° (111) are representative of the monoclinic phase (m-ZrO2, JCPDS 37-1484).
These results suggest a mixture of both tetragonal and monoclinic crystalline phases, which is commonly observed in ZrO2 materials calcined at similar temperatures [24, 25].

When ZnO was added to ZrO2, the peaks corresponding to the monoclinic phase were not observed, indicating inhibition of that phase. As the content of ZnO increases, the (101) reflection observed at 30.20° (2θ) appeared slightly shifted towards 30.38°, except for the 50ZnO-ZrO2 sample; therefore, the peaks observed for all the ZnO-ZrO2 materials are assigned to the presence of the tetragonal phase. To distinguish between the diffraction patterns of the cubic and tetragonal phases of ZrO2, the 2θ region at 71–77° was carefully examined. The asymmetric doublets at ~74° indicated the formation of tetragonal ZrO2 [26–28]. Figure 1(b) shows the tetragonal doublets for all ZnO-ZrO2 materials. Crystallite sizes and phase percentages for pure ZnO and ZrO2 and the ZnO-ZrO2 nanocomposites were determined using the Debye-Scherrer equation and the Garvie and Nicholson method. Because the monoclinic phase was not observed in the ZnO-ZrO2 composites, we considered only the integral intensities of the tetragonal and wurtzite peaks of ZrO2 and ZnO, respectively. The obtained values are presented in Table 1.

Table 1 Crystallite size and percentage of phase content of the nanocomposites obtained from the Debye-Scherrer equation and the Garvie and Nicholson method, respectively. m represents the monoclinic and t the tetragonal phase of ZrO2; w indicates the wurtzite structure of ZnO.

| Sample | Crystallite size (nm) | h k l | Phase content (%) |
|---|---|---|---|
| ZnO | 33.2 | (101)w | 100% |
| ZrO2 | 14.3 | (−111)m | 59.7% |
| | 15.4 | (101)t | 40.3% |
| 13ZnO-ZrO2 | 18.2 | (101)t | 100% |
| 25ZnO-ZrO2 | 14.5 | (101)t | 88.0% |
| | 44.7 | (101)w | 12.0% |
| 50ZnO-ZrO2 | 14.4 | (101)t | 82.0% |
| | 48.3 | (101)w | 18.0% |
| 75ZnO-ZrO2 | 13.8 | (101)t | 60.9% |
| | 44.6 | (101)w | 39.1% |

### 3.2. Raman Spectroscopy
Group theory predicts six Raman-active vibration modes for the tetragonal phase (A1g + 2B1g + 3Eg) and 18 for the monoclinic phase (9Ag + 9Bg) of ZrO2, whereas ZnO has 4 Raman-active modes, although splitting of the E2 modes into longitudinal optical (LO) and transversal optical (TO) components gives rise to 6 active modes. In Figure 2, the Raman spectra of the photocatalysts are presented. For ZnO, two peaks can be clearly observed, the first one at 99 cm−1 and the second at 434 cm−1, corresponding to the E2 mode characteristic of the wurtzite-type structure; two additional weak bands at 326 and 380 cm−1 were also observed, which are related to the 2-phonon and A1(TO) modes, respectively [29]. The ZrO2 spectrum showed several peaks located at 100, 176, 217, 306, 333, 380, 474, 501, 534, 553, 613, and 638 cm−1, which are very close to those reported for the monoclinic phase. The peaks attributed to the tetragonal structure were located at 142 and 265 cm−1, whereas two additional bands reported for this structure, around 318 and 461 cm−1, seemed to be overlapped with monoclinic signals; the peaks between 640 and 641 cm−1 are shared by the monoclinic and tetragonal structures [30, 31]. Since the cubic structure of zirconia usually exhibits one strong peak around 617 cm−1, the absence of this signal indicates a mixture of only two structures (monoclinic and tetragonal) in pristine ZrO2. For the ZnO-ZrO2 composites, the Raman spectra did not show sharp peaks. As the content of ZnO increased, we observed broadening of the peaks that correspond to tetragonal structure modes; this broadening is related to the decrease in the crystallite size of this crystalline phase, usually due to phonons associated with the nanosized particles. On the other hand, the absence of representative signals for the monoclinic structure in both the Raman spectra and the XRD patterns leads us to conclude that ZnO inhibited the formation of this structure in the ZnO-ZrO2 composites.
Additionally, Rietveld refinement of 13ZnO-ZrO2 was performed (Supplementary Materials, Figure S4 and Table S1). This analysis confirmed the absence of both solid solution and cubic crystalline phase formation; the estimated percentages of each crystalline structure are shown in Table 1.

Figure 2 Raman spectra of the ZnO-ZrO2 nanocomposites. (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2. m indicates monoclinic and t tetragonal structures of ZrO2, whereas z indicates the wurtzite crystalline structure of ZnO.

### 3.3. FTIR Analysis

Figure 3 shows the spectra of pure ZnO, ZrO2, and the ZnO-ZrO2 composites with different contents of ZnO. The FTIR spectra of all materials presented wide bands at 3410–3450 cm−1, which correspond to O-H stretching vibrations of water physically adsorbed on the catalyst surface [32]. Compared to the ZrO2 band, a shift of the O-H band to lower frequencies occurs as the percentage of ZnO in the ZnO-ZrO2 composites increases. The pure ZnO spectrum showed an intense band centered at 423 cm−1, characteristic of Zn-O vibrations [23, 32]. Two intense bands that appeared at 744 cm−1 and 576 cm−1 have been associated with vibrations of Zr-O in the monoclinic structure. An additional band located at 498 cm−1 was also present in the ZrO2 spectrum; this signal corresponds to Zr-O-Zr vibrations in the tetragonal structure [33–35], and it appears slightly shifted for all ZnO-ZrO2 nanocomposites. This behavior can be attributed to the addition of divalent oxides like ZnO (Zn2+) to ZrO2. Also, the incorporation of these oxides may produce a lattice deformation in the crystalline structure, with subsequent modification of the force constants of Zr–O and related bonds [33].

Figure 3 (a) FT-IR full spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites with different contents of ZnO. (b) FTIR region from 1300 to 400 cm−1.

### 3.4. Specific Surface Area
Figure 4 shows the N2 adsorption-desorption isotherms of the nanocomposites as well as their corresponding pore size distributions (insets). The isotherms of all the samples presented a type IV(a) shape according to the IUPAC classification [36], which corresponds to mesoporous structures where capillary condensation takes place, accompanied by hysteresis. The adsorbed volume is in all cases relatively low, which explains the observed values of specific surface area. It has been reported that ZnO usually exhibits poor BET surface areas, ranging from 1 to 15 m2/g, when no additives are used to improve this property. In our samples, pure ZnO showed SBET = 10 m2/g, while ZrO2 exhibited 46 m2/g.

Figure 4 N2 adsorption-desorption BET isotherms of (a) the pure oxides and (b) the different ZnO-ZrO2 composites; the insets represent the BJH pore size distributions from the desorption branches.

The isotherms of both pure oxides, ZnO and ZrO2, showed a narrow hysteresis loop, which for ZrO2 starts at 0.4 (P/P0) while for ZnO it occurs at higher relative pressures (0.8) (Figure 4(a)). When the composites were analyzed, the hysteresis loop for all compositions was slightly broader than for the pure oxides, indicating changes in porosity (Figure 4(b)). ZnO showed an H3-type hysteresis loop at high relative pressure. However, ZrO2 and all ZnO-ZrO2 composites exhibited H2-type hysteresis, which is associated with the presence of bottle-shaped mesopores and can be explained as a consequence of the interconnectivity of pores [1]. The specific surface area decreased from 46 to 8 m2/g when 25% of ZnO was incorporated, whereas for 75ZnO-ZrO2 the SBET value was 36 m2/g, and its N2 isotherm showed a broader hysteresis loop than those observed for the pure oxides.
An additional effect of ZnO incorporation was observed in the BJH pore size distribution: pristine ZrO2 showed a wide pore size distribution from meso- to macropores, whereas ZnO presented pores around 30 nm as well as macroporosity; when both oxides were coupled, the pore size distribution of all composites became unimodal, with very similar average pore sizes.

### 3.5. Diffuse Reflectance UV-Vis Spectroscopy

The absorption spectra of the oxides are depicted in Figure 5(a). The UV-Vis spectrum of ZnO shows an absorption edge at 370 nm, in good agreement with the literature [37]. This characteristic band can be assigned to the intrinsic band-gap absorption of ZnO due to electron transitions from the valence band to the conduction band (O2p → Zn3d) [38]. The ZrO2 spectrum showed a small absorption band at 208 nm and another large band at 228 nm, which appeared at lower wavelengths than the characteristic bands reported for ZrO2, usually observed at ~240 nm [39]. These bands correspond to the presence of Zr species such as tetrahedral Zr4+, and the absorption is electronically produced by the charge transfer transition of a valence band O2p electron to the conduction band Zr4d level (O2− → Zr4+) upon UV light excitation [40]. No other absorption band was observed in the UV-Vis region. It has been reported that identification of ZrO2 phases is possible using the UV range of DRS. The two bands observed in pure ZrO2 are typically observed since ZrO2 has two band-to-band transitions. According to Sahu and Rao [41], the first transition corresponds to values around 5.17–5.2 eV and can be associated with m-ZrO2. In the ZrO2 prepared here, we observed by XRD that both monoclinic and tetragonal phases are present in the pure oxide. The band at high energy cannot be observed when the content of ZnO increases, not only due to the disappearance of the monoclinic phase but also because the incorporation of ZnO affects the shape of the spectrum.
The spectrum of pure ZrO2 also depicts a small shoulder with an onset at 310 nm that can be attributed to the t-ZrO2 crystalline structure [42]. The shape as well as the intensity of this band changes as the content of ZnO increases and shifts slightly towards the lower energy region, due to the effect of ZnO.

Figure 5 (a) UV-Vis/DR absorption spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites; (b) Kubelka-Munk function of (A) ZrO2, (B) 13ZnO-ZrO2, (C) 25ZnO-ZrO2, (D) 50ZnO-ZrO2, and (E) 75ZnO-ZrO2 composites. (I) represents the first and (II) the second edge of the samples considered for the Eg estimations.

The addition of ZnO modified the absorption edge of ZrO2 towards lower values. A red shift towards the longer wavelength region was observed in the last band of all ZnO-ZrO2 materials, due to the introduction of energy levels in the interband gap [17]. The Kubelka-Munk function was used to estimate the band gap of the nanocomposites (Figure 5(b)). The band gap values of pure ZnO and ZrO2 were evaluated using n = 2 (direct transitions) and n = 1/2 (indirect transitions), respectively, and their values are summarized in Table S2 (Supplementary Materials). For the ZnO-ZrO2 materials, the presence of two edges instead of only one can be noticed; the band gap was estimated using n = 1/2. The obtained value for ZnO was 3.26 eV (Supplementary Materials, Figure S5), whereas 4.53 and 5.06 eV were calculated for ZrO2 (for the two edges observed in the corresponding spectrum); these values changed to 4.73, 4.35, 3.76, and 4.16 eV upon addition of the different contents of ZnO (indicated as I in Figure 5(b)), with 50ZnO-ZrO2 having the lowest value. With regard to the low-energy shoulder observed in the ZnO-ZrO2 spectra (indicated as II in Figure 5(b)), the calculated values were 3.07, 3.10, 3.15, and 3.16 eV for the composites with 13, 25, 50, and 75% of ZnO, respectively, allowing the nanocomposites to be excited at lower energy.
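The Tauc-plot procedure described above can be sketched numerically. The code below (NumPy, using a synthetic absorption edge rather than measured reflectance data) builds F(R) from reflectance and extrapolates the linear region of (F(R)·hν)^n to its intercept with the energy axis:

```python
import numpy as np

def kubelka_munk(reflectance):
    """F(R) = (1 - R)^2 / (2R) for diffuse reflectance R in (0, 1]."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

def tauc_band_gap(energy_ev, f_r, n=0.5):
    """Eg from the x-intercept of a linear fit to (F(R)*hv)^n vs hv.
    n = 2 for a direct and n = 1/2 for an indirect allowed transition.
    Assumes the supplied points lie on the linear rise above the edge."""
    y = (f_r * energy_ev) ** n
    slope, intercept = np.polyfit(energy_ev, y, 1)
    return -intercept / slope

# Synthetic indirect edge with Eg = 3.76 eV, for illustration only:
# (F(R)*hv)^(1/2) is made exactly linear above the gap, then inverted.
hv = np.linspace(3.8, 4.4, 50)
f_r = 4.0 * (hv - 3.76) ** 2 / hv
eg = tauc_band_gap(hv, f_r, n=0.5)   # recovers ~3.76 eV
```

In practice the fit window must be restricted by hand (or by an automated linearity criterion) to the straight portion of the Tauc plot; fitting the whole curve biases Eg.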
### 3.6. Electron Microscopy

The morphology of the nanocomposites was investigated using electron microscopy, and the images are shown in Figure 6. We observed that the ZnO nanoparticles are rod-shaped, with sizes ranging from 100 to 300 nm, and form agglomerates. On the other hand, ZrO2 is made of smaller particles, nonuniform in shape and size. In the 13ZnO-ZrO2 nanocomposite, we observed that the ZnO particles change in shape, whereas some ZrO2 agglomerates grew up to 1 μm, although smaller quasispherical particles around 300 nm were also observed. The most significant change was exhibited by the 75ZnO-ZrO2 nanocomposite, since agglomerates of particles with smaller sizes were obtained.

Figure 6 FESEM (a) and the corresponding HRTEM (b) micrographs of the pure oxides and ZnO-ZrO2 composites.

### 3.7. Photocatalytic Test

Prior to the tests, adsorption-desorption equilibrium was reached after stirring the suspensions for 20 minutes in the dark; in general, all the composites presented low adsorption of the pollutant. The results of photocatalytic degradation are shown in Figure 7(a). Here, we observed that pure ZrO2 degraded only 5% of the phenol even after 120 min; when ZnO was incorporated, a slight increase in photodegradation was observed with 13 and 25% of ZnO, but these samples barely degraded around 15% of the pollutant. By increasing the ZnO content up to 50% mol, 47% of the phenol was degraded, whereas 71% degradation was achieved using the 75ZnO-ZrO2 composite. For the prepared ZnO, 74% degradation was obtained during the same time. The kinetic parameters were also calculated assuming pseudo-first-order kinetics (Table 2), where we observed that t1/2 for the 75ZnO-ZrO2 composite was very close to that calculated for ZnO.

Figure 7 (a) Degradation curves of phenol by ZnO, ZrO2, and the different ZnO-ZrO2 composites (experimental conditions: phenol = 50 ppm, volume of phenol solution = 200 mL, and catalyst dosage = 200 mg). (b) TOC results obtained after 120 min of UV illumination.
(a) (b)Table 2 Kinetic parameters estimated from pseudofirst order kinetics. Photocatalyst k (min−1) R2 t1/2 (min) ZrO2 0.7×10−3 0.9585 1155 13ZnO-ZrO2 1.3×10−3 0.9193 533 25ZnO-ZrO2 1.5×10−3 0.9808 462 50ZnO-ZrO2 5.3×10−3 0.9939 130 75ZnO-ZrO2 10.3×10−3 0.9964 67 ZnO 11.2×10−3 0.9810 62It is well known that ZnO can be activated under UV-A light and generally exhibits good photodegradation rates, but it is also needed to assess the mineralization of the pollutant. For this purpose, TOC analysis was performed and the results are tabulated in Figure7(b). Pure ZrO2 mineralized 2.5% of phenol, but the mineralization increases with increasing ZnO content, which stabilizes tetragonal crystalline phases of ZrO2. Although ZnO reached 74% degradation, the mineralization of the pollutant was 40%, while 51% of mineralization was achieved with 75ZnOZrO2. Tetragonal ZrO2 has been reported as the most active polymorph of ZrO2 which also shows high selectivity in catalytic reactions. These results showed that at this concentration, ZnO-ZrO2 composite improved the mineralization of phenol when compared to pure ZnO. Besides, it was also confirmed that intermediaries like catechol, resorcinol, and hydroquinone were not generated during the photodegradation with 75ZnO-ZrO2.According to the obtained results, a schematic representation of phenol degradation is shown in Figure8. When the ZnO-ZrO2 composites were excited by Uv-A irradiation (365 nm), electrons migrate from the valence band (VB) to the conduction band (CB) of ZnO, leading to the formation of electron/hole (e−/h+) pairs. Since the energy levels of ZnO fit well into the band gap of ZrO2, the electrons from the CB of ZrO2 can easily be transferred to the CB of ZnO; conversely, the holes migrate from the VB of ZnO to the VB of ZrO2, and thereby the electron-hole pair recombination may be decreased in ZnO-ZrO2 composites. 
## 3.1. X-Ray Diffraction Analysis

X-ray patterns of the photocatalysts are depicted in Figure 1(a). All the diffractograms of the samples containing ZnO exhibited sharp and strong peaks at 31.70°, 34.30°, and 36.20° (2θ), which correspond to the (100), (002), and (101) reflections, respectively, and agree with the characteristic peaks of the ZnO wurtzite-type hexagonal crystalline structure (JCPDS 36-1451). The high intensity of the (101) peak suggests anisotropic growth and orientation of the crystals [22, 23].

Figure 1 (a) XRD pattern of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 calcined at 500°C. t = tetragonal and m = monoclinic ZrO2; w = wurtzite ZnO. (b) XRD pattern of the samples evaluated in the region of 71–77° to identify the tetragonal doublets of ZrO2.

On the other hand, pristine ZrO2 showed broad peaks located at 28.20°, 30.20°, and 31.50° (2θ). The peak centered at 30.20° (101) is characteristic of the tetragonal crystalline phase (t-ZrO2) according to the JCPDS 79-1771 card, whereas those at 28.20° (−111) and 31.50° (111) are representative of the monoclinic phase (m-ZrO2, JCPDS 37-1484). These results suggest a mixture of both tetragonal and monoclinic crystalline phases, which is commonly observed in ZrO2 materials calcined at similar temperatures [24, 25]. When ZnO was added to ZrO2, the peaks corresponding to the monoclinic phase were not observed, indicating inhibition of this phase.
As the content of ZnO increases, the (101) reflection observed at 30.20° (2θ) appeared slightly shifted towards 30.38°, except for the 50ZnO-ZrO2 sample; therefore, the peaks observed for all the ZnO-ZrO2 materials are assigned to the presence of the tetragonal phase. To distinguish between the diffraction patterns of the cubic and tetragonal phases of ZrO2, the 2θ region at 71–77° was carefully examined. The asymmetric doublets at ~74° indicated the formation of tetragonal ZrO2 [26–28]. Figure 1(b) shows the tetragonal doublets for all ZnO-ZrO2 materials. Crystallite size and percentage of phases for pure ZnO, pure ZrO2, and the ZnO-ZrO2 nanocomposites were determined using the Debye-Scherrer equation and the Garvie and Nicholson method. Because the monoclinic phase was not observed in the ZnO-ZrO2 composites, we considered only the integral intensities of the tetragonal and wurtzite peaks of ZrO2 and ZnO, respectively. The obtained values are presented in Table 1.

Table 1 Crystallite size and percentage of phase content of the nanocomposites obtained from the Debye-Scherrer equation and the Garvie and Nicholson method, respectively. m represents the monoclinic and t the tetragonal phase of ZrO2; w indicates the wurtzite structure of ZnO.

| Sample | Crystallite size (nm) | h k l | Phase content (%) |
| --- | --- | --- | --- |
| ZnO | 33.2 | (101) | 100 |
| ZrO2 | 14.3 | (101)m | 59.7 |
|  | 15.4 | (−111)t | 40.3 |
| 13ZnO-ZrO2 | 18.2 | (101)t | 100 |
| 25ZnO-ZrO2 | 14.5 | (101)t | 88.0 |
|  | 44.7 | (101)w | 12.0 |
| 50ZnO-ZrO2 | 14.4 | (101)t | 82.0 |
|  | 48.3 | (101)w | 18.0 |
| 75ZnO-ZrO2 | 13.8 | (101)t | 60.9 |
|  | 44.6 | (101)w | 39.1 |

## 3.2. Raman Spectroscopy

Group theory predicts six Raman-active vibration modes for tetragonal ZrO2 (A1g + 2B1g + 3Eg) and 18 for monoclinic ZrO2 (9Ag + 9Bg), whereas ZnO has 4 Raman-active modes, although splitting of the E2 modes into longitudinal optical (LO) and transversal optical (TO) components gives rise to 6 active modes. In Figure 2, Raman spectra of the photocatalysts are presented.
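The crystallite sizes and phase contents reported in Table 1 follow from the Debye-Scherrer equation, D = Kλ/(β cos θ), and an integral-intensity ratio in the spirit of the Garvie and Nicholson method. A minimal sketch, using illustrative peak widths and intensities (the paper's raw FWHM and intensity values are not reported, and Cu Kα radiation is an assumption):

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Debye-Scherrer crystallite size D = K*lambda / (beta * cos(theta)).
    fwhm_deg is the peak width in degrees after instrumental correction;
    the wavelength defaults to Cu K-alpha (an assumption, not stated here)."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

def phase_content_percent(integral_intensities):
    """Phase fractions from integral peak intensities (a Garvie-Nicholson-style
    ratio, simplified here to the two reflections actually considered)."""
    total = sum(integral_intensities.values())
    return {phase: 100.0 * i / total for phase, i in integral_intensities.items()}

# Illustrative inputs: a ~0.6 degree-wide t-ZrO2 (101) peak at 2-theta = 30.2 degrees
size = scherrer_size_nm(fwhm_deg=0.60, two_theta_deg=30.2)
content = phase_content_percent({"t-ZrO2 (101)": 820, "ZnO (101)w": 180})
```

With these hypothetical inputs the functions return a size near 14 nm and an 82/18 phase split, i.e., values of the same order as the 50ZnO-ZrO2 row of Table 1.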
For ZnO, two peaks can be clearly observed, the first at 99 cm−1 and the second at 434 cm−1, corresponding to the E2 modes characteristic of the wurtzite-type structure; two additional weak bands at 326 and 380 cm−1 were also observed, which are related to a 2-phonon process and the A1(TO) mode, respectively [29]. The ZrO2 spectrum showed several peaks located at 100, 176, 217, 306, 333, 380, 474, 501, 534, 553, 613, and 638 cm−1, which are very close to those reported for the monoclinic phase. The peaks attributed to the tetragonal structure were located at 142 and 265 cm−1, whereas two additional bands reported for this structure, around 318 and 461 cm−1, seemed to be overlapped with monoclinic signals; the peaks between 640 and 641 cm−1 are shared by the monoclinic and tetragonal structures [30, 31]. Since the cubic structure of zirconia usually exhibits one strong peak around 617 cm−1, the absence of this signal indicates the presence of a mixture of only two structures (monoclinic and tetragonal) in pristine ZrO2. For the ZnO-ZrO2 composites, the Raman spectra did not show sharp peaks. As the content of ZnO increased, we observed broadening of the peaks that correspond to tetragonal structure modes; this broadening is related to the decrease in the crystallite size of this crystalline phase, usually attributed to phonon confinement in nanosized particles. On the other hand, the absence of representative signals of the monoclinic structure in both the Raman spectra and the XRD patterns leads us to conclude that ZnO inhibited the formation of this structure in the ZnO-ZrO2 composites. Additionally, Rietveld refinement of 13ZnO-ZrO2 was performed (Supplementary Materials, Figure S4 and Table S1). This analysis confirmed the absence of both solid solution and cubic crystalline phase formation; the estimated percentages of each crystalline structure are shown in Table 1.

Figure 2 Raman spectra of ZnO-ZrO2 nanocomposites. (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2.
m indicates monoclinic and t tetragonal structures of ZrO2, whereas z indicates the wurtzite crystalline structure of ZnO.

## 3.3. FTIR Analysis

Figure 3 shows the spectra of pure ZnO, ZrO2, and the ZnO-ZrO2 composites with different contents of ZnO. The FTIR spectra of all materials presented wide bands at 3410–3450 cm−1, which correspond to O-H stretching vibrations of water physically adsorbed on the catalyst surface [32]. Compared to the ZrO2 band, a shift of the O-H band to lower frequencies occurs as the percentage of ZnO increases in the ZnO-ZrO2 composites. The pure ZnO spectrum showed an intense band centered at 423 cm−1, characteristic of Zn-O vibrations [23, 32]. Two intense bands that appeared at 744 and 576 cm−1 have been associated with vibrations of Zr-O in the monoclinic structure. An additional band located at 498 cm−1 was also present in the ZrO2 spectrum; this signal corresponds to Zr-O-Zr vibrations in the tetragonal structure [33–35] and appears slightly shifted for all ZnO-ZrO2 nanocomposites. This behavior can be attributed to the addition of divalent oxides like ZnO (Zn2+) to ZrO2. Also, the incorporation of these oxides may produce a lattice deformation in the crystalline structure, with subsequent modification of the force constants of Zr-O and related bonds [33].

Figure 3 (a) FT-IR full spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites with different contents of ZnO. (b) FTIR region from 1300 to 400 cm−1.

## 3.4. Specific Surface Area

Figure 4 shows the N2 adsorption-desorption isotherms of the nanocomposites as well as their corresponding pore size distributions (insets). The isotherms of all the samples presented a type IV(a) shape according to the IUPAC classification [36], which corresponds to mesoporous structures where capillary condensation takes place and is accompanied by hysteresis.
The adsorbed volume in all cases is relatively low, which explains the observed values of specific surface area. It has been reported that ZnO usually exhibits poor BET surface areas, ranging from 1 to 15 m2/g, when no additives are used to improve this property. In our samples, pure ZnO showed SBET = 10 m2/g, while ZrO2 exhibited 46 m2/g.

Figure 4 N2 adsorption-desorption BET isotherms of (a) pure oxides and (b) different ZnO-ZrO2 composites; the insets represent BJH pore size distributions from the desorption branches.

The isotherms of both pure oxides, ZnO and ZrO2, showed a narrow hysteresis loop, which for ZrO2 starts at P/P0 = 0.4 and for ZnO at a higher relative pressure (0.8) (Figure 4(a)). For the composites, the hysteresis loops of all compositions were slightly broader than those of the pure oxides, indicating changes in porosity (Figure 4(b)). ZnO showed an H3-type hysteresis loop at high relative pressure. However, ZrO2 and all ZnO-ZrO2 composites exhibited H2-type hysteresis, which is associated with the presence of bottle-shaped mesopores and can be explained as a consequence of the interconnectivity of pores [1]. The specific surface area decreased from 46 to 8 m2/g when 25% of ZnO was incorporated; for 75ZnO-ZrO2, the SBET value was 36 m2/g, and its N2 isotherm showed a broader hysteresis loop than those observed for the pure oxides. An additional effect of ZnO incorporation was observed in the BJH pore size distribution: pristine ZrO2 showed a wide pore size distribution from meso- to macropores, whereas ZnO presented pores around 30 nm and also macroporosity; when both oxides were coupled, the pore size distributions of all composites were unimodal with very similar average pore sizes.

## 3.5. Diffuse Reflectance UV-Vis Spectroscopy

The absorption spectra of the oxides are depicted in Figure 5(a). The UV-Vis spectrum of ZnO shows an absorption edge at 370 nm, which is in good agreement with the literature [37].
This characteristic band can be assigned to the intrinsic band-gap absorption of ZnO due to electron transitions from the valence band to the conduction band (O2p → Zn3d) [38]. The ZrO2 spectrum showed a small absorption band at 208 nm and another large band at 228 nm, which appeared at lower wavelengths than the characteristic bands reported for ZrO2, usually observed at ~240 nm [39]. These bands correspond to the presence of Zr species as tetrahedral Zr4+ and are electronically produced by the charge-transfer transition of valence band O2p electrons to the conduction band Zr4d level (O2− → Zr4+) upon UV light excitation [40]. No other absorption band was observed in the UV-Vis region. It has been reported that the identification of ZrO2 phases is possible using the UV range of DRS. The two bands observed in pure ZrO2 are typically observed since ZrO2 has two band-to-band transitions. According to Sahu and Rao [41], the first transition corresponds to values around 5.17–5.2 eV that can be associated with m-ZrO2. In the ZrO2 prepared here, we observed by XRD that monoclinic and tetragonal phases are present in the pure oxide. The band at high energy cannot be observed when the content of ZnO increases, not only due to the disappearance of the monoclinic phase but also because the incorporation of ZnO affects the shape of the spectrum. The spectrum of pure ZrO2 also depicts a small shoulder with an onset at 310 nm that can be attributed to the t-ZrO2 crystalline structure [42]. The shape as well as the intensity of this band changes as the content of ZnO increases and shifts slightly towards the lower energy region, due to the effect of ZnO.

Figure 5 (a) UV-Vis/DR absorption spectra of (A) ZnO, (B) ZrO2, (C) 13ZnO-ZrO2, (D) 25ZnO-ZrO2, (E) 50ZnO-ZrO2, and (F) 75ZnO-ZrO2 composites; (b) Kubelka-Munk function of (A) ZrO2, (B) 13ZnO-ZrO2, (C) 25ZnO-ZrO2, (D) 50ZnO-ZrO2, and (E) 75ZnO-ZrO2 composites.
(I) represents the first and (II) the second edge of the samples considered for Eg estimations.

The addition of ZnO modified the absorption edge of ZrO2 towards lower values. A red shift towards the longer wavelength region was observed in the last band of all ZnO-ZrO2 materials, due to the introduction of energy levels in the interband gap [17]. The Kubelka-Munk function was used to estimate the band gap of the nanocomposites (Figure 5(b)). The band gap values of pure ZnO and ZrO2 were evaluated using n = 2 (direct transitions) and n = 1/2 (indirect transitions), respectively, and their values are summarized in Table S2 (Supplementary Materials). For the ZnO-ZrO2 materials, the presence of two edges instead of only one can be noticed; the band gap was estimated using n = 1/2. The obtained value for ZnO was 3.26 eV (Supplementary Materials, Figure S5), whereas 4.53 and 5.06 eV were calculated for ZrO2 (for the two edges observed in the corresponding spectrum). These values changed to 4.73, 4.35, 3.76, and 4.16 eV upon the addition of different contents of ZnO (indicated as I in Figure 5(b)), with 50ZnO-ZrO2 showing the lowest value. With regard to the low-energy shoulder observed in the ZnO-ZrO2 spectra (indicated as II in Figure 5(b)), the calculated values were 3.07, 3.10, 3.15, and 3.16 eV for the composites with 13, 25, 50, and 75% of ZnO, respectively, allowing the nanocomposites to be excited at lower energy.

## 3.6. Electron Microscopy

The morphology of the nanocomposites was investigated using electron microscopy, and the images are shown in Figure 6. We observed that the ZnO nanoparticles are rod-like, with sizes ranging from 100 to 300 nm, and form agglomerates. On the other hand, ZrO2 is made of smaller particles nonuniform in shape and size. In the 13ZnO-ZrO2 nanocomposite, we observed that the ZnO particles change in shape, whereas some ZrO2 agglomerates grew up to 1 μm, but smaller quasispherical particles around 300 nm were also observed.
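The band-gap estimation of Section 3.5 follows the usual Kubelka-Munk/Tauc procedure: diffuse reflectance is converted to F(R) = (1 − R)²/2R, and the linear part of (F(R)·hν)ⁿ versus hν is extrapolated to zero. A sketch on synthetic data (the edge position, fitting window, and reflectance curve below are illustrative, not the paper's spectra):

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R), with R as a fraction."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

def tauc_band_gap_eV(energy_eV, reflectance, n=0.5, fit_window=(3.2, 3.4)):
    """Fit a line to (F(R)*E)^n over the chosen linear region and return its
    x-intercept as the band-gap estimate (n = 2 for direct and n = 1/2 for
    indirect transitions, following the convention used in the text)."""
    e = np.asarray(energy_eV, dtype=float)
    y = (kubelka_munk(reflectance) * e) ** n
    mask = (e >= fit_window[0]) & (e <= fit_window[1])
    slope, intercept = np.polyfit(e[mask], y[mask], 1)
    return -intercept / slope

# Synthetic reflectance with an absorption edge near 3.2 eV (illustrative only)
e = np.linspace(2.8, 3.8, 200)
r = 0.05 + 0.90 / (1.0 + np.exp(6.0 * (e - 3.2)))
eg = tauc_band_gap_eV(e, r, n=0.5)
```

The estimate is only as good as the choice of the linear fitting window, which is why the text distinguishes two edges (I and II) per composite spectrum.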
The most significant change was exhibited by the 75ZnO-ZrO2 nanocomposite, since agglomerates of particles with smaller sizes were obtained.

Figure 6 FESEM (a) and the corresponding HRTEM (b) micrographs of the pure oxides and ZnO-ZrO2 composites.

## 3.7. Photocatalytic Test

Prior to the test, adsorption-desorption equilibrium was reached after stirring the suspensions for 20 minutes in the dark; in general, all the composites presented low adsorption of the pollutant. The results of photocatalytic degradation are shown in Figure 7(a). Here, we observed that pure ZrO2 degraded only 5% of the phenol even after 120 min; when ZnO was incorporated, a slight increase in photodegradation was observed with 13 and 25% of ZnO, but these composites barely degraded around 15% of the pollutant. By increasing the ZnO content to 50 mol%, 47% of the phenol was degraded, whereas 71% degradation was achieved using the 75ZnO-ZrO2 composite. For the prepared ZnO, 74% degradation was obtained during the same time. The kinetic parameters were also calculated assuming pseudo-first-order kinetics (Table 2), where we observed that t1/2 for the 75ZnO-ZrO2 composite was very close to that calculated for ZnO.

Figure 7 (a) Degradation curves of phenol by ZnO, ZrO2, and different ZnO-ZrO2 composites (experimental conditions: phenol = 50 ppm, volume of phenol solution = 200 mL, and catalyst dosage = 200 mg). (b) TOC results obtained after 120 min of UV illumination.

Table 2 Kinetic parameters estimated from pseudo-first-order kinetics.

| Photocatalyst | k (min−1) | R2 | t1/2 (min) |
| --- | --- | --- | --- |
| ZrO2 | 0.7×10−3 | 0.9585 | 1155 |
| 13ZnO-ZrO2 | 1.3×10−3 | 0.9193 | 533 |
| 25ZnO-ZrO2 | 1.5×10−3 | 0.9808 | 462 |
| 50ZnO-ZrO2 | 5.3×10−3 | 0.9939 | 130 |
| 75ZnO-ZrO2 | 10.3×10−3 | 0.9964 | 67 |
| ZnO | 11.2×10−3 | 0.9810 | 62 |

It is well known that ZnO can be activated under UV-A light and generally exhibits good photodegradation rates, but it is also necessary to assess the mineralization of the pollutant.
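The entries in Table 2 are internally consistent with first-order decay, C(t) = C0·e^(−kt), for which t1/2 = ln 2 / k; applying the fitted rate constants over 120 min also reproduces, within rounding, the degradation percentages quoted in the text. A quick check:

```python
import math

def half_life_min(k_per_min):
    """Half-life of a (pseudo-)first-order reaction: t_1/2 = ln(2) / k."""
    return math.log(2.0) / k_per_min

def conversion_percent(k_per_min, t_min):
    """Percent degradation after t minutes for C(t) = C0 * exp(-k t)."""
    return 100.0 * (1.0 - math.exp(-k_per_min * t_min))

# Rate constants taken from Table 2 (min^-1)
for name, k in [("50ZnO-ZrO2", 5.3e-3), ("75ZnO-ZrO2", 10.3e-3), ("ZnO", 11.2e-3)]:
    print(f"{name}: t1/2 = {half_life_min(k):.0f} min, "
          f"{conversion_percent(k, 120):.0f}% degraded at 120 min")
```

For example, k = 10.3×10−3 min−1 gives t1/2 ≈ 67 min and ≈71% conversion at 120 min, matching the 75ZnO-ZrO2 row and the degradation reported in the text.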
For this purpose, TOC analysis was performed, and the results are shown in Figure 7(b). Pure ZrO2 mineralized 2.5% of the phenol, but the mineralization increases with increasing ZnO content, which stabilizes the tetragonal crystalline phase of ZrO2. Although ZnO reached 74% degradation, the mineralization of the pollutant was 40%, while 51% mineralization was achieved with 75ZnO-ZrO2. Tetragonal ZrO2 has been reported as the most active polymorph of ZrO2, which also shows high selectivity in catalytic reactions. These results showed that, at this concentration, the ZnO-ZrO2 composite improved the mineralization of phenol when compared to pure ZnO. Besides, it was also confirmed that intermediates like catechol, resorcinol, and hydroquinone were not generated during the photodegradation with 75ZnO-ZrO2.

According to the obtained results, a schematic representation of phenol degradation is shown in Figure 8. When the ZnO-ZrO2 composites were excited by UV-A irradiation (365 nm), electrons migrate from the valence band (VB) to the conduction band (CB) of ZnO, leading to the formation of electron/hole (e−/h+) pairs. Since the energy levels of ZnO fit well into the band gap of ZrO2, the electrons from the CB of ZrO2 can easily be transferred to the CB of ZnO; conversely, the holes migrate from the VB of ZnO to the VB of ZrO2, and thereby electron-hole pair recombination may be decreased in the ZnO-ZrO2 composites. These e− and h+ react with water and oxygen to produce hydroxyl (OH⋅) radicals, which are very reactive and can easily oxidize the phenol to CO2 and water. As a result, we obtained an enhancement in the photocatalytic performance of phenol degradation by the composite with the highest ZnO percentage [8, 19, 20, 43].

Figure 8 Schematic representation of the photocatalytic mechanism of electron-hole pair separation in ZnO-ZrO2 composites for the degradation of phenol.

## 4. Discussion

It is well known that the catalytic and photocatalytic performance of ZrO2 can be affected by its crystalline phases. Although there are some reports on the catalytic activity of ZnO-ZrO2 materials, deep structural studies have not been performed so far. The fact that both the tetragonal and cubic phases exhibit similar XRD patterns makes it difficult to discern these structures based only on X-ray diffraction analysis, and Rietveld refinement and Raman spectroscopy are good tools for resolving this ambiguity. In this work, we observed that by adding ZnO only the t-ZrO2 phase was obtained, a phenomenon also supported by the Raman results. In all composites, t-ZrO2 was detected, as well as the zincite (wurtzite-type) structure of ZnO. Formation of the nanocomposite influenced the crystallite size of both ZnO and ZrO2, increasing for ZnO but decreasing for t-ZrO2 as a function of the ZnO-ZrO2 ratio. The absence of a solid solution could be explained not only by the differences in valence between the ions, which depends on the amount of Zn2+ species [44], but also by the synthesis procedure reported here. Lattice parameters of the 13ZnO-ZrO2 nanocomposite are provided in the Supplementary Materials. We observed small changes when compared to pure zirconia; this can be attributed to the presence of the divalent oxide, since incorporation of this type of oxide causes lattice deformation in the crystalline structure of ZrO2, with subsequent modification of the force constants of Zr-O and related bonds [33].

Since the band gap is one important feature to consider in photoactivity, many attempts to improve this property have been made. By coupling ZnO and ZrO2, radiation of lower energy can be absorbed by the ZnO-ZrO2 composites. From the calculated Eg values, it can be assumed that the energy levels of both the valence band (VB) and conduction band (CB) of ZnO fit within the band gap of ZrO2.
When the electrons are excited, most of the electrons from the conduction band of ZrO2 can be easily transferred to the conduction band of ZnO; thus, the effective band gap may be decreased, indicating that the ZnO-ZrO2 nanoparticles have a suitable band gap to generate excited electron/hole pairs [17], allowing the use of simulated solar radiation. Both oxides, ZnO and ZrO2, are well known for their photocatalytic properties, but one of their limitations is the need for UV light for their activation, especially ZrO2, which usually requires UV radiation due to its wide band gap. Here, we investigated the effect of ZnO on the photocatalytic performance of ZrO2 under simulated solar light. Since the use of pure ZnO leads to the formation of several compounds during the photoreaction, it is remarkable that the 75ZnO-ZrO2 nanocomposite led to a reaction without formation of any intermediates, which represents the main advantage of using ZnO-ZrO2.

## 5. Conclusions

So far, ZnO-ZrO2 materials have been reported for several catalytic reactions with enhanced performance compared to their pristine counterparts. Recently, this type of composite has also been studied as a photocatalyst with promising results, but a full understanding of how their properties relate to their composition is needed. In this work, we prepared ZnO-ZrO2 composites using zinc (II) acetylacetonate and zirconium n-butoxide as raw materials. We observed that the synthesis procedure strongly affected the stabilization of zirconia polymorphs, where ZnO plays an important role in inhibiting the monoclinic structure of ZrO2 and stabilizing the tetragonal phase. By coupling ZnO to ZrO2, we observed significant changes in the absorption behavior of ZrO2, shifting its absorption edges in the UV region toward lower energies. The pore distribution of the composite was strongly changed by the interaction of both oxides, leading to a larger amount of mesopores than observed for the uncoupled oxides.
The photocatalytic tests revealed that the 75ZnO-ZrO2 composite exhibited good performance in the degradation of phenol under simulated solar radiation, improved the mineralization reached by pure ZnO and ZrO2, and inhibited the formation of undesirable intermediates usually obtained during the photocatalytic degradation of phenol.

---

*Source: 1015876-2019-01-10.xml*
2019
# Intraplatelet L-Arginine-Nitric Oxide Metabolic Pathway: From Discovery to Clinical Implications in Prevention and Treatment of Cardiovascular Disorders

**Authors:** Jakub Gawrys; Damian Gajecki; Ewa Szahidewicz-Krupska; Adrian Doroszko
**Journal:** Oxidative Medicine and Cellular Longevity (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1015908

---

## Abstract

Despite the development of new drugs and other therapeutic strategies, cardiovascular disease (CVD) still remains the major cause of morbidity and mortality in the world population. A great deal of research, performed mostly in the last three decades, revealed an important correlation between "classical" demographic and biochemical risk factors for CVD (i.e., hypercholesterolemia, hyperhomocysteinemia, smoking, renal failure, aging, diabetes, and hypertension) and endothelial dysfunction associated directly with nitric oxide deficiency. The discovery of nitric oxide and its recognition as an endothelium-derived relaxing factor was a breakthrough in understanding the pathophysiology and development of cardiovascular system disorders. The nitric oxide synthesis pathway, its regulation, and its association with cardiovascular risk factors have been a common subject of research during the last decades. Since nitric oxide synthase, especially its endothelial isoform, plays a crucial role in the regulation of NO bioavailability, inhibiting its function increases the cardiovascular risk. Among agents altering the production of nitric oxide, asymmetric dimethylarginine—the competitive inhibitor of NOS—appears to be the most important. In this review paper, we summarize the role of the L-arginine-nitric oxide pathway in cardiovascular disorders, with a focus on intraplatelet metabolism.

---

## Body

## 1. Introduction

After establishing the real nature of EDRF by Furchgott et al.
[1, 2], which appeared to be nitric oxide (NO), numerous other groups worked on the nitric oxide synthesis pathway and its potential role in human (patho)physiology. This led to the discovery of nitric oxide synthase [3], which produces nitric oxide from L-arginine with flavin adenine dinucleotide (FAD), flavin mononucleotide (FMN), tetrahydrobiopterin (BH4), and heme with a zinc atom as cofactors. From that time, numerous functions of NO were established, which can generally be divided into three groups: (1) a group associated with neuronal transmission, where NO plays an inhibitory role as a mediator in peripheral nonadrenergic noncholinergic (NANC) neurotransmission (causing relaxation mainly in the gastrointestinal tract, penile corpus cavernosum, and bladder) [4]; (2) a group playing an inflammatory role, where NO is produced by the inducible isoform of nitric oxide synthase (iNOS); and (3) a group related to the cardiovascular system.

## 2. Nitric Oxide in Cardiovascular Disorders

Despite the development of new drugs and other therapeutic strategies, cardiovascular disease (CVD) still remains the major cause of morbidity and mortality in the world population [5]. A great deal of research, performed mostly in the last three decades, revealed an important correlation between "classical" demographic and biochemical risk factors for CVD (i.e., hypercholesterolemia [6], hyperhomocysteinemia [7], smoking [8], renal failure [9], aging [10], diabetes [11], and hypertension [12]) and endothelial dysfunction associated directly with nitric oxide deficiency. In the vascular endothelium, NO is produced by the endothelial isoform of nitric oxide synthase (eNOS = NOS3), which is constitutively active, allowing the maintenance of appropriate vascular tone by constant vasodilating action [13]. The other functions of NO are inhibition of platelet aggregation, inhibition of smooth muscle proliferation, and inhibition of leucocyte interaction with the vascular wall [14].
All of these properties place nitric oxide as a key modulator of vascular homeostasis. Nowadays, endothelial dysfunction, defined as a reduction in endothelial NO bioavailability, can be measured noninvasively by the change in blood flow (e.g., EndoPAT 2000 and brachial flow-mediated dilation) or by the response to appropriate agonists (e.g., the reaction to acetylcholine administered by iontophoresis, measured by laser Doppler flowmetry) [15]. There are several mechanisms which can limit the bioavailability of NO. One of them is a decrease in eNOS expression in endothelial cells, which occurs in advanced atherosclerosis [16] and in smokers [17]. Decreased NO production can also be an effect of a deficiency of L-arginine or of nitric oxide synthase cofactors. Many studies have been performed on oxidative stress as a factor limiting NO bioavailability [18]. An imbalance between the creation of reactive oxygen species (ROS) and their scavenging by antioxidants promotes the reaction between NO and O2−, which results in peroxynitrite formation. Peroxynitrite is a potent oxidative compound which promotes posttranslational modifications of proteins (including the eNOS protein) [19], alterations in the main metabolic pathways [20], or eNOS uncoupling, which results in the production of superoxide anion instead of NO [21, 22]. Increased formation of peroxynitrite and other reactive oxygen species has been demonstrated in established cardiovascular system disorders [23] and is associated with a vast majority of CVD risk factors such as hypertension [24], diabetes [25], tobacco use [26], and hypercholesterolemia [27]. Another mechanism responsible for nitric oxide deficiency, which has been deeply investigated, is connected with competitive inhibition of nitric oxide synthase by asymmetric dimethylarginine (ADMA)—a naturally occurring amino acid circulating in plasma and present in various tissues and cells.

## 3. ADMA as the Most Potent Inhibitor of the L-Arginine-Nitric Oxide Pathway

The first mention of asymmetric dimethylarginine comes from the study by Kakimoto and Akazawa, who isolated its crystalline form, among other substances, by ion-exchange chromatography of the aliphatic basic amino acid fraction of human urine [28]. Given that its concentration in urine is not affected by orally administered arginine, the authors assumed that this compound may derive from the proteolysis of endogenous proteins. In 1992, Leone et al. proposed its potential pathophysiological role by providing in vitro and in vivo evidence that ADMA inhibits NO synthesis [29]. In addition, they described the accumulation of dimethylarginines due to the lack of urine production in patients with end-stage chronic renal failure as a potential mechanism of hypertension and immune dysfunction in this group of patients.

Methylated derivatives of arginine are produced as a result of proteolysis of endogenous methylated proteins, e.g., histones. This methylation is catalysed by two isoforms of the protein arginine methyltransferases (PRMTs)—PRMT-1 and PRMT-2—with S-adenosylmethionine as the donor of methyl residues. Through the action of PRMT-1—the main isoform present in the vascular wall (endothelial and smooth muscle cells)—asymmetrically dimethylated and monomethylated arginine residues are formed. PRMT-2 is also capable of mono- and dimethylation of arginine residues, but in this case the residues are dimethylated symmetrically [30, 32]. After protein degradation, methylarginine compounds appear first in the cytosol and subsequently in plasma [31]. Monomethylated arginine (L-NMMA) and asymmetric dimethylarginine (ADMA) are inhibitors of all nitric oxide synthase isoforms, whereas symmetric dimethylarginine (SDMA) is not (Figure 1).
The inhibitory effect of ADMA and L-NMMA on NOS is similar [33], but considering that the plasma concentration of ADMA is up to tenfold higher than that of L-NMMA, ADMA has been an object of research for the last decades. The inhibition of NOS may not be the only effect of asymmetric dimethylarginine in humans. There are reports that, at high concentrations, both ADMA and SDMA may compete with arginine for transport through the y+ amino acid transporter [34] and may also inhibit the Na+/K+ ATPase [35]. However, the concentrations required for these actions seem to be too high to be clinically relevant. Murray-Rust et al. proposed another potential target for ADMA, the arginine-glycine amidinotransferase. The structure of this enzyme is similar to that of the dimethylarginine dimethylaminohydrolases (DDAHs) that metabolize ADMA [36]. However, Vallance and Leiper suggest that ADMA is only a poor inhibitor of this transferase [37].

Figure 1 Synthesis of ADMA from methylated proteins. SAM: S-adenosylmethionine; SAH: S-adenosylhomocysteine; PRMT-1: protein arginine methyltransferase-1; PRMT-2: protein arginine methyltransferase-2; SDMA: symmetric dimethylarginine; L-NMMA: monomethylated arginine; ADMA: asymmetric dimethylarginine; NOS: nitric oxide synthase; NO: nitric oxide. Based on [30–32].

All of the methylated arginine derivatives are eliminated by the kidneys. In contrast to SDMA, which is excreted completely by the kidneys, ADMA and L-NMMA are also degraded by DDAH [38, 40]. As a result, citrulline and monomethylamines are formed. The catalytic site of DDAH involves a cysteine residue. Its nitrosylation by reactive nitrogen species renders the enzyme inactive, which can be a homeostatic mechanism, especially in reactions involving the inducible NOS (iNOS): increased production of NO leads to the accumulation of ADMA by inhibiting DDAH [39] (Figure 2).
In addition, this cysteine residue is susceptible to the action of a number of oxidative stress-related cardiovascular risk factors, such as hypercholesterolemia, hypertension, renal failure [41], hyperhomocysteinemia, hyperglycaemia [42], and tobacco smoking [43], which also results in the accumulation of ADMA. This can be a pathway allowing different factors to affect endothelial function [44]. There are two known isoforms of DDAH. DDAH-1 accompanies the neuronal NOS and is present in the liver, kidneys, and lungs (its action contributes to the circulating ADMA concentration), while DDAH-2 is present in tissues expressing endothelial and inducible NOS and dominates in vessels, especially in the endothelium and smooth muscle cells (its role is connected with the local regulation of the amount of ADMA) [38, 45].

Figure 2 Potential homeostatic mechanism of autoregulation of nitric oxide production. NO: nitric oxide; NOS: nitric oxide synthase; ADMA: asymmetric dimethylarginine; DDAH: dimethylarginine dimethylaminohydrolase; ∗: reactions involving inducible nitric oxide synthase. Based on [38, 39].

The ADMA concentrations associated with a biologic action are approximately 10-fold higher than those observed in plasma under physiological conditions. This suggests that even if the plasma concentration of ADMA reflects its amount in the whole body, the same concentrations need not be present in all tissues [31, 46]. In the study by Cardounel et al., in which the inhibition of NO generation by bovine endothelial cells was measured, the effect of raising ADMA concentrations was greater than expected on the basis of kinetic studies [47]. This suggests that there should be a mechanism of methylarginine uptake by cells, which, according to Bogle et al., could be the y+ transport system [34].
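Because ADMA and L-NMMA act as competitive NOS inhibitors, their kinetic effect can be sketched with the textbook Michaelis-Menten rate law for competitive inhibition. The Vmax, Km, and Ki values below are illustrative placeholders, not measured constants for any NOS isoform:

```python
# Michaelis-Menten velocity with a competitive inhibitor:
#   v = Vmax * [S] / (Km * (1 + [I]/Ki) + [S])
# Parameter values are illustrative assumptions, not NOS data.

def nos_velocity(s, inhibitor=0.0, vmax=1.0, km=5.0, ki=1.0):
    """Reaction velocity for substrate concentration s with a
    competitive inhibitor at concentration `inhibitor`."""
    return vmax * s / (km * (1.0 + inhibitor / ki) + s)

# Hallmark of competitive inhibition: a large excess of substrate
# (here standing in for L-arginine) restores velocity toward Vmax.
v_inhibited = nos_velocity(5.0, inhibitor=2.0)
v_rescued = nos_velocity(100.0, inhibitor=2.0)
```

Under this rate law, inhibition is overcome by raising the substrate concentration, which is the kinetic intuition behind the L-arginine/ADMA ratio and the "L-arginine paradox" discussed in the surrounding text.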
The regulation of transport may be the explanation for the “L-arginine paradox.” This term refers to situations in which exogenous administration of L-arginine enhances endothelial vasodilatory function even though the plasma concentration of L-arginine is already up to 30-fold higher than necessary to completely saturate NOS [48]. Under physiological conditions, the plasma level of ADMA is insufficient to compete with L-arginine for transport through the endothelial cell membrane [49]. However, in subjects with developed CVD or with a high CVD risk profile, elevated ADMA concentrations are able to affect eNOS activity. The restoration of eNOS activity and the improvement of endothelial vasodilatory function by the addition of exogenous L-arginine in pathological conditions, without effects in healthy subjects, point to the L-arginine/ADMA ratio, rather than the L-arginine and ADMA concentrations alone, as the main factor regulating NO bioavailability [50, 51].

## 4. The Role of ADMA in Cardiovascular Disease Development

After the discovery of ADMA and the establishment of its function in the L-arginine → nitric oxide → cGMP pathway, research focused on the connection between elevated ADMA concentrations and CVD and the classic cardiovascular risk factors. One of the first studies evaluating the role of ADMA was performed by Bode-Böger et al. They showed that elevated concentrations of asymmetric dimethylarginine are found in hypercholesterolemic rabbits and that this is the first biochemical abnormality observed at the early stage of atherosclerosis [52]. The following studies led to the discovery that elevated ADMA plasma concentrations are present in humans with hypercholesterolemia and with vascular disease. This finding was associated with endothelial dysfunction and impairment of NO production, measured by lower excretion of nitrates in the urine and worse NO-dependent forearm vasodilation [53].
This led to the conclusion that an elevated ADMA concentration is an early marker of endothelial dysfunction, itself known as a prognostic marker of severe cardiovascular events. At the end of the previous century, Cardillo et al. discovered that hypertension is associated with a defect in NO synthesis [54]. As a result, endothelium-dependent vasodilation was impaired, but the response to isoproterenol and sodium nitroprusside, which both enhance the NO concentration, was preserved. This means that endothelial dysfunction in hypertension is an effect of a selective decrease in NO bioavailability. Other studies proved that in its early stages, hypertension is connected with an elevated plasma level of ADMA. Sonmez et al. compared a population of healthy adults with a demographically matched group of people with a recent diagnosis of hypertension but without any medical intervention yet [55]. Their work indicates that even at the initial stage of the disease, hypertensive patients have higher plasma ADMA concentrations, even though other factors such as age, BMI, smoking history, CRP level, total cholesterol, LDL cholesterol, and triglyceride levels were similar between the two groups. Other studies on the link between ADMA and hypertension are in line with these results [56]. In addition, Curgunlu et al. demonstrated that the ADMA concentration is elevated not only in hypertensive subjects but also in individuals with white coat hypertension, indicating the presence of endothelial dysfunction in this state. Numerical values of ADMA concentrations place people with white coat hypertension in a continuum between normotensive (lower) and hypertensive (higher ADMA concentrations) subjects [57].

Aging is one of the main risk factors for cardiovascular disease and is connected with the progression of endothelial dysfunction. This resulted in the hypothesis that aging may be associated with increased ADMA concentrations. Gates et al.
compared ADMA levels in a group of young adults (18-26 years old) with those in older ones (52-71 years old) without accompanying diseases except for impaired endothelial function measured by forearm flow-mediated dilation. The lack of difference in ADMA concentrations between these groups points to a reason for the dysfunction of the endothelium during aging other than competitive inhibition of NOS [58].

One of the main risk factors for coronary artery disease and other cardiovascular disorders is tobacco smoking. Sobczak et al. investigated the influence of active and passive smoking on the ADMA concentration in healthy volunteers without other CVD risk factors. The ADMA concentrations were higher in active and passive smokers compared to the nonsmoking control group; however, these differences were not statistically significant [59]. Other research on the effect of tobacco smoking on NO bioavailability presented similar results, but most of it was performed on populations with already developed cardiovascular disease [60, 61]. Despite the fact that some authors observed higher ADMA plasma concentrations in healthy people smoking >20 cigarettes daily [62], it seems that the endothelial dysfunction and higher CVD risk related to tobacco use are not connected with alterations in NO bioavailability.

In contrast to smoking, hyperhomocysteinemia is among the risk factors that probably cause endothelial dysfunction by elevating plasma levels of asymmetric dimethylarginine. There are several hypotheses regarding the exact pathway by which the concentrations of these compounds are connected. The first is that hyperhomocysteinemia causes increased methylation of proteins, of which ADMA is a proteolysis product [63]. This hypothesis was initially supported by the detection of higher ADMA concentrations after a hyperhomocysteinemic meal in humans [64].
However, there are other possible mechanisms, such as stimulation of the SAM-dependent activity of PRMTs, decreased renal excretion, or inhibition of the DDAHs, the enzymes responsible for ADMA degradation. Supporting the last hypothesis, Stühlinger et al. found that higher ADMA concentrations after exposure to homocysteine or methionine are connected with decreased activity of the DDAH isoforms [42]. The same authors observed direct inhibition of recombinant human DDAH activity by homocysteine in cell-free systems. Moreover, other studies showed that mice with hyperhomocysteinemia had decreased levels of mRNA for the two major DDAH enzymes [65]. Considering these reports, further research is needed to establish the exact mechanism by which homocysteine elevates the ADMA concentration.

Metabolic disorders such as obesity, insulin resistance, and diabetes mellitus (DM) are known risk factors for cardiovascular disease development. In all of them, endothelial dysfunction appears to play a pivotal role. It has been proposed that elevated plasma ADMA concentrations are responsible for the impairment of NO bioavailability in metabolic disorders (Figure 3). Almost every diabetic person (with the exception of young subjects with type 1 diabetes) is considered a patient with a high CVD risk profile. In this population, elevated ADMA levels are observed even in the absence of atherosclerosis and other organ damage [66]. ADMA concentrations rise not only chronically but also in dynamic situations. Fard et al. investigated changes in the ADMA level 5 hours after ingestion of a high-fat meal. Their study demonstrated an acute elevation of its concentration followed by a decreased vasodilatory response of the brachial artery measured by flow-mediated dilation, which can be another factor promoting the development of atherosclerosis [67].
A higher ADMA level is a predictor of poor prognosis in patients with DM and already developed coronary artery disease [68] and is also a predictor of acute cardiovascular events in DM patients without vascular changes [69]. Endothelial dysfunction connected with an elevated ADMA concentration is also present in prediabetic states such as obesity and insulin resistance [70–72] and is considered the first step in the development of atherosclerosis. The dysfunction is more pronounced in insulin-resistant subjects than in obese but insulin-sensitive ones [73]. Some research has shown that weight loss (achieved by bariatric surgery followed by diet, or by diet alone), as well as reduction of insulin resistance (by pharmacological and nonpharmacological treatment), resulted in a lowering of plasma ADMA levels and in an improvement of endothelial function [71, 73]. Searching for an explanation of the elevated ADMA concentrations in metabolic disorders, Lin et al. conducted a study investigating the possible pathways of ADMA accumulation in diabetic rats [74]. They found that this accumulation is connected with reduced endothelial DDAH activity, although the amount of DDAH in the aortic endothelium was comparable in both groups. This suggests that the effect is reversible, which is consistent with the studies mentioned above. As DDAH is sensitive to oxidative stress [36], the hyperglycaemia-induced release of free radicals may be responsible for the elevation of the ADMA concentration and the endothelial dysfunction seen in metabolic disorders.

Figure 3 The effect of hyperglycaemia on the L-arginine-nitric oxide pathway. ROS: reactive oxygen species; DDAH: dimethylarginine dimethylaminohydrolase; ADMA: asymmetric dimethylarginine; NOS: nitric oxide synthase; NO: nitric oxide. Authors’ modification based on [66–69].

## 5. NOS Pathway in Platelets (Figure 4)

Figure 4 The known functions of platelet-derived nitric oxide.
Authors’ modification on the basis of [83].

Shortly after the discovery of the L-arginine-nitric oxide pathway, the presence of nitric oxide synthase activity in human platelets was reported by Radomski et al. [75]. Tests performed on specially prepared platelet cytosol showed that the cGMP concentration increased not only with direct NO donors (sodium nitroprusside) but also with L-arginine, the substrate of nitric oxide synthase. The effect of L-arginine depends on the presence of NADPH, which indicates the enzymatic character of this reaction. The formation of NO in the platelet cytosol was inhibited by the addition of competitive NOS inhibitors such as L-NMMA, which provides evidence of the presence of the nitric oxide synthesis pathway in human platelets. The addition of L-arginine to the platelet cytosol did not increase the basal level of cGMP when the platelets were not activated by collagen. This shows that an exogenous substrate such as L-arginine can be used by platelet NOS only after activation, which probably potentiates the transport of the substrate through the platelet membrane [76]. However, the presence of nitric oxide synthase and of all the NO pathway components was questioned by some authors. The points of doubt included, among others, contamination of the samples with other blood cells [77], lack of specificity of the assays used [78], measurement of cGMP levels or the amount of L-citrulline as indicators of NOS activity without considering other metabolic pathways [79, 80], and measurement of inorganic NO metabolites [81]. Finally, Cozzi et al. directly visualized nitric oxide production by collagen-stimulated platelets using an NO-specific fluorescent agent, 4-amino-5-methylamino-2′,7′-difluorofluorescein diacetate (DAF-FM) [82]. This agent reacts with NO to give a fluorescent signal. The specificity of the compound was tested by incubation with an NO donor and with H2O2; fluorescence occurred only in the solution with the NO donor.
Moreover, in tests conducted on platelets from eNOS-knockout mice, no fluorescence was observed, which further confirms the specificity of DAF-FM. The results of this study confirmed the presence of eNOS in platelets by the increase in DAF fluorescence in platelets adhering to type I collagen under flow. The intensity of the signal depended on the presence of the NOS substrate (L-arginine), as well as on that of the competitive NOS inhibitor (L-NMMA).

Despite the low concentrations of nitric oxide produced in platelets (compared to endothelial cells) [82], it appears to have an important role in the aggregation of thrombocytes. According to Tymvios et al., platelet aggregation is regulated by endogenous NO: inhibition of all endogenous NO action resulted in fatal thromboembolism in mice, whereas deletion of the eNOS gene did not affect platelet reactivity [84]. This suggests that other sources of NO production, identified as platelet-derived, are responsible for the regulation of aggregation and recruitment of thrombocytes. The increase in the amount of platelet-derived nitric oxide (PDNO) works as a negative feedback mechanism limiting the number of recruited thrombocytes during thrombus formation [75]. Impaired platelet NO production results in intensified surface P-selectin expression, which promotes coagulation by enhancing the expression of tissue factor [85]. Considering that platelet-derived NO release upon activation is delayed, its role is more complex: it not only limits the process of aggregation but also allows the recruitment of the required number of cells to form the hemostatic clot [86]. Alterations of this subtle mechanism can play a crucial role in the development of cardiovascular diseases. Several studies provided data showing that impaired platelet-derived nitric oxide availability is connected with the intensity of coronary disease risk factors. Ikeda et al.
showed an inverse correlation of PDNO with age, mean arterial pressure, hypercholesterolemia, and smoking [87]. Moreover, the decrease in PDNO production also correlated with the number of risk factors present in each individual. Impaired platelet-derived nitric oxide release is also present in already developed cardiovascular disorders such as coronary artery disease [88]. LDL cholesterol reduces L-arginine transport into platelets, which is followed by a reduction of NO production [89]. It has been shown that statins have the potential to restore PDNO release, which results in an improvement in the regulation of platelet aggregation [90, 91]. Platelet NOS activity is also impaired in subjects suffering from diabetes, both type 1 and type 2 [92].

Patients with type 2 diabetes are characterized by impaired production of NO and cGMP with no change in the amount of nitric oxide synthase. The intraplatelet level of cGMP shows an inverse correlation with glycated haemoglobin and blood glucose levels [93]. This suggests that the impairment of platelet-derived NO production in this population may be associated with glycaemia-dependent suppression of platelet NOS activation [83].

Intraplatelet NO signalling is also affected in essential hypertension. Some studies show decreased platelet-derived NO release [94] and downregulation of the receptors (y+ transport system) responsible for membrane L-arginine transport [95]. The inhibition of L-arginine transport, according to Brunini et al., is connected with elevated levels of ADMA and L-NMMA [96]. Moreover, the use of specific NOS3 agonists with different pathways of action did not result in an increase in its activity in hypertensive subjects [97]. This indicates an enzyme defect as the main reason for the impairment of platelet NO release.
Although plasma ADMA concentrations are elevated in hypertensive subjects compared to healthy ones, Tymvios and colleagues demonstrated that an elevated plasma ADMA level does not alter platelet NO production [84]. This suggests the existence of another mechanism controlling PDNO release. In contrast, De Meirelles et al. found that plasma ADMA and L-NMMA are capable of decreasing the intraplatelet cGMP concentration, which corresponds with lower NOS activity [98]. The previously cited study by Cozzi et al. [82] showed an impact of the competitive NOS inhibitor L-NMMA on the release of nitric oxide by platelets. There is a possibility that the disruptions of NO synthesis in the abovementioned situations are an effect of the accumulation of NOS inhibitors in thrombocytes. Further research is necessary to test this hypothesis and evaluate its clinical importance.

Recent studies have revealed another interesting aspect of PDNO release and its potential role in hemostasis and thrombus generation. The discovery of two subpopulations of platelets, with and without intraplatelet eNOS, led to a new hypothesis on the mechanism of thrombus generation. In response to vascular injury, eNOSneg platelets (about 20% of all thrombocytes) adhere to the damaged area, a process facilitated by this subpopulation's lack of endogenous NO production. eNOSneg platelets, by secreting metalloproteinase-2, recruit eNOSpos ones (80% of the thrombocyte population), which, owing to their higher COX-1 content and higher thromboxane production, form the majority of the emerging aggregate. However, their ability to produce NO results in a limitation of the thrombus size [99, 100]. In vitro studies showed that an increase in the eNOSneg/eNOSpos ratio, as well as inhibition of eNOS, promotes platelet aggregation.
Changes in this ratio may be responsible for the impairment of blood coagulation homeostasis and may predispose individuals to developing CVD. It has been shown that platelets from patients after acute coronary syndrome produce less NO than those from healthy subjects [88]. Further research is needed to fully understand the role of alterations in platelet subpopulations and their potential function as a target for new therapeutic strategies.

Little is known about the effect of antiplatelet drugs on NO release by thrombocytes. Inhibition of the GPIIb/IIIa receptor (responsible for fibrinogen binding during platelet aggregation) resulted in an enhancement of NO production and a reduction in the formation of superoxide anion [101]. Acetylsalicylic acid (ASA) has different effects on NOS activity depending on its dose-dependent mechanisms of action and the duration of treatment. On the one hand, ASA reduces NOS activity by limiting the NOS-activating response to stimulation of platelet beta-adrenergic receptors; this effect is shared with other nonsteroidal anti-inflammatory drugs, so it appears to be mediated through COX inhibition. On the other hand, acute in vivo and in vitro action of aspirin results in acetylation of the platelet NOS and thereby in COX-independent activation of this enzyme. Of clinical relevance, chronic administration of small doses of ASA (75 mg per day) did not enhance platelet NOS activity through the COX-independent mechanism, while the response to beta-adrenergic stimulation remained reduced [102, 103]. Moreover, Rothwell et al. showed that the optimal dose of ASA depends on body weight and that for subjects above 70 kg a daily dose of 75 mg is insufficient to properly reduce cardiovascular events [104]. This suggests that the methodology of previously conducted studies should be carefully revised.

## 6.
Conclusions Knowledge regarding the exact pathogenesis of impaired production of platelet-derived nitric oxide may have important clinical implications. Cardiovascular disorders are frequently related to enhanced thrombus formation. Moreover, several conditions are associated with an elevated incidence of cardiovascular events despite proper antiplatelet treatment. Identification of patients at higher risk, for example, by assessment of impaired platelet-derived nitric oxide production or of changes in the eNOSneg/eNOSpos ratio, may enable the application of more appropriate, individualized treatment or the early implementation of proper prevention. More research on the exact relation between cardiovascular disorders and the amount of nitric oxide synthesized by platelets is necessary to fully determine its clinical importance. Finally, knowledge about the biochemistry and exact pathways of PDNO actions may serve as a basis for creating new drugs or using already known drugs in new indications. --- *Source: 1015908-2020-03-04.xml*
# Intraplatelet L-Arginine-Nitric Oxide Metabolic Pathway: From Discovery to Clinical Implications in Prevention and Treatment of Cardiovascular Disorders

**Authors:** Jakub Gawrys; Damian Gajecki; Ewa Szahidewicz-Krupska; Adrian Doroszko

**Journal:** Oxidative Medicine and Cellular Longevity (2020)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2020/1015908
---

## Abstract

Despite the development of new drugs and other therapeutic strategies, cardiovascular disease (CVD) still remains the major cause of morbidity and mortality in the world population. A large body of research, performed mostly in the last three decades, revealed an important correlation between the “classical” demographic and biochemical risk factors for CVD (i.e., hypercholesterolemia, hyperhomocysteinemia, smoking, renal failure, aging, diabetes, and hypertension) and endothelial dysfunction associated directly with nitric oxide deficiency. The discovery of nitric oxide and its recognition as the endothelium-derived relaxing factor was a breakthrough in understanding the pathophysiology and development of cardiovascular system disorders. The nitric oxide synthesis pathway, its regulation, and its association with cardiovascular risk factors have been a common subject of research during the last decades. Since nitric oxide synthase, especially its endothelial isoform, plays a crucial role in the regulation of NO bioavailability, inhibiting its function results in an increased cardiovascular risk. Among the agents altering the production of nitric oxide, asymmetric dimethylarginine, a competitive inhibitor of NOS, appears to be the most important. In this review paper, we summarize the role of the L-arginine-nitric oxide pathway in cardiovascular disorders with a focus on intraplatelet metabolism.

---

## Body

## 1. Introduction

After Furchgott et al. established the real nature of the endothelium-derived relaxing factor (EDRF) [1, 2], which appeared to be nitric oxide (NO), numerous other groups worked on the nitric oxide synthesis pathway and its potential role in human (patho)physiology. This led to the discovery of nitric oxide synthase [3], which produces nitric oxide from L-arginine with flavin adenine dinucleotide (FAD), flavin mononucleotide (FMN), tetrahydrobiopterin (BH4), and heme with a zinc atom as cofactors.
From that time, numerous functions of NO were established, which can generally be divided into three groups:

(1) functions associated with neuronal transmission, where NO plays an inhibitory role as a mediator in peripheral nonadrenergic noncholinergic (NANC) neurotransmission (causing relaxation mainly in the gastrointestinal tract, penile corpus cavernosum, and bladder) [4]

(2) functions playing an inflammatory role, where NO is produced by the inducible isoform of nitric oxide synthase (iNOS)

(3) functions related to the cardiovascular system

## 2. Nitric Oxide in Cardiovascular Disorders

Despite the development of new drugs and other therapeutic strategies, cardiovascular disease (CVD) still remains the major cause of morbidity and mortality in the world population [5]. A large body of research, performed mostly in the last three decades, revealed an important correlation between the “classical” demographic and biochemical risk factors for CVD (i.e., hypercholesterolemia [6], hyperhomocysteinemia [7], smoking [8], renal failure [9], aging [10], diabetes [11], and hypertension [12]) and endothelial dysfunction associated directly with nitric oxide deficiency. In the vascular endothelium, NO is produced by the endothelial isoform of nitric oxide synthase (eNOS = NOS3), which is constitutively active, allowing the maintenance of appropriate vascular tone by a constant vasodilating action [13]. The other functions of NO are inhibition of platelet aggregation, inhibition of smooth muscle proliferation, and inhibition of leucocyte interaction with the vascular wall [14]. All of these properties establish nitric oxide as a key modulator of vascular homeostasis. Nowadays, endothelial dysfunction, defined as a reduction in endothelial NO bioavailability, can be measured noninvasively by changes in blood flow (e.g., EndoPAT 2000 and brachial flow-mediated dilation) or by the response to appropriate agonists (e.g., the reaction to acetylcholine administered by iontophoresis, measured by laser Doppler flowmetry) [15].
There are several mechanisms which can limit the bioavailability of NO. One of them is a decrease in eNOS expression in endothelial cells, which occurs in advanced atherosclerosis [16] and in smokers [17]. Decreased NO production can also be an effect of a deficiency of L-arginine or of nitric oxide synthase cofactors. Many studies have examined oxidative stress as a factor limiting NO bioavailability [18]. An imbalance between the creation of reactive oxygen species (ROS) and their scavenging by antioxidants promotes the reaction between NO and O2-, which results in peroxynitrite formation. Peroxynitrite is a potent oxidative compound which promotes posttranslational modifications of proteins (including the eNOS protein) [19], alterations in the main metabolic pathways [20], and eNOS uncoupling, which results in the production of superoxide anion instead of NO [21, 22]. Increased formation of peroxynitrite and other reactive oxygen species has been demonstrated in established cardiovascular system disorders [23] and is associated with a vast majority of CVD risk factors such as hypertension [24], diabetes [25], tobacco use [26], and hypercholesterolemia [27]. Another intensively investigated mechanism responsible for nitric oxide deficiency is connected with competitive inhibition of nitric oxide synthase by asymmetric dimethylarginine (ADMA), a naturally occurring amino acid circulating in plasma and present in various tissues and cells.
By the fact that its concentration in urine is not affected by arginine administered orally, the authors assumed that this compound may be a derivate from endogenous protein proteolysis. In 1992, Leone et al. proposed its potential pathophysiological role by providing in vitro and in vivo evidence that ADMA inhibits NO synthesis [29]. In addition, they described the accumulation of dimethylarginines by the lack of urine production in patients with end-stage chronic renal failure as a potential mechanism of hypertension and immune dysfunction in this group of patients.Methylated derivates of arginine are produced as a result of proteolysis of endogenous methylated proteins, i.e., histones. This methylation is catalysed by two isoforms of the arginine methyltransferases (PMRTs)—PMRT-1 and PMRT-2 proteins—with S-adenosylmethionine as a donor of methyl residues. As an effect of PMRT-1—the main isoform present in the vascular wall (endothelial and smooth muscle cells)—asymmetrically dimethylated and monomethylated arginine residues are formed. PMRT-2 is also capable of mono- and dimethylation of arginine residues, but in this case, residues are dimethylated symmetrically [30, 32]. After protein degradation, methylarginine compounds appear at the beginning in the cytosol but also in plasma [31]. Monomethylated arginine (L-NMMA) and asymmetric dimethylarginine (ADMA) are inhibitors of all nitric oxide synthase isoforms whereas symmetric dimethylarginine (SDMA) is not (Figure 1). The inhibitory effect of ADMA and L-NMMA on NOS is similar [33], but considering that plasma concentration of ADMA is up to tenfold higher than that of L-NMMA, ADMA was an object of research for the last decades. The inhibition of NOS may not be the only effect of asymmetric dimethylarginine in human. There are reports that at high concentrations, both ADMA and SDMA may compete in the transport through the Y-amino acid transporter with arginine [34] and also may inhibit the Na+/K+ ATPase [35]. 
However, concentrations required for these actions seem to be too high to be clinically relevant. Murray-Rust et al. proposed another potential target for ADMA, which is the arginine-glycine amidinotransferase. The structure of this enzyme is similar to that of dimethylarginine dimethylaminohydrolases (DDAHs) that metabolize ADMA [36]. However, Vallance and Leiper suggest that ADMA is only a poor inhibitor of this transferase [37].Figure 1 Synthesis of ADMA from methylated proteins. SAM: S-adenosylmethionine; SAH: S-adenosylhomocysteine; PMRT-1: protein methyltransferase-1; PMRT-2: protein methyltransferase-2; SDMA: symmetric dimethylarginine; L-NMMA: monomethylated arginine; ADMA: asymmetric dimethylarginine; NOS: nitric oxide synthase; NO: nitric oxide. Based on [30–32].All of the methylated arginine derivates are eliminated by kidneys. In contrast to SDMA, which is excreted completely by kidneys, ADMA and L-NMMA are also degraded by DDAH [38, 40]. As a result, citrulline and the monomethylamines are formed. The catalytic site of DDAH involves cysteine residue. Its nitrosylation by reactive nitrogen species renders the enzyme inactive which can be the potential homeostatic mechanism especially in reactions involving the inducible NOS (iNOS) (increased production of NO leads to the accumulation of ADMA by inhibiting the DDAH) [39] (Figure 2). In addition, this cysteine residue is susceptible to the action of the number of oxidative stress-related cardiovascular risk factors such as hypercholesterolemia, hypertension, renal failure [41], hyperhomocysteinemia, hyperglycaemia [42], and tobacco smoking [43], which also results in the accumulation of ADMA. This can be a pathway, allowing different factors to affect endothelial function [44]. There are two known isoforms of DDAH. 
DDAH-1 accompanies the neuronal NOS and is present in the liver, kidneys, and lungs (its action contributes to circulating ADMA concentration), while DDAH-2 is present in tissues expressing endothelial and inducible NOS and dominates in vessels, especially endothelium and smooth muscle cells (its role is connected with the located regulation of the amount of ADMA) [38, 45].Figure 2 Potential homeostatic mechanism of autoregulation of nitric oxide production. NO: nitric oxide; NOS: nitric oxide synthase; ADMA: asymmetric dimethylarginine; DDAH: dimethylarginine dimethylaminohydrolase;∗: reactions involving inducible nitric oxide synthase. Based on [38, 39].Concentrations of ADMA in plasma, which are connected with biologic action, are approximately 10-fold higher than concentrations observed under physiological condition. It suggests that even if plasma concentration of ADMA indicates its amount in the whole body, it does not mean that the same concentrations are present in all tissues [31, 46]. In the study by Cardounel et al., where the inhibition of NO generation by bovine endothelial cells was measured, the effect of raising ADMA concentrations was greater than expected on the basis of kinetic studies [47]. It suggests that there should be a mechanism of methylarginine uptake by cells, which, according to Bogle et al., could be the y+ transport system [34]. The regulation of transport may be the explanation for “L-arginine paradox.” This term is used to refer situations where exogenous administration of L-arginine led to enhancement of endothelial vasodilatory function, which is present despite the fact that its plasma concentration is up to 30-fold higher than necessary to completely saturate the NOS [48]. In physiological conditions, plasma level of ADMA is insufficient to compete with L-arginine in transport through the endothelial cell membrane [49]. 
However, in subjects with developed CVD or with a high CVD risk profile, elevated ADMA concentrations are able to affect eNOS activity. The restoration of eNOS activity and the improvement of the vasodilatory function of the endothelium by the addition of exogenous L-arginine in pathological conditions, without effects in healthy subjects, point to the L-arginine/ADMA ratio, rather than the L-arginine and ADMA concentrations alone, as the main factor regulating NO bioavailability [50, 51].

## 4. The Role of ADMA in Cardiovascular Disease Development

After the discovery of ADMA and the establishment of its function in the L-arginine → nitric oxide → cGMP pathway, research focused on the connection of elevated ADMA concentrations with CVD and classic cardiovascular risk factors. One of the first studies evaluating the role of ADMA was performed by Bode-Böger et al. They showed that elevated concentrations of asymmetric dimethylarginine are found in hypercholesterolemic rabbits and that this is the first biochemical abnormality observed at the early stage of atherosclerosis [52]. Subsequent studies led to the discovery that elevated ADMA plasma concentrations are present in humans with hypercholesterolemia and with vascular disease. This finding was associated with endothelial dysfunction and impaired NO production, measured as lower urinary nitrate excretion and worse NO-dependent forearm vasodilation [53]. It led to the conclusion that elevated ADMA concentration is an early marker of endothelial dysfunction, itself known as a prognostic marker of severe cardiovascular events. At the end of the previous century, Cardillo et al. discovered that hypertension is associated with a defect in NO synthesis [54]. As a result, impaired endothelium-dependent vasodilation occurred, but the response to isoproterenol and sodium nitroprusside, both of which enhance the NO concentration, was preserved.
This means that endothelial dysfunction in hypertension is an effect of a selective decrease in NO bioavailability. Other studies proved that, in its early stages, hypertension is connected with an elevated plasma level of ADMA. Sonmez et al. compared a population of healthy adults with a demographically matched group of people with a recent diagnosis of hypertension, yet without any medical intervention [55]. Their work indicates that even in the initial stage of the disease, hypertensive patients have higher plasma ADMA concentrations, even when other factors such as age, BMI, smoking history, CRP level, total cholesterol, LDL cholesterol, and triglyceride levels are similar between the two groups. Other studies on the link between ADMA and hypertension are in line with these results [56]. In addition, Curgunlu et al. demonstrated that ADMA concentration is elevated not only in hypertensive subjects but also in individuals with white coat hypertension, which indicates the presence of endothelial dysfunction in this state. Numerical values of ADMA concentrations place people with white coat hypertension in a continuum between normotensive (lower) and hypertensive (higher ADMA concentrations) subjects [57].

Aging is one of the main risk factors for cardiovascular disease and is connected with the progression of endothelial dysfunction. This led to the hypothesis that aging may be associated with increased ADMA concentrations. Gates et al. compared ADMA levels in a group of young adults (18-26 years old) with those in older ones (52-71 years old) without accompanying diseases except for impaired endothelial function measured by forearm flow-mediated dilation. The lack of difference in ADMA concentrations between these groups points to a cause of endothelial dysfunction during aging other than competitive inhibition of NOS [58].

One of the main risk factors for coronary artery disease and other cardiovascular disorders is tobacco smoking.
Sobczak et al. investigated the influence of active and passive smoking on ADMA concentrations in healthy volunteers without other CVD risk factors. The ADMA concentrations were higher in active and passive smokers than in the nonsmoking control group; however, these differences were not statistically significant [59]. Other research on the effect of tobacco smoking on NO bioavailability presented similar results, but most of it was performed on populations with already developed cardiovascular disease [60, 61]. Despite the fact that some authors observed higher ADMA plasma concentrations in healthy people smoking >20 cigarettes daily [62], it seems that the endothelial dysfunction and higher CVD risk related to tobacco use are not connected with alterations in NO bioavailability.

In contrast to smoking, hyperhomocysteinemia is among the risk factors that probably cause endothelial dysfunction by elevating plasma levels of asymmetric dimethylarginine. There are several hypotheses regarding the exact pathway by which the concentrations of these compounds are connected. The first is that hyperhomocysteinemia causes increased methylation of proteins, with ADMA released as a product of their proteolysis [63]. This hypothesis was initially supported by the detection of higher ADMA concentrations after a hyperhomocysteinemic meal in humans [64]. However, there are other possible mechanisms, such as stimulation of the SAM-dependent activity of PRMTs, decreased renal excretion, or inhibition of the DDAHs—the enzymes responsible for ADMA degradation. Supporting the last hypothesis, Stühlinger et al. found that higher ADMA concentrations after exposure to homocysteine or methionine are connected with decreased activity of the DDAH isoforms [42]. The same authors observed direct inhibition of recombinant human DDAH activity by homocysteine in cell-free systems. What is more, other studies showed that mice with hyperhomocysteinemia had decreased levels of mRNA for the two major DDAH enzymes [65].
Considering the reports given above, further research is needed to establish the exact mechanism by which homocysteine elevates ADMA concentrations.

Metabolic disorders such as obesity, insulin resistance, and diabetes mellitus (DM) are known risk factors for cardiovascular disease development. In all of them, endothelial dysfunction appears to play a pivotal role. It has been proposed that elevated plasma ADMA concentrations are responsible for the impairment of NO bioavailability in metabolic disorders (Figure 3). Almost every diabetic person (with the exception of young subjects with type 1 diabetes) is considered a patient with a high CVD risk profile. In this population, elevated ADMA levels are observed even in the absence of atherosclerosis and other organ damage [66]. ADMA concentrations are elevated not only chronically but also rise acutely in dynamic situations. Fard et al. investigated changes in ADMA levels 5 hours after ingestion of a high-fat meal. Their study demonstrated an acute elevation of its concentration followed by a decreased vasodilatory response of the brachial artery measured by flow-mediated dilation, which can be another factor promoting the development of atherosclerosis [67]. A higher ADMA level is a predictor of poor prognosis in patients with DM and already developed coronary artery disease [68] and is also a predictor of acute cardiovascular events in DM patients without vascular changes [69]. Endothelial dysfunction connected with elevated ADMA concentration is also present in prediabetic states such as obesity and insulin resistance [70–72] and is considered the first step in the development of atherosclerosis. The intensity of this dysfunction is higher in insulin-resistant subjects than in obese but insulin-sensitive ones [73].
Some research has shown that weight loss (achieved by bariatric surgery followed by diet or by diet alone), as well as reduction of insulin resistance (by pharmacological and nonpharmacological treatment), resulted in a lowering of plasma ADMA levels and in an improvement of endothelial function [71, 73]. Searching for an explanation of the elevated ADMA concentrations in metabolic disorders, Lin et al. conducted a study investigating the possible pathways of ADMA accumulation in diabetic rats [74]. They discovered that this accumulation is connected with reduced endothelial DDAH activity, although the amount of DDAH found in the aortic endothelium of both groups was comparable. This suggests that these effects are reversible, which is consistent with the studies mentioned above. As DDAH is sensitive to oxidative stress [36], the hyperglycaemia-induced release of free radicals may be responsible for the elevation of ADMA concentration and endothelial dysfunction in metabolic disorders.

Figure 3 The effect of hyperglycaemia on the L-arginine-nitric oxide pathway. ROS: reactive oxygen species; DDAH: dimethylarginine dimethylaminohydrolase; ADMA: asymmetric dimethylarginine; NOS: nitric oxide synthase; NO: nitric oxide. Authors’ modification based on [66–69].

## 5. NOS Pathway in Platelets (Figure 4)

Figure 4 The known functions of platelet-derived nitric oxide. Authors’ modification on the basis of [83].

Shortly after the discovery of the L-arginine-nitric oxide pathway, the presence of nitric oxide synthase activity in human platelets was reported by Radomski et al. [75]. Tests performed on specially prepared platelet cytosol showed that an increase in cGMP concentration occurred not only with direct NO donors (sodium nitroprusside) but also with L-arginine, the known substrate of nitric oxide synthase. The effect of L-arginine is dependent on the presence of NADPH, which indicates the enzymatic character of this reaction.
The formation of NO in the platelet cytosol was inhibited by the addition of competitive NOS inhibitors such as L-NMMA, which provides evidence of the presence of the nitric oxide synthesis pathway in human platelets. The addition of L-arginine to the platelet cytosol did not increase the basal level of cGMP when platelets were not activated by collagen. This shows that an exogenous substrate such as L-arginine can be used by platelet NOS only after activation, which probably potentiates the transport of the substrate through the platelet membrane [76]. However, the presence of nitric oxide synthase and all NO pathway components was questioned by some authors. Points of doubt included, among others, contamination of the probes with other blood cells [77], lack of specificity of the assays used [78], measurement of cGMP activity or the amount of L-citrulline as indicators of NOS activity without considering other metabolic pathways [79, 80], and measurement of inorganic NO metabolites [81]. Finally, Cozzi et al. directly visualized nitric oxide production by collagen-induced platelets using an NO-specific fluorescent agent, 4-amino-5-methylamino-2′,7′-difluorofluorescein diacetate (DAF-FM) [82]. This agent reacts with NO and provides a fluorescent signal. The specificity of this compound was tested by incubation with an NO donor and with H2O2; fluorescence occurred only in the solution with the NO donor. What is more, in tests conducted on platelets from eNOS-knockout mice, fluorescence was not observed, further confirming the specificity of DAF-FM. The results of this study confirmed the presence of eNOS in platelets by the increase in DAF fluorescence in platelets adhering to type I collagen under flow.
The intensification of the signal was dependent on the presence of the NOS substrate (L-arginine), as well as of the competitive NOS inhibitor (L-NMMA).

Despite the low concentrations of nitric oxide produced in platelets (compared to endothelial cells) [82], it appears to play an important role in the aggregation of thrombocytes. According to Tymvios et al., platelet aggregation is regulated by endogenous NO. Inhibition of all endogenous NO action resulted in fatal thromboembolism in mice, but deletion of the eNOS gene did not affect platelet reactivity [84]. This suggests that other sources of NO production, identified to be platelet-derived, are responsible for the regulation of aggregation and recruitment of thrombocytes. The increase in the amount of platelet-derived nitric oxide (PDNO) works as a negative feedback mechanism limiting the number of recruited thrombocytes during thrombus formation [75]. Impaired platelet NO production results in intensified surface P-selectin expression, which promotes coagulation by enhancing the expression of tissue factor [85]. Considering the fact that platelet-derived NO release upon activation is delayed, its role is more complex: it not only limits the process of aggregation but also allows the recruitment of the required number of cells to form the hemostatic clot [86]. Alterations of this subtle mechanism can play a crucial role in the development of cardiovascular diseases. Several studies provided data showing that impaired platelet-derived nitric oxide availability is connected with the intensity of coronary disease risk factors. Ikeda et al. showed an inverse correlation of PDNO with age, mean arterial pressure, hypercholesterolemia, and smoking [87]. What is more, the decrease in PDNO production was also correlated with the number of risk factors present in each individual.
Impaired platelet-derived nitric oxide release is also present in already developed cardiovascular disorders such as coronary artery disease [88]. LDL cholesterol reduces L-arginine transport into platelets, which is followed by a reduction in NO production [89]. It has been shown that statins have the potential to restore PDNO release, which results in an improvement in the regulation of platelet aggregation [90, 91]. Platelet NOS activity is also impaired in subjects suffering from diabetes, both type 1 and type 2 [92].

Patients with type 2 diabetes are characterized by impaired production of NO and cGMP with no changes in the amount of nitric oxide synthase. The intraplatelet level of cGMP shows an inverse correlation with glycated haemoglobin and blood glucose levels [93]. This suggests that the impairment of platelet-derived NO production in this population may be associated with glycaemia-dependent suppression of platelet NOS activation [83].

Intraplatelet NO signalling is also affected in essential hypertension. Some studies show decreased platelet-derived NO release [94] and downregulation of the receptors (y+ transport system) responsible for membrane L-arginine transport [95]. The inhibition of L-arginine transport, according to Brunini et al., is connected with elevated levels of ADMA and L-NMMA [96]. What is more, the use of specific agonists of NOS3 with different pathways of action did not result in an increase in its activity in hypertensive subjects [97]. This indicates an enzyme defect as the main reason for the impairment of platelet NO release. Although plasma ADMA concentrations are elevated in hypertensive subjects compared to healthy ones, Tymvios and colleagues demonstrated that an elevated plasma ADMA level does not alter platelet NO production [84]. This suggests the existence of another mechanism controlling PDNO release. On the contrary, De Meirelles et al.
found that plasma ADMA and L-NMMA are capable of decreasing the intraplatelet cGMP concentration, which corresponds with lower NOS activity [98]. The previously cited Cozzi et al. [82] showed an impact of the competitive NOS inhibitor—L-NMMA—on the release of nitric oxide by platelets. There is a possibility that disruptions of the NO synthesis process in the abovementioned situations are the effect of the accumulation of NOS inhibitors in thrombocytes. Further research is necessary to test this hypothesis and evaluate its clinical importance.

Recent studies have shown another interesting aspect regarding PDNO release and its potential role in hemostasis and thrombus generation. The discovery of two subpopulations of platelets, with and without intraplatelet eNOS, led to a new hypothesis on the mechanism of thrombus generation. In response to vascular injury, eNOSneg platelets (about 20% of all thrombocytes) adhere to the damaged area. This process is facilitated by the lack of endogenous NO production by this subpopulation. eNOSneg platelets, by secreting metalloproteinase-2, recruit eNOSpos ones (80% of the thrombocyte population), which, by their higher COX-1 content and higher thromboxane production, form the majority of the emerging aggregate. However, their ability to produce NO results in the limitation of the thrombus size [99, 100]. In vitro studies showed that an increase in the eNOSneg/eNOSpos ratio, as well as inhibition of eNOS, promotes platelet aggregation. Changes in this ratio may be responsible for the impairment of blood coagulation homeostasis and may predispose individuals to developing CVD. It has been shown that platelets from patients after acute coronary syndrome produce less NO compared to those from healthy subjects [88].
Further research is needed to fully understand the role of alterations in platelet subpopulations and their potential function as a target for new therapeutic strategies.

Little is known about the effect of antiplatelet drugs on NO release by thrombocytes. Inhibition of the GPIIb/IIIa receptor (responsible for fibrinogen binding during platelet aggregation) resulted in the enhancement of NO production and a reduction in the formation of superoxide anion [101]. Acetylsalicylic acid (ASA) has different effects on NOS activity depending on the dose and the duration of treatment. On the one hand, ASA reduces NOS activity by limiting the NOS-activating response to stimulation of platelet beta-adrenergic receptors; this effect is shared with other nonsteroidal anti-inflammatory drugs, so it appears to be mediated through COX inhibition. On the other hand, acute in vivo and in vitro action of aspirin results in acetylation of platelet NOS and thereby in COX-independent activation of this enzyme. Of clinical relevance, chronic administration of small doses of ASA (75 mg per day) did not enhance platelet NOS activity through the COX-independent mechanism, while the response to beta-adrenergic stimulation remained reduced [102, 103]. What is more, Rothwell et al. showed that the optimal dose of ASA depends on body weight and that for subjects above 70 kg a daily dose of 75 mg is insufficient to adequately reduce cardiovascular events [104]. This suggests that the methodology of already conducted studies should be carefully revised.

## 6. Conclusions

The knowledge regarding the exact pathogenesis of impaired production of platelet-derived nitric oxide may have important clinical implications. Cardiovascular disorders are frequently related to enhanced thrombus formation. What is more, several conditions, despite proper antiplatelet treatment, are associated with an elevated incidence of cardiovascular events.
Identification of patients with higher risk, for example, by assessment of impaired platelet-derived nitric oxide production or of changes in the eNOSneg/eNOSpos ratio, may enable more appropriate, individualized treatment or early implementation of proper prevention. More research on the exact relation between cardiovascular disorders and the amount of nitric oxide synthesized by platelets is necessary to fully determine its clinical importance. Finally, knowledge about the biochemistry and exact pathways of PDNO actions may serve as a basis for creating new drugs or for using already known drugs in new indications.

---

*Source: 1015908-2020-03-04.xml*
2020
# Development and Remodeling of the Vertebrate Blood-Gas Barrier

**Authors:** Andrew Makanya; Aikaterini Anagnostopoulou; Valentin Djonov
**Journal:** BioMed Research International (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101597

---

## Abstract

During vertebrate development, the lung originates as an endodermal bud from the primitive foregut. Dichotomous subdivision of the bud results in arborizing airways that form the prospective gas exchanging chambers, where a thin blood-gas barrier (BGB) is established. In the mammalian lung, this proceeds through conversion of type II cells to type I cells, thinning and elongation of the cells, as well as extrusion of the lamellar bodies. Subsequent diminution of interstitial tissue and apposition of capillaries to the alveolar epithelium establish a thin BGB. In the noncompliant avian lung, attenuation proceeds through cell-cutting processes that result in remarkable thinning of the epithelial layer. A host of morphoregulatory molecules has been implicated, including transcription factors such as Nkx2.1, GATA, HNF-3, and WNT5a; signaling molecules including FGF, BMP-4, Shh, and TGF-β; and extracellular proteins and their receptors. During normal physiological function, the BGB may be remodeled in response to alterations in transmural pressures in both blood capillaries and airspaces. Such changes are mitigated through rapid expression of the relevant genes for extracellular matrix proteins and growth factors. While an appreciable amount of information regarding molecular control has been documented in the mammalian lung, very little is available on the avian lung.

---

## Body

## 1. Introduction

The pulmonary blood-gas barrier (BGB) performs the noble role of allowing passive diffusion of gases between the blood and the common pool of air delivered to the exchanging structures.
The BGB is a paradoxical bioengineering structure in that it attains remarkable strength while at the same time remaining thin enough to allow gas exchange. The two aspects of the BGB important for efficient exchange are thinness and an extensive surface area. Additionally, the barrier needs to be strong enough to withstand stress failure, as may occur due to increased blood capillary pressure during exercise [1]. The presence of collagen IV within the basement membranes is associated with the remarkable strength characteristic of the BGB [2].

In vertebrates, the design of the BGB is governed by many factors, including evolutionary status, gas exchange medium, and level of physiological activity. The BGB has been most refined in avians, where it is reputed to be largely uniform on both sides of the capillary and is generally 2.5 times thinner than that in mammals [2]. In the developing mammalian lung at the saccular stage, interairspace septa have a double capillary system [3], and hence only one side of the capillary is exposed to air. Such septa are generally referred to as immature. In mammals, this double capillary system is converted to a single one [3] except in some primitive species such as the naked mole rat (Heterocephalus glaber), where it persists in adults [4]. In adult mammals, the BGB occurs in two types: a thinner, tripartite one that comprises the alveolar epithelium separated from the capillary endothelium by a basal lamina (Figure 1), and a thicker one where an interstitium intervenes between the epithelial basal lamina and the endothelial basal lamina [19]. In ectotherms, immature septa with a double capillary system generally preponderate [5].

Micrographs showing the changing pulmonary epithelium in the developing quokka lung. (a) At the canalicular stage, both cuboidal (closed arrowhead) and squamous epithelium (open arrowhead) are present. At the centre of the thick interstitium is a large blood vessel (V).
(b) The cuboidal epithelium comprises cells well endowed with lamellar bodies (white arrows). These cells notably lack microvilli and may be described as pneumoblasts with the potential to form either of the two definitive alveolar pneumocytes (AT-I and AT-II). Note the large blood vessel (V) below the epithelium. ((c) and (d)) During the saccular stage, the epithelial cells (E) possess numerous lamellar bodies (asterisk) and have become low cuboidal in the process of conversion to AT-I cells. AT-II cells converting to AT-I pneumocytes appear to do so by extruding entire lamellar bodies (closed arrowhead in (d)) and flattening out (arrow). Notice the already formed thin BGB (open arrowhead) and an erythrocyte (Er) in the conterminous capillary. ((e) and (f)) Immature interalveolar septa (E) are converted to mature ones through fusion of capillary layers (asterisk in (e)) and reduction in interstitial tissue. The process starts during the alveolar stage and continues during the microvascular maturation stage. Notice the thin BGB (square frames) and the thick side of the BGB in adults (open arrowhead in (f)). Erythrocytes (Er) and a nucleus (N) belonging to an AT-I cell are also shown. (a)–(c) are from [11], (d) is from [12], while (e) and (f) were obtained from [13], all with permission from the publishers.

Generally, the vertebrate lung develops from the ventral aspect of the primitive gut, where the endodermal layer forms a laryngotracheal groove, which later forms the lung bud [6]. This occurs at about embryonic day 9 (E9) in mice, E26 in humans [7], and E3-E4 in the chick [8]. In mammals, dichotomous branching of the primitive tubes of the early lung leads to formation of the gas exchange units. In birds, the initial step in dichotomous branching gives rise to the primary bronchi, which proceed to form the mesobronchi.
Development of the secondary bronchi, however, does not appear to follow the dichotomous pattern, since certain groups of the secondary bronchi arise from prescribed areas and have a specific 3D orientation [9, 10]. While a wealth of literature exists on mammalian lung development, the picture for the avian lung is just beginning to emerge. In contrast, development in ectotherms appears to have been ignored by contemporary investigators.

## 2. Structure of the Blood-Gas Barrier in Vertebrates

The basic components of the blood-gas barrier are the epithelium on the aerated side, the intermediate extracellular matrix (ECM), and the capillary endothelium on the perfused side. In mammals, the thickest component of the BGB is the ECM. Calculations from data provided by Watson et al. [14] indicate that the ECM takes 42% and 40% of the entire thickness in the horse and dog, respectively, whereas the epithelium and the endothelium take almost equal proportions at about 28–30%. Unlike in mammals, the interstitium in the avian lung is the thinnest component of the BGB at 17%, while the endothelium is the thickest at 51% [15]. Additionally, the layers of the BGB in the chicken lung are remarkably uniform in thickness over wide regions. The chicken ECM measures about 0.135 μm (arithmetic mean thickness) and mainly comprises the fused basement membranes of the epithelium and endothelium. In ectotherms, the ECM is abundant and lies between the two capillary layers as well as within the BGB, and hence they have a thicker BGB than either mammals or birds.

The thickness of the blood-water/air (tissue) barrier decreases from fish through amphibians, reptiles, and mammals to birds [2, 5]. In humans, the thin side has a thickness of 0.2-0.3 μm and covers approximately half of the alveolar wall [16]. It is made up of the fused basement membranes of the epithelial and endothelial layers and is the critical structure for pulmonary gas exchange and stress failure.
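The emphasis throughout this section on thinness and surface area follows directly from Fick's law of diffusion. As a reference point (the symbols below are the conventional physiological ones, not taken from this article), the diffusive transfer of a gas across the barrier can be written as:

```latex
% Fick's law applied to the blood-gas barrier:
%   \dot{V}_{gas} : rate of gas transfer across the barrier
%   A             : exchange surface area
%   \tau          : barrier (tissue) thickness
%   D             : diffusion coefficient of the gas in tissue
%   \Delta P      : partial-pressure difference across the barrier
\dot{V}_{\mathrm{gas}} \;=\; \frac{A \, D}{\tau}\,\Delta P
```

Hence halving the barrier thickness τ, or doubling the exchange area A, each doubles the diffusive conductance, which is why both attenuation of the barrier and expansion of the exchange surface recur as themes in BGB development across vertebrates.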
In contrast, the thick side also contains interstitial cells, such as fibroblasts and pericytes, as well as type I collagen fibers that are important in the scaffold support of the lung. This thick side measures up to 1 μm or more in humans [17] and may be as little as 0.1 μm or less in some domestic mammals [18]. The tensile strength of the basement membrane comes from type IV collagen, which is synthesized by both epithelial and endothelial cells, and in smaller amounts by other mesenchymal cells. A detailed review of the structure and remodeling of the BGB was provided by West and Mathieu-Costello [19].

Amongst vertebrates, the lung is more specialized in endotherms (mammals and birds) than in ectotherms (fish, amphibians, and reptiles). The barrier is thicker in fish gills but relatively thin in the lungs of air-breathing fishes. In the gills of the air-breathing Amazonian fish (Arapaima gigas), the BGB is 9.6 μm thick, while at the swim bladder the harmonic mean thickness of the BGB is 0.22 μm [20]. In the lungs of amphibians and reptiles, it is thinner than in fish gills. In amphibians, it ranges from 1.21 μm in the South African clawed toad (Xenopus laevis) [21] to 2.34 μm in the common newt (Triturus vulgaris) [21]. In reptiles, the BGB is generally much thinner than in amphibians, and the range is also narrower. The smallest recorded maximal harmonic mean thickness was in the red-eared turtle (Pseudemys scripta) at 0.46 μm [21], while the highest was in the Nile crocodile (Crocodylus niloticus) at 1.4 μm [22]. Among vertebrates, the thinnest BGB has been encountered in birds and highly active mammals. In the African rock martin (Ptyonoprogne fuligula), it measures 0.09 μm, while in the violet-eared hummingbird (Colibri coruscans), it is 0.099 μm [5].
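The harmonic mean thickness quoted in these comparisons is a stereological estimator that, unlike the arithmetic mean, weights thin regions of the barrier heavily, matching their dominance in diffusive conductance. A commonly used form (standard morphometric practice, not spelled out in this article; the 2/3 factor is the usual correction for oblique sectioning of the barrier) is:

```latex
% Harmonic mean barrier thickness estimated from n random
% intercept lengths l_i measured across the barrier on
% electron micrographs:
\tau_{h} \;=\; \frac{2}{3}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{l_i}\right)^{-1}
```

Because the reciprocals of the intercepts are averaged, a few very thin intercepts pull τ_h well below the arithmetic mean of the same measurements, which is why harmonic and arithmetic mean thicknesses of the same barrier differ.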
Specialization of the lung amongst mammals appears to be most refined in bats, the only mammals capable of flapping flight, with the greater spear-nosed bat (Phyllostomus hastatus) having the thinnest BGB at 0.1204 μm [23]. Despite the vast range in body mass amongst mammals, the BGB does not appear to vary greatly, being 0.26 μm in the 2.6 g Etruscan shrew (Suncus etruscus) and a close value of 0.35 μm in the bowhead whale (Balaena mysticetus), which weighs about 150 tons [24].

The thickest BGB in birds for which data are available is found in flightless species, with the ostrich (Struthio camelus) leading at 0.56 μm [25], followed by the Humboldt penguin (Spheniscus humboldti) at 0.53 μm [26]. In the better-studied domestic fowl, the thickness of the BGB is intermediate at 0.318 μm. In the emu (Dromaius novaehollandiae) [27], a large flightless bird that has evolved in a habitat with few predators, the BGB is much thinner at 0.232 μm.

## 3. Formation of the Mammalian BGB

In mammals, lung development proceeds through well-defined stages chronologically described as embryonic, pseudoglandular, canalicular, saccular, alveolar, and microvascular maturation [6, 12]. The primitive migrating tubes of the pseudoglandular stage are lined by tall columnar cells, which are progressively reduced in height to form the squamous pneumocytes that participate in the formation of the BGB. Initially, the columnar epithelial cells are converted to primitive pneumoblasts containing numerous lamellar bodies [11]. These pneumoblasts later differentiate into definitive AT-I and AT-II cells in the canalicular stage [11, 12, 28]. The majority of these AT-II cells are converted to AT-I cells (Figures 1 and 2), which form the internal (alveolar) layer of the BGB [11, 29].
The conversion of AT-II to AT-I cells entails several events, which include lowering of the intercellular tight junctions between adjacent epithelial cells (Figure 2) such that the apical part of the cells appears to protrude into the lumen [30]. In addition, there is extrusion of lamellar bodies, and the cells spread out as the airspaces expand (Figure 1). Subsequent thinning of the cells and ultimate apposition of the blood capillaries [11, 12, 28] accomplish the thin BGB.

Schematic diagrams showing the steps in attenuation of the epithelium in the mammalian lung. (a) At the pseudoglandular stage, lung tubules are lined with tall columnar homogeneous cells with the intercellular tight junctions placed high up towards the tubular lumen (open arrows). Notice also that the cells are devoid of microvilli (open arrowheads). (b) As the epithelium attenuates, cells develop lamellar bodies (open arrowheads), and there is lowering of intercellular tight junctions as the cells become stretched; the intercellular spaces also widen (closed arrows). The epithelial cells at this stage are no longer columnar but cuboidal, and the tight junctions have been lowered to the basal part of the epithelium (open arrows). (c) The cells destined to become squamous pneumocytes (AT-I cells) become thinner (closed arrow), extrude their lamellar bodies (open arrowhead), and approximate blood capillaries (BC) so that a thin BGB is formed. Other cells differentiate into definitive AT-II pneumocytes (closed arrowheads) with well-developed lamellar bodies. Notice also the depressed position of the tight junctions (open arrows). Fibroblasts (f) are abundant in the interstitial tissue and are important in laying down collagen.

In addition to cell movements, apoptosis of putative superfluous AT-II cells [31] and their subsequent clearance by alveolar macrophages create space for incipient AT-I cells [12].
During the saccular stage, the interalveolar septa have a double capillary system, the epithelium is thick, and the interstitium is abundant; these are reduced by progressive diminution of the interstitial connective tissue, so that the two capillary layers fuse, resulting in the single capillary layer of the mature lung [12, 28]. The structure of the BGB in mammals has been described in generous detail [19], with the notion that it needs to be extremely thin while maintaining appreciable strength to withstand stress failure. The basic structure of the BGB has been well conserved through evolution and comprises an epithelium, an interstitium, and an endothelium [32].

## 4. Development of the Avian BGB

In birds, the process of BGB formation is totally different from that described in mammals, and lung growth has not been divided into phases. From the laryngotracheal groove, formed from the chick primitive pharynx at about 3-4 days of incubation, the primordial lungs arise as paired evaginations. The proximal part of each lung bud forms the extrapulmonary primary bronchus, and the distal one forms the lung. The distal part of the bronchus (mesobronchus) grows into the surrounding mesenchyme and gives rise to the secondary bronchi [8]. The endoderm gives rise to the epithelium of the airway system, while the surrounding mesenchymal tissue gives rise to the muscles, connective tissues, and lymphatics [8]. Both local vasculogenesis [33] and sprouting angiogenesis [34] contribute to blood vessel formation in the lung. Augmentation, reorganization, and reorientation of the capillaries in forming the thin BGB and the architectural pattern characteristic of the parabronchial unit are achieved by intussusceptive angiogenesis [35].

Formation of the BGB in the chick lung is recognizable at about E8 (E24 in the ostrich), when the cuboidal epithelium is converted to a columnar one; by E12, it is stratified and shows signs of losing the apical parts (Figure 3).
Interestingly, cells positive for α-SMA align themselves around the parabronchial tubes, leaving gaps for migration of the prospective gas exchanging units. Such cells finally become the smooth muscle cells that support the interatrial septa (Figures 3 and 4). A recent review on BGB formation in avian embryos [15] has documented what is known, but the information was mainly based on the chicken lung, owing to a lack of data on other species.

Micrographs from semithin sections ((a)–(d)) showing the coarse changes in the parabronchial epithelium, and from paraffin sections showing staining for α-smooth muscle actin ((e) and (f)). ((a) and (b)) A close-up of individual parabronchial tubes (PB) in the ostrich at E24, showing a cuboidal epithelium (open arrow in (a)) and a thickened columnar epithelium (open arrowhead in (b)). Note that in both cases the nuclei remain in the basal region; the apical part of the cell becomes elongated, thus reducing the parabronchial lumen (PB). ((c) and (d)) By E11 in the chick embryo (c), the parabronchial epithelium is pseudostratified and the apical parts of the cells appear club-like (open arrowheads in (c)). By E12, these apical parts are severed such that they appear to fall off into the parabronchial (PB) lumen (open arrowheads in (d)). Dark arrowheads in (c) show developing capillaries. ((e) and (f)) Chick lung stained for α-SMA at E8 and E19, respectively. The α-SMA-positive cells (open arrows in (e)) surround the developing parabronchus (PB) while leaving some gaps (closed arrowheads in (e)) for future migration of atria. At E19, the atria are well formed and the α-SMA-positive cells are restricted to the apical parts of the interatrial septa (open arrows in (f)). ((a) and (b)) are modified from [36] while ((c)–(f)) are from [15].

Transmission electron micrographs showing the various stages in attenuation of the avian conduit epithelium.
(a) At E12 in the chick embryo, apical elongation of the epithelial cells results in the formation of aposomes (stars), and this precedes constriction of the cell at a region below the aposome (arrowheads), due to squeezing by adjacent better endowed cells [37]. (b) In the ostrich embryo at E24, several attenuation processes are evident contemporaneously. In addition to the development of lamellar bodies (open arrowhead), there is lowering of tight junctions (open arrows and circle) so that the aposome (star) is clearly delineated. ((c)-(d)) A second method of extruding the aposomes, demonstrated in the ostrich, involves formation of a double membrane separating the basal part of the cell from the aposome (arrowheads). With subsequent unzipping of the double membrane (open arrows in (c)), the aposome is discharged. Notice the still attached aposomes (stars) and the discharged ones (asterisks in (d)). ((b)–(d)) are modified from [36]. Closed arrows in (d) indicate microfolds formed after rupture of vesicles.

The events in the developing lung of the ostrich closely resemble those of the chick but appear to be delayed by about a factor of two (the incubation period of 40–42 days is twice that of the chicken). The early events in the ostrich have not been documented, but at E24 the lung resembles that of the chick embryo at embryonic day 8 (E8), with parabronchi lined with a cuboidal to tall columnar epithelium; some cells are seen to have tapered apical portions, with formation of double membranes separating the apical protrusion (aposome) from the basal part of the cell (Figure 3). A detailed description of these cell attenuation processes is only available for the chicken lung [37], although a recent report on the ostrich lung indicates that these events are well conserved in the avian species [36].
For the aforementioned reason, the description hereinafter is mainly based on the chick lung but is taken to represent the avian species, with specific reference to the ostrich where differences are encountered.

### 4.1. Peremerecytosis: Cell Decapitation by Constriction or Squeezing

The process of cell attenuation by constriction, strangulation, or even squeezing was dubbed peremerecytosis [37]. Aposome formation by the epithelial cells occurs concomitantly with growth and expansion, so that the better endowed cells squeeze out the aposomes of their sandwiched neighbors. Presumably, this results in adherence and subsequent fusion of the lateral membranes of the squeezed cell, and as such the aposome is discharged (Figure 4). Alternatively, aposome formation is followed by lowering of the tight junctions between adjacent cells and then spontaneous constriction of the cell just above where the tight junction occurs. Similar epithelial cell protrusions into the parabronchial lumina were reported in the developing quail lung [38] and in the developing chicken lung [39], but the precise cellular events were not recognized then. The set of diverse morphogenetic events was presented in detail for the chicken lung [37], and similar processes have recently been demonstrated in the ostrich [36]. In either case, progressive thinning of the stalk of the protrusion results in severing of the aposome. This process is analogous to aposecretion in exocrine glands [40], the difference being in the contents discharged and the timing of the events. In archetypical aposecretion there is bulging of the apical cytoplasm, absence of subcellular structures, and presence of membrane-bound cell fragments, the so-called aposomes [41].

### 4.2. Secarecytosis: Cell Cutting by Cavitation or Double Membrane Unzipping

The various processes that result in the cutting of the epithelial cells during attenuation have been grouped together under one name, secarecytosis.
This terminology describes all the processes that lead to severing of the cell aposome, or of cell processes such as microfolds, without constriction. Cutting in this case proceeds through intercellular cavitation or double membrane formation [36, 37].

#### 4.2.1. Cell Cutting by Intracellular Space Formation

The processes and events that preponderate in the later stages of BGB formation in the avian lung have recently been reviewed [15]. Formation of vesicles (endocytic cavities smaller than 50 nm in diameter) or vacuoles (endocytic cavities greater than 50 nm in diameter) in rows below the cell apical portion is seen in later stages of development. Such cavities finally fuse with their neighboring cognates and then with the apicolateral plasma membranes and, in doing so, sever the aposomal projection from the rest of the cell in a process referred to as coalescing vesiculation. These processes mainly characterize attenuation of the low cuboidal epithelium in the formative atria and infundibula as well as in the migrating air capillaries. The aposomal bodies released contain abundant organelles and several microfolds. Plausibly, the microfolds result from the fusion of contiguous vesicular/vacuolar membranes at the interface between the aposome and the basal part of the cell, hence discharging the aposome. Coalescing vesiculation is contrasted with rupturing vesiculation, in which vesicles and vacuoles move towards the apical plasma membrane, fuse with it, and discharge their entire contents (Figure 5). The result is that the vacuole remains as a concavity bounded on either side by a microfold that resembles a microvillus in 2D section. If the participating cavities are vacuoles, large folds separating the concavities are formed, while rupture of vesicles leaves tiny microfolds resembling microvilli. Whatever the circumstance, there is concomitant reduction in the cell height.
The detailed events were previously reported in the chicken lung [37] and have recently been reported in the ostrich [36].

Transmission electron photomicrographs illustrating the additional mechanisms of epithelial attenuation that occur later in the attenuation process in the chicken lung. In all cases the rectangular frame delineates the BGB, BC is a blood capillary, and AC is an air capillary. ((a) and (b)) are modified from [35] while the rest are from [37], with the kind permission of the publisher. ((a) and (b)) Formation of vacuoles (V) and vesicles (open arrows) and their subsequent rupture (fusion with the apical plasmalemma) result in the formation of numerous microfolds that resemble microvilli (closed arrowheads). Notice an aposome (asterisk in (b)) still attached to the cell apical membrane but hanging above a vacuole (closed arrow). ((c) and (d)) Microfolds formed as a result of vesicle rupture (closed arrowheads) are severed, so that by the time of hatching (d) there were virtually no microfolds; the BGB was similar to that of adults and the air capillaries (AC) were well developed.

#### 4.2.2. Cell Cutting by Double Membrane Unzipping

Formation of dark bands across a cell usually occurs between the protruding aposome and the basal part of the cell. The band is believed to be a double plasma membrane, probably associated with cytoskeletal proteins. The double membrane may form the site of separation, whereby the apical part is severed from the basal one (Figure 4). In some cases, the double membrane forms a boundary above which the processes of cell cutting, such as rupturing vesiculation, take place. These processes have recently been demonstrated in the chicken [37] and the ostrich [36] lungs.

## 5. Mechanisms of Epithelial Cell Attenuation

In the mammalian lung the mechanisms of BGB formation appear rather simple. Lowering of the tight junctions towards the basal part of the cell is followed by stretching of the cell as the airspaces expand. It was, however, noted that in the attenuating cells there is summary discharge of whole lamellar bodies (Figures 1 and 2) rather than discharge of their contents [11]. In physiological type II cell secretion, surfactant is discharged through tiny pores averaging 0.2 × 0.4 μm in size on the luminal surface of AT-II cells [42].
The details of how exactly the tight junctions are lowered, how the cells become stretched, or even how the entire lamellar bodies are squeezed out are lacking.

The processes and mechanisms involved in attenuation of the epithelium of the chicken lung are much more complicated but to a large extent resemble physiological secretory processes. In general, they lead to progressive reduction in the cell height until the required thickness is attained. As observed in the developing chicken lung, the primitive tubes at E8 are mainly lined by a cuboidal epithelium, which converts to a tall columnar one and then becomes stratified columnar with the onset of the first signs of attenuation (Figure 3). Subsequently, the epithelium undergoes dramatic size reduction and loses morphological polarization through the processes described above. These processes closely resemble aposecretion, where a portion of a cell is discharged with its contents, minus the organelles.

During aposecretion, proteins such as myosin and gelsolin [43] or even actin [44, 45] have been implicated in extrusion of the apical protrusions. The presence of actin filaments in the constricting aposome has been demonstrated in the attenuating epithelium of the chick embryo lung, plausibly implicating them in the cell cutting process [37]. The actin filaments were localized at the level of the aposomal constriction, since they are associated with the cell adhesion belt [46], and are also indicators of distal relocation of cell junctions [37]. Change of shape in ingressing embryonic cells has been reported: the apices of such cells are constricted, plausibly through actomyosin contraction [47], with the result that organelles are displaced basally in readiness for migration.
Over and above the actomyosin activity, physiological aposecretion, as occurs in the reproductive system, is also driven by hormones and muscarinic receptors [43].

Smooth muscle cells staining positively for alpha actin have been shown to be associated with the developing parabronchi in the chicken lung. Notably, such cells become aligned at the basal aspects of parabronchial epithelial cells, delineating gaps through which incipient atria sprout (Figure 3). The α-SMA-positive cells, while playing a role in tubular patterning, may also be important in epithelial attenuation. During milk secretion, for example, myoepithelial cells below the secretory epithelium squeeze the epithelial cells above and, in so doing, facilitate the release of milk into the secretory acinus [48]. Plausibly, the association of α-SMA-positive cells with the attenuating air conduit epithelium during epithelial attenuation is important in facilitating such aposecretion-like cell processes.

## 6. Physiological Adaptation and Remodeling of the BGB

The pulmonary BGB undergoes certain changes, which include increases in the thickness of the basement membranes and breaks in the endothelium, as a result of stress failure [1, 2]. Continual regulation of the wall structure of the BGB occurs through rapid changes in gene expression for extracellular matrix proteins and growth factors in response to increases in capillary wall stress. This helps to maintain the extreme thinness with sufficient strength [49].

Structural alterations in the BGB in response to physiological changes have been demonstrated. Berg and co-workers [50] subjected lungs to high states of inflation over 4 hours, with the result that gene expression for α1(III) and α2(IV) procollagens, fibronectin, basic fibroblast growth factor (bFGF), and transforming growth factor β1 (TGF-β1) was increased.
Similarly, Parker and colleagues increased venous pressure in perfused isolated rabbit lungs and found a significant increase in mRNA for α1(I) but not α2(IV) procollagen [51]. The difference was thought to arise because both experimental techniques increase stress in structures other than capillaries. In young dogs subjected to prolonged low oxygen tensions (high altitude), there was a notable reduction in the harmonic mean thickness of the BGB and a shift in its frequency distribution such that thinner segments were more preponderant [52]. This indicates a redistribution of tissue components within the alveolar septa in such a way that diffusive resistance is minimized.

Breaks in the BGB in cases of extreme stress have been reported. In thoroughbred racehorses after galloping, excessive pressures can lead to pulmonary capillary failure with resultant pulmonary hemorrhage [53]. In related studies, increases in red blood cells and protein in the bronchoalveolar lavage fluid of exercising elite athletes indicated that the integrity of the blood-gas barrier is impaired by short-term exercise [54]. Similar findings were documented in a rabbit model of increased capillary pressure, with subsequent damage to all or parts of the blood-gas barrier [55]. The lack of significant elevations in the cytokines known to increase the permeability of the capillary endothelium militates against an inflammatory mechanism and supports the hypothesis that mechanical stress may impair the function of the human blood-gas barrier during exercise [54]. Extremely high stress in the walls of the pulmonary capillaries, as may occur in mechanical ventilation, results in ultrastructural changes including disruptions of both the alveolar epithelial and capillary endothelial layers [56]. Stress failure can result from pathological conditions that interfere with the structural and/or physiological integrity of the BGB.
Such conditions include high-altitude pulmonary edema, neurogenic pulmonary edema, severe left ventricular failure, mitral valve stenosis, and overinflation of the lung [56]. There is a spectrum from low-permeability to high-permeability edema as the capillary pressure is raised. Remodeling of pulmonary capillaries apparently occurs at high capillary pressures. It is likely that the extracellular matrix of the capillaries is continuously regulated in response to capillary wall stress.

## 7. Molecular Regulation of BGB Development

A detailed discussion of the molecular control of BGB formation needs to consider the various coarse components that come into play during its establishment: on the vascular side is the capillary endothelium, the middle layer is the extracellular matrix (ECM), and the epithelium lines the airspaces. Recently, Herbert and Stainier [57] have provided an updated review of the molecular control of endothelial cell differentiation, with the notion that VEGF and Notch signaling are important pathways. Angiogenesis itself is a complex process, currently under intensive investigation, whose molecular control is slowly falling into shape [58]. The intermediate layer of the BGB starts out excessively abundant but is progressively diminished, allowing the capillary endothelium to approximate the attenuating gas exchange epithelium. Therefore, the genes that come into play in the production and regulation of the matrix metalloproteinases, the enzymes that degrade the ECM, are important in lung development [59] and BGB formation. Detailed reports of the molecular control of angiogenesis and ECM biosynthesis are, however, not within the scope of the current discussion, and we will concentrate on differentiation of the alveolar/air capillary epithelium and its subsequent approximation to the endothelium.

The lung in vertebrates is known to be compliant except in avian species.
Therefore, some commonalities would be expected in the inauguration and early stages of lung development. Lung development has been well studied in mammals, and to some reasonable extent in birds, but not much has been done in the ectotherms. Reports on lung structure in reptiles [22, 60, 61], the frog [62], and fish [5] indicate that the parenchymal interairspace septa do not mature, and a double capillary system is retained in these ectotherms. While the controlling molecules may be similar to those in mammals and birds at the inaugural stages of lung development, subtle differences would be expected in the later stages of lung maturation. Indeed, many of the controlling factors have been highly conserved through evolution [7, 32].

Lung development is driven by two forces: intrinsic factors, which include a host of regulatory molecules, and extrinsic forces, the main one being extracellular lung fluid [63]. A complex set of morphoregulatory molecules constitutes the intrinsic factors, which can be grouped into three classes: transcription factors (e.g., Nkx2.1, also known as thyroid transcription factor-1 (TTF-1), GATA, and HNF-3); signaling molecules such as FGF, BMP-4, PDGF, Shh, and TGF-β; and extracellular matrix proteins and their receptors [7, 63, 64]. In mammals, extrinsic/mechanical forces have been shown to be important for fetal alveolar epithelial cell differentiation. Such forces emanate from fetal lung movements that propel fluid through the incipient air conduits [65].

Formation of the BGB in mammals involves attenuation of the developing lung epithelium, which includes conversion of the columnar epithelium of the pseudoglandular stage to a mainly cuboidal one with lamellar bodies (Figure 1). Subsequently, there is lowering of the intercellular tight junctions, spreading or stretching of the cell, and total extrusion of lamellar bodies (Figure 2), leading to differentiated AT-I and AT-II epithelial cells.
The AT-I cells constitute a thin squamous epithelium that covers over 90% of the alveolar surface area and provides gas exchange between the airspaces and the pulmonary capillary vasculature. AT-II cells are interspersed throughout the alveoli and are responsible for the production and secretion of pulmonary surfactant, regulation of alveolar fluid homeostasis, and differentiation into AT-I cells during lung development and injury. Genetic control of the specific steps described above has not been investigated, but there exist reports on the differentiation of AT-II and AT-I cells and the conversion of AT-II to AT-I cells in mammals [66]. Some of the molecular signals proposed to be involved in the differentiation of AT-II and AT-I cells are (i) transcription factors such as thyroid transcription factor-1 (TTF-1), forkhead orthologs (FOXs), GATA6, HIF2α, Notch, the glucocorticoid receptor, retinoic acid, and ETS family members; (ii) growth factors such as epithelial growth factor (EGF) and bone morphogenetic protein 4 (BMP4); and (iii) other signaling molecules including connexin 43, T1 alpha, and semaphorin 3A. Hereinafter, the role of these molecules in epithelial cell differentiation in the distal lung is briefly described.

### 7.1. Molecular Regulation of BGB in Mammals

#### 7.1.1. Growth Factors

(1) ErbB Growth Factor Receptors. Growth factors regulate the growth and development of the lung, signaling their mitogenic activities through tyrosine kinase receptors. Epithelial growth factor receptor (EGFR), a member of the ErbB transmembrane tyrosine kinases, and its ligand (epithelial growth factor, EGF) have been shown to be involved in alveolar maturation. EGF deficiency induced in rats during perinatal development using EGF autoantibodies results in mild respiratory distress syndrome and delayed alveolar maturation [67].
Inactivation of EGFR/ErbB1 by gene targeting in mice resulted in respiratory failure as a result of impaired alveolarization, including the presence of collapsed or thick-walled alveoli [68]. EGFR is also important for AT-II differentiation, as lungs from EGFR−/− mice have decreased expression of the AT-II specification markers surfactant proteins (SP)-B, -C, and -D [69]. ErbB4, another member of the ErbB receptor family, has also been shown to be involved in alveolar maturation. Deletion of ErbB4 in mice results in alveolar hypoplasia during development and hyperreactive airways in adults. Moreover, developing lungs from ErbB4−/− mice exhibited impaired differentiation of AT-II cells with decreased expression of SP-B and decreased surfactant phospholipid synthesis, indicating that ErbB4 plays a role in the differentiation of AT-II cells [70]. Recently, it has been demonstrated that EGFR and ErbB4 regulate stretch-induced differentiation of fetal type II epithelial cells via the ERK pathway [69].

(2) Bone Morphogenetic Protein 4 (BMP4). Bone morphogenetic protein 4 (BMP4), a member of the transforming growth factor-β (TGFβ) family, is highly expressed in the distal tips of the branching lung epithelium, with lower levels in the adjacent mesenchyme. The role of BMP4 in alveolar differentiation has been examined using transgenic mice that overexpress BMP4 throughout the distal epithelium of the lung under the SP-C promoter/enhancer. The BMP4 transgenic lungs are significantly smaller than normal, with greatly distended terminal buds at E16.5 and E18.5, and at birth they contain large air-filled sacs that do not support normal lung function [71].
Furthermore, whole-mount in situ hybridization analysis of BMP4 transgenic lungs, using probes for the proximal airway marker CC10 and the distal airway marker SP-C, showed normal differentiation of bronchiolar Clara cells but a reduction in differentiated AT-II cells, indicating that BMP4 plays an essential role in alveolar epithelial differentiation [71].

#### 7.1.2. Transcription Factors and Nuclear Receptors

(1) GATA-6 Transcription Factor. Expression of GATA-6, a member of the GATA family of zinc finger transcription factors, occurs in respiratory epithelial cells throughout lung morphogenesis. Dominant negative GATA-6 expression in respiratory epithelial cells inhibits lung differentiation in late gestation and decreases expression of aquaporin-5, the specific marker for AT-I cells, and of surfactant proteins [70], often acting synergistically with TTF-1 [72]. Overexpression of GATA-6 in the epithelium was shown to inhibit alveolarization, with a lack of differentiation of AT-II and AT-I epithelial cells as well as failure of surfactant lipid synthesis [70]. In mice expressing increased levels of GATA-6 in respiratory epithelial cells, postnatal alveolarization was disrupted, resulting in airspace enlargement.

(2) Forkhead Ortholog (FOX) Transcription Factors. Foxa1 and Foxa2, members of the winged-helix/forkhead transcription factor family, are expressed in the epithelium of the developing mouse lung and are important for epithelial branching and cell differentiation. Mice null for Foxa1 do not develop squamous pneumocytes, and although the pulmonary capillaries are well developed, no thin BGB is formed [73]. Previously, it was demonstrated that Foxa2 controls pulmonary maturation at birth.
Neonatal mice lacking Foxa2 expression develop archetypal respiratory distress syndrome with all of the morphological, molecular, and biochemical features found in preterm infants, including atelectasis, hyaline membranes, and a lack of pulmonary surfactant lipids and proteins, and they die at birth [74].

(3) Thyroid Transcription Factor (TTF-1). The transcription factor TTF-1, a member of the Nkx homeodomain gene family, is expressed in the forebrain, thyroid gland, and lung. In the lung, TTF-1 plays an essential role in the regulation of lung morphogenesis and epithelial cell differentiation by transactivating several lung-specific genes, including the surfactant proteins A, B, C, and D and CC10 [75]. Mice harboring a null mutation in the TTF-1 gene exhibit severely attenuated lung epithelial development with a dramatic decrease in airway branching. Moreover, lung epithelial cells in these mice lack expression of SP-C, suggesting that TTF-1 is the major transcription factor for lung epithelial gene expression [76]. Mutations in the human TTF-1 gene have been associated with hypothyroidism and respiratory failure in human infants [77].

(4) Hypoxia-Inducible Factor 2α (HIF2α). Hypoxia-inducible factor 2α (HIF2α), an oxygen-regulated transcription factor, is expressed in the lung primarily in endothelial, bronchial, and AT-II cells. The role of HIF2α in AT-II cells was examined using transgenic mice that conditionally expressed an oxygen-insensitive mutant of HIF2α (mutHIF2α) in airway epithelial cells during development [69]. These mice had dilated alveolar structures during development, and the newborn mice died shortly after birth due to respiratory distress. Moreover, the distal airspaces of mutHIF2α lungs contained AT-II cells of abnormal morphology, including an enlarged cytoplasmic appearance and decreased formation of lamellar bodies, and a significantly reduced number of AT-I cells with decreased expression of aquaporin-5.
It was therefore concluded that HIF2α negatively regulates the differentiation of AT-II to AT-I cells. Conversely, inactivation of HIF2α in transgenic mice resulted in fatal respiratory distress in neonatal mice due to insufficient surfactant production by AT-II cells. Furthermore, lungs of HIF2α−/− mice exhibited disrupted thinning of the alveolar septa and decreased numbers of AT-II cells, indicating that HIF2α regulates the differentiation of AT-II cells [78].

(5) Notch. Notch signaling is also involved in the differentiation of AT-II cells to AT-I cells in mammals. Overexpression of Notch1 in the lung epithelium of transgenic mice constitutively expressing the activation domain (NICD) of Notch1 in the distal lung epithelium under a SP-C promoter/enhancer prevented differentiation of the alveolar epithelium [79]. In these mice, lungs at E18.5 had dilated cysts in place of alveolar saccules. The cysts were composed of cells that were devoid of alveolar markers, including SP-C, keratin 5, and p63, but expressed some markers of proximal airway epithelium, including E-cadherin and Foxa2. Thus, Notch1 arrests differentiation of alveolar epithelial cells. Notch3, another member of the Notch signaling pathway, has also been demonstrated to play a role in alveolar epithelial differentiation. Transgenic mice that constitutively express the activated domain of Notch3 (NICD) in the distal lung epithelium under a SP-C promoter/enhancer were embryonic lethal at E18.5 and harbored altered lung morphology in which epithelial differentiation into AT-I and AT-II cells was impaired. Metaplasia of undifferentiated cuboidal cells in the terminal airways was also evident [80]. Therefore, constitutive activation of Notch3 arrests differentiation of distal lung alveolar epithelial cells.
Recent complementary evidence showed that pharmacological disruption of global Notch signaling in mouse lung organ cultures during early lung development expanded the population of distal lung progenitors, altering morphogenetic boundaries and proximal-distal lung patterning [81].

(6) Glucocorticoid Receptor and Retinoic Acid. Glucocorticoids are important for the maturation of the fetal lung, and glucocorticoid actions are mediated via the intracellular glucocorticoid receptor (GR), a ligand-activated transcriptional regulator. The role of glucocorticoid action via GR signaling in fetal lung maturation has been demonstrated using GR-null mice [82]. The lungs of fetal GR-null mice were found to be hypercellular with blunted septal thinning, leading to a 6-fold increase in the airway-to-capillary diffusion distance and hence failure to develop a functionally viable BGB [82]. The phenotype of these mice was accompanied by an increased number of AT-II cells and a decreased number of AT-I cells, with decreased mRNA expression of the AT-I specific markers T1 alpha and aquaporin-5. The conclusion from these studies was that receptor-mediated glucocorticoid signaling facilitates the differentiation of epithelial cells into AT-I cells but has no effect on AT-II cell differentiation.

Retinoic acid receptor (RAR) signaling is important early during development, but its role has a temporal disposition. RAR signaling establishes an initial program that assigns distal cell fate to the prospective lung epithelium. Downregulation of RA signaling in the late prenatal period is requisite for the eventual formation of mature AT-I and AT-II cells [82, 83]. Furthermore, RAR activation interferes with the proper temporal expression of GATA6, a gene that is critical in the regulation of surfactant protein expression in branching epithelial tubules and in the establishment of the mature AT-II and AT-I cell phenotypes [84].
Later during lung development, RAR signaling is essential for alveolar formation [85].

(7) E74-Like Factor 5 (ELF5). E74-like factor 5 (ELF5), an Ets family transcription factor, is expressed in the distal lung epithelium during early lung development and then becomes restricted to proximal airways at the end of gestation. Overexpression of ELF5 specifically in the lung epithelium during early lung development, using a doxycycline-inducible HA-tagged ELF5 transgene under the SP-C promoter/enhancer, resulted in disrupted branching morphogenesis and delayed epithelial cell differentiation [86]. Lungs overexpressing ELF5 exhibited reduced expression of the distal lung epithelial differentiation marker SP-C [86], indicating that ELF5 negatively regulates AT-II differentiation.

(8) Wnt/β-catenin. The Wnt/β-catenin pathway regulates intracellular signaling, gene transcription, and cell proliferation and/or differentiation. The essential role of the Wnt/β-catenin pathway in the differentiation of the alveolar epithelium has been demonstrated using transgenic mice in which β-catenin was deleted in the developing respiratory epithelium by a doxycycline-inducible conditional system expressing Cre recombinase for homologous recombination [87]. Deficiency of β-catenin in the respiratory epithelium resulted in pulmonary malformations consisting of multiple, enlarged, and elongated bronchiolar tubules and in disruption of the formation and differentiation of the distal terminal alveolar saccules, including the specification of AT-I and AT-II epithelial cells in the alveolus [87].

#### 7.1.3. Other Molecular Signals

(1) Semaphorin 3A. Semaphorin 3A (Sema3A), a neural guidance cue, mediates cell migration, proliferation, and apoptosis and inhibits branching morphogenesis. The role of Sema3A in the maturation and/or differentiation of the distal lung epithelium during development was deduced from studies on Sema3A-null mice.
Lungs from Sema3A−/− embryos had reduced airspace size and thickened alveolar septae with impaired epithelial cell maturation of AT-I and AT-II cells [88].

(2) Connexin 43. Connexin 43, one of the connexin family members that form gap junctions, is one of the most studied proteins in organogenesis. During early lung branching morphogenesis in mice, connexin 43 is highly expressed in the distal tip endoderm of the embryonic lung at E11.5, and after birth connexin 43 is expressed between adjacent AT-I cells in rats and mice. Connexin 43 knockout mice die shortly after birth due to hypoplastic lungs [89]. Lungs from connexin 43−/− mice exhibit delayed formation of alveoli, narrow airspaces, and thicker interalveolar septae. Additionally, such lungs have decreased mRNA expression of the AT-II specific marker SP-C, the AT-I specific marker aquaporin-5, and α-SMA, and have reduced numbers of AT-I cells [89].

(3) T1 Alpha. T1 alpha, a differentiation gene of AT-I cells, is highly expressed in the lung at the end of gestation. T1 alpha is expressed only in AT-I cells and not in AT-II cells. Evidence for the participation of T1 alpha in the differentiation of AT-I cells but not AT-II cells was adduced from studies on knockout mice. Homozygous T1 alpha null mice die at birth due to respiratory failure, and their lungs exhibit abnormally high expression of proliferation markers in the distal lung [81]. There is normal differentiation of AT-II cells with normal expression of surfactant proteins, lack of differentiation of AT-I cells with decreased expression of aquaporin-5, narrower and irregular airspaces, and defective formation of alveolar saccules. Comparison of microarray analyses of T1 alpha−/− and wild-type lungs showed altered expression of genes, including upregulation of the cell-cell interaction gene ephrinA3 and downregulation of negative regulators of the cell cycle such as FosB, EGR1, MPK-1, and Nur11 [90].

### 7.2. Molecular Regulation of BGB Formation in Birds

The avian lung differs fundamentally from that of other vertebrates in having noncompliant terminal gas exchange units. While the upstream control of lung development may be close or similar to that of the other vertebrates, later events indicate that a totally different process occurs. Formation of the BGB requires that the blood capillaries (BCs) and the attenuating air capillaries (ACs) migrate through progressively attenuating interstitium to approximate each other [35, 37]. Elevation of the levels of basic FGF (bFGF), VEGF-A, and PDGF-B during the later phase of avian lung microvascular development [34] indicated that they may be important during the interaction of the BCs and the ACs. In the chicken lung, pulmonary noncanonical Wnt5a uses Ror2 to control the patterning of both distal airway and vascular tubulogenesis and perhaps guides the interfacing of the air capillaries with the blood capillaries [91]. The latter authors showed that lungs with mis-/overexpressed Wnt5a were hypoplastic, with erratic expression patterns of Shh, L-CAM, fibronectin, VEGF, and Flk1. Coordinated development of the pulmonary air conduits and vasculature is achieved through Wnt5a, which plausibly works through fibronectin-mediated VEGF signaling via its regulation of Shh [91]. Fibroblast growth factors (FGFs) and their cognate receptors (FGFRs) are expressed in the developing chick lung and are essential for the epithelial-mesenchymal interactions. Such interactions determine epithelial branching [92] and may be essential for the ultimate establishment of the BGB.
Epithelial growth factor receptor (EGFR), a member of the ErbB transmembrane tyrosine kinases, and its ligand (epithelial growth factor, EGF) have been shown to be involved in alveolar maturation. EGF deficiency in rats during perinatal development using EGF autoantibodies results in mild respiratory distress syndrome and delayed alveolar maturation [67]. Inactivation of EGFR/ErbB1 by gene targeting in mice resulted in respiratory failure as a result of impaired alveolarization including presence of collapsed [68] or thick-walled alveoli [68]. EGFR is also important for the AT-II differentiation as lungs from EGFR−/− mice have decreased expression of AT-II specification markers, surfactant proteins (SP)–B, C, and D [69]. ErbB4, another member of the ErbB receptors family, has been also shown to be involved in alveolar maturation. Deletion of ErbB4 in mice results in alveolar hypoplasia during development and hyperreactive airways in adults. Moreover, developing lungs from ErbB4−/− mice exhibited impaired differentiation of AT-II cells with decreased expression of SP-B and decreased surfactant phospholipid synthesis, indicating that ErbB4 plays a role in the differentiation of AT-II cells [70]. Recently, it has been demonstrated that EGFR and ErbB4 regulate stretch-induced differentiation of fetal type II epithelial cells via the ERK pathway [69].(2) Bone Morphogenetic Protein 4 (BMP4). Bone morphogenetic protein 4 (BMP4), a transforming growth factor-β (TGFβ), is highly expressed in the distal tips of the branching lung epithelium, with lower levels in the adjacent mesenchyme. The role of BMP4 in alveolar differentiation has been examined by using transgenic mice that overexpress BMP4 throughout the distal epithelium of the lung using the SP-C promoter/enhancer. 
The BMP4 transgenic lungs are significantly smaller than normal, with greatly distended terminal buds at E16.5 and E18.5 and at birth contain large air-filled sacs which do not support normal lung function [71]. Furthermore, whole-mount in situ hybridization analysis of BMP4 transgenic lungs using probes for the proximal airway marker, CC10, and the distal airway marker, SP-C, showed normal AT-II differentiation of bronchiolar Clara cells but a reduction in differentiated cells, indicating that BMP4 plays an essential role in the alveolar epithelial differentiation [71]. ### 7.1.2. Transcription Factors and Nuclear Receptors (1) GATA-6 Transcription Factor. Expression of GATA-6, a member of the GATA family of zinc finger transcription factors, occurs in respiratory epithelial cells throughout lung morphogenesis. Dominant negative GATA-6 expression in respiratory epithelial cells inhibits lung differentiation in late gestation and decreases expression of aquaporin-5, the specific marker for AT-I, and surfactant proteins [70] often acting synergistically with TTF-1 [72]. Overexpression of GATA-6 in the epithelium was shown to inhibit alveolarization, and there was lack of differentiation of AT-II and AT-I epithelial cells as well as failure of surfactant lipid synthesis [70]. In mice expressing increased levels of GATA-6 in respiratory epithelial cells, postnatal alveolarization was disrupted, resulting in airspace enlargement.(2) Forkhead Orthologs (FOXs) Transcription Factors. Foxa1 and Foxa2, members of the winged-helix/forkhead transcription family, are expressed in the epithelium of the developing mouse lung and are important for epithelial branching and cell differentiation. Mice null for Foxa1 do not develop squamous pneumocytes, and although the pulmonary capillaries are well developed, no thin BGB is formed [73]. Previously, it was demonstrated that Foxa2 controls pulmonary maturation at birth. 
Neonatal mice lacking Foxa2 expression develop archetypical respiratory distress syndrome with all of the morphological, molecular, and biochemical features found in preterm infants, including atelectasis, hyaline membranes, and the lack of pulmonary surfactant lipids and proteins, and they die at birth [74].(3) Thyroid Transcription Factor (TTF-1). The transcription factor TTF-1, a member of the Nkx homeodomain gene family, is expressed in the forebrain, thyroid gland, and lung. In the lung, TTF-1 plays an essential role in the regulation of lung morphogenesis and epithelial cell differentiation via transactivating several lung-specific genes including the surfactant proteins A, B, C, D, and CC10 [75]. Mice harboring a null mutation in the TTF-1 gene exhibit severely attenuated lung epithelial development with a dramatic decrease in airway branching. Moreover, lung epithelial cells in these mice lack expression of SP-C suggesting that TTF-1 is the major transcription factor for lung epithelial gene expression [76]. Mutations in the human TTF-1 gene have been associated with hypothyroidism and respiratory failure in human infants [77].(4) Hypoxia-Inducible Factor 2α (HIF2α). Hypoxia-inducible factor 2α (HIF2α), an oxygen-regulated transcription factor, in the lung is primarily expressed in endothelial, bronchial, and AT-II cells. The role of HIF2α in AT-II cells was examined by using transgenic mice that conditionally expressed an oxygen-insensitive mutant of HIF2α (mutHIF2α) in airway epithelial cells during development [69]. These mice were shown to have dilated alveolar structures during development, and the newborn mice died shortly after birth due to respiratory distress. Moreover, the distal airspaces of mutHIF2α lungs contained abnormal morphology of AT-II cells including an enlarged cytoplasmic appearance, a decreased formation of lamellar bodies, and a significantly reduced number of AT-I cells with decreased expression of aquaporin-5. 
Therefore, it was indicated that HIF2α negatively regulates the differentiation of AT-II to AT-I cells. Inactivation of HIF2α in transgenic mice resulted in fatal respiratory distress in neonatal mice due to insufficient surfactant production by AT-II cells. Furthermore, lungs of HIF2α-/-mice exhibited disruption of the thinning of the alveolar septa and decreased numbers of AT-II cells, indicating that HIF2α regulates the differentiation of AT-II cells [78].(5) Notch. Notch signaling is also involved in the differentiation of AT-II cells to AT-I cells in mammals. Overexpression of Notch1 in the lung epithelium of transgenic mice constitutively expressing the activation domain (NICD) of Notch 1 in the distal lung epithelium using a SP-C promoter/enhancer prevented the differentiation of the alveolar epithelium [79]. In these mice, lungs at E18.5 had dilated cysts in place of alveolar saccules. The cysts composed of cells that were devoid of alveolar markers including SP-C, keratin 5, and p63, but expressed some markers of proximal airway epithelium including E-cadherin and Foxa2. Thus, Notch1 arrests differentiation of alveolar epithelial cells. Notch3, another member of the Notch signaling pathway, has also been demonstrated to play a role in alveolar epithelial differentiation. Transgenic mice that constitutively express the activated domain of Notch3 (NICD) in the distal lung epithelium using a SP-C promoter/enhancer were shown to be embryonic lethal at E18.5 and harbored altered lung morphology in which epithelial differentiation into AT-I and AT-II cells was impaired. Metaplasia of undifferentiated cuboidal cells in the terminal airways was also evident [80]. Therefore, constitutive activation of Notch3 arrests differentiation of distal lung alveolar epithelial cells. 
Recent complementary evidence showed that pharmacological approaches to disrupt global Notch signaling in mice lung organ cultures during early lung development resulted in the expanding of the population of the distal lung progenitors, altering morphogenetic boundaries and proximal-distal lung patterning [81].(6) Glucocorticoid Receptor and Retinoic Acid. Glucocorticoids are important for the maturation of the fetal lung, and glucocorticoid actions are mediated via the intracellular glucocorticoid receptor (GR), a ligand-activated transcriptional regulator. The role of glucocorticoid action via GR signaling in fetal lung maturation has been demonstrated by using GR-null mice [82]. The lungs of fetal GR-null mice were found to be hypercellular with blunted septal thinning leading to a 6-fold increase in the airway to capillary diffusion distance and hence failure to develop a functionally viable BGB [82]. The phenotype of these mice was accompanied with increased number of AT-II cells and decreased number of AT-I cells with decreased mRNA expression of AT-I specific markers T1 alpha and aquaporin-5. The conclusion in these studies was that receptor-mediated glucocorticoid signaling facilitates the differentiation of epithelial cells into AT-I cells but has no effect on AT-II cell differentiation.Retinoic acid receptor (RAR) signaling is important early during development but its role has a temporal disposition. RAR signaling establishes an initial program that assigns distal cell fate to the prospective lung epithelium. Downregulation of RA signaling in late prenatal period is requisite for eventual formation of mature AT-I and AT-II cells [82, 83]. Furthermore, RAR activation interferes with the proper temporal expression of GATA6, a gene that is critical in regulation of surfactant protein expression in branching epithelial tubules and establishment of the mature AT-II and AT-I cell phenotypes [84]. 
Later during lung development, RAR signaling is essential for alveolar formation [85].(7) E74-Like Factor 5 (ELF5). E74-like factor 5 (ELF5), an Ets family transcription factor, is expressed in the distal lung epithelium during early lung development and then becomes restricted to proximal airways at the end of gestation. Overexpression of ELF5, specifically in the lung epithelium during early lung development by using a doxycycline inducible HA-tagged ELF5 transgene under the SP-C promoter/enhancer, resulted in disrupted branching morphogenesis and delayed epithelial cell differentiation [86]. Lungs overexpressing ELF5 exhibited reduced expression of the distal lung epithelial differentiation marker SP-C [86], indicating that ELF5 negatively regulates AT-II differentiation.(8) Wnt/β-catenin. The Wnt/β-catenin pathway regulates intracellular signaling, gene transcription and cell proliferation and/or differentiation. The essential role of the Wnt/β-catenin pathway in the differentiation of alveolar epithelium has been demonstrated by using transgenic mice in which β-catenin was deleted in the developing respiratory epithelium, using a doxycycline-inducible conditional system to express Cre recombinase-mediated, homologous recombination strategy [87]. Deficiency of β-catenin in the respiratory epithelium resulted in pulmonary malformations consisting of multiple, enlarged, and elongated bronchiolar tubules and disruption of the formation and differentiation of distal terminal alveolar saccules, including the specification of AT-I and AT-II epithelial cells in the alveolus [87]. ### 7.1.3. Other Molecular Signals (1) Semaphorin 3A. Semaphorin 3A (Sema3A), a neural guidance cue, mediates cell migration, proliferation, and apoptosis and inhibits branching morphogenesis. The role of Sema3A in maturation and/or differentiation of the distal lung epithelium during development was deduced from studies on Sema3A-null mice. 
Lungs from Sema3A−/− embryos had reduced airspace size and thickened alveolar septae with impaired epithelial cell maturation of AT-I and AT-II cells [88].(2) Connexin 43. Connexin 43, one of the connexins family members that form gap junctions, is one of the most studied proteins in organogenesis. During early lung branching morphogenesis in mice, connexin 43 is highly expressed in the distal tip endoderm of the embryonic lung at E11.5, and after birth, connexin-43 is expressed between adjacent AT-I cells in rats and mice. Connexin 43 knockout mice die shortly after birth due to hypoplastic lungs [89]. Lungs from connexin 43−/− mice exhibit delayed formation of alveoli, narrow airspaces, and thicker interalveolar septae. Additionally, such lungs have decreased mRNA expressions of AT-II specific marker SP-C gene, AT-I specific marker aquaporin-5, and α-SMA actin and have reduced numbers of AT-I cells [89].(3) T1 Alpha. T1 alpha, a differentiation gene of AT-I cells, is highly expressed in the lung at the end of gestation. T1 alpha is only expressed in AT-I cells but not AT-II cells. Evidence for participation of T1 alpha in differentiation of AT-I cells but not AT-II cells was adduced from studies on knockout mice. Homozygous T1 alpha null mice die at birth due to respiratory failure, and lungs exhibit abnormal high expression of proliferation markers in the distal lung [81]. There is normal differentiation of AT-II cells with normal expression of surfactant proteins, lack of differentiation of AT-I cells with decreased expression of aquaporin-5, narrower and irregular airspaces, and defective formation of alveolar saccules. Comparison of microarray analyses of T1 alpha−/− and wild-type lungs showed that there was an altered expression of genes including upregulation of the cell-cell interaction gene ephrinA3 and downregulation of negative regulators of the cellcycle such as FosB, EGR1, MPK-1 and Nur11 [90]. ## 7.1.1. 
Growth Factors (1) ErbB Growth Factor Receptors. Growth factors regulate the growth and development of the lung. Growth factors signal their mitogenic activities through tyrosine kinase receptors. Epithelial growth factor receptor (EGFR), a member of the ErbB transmembrane tyrosine kinases, and its ligand (epithelial growth factor, EGF) have been shown to be involved in alveolar maturation. EGF deficiency in rats during perinatal development using EGF autoantibodies results in mild respiratory distress syndrome and delayed alveolar maturation [67]. Inactivation of EGFR/ErbB1 by gene targeting in mice resulted in respiratory failure as a result of impaired alveolarization including presence of collapsed [68] or thick-walled alveoli [68]. EGFR is also important for the AT-II differentiation as lungs from EGFR−/− mice have decreased expression of AT-II specification markers, surfactant proteins (SP)–B, C, and D [69]. ErbB4, another member of the ErbB receptors family, has been also shown to be involved in alveolar maturation. Deletion of ErbB4 in mice results in alveolar hypoplasia during development and hyperreactive airways in adults. Moreover, developing lungs from ErbB4−/− mice exhibited impaired differentiation of AT-II cells with decreased expression of SP-B and decreased surfactant phospholipid synthesis, indicating that ErbB4 plays a role in the differentiation of AT-II cells [70]. Recently, it has been demonstrated that EGFR and ErbB4 regulate stretch-induced differentiation of fetal type II epithelial cells via the ERK pathway [69].(2) Bone Morphogenetic Protein 4 (BMP4). Bone morphogenetic protein 4 (BMP4), a transforming growth factor-β (TGFβ), is highly expressed in the distal tips of the branching lung epithelium, with lower levels in the adjacent mesenchyme. The role of BMP4 in alveolar differentiation has been examined by using transgenic mice that overexpress BMP4 throughout the distal epithelium of the lung using the SP-C promoter/enhancer. 
The BMP4 transgenic lungs are significantly smaller than normal, with greatly distended terminal buds at E16.5 and E18.5 and at birth contain large air-filled sacs which do not support normal lung function [71]. Furthermore, whole-mount in situ hybridization analysis of BMP4 transgenic lungs using probes for the proximal airway marker, CC10, and the distal airway marker, SP-C, showed normal AT-II differentiation of bronchiolar Clara cells but a reduction in differentiated cells, indicating that BMP4 plays an essential role in the alveolar epithelial differentiation [71]. ## 7.1.2. Transcription Factors and Nuclear Receptors (1) GATA-6 Transcription Factor. Expression of GATA-6, a member of the GATA family of zinc finger transcription factors, occurs in respiratory epithelial cells throughout lung morphogenesis. Dominant negative GATA-6 expression in respiratory epithelial cells inhibits lung differentiation in late gestation and decreases expression of aquaporin-5, the specific marker for AT-I, and surfactant proteins [70] often acting synergistically with TTF-1 [72]. Overexpression of GATA-6 in the epithelium was shown to inhibit alveolarization, and there was lack of differentiation of AT-II and AT-I epithelial cells as well as failure of surfactant lipid synthesis [70]. In mice expressing increased levels of GATA-6 in respiratory epithelial cells, postnatal alveolarization was disrupted, resulting in airspace enlargement.(2) Forkhead Orthologs (FOXs) Transcription Factors. Foxa1 and Foxa2, members of the winged-helix/forkhead transcription family, are expressed in the epithelium of the developing mouse lung and are important for epithelial branching and cell differentiation. Mice null for Foxa1 do not develop squamous pneumocytes, and although the pulmonary capillaries are well developed, no thin BGB is formed [73]. Previously, it was demonstrated that Foxa2 controls pulmonary maturation at birth. 
Neonatal mice lacking Foxa2 expression develop archetypical respiratory distress syndrome with all of the morphological, molecular, and biochemical features found in preterm infants, including atelectasis, hyaline membranes, and a lack of pulmonary surfactant lipids and proteins, and they die at birth [74].

(3) Thyroid Transcription Factor (TTF-1). The transcription factor TTF-1, a member of the Nkx homeodomain gene family, is expressed in the forebrain, thyroid gland, and lung. In the lung, TTF-1 plays an essential role in the regulation of lung morphogenesis and epithelial cell differentiation by transactivating several lung-specific genes, including the surfactant proteins A, B, C, and D and CC10 [75]. Mice harboring a null mutation in the TTF-1 gene exhibit severely attenuated lung epithelial development with a dramatic decrease in airway branching. Moreover, lung epithelial cells in these mice lack expression of SP-C, suggesting that TTF-1 is the major transcription factor for lung epithelial gene expression [76]. Mutations in the human TTF-1 gene have been associated with hypothyroidism and respiratory failure in infants [77].

(4) Hypoxia-Inducible Factor 2α (HIF2α). Hypoxia-inducible factor 2α (HIF2α), an oxygen-regulated transcription factor, is primarily expressed in the lung in endothelial, bronchial, and AT-II cells. The role of HIF2α in AT-II cells was examined by using transgenic mice that conditionally expressed an oxygen-insensitive mutant of HIF2α (mutHIF2α) in airway epithelial cells during development [69]. These mice were shown to have dilated alveolar structures during development, and the newborn mice died shortly after birth due to respiratory distress. Moreover, the distal airspaces of mutHIF2α lungs contained AT-II cells of abnormal morphology, including an enlarged cytoplasmic appearance and decreased formation of lamellar bodies, and a significantly reduced number of AT-I cells with decreased expression of aquaporin-5.
Therefore, it was concluded that HIF2α negatively regulates the differentiation of AT-II to AT-I cells. Conversely, inactivation of HIF2α in transgenic mice resulted in fatal respiratory distress in neonatal mice due to insufficient surfactant production by AT-II cells. Furthermore, lungs of HIF2α−/− mice exhibited disruption of the thinning of the alveolar septa and decreased numbers of AT-II cells, indicating that HIF2α regulates the differentiation of AT-II cells [78].

(5) Notch. Notch signaling is also involved in the differentiation of AT-II cells to AT-I cells in mammals. Overexpression of Notch1 in transgenic mice constitutively expressing the Notch1 intracellular activation domain (NICD) in the distal lung epithelium under a SP-C promoter/enhancer prevented the differentiation of the alveolar epithelium [79]. In these mice, lungs at E18.5 had dilated cysts in place of alveolar saccules. The cysts were composed of cells that were devoid of alveolar markers, including SP-C, keratin 5, and p63, but expressed some markers of proximal airway epithelium, including E-cadherin and Foxa2. Thus, Notch1 arrests differentiation of alveolar epithelial cells. Notch3, another member of the Notch signaling pathway, has also been demonstrated to play a role in alveolar epithelial differentiation. Transgenic mice that constitutively express the activation domain of Notch3 (NICD) in the distal lung epithelium under a SP-C promoter/enhancer were shown to be embryonic lethal at E18.5 and harbored altered lung morphology in which epithelial differentiation into AT-I and AT-II cells was impaired. Metaplasia of undifferentiated cuboidal cells in the terminal airways was also evident [80]. Therefore, constitutive activation of Notch3 arrests differentiation of distal lung alveolar epithelial cells.
Recent complementary evidence showed that pharmacological approaches to disrupt global Notch signaling in mouse lung organ cultures during early lung development resulted in expansion of the distal lung progenitor population, altering morphogenetic boundaries and proximal-distal lung patterning [81].

(6) Glucocorticoid Receptor and Retinoic Acid. Glucocorticoids are important for the maturation of the fetal lung, and glucocorticoid actions are mediated via the intracellular glucocorticoid receptor (GR), a ligand-activated transcriptional regulator. The role of glucocorticoid action via GR signaling in fetal lung maturation has been demonstrated by using GR-null mice [82]. The lungs of fetal GR-null mice were found to be hypercellular, with blunted septal thinning leading to a 6-fold increase in the airway-to-capillary diffusion distance and hence failure to develop a functionally viable BGB [82]. The phenotype of these mice was accompanied by an increased number of AT-II cells and a decreased number of AT-I cells, with decreased mRNA expression of the AT-I-specific markers T1 alpha and aquaporin-5. The conclusion from these studies was that receptor-mediated glucocorticoid signaling facilitates the differentiation of epithelial cells into AT-I cells but has no effect on AT-II cell differentiation.

Retinoic acid receptor (RAR) signaling is important early during development, but its role has a temporal disposition. RAR signaling establishes an initial program that assigns distal cell fate to the prospective lung epithelium. Downregulation of RA signaling in the late prenatal period is requisite for the eventual formation of mature AT-I and AT-II cells [82, 83]. Furthermore, RAR activation interferes with the proper temporal expression of GATA-6, a gene that is critical in the regulation of surfactant protein expression in branching epithelial tubules and the establishment of the mature AT-II and AT-I cell phenotypes [84].
Later during lung development, RAR signaling is essential for alveolar formation [85].

(7) E74-Like Factor 5 (ELF5). E74-like factor 5 (ELF5), an Ets family transcription factor, is expressed in the distal lung epithelium during early lung development and then becomes restricted to proximal airways at the end of gestation. Overexpression of ELF5 specifically in the lung epithelium during early lung development, by using a doxycycline-inducible HA-tagged ELF5 transgene under the SP-C promoter/enhancer, resulted in disrupted branching morphogenesis and delayed epithelial cell differentiation [86]. Lungs overexpressing ELF5 exhibited reduced expression of the distal lung epithelial differentiation marker SP-C [86], indicating that ELF5 negatively regulates AT-II differentiation.

(8) Wnt/β-catenin. The Wnt/β-catenin pathway regulates intracellular signaling, gene transcription, and cell proliferation and/or differentiation. The essential role of the Wnt/β-catenin pathway in the differentiation of the alveolar epithelium has been demonstrated by using transgenic mice in which β-catenin was deleted in the developing respiratory epithelium, using a doxycycline-inducible, Cre recombinase-mediated homologous recombination strategy [87]. Deficiency of β-catenin in the respiratory epithelium resulted in pulmonary malformations consisting of multiple, enlarged, and elongated bronchiolar tubules and disruption of the formation and differentiation of distal terminal alveolar saccules, including the specification of AT-I and AT-II epithelial cells in the alveolus [87].

## 7.1.3. Other Molecular Signals

(1) Semaphorin 3A. Semaphorin 3A (Sema3A), a neural guidance cue, mediates cell migration, proliferation, and apoptosis and inhibits branching morphogenesis. The role of Sema3A in the maturation and/or differentiation of the distal lung epithelium during development was deduced from studies on Sema3A-null mice.
Lungs from Sema3A−/− embryos had reduced airspace size and thickened alveolar septa with impaired epithelial cell maturation of AT-I and AT-II cells [88].

(2) Connexin 43. Connexin 43, one of the connexin family members that form gap junctions, is one of the most studied proteins in organogenesis. During early lung branching morphogenesis in mice, connexin 43 is highly expressed in the distal tip endoderm of the embryonic lung at E11.5, and after birth, connexin 43 is expressed between adjacent AT-I cells in rats and mice. Connexin 43 knockout mice die shortly after birth due to hypoplastic lungs [89]. Lungs from connexin 43−/− mice exhibit delayed formation of alveoli, narrow airspaces, and thicker interalveolar septa. Additionally, such lungs have decreased mRNA expression of the AT-II-specific marker SP-C, the AT-I-specific marker aquaporin-5, and α-SMA, and have reduced numbers of AT-I cells [89].

(3) T1 Alpha. T1 alpha, a differentiation gene of AT-I cells, is highly expressed in the lung at the end of gestation. T1 alpha is expressed only in AT-I cells and not in AT-II cells. Evidence for the participation of T1 alpha in the differentiation of AT-I cells but not AT-II cells was adduced from studies on knockout mice. Homozygous T1 alpha null mice die at birth due to respiratory failure, and their lungs exhibit abnormally high expression of proliferation markers in the distal lung [81]. There is normal differentiation of AT-II cells with normal expression of surfactant proteins, but a lack of differentiation of AT-I cells with decreased expression of aquaporin-5, narrower and irregular airspaces, and defective formation of alveolar saccules. Comparison of microarray analyses of T1 alpha−/− and wild-type lungs showed an altered expression of genes, including upregulation of the cell-cell interaction gene ephrinA3 and downregulation of negative regulators of the cell cycle such as FosB, EGR1, MPK-1, and Nur11 [90].

## 7.2.
Molecular Regulation of BGB Formation in Birds

The avian lung differs fundamentally from that of other vertebrates in having noncompliant terminal gas exchange units. While the upstream control of lung development may be similar to that of the other vertebrates, later events indicate that a totally different process occurs. Formation of the BGB requires that the blood capillaries (BCs) and the attenuating air capillaries (ACs) migrate through a progressively attenuating interstitium to approximate each other [35, 37]. Elevation of the levels of basic FGF (bFGF), VEGF-A, and PDGF-B during the later phase of avian lung microvascular development [34] indicates that they may be important during the interaction of the BCs and the ACs. In the chicken lung, pulmonary noncanonical Wnt5a uses Ror2 to control patterning of both distal airway and vascular tubulogenesis and perhaps guides the interfacing of the air capillaries with the blood capillaries [91]. The latter authors showed that lungs with mis-/overexpressed Wnt5a were hypoplastic, with erratic expression patterns of Shh, L-CAM, fibronectin, VEGF, and Flk1. Coordinated development of pulmonary air conduits and vasculature is achieved through Wnt5a, which plausibly acts via fibronectin-mediated VEGF signaling and its regulation of Shh [91]. Fibroblast growth factors (FGFs) and their cognate receptors (FGFRs) are expressed in the developing chick lung and are essential for the epithelial-mesenchymal interactions. Such interactions determine epithelial branching [92] and may be essential for ultimate BGB establishment.

## 8. Conclusion

In the current paper, we have presented an overview of the events that take place during the inauguration, development, and remodeling of the vertebrate BGB. We have highlighted the fact that the events differ fundamentally between the compliant mammalian lung and the rigid avian lung. The paper is skewed towards the formation of the internal (alveolar/air capillary) layer of the BGB.
Specific studies on the molecular control of BGB formation are lacking, but investigations on AT-II and AT-I cell differentiation in mammals exist. While there is a rapidly increasing wealth of studies on the molecular control of mammalian lung development, very little has been done on avian species. Studies on the factors guiding and controlling the newly described cell processes of secarecytosis and peremerecytosis in the avian lung are strongly recommended. Furthermore, investigations focused on epithelial attenuation and epithelial-endothelial interactions would illuminate the mechanisms preponderant during BGB formation.

---

*Source: 101597-2012-12-27.xml*
# Development and Remodeling of the Vertebrate Blood-Gas Barrier

**Authors:** Andrew Makanya; Aikaterini Anagnostopoulou; Valentin Djonov

**Journal:** BioMed Research International (2013)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2013/101597
---

## Abstract

During vertebrate development, the lung inaugurates as an endodermal bud from the primitive foregut. Dichotomous subdivision of the bud results in arborizing airways that form the prospective gas exchanging chambers, where a thin blood-gas barrier (BGB) is established. In the mammalian lung, this proceeds through conversion of type II cells to type I cells, thinning and elongation of the cells, as well as extrusion of the lamellar bodies. Subsequent diminution of interstitial tissue and apposition of capillaries to the alveolar epithelium establish a thin BGB. In the noncompliant avian lung, attenuation proceeds through cell-cutting processes that result in remarkable thinning of the epithelial layer. A host of morphoregulatory molecules have been implicated, including transcription factors such as Nkx2.1, GATA, HNF-3, and WNT5a; signaling molecules including FGF, BMP-4, Shh, and TGF-β; and extracellular proteins and their receptors. During normal physiological function, the BGB may be remodeled in response to alterations in transmural pressures in both blood capillaries and airspaces. Such changes are mitigated through rapid expression of the relevant genes for extracellular matrix proteins and growth factors. While an appreciable amount of information regarding molecular control has been documented in the mammalian lung, very little is available on the avian lung.

---

## Body

## 1. Introduction

The pulmonary blood-gas barrier (BGB) performs the noble role of passive diffusion of gases between blood and a common pool that delivers the air to the exchanging structures. The BGB is a paradoxical bioengineering structure in that it attains remarkable strength while at the same time remaining thin enough to allow gas exchange. The two aspects of the BGB important for efficient exchange are thinness and an extensive surface area.
Additionally, the barrier needs to be strong to withstand stress failure, as may occur due to increased blood capillary pressure during exercise [1]. The presence of collagen IV within the basement membranes is associated with the remarkable strength characteristic of the BGB [2]. In vertebrates, the design of the BGB is governed by many factors, including evolutionary status, gas exchange medium, and level of physiological activity. The BGB is most refined in birds, where it is reputed to be largely uniform on both sides of the capillary and is generally 2.5 times thinner than that in mammals [2]. In the developing mammalian lung at the saccular stage, interairspace septa have a double capillary system [3], and hence only one side of the capillary is exposed to air. Such septa are generally referred to as immature. In mammals, this double capillary system is converted to a single one [3] except in some primitive species such as the naked mole rat (Heterocephalus glaber), where it persists in adults [4]. In adult mammals, the BGB occurs in two types: a thinner, tripartite one that comprises the alveolar epithelium separated from the capillary endothelium by a basal lamina (Figure 1), and a thicker one where an interstitium intervenes between the epithelial basal lamina and the endothelial basal lamina [19]. In ectotherms, generally, immature septa with a double capillary system preponderate [5].

Micrographs showing the changing pulmonary epithelium in the developing quokka lung. (a) At the canalicular stage, both cuboidal (closed arrowhead) and squamous epithelium (open arrowhead) are present. At the centre of the thick interstitium is a large blood vessel (V). (b) The cuboidal epithelium comprises cells well endowed with lamellar bodies (white arrows). These cells notably lack microvilli and may be described as pneumoblasts with the potential to form either of the two definitive alveolar pneumocytes (AT-I and AT-II).
Note the large blood vessel (V) below the epithelium. ((c) and (d)) During the saccular stage the epithelial cells (E) possess numerous lamellar bodies (asterisk) and have become low cuboidal in the process of conversion to AT-I cells. AT-II cells converting to AT-I pneumocytes appear to do so by extruding entire lamellar bodies (closed arrowhead in (d)) and flattening out (arrow). Notice the already formed thin BGB (open arrowhead) and an erythrocyte (Er) in the conterminous capillary. ((e) and (f)) Immature interalveolar septa (E) are converted to mature ones through fusion of capillary layers (asterisk in (e)) and reduction in interstitial tissue. The process starts during the alveolar stage and continues during the microvascular maturation stage. Notice the thin BGB (square frames) and the thick side of the BGB in adults (open arrowhead in (f)). Erythrocytes (Er) and a nucleus (N) belonging to an AT-I cell are also shown. (a)–(c) are from [11], (d) is from [12], while (e) and (f) were obtained from [13], all with permission from the publishers.

Generally, the vertebrate lung develops from the ventral aspect of the primitive gut, where the endodermal layer forms a laryngotracheal groove, which later forms the lung bud [6]. This occurs at about embryonic day 9 (E9) in mice, E26 in humans [7], and E3-E4 in the chick [8]. In mammals, there is dichotomous branching of the primitive tubes of the early lung leading to formation of the gas exchange units. In birds, the initial step in dichotomous branching gives rise to the primary bronchi, which proceed to form the mesobronchi. Development of the secondary bronchi, however, does not appear to follow the dichotomous pattern, since certain groups of the secondary bronchi arise from prescribed areas and have a specific 3D orientation [9, 10]. While a wealth of literature exists on mammalian lung development, the picture on the avian lung is just beginning to emerge.
In contrast, development in ectotherms appears to have been ignored by contemporary investigators.

## 2. Structure of the Blood-Gas Barrier in Vertebrates

The basic components of the blood-gas barrier are the epithelium on the aerated side, the intermediate extracellular matrix (ECM), and the capillary endothelium on the perfused side. In mammals, the thickest component of the BGB is the ECM. Calculations from data provided by Watson et al. [14] indicate that the ECM takes 42% and 40% of the entire thickness in the horse and dog, respectively, whereas the epithelium and the endothelium take almost equal proportions at about 28–30%. Unlike in mammals, the interstitium in the avian lung is the thinnest component of the BGB at 17%, while the endothelium is the thickest at 51% [15]. Additionally, the layers of the BGB in the chicken lung are remarkably uniform in thickness over wide regions. The chicken ECM measures about 0.135 μm (arithmetic mean thickness) and mainly comprises the fused basement membranes of the epithelium and endothelium. In ectotherms, the ECM is abundant and lies between the two capillary layers as well as within the BGB; hence ectotherms have a thicker BGB than either mammals or birds. The thickness of the blood-water/air (tissue) barrier decreases from fish through amphibians, reptiles, and mammals to birds [2, 5]. In humans, the thin side has a thickness of 0.2-0.3 μm and covers approximately half of the alveolar wall [16]. It is made up of the fused basement membranes of the epithelial and endothelial layers and is the critical structure for pulmonary gas exchange and stress failure. In contrast, the thick side also contains interstitial cells, such as fibroblasts and pericytes, as well as type I collagen fibers that are important in the scaffold support of the lung. This thick side measures up to 1 μm or more in humans [17] and may be as little as 0.1 μm or less in some domestic mammals [18].
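The fractional contributions reported above follow from simple arithmetic: each component's share is its thickness divided by the total barrier thickness. A minimal sketch, with hypothetical placeholder thicknesses (chosen so the ECM share lands near the ~42% cited for the horse, but not the actual measurements of Watson et al. [14]):

```python
# Sketch: fractional composition of a blood-gas barrier from layer
# thicknesses. The values are hypothetical placeholders for illustration,
# not measured data.
layers_um = {
    "epithelium": 0.20,   # hypothetical thickness, micrometres
    "ECM": 0.30,
    "endothelium": 0.21,
}

total_um = sum(layers_um.values())
shares = {name: 100.0 * t / total_um for name, t in layers_um.items()}

for name, pct in shares.items():
    print(f"{name}: {pct:.1f}% of a {total_um:.2f} um barrier")
```

With these placeholder values the ECM comes out at about 42% and the cellular layers at roughly 28–30% each, mirroring the proportions quoted for the horse.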
The tensile strength of the basement membrane comes from type IV collagen, which is synthesized by both epithelial and endothelial cells, and in smaller amounts by other mesenchymal cells. A detailed review of the structure and remodeling of the BGB was provided by West and Mathieu-Costello [19]. Amongst vertebrates, the lung is more specialized in endotherms (mammals and birds) compared to ectotherms (fish, amphibians, and reptiles). The barrier is thicker in fish gills but relatively thin in the lung of air-breathing fishes. In the gills of the air-breathing Amazonian fish (Arapaima gigas), the BGB is 9.6 μm thick, while at the swim bladder the harmonic mean thickness of the BGB is 0.22 μm [20]. In the lungs of amphibians and reptiles, the barrier is thinner than in fish gills. In amphibians, it ranges from 1.21 μm in the South African clawed toad (Xenopus laevis) [21] to 2.34 μm in the common newt (Triturus vulgaris) [21]. In reptiles, the BGB is generally much thinner than in amphibians, and the range is also narrower. The smallest recorded maximal harmonic mean thickness was in the red-eared turtle (Pseudemys scripta) at 0.46 μm [21], while the highest was in the Nile crocodile (Crocodylus niloticus) at 1.4 μm [22]. Among vertebrates, the thinnest BGB has been encountered in birds and highly active mammals. In the African rock martin (Ptyonoprogne fuligula), it measures 0.09 μm, while in the violet-eared hummingbird (Colibri coruscans), it is 0.099 μm [5]. Specialization of the lung amongst mammals appears to be most refined in bats, the only mammals capable of flapping flight, with the greater spear-nosed bat (Phyllostomus hastatus) having the thinnest BGB at 0.1204 μm [23].
Despite the vast range in body mass amongst mammals, the BGB does not appear to be that different, being 0.26 μm in the 2.6 g Etruscan shrew (Suncus etruscus) and a close value of 0.35 μm in the bowhead whale (Balaena mysticetus), which weighs about 150 tons [24]. The thickest BGB in birds, for which data are available, is found in the flightless species, with the ostrich (Struthio camelus) leading the pack at 0.56 μm [25], followed by the Humboldt penguin (Spheniscus humboldti) at 0.53 μm [26]. In the better studied domestic fowl, the thickness of the BGB is intermediate at 0.318 μm. In the emu (Dromaius novaehollandiae) [27], a large flightless bird that has evolved in a habitat with few predators, the BGB is much thinner at 0.232 μm.

## 3. Formation of the Mammalian BGB

In mammals, lung development proceeds through well-defined stages chronologically described as embryonic, pseudoglandular, canalicular, saccular, alveolar, and microvascular maturation [6, 12]. The primitive migrating tubes of the pseudoglandular stage are lined by tall columnar cells, which are progressively reduced in height to form the squamous pneumocytes that participate in the formation of the BGB. Initially, the columnar epithelial cells are converted to primitive pneumoblasts containing numerous lamellar bodies [11]. These pneumoblasts later differentiate to definitive AT-I and AT-II cells in the canalicular stage [11, 12, 28]. The majority of these AT-II cells are converted to AT-I cells (Figures 1 and 2), which form the internal (alveolar) layer of the BGB [11, 29]. The conversion of AT-II to AT-I cells entails several events, which include lowering of the intercellular tight junctions between adjacent epithelial cells (Figure 2) such that the apical part of the cells appears to protrude into the lumen [30]. In addition, there is extrusion of lamellar bodies, and the cells spread out as the airspaces expand (Figure 1).
Subsequent thinning of the cells and ultimate apposition of the blood capillaries [11, 12, 28] accomplish the thin BGB.

Schematic diagrams showing the steps in attenuation of the epithelium in the mammalian lung. (a) At the pseudoglandular stage, lung tubules are lined with high columnar homogeneous cells with the intercellular tight junctions placed high up towards the tubular lumen (open arrows). Notice also that the cells are devoid of microvilli (open arrowheads). (b) As the epithelium attenuates, cells develop lamellar bodies (open arrowheads), there is lowering of intercellular tight junctions as the cells become stretched, and the intercellular spaces widen (closed arrows). The epithelial cells at this stage are no longer columnar but cuboidal, and the tight junctions have been lowered to the basal part of the epithelium (open arrows). (c) The cells destined to become squamous pneumocytes (AT-I cells) become thinner (closed arrow), extrude their lamellar bodies (open arrowhead), and approximate blood capillaries (BC) so that a thin BGB is formed. Other cells differentiate to ultimate AT-II pneumocytes (closed arrowheads) and have well-developed lamellar bodies. Notice also the depressed position of tight junctions (open arrows). Fibroblasts (f) are abundant in the interstitial tissue and are important in laying down collagen.

In addition to cell movements, apoptosis of putative superfluous AT-II cells [31] and their subsequent clearance by alveolar macrophages create space for incipient AT-I cells [12]. During the saccular stage, the interalveolar septa have a double capillary system, the epithelium is thick, and the interstitium is abundant, but these are reduced by progressive diminution of the interstitial connective tissue, so that the two capillary layers fuse, resulting in the single capillary of the mature lung [12, 28].
The structure of the BGB in mammals has been described in generous detail [19], with the notion that it needs to be extremely thin while maintaining an appreciable strength to withstand stress failure. The basic structure of the BGB has been well conserved through evolution and comprises an epithelium, an interstitium, and an endothelium [32].

## 4. Development of the Avian BGB

In birds, the process of BGB formation is totally different from that described in mammals, and lung growth has not been divided into phases. From the laryngotracheal groove formed from the chick primitive pharynx at about 3-4 days of incubation, the primordial lungs arise as paired evaginations. The proximal part of each lung bud forms the extrapulmonary primary bronchus, and the distal one forms the lung. The distal part of the bronchus (mesobronchus) grows into the surrounding mesenchyme and gives rise to the secondary bronchi [8]. The endoderm gives rise to the epithelium of the airway system, while the surrounding mesenchymal tissue gives rise to the muscles, connective tissues, and lymphatics [8]. Both local vasculogenesis [33] and sprouting angiogenesis [34] contribute to blood vessel formation in the lung. Augmentation, reorganization, and reorientation of the capillaries in forming the thin BGB and the architectural pattern characteristic of the parabronchial unit are by intussusceptive angiogenesis [35]. Formation of the BGB in the chick lung is recognizable at about E8 (E24 in the ostrich), when the cuboidal epithelium is converted to a columnar one; by E12, it is stratified and shows signs of losing the apical parts (Figure 3). Interestingly, cells positive for α-SMA align themselves around the parabronchial tubes, leaving gaps for migration of the prospective gas exchanging units. Such cells finally become the smooth muscle cells that support the interatrial septa (Figures 3 and 4).
A recent review on BGB formation in avian embryos [15] has documented what is known, but the information was mainly based on the chicken lung, due to a lack of data on other species.

Micrographs from semithin sections ((a)–(d)) showing the coarse changes in the parabronchial epithelium, and from paraffin sections showing staining for α-smooth muscle actin ((e) and (f)). ((a) and (b)) A close-up of individual parabronchial tubes (PB) in the ostrich at E24, showing a cuboidal epithelium (open arrow in (a)) and a thickened columnar epithelium (open arrowhead in (b)). Note that in both cases, the nuclei remain in the basal region; the apical part of the cell becomes elongated, thus reducing the parabronchial lumen (PB). ((c) and (d)) By E11 in the chick embryo (c), the parabronchial epithelium is pseudostratified and the apical parts of the cells appear club-like (open arrowheads in (c)). By E12, these apical parts are severed such that they appear to fall off into the parabronchial (PB) lumen (open arrowheads in (d)). Dark arrowheads in (c) show developing capillaries. ((e) and (f)) Chick lung stained for alpha-SMA at E8 and E19, respectively. These alpha-SMA-positive cells (open arrows in (e)) surround the developing parabronchus (PB) while leaving some gaps (closed arrowheads in (e)) for future migration of atria. At E19, the atria are well formed and the alpha-SMA-positive cells are restricted to the apical parts of the interatrial septa (open arrows in (f)). (a) and (b) are modified from [36] while (c)–(f) are from [15].

Transmission electron micrographs showing the various stages in attenuation of the avian conduit epithelium. (a) At E12 in the chick embryo, apical elongation of the epithelial cells results in formation of aposomes (stars), and this precedes constriction of the cell at a region below the aposome (arrowheads), due to squeezing by adjacent better endowed cells [37].
(b) In the ostrich embryo at E24, several attenuation processes are evident contemporaneously. In addition to development of lamellar bodies (open arrowhead), there is lowering of tight junctions (open arrows and circle) so that the aposome (star) is clearly delineated. ((c)-(d)) A second method of extruding the aposomes, demonstrated in the ostrich, involves formation of a double membrane separating the basal part of the cell from the aposome (arrowheads). With subsequent unzipping of the double membrane (open arrows in (c)), the aposome is discharged. Notice the still attached aposomes (stars) and the discharged ones (asterisks in (d)). (b)–(d) are modified from [36]. Closed arrows in (d) indicate microfolds formed after rupture of vesicles.

The events in the developing lung of the ostrich closely resemble those of the chick but appear to be delayed by twice the duration (the incubation period, at 40–42 days, is twice that of the chicken). The early events in the ostrich have not been documented, but at E24 the lung resembles that of the chick embryo at embryonic day 8 (E8), with parabronchi lined with a cuboidal to tall columnar epithelium; some cells are seen to have tapered apical portions and to form double membranes separating the apical protrusion (aposome) from the basal part of the cell (Figure 3). A detailed description of these cell attenuation processes is only available for the chicken lung [37]. A recent report on the ostrich lung indicates that these events are well conserved in the avian species [36]. For the aforementioned reason, the description hereinafter is mainly based on the chick lung but is taken to represent the avian species, with specific reference to the ostrich where differences are encountered.

### 4.1. Peremerecytosis: Cell Decapitation by Constriction or Squeezing

The process of cell attenuation by constriction, strangulation, or even squeezing was dubbed peremerecytosis [37].
Aposome formation by the epithelial cells occurs concomitantly with growth and expansion, so that the better endowed cells squeeze out the aposomes of their sandwiched neighbors. Presumably, this results in adherence and subsequent fusion of the lateral membranes of the squeezed cell, and as such the aposome is discharged (Figure 4). Alternatively, aposome formation is followed by the lowering of the tight junctions between adjacent cells and then spontaneous constriction of the cell just above where the tight junction occurs. Similar epithelial cell protrusions into the parabronchial lumina were reported in the developing quail lung [38] and in the developing chicken lung [39], but the precise cellular events were not recognized then. The set of diverse morphogenetic events was presented in detail for the chicken lung [37], and similar processes have recently been demonstrated in the ostrich [36]. In either case, progressive thinning of the stalk of the protrusion results in severing of the aposome. This process is analogous to aposecretion in exocrine glands [40], the difference being in the contents discharged and the timing of the events. In archetypical aposecretion there is bulging of the apical cytoplasm, absence of subcellular structures, and presence of membrane-bound cell fragments, the so-called aposomes [41].

### 4.2. Secarecytosis: Cell Cutting by Cavitation or Double Membrane Unzipping

The various processes that result in the cutting of the epithelial cells during attenuation have been grouped together under one name, secarecytosis. This terminology describes all the processes that lead to severing of the cell aposome or of cell processes such as microfolds without causing constriction. Cutting in this case proceeds through intercellular cavitation or double membrane formation [36, 37].

#### 4.2.1.
Cell Cutting by Intracellular Space Formation The processes and events that preponderate in the later stages of BGB formation in the avian lung have recently been reviewed [15]. Formation of vesicles (endocytic cavities smaller than 50 nm in diameter) or vacuoles (endocytic cavities greater than 50 nm in diameter) in rows below the cell apical portion is seen in later stages of development. Such cavities finally fuse with their neighboring cognates and then with the apicolateral plasma membranes and, in doing so, sever the aposomal projection from the rest of the cell in a process referred to as coalescing vesiculation. The latter process mainly characterizes attenuation of the low cuboidal epithelium in the formative atria and infundibulae as well as in the migrating air capillaries. The aposomal bodies released contain abundant organelles and several microfolds. Plausibly, the microfolds result from the fusion of contiguous vesicular/vacuolar membranes at the interface between the aposome and the basal part of the cell, hence discharging the aposome. Coalescing vesiculation is contrasted with rupturing vesiculation, where vesicles and vacuoles move towards the apical plasma membrane, fuse with it, and discharge their entire contents (Figure 5). The result is that the vacuole remains as a concavity bounded on either side by a microfold that resembles a microvillus in 2D section. If the participating cavities are vacuoles, large folds separating the concavities are formed, while rupture of vesicles leaves tiny microfolds resembling microvilli. Whatever the circumstance, there is a concomitant reduction in cell height. The detailed events were previously reported in the chicken lung [37] and have recently been reported in the ostrich [36].

Transmission electron photomicrographs illustrating the additional mechanisms of epithelial attenuation that occur later during the attenuation process in the chicken lung.
In all cases the rectangular frame delineates the BGB; BC is blood capillary and AC is air capillary. ((a) and (b)) are modified from [35] while the rest are from [37], with the kind permission of the publisher. ((a) and (b)) Formation of vacuoles (V) and vesicles (open arrows) and their subsequent rupture (fusion with the apical plasmalemma) results in formation of numerous microfolds that resemble microvilli (closed arrowheads). Notice an aposome (asterisk in (b)) still attached to the cell apical membrane but still hanging above a vacuole (closed arrow). ((c) and (d)) Microfolds formed as a result of vesicle rupture (closed arrowheads) are severed so that by the time of hatching (d) there were virtually no microfolds. The BGB was similar to that of adults and the air capillaries (AC) were well developed. (a) (b) (c) (d)

#### 4.2.2. Cell Cutting by Double Membrane Unzipping

Formation of dark bands across a cell occurs usually between the protruding aposome and the basal part of the cell. The band is believed to be a double plasma membrane, probably associated with cytoskeletal proteins. The double membrane may form the site of separation, whereby the apical part is severed from the basal one (Figure 4). In some cases, the double membrane forms a boundary above which the processes of cell cutting such as rupturing vesiculation take place. These processes have recently been demonstrated in the chicken [37] and the ostrich lungs [36].
## 5. Mechanisms of Epithelial Cell Attenuation

In the mammalian lungs the mechanisms of BGB formation appear rather simple. Lowering of the tight junctions towards the basal part of the cell is followed by stretching of the cell as the airspaces expand. It was, however, noted that in the attenuating cells there is summary discharge of whole lamellar bodies (Figures 1 and 2) rather than discharge of their contents [11]. In physiological type II cell secretion, surfactant is discharged through tiny pores averaging 0.2×0.4 μm in size on the luminal surface of AT-II cells [42]. The details on how exactly the tight junctions are lowered, how the cells become stretched, or even how the entire lamellar bodies are squeezed out are lacking. The processes and mechanisms involved in attenuation of the epithelium of the chicken lung are much more complicated but to a large extent resemble physiological secretory processes. In general, they lead to progressive reduction in the cell height until the required thickness is attained.
As observed in the developing chicken lung, the primitive tubes at E8 are mainly lined by cuboidal epithelium, which converts to high columnar and then becomes stratified columnar with the onset of the first signs of attenuation (Figure 3). Subsequently, the epithelium undergoes dramatic size reduction and loses morphological polarization by the processes described above. These processes closely resemble aposecretion, where a portion of a cell is discharged with its contents, minus the organelles. During aposecretion, proteins such as myosin and gelsolin [43] or even actin [44, 45] have been implicated in extrusion of the apical protrusions. Presence of actin filaments in the constricting aposome has been demonstrated in the attenuating epithelium of the chick embryo lung, plausibly implicating them in the cell cutting process [37]. The actin filaments were localized at the level of the aposomal constriction, since they are associated with the cell adhesion belt [46], and are also indicators of distal relocation of cell junctions [37]. Change of shape in ingressing embryonic cells has been reported: the apices of such cells are constricted, plausibly through actomyosin contraction [47], with the result that organelles are displaced basally in readiness for migration. Over and above the actomyosin activity, physiological aposecretion, as occurs in the reproductive system, is also driven by hormones and muscarinic receptors [43]. Smooth muscle cells staining positively for alpha actin have been shown to be associated with the developing parabronchi in the chicken lung. Notably, such cells become aligned at the basal aspects of parabronchial epithelial cells, delineating gaps through which incipient atria sprout (Figure 3). The α-SMA-positive cells, while playing a role in tubular patterning, may be important in epithelial attenuation.
During milk secretion, for example, myoepithelial cells below the secretory epithelium squeeze the epithelial cells above and, in so doing, facilitate the release of milk into the secretory acinus [48]. Plausibly, association of α-SMA-positive cells with the attenuating air conduit epithelium during epithelial attenuation is important in facilitating such aposecretion-like cell processes.

## 6. Physiological Adaptation and Remodeling of the BGB

The pulmonary BGB undergoes certain changes that include increase in the thickness of the basement membranes and breaks in the endothelium as a result of stress failure [1, 2]. Continual regulation of the wall structure of the BGB occurs through rapid changes in gene expression for extracellular matrix proteins and growth factors in response to increases in capillary wall stress. This helps to maintain the extreme thinness with sufficient strength [49]. Structural alterations in the BGB in response to physiological changes have been demonstrated. Berg and co-workers [50] subjected lungs to high states of inflation over 4 hours, with the result that gene expression for α1(III) and α2(IV) procollagens, fibronectin, basic fibroblast growth factor (bFGF), and transforming growth factor β1 (TGF-β1) was increased. Similarly, Parker and colleagues increased venous pressure in perfused isolated rabbit lungs and found a significant increase in mRNA for α1(I) but not α2(IV) procollagen [51]. The difference was thought to arise because both experimental techniques increase stress in structures other than capillaries. In young dogs subjected to prolonged low oxygen tensions (high altitude), there was a notable reduction in the harmonic mean thickness of the BGB and a shift in its frequency distribution such that thinner segments predominated [52].
This indicates redistribution of tissue components within the alveolar septa in such a way that diffusive resistance is minimized. Breaks in the BGB in cases of extreme stress have been reported. In thoroughbred racehorses after galloping, excessive pressures can lead to pulmonary capillary failure with resultant pulmonary hemorrhage [53]. In related studies, increases in red blood cells and protein in the broncho-alveolar lavage fluid of exercising elite athletes indicated that the integrity of the blood-gas barrier is impaired by short-term exercise [54]. Similar findings were documented in a rabbit model of increased capillary pressure with subsequent damage to all or parts of the blood-gas barrier [55]. The lack of significant elevations in the cytokines known to increase the permeability of the capillary endothelium militates against an inflammatory mechanism and supports the hypothesis that mechanical stress may impair the function of the human blood-gas barrier during exercise [54]. Extremely high stress in the walls of the pulmonary capillaries, as may occur in mechanical ventilation, results in ultrastructural changes including disruptions of both the alveolar epithelial and capillary endothelial layers [56]. Stress failure can result from pathological conditions that interfere with the structural and/or physiological integrity of the barrier. Such conditions include high-altitude pulmonary edema, neurogenic pulmonary edema, severe left ventricular failure, mitral valve stenosis, and overinflation of the lung [56]. There is a spectrum from low-permeability to high-permeability edema as the capillary pressure is raised. Remodeling of pulmonary capillaries apparently occurs at high capillary pressures. It is likely that the extracellular matrix of the capillaries is continuously regulated in response to capillary wall stress. ## 7.
Molecular Regulation of BGB Development A detailed discussion of the molecular control of BGB formation needs to consider the various coarse components that come into play during its establishment. On the vascular side is the capillary endothelium, the middle layer is the extracellular matrix (ECM), while the epithelium lines the airspaces. Recently, Herbert and Stainier [57] have provided an updated review of the molecular control of endothelial cell differentiation, with the notion that VEGF and Notch signaling are important pathways. Angiogenesis itself is a complex process which is currently under intensive investigation and whose molecular control is slowly falling into place [58]. The intermediate layer of the BGB starts by being excessively abundant but is progressively diminished, so that the capillary endothelium approximates the attenuating gas exchange epithelium. Therefore, the genes that come into play in production and regulation of the matrix metalloproteinases, the enzymes that degrade the ECM, are important in lung development [59] and BGB formation. Detailed reports of the molecular control of angiogenesis and ECM biosynthesis are, however, not within the scope of the current discussion, and we will concentrate on differentiation of the alveolar/air capillary epithelium and its subsequent approximation to the endothelium. The lung in vertebrates is known to be compliant except in avian species. Therefore, some commonalities would be expected in the inauguration and early stages of lung development. Lung development has been well studied in mammals and to some reasonable extent in birds, but not much has been done in the ectotherms. Reports on lung structure in reptiles [22, 60, 61], the frog [62], and fish [5] indicate that the parenchymal interairspace septa do not mature, and a double capillary system is retained in these ectotherms.
While the controlling molecules may be similar to those in mammals and birds at the inaugural stages of lung development, subtle differences would be expected when it comes to the later stages of lung maturation. Indeed, many of the controlling factors have been highly conserved through evolution [7, 32]. Lung development is driven by two forces: intrinsic factors, which include a host of regulatory molecules, and extrinsic forces, the main one being extracellular lung fluid [63]. A complex set of morphoregulatory molecules constitutes the intrinsic factors, which can be grouped into three classes: transcription factors (e.g., Nkx2.1, also known as thyroid transcription factor-1 (TTF-1), GATA, and HNF-3); signaling molecules such as FGF, BMP-4, PDGF, Shh, and TGF-β; and extracellular matrix proteins and their receptors [7, 63, 64]. In mammals, extrinsic/mechanical forces have been shown to be important for fetal alveolar epithelial cell differentiation. Such forces emanate from fetal lung movements that propel fluid through incipient air conduits [65]. Formation of the BGB in mammals involves attenuation of the developing lung epithelium, which includes conversion of the columnar epithelium of the pseudoglandular stage to a mainly cuboidal one with lamellar bodies (Figure 1). Subsequently, there is a lowering of the intercellular tight junctions, spreading or stretching of the cell, and total extrusion of lamellar bodies (Figure 2), leading to differentiated AT-I and AT-II epithelial cells. The AT-I cells constitute a thin squamous epithelium that covers over 90% of the alveolar surface area and provides gas exchange between the airspaces and the pulmonary capillary vasculature. AT-II cells are interspersed throughout the alveoli and are responsible for the production and secretion of pulmonary surfactant, regulation of alveolar fluid homeostasis, and differentiation into AT-I cells during lung development and injury.
Genetic control of the specific aforementioned steps has not been investigated, but there exist reports on the differentiation of AT-II and AT-I cells and the conversion of AT-II to AT-I cells in mammals [66]. Some of the molecular signals that have been proposed to be involved in the differentiation of AT-II and AT-I cells are (i) transcription factors such as thyroid transcription factor-1 (TTF-1), forkhead orthologs (FOXs), GATA6, HIF2α, Notch, the glucocorticoid receptor, retinoic acid, and ETS family members; (ii) growth factors such as epithelial growth factor (EGF) and bone morphogenetic protein 4 (BMP4); and (iii) other signaling molecules including connexin 43, T1 alpha, and semaphorin 3A. Hereinafter, the role of these molecules in epithelial cell differentiation in the distal lung is briefly described.

### 7.1. Molecular Regulation of BGB in Mammals

#### 7.1.1. Growth Factors

(1) ErbB Growth Factor Receptors. Growth factors regulate the growth and development of the lung and signal their mitogenic activities through tyrosine kinase receptors. Epithelial growth factor receptor (EGFR), a member of the ErbB family of transmembrane tyrosine kinases, and its ligand (epithelial growth factor, EGF) have been shown to be involved in alveolar maturation. EGF deficiency in rats during perinatal development, induced using EGF autoantibodies, results in mild respiratory distress syndrome and delayed alveolar maturation [67]. Inactivation of EGFR/ErbB1 by gene targeting in mice resulted in respiratory failure as a result of impaired alveolarization, including the presence of collapsed or thick-walled alveoli [68]. EGFR is also important for AT-II differentiation, as lungs from EGFR−/− mice have decreased expression of the AT-II specification markers, surfactant proteins (SP) B, C, and D [69]. ErbB4, another member of the ErbB receptor family, has also been shown to be involved in alveolar maturation.
Deletion of ErbB4 in mice results in alveolar hypoplasia during development and hyperreactive airways in adults. Moreover, developing lungs from ErbB4−/− mice exhibited impaired differentiation of AT-II cells with decreased expression of SP-B and decreased surfactant phospholipid synthesis, indicating that ErbB4 plays a role in the differentiation of AT-II cells [70]. Recently, it has been demonstrated that EGFR and ErbB4 regulate stretch-induced differentiation of fetal type II epithelial cells via the ERK pathway [69]. (2) Bone Morphogenetic Protein 4 (BMP4). Bone morphogenetic protein 4 (BMP4), a member of the transforming growth factor-β (TGF-β) family, is highly expressed in the distal tips of the branching lung epithelium, with lower levels in the adjacent mesenchyme. The role of BMP4 in alveolar differentiation has been examined by using transgenic mice that overexpress BMP4 throughout the distal epithelium of the lung using the SP-C promoter/enhancer. The BMP4 transgenic lungs are significantly smaller than normal, with greatly distended terminal buds at E16.5 and E18.5, and at birth they contain large air-filled sacs which do not support normal lung function [71]. Furthermore, whole-mount in situ hybridization analysis of BMP4 transgenic lungs using probes for the proximal airway marker CC10 and the distal airway marker SP-C showed normal differentiation of bronchiolar Clara cells but a reduction in differentiated AT-II cells, indicating that BMP4 plays an essential role in alveolar epithelial differentiation [71].

#### 7.1.2. Transcription Factors and Nuclear Receptors

(1) GATA-6 Transcription Factor. Expression of GATA-6, a member of the GATA family of zinc finger transcription factors, occurs in respiratory epithelial cells throughout lung morphogenesis.
Dominant-negative GATA-6 expression in respiratory epithelial cells inhibits lung differentiation in late gestation and decreases expression of aquaporin-5, the specific marker for AT-I cells, and surfactant proteins [70], often acting synergistically with TTF-1 [72]. Overexpression of GATA-6 in the epithelium was shown to inhibit alveolarization, and there was lack of differentiation of AT-II and AT-I epithelial cells as well as failure of surfactant lipid synthesis [70]. In mice expressing increased levels of GATA-6 in respiratory epithelial cells, postnatal alveolarization was disrupted, resulting in airspace enlargement. (2) Forkhead Orthologs (FOXs) Transcription Factors. Foxa1 and Foxa2, members of the winged-helix/forkhead transcription family, are expressed in the epithelium of the developing mouse lung and are important for epithelial branching and cell differentiation. Mice null for Foxa1 do not develop squamous pneumocytes, and although the pulmonary capillaries are well developed, no thin BGB is formed [73]. Previously, it was demonstrated that Foxa2 controls pulmonary maturation at birth. Neonatal mice lacking Foxa2 expression develop archetypical respiratory distress syndrome with all of the morphological, molecular, and biochemical features found in preterm infants, including atelectasis, hyaline membranes, and the lack of pulmonary surfactant lipids and proteins, and they die at birth [74]. (3) Thyroid Transcription Factor (TTF-1). The transcription factor TTF-1, a member of the Nkx homeodomain gene family, is expressed in the forebrain, thyroid gland, and lung. In the lung, TTF-1 plays an essential role in the regulation of lung morphogenesis and epithelial cell differentiation via transactivation of several lung-specific genes including the surfactant proteins A, B, C, and D and CC10 [75]. Mice harboring a null mutation in the TTF-1 gene exhibit severely attenuated lung epithelial development with a dramatic decrease in airway branching.
Moreover, lung epithelial cells in these mice lack expression of SP-C, suggesting that TTF-1 is the major transcription factor for lung epithelial gene expression [76]. Mutations in the human TTF-1 gene have been associated with hypothyroidism and respiratory failure in human infants [77]. (4) Hypoxia-Inducible Factor 2α (HIF2α). Hypoxia-inducible factor 2α (HIF2α), an oxygen-regulated transcription factor, is primarily expressed in the lung in endothelial, bronchial, and AT-II cells. The role of HIF2α in AT-II cells was examined by using transgenic mice that conditionally expressed an oxygen-insensitive mutant of HIF2α (mutHIF2α) in airway epithelial cells during development [69]. These mice were shown to have dilated alveolar structures during development, and the newborn mice died shortly after birth due to respiratory distress. Moreover, the distal airspaces of mutHIF2α lungs contained AT-II cells of abnormal morphology, including an enlarged cytoplasmic appearance and decreased formation of lamellar bodies, and a significantly reduced number of AT-I cells with decreased expression of aquaporin-5. These findings indicate that HIF2α negatively regulates the differentiation of AT-II to AT-I cells. Inactivation of HIF2α in transgenic mice resulted in fatal respiratory distress in neonatal mice due to insufficient surfactant production by AT-II cells. Furthermore, lungs of HIF2α−/− mice exhibited disruption of the thinning of the alveolar septa and decreased numbers of AT-II cells, indicating that HIF2α regulates the differentiation of AT-II cells [78]. (5) Notch. Notch signaling is also involved in the differentiation of AT-II cells to AT-I cells in mammals. Overexpression of Notch1, achieved in transgenic mice constitutively expressing the activation domain (NICD) of Notch1 in the distal lung epithelium using a SP-C promoter/enhancer, prevented the differentiation of the alveolar epithelium [79].
In these mice, lungs at E18.5 had dilated cysts in place of alveolar saccules. The cysts were composed of cells devoid of alveolar markers, including SP-C, keratin 5, and p63, but expressing some markers of proximal airway epithelium, including E-cadherin and Foxa2. Thus, Notch1 arrests differentiation of alveolar epithelial cells. Notch3, another member of the Notch signaling pathway, has also been demonstrated to play a role in alveolar epithelial differentiation. Transgenic mice that constitutively express the activated domain of Notch3 (NICD) in the distal lung epithelium using a SP-C promoter/enhancer were shown to be embryonic lethal at E18.5 and harbored altered lung morphology in which epithelial differentiation into AT-I and AT-II cells was impaired. Metaplasia of undifferentiated cuboidal cells in the terminal airways was also evident [80]. Therefore, constitutive activation of Notch3 arrests differentiation of distal lung alveolar epithelial cells. Recent complementary evidence showed that pharmacological disruption of global Notch signaling in mouse lung organ cultures during early lung development resulted in expansion of the population of distal lung progenitors, altering morphogenetic boundaries and proximal-distal lung patterning [81]. (6) Glucocorticoid Receptor and Retinoic Acid. Glucocorticoids are important for the maturation of the fetal lung, and glucocorticoid actions are mediated via the intracellular glucocorticoid receptor (GR), a ligand-activated transcriptional regulator. The role of glucocorticoid action via GR signaling in fetal lung maturation has been demonstrated by using GR-null mice [82]. The lungs of fetal GR-null mice were found to be hypercellular with blunted septal thinning, leading to a 6-fold increase in the airway-to-capillary diffusion distance and hence failure to develop a functionally viable BGB [82].
The phenotype of these mice was accompanied by an increased number of AT-II cells and a decreased number of AT-I cells, with decreased mRNA expression of the AT-I-specific markers T1 alpha and aquaporin-5. The conclusion from these studies was that receptor-mediated glucocorticoid signaling facilitates the differentiation of epithelial cells into AT-I cells but has no effect on AT-II cell differentiation.

Retinoic acid receptor (RAR) signaling is important early during development, but its role has a temporal disposition. RAR signaling establishes an initial program that assigns distal cell fate to the prospective lung epithelium. Downregulation of RA signaling in the late prenatal period is requisite for the eventual formation of mature AT-I and AT-II cells [82, 83]. Furthermore, RAR activation interferes with the proper temporal expression of GATA6, a gene that is critical in the regulation of surfactant protein expression in branching epithelial tubules and in the establishment of the mature AT-II and AT-I cell phenotypes [84]. Later during lung development, RAR signaling is essential for alveolar formation [85].

(7) E74-Like Factor 5 (ELF5). E74-like factor 5 (ELF5), an Ets family transcription factor, is expressed in the distal lung epithelium during early lung development and then becomes restricted to the proximal airways at the end of gestation. Overexpression of ELF5 specifically in the lung epithelium during early lung development, by using a doxycycline-inducible HA-tagged ELF5 transgene under the SP-C promoter/enhancer, resulted in disrupted branching morphogenesis and delayed epithelial cell differentiation [86]. Lungs overexpressing ELF5 exhibited reduced expression of the distal lung epithelial differentiation marker SP-C [86], indicating that ELF5 negatively regulates AT-II differentiation.

(8) Wnt/β-catenin. The Wnt/β-catenin pathway regulates intracellular signaling, gene transcription, and cell proliferation and/or differentiation.
The essential role of the Wnt/β-catenin pathway in the differentiation of the alveolar epithelium has been demonstrated by using transgenic mice in which β-catenin was deleted in the developing respiratory epithelium, using a doxycycline-inducible, Cre recombinase-mediated homologous recombination strategy [87]. Deficiency of β-catenin in the respiratory epithelium resulted in pulmonary malformations consisting of multiple, enlarged, and elongated bronchiolar tubules and in disruption of the formation and differentiation of distal terminal alveolar saccules, including the specification of AT-I and AT-II epithelial cells in the alveolus [87].

#### 7.1.3. Other Molecular Signals

(1) Semaphorin 3A. Semaphorin 3A (Sema3A), a neural guidance cue, mediates cell migration, proliferation, and apoptosis and inhibits branching morphogenesis. The role of Sema3A in maturation and/or differentiation of the distal lung epithelium during development was deduced from studies on Sema3A-null mice. Lungs from Sema3A−/− embryos had reduced airspace size and thickened alveolar septae, with impaired epithelial cell maturation of AT-I and AT-II cells [88].

(2) Connexin 43. Connexin 43, one of the connexin family members that form gap junctions, is one of the most studied proteins in organogenesis. During early lung branching morphogenesis in mice, connexin 43 is highly expressed in the distal tip endoderm of the embryonic lung at E11.5, and after birth, connexin 43 is expressed between adjacent AT-I cells in rats and mice. Connexin 43 knockout mice die shortly after birth due to hypoplastic lungs [89]. Lungs from connexin 43−/− mice exhibit delayed formation of alveoli, narrow airspaces, and thicker interalveolar septae. Additionally, such lungs have decreased mRNA expression of the AT-II-specific marker SP-C, the AT-I-specific marker aquaporin-5, and α-SMA, and have reduced numbers of AT-I cells [89].

(3) T1 Alpha.
T1 alpha, a differentiation gene of AT-I cells, is highly expressed in the lung at the end of gestation. T1 alpha is expressed only in AT-I cells and not in AT-II cells. Evidence for the participation of T1 alpha in the differentiation of AT-I cells but not AT-II cells was adduced from studies on knockout mice. Homozygous T1 alpha null mice die at birth due to respiratory failure, and their lungs exhibit abnormally high expression of proliferation markers in the distal lung [81]. There is normal differentiation of AT-II cells with normal expression of surfactant proteins, lack of differentiation of AT-I cells with decreased expression of aquaporin-5, narrower and irregular airspaces, and defective formation of alveolar saccules. Comparison of microarray analyses of T1 alpha−/− and wild-type lungs showed altered gene expression, including upregulation of the cell-cell interaction gene ephrinA3 and downregulation of negative regulators of the cell cycle such as FosB, EGR1, MPK-1, and Nur11 [90].

### 7.2. Molecular Regulation of BGB Formation in Birds

The avian lung differs fundamentally from that of other vertebrates in having noncompliant terminal gas exchange units. While the upstream control of lung development may be close or similar to that of the other vertebrates, later events indicate that a totally different process occurs. Formation of the BGB requires that the blood capillaries (BCs) and the attenuating air capillaries (ACs) migrate through progressively attenuating interstitium to approximate each other [35, 37]. Elevation of the levels of basic FGF (bFGF), VEGF-A, and PDGF-B during the later phase of avian lung microvascular development [34] indicated that these factors may be important during the interaction of the BCs and the ACs. In the chicken lung, pulmonary noncanonical Wnt5a uses Ror2 to control patterning of both distal airway and vascular tubulogenesis and perhaps guides the interfacing of the air capillaries with the blood capillaries [91].
The latter authors showed that lungs with mis-/overexpressed Wnt5a were hypoplastic with erratic expression patterns of Shh, L-CAM, fibronectin, VEGF, and Flk1. Coordinated development of pulmonary air conduits and vasculature is achieved through Wnt5a, which plausibly works through fibronectin-mediated VEGF signaling through its regulation of Shh [91]. Fibroblast growth factors (FGFs) and their cognate receptors (FGFRs) are expressed in the developing chick lung and are essential for the epithelial-mesenchymal interactions. Such interactions determine epithelial branching [92] and may be essential for ultimate BGB establishment. ## 7.1. Molecular Regulation of BGB in Mammals ### 7.1.1. Growth Factors (1) ErbB Growth Factor Receptors. Growth factors regulate the growth and development of the lung. Growth factors signal their mitogenic activities through tyrosine kinase receptors. Epithelial growth factor receptor (EGFR), a member of the ErbB transmembrane tyrosine kinases, and its ligand (epithelial growth factor, EGF) have been shown to be involved in alveolar maturation. EGF deficiency in rats during perinatal development using EGF autoantibodies results in mild respiratory distress syndrome and delayed alveolar maturation [67]. Inactivation of EGFR/ErbB1 by gene targeting in mice resulted in respiratory failure as a result of impaired alveolarization including presence of collapsed [68] or thick-walled alveoli [68]. EGFR is also important for the AT-II differentiation as lungs from EGFR−/− mice have decreased expression of AT-II specification markers, surfactant proteins (SP)–B, C, and D [69]. ErbB4, another member of the ErbB receptors family, has been also shown to be involved in alveolar maturation. Deletion of ErbB4 in mice results in alveolar hypoplasia during development and hyperreactive airways in adults. 
Moreover, developing lungs from ErbB4−/− mice exhibited impaired differentiation of AT-II cells with decreased expression of SP-B and decreased surfactant phospholipid synthesis, indicating that ErbB4 plays a role in the differentiation of AT-II cells [70]. Recently, it has been demonstrated that EGFR and ErbB4 regulate stretch-induced differentiation of fetal type II epithelial cells via the ERK pathway [69].(2) Bone Morphogenetic Protein 4 (BMP4). Bone morphogenetic protein 4 (BMP4), a transforming growth factor-β (TGFβ), is highly expressed in the distal tips of the branching lung epithelium, with lower levels in the adjacent mesenchyme. The role of BMP4 in alveolar differentiation has been examined by using transgenic mice that overexpress BMP4 throughout the distal epithelium of the lung using the SP-C promoter/enhancer. The BMP4 transgenic lungs are significantly smaller than normal, with greatly distended terminal buds at E16.5 and E18.5 and at birth contain large air-filled sacs which do not support normal lung function [71]. Furthermore, whole-mount in situ hybridization analysis of BMP4 transgenic lungs using probes for the proximal airway marker, CC10, and the distal airway marker, SP-C, showed normal AT-II differentiation of bronchiolar Clara cells but a reduction in differentiated cells, indicating that BMP4 plays an essential role in the alveolar epithelial differentiation [71]. ### 7.1.2. Transcription Factors and Nuclear Receptors (1) GATA-6 Transcription Factor. Expression of GATA-6, a member of the GATA family of zinc finger transcription factors, occurs in respiratory epithelial cells throughout lung morphogenesis. Dominant negative GATA-6 expression in respiratory epithelial cells inhibits lung differentiation in late gestation and decreases expression of aquaporin-5, the specific marker for AT-I, and surfactant proteins [70] often acting synergistically with TTF-1 [72]. 
Overexpression of GATA-6 in the epithelium was shown to inhibit alveolarization, and there was lack of differentiation of AT-II and AT-I epithelial cells as well as failure of surfactant lipid synthesis [70]. In mice expressing increased levels of GATA-6 in respiratory epithelial cells, postnatal alveolarization was disrupted, resulting in airspace enlargement.(2) Forkhead Orthologs (FOXs) Transcription Factors. Foxa1 and Foxa2, members of the winged-helix/forkhead transcription family, are expressed in the epithelium of the developing mouse lung and are important for epithelial branching and cell differentiation. Mice null for Foxa1 do not develop squamous pneumocytes, and although the pulmonary capillaries are well developed, no thin BGB is formed [73]. Previously, it was demonstrated that Foxa2 controls pulmonary maturation at birth. Neonatal mice lacking Foxa2 expression develop archetypical respiratory distress syndrome with all of the morphological, molecular, and biochemical features found in preterm infants, including atelectasis, hyaline membranes, and the lack of pulmonary surfactant lipids and proteins, and they die at birth [74].(3) Thyroid Transcription Factor (TTF-1). The transcription factor TTF-1, a member of the Nkx homeodomain gene family, is expressed in the forebrain, thyroid gland, and lung. In the lung, TTF-1 plays an essential role in the regulation of lung morphogenesis and epithelial cell differentiation via transactivating several lung-specific genes including the surfactant proteins A, B, C, D, and CC10 [75]. Mice harboring a null mutation in the TTF-1 gene exhibit severely attenuated lung epithelial development with a dramatic decrease in airway branching. Moreover, lung epithelial cells in these mice lack expression of SP-C suggesting that TTF-1 is the major transcription factor for lung epithelial gene expression [76]. 
Mutations in the human TTF-1 gene have been associated with hypothyroidism and respiratory failure in human infants [77].(4) Hypoxia-Inducible Factor 2α (HIF2α). Hypoxia-inducible factor 2α (HIF2α), an oxygen-regulated transcription factor, in the lung is primarily expressed in endothelial, bronchial, and AT-II cells. The role of HIF2α in AT-II cells was examined by using transgenic mice that conditionally expressed an oxygen-insensitive mutant of HIF2α (mutHIF2α) in airway epithelial cells during development [69]. These mice were shown to have dilated alveolar structures during development, and the newborn mice died shortly after birth due to respiratory distress. Moreover, the distal airspaces of mutHIF2α lungs contained abnormal morphology of AT-II cells including an enlarged cytoplasmic appearance, a decreased formation of lamellar bodies, and a significantly reduced number of AT-I cells with decreased expression of aquaporin-5. Therefore, it was indicated that HIF2α negatively regulates the differentiation of AT-II to AT-I cells. Inactivation of HIF2α in transgenic mice resulted in fatal respiratory distress in neonatal mice due to insufficient surfactant production by AT-II cells. Furthermore, lungs of HIF2α-/-mice exhibited disruption of the thinning of the alveolar septa and decreased numbers of AT-II cells, indicating that HIF2α regulates the differentiation of AT-II cells [78].(5) Notch. Notch signaling is also involved in the differentiation of AT-II cells to AT-I cells in mammals. Overexpression of Notch1 in the lung epithelium of transgenic mice constitutively expressing the activation domain (NICD) of Notch 1 in the distal lung epithelium using a SP-C promoter/enhancer prevented the differentiation of the alveolar epithelium [79]. In these mice, lungs at E18.5 had dilated cysts in place of alveolar saccules. 
The cysts composed of cells that were devoid of alveolar markers including SP-C, keratin 5, and p63, but expressed some markers of proximal airway epithelium including E-cadherin and Foxa2. Thus, Notch1 arrests differentiation of alveolar epithelial cells. Notch3, another member of the Notch signaling pathway, has also been demonstrated to play a role in alveolar epithelial differentiation. Transgenic mice that constitutively express the activated domain of Notch3 (NICD) in the distal lung epithelium using a SP-C promoter/enhancer were shown to be embryonic lethal at E18.5 and harbored altered lung morphology in which epithelial differentiation into AT-I and AT-II cells was impaired. Metaplasia of undifferentiated cuboidal cells in the terminal airways was also evident [80]. Therefore, constitutive activation of Notch3 arrests differentiation of distal lung alveolar epithelial cells. Recent complementary evidence showed that pharmacological approaches to disrupt global Notch signaling in mice lung organ cultures during early lung development resulted in the expanding of the population of the distal lung progenitors, altering morphogenetic boundaries and proximal-distal lung patterning [81].(6) Glucocorticoid Receptor and Retinoic Acid. Glucocorticoids are important for the maturation of the fetal lung, and glucocorticoid actions are mediated via the intracellular glucocorticoid receptor (GR), a ligand-activated transcriptional regulator. The role of glucocorticoid action via GR signaling in fetal lung maturation has been demonstrated by using GR-null mice [82]. The lungs of fetal GR-null mice were found to be hypercellular with blunted septal thinning leading to a 6-fold increase in the airway to capillary diffusion distance and hence failure to develop a functionally viable BGB [82]. 
The phenotype of these mice was accompanied with increased number of AT-II cells and decreased number of AT-I cells with decreased mRNA expression of AT-I specific markers T1 alpha and aquaporin-5. The conclusion in these studies was that receptor-mediated glucocorticoid signaling facilitates the differentiation of epithelial cells into AT-I cells but has no effect on AT-II cell differentiation.Retinoic acid receptor (RAR) signaling is important early during development but its role has a temporal disposition. RAR signaling establishes an initial program that assigns distal cell fate to the prospective lung epithelium. Downregulation of RA signaling in late prenatal period is requisite for eventual formation of mature AT-I and AT-II cells [82, 83]. Furthermore, RAR activation interferes with the proper temporal expression of GATA6, a gene that is critical in regulation of surfactant protein expression in branching epithelial tubules and establishment of the mature AT-II and AT-I cell phenotypes [84]. Later during lung development, RAR signaling is essential for alveolar formation [85].(7) E74-Like Factor 5 (ELF5). E74-like factor 5 (ELF5), an Ets family transcription factor, is expressed in the distal lung epithelium during early lung development and then becomes restricted to proximal airways at the end of gestation. Overexpression of ELF5, specifically in the lung epithelium during early lung development by using a doxycycline inducible HA-tagged ELF5 transgene under the SP-C promoter/enhancer, resulted in disrupted branching morphogenesis and delayed epithelial cell differentiation [86]. Lungs overexpressing ELF5 exhibited reduced expression of the distal lung epithelial differentiation marker SP-C [86], indicating that ELF5 negatively regulates AT-II differentiation.(8) Wnt/β-catenin. The Wnt/β-catenin pathway regulates intracellular signaling, gene transcription and cell proliferation and/or differentiation. 
The essential role of the Wnt/β-catenin pathway in the differentiation of alveolar epithelium has been demonstrated by using transgenic mice in which β-catenin was deleted in the developing respiratory epithelium, using a doxycycline-inducible conditional system to express Cre recombinase-mediated, homologous recombination strategy [87]. Deficiency of β-catenin in the respiratory epithelium resulted in pulmonary malformations consisting of multiple, enlarged, and elongated bronchiolar tubules and disruption of the formation and differentiation of distal terminal alveolar saccules, including the specification of AT-I and AT-II epithelial cells in the alveolus [87]. ### 7.1.3. Other Molecular Signals (1) Semaphorin 3A. Semaphorin 3A (Sema3A), a neural guidance cue, mediates cell migration, proliferation, and apoptosis and inhibits branching morphogenesis. The role of Sema3A in maturation and/or differentiation of the distal lung epithelium during development was deduced from studies on Sema3A-null mice. Lungs from Sema3A−/− embryos had reduced airspace size and thickened alveolar septae with impaired epithelial cell maturation of AT-I and AT-II cells [88].(2) Connexin 43. Connexin 43, one of the connexins family members that form gap junctions, is one of the most studied proteins in organogenesis. During early lung branching morphogenesis in mice, connexin 43 is highly expressed in the distal tip endoderm of the embryonic lung at E11.5, and after birth, connexin-43 is expressed between adjacent AT-I cells in rats and mice. Connexin 43 knockout mice die shortly after birth due to hypoplastic lungs [89]. Lungs from connexin 43−/− mice exhibit delayed formation of alveoli, narrow airspaces, and thicker interalveolar septae. Additionally, such lungs have decreased mRNA expressions of AT-II specific marker SP-C gene, AT-I specific marker aquaporin-5, and α-SMA actin and have reduced numbers of AT-I cells [89].(3) T1 Alpha. 
T1 alpha, a differentiation gene of AT-I cells, is highly expressed in the lung at the end of gestation. T1 alpha is only expressed in AT-I cells but not AT-II cells. Evidence for participation of T1 alpha in differentiation of AT-I cells but not AT-II cells was adduced from studies on knockout mice. Homozygous T1 alpha null mice die at birth due to respiratory failure, and lungs exhibit abnormal high expression of proliferation markers in the distal lung [81]. There is normal differentiation of AT-II cells with normal expression of surfactant proteins, lack of differentiation of AT-I cells with decreased expression of aquaporin-5, narrower and irregular airspaces, and defective formation of alveolar saccules. Comparison of microarray analyses of T1 alpha−/− and wild-type lungs showed that there was an altered expression of genes including upregulation of the cell-cell interaction gene ephrinA3 and downregulation of negative regulators of the cellcycle such as FosB, EGR1, MPK-1 and Nur11 [90]. ## 7.1.1. Growth Factors (1) ErbB Growth Factor Receptors. Growth factors regulate the growth and development of the lung. Growth factors signal their mitogenic activities through tyrosine kinase receptors. Epithelial growth factor receptor (EGFR), a member of the ErbB transmembrane tyrosine kinases, and its ligand (epithelial growth factor, EGF) have been shown to be involved in alveolar maturation. EGF deficiency in rats during perinatal development using EGF autoantibodies results in mild respiratory distress syndrome and delayed alveolar maturation [67]. Inactivation of EGFR/ErbB1 by gene targeting in mice resulted in respiratory failure as a result of impaired alveolarization including presence of collapsed [68] or thick-walled alveoli [68]. EGFR is also important for the AT-II differentiation as lungs from EGFR−/− mice have decreased expression of AT-II specification markers, surfactant proteins (SP)–B, C, and D [69]. 
ErbB4, another member of the ErbB receptors family, has been also shown to be involved in alveolar maturation. Deletion of ErbB4 in mice results in alveolar hypoplasia during development and hyperreactive airways in adults. Moreover, developing lungs from ErbB4−/− mice exhibited impaired differentiation of AT-II cells with decreased expression of SP-B and decreased surfactant phospholipid synthesis, indicating that ErbB4 plays a role in the differentiation of AT-II cells [70]. Recently, it has been demonstrated that EGFR and ErbB4 regulate stretch-induced differentiation of fetal type II epithelial cells via the ERK pathway [69].(2) Bone Morphogenetic Protein 4 (BMP4). Bone morphogenetic protein 4 (BMP4), a transforming growth factor-β (TGFβ), is highly expressed in the distal tips of the branching lung epithelium, with lower levels in the adjacent mesenchyme. The role of BMP4 in alveolar differentiation has been examined by using transgenic mice that overexpress BMP4 throughout the distal epithelium of the lung using the SP-C promoter/enhancer. The BMP4 transgenic lungs are significantly smaller than normal, with greatly distended terminal buds at E16.5 and E18.5 and at birth contain large air-filled sacs which do not support normal lung function [71]. Furthermore, whole-mount in situ hybridization analysis of BMP4 transgenic lungs using probes for the proximal airway marker, CC10, and the distal airway marker, SP-C, showed normal AT-II differentiation of bronchiolar Clara cells but a reduction in differentiated cells, indicating that BMP4 plays an essential role in the alveolar epithelial differentiation [71]. ## 7.1.2. Transcription Factors and Nuclear Receptors (1) GATA-6 Transcription Factor. Expression of GATA-6, a member of the GATA family of zinc finger transcription factors, occurs in respiratory epithelial cells throughout lung morphogenesis. 
Dominant negative GATA-6 expression in respiratory epithelial cells inhibits lung differentiation in late gestation and decreases expression of aquaporin-5, the specific marker for AT-I, and surfactant proteins [70] often acting synergistically with TTF-1 [72]. Overexpression of GATA-6 in the epithelium was shown to inhibit alveolarization, and there was lack of differentiation of AT-II and AT-I epithelial cells as well as failure of surfactant lipid synthesis [70]. In mice expressing increased levels of GATA-6 in respiratory epithelial cells, postnatal alveolarization was disrupted, resulting in airspace enlargement.(2) Forkhead Orthologs (FOXs) Transcription Factors. Foxa1 and Foxa2, members of the winged-helix/forkhead transcription family, are expressed in the epithelium of the developing mouse lung and are important for epithelial branching and cell differentiation. Mice null for Foxa1 do not develop squamous pneumocytes, and although the pulmonary capillaries are well developed, no thin BGB is formed [73]. Previously, it was demonstrated that Foxa2 controls pulmonary maturation at birth. Neonatal mice lacking Foxa2 expression develop archetypical respiratory distress syndrome with all of the morphological, molecular, and biochemical features found in preterm infants, including atelectasis, hyaline membranes, and the lack of pulmonary surfactant lipids and proteins, and they die at birth [74].(3) Thyroid Transcription Factor (TTF-1). The transcription factor TTF-1, a member of the Nkx homeodomain gene family, is expressed in the forebrain, thyroid gland, and lung. In the lung, TTF-1 plays an essential role in the regulation of lung morphogenesis and epithelial cell differentiation via transactivating several lung-specific genes including the surfactant proteins A, B, C, D, and CC10 [75]. Mice harboring a null mutation in the TTF-1 gene exhibit severely attenuated lung epithelial development with a dramatic decrease in airway branching. 
Moreover, lung epithelial cells in these mice lack expression of SP-C suggesting that TTF-1 is the major transcription factor for lung epithelial gene expression [76]. Mutations in the human TTF-1 gene have been associated with hypothyroidism and respiratory failure in human infants [77].(4) Hypoxia-Inducible Factor 2α (HIF2α). Hypoxia-inducible factor 2α (HIF2α), an oxygen-regulated transcription factor, in the lung is primarily expressed in endothelial, bronchial, and AT-II cells. The role of HIF2α in AT-II cells was examined by using transgenic mice that conditionally expressed an oxygen-insensitive mutant of HIF2α (mutHIF2α) in airway epithelial cells during development [69]. These mice were shown to have dilated alveolar structures during development, and the newborn mice died shortly after birth due to respiratory distress. Moreover, the distal airspaces of mutHIF2α lungs contained abnormal morphology of AT-II cells including an enlarged cytoplasmic appearance, a decreased formation of lamellar bodies, and a significantly reduced number of AT-I cells with decreased expression of aquaporin-5. Therefore, it was indicated that HIF2α negatively regulates the differentiation of AT-II to AT-I cells. Inactivation of HIF2α in transgenic mice resulted in fatal respiratory distress in neonatal mice due to insufficient surfactant production by AT-II cells. Furthermore, lungs of HIF2α-/-mice exhibited disruption of the thinning of the alveolar septa and decreased numbers of AT-II cells, indicating that HIF2α regulates the differentiation of AT-II cells [78].(5) Notch. Notch signaling is also involved in the differentiation of AT-II cells to AT-I cells in mammals. Overexpression of Notch1 in the lung epithelium of transgenic mice constitutively expressing the activation domain (NICD) of Notch 1 in the distal lung epithelium using a SP-C promoter/enhancer prevented the differentiation of the alveolar epithelium [79]. 
In these mice, lungs at E18.5 had dilated cysts in place of alveolar saccules. The cysts composed of cells that were devoid of alveolar markers including SP-C, keratin 5, and p63, but expressed some markers of proximal airway epithelium including E-cadherin and Foxa2. Thus, Notch1 arrests differentiation of alveolar epithelial cells. Notch3, another member of the Notch signaling pathway, has also been demonstrated to play a role in alveolar epithelial differentiation. Transgenic mice that constitutively express the activated domain of Notch3 (NICD) in the distal lung epithelium using a SP-C promoter/enhancer were shown to be embryonic lethal at E18.5 and harbored altered lung morphology in which epithelial differentiation into AT-I and AT-II cells was impaired. Metaplasia of undifferentiated cuboidal cells in the terminal airways was also evident [80]. Therefore, constitutive activation of Notch3 arrests differentiation of distal lung alveolar epithelial cells. Recent complementary evidence showed that pharmacological approaches to disrupt global Notch signaling in mice lung organ cultures during early lung development resulted in the expanding of the population of the distal lung progenitors, altering morphogenetic boundaries and proximal-distal lung patterning [81].(6) Glucocorticoid Receptor and Retinoic Acid. Glucocorticoids are important for the maturation of the fetal lung, and glucocorticoid actions are mediated via the intracellular glucocorticoid receptor (GR), a ligand-activated transcriptional regulator. The role of glucocorticoid action via GR signaling in fetal lung maturation has been demonstrated by using GR-null mice [82]. The lungs of fetal GR-null mice were found to be hypercellular with blunted septal thinning leading to a 6-fold increase in the airway to capillary diffusion distance and hence failure to develop a functionally viable BGB [82]. 
The phenotype of these mice was accompanied with increased number of AT-II cells and decreased number of AT-I cells with decreased mRNA expression of AT-I specific markers T1 alpha and aquaporin-5. The conclusion in these studies was that receptor-mediated glucocorticoid signaling facilitates the differentiation of epithelial cells into AT-I cells but has no effect on AT-II cell differentiation.Retinoic acid receptor (RAR) signaling is important early during development but its role has a temporal disposition. RAR signaling establishes an initial program that assigns distal cell fate to the prospective lung epithelium. Downregulation of RA signaling in late prenatal period is requisite for eventual formation of mature AT-I and AT-II cells [82, 83]. Furthermore, RAR activation interferes with the proper temporal expression of GATA6, a gene that is critical in regulation of surfactant protein expression in branching epithelial tubules and establishment of the mature AT-II and AT-I cell phenotypes [84]. Later during lung development, RAR signaling is essential for alveolar formation [85].(7) E74-Like Factor 5 (ELF5). E74-like factor 5 (ELF5), an Ets family transcription factor, is expressed in the distal lung epithelium during early lung development and then becomes restricted to proximal airways at the end of gestation. Overexpression of ELF5, specifically in the lung epithelium during early lung development by using a doxycycline inducible HA-tagged ELF5 transgene under the SP-C promoter/enhancer, resulted in disrupted branching morphogenesis and delayed epithelial cell differentiation [86]. Lungs overexpressing ELF5 exhibited reduced expression of the distal lung epithelial differentiation marker SP-C [86], indicating that ELF5 negatively regulates AT-II differentiation.(8) Wnt/β-catenin. The Wnt/β-catenin pathway regulates intracellular signaling, gene transcription and cell proliferation and/or differentiation. 
The essential role of the Wnt/β-catenin pathway in the differentiation of alveolar epithelium has been demonstrated by using transgenic mice in which β-catenin was deleted in the developing respiratory epithelium, using a doxycycline-inducible conditional system to express Cre recombinase-mediated, homologous recombination strategy [87]. Deficiency of β-catenin in the respiratory epithelium resulted in pulmonary malformations consisting of multiple, enlarged, and elongated bronchiolar tubules and disruption of the formation and differentiation of distal terminal alveolar saccules, including the specification of AT-I and AT-II epithelial cells in the alveolus [87]. ## 7.1.3. Other Molecular Signals (1) Semaphorin 3A. Semaphorin 3A (Sema3A), a neural guidance cue, mediates cell migration, proliferation, and apoptosis and inhibits branching morphogenesis. The role of Sema3A in maturation and/or differentiation of the distal lung epithelium during development was deduced from studies on Sema3A-null mice. Lungs from Sema3A−/− embryos had reduced airspace size and thickened alveolar septae with impaired epithelial cell maturation of AT-I and AT-II cells [88].(2) Connexin 43. Connexin 43, one of the connexins family members that form gap junctions, is one of the most studied proteins in organogenesis. During early lung branching morphogenesis in mice, connexin 43 is highly expressed in the distal tip endoderm of the embryonic lung at E11.5, and after birth, connexin-43 is expressed between adjacent AT-I cells in rats and mice. Connexin 43 knockout mice die shortly after birth due to hypoplastic lungs [89]. Lungs from connexin 43−/− mice exhibit delayed formation of alveoli, narrow airspaces, and thicker interalveolar septae. Additionally, such lungs have decreased mRNA expressions of AT-II specific marker SP-C gene, AT-I specific marker aquaporin-5, and α-SMA actin and have reduced numbers of AT-I cells [89].(3) T1 Alpha. 
T1 alpha, a differentiation gene of AT-I cells, is highly expressed in the lung at the end of gestation. T1 alpha is expressed only in AT-I cells, not in AT-II cells. Evidence for the participation of T1 alpha in the differentiation of AT-I cells but not AT-II cells was adduced from studies on knockout mice. Homozygous T1 alpha-null mice die at birth due to respiratory failure, and their lungs exhibit abnormally high expression of proliferation markers in the distal lung [81]. There is normal differentiation of AT-II cells with normal expression of surfactant proteins, lack of differentiation of AT-I cells with decreased expression of aquaporin-5, narrower and irregular airspaces, and defective formation of alveolar saccules. Comparison of microarray analyses of T1 alpha−/− and wild-type lungs showed altered expression of genes, including upregulation of the cell-cell interaction gene ephrinA3 and downregulation of negative regulators of the cell cycle such as FosB, EGR1, MPK-1 and Nur11 [90].

## 7.2. Molecular Regulation of BGB Formation in Birds

The avian lung differs fundamentally from that of other vertebrates in having noncompliant terminal gas exchange units. While the upstream control of lung development may be similar to that of the other vertebrates, later events indicate that a totally different process occurs. Formation of the BGB requires that the blood capillaries (BCs) and the attenuating air capillaries (ACs) migrate through progressively attenuating interstitium to approximate each other [35, 37]. Elevation of the levels of basic FGF (bFGF), VEGF-A, and PDGF-B during the later phase of avian lung microvascular development [34] indicated that these factors may be important during the interaction of the BCs and the ACs. In the chicken lung, pulmonary noncanonical Wnt5a uses Ror2 to control the patterning of both distal airway and vascular tubulogenesis and perhaps guides the interfacing of the air capillaries with the blood capillaries [91].
The latter authors showed that lungs with mis-/overexpressed Wnt5a were hypoplastic, with erratic expression patterns of Shh, L-CAM, fibronectin, VEGF, and Flk1. Coordinated development of the pulmonary air conduits and vasculature is achieved through Wnt5a, which plausibly works through fibronectin-mediated VEGF signaling via its regulation of Shh [91]. Fibroblast growth factors (FGFs) and their cognate receptors (FGFRs) are expressed in the developing chick lung and are essential for the epithelial-mesenchymal interactions. Such interactions determine epithelial branching [92] and may be essential for ultimate BGB establishment.

## 8. Conclusion

In the current paper, we have presented an overview of the events that take place during the inauguration, development, and remodeling of the vertebrate BGB. We have highlighted the fact that these events differ fundamentally between the compliant mammalian lung and the rigid avian lung. The paper is skewed towards the formation of the internal (alveolar/air capillary) layer of the BGB. Specific studies on the molecular control of BGB formation are lacking, but investigations on AT-II and AT-I cell differentiation in mammals exist. While there is a rapidly increasing wealth of studies on the molecular control of mammalian lung development, very little has been done on the avian species. Studies on the factors guiding and controlling the newly described cell processes of secarecytosis and peremerecytosis in the avian lung are strongly recommended. Furthermore, investigations focused on epithelial attenuation and epithelial-endothelial interactions would illuminate the mechanisms preponderant during BGB formation.

---
*Source: 101597-2012-12-27.xml*
2013
# Ethical Challenges and Controversies in the Practice and Advancement of Gene Therapy

**Authors:** Emmanuel Owusu Ansah
**Journal:** Advances in Cell and Gene Therapy (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1015996

---

## Abstract

One of the most important technologies in modern medicine is gene therapy, which allows therapeutic genes to be introduced into cells of the body. The approach draws on genetics and recombinant DNA techniques to engineer vectors for the delivery of exogenous material to target cells. The efficacy and safety of the delivery system are a key step towards the success of gene therapy. Somatic cell gene therapy is the easiest in terms of technology and the least problematic in terms of ethics. Although genetic manipulation of germline cells has the potential to permanently eradicate certain hereditary disorders, major ethical issues such as eugenics, enhancement, mosaicism, and the transmission of undesirable traits or side effects to patients’ descendants currently stymie its development, leaving only somatic gene therapy in the works. However, moral, social, and ethical arguments do not imply that germline gene therapy should be banned forever. This review discusses in detail the current challenges surrounding the practice of gene therapy, focusing on the moral arguments and scientific claims that affect the advancement of the technology. The review also suggests precautionary principles as a means to navigate ethical uncertainties.

---

## Body

## 1. Introduction

Gene therapy is an experimental procedure that involves the introduction of a normal gene to compensate for a defective gene, with the goal of improving a disease condition. This is achieved efficiently by using viral vectors to introduce a gene of interest into target cells.
Over the past decades, gene therapy has contributed significantly to the treatment of human diseases, such as cancers, cystic fibrosis, heart disease, diabetes, muscular dystrophy, hemophilia, and AIDS [1]. Historically, the first successful gene therapy trial in humans occurred in 1990, when Ashanti DeSilva, who had adenosine deaminase (ADA) deficiency, a form of severe combined immunodeficiency (SCID), was treated with her own genetically corrected white blood cells [2]. Nine years later, gene therapy faced a devastating setback when Jesse Gelsinger, an 18-year-old with ornithine transcarbamylase (OTC) deficiency, died after a clinical trial of a therapeutic gene treatment. His death resulted from an excessive immune response to the administered therapeutic product. However, gene therapy has since transcended the sphere of failure into the arena of breakthrough. Substantial contributions have been made by gene therapy towards the treatment of human diseases. The efficient delivery of therapeutic genes by viral vectors, especially adeno-associated viruses (AAV), as well as the optimization of the delivery systems, has greatly dispelled certain negative assumptions surrounding the practice of viral gene therapy [3].

Among the first gene therapy products, Gendicine was approved in 2003 by the Chinese Food and Drug Administration (CFDA). The medication is a recombinant adenoviral product expressing the tumor suppressor p53 and is used to treat head and neck carcinoma [4]. Globally, almost 2600 gene therapy products have been considered for clinical trials, of which a significant percentage have been approved [5]. Additionally, the FoCUS project at MIT suggests that 39 gene therapies will gain regulatory approval by the end of 2022 from the 2017 pipeline of 932 development candidates, a figure that includes already approved products. About 45% of these are expected to be utilized in the area of oncology [6].

Gene therapy can be divided into two types: germline and somatic.
The distinction between these two procedures is that, in somatic gene therapy, genetic material is introduced into target cells and the alteration is not handed down to future generations, whereas in germline gene therapy, the therapeutic or altered gene is passed down to future generations. Despite the fact that gene therapy is still in its infancy as a clinically viable therapeutic modality, ethical difficulties and conflicts must be addressed in order to avoid unethical research and health practices. The purpose of this article is to highlight the different ethical difficulties and debates that have arisen as a result of the practice and advancement of gene therapy.

## 2. The Approach of Gene Therapy

Gene therapy uses two approaches for therapeutic gene transfer: in vivo and ex vivo. In vivo gene therapy involves the direct introduction of the gene of interest into a patient's tissue via plasmid, nonviral, or viral vectors. In ex vivo gene therapy, isolated patient cells are genetically altered outside the body and then reimplanted in the same patient, or the desired proteins expressed by the engineered cells are infused into the patient to introduce potentially therapeutic changes.

### 2.1. Genome Editing Technologies

Genome editing techniques are considered among the most challenging yet efficient tools for gene therapeutic approaches [7]. Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9), transcription activator-like effector nucleases (TALEN), and zinc finger nucleases (ZFN) are the most widely used genome editing tools. Genome editing in the field of gene therapy uses an in vivo or ex vivo approach with greatly increased specificity and efficiency. This is achieved by delivering the editing machinery stably into cells to edit genes, as well as to make other highly targeted genomic modifications [8].
CRISPR technology offers great promise for treating a wide range of human genetic diseases. The CRISPR/Cas9 system is currently the leading genome editing technology, as it is efficient and precise for genetic modification processes that include the insertion of therapeutic genes, the destruction of viral DNA, and the correction of harmful mutations [9]. Researchers have demonstrated successful proof-of-concept studies in germline and somatic gene therapy by genome editing. In 2014, Genovese and his group used a targeted genome editing strategy to correct the interleukin-2 receptor subunit gamma (IL2RG) gene, which has provided a new avenue for the treatment of SCID. Another study focused on CRISPR/Cas9-mediated chromosomal inversion of the factor VIII gene in patients with hemophilia A [10]. Genome editing has thus become a powerful method in the field of gene therapy. However, there are certain ethical, moral, and safety concerns related to the application of this technology, especially in the germline.

### 2.2. Germline Genome Editing

Germline gene editing (GGE) has been used as a research tool and as a therapeutic intervention. This technique has been used to modify genes of yeast, plants, rodents, pigs, and primates [11]. In a recent study, gene editing was used to deactivate 62 retrovirus genes in a pig cell line, a crucial step towards creating suitable pig organs for transplantation [12, 13]. In October 2015, researchers edited a gene related to muscle growth in a beagle to double its normal muscle mass [14]. Germline gene editing has the potential to remove disease phenotypes from embryos, and supporters of the technique claim that it could be used as a means of disease prevention in humans. Despite the broad implications, public debate has focused on the ethics of human germline gene editing [15].
In November 2018, He Jiankui, a genome editing researcher in China, reported having used the CRISPR/Cas9 genome editing technique for the first time in human reproduction, disabling the CCR5 gene, which mediates HIV entry into target cells, in embryos that were subsequently implanted into a woman [16]. DNA sequencing confirmed disruption of the CCR5 gene, suggesting the great benefit that might be derived from germline editing. In another study, to understand the efficiency and potential off-target effects of CRISPR technology in embryo editing, Liang et al. cleaved the β-globin gene of tripronuclear (3PN) human zygotes using the CRISPR/Cas ribonucleoprotein. The results showed apparent off-target effects and a low efficiency of homology-directed repair (HDR), coupled with mosaicism [17]. Thus, editing a human embryo could be a useful method to eliminate defective genes and even provide HIV-positive couples the opportunity to give birth to HIV-negative children; however, some potential pitfalls, including off-target effects and mosaicism, limit its application in humans. The safety and efficacy of genome editing tools are the main concerns for clinical application. Consequently, alternative genetic approaches that are safer and more efficient must be explored to protect people, rather than changing the DNA of an embryo [18].

## 3. Ethical Challenges of Gene Therapy

### 3.1. Off-Target Mutation

The most prominent ethical objection to GGE, raised specifically by the National Institutes of Health (NIH), is the off-target effect. Off-target activity could potentially result in unintended mutations, including insertional mutagenesis [19]. Bioethicists and researchers point out that genome editing is a new and unpredictable technology and that little is known about gene regulation and the mechanisms of embryonic development; therefore, the consequences of germline therapy can be fatal [20].
Despite the fact that CRISPR/Cas has proven to be an efficient tool for clinical somatic use, it has not reached the stage where it can be utilized in human genome editing for clinical reproductive purposes. Therefore, the potential long-term effects cannot be overlooked [21, 22]. Genome editing performed on human embryos carries a high risk of causing pathologic diseases and disabilities that can permanently affect the patient and the offspring. Although the specificity of Cas9 targeting is tightly controlled, potential off-target cleavage activity can still occur in DNA sequences, as has been demonstrated in previous studies [17, 23, 24]. Moreover, integrating viral vectors, including retroviruses, lentiviruses, and even adeno-associated viruses, can carry a gene of interest into a nontarget region of the host genome, which can result in insertional mutagenesis. A study in an animal model showed that the integration of AAV into the host genome could result in genotoxic effects, leading to neoplastic transformations that are prone to tumor development [25]. In addition, off-target integration has been observed in lentiviral vector (LV) systems, one of the main delivery vehicles owing to their broad tissue tropism and long-term expression of the transgene [26]. However, refined strategies have been adopted to improve and optimize LV systems for effective and accurate gene delivery [27].

### 3.2. Genetic Mosaicism

In CRISPR germline gene therapy, the CRISPR/Cas vector is introduced immediately after fertilization so that each successive cell resulting from cleavage is genetically modified. However, the vector can persist and continue to be transcribed, making it possible for the Cas protein to be introduced into subsets of already edited cells and potentially initiate further cleavage, leading to mosaicism [28, 29].
Some cells may eventually acquire edits that differ from those of other cells, leading to differences in gene copy number, causing skin, brain, and heart disorders, and impairing embryo maturation. In one study, high levels of mosaicism were observed after germline editing of a model bovine embryo using the Cas9 system [30]. This finding confirms the possibility of its occurrence in human embryos if left unregulated. Furthermore, the technological approach to testing for mosaic mutations in an edited embryo may be ineffective, as the small number of cells selected for testing may not include a mosaic mutant cell [31]. In the summer of 2019, the potential effect of mosaicism emerging from the clinical application of germline editing was discussed by the US National Academy of Medicine, the US National Academy of Sciences, and the UK Royal Society [32]. The lack of clear evidence from experts that mosaic mutation has not occurred across the range of cell and tissue types in early-stage human embryo editing, as well as the inability of the technology to validate that a particular edit is correct and devoid of mosaic mutation, could make it difficult for the public to support the application. Therefore, to ensure that germline editing is safe, all important issues and controversies should be addressed.

### 3.3. Informed Consent

Since the first gene therapy death recorded in a clinical trial, in September 1999, informed decision-making about participation in clinical trials has raised numerous concerns.
It is advisable that participants in gene therapy clinical trials be extensively educated on the potential risks and benefits associated with treatment, so that they have enough information on which to decide, without coercion, whether to participate [33]. A study by the National Human Genome Research Institute (NHGRI) highlighted the need for and importance of informed consent in CRISPR somatic genome editing after surveying patients with sickle cell disease [34]. Inasmuch as gene therapies promise future transformation by treating many incurable diseases, the perceived benefits of the technology should not overshadow the difficulties that patients may face in grasping the long-term hazards. Although somatic gene therapy meets the need for informed consent, germline embryo editing poses a more difficult regulatory issue: whether the consent of a future generation is required and, if so, who should express that consent, because embryos cannot consent to germline intervention [35]. Moreover, the extent of authority over the embryo held by prospective parents and practitioners raises ethical debate: will parents be the only autonomous entity to make decisions for their unborn babies, or will this be seen as usurping the interests of future generations who are unable to consent at the time of the decision [36]? Owing to too many unknowns, it is uncertain what information would be required, or available, to properly inform prospective parents about the dangers, including those for future generations [37]. This poses a significant challenge in obtaining informed consent [38]. As additional gene treatments for incurable hereditary disorders reach the clinic, a discussion on ethics should be started so that these issues can be examined in a clear, fair, and balanced manner, rather than allowing any particular profession to make the final decision on where the ethical limits should be drawn [39]. It is an undeniable fact that any research which may someday prove to be a breakthrough should completely meet the ethical standards of informed consent [40].

### 3.4. Enhancement and Eugenics

Genetic enhancement or improvement is also a legitimate concern surrounding the application of gene therapy.
Enhancement gene therapy means manipulating genes to improve the characteristics of an individual according to that person's interests [41]. Genetic therapy, on the other hand, involves altering faulty genes to prevent or cure diseases [42]. A classic example of enhancement therapy is the injection of recombinant human growth hormone (rhGH) into children of short stature to increase the growth rate and final height [43]. However, the injection of rhGH into children of normal height in an attempt to make them taller may well create ethical issues. Furthermore, athletes rely on recombinant human erythropoietin (EPO) for improvement. EPO induces the production of red blood cells and is used to treat anemia, including in patients on kidney dialysis. However, athletes who do not have any health condition seek EPO therapy in an attempt to improve performance in competitive events where the muscles require a lot of oxygen [44, 45]. Inasmuch as some enhancement practices are considered morally unethical because they depart from the natural, the distinction between enhancement and therapy may be a contextual issue and must be clearly understood. An enhancement application may be therapeutic and vice versa. Improving the height of short persons whose condition results from growth hormone deficiency, as well as restoring the skin color of patients suffering from vitiligo, are examples of therapeutic enhancement. This suggests that genetic therapy and enhancement may share common ground [46]. Moreover, enhancement can potentially lead to eugenics. CRISPR/Cas9 offers the prospect of manipulating the germline to select human traits such as beauty, character, body form, and intelligence, making it possible, in principle, to engineer "improved" individuals and thereby the human race [47]. In 2015, the UNESCO International Bioethics Committee commented on the eugenic dangers of germline procedures.
The committee suggested that the incorporation of gene editing techniques into gene therapy could shift the therapeutic application towards racial improvement. Hence, the equal dignity of all human beings may be undermined, eventually renewing eugenics [48]. To control the use of the technology, an intervention aimed at altering the human genome should be performed only for preventive, diagnostic, or therapeutic purposes, and any attempt to go beyond this goal should be banned [49]. Furthermore, the range of human conditions to which gene therapy is applicable should be clearly defined and properly regulated, so that people are aware of the disease conditions that warrant experimental treatment. This may address concerns about equal accessibility while minimizing nontherapeutic trait enhancement. Scientific researchers should clearly state the goal of any applied or basic research involving CRISPR/Cas editing: whether the research is to provide a therapeutic solution, to generate preliminary data for the development of human genome editing applications, or simply to improve the expression of certain traits for nontherapeutic purposes. These distinctions are necessary in the sense that, even if one opposes human enhancement therapy, there are important applications of CRISPR/Cas editing that do not serve that purpose. Nonetheless, it is crucial to emphasize that distinguishing eugenics from treatment might be difficult. For example, it is often discussed whether enhancing the immune system through gene and immunotherapeutic approaches is eugenics or not [50]. As a result, a case-by-case analysis is required to resolve numerous concerns. In fact, eugenics is rooted in a social construct which justifies discrimination and injustice against those who are deemed genetically unfit [51]. Therefore, it is worthwhile to clarify that gene therapy, when placed in the right context, has the potential to eliminate birth abnormalities and terminal diseases.
Off-Target Mutation The most obvious ethical debate specifically from the National Institute of Health (NIH) against GGE is the off-target effect. Off-target gene mutation could potentially result in insertional mutagenesis and gene mutation [19]. Bioethicists and researchers suggest that genome editing is new and unpredictable technology, and little is known about gene regulation and mechanisms of embryonic development; therefore, the consequences of germline therapy can be fatal [20]. Despite the fact that CRISPR/Cas proves to be an efficient tool for clinical somatic use, it has not reached the stage to be utilized in human genome editing for clinical reproductive purposes. Therefore, the apparent long-term effects cannot be overlooked [21, 22]. Genome editing performed on human embryos has a high risk of causing pathologic diseases and disabilities that can permanently affect the patient and the offspring. Although the specificity of Cas9 targeting is tightly controlled, potential off-target cleavage activity could still occur in DNA sequences and has been demonstrated in previous studies [17, 23, 24]. Nevertheless, integrating viral vectors including retrovirus, lentivirus, and even adeno-associated viruses can carry a gene of interest into a nontarget region of the host genome which can likely result in insertional mutagenesis. A study in an animal model shows that the integration of AAV into chromosome 19 could possibly result in genotoxic effects, leading to neoplastic transformations that are prone to tumor development [25]. In addition, off-target integration has been observed in lentiviral vector systems (LV), one of the main delivery vehicles due to its high tissue tropism and long-term expression of the transgene [26]. However, refined strategies have been adapted to improve and optimize LV systems for effective and accurate gene delivery [27]. ## 3.2. 
Genetic Mosaicism In CRISPR germline gene therapy, the CRISPR/Cas vector is inserted immediately after fertilization so that each successive cell resulting from cleavage is genetically modified. However, the vector can persist and transcribe, making it possible to further introduce the Cas protein into parts of already engineered cells and potentially initiate another cleavage, leading to mosaicism [28, 29]. Some cells may eventually acquire edits that are different from those of other cells, leading to differences in gene copy number, causing skin, brain, and heart disorders, and impairing embryo maturation. In a study, high levels of mosaicism were observed after germline editing of a model bovine embryo using the Cas9 system [30]. This finding confirms the possibility of its occurrence in human embryos if left unregulated. Furthermore, the technological approach to testing mosaic mutations in an edited embryo may be ineffective, as the small number of cells selected for testing may not include a mosaic mutant cell [31]. In the summer of 2019, the potential effect of mosaicism emerging from the clinical application of germline editing was discussed by the US National Academy of Medicine, the US National Academy of Sciences, and the Royal Society of medicine [32]. The lack of clear evidence from experts that mosaic mutation has not occurred in a range of cell and tissue types of early-stage human embryo editing, as well as the inability of the technology to validate that a particular edit is correct and devoid of mosaic mutation could make it difficult for the public to support the application. Therefore, to ensure that germline editing is safe, all important issues and controversies should be addressed. ## 3.3. Informed Consent Following the first gene therapy death recorded in a clinical trial in September 1999, the informed decision about participating in a clinical trial has gained numerous concerns. 
It is advisable that participants undergoing gene therapy clinical trials must be extensively educated on the potential risks and benefits associated with treatment to provide them with enough information on which to decide to participate or not without coercion [33]. A study by the National Human Genome Research Institute (NHGRI) proposed the need and importance of informed consent in CRISPR somatic genome editing after surveying patients with sickle cell disease [34]. Inasmuch as gene therapies suggest future transformation by treating many incurable diseases, the perceived benefits of the technology should not overshadow the difficulties that the patients may face in grasping long-term hazards. Although somatic gene therapy meets the need for informed consent, germline embryo editing poses a more difficult regulatory issue, that is, whether consent of a future generation is required and, if so, who should express consent because embryos cannot consent to germline intervention [35]. Moreover, the extent of authority over the embryo by the prospective parents and practitioners raises ethical debate, whether parents will be the only autonomous entity to make decisions for their unborn babies or will this be seen as usurping the interests of future generations who are unable to consent at the time of the decision [36]. Due to too many unknowns, it is uncertain what information would be required or available to properly inform prospective parents about dangers, including those for future generations [37]. This poses a significant challenge in obtaining informed consent [38]. As additional gene treatments for incurable hereditary disorders enter the consent clinic, a discussion on ethics should be started so that these issues can be discussed in a clear, fair, and balanced manner, rather than allowing any particular profession to make the final decision on where the ethical limits should be drawn [39]. 
Any research that may someday prove to be a breakthrough should fully meet the ethical standards of informed consent [40]. ## 3.4. Enhancement and Eugenics Genetic enhancement is another legitimate concern surrounding the application of gene therapy. Enhancement gene therapy means manipulating genes to improve an individual's characteristics according to that person's interests [41]; genetic therapy, by contrast, involves altering faulty genes to prevent or cure disease [42]. A classic example of enhancement is the injection of recombinant human growth hormone (rhGH) into children of short stature to increase their growth rate and final height [43]; injecting rhGH into children of normal height simply to make them taller, however, raises ethical problems. Similarly, some athletes turn to recombinant human erythropoietin (EPO) for performance enhancement. EPO stimulates red blood cell production and is used to treat anemia, for example in patients undergoing kidney dialysis; yet athletes with no medical condition seek EPO therapy to improve performance in competitive events where muscles demand large amounts of oxygen [44, 45]. Because some enhancement practices are considered morally objectionable as departures from the natural, the distinction between enhancement and therapy, which is often contextual, must be clearly understood. An enhancement application may be therapeutic and vice versa: increasing the height of short persons whose condition results from growth hormone deficiency, or restoring the skin color of patients suffering from vitiligo, are therapeutic enhancements. This suggests that genetic therapy and enhancement overlap [46]. Moreover, enhancement can potentially lead to eugenics.
CRISPR/Cas9 offers the prospect of manipulating the germline to select human traits such as beauty, character, body build, and intelligence, raising the possibility of engineering "improved" individuals and, by extension, the human race [47]. In 2015, the UNESCO International Bioethics Committee commented on the eugenic dangers of germline procedures, suggesting that incorporating gene editing techniques into gene therapy could shift the application from therapy toward "improvement" of the human race, undermining the equal dignity of all human beings and ultimately reviving eugenics [48]. To control the use of the technology, interventions aimed at altering the human genome should be performed only for preventive, diagnostic, or therapeutic purposes, and attempts that go beyond these purposes should be banned [49]. Furthermore, the range of human conditions to which gene therapy may be applied should be clearly defined and properly regulated, so that people know which diseases and disease states qualify for experimental treatment. This may address concerns about equal access while minimizing nontherapeutic trait enhancement. Researchers should clearly state the goal of any applied or basic research involving CRISPR/Cas editing: whether it is to provide a therapeutic solution, to generate preliminary data for the development of human genome editing applications, or merely to improve the expression of certain traits for nontherapeutic purposes. These distinctions matter because, even for those who oppose human enhancement, there are important applications of CRISPR/Cas editing that do not serve that purpose. Nonetheless, distinguishing eugenics from treatment can be difficult; whether enhancing the immune system through gene and immunotherapeutic approaches counts as eugenics, for example, is often debated [50].
A case-by-case analysis is therefore required to resolve these concerns. Eugenics, after all, is rooted in a social construct that justifies discrimination and injustice against those deemed genetically unfit [51]. It is therefore worth emphasizing that gene therapy, placed in the right context, has the potential to eliminate birth abnormalities and terminal diseases. ## 4. Conclusion Gene therapy has made incredible strides since its first human trial and holds great promise for medicine and health care. Despite the tragedies of early clinical trials and the optimism surrounding this emerging field, many therapeutic products have been approved worldwide and many more are still being tested. Of the two gene therapy approaches, germline gene therapy has raised the more controversial issues, including off-target effects, mosaic mutation, informed consent, and eugenics. Although these bioethical concerns may sound morally and socially legitimate to proponents, the public, and even scientists, they are not conclusive enough to halt the beneficial applications of gene therapy. Nevertheless, to keep public debate from hampering the advancement of gene therapy, system optimization, detailed safety protocols, and rigorous regulatory measures must be put in place to help achieve the therapeutic goals of this technology. --- *Source: 1015996-2022-08-24.xml*
# Ethical Challenges and Controversies in the Practice and Advancement of Gene Therapy

**Authors:** Emmanuel Owusu Ansah

**Journal:** Advances in Cell and Gene Therapy (2022)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0, http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1015996
--- ## Abstract One of the most important technologies in modern medicine is gene therapy, which allows therapeutic genes to be introduced into cells of the body. The approach draws on genetics and recombinant DNA techniques to engineer vectors that deliver exogenous material to target cells, and the efficacy and safety of the delivery system are key to the success of gene therapy. Somatic cell gene therapy is the easiest in terms of technology and the least problematic in terms of ethics. Although genetic manipulation of germline cells has the potential to permanently eradicate certain hereditary disorders, major ethical issues such as eugenics, enhancement, mosaicism, and the transmission of undesirable traits or side effects to patients' descendants currently stymie its development, leaving only somatic gene therapy in active use. However, moral, social, and ethical arguments do not imply that germline gene therapy should be banned forever. This review discusses in detail the current challenges surrounding the practice of gene therapy, focusing on the moral arguments and scientific claims that affect the advancement of the technology, and suggests precautionary principles as a means of navigating ethical uncertainty. --- ## Body ## 1. Introduction Gene therapy is an experimental procedure that involves introducing a normal gene to compensate for a defective gene, with the goal of improving a disease condition. This is achieved efficiently by using viral vectors to deliver a gene of interest into target cells. Over the past decades, gene therapy has contributed significantly to the treatment of human diseases such as cancers, cystic fibrosis, heart disease, diabetes, muscular dystrophy, hemophilia, and AIDS [1].
Historically, the first successful human gene therapy trial took place in 1990, when Ashanti DeSilva, who had severe combined immunodeficiency (SCID) caused by adenosine deaminase (ADA) deficiency, was treated with her own genetically corrected white blood cells [2]. Nine years later, gene therapy suffered a devastating setback when Jesse Gelsinger, an 18-year-old with ornithine transcarbamylase (OTC) deficiency, died in a clinical trial of a therapeutic gene treatment; his death resulted from an excessive immune response to the administered product. Gene therapy has nonetheless moved beyond these failures into an era of breakthroughs, making substantial contributions to the treatment of human disease. Efficient delivery of therapeutic genes by viral vectors, especially adeno-associated viruses (AAV), together with optimization of delivery systems, has largely dispelled the negative assumptions that once surrounded viral gene therapy [3]. Among the first gene therapy products, Gendicine was approved in 2003 by the Chinese Food and Drug Administration. The medication is a recombinant adenoviral product expressing p53, used to treat head and neck carcinoma [4]. Globally, almost 2600 gene therapy products have entered clinical trials, of which a significant percentage have been approved [5]. Additionally, the FoCUS project at MIT projects that 39 gene therapies will gain regulatory approval by the end of 2022 from the 2017 pipeline of 932 development candidates, including already approved products; about 45% of these are expected to be used in oncology [6]. Gene therapy can be divided into two types: germline and somatic.
The distinction between the two is that in somatic gene therapy genetic material is delivered into particular target cells and the alteration is not passed to future generations, whereas in germline gene therapy the therapeutic or altered gene is handed down to future generations. Although gene therapy is still in its infancy as a clinically viable therapeutic modality, its ethical difficulties and conflicts must be addressed in order to avoid unethical research and health practices. The purpose of this article is to highlight the ethical difficulties and debates that have arisen from the practice and advancement of gene therapy. ## 2. The Approach of Gene Therapy Gene therapy transfers therapeutic genes by two routes: in vivo and ex vivo. In vivo gene therapy introduces the gene of interest directly into a patient's tissue via plasmid, nonviral, or viral vectors. In ex vivo gene therapy, cells isolated from the patient are genetically altered outside the body and then reimplanted in the same patient, or the desired proteins expressed by the engineered cells are infused into the patient to introduce potentially therapeutic changes. ### 2.1. Genome Editing Technologies Genome editing techniques are among the most challenging yet efficient tools for gene therapy [7]. Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9), transcription activator-like effector nucleases (TALENs), and zinc finger nucleases (ZFNs) are the most widely used genome editing tools. In gene therapy, genome editing follows an in vivo or ex vivo approach with greatly increased specificity and efficiency, achieved by delivering the editing machinery stably into cells to edit genes and to make other highly targeted genomic modifications [8].
CRISPR technology offers great promise for treating a wide range of human genetic diseases. The CRISPR/Cas9 system is currently the leading genome editing technology, being efficient and precise for genetic modifications that include the insertion of therapeutic genes, the destruction of viral DNA, and the correction of harmful mutations [9]. Researchers have demonstrated successful proof-of-concept studies in germline and somatic gene therapy by genome editing. In 2014, Genovese and colleagues used a targeted genome editing strategy to correct the interleukin-2 receptor subunit gamma (IL2RG) gene, opening a new avenue for the treatment of SCID. Another study achieved CRISPR/Cas9-mediated correction of a chromosomal inversion in the factor VIII gene associated with hemophilia A [10]. Genome editing has thus become a powerful method in gene therapy; however, its attractive applications, especially in the germline, raise ethical, moral, and safety concerns. ### 2.2. Germline Genome Editing Germline gene editing (GGE) has been used both as a research tool and as a therapeutic intervention. The technique has been used to modify genes in yeast, plants, rodents, pigs, and primates [11]. In a recent study, gene editing was used to deactivate 62 retrovirus genes in a pig cell line, a crucial step towards creating pig organs suitable for transplantation [12, 13]. In October 2015, researchers edited a gene related to muscle growth in a beagle to double its normal muscle mass [14]. Germline gene editing has the potential to eliminate disease phenotypes from embryos, and supporters claim that it could serve as a means of disease prevention in humans. Despite these broad implications, public debate has focused on the ethics of human germline gene editing [15].
In November 2018, He Jiankui, a genome editing researcher in China, announced the first use of the CRISPR/Cas9 technique to disable, in human embryos that were subsequently implanted, the CCR5 gene that mediates HIV entry into target cells [16]. DNA sequencing confirmed the editing of the CCR5 gene, suggesting the benefit that might be derived from germline editing. In an earlier study of the efficiency and potential off-target effects of CRISPR technology in embryo editing, Liang et al. cleaved the β-globin gene of tripronuclear (3PN) human zygotes using the CRISPR/Cas ribonucleoprotein; the results showed apparent off-target effects and a low efficiency of homology-directed repair (HDR), coupled with mosaicism [17]. Thus, although editing a human embryo could be a useful method for eliminating defective genes, and could even give HIV-positive couples the opportunity to have HIV-negative children, potential pitfalls, including off-target effects and mosaicism, limit its application in humans. The safety and efficacy of genome editing tools are the main concerns for clinical application; consequently, alternative genetic approaches that are safer and more efficient must be explored, rather than changing the DNA of an embryo [18]. ## 3. Ethical Challenges of Gene Therapy ### 3.1. Off-Target Mutation The most prominent objection to GGE, raised in particular by the National Institutes of Health (NIH), is the off-target effect. Off-target activity can produce unintended gene mutations and insertional mutagenesis [19]. Bioethicists and researchers note that genome editing is a new and unpredictable technology, and little is known about gene regulation and the mechanisms of embryonic development; the consequences of germline therapy can therefore be fatal [20].
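The idea of an off-target site, a genomic sequence that differs from the intended target by only a few bases, can be made concrete with a toy scan. This is a deliberately simplified sketch, not a real off-target prediction pipeline: the guide, sequence, and mismatch threshold below are invented for illustration, and genuine tools also weigh mismatch position, the PAM sequence, and chromatin context.

```python
# Toy off-target scan: report every window of a DNA sequence that matches
# a guide sequence with at most a given number of mismatches.

def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def find_sites(genome: str, guide: str, max_mismatch: int):
    """Yield (position, site, mismatches) for each near-matching window."""
    k = len(guide)
    for i in range(len(genome) - k + 1):
        window = genome[i:i + k]
        mm = hamming(window, guide)
        if mm <= max_mismatch:
            yield i, window, mm

# Invented 10-nt guide and short "genome" for illustration.
guide = "GACGTTACCA"
genome = "TTGACGTTACCAGGGACGTAACCATT"
hits = list(find_sites(genome, guide, max_mismatch=2))
# hits contains the intended site (exact match at position 2) plus a
# 1-mismatch site at position 14 — the kind of near-match where
# unintended cleavage could occur.
```

The point of the sketch is only that sequence similarity alone already yields candidate off-target sites; ranking their actual cleavage risk is the hard, still-imperfect part.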
Although CRISPR/Cas has proved an efficient tool for clinical somatic use, it has not reached the stage where it can be used in human genome editing for clinical reproductive purposes, and its long-term effects cannot be overlooked [21, 22]. Genome editing performed on human embryos carries a high risk of causing pathologic conditions and disabilities that can permanently affect the patient and their offspring. Although the specificity of Cas9 targeting is tightly controlled, off-target cleavage can still occur at similar DNA sequences, as previous studies have demonstrated [17, 23, 24]. Moreover, integrating viral vectors, including retroviruses, lentiviruses, and even adeno-associated viruses, can insert a gene of interest into a nontarget region of the host genome, which can result in insertional mutagenesis. A study in an animal model showed that integration of AAV into chromosome 19 could have genotoxic effects, leading to neoplastic transformations prone to tumor development [25]. Off-target integration has also been observed with lentiviral vector (LV) systems, one of the main delivery vehicles owing to their broad tissue tropism and long-term transgene expression [26]. However, refined strategies have been adopted to improve and optimize LV systems for effective and accurate gene delivery [27]. ### 3.2. Genetic Mosaicism In CRISPR germline gene therapy, the CRISPR/Cas vector is introduced immediately after fertilization so that every cell arising from subsequent cleavage divisions is genetically modified. However, the vector can persist and continue to be transcribed, producing Cas protein in cells that have already been edited and potentially triggering further rounds of cleavage and repair, which leads to mosaicism [28, 29].
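One practical consequence of mosaicism is a sampling limit on embryo screening: a small biopsy can simply miss the mosaic cells. The short probability sketch below makes this concrete; it is an illustrative simplification that assumes cells are sampled independently, and the mosaic fraction `p` and biopsy size `k` are invented numbers, not data from the cited studies.

```python
# If a fraction p of an embryo's cells carry a variant (mosaic) edit,
# a biopsy of k independently sampled cells contains no mosaic cell with
# probability (1 - p)^k. A small embryo would strictly call for
# hypergeometric (without-replacement) sampling; the binomial
# approximation is enough to show the trend.

def miss_probability(p: float, k: int) -> float:
    """Probability that a k-cell biopsy misses every mosaic cell."""
    return (1.0 - p) ** k

# Even a sizeable mosaic fraction can evade a small biopsy: with 20%
# mosaic cells and a 5-cell biopsy, the edit goes undetected roughly a
# third of the time (0.8 ** 5 = 0.32768).
example = miss_probability(p=0.20, k=5)
```

This is why a clean biopsy result cannot, by itself, establish that an edited embryo is free of mosaicism.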
Some cells may therefore acquire edits different from those of other cells, producing differences in gene copy number that can cause skin, brain, and heart disorders and impair embryo maturation. In one study, high levels of mosaicism were observed after germline editing of a model bovine embryo with the Cas9 system [30], suggesting that the same could occur in human embryos if the practice were left unregulated. Furthermore, testing an edited embryo for mosaic mutations may be ineffective, because the small number of cells sampled for testing may not include a mosaic mutant cell [31]. In the summer of 2019, the potential for mosaicism arising from clinical germline editing was discussed by the US National Academy of Medicine, the US National Academy of Sciences, and the UK's Royal Society [32]. The absence of clear evidence that mosaic mutation does not occur across the cell and tissue types of an early-stage edited human embryo, together with the inability of current technology to verify that a given edit is correct and free of mosaicism, could make it difficult for the public to support the application. Therefore, to ensure that germline editing is safe, all of these issues and controversies should be addressed. ### 3.3. Informed Consent Since the first death recorded in a gene therapy clinical trial in September 1999, informed decision-making about trial participation has attracted considerable concern. Participants in gene therapy clinical trials should be educated extensively about the potential risks and benefits of treatment, so that they have enough information to decide freely, and without coercion, whether to participate [33].
A study by the National Human Genome Research Institute (NHGRI), based on a survey of patients with sickle cell disease, underscored the need for and importance of informed consent in CRISPR somatic genome editing [34]. Although gene therapies promise to transform the treatment of many incurable diseases, the perceived benefits of the technology should not overshadow the difficulty patients may face in grasping its long-term hazards. Whereas somatic gene therapy can meet the requirements of informed consent, germline embryo editing poses a harder regulatory question: is the consent of future generations required and, if so, who should express it, given that embryos cannot consent to germline intervention [35]? Moreover, the extent of prospective parents' and practitioners' authority over the embryo raises ethical debate: should parents be the sole autonomous decision-makers for their unborn children, or would this usurp the interests of future generations who cannot consent at the time of the decision [36]? With so many unknowns, it is uncertain what information would be required, or even available, to properly inform prospective parents about the dangers, including those to future generations [37], and this poses a significant challenge to obtaining informed consent [38]. As more gene treatments for incurable hereditary disorders enter the clinic, an ethical discussion should begin so that these issues can be debated in a clear, fair, and balanced manner, rather than leaving any single profession to decide where the ethical limits should be drawn [39]. Any research that may someday prove to be a breakthrough should fully meet the ethical standards of informed consent [40]. ### 3.4. Enhancement and Eugenics Genetic enhancement is another legitimate concern surrounding the application of gene therapy.
Enhancement gene therapy means manipulating genes to improve an individual's characteristics according to that person's interests [41]; genetic therapy, by contrast, involves altering faulty genes to prevent or cure disease [42]. A classic example of enhancement is the injection of recombinant human growth hormone (rhGH) into children of short stature to increase their growth rate and final height [43]; injecting rhGH into children of normal height simply to make them taller, however, raises ethical problems. Similarly, some athletes turn to recombinant human erythropoietin (EPO) for performance enhancement. EPO stimulates red blood cell production and is used to treat anemia, for example in patients undergoing kidney dialysis; yet athletes with no medical condition seek EPO therapy to improve performance in competitive events where muscles demand large amounts of oxygen [44, 45]. Because some enhancement practices are considered morally objectionable as departures from the natural, the distinction between enhancement and therapy, which is often contextual, must be clearly understood. An enhancement application may be therapeutic and vice versa: increasing the height of short persons whose condition results from growth hormone deficiency, or restoring the skin color of patients suffering from vitiligo, are therapeutic enhancements. This suggests that genetic therapy and enhancement overlap [46]. Moreover, enhancement can potentially lead to eugenics. CRISPR/Cas9 offers the prospect of manipulating the germline to select human traits such as beauty, character, body build, and intelligence, raising the possibility of engineering "improved" individuals and, by extension, the human race [47]. In 2015, the UNESCO International Bioethics Committee commented on the eugenic dangers of germline procedures.
The committee suggested that the incorporation of gene editing techniques into gene therapy may possibly change the therapeutic application to racial improvement. Hence, the equal dignity of all human beings may be altered and eventually renew eugenics [48]. To control the use of technology, an intervention aimed at altering the human genome may be performed only for preventive, diagnostic, or therapeutic purposes, and any attempt to achieve this goal should be banned [49]. Furthermore, the extent of human condition to which gene therapy is applicable should be clearly defined and properly regulated to make people aware of diseases and condition of disease that require experimental treatments. This may address concerns about equal accessibility while minimizing nontherapeutic traits enhancement. Scientific researchers should clearly state the goal of any applied or basic research involving CRISPR/Cas editing; either the research is to provide a therapeutic solution, to generate preliminary data for the development of human genome editing applications, or to just improve the expression of certain traits for nontherapeutic purposes. These distinctions are necessary in the sense that even if one opposes human enhancement therapy, there are important applications of CRISPR/Cas editing that do not serve that purpose. Nonetheless, it is crucial to emphasize that distinguishing eugenics from treatment might be difficult. For example, it is often discussed whether enhancing the immune system through gene and immunotherapeutic approaches is eugenics or not [50]. As a result, a case-by-case analysis is required to resolve numerous concerns. In fact, eugenics is rooted in a social construct which justifies discrimination and injustice against those who are genetically unfit [51]. Therefore, it is worthwhile to clarify that gene therapy, when placed in the right context, has the potential to eliminate birth abnormalities and terminal diseases. ## 3.1. 
Off-Target Mutation The most obvious ethical debate specifically from the National Institute of Health (NIH) against GGE is the off-target effect. Off-target gene mutation could potentially result in insertional mutagenesis and gene mutation [19]. Bioethicists and researchers suggest that genome editing is new and unpredictable technology, and little is known about gene regulation and mechanisms of embryonic development; therefore, the consequences of germline therapy can be fatal [20]. Despite the fact that CRISPR/Cas proves to be an efficient tool for clinical somatic use, it has not reached the stage to be utilized in human genome editing for clinical reproductive purposes. Therefore, the apparent long-term effects cannot be overlooked [21, 22]. Genome editing performed on human embryos has a high risk of causing pathologic diseases and disabilities that can permanently affect the patient and the offspring. Although the specificity of Cas9 targeting is tightly controlled, potential off-target cleavage activity could still occur in DNA sequences and has been demonstrated in previous studies [17, 23, 24]. Nevertheless, integrating viral vectors including retrovirus, lentivirus, and even adeno-associated viruses can carry a gene of interest into a nontarget region of the host genome which can likely result in insertional mutagenesis. A study in an animal model shows that the integration of AAV into chromosome 19 could possibly result in genotoxic effects, leading to neoplastic transformations that are prone to tumor development [25]. In addition, off-target integration has been observed in lentiviral vector systems (LV), one of the main delivery vehicles due to its high tissue tropism and long-term expression of the transgene [26]. However, refined strategies have been adapted to improve and optimize LV systems for effective and accurate gene delivery [27]. ## 3.2. 
Genetic Mosaicism In CRISPR germline gene therapy, the CRISPR/Cas vector is inserted immediately after fertilization so that each successive cell resulting from cleavage is genetically modified. However, the vector can persist and transcribe, making it possible to further introduce the Cas protein into parts of already engineered cells and potentially initiate another cleavage, leading to mosaicism [28, 29]. Some cells may eventually acquire edits that are different from those of other cells, leading to differences in gene copy number, causing skin, brain, and heart disorders, and impairing embryo maturation. In a study, high levels of mosaicism were observed after germline editing of a model bovine embryo using the Cas9 system [30]. This finding confirms the possibility of its occurrence in human embryos if left unregulated. Furthermore, the technological approach to testing mosaic mutations in an edited embryo may be ineffective, as the small number of cells selected for testing may not include a mosaic mutant cell [31]. In the summer of 2019, the potential effect of mosaicism emerging from the clinical application of germline editing was discussed by the US National Academy of Medicine, the US National Academy of Sciences, and the Royal Society of medicine [32]. The lack of clear evidence from experts that mosaic mutation has not occurred in a range of cell and tissue types of early-stage human embryo editing, as well as the inability of the technology to validate that a particular edit is correct and devoid of mosaic mutation could make it difficult for the public to support the application. Therefore, to ensure that germline editing is safe, all important issues and controversies should be addressed. ## 3.3. Informed Consent Following the first gene therapy death recorded in a clinical trial in September 1999, the informed decision about participating in a clinical trial has gained numerous concerns. 
It is advisable that participants undergoing gene therapy clinical trials must be extensively educated on the potential risks and benefits associated with treatment to provide them with enough information on which to decide to participate or not without coercion [33]. A study by the National Human Genome Research Institute (NHGRI) proposed the need and importance of informed consent in CRISPR somatic genome editing after surveying patients with sickle cell disease [34]. Inasmuch as gene therapies suggest future transformation by treating many incurable diseases, the perceived benefits of the technology should not overshadow the difficulties that the patients may face in grasping long-term hazards. Although somatic gene therapy meets the need for informed consent, germline embryo editing poses a more difficult regulatory issue, that is, whether consent of a future generation is required and, if so, who should express consent because embryos cannot consent to germline intervention [35]. Moreover, the extent of authority over the embryo by the prospective parents and practitioners raises ethical debate, whether parents will be the only autonomous entity to make decisions for their unborn babies or will this be seen as usurping the interests of future generations who are unable to consent at the time of the decision [36]. Due to too many unknowns, it is uncertain what information would be required or available to properly inform prospective parents about dangers, including those for future generations [37]. This poses a significant challenge in obtaining informed consent [38]. As additional gene treatments for incurable hereditary disorders enter the consent clinic, a discussion on ethics should be started so that these issues can be discussed in a clear, fair, and balanced manner, rather than allowing any particular profession to make the final decision on where the ethical limits should be drawn [39]. 
It is undeniable that any research which may someday prove to be a breakthrough must fully meet the ethical standards of informed consent [40]. ## 3.4. Enhancement and Eugenics Genetic enhancement is another legitimate concern surrounding the application of gene therapy. Enhancement gene therapy means manipulating genes to improve the characteristics of an individual according to that person's interests [41]. Genetic therapy, on the other hand, involves altering faulty genes to prevent or cure diseases [42]. A classic example of enhancement therapy is the injection of recombinant human growth hormone (rhGH) into children of short stature to increase their growth rate and final height [43]. However, injecting rhGH into children of normal height in an attempt to make them taller may create ethical issues. Furthermore, some athletes rely on recombinant human erythropoietin (EPO) for improvement. EPO induces the production of red blood cells and is used to treat anemia, including in kidney dialysis patients. However, athletes without any health condition seek EPO therapy in an attempt to improve performance in competitive events where muscles require a lot of oxygen [44, 45]. Although some enhancement practices are considered morally unethical because they depart from the natural, the distinction between enhancement and therapy may be contextual and must be clearly understood. An enhancement application may be therapeutic and vice versa. Improving the height of short persons whose condition results from human growth hormone deficiency, as well as restoring the skin color of patients suffering from vitiligo, are examples of therapeutic enhancement. This suggests that genetic therapy and enhancement share common ground [46]. Moreover, enhancement can potentially lead to eugenics. 
CRISPR/Cas9 offers the prospect of manipulating the germline to select human traits such as beauty, character, body build, and intelligence, making it possible to create genetically enhanced individuals and, purportedly, to improve the human race [47]. In 2015, the UNESCO International Bioethics Committee commented on the eugenic dangers of germline procedures. The committee suggested that incorporating gene editing techniques into gene therapy could shift the therapeutic application toward racial improvement; the equal dignity of all human beings could thereby be undermined, eventually renewing eugenics [48]. To control the use of the technology, interventions aimed at altering the human genome should be performed only for preventive, diagnostic, or therapeutic purposes, and any attempt to go beyond these purposes should be banned [49]. Furthermore, the range of human conditions to which gene therapy is applicable should be clearly defined and properly regulated, so that people know which diseases and disease states qualify for experimental treatment. This may address concerns about equal accessibility while minimizing nontherapeutic trait enhancement. Scientific researchers should clearly state the goal of any applied or basic research involving CRISPR/Cas editing: whether the research is to provide a therapeutic solution, to generate preliminary data for the development of human genome editing applications, or merely to improve the expression of certain traits for nontherapeutic purposes. These distinctions are necessary because, even if one opposes human enhancement therapy, there are important applications of CRISPR/Cas editing that do not serve that purpose. Nonetheless, it is crucial to emphasize that distinguishing eugenics from treatment can be difficult. For example, it is often debated whether enhancing the immune system through gene and immunotherapeutic approaches constitutes eugenics [50]. 
As a result, a case-by-case analysis is required to resolve these concerns. In fact, eugenics is rooted in a social construct that justifies discrimination and injustice against those deemed genetically unfit [51]. Therefore, it is worth clarifying that gene therapy, when placed in the right context, has the potential to eliminate birth abnormalities and terminal diseases. ## 4. Conclusion Gene therapy has made incredible strides since its first human trial and holds great promise for medicine and health care. Despite the tragedies of early clinical trials and the optimism surrounding this emerging field, many therapeutic products have been approved worldwide and many more are still being tested. Of the two gene therapy approaches, germline gene therapy has raised the more controversial arguments, including off-target effects, mosaic mutation, informed consent, and eugenics. Although bioethical concerns may sound morally and socially legitimate to proponents, the public, and even scientists, they are not conclusive enough to halt the beneficial applications of gene therapy. However, to keep public debate from hampering the advancement of gene therapy, system optimization, detailed safety protocols, and critical regulatory measures must be put in place to help achieve the therapeutic goals of this technology. --- *Source: 1015996-2022-08-24.xml*
2022
# Design of a Novel Ultrawide Stopband Lowpass Filter Using a DMS-DGS Technique for Radar Applications **Authors:** Ahmed Boutejdar; Ahmed A. Ibrahim; Edmund P. Burte **Journal:** International Journal of Microwave Science and Technology (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/101602 --- ## Abstract A novel wide stopband (WSB) low pass filter based on a combination of a defected ground structure (DGS), a defected microstrip structure (DMS), and compensated microstrip capacitors is proposed. Its excellent stopband characteristics are verified through simulation and measurements. In addition to a sharp cutoff, the structure offers simple design and fabrication, a very low pass band insertion loss of 0.3 dB, and a wide rejection bandwidth with overall 20 dB attenuation from 1.5 GHz up to 8.3 GHz. The compact low pass structure occupies an area of 0.40λg × 0.24λg, where λg = 148 mm is the guided wavelength at the cutoff frequency of 1.1 GHz. Comparison between measured and simulated results confirms the validity of the proposed method. Such filter topologies are used in many areas of communications systems and microwave technology because of their benefits, including small losses, a wide reject band, and high compactness. --- ## Body ## 1. Introduction With the rapid progress of modern communications systems, design goals such as compact size, low cost, good quality factor, and high-performance components are strongly emphasized. To achieve these targets, many filtering structures, such as open-circuited stubs, hi-low impedance lines, parallel-coupled, and end-coupled filters, have been investigated. Nevertheless, these methods fail to deliver fully satisfactory results. To approach the desired performance, a DGS-DMS filter can be an effective solution. 
Due to their improved performance characteristics, many filter techniques and methodologies have been proposed and successfully realized. Defected ground structures (DGSs), with and without periodic arrays, are realized by etching a pattern in the backside metallic ground plane to obtain a stopband effect [1–13]. A DGS often consists of two large defected areas and a narrow connecting slot channel, which correspond to its equivalent L-C elements [14]. A DGS with periodic or nonperiodic topology produces a reject band in some frequency range due to the slow-wave effect, which results from the increased effective capacitance and inductance of the transmission line. In general, a DMS-unit [15–18] is used as a complementary element to the DGS-unit to achieve the required filter response. Compared to a DGS, a DMS is etched in the microstrip line itself and exhibits the same frequency behavior. Additionally, its design is simpler than that of the DGS, and it is more easily integrated with other microwave circuits. Moreover, it has a reduced effective circuit size compared to the DGS. In this paper, a new compact microstrip low pass filter using coupled DMS and DGS resonators and compensated capacitors is reported. The compensated capacitors are added on the top layer in order to sharpen the transition region and to regenerate transmission zeros, thereby obtaining a large reject band. Dimensions of the microstrip capacitors were computed from the desired equivalent circuit using the Richards-Kuroda transformation and TX-Line software [19]; afterwards they were optimized with the AWR EM simulator [20]. The measured results agree well with the simulated results. The DGS-DMS technique (see Figure 1) can also be applied in microwave couplers, antennas, and MRI technology.Figure 1 Three-dimensional view of the DMS-unit and DGS cell. ## 2. 
Characteristics and Modeling of DMS Resonators The top layer of Figure 1 shows the proposed DMS cell, which is composed of wide and narrow etched sections in the feed line on the top layer. The ends of this resonator are connected through microstrip lines to SMA connectors. The widths of the microstrip lines at port 1 and port 2 are designed to match the characteristic impedance of 50 Ω. The etched surface provides the capacitance, while the arms correspond to the inductance. The DMS cell acts as a band stop element with a resonance frequency of 4.8 GHz and an insertion loss of −0.5 dB, as shown in Figure 4. The structure has been designed on an RO4003 substrate with a relative dielectric constant ε_r = 3.38, a thickness h = 0.813 mm, and a loss tangent of 0.0027. The equivalent circuit of the DMS cell is a parallel LC resonator, as shown in Figure 2.Figure 2 Equivalent circuit of the DMS/DGS-unit.The values R, L, and C of the circuit parameters can be computed by matching the response to a one-pole Butterworth-type low pass response [7]; radiation effects are neglected. The reactance of the parallel LC resonator can be expressed as

(1) $X_{LC} = \frac{1}{\omega_0 C}\left(\frac{\omega_0}{\omega} - \frac{\omega}{\omega_0}\right)^{-1}.$

The series reactance of the one-pole Butterworth low pass prototype is

(2) $X_L = \omega L = \frac{\omega\, g_1 Z_0}{\omega_g},$

where ω_0, ω_g, Z_0, and g_1 are the resonant frequency, the cutoff frequency, the scaled characteristic impedance, and the prototype value of the Butterworth-type LPF, respectively. By equating the two reactances at the cutoff frequency, the parallel capacitance, inductance, and resistance of the equivalent DMS circuit are obtained as

(3) $C = \frac{\omega_c}{2 Z_0 \left(\omega_0^2 - \omega_c^2\right)}, \qquad L = \frac{1}{\omega_0^2 C}, \qquad R = \frac{2 Z_0}{\sqrt{\dfrac{1}{\left|S_{11}(\omega_0)\right|^2} - \left(2 Z_0\left(\omega_0 C - \dfrac{1}{\omega_0 L}\right)\right)^2 - 1}}.$

The computed values of C, L, and R are 0.64 pF, 1.73 nH, and 7.42 kΩ, respectively. 
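The parameter extraction of (3) can be sketched numerically. The snippet below is a minimal sketch, assuming the frequencies reported for the DMS cell (f_c = 3.37 GHz, f_0 = 4.83 GHz), Z_0 = 50 Ω, and g_1 = 2 for the one-pole Butterworth prototype; it computes C and L and checks that the resulting tank resonates at f_0:

```python
import math

def extract_lc(f_c, f_0, z0=50.0, g1=2.0):
    """Extract the parallel L and C of the DMS equivalent circuit
    from the 3 dB cutoff f_c and the pole (resonance) f_0, per eq. (3)."""
    w_c, w_0 = 2 * math.pi * f_c, 2 * math.pi * f_0
    c = w_c / (g1 * z0 * (w_0**2 - w_c**2))  # farads
    l = 1.0 / (w_0**2 * c)                   # henries (enforces resonance at w_0)
    return l, c

l, c = extract_lc(3.37e9, 4.83e9)
f_res = 1.0 / (2 * math.pi * math.sqrt(l * c))
print(f"C = {c*1e12:.3f} pF, L = {l*1e9:.3f} nH, f_res = {f_res/1e9:.2f} GHz")
```

With these inputs the closed-form extraction gives values on the order of a few tenths of a pF and a couple of nH; the element values finally quoted in the text were presumably refined by the circuit-simulator optimization described later, so an exact match should not be expected.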
The simulation results of the investigated EM structure and its corresponding circuit are illustrated in Figure 3, which shows identical values of the 3 dB cutoff frequency (f_c) and pole frequency (f_p) at 3.37 GHz and 4.83 GHz, respectively. The pass band shows an insertion loss of 0.5 dB. All dimensions of the DMS-unit are given in Table 1. The proposed DMS resonator is shown in Figure 4.

Table 1 Dimensions of the defected microstrip structure (DMS) element.

| Dimension | Value (mm) |
| --- | --- |
| h | 0.50 |
| p | 1.88 |
| g | 0.40 |
| k | 0.60 |
| l | 9.50 |

Figure 3 S-parameters of the DMS-element and its equivalent circuit.Figure 4 Layout of the DMS-element. ## 3. Design of a Band Stop Filter Using Cascaded DMS A new BSF was designed using two cascaded DMS resonators, rotated 180 degrees with respect to each other and directly connected to the ports through 50 Ω microstrip lines. Figure 5 shows the 3D view of the proposed BSF. The geometry of each DMS-unit matches the dimensions in Table 1, while the microstrip distance (r) between the two DMS resonators is 0.5 mm. The 50 Ω feed line has a line width of w. The band stop structure was simulated and optimized using CST Microwave Studio [21] and AWR Microwave Office. The dimensions were calculated using filter theory, TX-Line software, and an EM simulator. Figure 6 shows the equivalent circuit of the BSF in the AWR circuit simulator. The extracted circuit parameters, computed from the EM simulations and an empirical method, are L = 6.2 nH, C = 0.77 pF, R = 0.51 kΩ, C_p = 0.96 pF, and C_0 = 3.32 pF.Figure 5 Layout of the cascaded DMS-band stop filter.Figure 6 Equivalent circuit of the DMS-band stop filter.The simulated results in Figures 7(a) and 7(b) show that the proposed filter has a 3 dB cutoff frequency at 2.7 GHz and a suppression level of 20 dB from 4.5 to 5.5 GHz; the insertion loss in the pass band is about 0.65 dB. 
Good agreement between the EM simulations and the circuit simulations is verified.Figure 7 Simulation results of the DMS-band stop filter. (a) EM simulation, (b) circuit simulation. (a) (b) ## 4. Band Stop to Low Pass Using Compensated Capacitors To demonstrate the effectiveness of the compensated microstrip capacitor in transforming a band stop structure into a low pass one, parallel microstrip capacitors are added to the previous structure (Figure 5) and then designed and optimized as shown in Figure 8. The new DMS low pass filter is composed of three compensated parallel microstrip capacitors separated by two identical DMS resonators. All components are cascaded on the top layer and directly connected to the SMAs through the two 50 Ω feed lines of width 1.88 mm, as shown in Figure 8. The filter has been designed and simulated to improve the reject band and to minimize the pass band losses. The DMS low pass structure is designed for a cutoff frequency of 1.55 GHz and is simulated on a Rogers RO4003 substrate with a dielectric constant of 3.38 and a thickness of 0.813 mm. The total size of the filter is 59 × 35 mm². Simulations have been performed using the full-wave EM simulator CST Microwave Studio and AWR Microwave Office.Figure 8 3D view of the proposed DMS low pass filter.Figures 9 and 10 show the equivalent circuit and the S-parameters of the DMS low pass filter. 
From the simulated response we can conclude that the equivalent circuit has a quasielliptic characteristic, since elliptic-function filters are known for their transmission zeros near the pass band and hence their sharp transition response, as shown in Figure 10.Figure 9 Equivalent circuit of the DMS low pass filter.Figure 10 Comparison of simulated results of the DMS low pass filter.The values of R, L, and C are obtained as 4 kΩ, 1.3 nH, and 0.6 pF, respectively, after an optimization step, while the values of the three parallel open-circuit capacitors are calculated using TX-Line software or exactly from the following equation:

(4) $C = \frac{1}{Z_{0C}\,\omega_c}\sin\!\left(\beta_C l_C\right) + \frac{2}{Z_{0L}\,\omega_c}\tan\!\left(\beta_L l_L\right),$

where Z_{0C}, β_C, and l_C are the characteristic impedance, phase constant, and physical length of the compensated capacitor (low-impedance line), and Z_{0L}, β_L, and l_L are those of the series reactance (high-impedance line). The series reactance contribution of the low-impedance line is negligible; thus the length of the stub capacitors is approximated as

(5) $l_C = \frac{\lambda_{gC}}{2\pi}\sin^{-1}\!\left(\omega_c C Z_{0C}\right), \qquad W_C \approx f\!\left(Z_{0C}, \theta, t, h, \varepsilon_r\right),$

where W_C, λ_{gC}, θ, t, h, and ε_r are the width of the open-circuit stub capacitor, the guided wavelength, the electrical length, the metal thickness, the substrate thickness, and the relative dielectric constant, respectively. As shown in Figure 10, two reflection zeros at 1 GHz and 1.5 GHz are generated; thus the insertion loss is less than 0.2 dB from DC up to 1.55 GHz. The return loss in the pass band is below −12 dB. The stopband rejection is better than 20 dB from 2.15 GHz up to 4.25 GHz. As illustrated in Figure 10, an undesired peak appears around 4.4 GHz. To suppress this undesired harmonic, another technique based on DGS will be used. 
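The stub-length approximation in (5) is easy to evaluate. The snippet below is a minimal sketch with illustrative assumed values: a 20 Ω low-impedance stub, the 3.32 pF shunt capacitance extracted earlier, the 1.55 GHz cutoff, and an assumed guided wavelength of 120 mm; the paper does not state these pairings explicitly, so the numbers serve only to show the calculation.

```python
import math

def stub_length(f_c, c_shunt, z0c, lam_gc):
    """Open-stub capacitor length from eq. (5):
    l_C = (lambda_gC / (2*pi)) * asin(w_c * C * Z_0C)."""
    w_c = 2 * math.pi * f_c
    x = w_c * c_shunt * z0c
    if not 0 < x <= 1:
        raise ValueError("w_c * C * Z_0C must lie in (0, 1] for asin")
    return lam_gc / (2 * math.pi) * math.asin(x)

# Illustrative (assumed) inputs: 1.55 GHz cutoff, 3.32 pF shunt capacitance,
# 20-ohm low-impedance stub, 120 mm guided wavelength.
l_c = stub_length(1.55e9, 3.32e-12, 20.0, 0.120)
print(f"l_C ≈ {l_c*1e3:.1f} mm")
```

The guard on the argument of asin reflects the physical limit of the approximation: a stub shorter than a quarter wavelength can only realize ω_c C Z_{0C} ≤ 1.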
The dimensions of the proposed structure are given in Table 2.

Table 2 Dimensions of the DMS low pass filter structure.

| Dimension | Value (mm) |
| --- | --- |
| a | 9.05 |
| b | 10 |
| c | 3 |
| e | 2 |
| i | 20 |

## 5. Improvement of the Low Pass Filter To suppress the undesired peak at 4.4 GHz of the DMS low pass filter, a pair of DGS-units has been used. The idea is to choose DGS resonators whose resonance frequency lies around the unwanted frequency of 4.4 GHz, thus realizing a structure with a wide reject band. As shown in Figure 11, a multilayer structure is used to improve the performance of the previous low pass filter. The proposed element is similar to the DMS-unit, with the difference that the new structure contains two additional DGS-units located between the microstrip capacitors, as shown in Figure 11.Figure 11 3D view of the proposed DMS-DGS low pass filter.The dimensions of the two DGS shapes etched in the ground plane are s = 0.6 mm, q = 6 mm, t = 10 mm, and z = 10 mm. The coupling distance (d = 26 mm) between the cascaded DGS resonators was obtained empirically. This geometrical approach, based on several stacked layers, improves the performance of the filter. The proposed DGS-DMS low pass filter has been simulated on a Rogers RO4003 substrate with a relative dielectric constant ε_r of 3.38 and a thickness h of 0.813 mm. As depicted in Figure 12, the proposed LPF behaves well in both the pass band and the stopband. The filter has a −3 dB cutoff frequency at 1.1 GHz, an insertion loss of 0.1 dB, and a return loss below −20 dB in the whole pass band.Figure 12 Simulation results of the proposed DMS-DGS low pass filter.In addition, an ultrawide suppression level of approximately 20 dB in the stopband from 1.5 GHz to more than 8.5 GHz is achieved. Simulations were performed using AWR Microwave Office and CST Microwave Studio. ## 6. 
Field Distribution along the Filter Structure The investigation of the EM field distribution aims to show the frequency behaviour of the proposed filter and to validate the intuitive equivalent circuit. Figure 13(a) shows the field distribution in the pass band at 0.5 GHz. The magnetic field is concentrated along the DMS resonators and the 50 Ω lines, while a negligible electric field appears between the two poles of the DMS structure. Power transmission between the ports is therefore dominantly magnetic, and the arm of the DMS acts as an inductor. Figure 13(b) shows the stopband behaviour at the resonant frequency of 4 GHz. Here the electric and magnetic fields show similar distribution densities, and the electric field is concentrated between the extremities of the first slot, which represents the capacitance. Based on this EM field investigation, the parallel LC circuit can serve as an approximate model of the DMS-unit.Figure 13 Field distribution: (a) magnetic field at f = 0.5 GHz, (b) magnetic field at f = 4 GHz. (a) (b) ## 7. Fabrication and Measurement Figure 14 shows photographs of the fabricated LPF. The simulations of the proposed DMS-DGS-UWRB-LPF were carried out using CST Microwave Studio and AWR Microwave Office. The simulation results show that the designed filter has a sharp transition, small losses in the pass band, and a wide reject band, as shown in Figure 15. To verify the validity of the proposed DMS-DGS combination, the filter was fabricated and measured using an HP8722D network analyzer. The LPF was fabricated on a substrate with a relative dielectric constant ε_r of 3.38 and a thickness h of 0.813 mm. The comparison between measured and simulated results is depicted in Figure 15. In the pass band, the measured insertion and return losses are less than −0.3 dB and −17 dB, respectively. 
The stopband rejection is better than 20 dB from 1.3 GHz up to 8.9 GHz. The compact low pass structure occupies an area of 0.40λ_g × 0.24λ_g, where λ_g = 148 mm is the guided wavelength at the cutoff frequency.Figure 14 Photograph of the fabricated DMS-DGS LPF: (a) top layer, (b) bottom layer. (a) (b)Figure 15 Comparison of simulation and measurement results of the proposed DMS-DGS LPF.Very good agreement between simulations and measurements has been obtained. The remaining discrepancy, including the additional transmission-line loss, can be attributed to mismatching effects and manufacturing tolerances. ## 8. Conclusion In this work, a novel DMS-DGS wide stopband low pass filter has been introduced and investigated. The filter structure, based on the stopband behaviours of the DMS and DGS cells, strongly suppresses undesired harmonic responses. It is demonstrated that a low insertion loss (0.3 dB), a deep return loss (greater than 17 dB), a wide rejection bandwidth with overall 20 dB attenuation from 1.3 GHz up to 8.9 GHz, and a bandwidth with overall 40 dB attenuation from 1.9 GHz up to 7.7 GHz have been achieved with this type of filter. The simulated results obtained by full-wave EM analysis are in excellent agreement with the measurements. The newly proposed DMS-DGS-LPF and the related design method are compatible with monolithic microwave integrated circuit (MMIC) and multilayer technology and can be used in a wide range of microwave and millimeter-wave applications. --- *Source: 101602-2015-10-15.xml*
--- ## Abstract A novel wide stopband (WSB) low pass filter based on combination of defected ground structure (DGS), defected microstrip structure (DMS), and compensated microstrip capacitors is proposed. Their excellent defected characteristics are verified through simulation and measurements. Additionally to a sharp cutoff, the structure exhibits simple design and fabrication, very low insertion loss in the pass band of 0.3 dB and it achieves a wide rejection bandwidth with overall 20 dB attenuation from 1.5 GHz up to 8.3 GHz. The compact low pass structure occupies an area of (0.40λg  × 0.24λg) where λg = 148 mm is the waveguide length at the cut-off frequency 1.1 GHz. Comparison between measured and simulated results confirms the validity of the proposed method. Such filter topologies are utilized in many areas of communications systems and microwave technology because of their several benefits such as small losses, wide reject band, and high compactness. --- ## Body ## 1. Introduction With the rapid progress in modern communications systems, design goals such as compact size, low cost, good quality factor, and high performance components are highly considered. To achieve these targets, many filtering structures as open-circuited stubs, hi-low impedances, parallel coupled, and end coupled filters have been investigated. Nevertheless these methods keep the satisfactory results unattainable. In order to approach the desired results, a DGS-DMS filter could be an effective solution. Due to their improved performance characteristics, many filter techniques and methodologies have been proposed and successfully realized. Defected ground structures (DGSs) with and without periodic array have been realized by etching a pattern in the backside of the metallic ground plane to obtain the stopband effect [1–13]. DGS often consisted of two large defected areas and a narrow connecting slot channel, which corresponds to its equivalent L-C elements [14]. 
The DGS with periodic or nonperiodic topology leads to a reject band in some frequency range due to the slow wave effect, as a result of increasing the effective capacitance and inductance of the transmission line. In general DMS-unit [15–18] is used as a complementary element for the DGS-unit to achieve required filter response. DMS compared to DGS is etched on the microstrip line and exhibits same frequency behavior. Additionally, the design is simpler than the DGS and is more easily integrated with other microwave circuits. Moreover, it has an effective reduced circuit size compared to DGS.In this paper, a new compact microstrip low pass filter using coupled DMS, DGS resonators, and compensated capacitors is reported. The compensated capacitors are added on the top layer in order to get a sharp transmission domain and to regenerate transmission zeros to obtain a large reject band. Dimensions of the microstrip capacitors were computed according to the desired equivalent circuit, and using Richard’s-Kuroda transformation and TX-Line software [19]; afterwards they were optimized by AWR EM simulator [20]. The measured results agree well with simulated results. The DGS-DMS technique (see Figure 1) in this research can be applied in microwave coupler, antennas, and in MRI technology.Figure 1 Three-dimensional view of the DMS-unit and DGS cell. ## 2. Characteristics and Modeling of DMS Resonators Top layer of Figure1 shows the proposed DMS cell, which is composed of wide and narrow etched sections in the feed line placed on the top layer. The extremes of this resonator are connected through microstrip line with SMA connectors. The widths of the microstrip lines at port 1 and port 2 are designed to match the characteristic impedance of 50 Ω. The etched surface presents the capacitance, while the arms correspond to the inductance. The DMS cell acts as a band stop element with a resonance frequency of 4.8 GHz and an insertion loss of −0.5 dB as shown in Figure 4. 
The structure has been designed on RO4003 substrate with a relative dielectric constant ε r = 3.38 and thicknesses h = 0.813 mm and a loss tangent of 0.0027. The equivalent circuit of the DMS cell acts as a parallel LC resonator as shown in Figure 2.Figure 2 Equivalent circuit of the DMS/DGS-unit.The valuesR, L, and C of the circuit parameters can be computed using result that is matched to the one-pole Butterworth-type low pass response [7]. Furthermore, radiation effects are more or less neglected. The reactance values of DMS and filter first order can be expressed as(1) X L C = ω 0 C ω 0 ω - ω ω 0 - 1 .The series inductance (reactance) of one-pole Butterworth low pass filter can be derived as follows:(2) X L = ω L = ω g 1 Z 0 ω g ,where ω 0, ω g, Z 0, and g 1 are the resonant frequency, cutoff frequency, the scaled characteristic impedance, and prototype value of the Butterworth-type LPF, respectively. By matching the two previous reactance values, the parallel capacitance and the inductance of the equivalent DMS-circuit can be derived using the following equations:(3) C = ω c 2 Z 0 ω 0 2 - ω c 2 , L = 1 ω 0 2 C , R = 2 Z 0 1 / S 11 ω 0 2 - 2 Z 0 ω 0 C - 1 / ω 0 L 2 - 1 . The computed values of parameters C, L, and R are, respectively, 0.64 pF, 1.73 nH, and 7.42 kΩ. The simulation results of the investigated EM structure and its corresponding circuit are illustrated in Figure 3, which shows identical values of 3 dB cutoff frequency (f c) and pole frequency (f p) at 3.37 GHz and 4.83 GHz, respectively. The transmit band shows an insertion loss pass of 0.5 dB. All dimensions of DMS-unit are depicted in Table 1. The proposed DMS resonator is shown in Figure 4.Table 1 Dimensions of the defected microstrip structure- (DMS-) element. Dimensions of DMS-unit Values (mm) h 0.50 p 1.88 g 0.40 k 0.60 l 9.50Figure 3 S-parameters of the DMS-element and its equivalent circuit.Figure 4 Layout of the DMS-element. ## 3. 
Design of Band Stop Filter Using Cascaded DMS A new BSF was designed using two cascaded DMS resonators, which are positioned one to the other by 180 degrees and are directly connected with the ports through 50 Ω microstrip lines. Figure5 shows the 3D view of the proposed BSF. The geometry of each DMS-unit is equal to the dimensions indicated in Table 1, while the microstrip distance (r) between two DMS resonators is 0.5 mm. The 50 Ω feed line has a line width of w. The band stop structure is simulated and optimized by using Microwave Studio CST [21], Microwave Office AWR. The dimensions are calculated using filter theory, TX-Line software, and EM simulator. Figure 6 shows the designed equivalent circuit of the BSF employing circuit simulator AWR. The extracted circuit parameters are computed based on the EM simulations and empirical method and defined as follows: L = 6.2 nH, C = 0.77 pF, R = 0.51 kΩ, C p = 0.96 pH, and C 0 = 3.32 pF.Figure 5 Layout of the cascaded DMS-band stop filter.Figure 6 Equivalent circuit of the DMS-band stop filter.The simulated results depicted in Figures7(a) and 7(b) prove that the proposed filter has a 3 dB cutoff frequency at 2.7 GHz and a suppression level of 20 dB from 4.5 to 5.5 GHz; the insertion loss in the pass band is about 0.65 dB. Good agreement is verified between the EM simulations and the circuit simulations.Figure 7 Simulation results of the DMS-band stop filter. (a) EM simulation, (b) circuit simulation. (a) (b) ## 4. Band Stop to Low Pass Using Compensated Capacitors In order to demonstrate the effectiveness of the compensated microstrip capacitor in transforming a structure with band stop to low pass behaviors, the added parallel microstrip capacitors to the previous structure (Figure5) are designed and optimized as shown in Figure 8. A new DMS low pass filter is composed of three compensated parallel microstrip capacitors, which are separated through two identical DMS resonators. 
All components are cascaded on the top layer and directly connected with the SMAs through the two 50 Ω feed lines of width 1.88 mm as shown in Figure 8. The filter has been designed and simulated in order to improve the reject band and to minimize the pass band losses. The DMS low pass structure is designed for cutoff frequency at 1.55 GHz and is simulated on the Rogers RO4003 substrate with the dielectric constant of 3.38 and thickness of 0.813 mm. The total size of the filter is 59 × 35 mm2. Simulations have been performed using the full-wave EM Microwave Studio CST and Microwave Office AWR.Figure 8 3D view of the proposed DMS low pass filter.Figures9 and 10 represent the equivalent circuit and the S-parameters of the DMS low pass filter. According to the simulation response, we can conclude that the equivalent circuit has the characteristics of quasielliptic function, because the frequency response of the elliptic function filters is known with its generated transmission zeros in pass band and thus its high sharpness in transition response as shown in Figure 10.Figure 9 Equivalent circuit of DMS low pass filter.Figure 10 Comparison of simulated results of DMS low pass filter.The values ofR, L, and C are obtained as 4 kΩ, 1.3 nH, and 0.6 pF, respectively, after using an optimization technique, while the values of three parallel open-circuit capacitors are calculated using TX-Line software or exactly calculated using the following equation:(4) C = 1 Z 0 C ω c sin ⁡ β C l C + 2 Z 0 L ω c tan ⁡ β L l L ,where Z 0 C, β C, l C, β L, and l L are the characteristic impedance, the phase constant and the physical length of the compensated capacitor (low-impedance line), and the phase constant and the physical width of the series reactance (high-impedance line), respectively. 
Both series reactance values of the low-impedance line are negligible; thus the length of stub capacitors is approximated as the following:(5) l C = λ g C 2 π sin - 1 ⁡ ω c Z 0 L C , W C ≈ f Z 0 C , θ , t , h , ε r ,where W C, λ g C,  θ, t, h, and ε r are the width of the open-circuit stub capacitance, the guided wavelengths, the electrical length, the thickness of metal, thickness of the substrate, and the relative dielectric constant, respectively. As shown in Figure 10, two reflection zeros at 1 GHz and 1.5 GHz are generated; thus the insertion loss is less than 0.2 dB from DC up to 1.55 GHz. The return loss in the pass band is less than −12 dB. The stopband rejection is higher than −20 dB from 2.15 GHz up to 4.25 GHz. As illustrated in Figure 10, an undesired peak appeared around the frequency of 4.4 GHz. In order to suppress this undesired harmonic, another technique based on DGS will be used. The dimensions of the proposed structure are depicted in Table 2.Table 2 Dimensions of the DMS low pass filter structure. Dimensions of DMS-unit Values (mm) a 9.05 b 10 c 3 e 2 i 20 ## 5. Improvement of the Low Pass Filter In order to suppress the undesired peak at 4.4 GHz of the DMS low pass filter, a pair of DGS-units has been used. The idea is to choose DGS resonators having resonance frequency around the unwanted frequency 4.4 GHz, thus to realize structure with a wide reject band. As shown in Figure11, a multilayer structure is used to improve the performance of the previous low pass filter. The proposed element is similar to the DMS-unit with the difference that the new structure consists of additional two DGS-units, which are located between the microstrip capacitors as shown in Figure 11.Figure 11 3D view of the proposed DMS-DGS low pass filter.The dimensions of the two DGS shapes, which are etched in the ground, have been defined as follows:s = 0.6 mm, q = 6 mm, t = 10 mm, and z = 10 mm. 
The coupling distance (d = 26 mm) between the cascaded DGS resonators was obtained empirically. This proposed geometry, based on several stacked layers, improves the performance of the filter. The proposed DGS-DMS low pass filter has been simulated on a Rogers RO4003 substrate with a relative dielectric constant ε_r of 3.38 and a thickness h of 0.813 mm. As depicted in Figure 12, the proposed LPF behaves well in both the pass band and the stopband. The filter has a −3 dB cutoff frequency at 1.1 GHz, an insertion loss of 0.1 dB, and a return loss below −20 dB over the whole pass band.

Figure 12 Simulation results of the proposed DMS-DGS low pass filter.

In addition, an ultrawide stopband suppression of approximately −20 dB is achieved from 1.5 GHz to more than 8.5 GHz. Simulations were performed using Microwave Office AWR and CST Microwave Studio.

## 6. Field Distribution along the Filter Structure

The investigation of the EM field distribution serves to show the frequency behaviour of the proposed filter and to validate the intuitive equivalent circuit. Figure 13(a) shows the field distribution in the pass band at 0.5 GHz. The magnetic field is concentrated along the DMS resonators and on the 50 Ω lines, while only a negligible electric field appears between the two poles of the DMS structure. Power transmission between the two ports is therefore predominantly magnetic, and the arm of the DMS acts as an inductor. Figure 13(b) shows the stopband behaviour at a resonant frequency of 4 GHz, where the electric and magnetic fields show comparable distribution densities. The electric field is concentrated between the extremities of the first slot, which represents the capacitance.
Based on this EM field investigation, a parallel LC circuit is an appropriate model for the DMS-unit.

Figure 13 Field distribution: (a) magnetic field at f = 0.5 GHz, (b) magnetic field at f = 4 GHz. (a) (b)

## 7. Fabrication and Measurement

Figure 14 shows photographs of the fabricated LPF. The simulations of the proposed DMS-DGS-UWRB-LPF were carried out using CST Microwave Studio and AWR Microwave Office. The simulation results show that the designed filter has a high sharpness factor, small pass band losses, and a wide reject band, as shown in Figure 15. In order to verify the validity of the proposed DMS-DGS combination, the filter has been fabricated and measured using an HP8722D network analyzer. The LPF has been fabricated on a substrate with a relative dielectric constant ε_r of 3.38 and a thickness h of 0.813 mm. The comparison between measured and simulated results is depicted in Figure 15. In the pass band, the measured insertion loss is less than 0.3 dB and the return loss is below −17 dB. The stopband rejection is better than −20 dB from 1.3 GHz up to 8.9 GHz. The compact low pass structure occupies an area of 0.40 λ_g × 0.24 λ_g, where λ_g = 148 mm is the guided wavelength at the cutoff frequency.

Figure 14 Photograph of the fabricated DMS-DGS LPF: (a) top layer, (b) bottom layer. (a) (b)

Figure 15 Comparison of simulation and measurement results of proposed DMS-DGS LPF.

A very good agreement between simulations and measurements has been obtained. The remaining discrepancy can be attributed to mismatching effects and fabrication tolerance errors.

## 8. Conclusion

In this work, a novel DMS-DGS wide stopband low pass filter has been introduced and investigated.
The filter structure provides strong suppression of undesired harmonic responses, based on the stopband behaviour of the DMS and DGS cells. Low insertion loss (0.3 dB), deep return loss (greater than 17 dB), a wide rejection band with at least 20 dB attenuation from 1.3 GHz up to 8.9 GHz, and at least 40 dB attenuation from 1.9 GHz up to 7.7 GHz have been achieved with this type of filter. The full-wave EM simulation results were in excellent agreement with the measurements. The newly proposed DMS-DGS-LPF and the related design method are compatible with monolithic microwave integrated circuit (MMIC) and multilayer technology and can be used in a wide range of microwave and millimeter wave applications.

---

*Source: 101602-2015-10-15.xml*
# Clinical Efficacy of Tacrolimus Ointment + 3% Boric Acid Lotion Joint Chinese Angelica Decoction in Chronic Perianal Eczema

**Authors:** Weiwei Gao; Xueli Qiao; Jinxin Zhu; Xin Jin; Yuegang Wei

**Journal:** Computational and Mathematical Methods in Medicine (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1016108

---

## Abstract

Objective. To evaluate the clinical efficacy of tacrolimus ointment + 3% boric acid lotion joint Chinese angelica decoction in chronic perianal eczema. Methods. Patients with chronic perianal eczema admitted to hospital between June 2018 and June 2019 were retrospectively analyzed. Patients in the control group (n=38) underwent basic therapy with tacrolimus ointment + 3% boric acid lotion, whereas those in the observation group (n=38) were additionally given oral Chinese angelica decoction. Patients' baseline information before therapy and clinical symptoms after therapy were observed and compared, including pruritus ani score, anus drainage and damp score, skin lesion score, skin lesion area score, life quality index score, and serum IL-2, IL-4, and IgE levels. Overall efficacy in the two groups was also evaluated. Results. No significant differences were found in the baseline information between the observation and control groups before therapy. After therapy, the pruritus ani score (P=0.023), anus drainage and damp score (P=0.041), skin lesion score (P=0.025), and skin lesion area score (P=0.035) of patients in the observation group were remarkably lower than those in the control group, indicating significantly greater relief of clinical symptoms in the observation group. Compared with the control group, the life quality index score (P=0.020) and serum IgE level (P=0.013) of patients in the observation group were significantly lower, while the serum IL-4 level was significantly higher (P=0.003).
The therapy in the observation group achieved better clinical efficacy: overall efficacy in the observation group was markedly more favorable than in the control group. Conclusion. Compared with tacrolimus ointment + 3% boric acid lotion alone, patients with chronic perianal eczema displayed better clinical efficacy when additionally treated with Chinese angelica decoction.

---

## Body

## 1. Introduction

Perianal eczema is a skin disease of the perianal skin and mucosae that may spread to the perineal region and external genitalia [1]. Clinical symptoms of perianal eczema are pruritus, heat, and exudative lesions. Its three main types, irritant-toxic, atopic, and allergic contact dermatitis, may be caused by various colon diseases, skin diseases, allergic diseases, or pathogens [2–4]. To date, glucocorticoid drugs are given to treat perianal eczema patients and can achieve relatively good efficacy in the early stage. However, numerous investigations have suggested that patients tend to become dependent on these drugs and, after withdrawal, are prone to disease recurrence and adverse events [5, 6]. A more effective alternative is therefore urgently needed.

With the emergence and application of topical calcineurin inhibitors for perianal eczema treatment, their anti-inflammatory, immunoregulatory, and steroid-sparing properties have attracted much attention. The nonsteroidal anti-inflammatory drugs pimecrolimus and tacrolimus display favorable efficacy in treating perianal eczema [7–9]. Nonetheless, relevant investigations are still lacking. In addition, boric acid lotion can also be used for perianal eczema; Bai et al. [10] revealed its suppressive effect on bacteria and fungi. Currently, an extensively used treatment for perianal eczema is boric acid lotion plus tacrolimus ointment [6].

Traditional Chinese medicines (angelica sinensis and radix sophorae flavescentis) are beneficial in the treatment of eczema [11, 12].
Thus, we speculated that it would be meaningful to apply Chinese angelica decoction to the treatment of perianal eczema; joint treatment with Chinese and Western medicine may achieve unanticipated clinical benefits. Chinese angelica decoction originates from the Sixth Chapter of Yan's Prescription for Rescuing Lives (Jishengfangjuan VI), where it is indicated for retention of qi and blood, internal wind-heat, and symptoms such as scabies, swelling, itch, pus, or reddish measles. It is composed of 50 g each of Chinese angelica (with residual stems, leaf stems, and rhizomes removed), white peony, Ligusticum wallichii, Rehmannia glutinosa (washed), Tribulus terrestris (fried; shoots removed), Saposhnikovia divaricata (with residual stems, leaf stems, and rhizomes removed), and Schizonepeta tenuifolia Briq, and 25 g each of Fallopia multiflora (Thunb.) Harald, Astragalus mongholicus Bunge (with residual stems, leaf stems, and rhizomes removed), and Glycyrrhiza uralensis (baked). Its major efficacies are replenishing qi and blood and treating skin diseases whose overall pathogeneses are blood dryness and wind-heat, including scabies, urticaria, skin pruritus, chapped hands and feet, withered appearance, and stubborn ringworm [13]. Chinese angelica decoction is a classical prescription for skin inflammation. A preceding investigation found a favorable benefit of Chinese angelica decoction in treating chronic perianal eczema, which is worth wider introduction [14]. However, the clinical efficacy of Chinese angelica decoction joint tacrolimus ointment + 3% boric acid lotion has rarely been investigated.

This investigation systematically studied tacrolimus ointment + 3% boric acid lotion joint Chinese angelica decoction in chronic perianal eczema. Patients in the control group (n=38) underwent basic therapy with tacrolimus ointment + 3% boric acid lotion, whereas those in the observation group (n=38) were additionally given Chinese angelica decoction.

## 2. Methods

### 2.1. Sample Collection and Grouping

In total, 76 perianal eczema patients were included as research subjects. Diagnosis criteria were rough and hypertrophic perianal skin, lichenification with accompanying hyperpigmentation, a symmetrically distributed and frequently recurrent rash, and marked or extreme itching. All patients were diagnosed and systematically treated in hospital during June 2018-June 2019. They were divided into a control group (n=38) and an observation group (n=38) according to therapy plans. No significant differences were found in the baseline information of perianal eczema patients in the two groups (see Table 1).

Table 1 Baseline information of patients in the two groups.

| Baseline information | Observation group (n=38) | Control group (n=38) | P value |
| --- | --- | --- | --- |
| Age (years) | 41.08±9.05 | 44.44±11.67 | 0.262ᵃ |
| Course of disease (months) | 21.27±10.20 | 21.43±8.26 | 0.934ᵃ |
| Sex (male/female) | 15/23 | 17/21 | 0.817ᵇ |
| Pruritus ani score | 3.76±0.78 | 4.00±0.52 | 0.068ᵃ |
| Anus drainage and damp score (1/2/3) | 7/12/19 | 9/8/21 | 0.563 |
| Skin lesion score (0–3/4–6/7–9) | 5/20/13 | 6/19/13 | 0.995 |
| Skin lesion area score (2/4/6) | 5/18/15 | 4/20/14 | 0.882 |
| Life quality index score (0–10/11–20/21–30) | 8/27/3 | 7/23/8 | 0.194 |
| IL-2 | 65.75±28.48 | 71.53±23.73 | 0.386ᵃ |
| IL-4 | 21.31±6.82 | 20.54±7.57 | 0.827ᵃ |
| IgE | 53.01±16.72 | 51.89±15.58 | 0.983ᵃ |

Notes: ᵃ independent-sample t-test; ᵇ Fisher exact test; all tests used two-tailed P values.

### 2.2. Treatment Plans

Both groups of patients received wet compresses of 3% boric acid lotion (Shanghai Yunjia Huangpu Pharmaceutical Co., Ltd.; State Medical Permit No. H31022883), followed by tacrolimus ointment (LEO Laboratories Ltd.; active ingredient: 3 mg/10 g; Registration No. HJ20181015) smeared on the perianal region, twice a day, for 2 courses of 2 weeks each. In addition, the observation group was given Chinese angelica decoction orally for the same 2 courses of 2 weeks each.
Chinese angelica decoction contains 15 g angelica, 30 g Rehmannia glutinosa, 20 g radix paeoniae alba, 10 g Ligusticum chuanxiong Hort, 15 g Polygonum multiflorum, tenuifolia, 10 g saposhnikovia root, 20 g Tribulus terrestris, 30 g Astragalus membranaceus, and 6 g licorice root (one dose orally every day).

### 2.3. Observation Criteria

#### 2.3.1. Pruritus Ani Score before and after Treatment

A visual analogue scale (VAS) was adopted to assess pruritus ani before and after treatment [15]. Specifically, a 10 cm VAS was divided into 0–10: 0 for no pruritus ani, 10 for intense pruritus ani preventing sleep, and intermediate numbers for intermediate levels of pruritus ani. Patients were instructed to match their pruritus ani to a specific location on the scale, and physicians scored them on this basis.

#### 2.3.2. Anus Drainage and Damp Score [16] before and after Treatment

0 points: no seepage; 1 point: a little seepage (occasionally moist); 2 points: plenty of seepage (evident perianal maceration); 3 points: a great amount of seepage (perianal maceration polluting underwear).

#### 2.3.3. Skin Lesion Score [17] before and after Treatment

Papule: 1 point, mild (slightly red, scattered distribution, no phlysis); 2 points, moderate (reddish, close distribution, visible papulovesicles); 3 points, severe (markedly red, very close distribution, scattered phlysis). Erosion: 0 points, no erosion; 1 point, mild (scattered distribution); 2 points, moderate (small spots, partly confluent); 3 points, severe (evident and extensive erosion). Effusion: 0 points, no effusion; 1 point, mild (scattered and hard to observe); 2 points, moderate (considerable effusion, easily soaking toilet paper); 3 points, severe (copious, bead-like effusion).

#### 2.3.4. Skin Lesion Area Score [18]

Disinfected projection film was used to record the size of the wound on cardiogram paper.
Wound area was recorded as the product of the lengths of the horizontal and vertical axes on the cardiogram paper. 0 points: no skin lesion area; 2 points, mild: <2×2 cm; 4 points, moderate: >2×2 cm and <6×6 cm; 6 points, severe: >6×6 cm.

#### 2.3.5. Life Quality Index Score before and after Treatment

The skin disease life quality index was applied to assess changes in life quality [19]. There were 10 questions, each scored on a 4-level scale: 0, 1, 2, or 3 points. The total score ranges from 0 to 30 points; higher scores indicate a greater effect of the disease on the patient's life quality.

#### 2.3.6. Overall Efficacy Evaluation Criteria [20]

Referring to the Chinese Medicine Clinical Research of New Drugs Guiding Principles, improvement rate of clinical symptom score = (total symptom score before treatment − total symptom score after treatment) / total score before treatment × 100%. Cure: decrease in symptom score ≥90%; very effective: decrease ≥60% and <90%; effective: decrease ≥20% and <60%; ineffective: decrease <20%.

#### 2.3.7. Detection of Immune Reaction-Related Proteins

Patients' peripheral blood was drawn before and after treatment to test serum levels of IL-2, IL-4, and IgE, and changes in the expression levels of these proteins were observed.

### 2.4. Data Analysis

Data were analyzed with SPSS 26.0. Enumeration data are given as counts (n) and were compared with the Fisher exact test or Chi-square test. Measurement data were subjected to tests for normality and homogeneity of variance; data conforming to a normal distribution are displayed as mean±standard deviation, and differences were examined by t-test. P<0.05 was considered statistically significant.

## 3. Results

### 3.1. The Impact of Joint Chinese Angelica Decoction on Clinical Symptom Scores of Chronic Perianal Eczema

We compared several clinical symptom scores of chronic perianal eczema patients after treatment in the two groups, including pruritus ani score, anus drainage and damp score, skin lesion score, and skin lesion area score. Scores dropped in both groups, and the pruritus ani score (P=0.023), anus drainage and damp score (P=0.041), skin lesion score (P=0.025), and skin lesion area score (P=0.035) in the observation group were significantly lower (Table 2).

Table 2 Comparison of each clinical symptom score of the two groups of patients after treatment.

| Clinical symptom score | Observation group (n=38) | Control group (n=38) | P value |
| --- | --- | --- | --- |
| Pruritus ani score | 2.82±0.55 | 3.10±0.39 | 0.023ᵃ |
| Anus drainage and damp score (0/1/2) | 10/14/14 | 5/8/25 | 0.041 |
| Skin lesion score (0–3/4–6) | 28/10 | 25/13 | 0.025 |
| Skin lesion area score (0/2/4) | 11/19/8 | 3/20/15 | 0.035 |

Notes: ᵃ independent-sample t-test; two-tailed P values were applied for all tests.

### 3.2.
The Impact of Joint Chinese Angelica Decoction on Patients' Life Quality Score and Immune Reaction-Related Proteins

We compared life quality indexes and serum levels of IL-2, IL-4, and IgE in the two groups after treatment. The life quality index score in the observation group was significantly lower (P=0.020) (Figure 1(a)). Compared with the control group, patients in the observation group had a lower IL-2 level (no significant difference, P=0.129), a significantly lower IgE level (P=0.013), and a significantly higher IL-4 level (P=0.003) (Figures 1(b)–1(d)).

Figure 1 Life quality index score and laboratory detection indexes of patients in the two groups after treatment. (a) Life quality index score of patients in the observation and control groups after treatment. (b)–(d) Serum IL-2, IL-4, and IgE levels of the two groups after treatment, respectively. (a)(b)(c)(d)

### 3.3. The Impact of Tacrolimus Ointment + 3% Boric Acid Lotion Joint Chinese Angelica Decoction on the Overall Efficacy in Patients with Chronic Perianal Eczema

We compared the overall efficacy of the two therapy plans. As shown in Table 3, the overall efficacy in the observation group was better than that in the control group (P=0.033).

Table 3 Comparison of overall efficacy of the two groups of patients.

| Group | Cured | Very effective | Effective | Ineffective | P value |
| --- | --- | --- | --- | --- | --- |
| Control group (n=38) | 8 | 9 | 12 | 9 | 0.033 |
| Observation group (n=38) | 18 | 9 | 9 | 2 | |

## 4. Discussion

The purpose of therapy for perianal eczema is to improve patients' clinical symptoms and reduce the impact on their life quality [21]. This investigation comprehensively assessed the clinical efficacy of Chinese angelica decoction joint tacrolimus ointment + 3% boric acid lotion on chronic perianal eczema patients' clinical symptom-related indexes, life quality index score, changes in immune reaction-related protein levels, and overall efficacy.

A compelling investigation described that Chinese angelica decoction joint tacrolimus ointment + 3% boric acid lotion can improve T lymphocyte subset levels and reduce the skin lesion area and itching of patients with blood-deficiency and dryness-type eczema [22]. This investigation enrolled 76 patients with chronic perianal eczema and divided them into a control group and an observation group. Changes in each index and the overall efficacy in the two groups were compared to assess the clinical value of Chinese angelica decoction. After two courses of treatment, the groups differed in clinical symptom scores, life quality score, and clinical efficacy. First, clinical symptoms improved in both groups after treatment. Second, the observation group showed more favorable results on each index and in overall efficacy, suggesting a more beneficial effect of angelica added to the basic therapy. Taken together, tacrolimus ointment + 3% boric acid lotion joint Chinese angelica decoction was more effective than the basic therapy alone.
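The overall-efficacy grading underlying these comparisons (Section 2.3.6) reduces to a simple threshold rule; a minimal sketch with hypothetical scores:

```python
# Overall-efficacy grading per Section 2.3.6 (scores below are hypothetical).
def improvement_rate(before: float, after: float) -> float:
    """Improvement rate = (total score before - total score after) / before * 100%."""
    return (before - after) / before * 100.0

def grade(rate: float) -> str:
    if rate >= 90.0:
        return "cured"
    if rate >= 60.0:
        return "very effective"
    if rate >= 20.0:
        return "effective"
    return "ineffective"

rate = improvement_rate(before=12.0, after=3.0)  # hypothetical symptom-score totals
print(f"{rate:.0f}% -> {grade(rate)}")  # -> 75% -> very effective
```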
Patients' pruritus ani score, anus drainage and damp score, skin lesion score, skin lesion area score, life quality index score, serum IL-4 and IgE levels, and overall efficacy all improved significantly.

A preceding investigation elaborated that the absence of AQP3 correlates with intercellular edema and water homeostasis [23]. In the acute or subacute stages of eczema, the expression of AQP3 in the plasma membrane is abnormally reduced, and increasing AQP3 expression in local lesions may inhibit eczema inflammation. Chinese angelica decoction has been shown to strengthen AQP3 gene and protein expression in a guinea pig psoriasis model [24]. Thus, we posit that adding Chinese angelica decoction can enhance perianal eczema treatment and that this effect may be associated with AQP3 regulation; further experiments are planned.

In summary, this investigation verified the clinical efficacy of Chinese angelica decoction joint tacrolimus ointment + 3% boric acid lotion in chronic perianal eczema, and this combination of Chinese and Western medicine may serve as a new direction for treatment. Limitations should be considered: we did not clarify the molecular mechanism by which Chinese angelica decoction affects perianal eczema, and the sample size was limited. We plan to design experiments to probe the mechanism and to supplement recurrence-rate studies.

---

*Source: 1016108-2021-10-21.xml*
# Research on Super-Resolution Relationship Extraction and Reconstruction Methods for Images Based on Multimodal Graph Convolutional Networks **Authors:** Jie Xiao **Journal:** Mathematical Problems in Engineering (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1016112 --- ## Abstract This study constructs a multimodal graph convolutional network model and conducts an in-depth study of image super-resolution relationship extraction and reconstruction methods based on it. We study a domain adaptation algorithm based on graph convolutional networks, which constructs a global relevance graph over all samples using pre-extracted features and approximates the distributions of sample features in the two domains using a graph convolutional neural network with a maximum mean discrepancy (MMD) loss; with this approach, the model effectively preserves the structural information among the samples. Several comparison experiments are designed on the COCO and VG datasets; the target detection and recognition models based on image spatial information and on knowledge graphs substantially improve recognition performance over the baseline model. The superpixel-based target detection and recognition model can also effectively reduce the number of floating-point operations and the complexity of the model. We further propose a multiscale GAN-based image super-resolution reconstruction algorithm. Aiming at the detail loss or blurring that SRGAN exhibits when reconstructing detail-rich images, it integrates the idea of the Laplacian pyramid to reconstruct images at multiple scales in stages. 
It also incorporates a PatchGAN-style discriminative network to improve the recovery of image details and the reconstruction quality of images. Using the Set5, Set14, BSD100, and Urban100 datasets as test sets, experimental analysis with both objective and subjective evaluation metrics validates the performance of the improved algorithm proposed in this study. --- ## Body ## 1. Introduction With the continuous development of information technology and the popularity of intelligent terminal devices, people’s demand for information keeps rising: from low-resolution images in the 2G era, to richer pictures in the 3G era, to high-definition images in the 4G era, and on to holographic media such as AR and VR in the 5G era, the amount of information grows while the storage it occupies explodes [1]. This considerably impacts the daily dissemination of information: network speed cannot keep up, and hard disks cannot store it all. Therefore, there is an urgent need for efficient information compression to improve transmission efficiency and reduce the storage footprint [2]. With the development of high-performance processors, high-definition screens on intelligent devices are becoming more and more popular. However, most media on the Internet is still dominated by low-definition images, so data quality does not keep up with display quality, reducing the user experience [3]. In addition, due to the limitations of image storage hardware, image resolution is limited, and the size of the smallest pixel determines the details that can be displayed. But the real world contains essentially unlimited detail, so people want as much detail as possible in the images they get. The solutions to these pain points can be summarized as the compression and decompression of information. 
When storing and disseminating multimedia information, especially information-dense images, the most immediate and effective way to reduce size is to reduce the image’s resolution. Super-resolution reconstruction (SR) then reconstructs a single low-resolution (LR) frame, or multiple LR frames, into a high-resolution (HR) image by applying specific image processing methods. Usually, CNN-based target detection and recognition models use sliding windows or anchors to extract candidate foregrounds and backgrounds [4]; the final localization box is then generated by classifying and regressing all candidate foregrounds. By relying on a graph convolutional network, we can obtain richer information about an object’s location in the picture: for example, by inference over a spatial graph, we can roughly determine the object’s position and then fine-tune it accordingly. We can therefore design more flexible and efficient localization methods to generate localization boxes [5]. This study develops several graph convolutional network detection models acting on spatial graphs, beyond-pixel graphs, and knowledge graphs. We extract features beyond pixels to assist pixel information for accurate target detection and recognition. Finally, experimental and comparative analyses on the COCO and VG datasets show that target detection and recognition models based on graph convolutional networks can, to a certain extent, break the bottleneck of pure pixel-level recognition and achieve better object recognition and localization [6]. Image super resolution addresses the problem of recovering high-resolution images from low-resolution ones: it aims to up-sample low-resolution photos, produced by a deterministic or uncertain degradation model, to high resolution while providing more detail than the low-resolution input [7]. 
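To make the low-to-high-resolution mapping concrete, the simplest baseline interpolates new pixels from their neighbors. Below is a minimal bilinear-upsampling sketch (illustrative only, not the paper's method; it assumes an integer scale factor and a grayscale image stored as nested lists):

```python
def bilinear_upsample(img, scale):
    """Upsample a 2-D grayscale image (list of lists) by an integer
    scale factor, interpolating each new pixel from its four
    nearest source-pixel neighbors."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w * scale) for _ in range(h * scale)]
    for y in range(h * scale):
        for x in range(w * scale):
            # Map output coordinates back into source coordinates.
            sy = min(y / scale, h - 1)
            sx = min(x / scale, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

lr = [[0, 100], [100, 200]]   # a tiny 2x2 "image"
hr = bilinear_upsample(lr, 2)  # 4x4 result
```

The interpolation prior works well on smooth regions but cannot invent high-frequency detail, which is exactly the gap learning-based SR methods target.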
Traditional upsampling algorithms rest on a strong prior: there is a specific mathematical relationship between neighboring pixel values, so the original pixels can be recovered by interpolating adjacent pixels. In the forward propagation process, each sample feature is transformed independently, which may lead samples that are initially in the same class to drift apart under the influence of the distribution difference and eventually be classified into different categories [8]. Importing correlation graphs between samples enables problems with unstructured relationships, such as citation networks, to be trained well [9]. This feature also helps to compensate for the shortcomings of existing domain adaptation algorithms. Graph convolutional networks can be considered a particular case of graph networks. In this study, we investigate introducing graph convolutional networks into domain adaptation problems to improve their learning performance, open a new direction for exploring transfer learning tasks, and provide a feasible solution for learning scenarios where labeled information is difficult to obtain [10]. The study of domain adaptation algorithms can effectively reduce the need for data annotation, enable various algorithmic models to learn quickly on similar tasks, and improve their generalization and robustness, which is of great significance in real-world tasks where annotation information is not readily available. ## 2. Related Works The graph convolution layer is a simple extension of the fully connected layer that integrates valuable information from the knowledge graph into the feature vector, and the intuition behind the graph convolution layer is simple.
By importing a relevance graph (knowledge graph) into the neural network, the graph convolution layer can change the distribution of the feature vectors through the relevance variables of the graph so that related samples move closer to each other [11]. This helps the data obtain and maintain useful structural information during distribution approximation, thus avoiding the loss of similar structures in the source domain caused by transfer learning and improving network performance. Some scholars have already researched transfer learning using relevance graphs and convolution layers. When using local relevance graphs obtained by random sampling, neighboring samples may not be sampled simultaneously, which degrades the graph convolution performance. Altinkaya et al. first selected a few samples by random sampling, then added both the first-order and second-order neighbors of these samples to the candidate set before selection, so the sampled set was guaranteed to be correlated [12]. Chadha et al. interpret graph convolution as an integral transformation of the embedding function under probability measures and use Monte Carlo methods to estimate the integral [13]. They propose an importance sampling method, in which the sum of the relevance weights of each sample with respect to the other samples is used as the sampling weight. This sampling is performed once in each graph convolutional layer; good results are obtained on the citation network dataset. Among the reconstruction-based methods, projection onto convex sets (POCS) was proposed by Hong et al. This algorithm is based on the projection theory of mathematical sets and converges relatively quickly [14]. The iterative back-projection (IBP) method proposed by Kocsis et al.
projects the error between the input low-resolution image and the low-resolution image obtained from the degradation model back onto the corresponding high-resolution image, and the error converges continuously to reconstruct the high-resolution image [15]. Yanshan et al. proposed the maximum a posteriori probability (MAP) algorithm, which solves image super-resolution reconstruction by probabilistic estimation; its prerequisite is a low-resolution image sequence, and the goal of the algorithm is to maximize the posterior probability of the reconstructed high-resolution image [16]. Chen et al. proposed the neighborhood embedding method, which first maps the local geometric information of a low-resolution image block to the corresponding high-resolution image and then uses a linear combination over the neighborhood to produce the high-resolution image block [17]. Many subsequent researchers have made optimization improvements to the neighborhood embedding-based method. Super-resolution algorithms have explored how to recover finer texture information and edge details at higher magnification factors. Although traditional methods have low complexity, it is not easy for them to make a significant breakthrough in reconstruction quality and visual effect [18]. Deep learning methods require a large amount of training data compared with traditional learning-based methods. Still, they can recover richer image details and texture information by using the powerful feature representation capability of neural networks to learn the complex mapping relationships between low- and high-resolution images [19].
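As an illustration of the reconstruction-based family discussed above, the iterative back-projection scheme attributed to Kocsis et al. [15] can be sketched as follows. The average-pooling degradation operator and nearest-neighbour back-projection used here are simplifying assumptions, not the operators of any specific paper:

```python
import numpy as np

def downsample(x, s):
    """Average-pool by factor s (assumed degradation operator)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(e, s):
    """Nearest-neighbour expansion, used to back-project the LR error."""
    return np.repeat(np.repeat(e, s, axis=0), s, axis=1)

def ibp(y, s, iters=50, step=1.0):
    """Iterative back-projection: x <- x + step * Up(y - Down(x))."""
    x = upsample(y, s)                   # initial HR estimate
    for _ in range(iters):
        err = y - downsample(x, s)       # residual in the LR domain
        x = x + step * upsample(err, s)  # project the error back to HR
    return x
```

With these particular operators the LR residual vanishes quickly; with realistic blur kernels more iterations and a damped step are typically needed.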
In recent years, many results have emerged in the field of deep learning and achieved better performance than traditional algorithms, especially with the introduction of a new and more challenging generative model, the generative adversarial network, which opened a new chapter in image super-resolution research. The multi-image super-resolution task is also known as the video super-resolution task. The significant difference between the two is that the single-image super-resolution task mainly models the image scene and the mapping between pixel distributions by learning prior knowledge from the training data, inferring the pixel distribution of the super-resolved image from that of the target image [20]. The only information the model can ingest is the pixel mapping learned from the training data; when the pixel distribution of the test image does not appear in the training images, the super-resolution quality degrades significantly [21]. In multi-image (video) super-resolution tasks, additional information about the frames before and after the target image is introduced. Intuitively, the data between consecutive frames are continuous and gradual, and it is entirely possible to use this incremental information to extract, from the adjacent frames, the detail that was discarded when the target image was downsampled and thus recover it [22]. Graph convolutional networks are, however, highly vulnerable to adversarial attacks, which makes their prospects for industrial applications challenging. Combining graph convolutional networks with target detection and recognition is difficult, as graph convolutional networks can obtain certain features based on the graph structure.
However, there is still no fixed solution for using these features to complement or identify localized targets. Finally, as more and more graph convolutional networks are designed, selecting a suitable network based on the graph structure characteristics is also a significant issue. ## 3. Model Design of Super-Resolution Relationship Extraction and Reconstruction Method for Images Based on Multimodal Graph Convolutional Networks ### 3.1. Multimodal Graph Convolutional Network Model Construction Convolutional operations can extract structural features of structured data by using convolutional kernels with shared parameters. Single-modality image alignment refers to the registration of two images acquired with the same imaging device; it is mainly applied to the alignment between different MRI-weighted images and the alignment of image sequences. Multimodal image alignment refers to the registration of two images from different imaging devices. Increasing the number of convolutional kernels can obtain multidimensional structural features to characterize the data. For unstructured data such as molecular structures and recommendation systems, the information cannot be extracted directly by fixed convolutional kernels because such data lack uniformity. Therefore, the graph neural network (GNN), which simulates convolutional operations to extract features efficiently on unstructured data, emerged and continues to evolve. As with convolution on images, the information of each node is extracted through a receptive field [23]. The most direct way is to aggregate the node whose features are to be extracted with its neighbor nodes within a fixed number of hops, based on the idea of message passing, for subsequent scenarios such as node classification, graph classification, and edge prediction. The GCN has been derived with mathematically rigorous reasoning and proofs.
Combining spectral graph convolution with Chebyshev polynomials and simplifying the operation by constraining k = 1 yields a first-order linear approximation of the graph spectral convolution, from which the graph convolutional network layer is derived as follows: (1) H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)), where H^(l) denotes the feature matrix of the graph convolution network at layer l, with H^(0) = X; D̃ is the degree matrix, D̃_ii = Σ_j Ã_ij; Ã = A + I is the adjacency matrix with self-connections; W^(l) is the training parameter, and σ is the activation function. Therefore, with Â = D̃^(-1/2) Ã D̃^(-1/2), the output of the two-layer graph convolutional network is (2) Z = softmax(Â ReLU(Â X W^(0)) W^(1)). The graph convolutional network thus defines the graph convolution operation. It can achieve convolution-like feature extraction on unstructured data, and subsequent research is built on this graph convolution. During node updates, weights are determined based on the interrelationship between neighboring nodes and the current node, thus enhancing the ability to extract meaningful information and attenuating the weight of irrelevant information. Like the graph convolutional network, the graph attention network introduces the calculation of attention and adds it to the update operation, where the weight of each neighbor is determined by its relationship with the central node. The attention weights are calculated as shown in the following equation: (3) α_ij = exp(σ(aᵀ[W h_i ‖ W h_j])) / Σ_{k∈N_i} exp(σ(aᵀ[W h_i ‖ W h_k])). In the above equation, α_ij denotes the attention weight of node j with respect to node i; N_i denotes the set of nodes adjacent to node i; h_i is the feature of node i; the attention value α_ij denotes the degree of association between nodes, which can be obtained either by learning or by a similarity measure.
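A minimal NumPy sketch of the propagation rule in eqs. (1) and (2) follows; the ReLU hidden activation and row-wise softmax output are the standard choices assumed here:

```python
import numpy as np

def normalize_adjacency(A):
    """Compute Â = D̃^(-1/2) Ã D̃^(-1/2), where Ã = A + I adds self-loops."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)                      # degrees D̃_ii = Σ_j Ã_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One propagation step of eq. (1): H^(l+1) = ReLU(Â H^(l) W^(l))."""
    return np.maximum(A_hat @ H @ W, 0.0)

def two_layer_gcn(A, X, W0, W1):
    """Eq. (2): Z = softmax(Â ReLU(Â X W0) W1), softmax taken row-wise."""
    A_hat = normalize_adjacency(A)
    logits = A_hat @ gcn_layer(A_hat, X, W0) @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the output is a per-node class distribution, so the rows sum to one.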
The attention weights are introduced into the graph convolution process to emphasize the importance of different neighboring nodes to the current node, so that the next layer of features is calculated and updated as follows: (4) h_i' = σ(Σ_{j∈N_i} α_ij W h_j), where h_j is the feature of node j in the current layer of the graph attention network and h_i' is the feature of node i in the next layer. The graph attention network quantifies the relationship between nodes and introduces it into the update process; this relationship plays the role of the adjacency matrix in the graph convolution. Because it can construct adjacency matrices from node relationships, it can be applied to graphs without explicit edge concepts, such as graphs describing sample relationships. In essence, the principles of GCN and GAT are similar: the former uses the Laplacian matrix and emphasizes the role of graph structure information, while the latter introduces attention coefficients to enhance the role of correlation information between nodes. The latter is suitable for a broader range of scenarios, such as inductive tasks, by computing each node individually, free from the strong constraints of the graph structure. The interaction enhancement between local information includes the interaction among the internal elements of local target information and of local image information, and the interaction between local target information and local image information. The principle of internal element interaction enhancement is that a subset of elements that are relatively important or that form a common theme can be identified using the interrelationships among the internal features [24]. The principle of interaction enhancement between local target and image information is that both initially correspond to the same scene theme, so each constrains and guides the other.
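The attention computation of eqs. (3) and (4) can likewise be sketched. The LeakyReLU slope and the tanh output activation are assumptions, and the dense mask over N_i ∪ {i} is used only for clarity:

```python
import numpy as np

def gat_layer(A, H, W, a, slope=0.2):
    """Eqs. (3)-(4): α_ij = softmax over j∈N_i of LeakyReLU(aᵀ[Wh_i ‖ Wh_j]),
    then h_i' = tanh(Σ_j α_ij W h_j). Self-loops are included in N_i."""
    Wh = H @ W                                   # (n, d')
    d = Wh.shape[1]
    s_src = Wh @ a[:d]                           # aᵀ part acting on Wh_i
    s_dst = Wh @ a[d:]                           # aᵀ part acting on Wh_j
    e = s_src[:, None] + s_dst[None, :]          # raw scores e_ij
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    mask = (A + np.eye(A.shape[0])) > 0          # restrict to neighbours
    e = np.where(mask, e, -np.inf)
    e = e - e.max(axis=1, keepdims=True)         # stable softmax, eq. (3)
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ Wh)                   # eq. (4)
```

Because the mask depends only on node relationships, the same layer applies to graphs whose edges are derived from sample similarity rather than given explicitly, as the text notes.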
Local target information can guide local image information in selecting and fusing a subset of crucial image elements. At the same time, local image information can also guide local target information in selecting and fusing a subset of critical target elements. The graph convolutional neural network is a prevalent network model. Many algorithms use it as the basis for modeling and solving practical problems, whether in recommendation algorithms, computer vision, or natural language processing. In this study, we need to enhance the interaction and fusion between local information elements, so we design a practical information fusion module based on a graph convolutional network. First, the graph node features are defined as R = {f_1, f_2, …, f_m}, f_i ∈ R^d, where f_i is the feature vector of the i-th node and m is the number of nodes. The graph network constructed with local target information elements is defined as R^o = {f^o_1, f^o_2, …, f^o_p}; the graph network constructed with local image information elements is R^t = {f^t_1, f^t_2, …, f^t_q}; and the graph network constructed with both parts together is R^{ot} = {f_1, f_2, …, f_{p+q}}. The graph convolution operation in this study follows the propagation rule of (1): (5) R^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) R^(l) W^(l)). This study's multimodal local information interaction module consists of two branches, the independent graph convolution branch and the joint graph convolution branch. The independent graph convolution branch applies a graph convolution operation to R^o and R^t separately, which enhances the information elements of the other modality through intermodal attention while preserving the information differences between the two modalities.
In contrast, the joint graph convolution branch applies a graph convolution operation to R^{ot}, enabling the two modalities to automatically learn their interaction within the same graph network. The design and computation of the two graph convolution branches are shown in Figure 1. Figure 1 Multimodal local information interaction module. The independent graph convolution branch consists of a groups of identical computational modules, each of which implements the following computations. First, the local target information graph network R^o and the local image information graph network R^t each perform a graph convolution operation to achieve an interactive fusion of information within a single modality. Then, the two unimodal graph networks perform a crossmodal attention enhancement operation to exchange and enhance information between nodes of different modalities. Finally, new graph node features are generated by a fully connected layer FC, with the modular computational flow (6) h^o_a = GCN(r^o_{a-1}), α_ij = softmax((h^o_a W_{a,1})(h^t_a W_{a,2})ᵀ)_ij, r^o_a = FC(h^o_a + Σ_j α_ij h^t_a), where r^o_{a-1} denotes the output of the previous module. ### 3.2. Image Super-Resolution Relationship Extraction and Reconstruction Method Model Construction The core idea of the image super-resolution reconstruction algorithm is to process the low-resolution image with various techniques: the detailed information not present in the low-resolution image is inferred by the algorithm, and a clear, high-resolution image is reconstructed. This section mainly introduces the theoretical basis of image super-resolution reconstruction, some super-resolution reconstruction techniques, and the recognized image quality evaluation criteria for image super-resolution reconstruction. These evaluation criteria are used for this study's subsequent experimental results.
Image resolution, as digital images are displayed and stored in a computer, refers to the amount of information contained in an image [25]. Specifically, it relates to the number of pixels stored per unit of the image, and resolution is expressed in PPI (pixels per inch). In general, the more pixels per unit area, the higher the resolution of the image and the larger the image will be, thus allowing for a richer representation of detail. For example, a picture of 160 × 120 pixels contains 19,200 pixels in total. Super-resolution image reconstruction algorithms can be divided into two types, for dynamic images (video) and for static images, and this study focuses on super-resolution reconstruction of static images. The original high-resolution image is degraded into a low-resolution image by various factors in the imaging process, and the high-resolution image must be rebuilt: the low-resolution image is processed into a high-resolution image by specific super-resolution techniques. In this process, the image degradation model describes how high-resolution images degrade into low-resolution ones. The structure of the domain adaptation model based on graph convolutional networks proposed in this study is shown in Figure 2. Overall, we first extract the high-dimensional features of the input data using a pretrained deep convolutional network fine-tuned on the source domain dataset, or some manually designed feature extraction algorithms. Then, to model the correlation graph of the data, we obtain the correlation structure between the samples from the extracted features by the k-nearest neighbor (KNN) method, thus introducing the correlations between the samples in the source and target domains into the learning model.
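The KNN graph construction step described above might be sketched as follows; the squared-Euclidean metric and the mutual symmetrisation of the adjacency matrix are assumptions, since the text does not fix them:

```python
import numpy as np

def knn_adjacency(features, k):
    """Build a symmetric k-nearest-neighbour adjacency matrix over sample
    features, introducing sample correlations into the graph model."""
    n = features.shape[0]
    # pairwise squared Euclidean distances via the expansion ||a-b||² = ||a||²+||b||²-2a·b
    sq = (features ** 2).sum(axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(dist, np.inf)         # a sample is not its own neighbour
    A = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:  # k closest samples to i
            A[i, j] = A[j, i] = 1.0        # symmetrise the edge
    return A
```

The resulting matrix can be fed directly to the normalized propagation rule of eq. (1).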
After that, we apply a graph convolutional network to learn similar feature representations based on the samples and their neighboring samples. Finally, we reduce the difference in distribution between the source and target domains using the maximum mean discrepancy to ensure the transferability of the features. Figure 2 Domain adaptation model based on graph convolutional network. Because a traditional convolutional network cannot represent relational data such as vertices and edges, the graph convolutional neural network was introduced to handle such graph data, extending convolution to the graph domain. In the training process, the GNN attends to the graph structure, a gating mechanism can be applied over it, and convolution is introduced into the graph structure to learn by extracting spatial features. The GNN that introduces convolution is the GCN, which learns by extracting spatial features. GCN is a graph convolutional neural network, a kind of GNN; the difference lies mainly in using convolutional operators for information aggregation. The structure of the SRCNN model is straightforward: the input image on the left is a low-resolution image upscaled by bicubic interpolation, so it has the same resolution as the actual high-resolution image; however, the input image, without image enhancement, is still called a low-resolution image to distinguish between the two. The output channels of the three convolution layers used in the model are, from left to right, 64, 32, and 1. The loss function used in this network is the mean square error, given by the following equation: (7) mse(X, Y; θ) = (1/(h × w)) Σ_{i=1..h} Σ_{j=1..w} (Y_{i,j} - X_{i,j})², where X denotes the high-resolution image output from the network, Y represents the actual high-resolution image, θ denotes the network parameters, and w and h denote the width and height of the output image, respectively.
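The loss of eq. (7), together with the PSNR metric commonly derived from it, can be sketched as follows; PSNR is not defined in the text above and is included here as a standard, assumed evaluation metric:

```python
import numpy as np

def mse_loss(X, Y):
    """Eq. (7): mean of (Y_ij - X_ij)^2 over the h×w output image."""
    h, w = Y.shape
    return float(((Y - X) ** 2).sum() / (h * w))

def psnr(X, Y, peak=255.0):
    """Peak signal-to-noise ratio in dB, computed from the MSE above."""
    m = mse_loss(X, Y)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

For 8-bit images, higher PSNR (lower MSE) indicates a reconstruction closer to the ground-truth high-resolution image.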
The proposed model broadly lays down the structural composition of the whole super-resolution network, and almost all subsequent convolutional networks for super-resolution tasks follow the combination of these three modules. As important auxiliary information, the more accurate the depth information, the more accurately it can reflect the geometric relationships between viewpoints, which helps to remove the artifacts and distortions that appear in synthesized views. Existing view synthesis methods based on depth information generally have the following problem: the synthesized view is highly dependent on the quality of the depth map, but the predicted depth map suffers from insufficient accuracy because the depth estimation module cannot capture long-range spatial correlations [26]. Therefore, it is essential to obtain effective feature representations to improve the depth map quality for subsequent operations. This module can thoroughly learn effective high-resolution feature representations and keeps the feature resolution uniform throughout the process. The multiscale fusion mechanism is designed to fully fuse the relevant features to obtain rich feature representations. This enables the proposed depth estimation module to fully capture long-range spatial correlations. The predicted depth map can then more accurately reflect the spatial distribution of the scene and provide information support for the next operation. The specific structure of the depth estimation module is shown in Figure 3. Figure 3 Specific structure of the depth estimation module. To address the computational inefficiency of pre-upsampling, some researchers have proposed performing most of the mapping in low-dimensional space and upsampling last. Unlike pre-upsampling, this class of models replaces the traditional upsampling operation with a learnable upsampling layer at the end of the network.
Since this class of models performs many linear convolution operations in the low-dimensional space, the time and space costs are significantly reduced, and training and testing are much faster. Progressive upsampling models reduce the learning difficulty of the model by decomposing a complex task into small, simple tasks. Such models provide an elegant solution to the multiscale super-resolution problem without adding time and space costs. ## 3.1. Multimodal Graph Convolutional Network Model Construction Convolutional operations can extract structural features of structured data by using convolutional kernels with shared parameters. Single-modality image alignment refers to the floating of two images acquired with the same imaging device. It is mainly applied to the alignment between different MRI-weighted images and the alignment of image sequences, etc. Multimodal image alignment refers to the floating of two images from other imaging devices. Increasing the number of convolutional kernels can obtain multidimensional structural features to characterize the data. For unstructured data such as molecular structure and recommendation system, the information cannot be extracted directly by fixed convolutional kernels because they do not have uniformity. Therefore, the graph neural network (GNN), which simulates convolutional operations to remove features efficiently on unstructured data, emerged and continues to evolve. Like convolution on images, the information of each node is extracted by picking the perceptual field [23]. The most direct way is to aggregate the node whose features are to be removed with its neighbor nodes within a fixed number of hops, based on the idea of message passing to extract parts of the graph for subsequent scenarios such as node classification, graph classification, and edge prediction. GCN has been mathematically rigorous in reasoning and proof. 
Combining spectral convolution and Chebyshev polynomials and simplifying the operation by constraining k=1 to obtain a first-order linear approximation to the graph spectral convolution, an expression for the graph convolution neural network is derived as follows:(1)hl+1=∑σ+hl−1+hl−wlσ−d+ad,where Hl denotes the graph convolution network at layer lH0=x; D˜ is the degree matrix D˜ii=∑A˜ijj;A˜=A+I denotes the adjacency matrix introducing its information; Wl is the training parameter, and σ is the activation function. Therefore, the output of the two-layer graph convolutional network is as follows:(2)z=∑softmaxd−ad−xwowd−1/2ad−σ.The graph convolution neural network defines the graph convolution operation. It can achieve convolution-like feature extraction on unstructured data, and subsequent research on it is done based on graph convolution.During node updates, weights are determined based on the interrelationship between neighboring nodes and the current node, thus enhancing the ability to extract meaningful information and attenuating the weight of irrelevant knowledge. Like the graph convolutional neural network, the graph attention network introduces the calculation of attention. It adds it to the update operation, while the node weight value is determined by its interrelationship with the controller node. The node weights are calculated as shown in the following equation:(3)αij=∑expσ−atwhi+atwhjσatwhi−whk.In the above equation,αij denotes the attention weight of node j with respect to node i; Ni denotes the set of nodes adjacent to node i; hi is the feature of node i; the attention value αij denotes the degree of association between nodes, which can be obtained either by learning or by a similarity measure. 
The attention weights are introduced into the graph convolution process to emphasize the importance of different neighboring nodes to the current node so that the next layer of feature values can be calculated and updated as follows:(4)hi=∑aij+whjσ−j−ni+σ,where hj is the feature of node j in the current layer of the graph convolution network; hi is the feature of node i in the next layer of the graph convolution network. The graph attention network quantifies and introduces the relationship between nodes into the graph update process, and this relationship is equivalent to the adjacency matrix in the graph convolution a. Because of its ability to construct adjacency matrices based on node relationships can be applied to graphs without explicit edge concepts, such as graphs describing sample relationships. In essence, the principles of GCN and GAT are similar; the former uses Laplacian matrices and emphasizes the role of graph structure information in graph convolutional networks. At the same time, the latter introduces attention coefficients to enhance the role of correlation information between nodes. The last is suitable for a broader range of scenarios, such as inductive tasks, by calculating each node one by one, free from the strong constraints of the graph structure.The interaction enhancement between local information includes the interaction between the internal elements of local target information and local image information and the interaction between local target information and local image information. The principle of internal element interaction enhancement is that a subset of elements that are relatively important or create a common theme can be calculated using the interrelationship between the interior features [24]. The principle of interaction enhancement between local target and image information is that both information initially corresponds to the same scene theme, so there is a constraint and guidance between the data. 
Local target information can guide local image information to make the selection and fusion of a subset of crucial image elements. At the same time, local image information can also locally target information to make the selection and fusion of a subgroup of critical target elements. The graph convolutional neural network is a prevalent network model. Many algorithms use it as the basis for modeling and solving practical problems, whether in recommendation algorithms, computer vision, or natural language processing. In this study, we need to enhance the interaction and fusion between local information elements, so we design a practical information fusion module based on a graph convolutional network.First, the graph node feature is defined asr=f1,f2,…,fm,fi∈rd the feature vector corresponding to the i node and m the number of nodes. The graph network constructed with local target information elements can be represented as the graph network built with local target information elements, which can be defined as ro=fo1,fo2,…,f0p. The graph network created with local text information elements can be described as rt=ft1,ft2,…,ftp The graph network made with both parts together can be represented as rot=f1,f2,…,fp+q The graph convolution operation in this study is defined as follows:(5)rl=∫rl+1−wr−th,h=∫l=1wh−wtmr−rl×wh−wtmr−rl,m=r∫l=1rl−1+wt−k+rl−1×wt−q.This study’s multimodal local information interaction module consists of two branches, the independent graph convolution branch and the joint graph convolution branch. The separate graph convolution branch is a graph convolution operation forro and rt respectively, which enables the enhancement of information elements of the other modality through intermodal attention while preserving the information differences between the two different modalities. 
In contrast, the joint graph convolution branch is a graph convolution operation rot, enabling the two modal information to automatically learn the interaction model in the same graph network. The design and computation of the two graph convolution branches are described in detail, as shown in Figure 1.Figure 1 Multimodal local information interaction module.The independent graph convolution branch consists ofa groups of identical computational modules. The following computations are implemented in each computational module. First, the local target information graph network ro and the local image information graph network rt each perform a graph convolution operation to achieve an interactive fusion of information within a single modality. Then, the two unimodal information graph networks perform a crossmodal attention enhancement operation to accomplish the necessary computation and information enhancement between different modal nodes. Finally, a new graph node information is generated after a fully connected layer FC with the following modular computational flow:(6)ra−o=fcha+o−1+∑aij+ha−t,ha−o=∑gcn−ra+o+ra+o−1,ai−j=∑ha−o+wa−1×ha−t−wa−2. ## 3.2. Image Super-Resolution Relationship Extraction and Reconstruction Method Model Construction The core idea of the image super-resolution reconstruction algorithm is to process the low-resolution image using various technical software. The detailed information not available in the low-resolution print is extracted through some algorithms, and a clear, high-resolution image is reconstructed. This section mainly introduces the theoretical basis of image starting resolution reconstruction, some SFI reconstruction techniques, and the recognized image quality evaluation criteria for image super-resolution reconstruction. The evaluation criteria are the criteria for this study’s subsequent experimental results. 
Image resolution is expressed in computer storage as the resolution that digital images displayed and stored in a computer have, and the resolution refers to the amount of information stored in a snap [25]. Specifically, it relates to the number of pixel points stored per unit of the image, and the resolution team is expressed in PPI (pixels per inch). In general, the more pixel dots per unit of an embodiment, the higher the resolution of the image and the larger the image will be, thus allowing for a richer representation of detail. For example, a picture with a resolution of 160∗120 pixels has a resolution of 19,200 pixels or 200,000 pixels. The super-resolution image reconstruction algorithm can be divided into two types: image and static image, and this study focuses on the super-resolution reconstruction algorithm for static images. The original high-resolution image generates a low-resolution image due to some extraneous culmination of the imaging process, and the HDR image must be built. The low-resolution bong image is processed into a high-resolution image according to specific super-resolution techniques. In this process, the image degradation model degrades high-resolution photos into low resolution images.The structure of the domain adaptation model based on graph convolutional networks proposed in this study is shown in Figure2. Overall, we first extract the high-dimensional features of the input data using a pretrained deep convolutional network fine-tuned with the source domain dataset or some manually designed feature extraction algorithms. Then, to consider the correlation graph of the data, we obtain the correlation structure between the samples based on the extracted features by the k-nearest neighbor (KNN) method, thus introducing the correlation between the pieces in the source and target domains into the learning model. 
After that, we apply a graph convolutional network to learn similar feature representations based on each sample and its neighboring samples. Finally, we reduce the difference in distribution between the source and target domains using the maximum mean discrepancy to ensure that the features are transferable.

Figure 2: Domain adaptation model based on graph convolutional network.

Because a traditional convolutional network cannot represent relational data such as vertices and edges, graph convolutional neural networks were introduced to handle such graph data; they extend convolution to the graph domain. In the training process, a GNN attends to the graph structure, a gating mechanism can be incorporated into it, and convolution is introduced into the graph structure to learn by extracting spatial features. The GNN that introduces convolution is the GCN: a GCN is a graph convolutional neural network, a kind of GNN, differing mainly in that it uses convolutional operators for information aggregation. The structure of the SRCNN model is straightforward: the input image on the left is a low-resolution image upsampled by bicubic interpolation to the same resolution as the actual high-resolution image. However, without image enhancement, the input image remains essentially low-resolution, which distinguishes the two. The output channels of the three convolutional layers used in the model are, from left to right, 64, 32, and 1. The loss function used in this network is the mean square error, given by the following equation:

(7) \(\mathrm{MSE}(X, Y; \theta) = \frac{1}{h \times w} \sum_{i,j} \big(Y_{ij} - X_{ij}\big)^2,\)

where X denotes the high-resolution image output from the network, Y represents the actual high-resolution image, θ denotes the network parameters, and w and h denote the length and width of the output image, respectively.
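The mean square error loss described above can be checked with a few lines of numpy; the averaging over the h × w output pixels follows the text.

```python
import numpy as np

def mse_loss(x, y):
    """Per-pixel mean squared error between reconstruction x and ground truth y."""
    h, w = y.shape
    return float(((y - x) ** 2).sum() / (h * w))

y = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy "ground-truth" high-resolution patch
x = np.array([[1.0, 2.0], [3.0, 2.0]])   # reconstruction off by 2 in one pixel
# squared error 4 in one of 4 pixels -> MSE = 1.0
assert mse_loss(x, y) == 1.0
```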
The proposed model broadly lays down the structural composition of the whole super-resolution network, and most convolutional networks for super-resolution tasks since then largely follow the combination of these three modules.

As important auxiliary information, the more accurate the depth information, the more accurately it reflects the geometric relationships between viewpoints, which helps eliminate the artifacts and distortions that appear in synthesized views. Existing view synthesis methods based on depth information generally share the following problem: the synthesized view is highly dependent on the quality of the depth map, but the predicted depth map suffers from insufficient accuracy because the depth estimation module cannot capture long-range spatial correlations [26]. Therefore, it is essential to obtain effective feature representations to improve depth-map quality for subsequent operations. This module can thoroughly learn effective high-resolution feature representations and keeps the feature resolution uniform throughout the process. The multiscale fusion mechanism is designed to fuse the relevant features fully and obtain rich feature representations. This enables the proposed depth estimation module to fully capture long-range spatial correlations, so the predicted depth map reflects the spatial distribution of the scene more accurately and provides information support for the next operation. The specific structure of the depth estimation module is shown in Figure 3.

Figure 3: Specific structure of the depth estimation module.

To address the computational inefficiency of pre-upsampling, some researchers have proposed performing most of the mapping in low-dimensional space, with upsampling at the end. Unlike pre-upsampling, this class of models replaces the traditional upsampling operation with a learnable upsampling layer at the end of the network.
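A learnable post-upsampling layer is commonly realized with sub-pixel (pixel-shuffle) convolution. The rearrangement step can be sketched as follows; the channel-first layout and the scale factor are illustrative assumptions.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (r*r*C, H, W) feature maps into (C, H*r, W*r) -- the learnable
    upsampling step placed at the end of a post-upsampling SR network."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4 channels of 2x2 features with r=2 collapse into one 4x4 output channel
feat = np.arange(16.0).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
```

Because the expensive convolutions all run at low resolution and only this cheap rearrangement produces the high-resolution output, the time and space savings described in the text follow directly.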
Since this class of models performs most of its convolution operations in low-dimensional space, time and space costs are significantly reduced, and training and testing are much faster. Progressive upsampling models reduce the learning difficulty by decomposing one complex task into small, simple tasks; such models provide an elegant solution to the multiscale super-resolution problem without adding time and space costs.

## 4. Analysis of Results

### 4.1. Image Super-Resolution Relationship Analysis of Multimodal Graph Convolutional Networks

The image super-resolution task here builds on single-image super-resolution: given the original low-resolution image, its neighboring low-resolution frames are acquired to help the original image obtain more information for recovery more quickly. This section proposes a deep neural network module for image reconstruction, the enhanced reconstruction block (ERB). This module redesigns the reconstruction module of the very deep image super-resolution model using a convolutional group with dense connections. It adds skip connections from shallow to deep features while maintaining the existing network depth, to better fit feature extraction and image reconstruction in deep networks. Meanwhile, to improve the deformable convolution in the feature alignment module during training, a weight-normalization layer is wrapped around the convolution operation in the PCD alignment module; after this replacement, stability against noise during network training is greatly improved [27]. Based on the above work, this section uses the classical super-resolution model EDVR as the framework and proposes a new super-resolution model, the enhanced reconstruction model for video super resolution (ERM-VSR).
In practical experiments, the ERM-VSR super-resolution model presented in this section achieves excellent performance that significantly exceeds that of the baseline EDVR model.

With the development of deep learning techniques, the complexity of graph convolutional networks is increasing, and the number of network layers is also growing. Deepening the network within a specific range makes it more expressive and richer in the features learned. However, in practical applications, increasing the number of layers does not necessarily lead to better output. The loss curve of the graph convolutional network versus the number of training rounds is shown in Figure 4.

Figure 4: Effect of the number of network layers on the accuracy of the training and testing phases.

During algorithm validation training on this dataset, it was found that EDVR's feature alignment module, the PCD alignment module, often failed to converge due to excessive offsets. A subsequent investigation of the convergence failures and an in-depth analysis of the training dataset further showed that, for videos with drastic scene switching (usually corresponding to rapid movement of the filming equipment) and camera cuts such as hard cuts and jump cuts in transitions, the PCD alignment module cannot effectively limit the magnitude of the learned motion-vector offsets. Once an offset jumps out of the effective range, the motion vector no longer corresponds to meaningful content; when it is fed to the deformable convolution, feature extraction fails and the whole feature alignment module breaks down.

The performance of graph convolutional neural networks depends on factors such as network structure and depth. Studying how these parameters affect the performance of super-resolution reconstruction networks can effectively guide the model design.
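One simple way to keep learned motion-vector offsets inside an effective range is to clamp their magnitude before they reach the deformable convolution. This is a hypothetical illustration of the failure mode described above, not the mitigation adopted here (which wraps weight normalization around the convolution).

```python
def clamp_offsets(offsets, max_mag):
    """Limit learned motion-vector offsets so that drastic scene cuts cannot
    feed out-of-range offsets into the deformable convolution."""
    return [max(-max_mag, min(max_mag, o)) for o in offsets]

# a jump cut produces a wildly wrong offset of 120 pixels; clamping bounds it
clamped = clamp_offsets([-50.0, 0.5, 120.0], max_mag=10.0)
assert clamped == [-10.0, 0.5, 10.0]
```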
It can also help fully exploit the performance of the networks. Since the network structure is crucial to the algorithm's convergence, this section first examines the effect of residual learning on the performance of the RLSR algorithm. All three experiments used T1-weighted imaging from the BrainWeb dataset as the test set and PSNR as the evaluation index, testing the RLSR algorithm on super-resolution reconstruction of anisotropic 3D-MRI images with a resolution of 2 mm × 2 mm × 2 mm. The effects of residual learning, network depth, and width are shown in Figure 5.

Figure 5: Effect of residual learning, network depth, and width. (a) Effect of residual learning. (b) Effect of network width. (c) Effect of network depth.

The best of the interpolation methods is the B-spline interpolation algorithm. Still, the PSNR and SSIM of this algorithm are 3.95 dB/0.0059 and 3.36 dB/0.0407 lower than those of the RLSR algorithm for layer thicknesses of 2 mm and 5 mm, respectively. Because the parameters of an interpolation method are fixed, the image is only upsampled from the spatial information of the pixels, without using any prior information. The NLM and SC methods exploit the self-similarity and sparsity of the image, respectively, for super-resolution reconstruction, improving the reconstruction effect [28]. Still, the PSNR and SSIM of the images they reconstruct do not match the RLSR, which is based on a residual-learning deep convolutional neural network. The SRCNN method is driven by a large number of training samples and directly learns the intrinsic mapping between high and low resolution without relying on hand-designed feature extraction; its super-resolution reconstruction effect is significantly better than the interpolation, NLM, and SC algorithms.
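The PSNR evaluation index used throughout these comparisons can be computed in a few lines; here images are flat lists of 8-bit intensities for brevity.

```python
import math

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref = [100.0, 150.0, 200.0, 250.0]   # toy reference image
rec = [101.0, 149.0, 201.0, 249.0]   # reconstruction off by 1 everywhere -> MSE = 1
score = psnr(rec, ref)               # 10 * log10(255^2) ~ 48.13 dB
```

A 1.28 dB gap, as reported for RLSR over SRCNN, corresponds to roughly a 34% reduction in mean squared error at fixed peak value.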
Since the RLSR algorithm uses residual learning to alleviate the training difficulty of deep networks faced by SRCNN and effectively improves the nonlinear fitting ability of the network, the quality of its super-resolution reconstructed images at a slice thickness of 2 mm is better than those reconstructed by the SRCNN and VDSR methods, with PSNR values 1.28 dB and 0.06 dB higher, respectively. The quality of the super-resolution reconstructed 3D-MRI images decreased to varying degrees as the slice thickness increased. The SSIM of the 3D-MRI images reconstructed by the RLSR algorithm was 0.004 higher than that of the SRCNN method at a layer thickness of 2 mm, but the difference reached 0.0254 when the thickness was increased to 5 mm. These experimental results indicate that the RLSR algorithm achieves good T1-weighted imaging super-resolution reconstruction and is robust across different slice thicknesses.

### 4.2. A Multimodal Graph Convolutional Network-Based Approach for Super-Resolution Relation Extraction and Reconstruction of Images: Implementation

For the overall performance comparison, the number of SUB modules in SUGNet is set to 20, and the output channels of the convolutional layer are set to 64. Balancing performance against model parameters, the depth of the backbone branch in the SUB module is set to 3. During training, a randomly cropped 48 × 48 image block is used as the model's input. To avoid overfitting the SUGNet algorithm during training, data augmentation techniques such as rotation and horizontal and vertical flipping are applied to all fundus datasets. The Adam optimizer is used to train the network parameters with an initial learning rate of 0.0001, and the learning rate is halved every 100 rounds.
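The step learning-rate schedule just described (initial rate 0.0001, halved every 100 rounds) can be expressed directly:

```python
def lr_at(epoch, base=1e-4, step=100):
    """Step schedule: halve the learning rate every `step` training rounds."""
    return base * (0.5 ** (epoch // step))

# rounds 0-99 train at 1e-4, rounds 100-199 at 5e-5, and so on
assert lr_at(0) == 1e-4
assert lr_at(100) == 5e-5
assert lr_at(250) == 2.5e-5
```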
For the same reconstruction factor, the generator loss of the algorithm in this study is lower than that of both SRResNet-V54 and SRGAN. Across reconstruction factors, the generator losses of SRResNet-V54 and SRGAN order, from small to large, as 4× < 6× < 8×, whereas the ordering for the algorithm in this study is 4× ≈ 6× < 8×. This shows that the generator network in this study works well for both 4× and 6× reconstruction, while the other two algorithms are only suitable for 4× reconstruction and show larger errors at 6× and 8×. Using the feature matching loss (F-Loss) and the Wasserstein distance loss (W-Loss) improves the reconstruction quality and mitigates the gradient vanishing that may occur during training. In addition, the multiplex conditional generator structure and the multiscale discriminator structure make the generator's performance at reconstruction factor 6 almost the same as at factor 4. Therefore, the algorithm in this section can cope with larger reconstruction factors, while the performance of the other algorithms drops sharply as the reconstruction factor increases. The dynamics of the different networks' loss function values are shown in Figure 6.

Figure 6: Dynamics of different network loss function values.

This study uses a network structure with only one hidden layer, for simplicity and to prevent overfitting; the number of neurons in the hidden layer is kept as small as possible. Meanwhile, the graph convolutional network algorithm uses each node's k nearest neighbors to describe each vertex's local information on the image model. 2D, or two-dimensional, graphics are flat: they contain only an X-axis and a Y-axis, and any three-dimensional sense, light, and shadow in 2D are drawn artificially to simulate depth.
3D, or three-dimensional, graphics contain, in addition to the horizontal X-axis and the vertical Y-axis, a depth Z-axis, so three-dimensional graphics can carry full 360-degree information. Therefore, as in the 2D reconstruction of images based on graph convolutional networks, determining the number of neurons in each sub-neural-network and the number k of nearest neighbors is also essential for the 3D reconstruction of faces. Accordingly, from the 2,800 strictly aligned 3D face models obtained during face data generation, 1,000 are randomly selected as the training set and 500 as the test set in this study. First, we test the prediction results of the network under different k values. In the network initialization phase, the weight parameters for the generator network's first forward propagation can be initialized with DGP-SRGAN network parameters obtained by pretraining the network with the minimized mean square error (MSE) loss function. The subsequent training process then alternates iteratively between the generator network and the discriminator network; in a general GAN model, the generator usually learns more slowly than the discriminator, which can cause the parameter updates to terminate early without yielding a robust generator model. In the training phase, the discriminator network is updated once, followed by one parameter update of the generator network. The super-resolution image output by each forward propagation of the generator is compared with the original high-resolution image HR to obtain an error signal. This error signal is backpropagated to produce gradients (derivatives) for learning, which readjust the weight parameters for the subsequent forward propagation.
The discriminator network then compares its output probability score for the input super-resolution image against label 0 and for the original high-resolution image HR against label 1, and updates its parameters by backpropagating the error to create the gradients used for learning. The results of the network training for the image super-resolution relationship extraction and reconstruction method are shown in Figure 7.

Figure 7: Network training results of the image super-resolution relationship extraction and reconstruction method.

DRCN is equivalent to SRCNN with a deepened network hierarchy; the DRCN network is more expressive and shows more apparent edge details than SRCNN. SRGAN and the optimized and improved DGP-SRGAN algorithm of this section can reconstruct more texture details than the preceding networks because they use a perceptual loss function to guide training, and their experimental results show better image visualization and clearer edge details. The proposed DGP-SRGAN has better subjective visual perception quality than the original SRGAN algorithm. The essence of graph convolution is to learn relevant information, so effective learning requires that the neighbors of the sampled samples be included in the same training step; on the other hand, the distribution-difference metric requires that the samples from both domains be as diverse as possible and not be limited to only some categories. Balancing these two needs within a limited training batch size is another critical issue in enhancing the effectiveness of graph convolution in deep learning frameworks.
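The alternating GAN update described above (one discriminator step pushing D(HR) toward 1 and D(SR) toward 0, then one generator step through the frozen discriminator) can be sketched with toy scalar "networks". Everything here is a hypothetical illustration of the training schedule, not the paper's DGP-SRGAN implementation.

```python
import math

w_d, w_g = 0.1, 0.1   # single-parameter discriminator / generator (illustrative)
lr = 0.05

def d_out(x):
    """Discriminator's probability that x is a real high-resolution sample."""
    return 1.0 / (1.0 + math.exp(-w_d * x))

for step in range(200):
    hr = 1.0                     # "real" high-resolution sample (label 1)
    sr = w_g * hr                # generator's super-resolved output (label 0)
    # discriminator update: push D(hr) toward 1 and D(sr) toward 0
    grad_d = (d_out(hr) - 1.0) * hr + d_out(sr) * sr
    w_d -= lr * grad_d
    # generator update: push D(sr) toward 1 through the (now frozen) D
    sr = w_g * hr
    grad_g = (d_out(sr) - 1.0) * w_d * hr
    w_g -= lr * grad_g
```

The one-to-one update ratio matches the text's prescription for keeping the slower-learning generator from being overwhelmed by the discriminator.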
According to the scheme proposed in this section, updating the relevance graph with the training trick allows the global relevance graph to be refreshed throughout network training, so it no longer depends excessively on fine-tuning the features extracted in the network. Class-label and pseudo-class-label sampling ensure, to some extent, the amount of data available for each class of samples when the model is trained in small batches, thus improving the performance of the overall model. The two proposed schemes enable the graph convolution model to be successfully integrated into the deep learning framework for end-to-end learning and achieve experimental results comparable to cutting-edge algorithms.

## 5. Conclusion

With the development of deep learning technology, more and more tools have been derived, continuously bringing new products and experiences to the public. Many technologies that were previously impractical with traditional methods are increasingly reaching the typical home. Image restoration, a classic task in computer vision, holds a critical position in practical applications. As an essential carrier of information, the quality of an image directly affects its ability to express information. Image super-resolution reconstruction aims to recover high-quality images, so it has a wide range of applications in many fields. We conducted comparison experiments on the COCO and Visual Genome datasets in this study. Analysis of the experimental data shows that the target detection and recognition models based on graph convolutional networks significantly improve the average accuracy over the whole class of objects.
In this study, the Set5, Set14, BSD100, and Urban100 datasets are used for experiments and compared against the Bicubic, SRCNN, VDSR, and SRGAN algorithms at reconstruction scales of 2× and 4× to verify the practical effect more fully. The proposed algorithm increases the network's nonlinear representation capability while acquiring more features than single-scale convolutional networks. It finally outputs reconstructed high-resolution images through a deconvolution layer, which captures more high-frequency information during upsampling. Experiments demonstrate that the algorithm has an advantage in super-resolution reconstruction over neural network algorithms of the same depth.

---

*Source: 1016112-2022-09-10.xml*
# Research on Super-Resolution Relationship Extraction and Reconstruction Methods for Images Based on Multimodal Graph Convolutional Networks

**Authors:** Jie Xiao

**Journal:** Mathematical Problems in Engineering (2022)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1016112
---

## Abstract

This study constructs a multimodal graph convolutional network model, conducts an in-depth study of image super-resolution relationship extraction and reconstruction methods, and builds a model of these methods based on multimodal graph convolutional networks. We study a domain adaptation algorithm based on graph convolutional networks: it constructs a global relevance graph over all samples using pre-extracted features and approximates the distributions of sample features in the two domains using a graph convolutional neural network with a maximum mean discrepancy loss; with this approach, the model effectively preserves the structural information among the samples. Several comparison experiments are designed on the COCO and VG datasets; the target detection and recognition models based on image spatial information and on knowledge graphs substantially improve recognition performance over the baseline model. The super-pixel-based target detection and recognition model also effectively reduces the number of floating-point operations and the complexity of the model. We further propose a multiscale GAN-based image super-resolution reconstruction algorithm. To address the detail loss and blurring that SRGAN exhibits when reconstructing detail-rich images, it integrates the idea of the Laplacian pyramid to complete multiscale reconstruction of images in stages, and it incorporates a PatchGAN-style discriminative network to effectively improve the recovery of image details and the reconstruction quality. Using the Set5, Set14, BSD100, and Urban100 datasets as test sets, experimental analysis with objective and subjective evaluation metrics validates the performance of the improved algorithm proposed in this study.

---

## Body

## 1. Introduction

With the continuous development of information technology and the popularity of intelligent terminal devices, people's demand for information keeps rising: from text in the 2G era to pictures in the 3G era, then to video in the 4G era, and on to holographic media such as AR and VR in the 5G era, the amount of information grows while the storage it occupies also explodes [1]. This considerably impacts the daily dissemination of information: the network speed cannot keep up, and hard disks cannot store it all. Therefore, there is an urgent need for efficient information compression to improve transmission efficiency and reduce the storage footprint [2]. With the development of high-performance processors, high-definition screens are becoming more and more common on intelligent devices. However, most media on the Internet is still dominated by low-definition images, so data quality cannot keep up with display quality, reducing the user experience [3]. In addition, because of the limitations of image storage hardware, the resolution of images is limited, and the size of the smallest pixel determines the details that can be displayed. But the real world is effectively continuous in detail, so people want as much detail as possible in the images they obtain. The solutions to the above pain points can be summarized as the compression and decompression of information: to store and disseminate multimedia information, especially the most informative image information, the most immediate and effective way to reduce the data size is to reduce the image's resolution. Super-resolution reconstruction (SR) techniques reconstruct a single or multiframe low-resolution (LR) image into a high-resolution (HR) image by applying specific image processing methods, thereby achieving high-quality images.
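The degradation view of SR described above can be made concrete with a toy model; here a 2x2 average pooling stands in for the (unspecified) blur-plus-decimation, purely as an illustrative assumption:

```python
import numpy as np

def degrade(hr, scale=2):
    """Toy degradation model: average-pool an HR array down by `scale`.

    SR methods aim to invert this kind of many-to-one mapping, which is why
    extra priors or learned mappings are needed to recover the lost detail.
    """
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale        # crop to a multiple of scale
    hr = hr[:h, :w]
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
```

Because each LR pixel is an average of several HR pixels, many distinct HR images degrade to the same LR image, which is precisely what makes super-resolution ill-posed.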
Usually, CNN-based target detection and recognition models use sliding windows or anchors to extract possible foregrounds and backgrounds [4]. The final localization frame is then generated by classifying and regressing all possible foregrounds. By relying on the graph convolutional network, we can obtain richer information about an object's position in the picture; for example, by reasoning over the spatial graph, we can roughly determine the object's position and then fine-tune it accordingly. Therefore, we can design more flexible and efficient localization methods to generate localization frames [5]. This study develops several graph convolutional network detection models acting on spatial, beyond-pixel, and knowledge graphs. We extract features beyond pixels to assist pixel information for accurate target detection and recognition. Finally, experimental and comparative analyses of the model on the COCO and VG datasets prove that the target detection and recognition model based on a graph convolutional network can, to a certain extent, break through the bottleneck of pixel-only recognition and help achieve better object recognition and localization [6]. Image super resolution addresses the problem of recovering high-resolution images from low-resolution ones: it aims to upsample a series of low-resolution images produced by a deterministic or uncertain degradation model to high resolution while providing more detail than the low-resolution input [7]. Traditional upsampling algorithms rely on a strong prior: they assume a specific mathematical relationship between neighboring pixel values, so that the missing pixels can be recovered by interpolating adjacent pixels.
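The neighbor-interpolation prior in the last sentence can be sketched in one dimension (bilinear and bicubic upsampling generalize the same assumption to two dimensions):

```python
import numpy as np

def upsample_1d_linear(row):
    """Double a 1-D signal's resolution by inserting neighbor midpoints."""
    out = np.empty(2 * len(row) - 1)
    out[0::2] = row                          # keep the original samples
    out[1::2] = (row[:-1] + row[1:]) / 2.0   # new pixels from adjacent ones
    return out
```

The prior here is exactly the one the text describes: each missing value is assumed to lie on the line between its two neighbors, which is why such methods cannot invent genuinely new high-frequency detail.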
In the forward propagation process, each sample feature is transformed independently, which may cause target-domain features that initially belong to the same class to be pulled apart under the influence of the distribution-difference function and eventually classified into different categories [8]. Graph convolutional networks enable problems with unstructured relationships, such as citation networks, to be trained well by importing correlation graphs between samples [9]. This property also helps compensate for the shortcomings of existing domain adaptation algorithms. Graph convolutional networks can be considered a particular case of graph networks. In this study, we intend to study the scheme and practice of introducing graph convolutional networks into domain adaptation problems to improve learning performance, open a new direction for exploring transfer learning tasks, and provide a feasible solution for learning scenarios where labeled information is hard to obtain [10]. The study of domain adaptation algorithms can effectively reduce the need for data annotation and enable various algorithmic models to learn quickly on similar tasks, improving their generalization and robustness, which is of great significance in real-world tasks where annotation information is not readily available.

## 2. Related Works

The graph convolution layer is a simple extension of the fully connected layer that integrates useful information from the knowledge graph into the feature vector, and the intuitive understanding of the graph convolution layer is simple: by importing a relevance graph (knowledge graph) into the neural network, the graph convolution layer can change the distribution of the feature vectors through the relevance weights of the graph so that related samples move closer to each other [11].
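One standard choice of distribution-difference function, the maximum mean discrepancy used in this paper's domain adaptation model, can be sketched as follows; the Gaussian kernel and the biased estimator are our illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian kernel matrix between two feature sets (rows are samples)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=1.0):
    """Squared MMD: mean k(s,s') + mean k(t,t') - 2 mean k(s,t)."""
    return float(rbf_kernel(source, source, gamma).mean()
                 + rbf_kernel(target, target, gamma).mean()
                 - 2.0 * rbf_kernel(source, target, gamma).mean())
```

Minimizing this quantity over the features produced by the graph convolutional network pulls the source and target feature distributions together while the graph structure keeps same-class samples near each other.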
This property helps the data obtain and maintain useful structural information during the distribution approximation process, thus avoiding the loss of similar structure in the source domain caused by transfer learning and improving network performance. Some scholars have already researched transfer learning using relevance graphs and graph convolution layers. When using local relevance graphs obtained by random sampling, neighboring samples may not be sampled simultaneously, degrading graph convolution performance. Altinkaya et al. first selected a few samples by random sampling, then added both the first-order and second-order neighbors of these samples to the candidate set before selection, so the sampled set was guaranteed to be correlated [12]. Chadha et al. interpret graph convolution as an integral transformation of the embedding function under probability measures and use Monte Carlo methods to estimate the integral [13]. They propose an importance sampling method in which the sum of each sample's relevance weights to the other samples is used as its sampling weight. This sampling is performed once in each graph convolutional layer, and good results are obtained on citation network datasets. Among the reconstruction-based methods, projection onto convex sets (POCS) is proposed by Hong et al.; this algorithm is based on the projection theory of mathematical sets and converges relatively quickly [14]. The iterative back-projection (IBP) method proposed by Kocsis et al. projects the error between the input low-resolution image and the low-resolution image obtained from the degradation model back onto the corresponding high-resolution image, and the error converges continuously to reconstruct the high-resolution image [15]. Yanshan et al. proposed the maximum a posteriori probability (MAP) algorithm, which solves image super-resolution reconstruction by probabilistic estimation: given a low-resolution image sequence, the goal is to maximize the posterior probability of the reconstructed high-resolution image [16]. Chen et al. proposed the neighborhood embedding method, which first maps the local geometric information of a low-resolution image block to the corresponding high-resolution image and then uses a linear combination of the mapped neighborhood to produce the high-resolution image block [17]. Many subsequent researchers have improved on the neighborhood-embedding-based method. Super-resolution algorithms have been explored around how to recover finer texture information and edge details at higher magnification. Although traditional methods have low complexity, it is hard for them to make significant breakthroughs in reconstruction quality and visual effect [18]. Deep learning methods require a large amount of training data compared with traditional learning-based methods, but they can recover fuller image details and texture information by using neural networks' powerful feature representation capability to learn the complex mapping between low- and high-resolution images [19]. In recent years, many results have emerged in the field of deep learning, achieving better performance than traditional algorithms, especially with the introduction of a new and more challenging generative model, the generative adversarial network, which opens a new world in image super-resolution research. The multi-image super-resolution task is also known as the video super-resolution task.
The significant difference between the multi-image and single-image super-resolution tasks is that the single-image task mainly models the image scene and the mapping between pixel distributions by learning prior knowledge from the training data, inferring the pixel distribution of the super-resolved image from that of the target image [20]. The only information such a model can ingest is the pixel mapping learned from the training data; when the pixel distribution of a test image does not appear in the training images, the super-resolution quality degrades significantly [21]. Multi-image (video) super-resolution tasks instead introduce additional information from the frames before and after the target image. From common sense, the data between consecutive frames are continuous and gradual, and it is entirely possible to use this incremental information to extract, from the adjacent frames, the detail discarded when the target image was downsampled and thereby recover it [22]. Graph convolutional networks are highly vulnerable to adversarial attacks, which makes their industrial application challenging. Combining graph convolutional networks with target detection and recognition is also difficult: graph convolutional networks can obtain certain features based on the graph structure, but there is still no fixed solution for using these features to supplement or identify localized targets. Finally, as more and more graph convolutional networks are designed, selecting a suitable network based on the graph structure's characteristics is also a significant issue.

## 3. Model Design of Super-Resolution Relationship Extraction and Reconstruction Method for Images Based on Multimodal Graph Convolutional Networks

### 3.1. Multimodal Graph Convolutional Network Model Construction

Convolutional operations can extract structural features of structured data by using convolutional kernels with shared parameters. Single-modality image alignment refers to the registration of two images acquired with the same imaging device; it is mainly applied to the alignment between different MRI-weighted images and the alignment of image sequences. Multimodal image alignment refers to the registration of two images from different imaging devices. Increasing the number of convolutional kernels yields multidimensional structural features that characterize the data. For unstructured data such as molecular structures and recommendation systems, information cannot be extracted directly by fixed convolutional kernels because such data lack uniformity. Therefore, the graph neural network (GNN), which mimics convolutional operations to extract features efficiently from unstructured data, emerged and continues to evolve. As with convolution on images, the information of each node is extracted over a receptive field [23]. The most direct way is to aggregate the node whose features are to be extracted with its neighbor nodes within a fixed number of hops, based on the idea of message passing, extracting parts of the graph for subsequent scenarios such as node classification, graph classification, and edge prediction. GCN has been given mathematically rigorous reasoning and proof. Combining spectral convolution with Chebyshev polynomials and simplifying the operation by constraining k=1 to obtain a first-order linear approximation of the graph spectral convolution, the expression for the graph convolutional neural network is derived as follows:

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\right), \tag{1}$$

where $H^{(l)}$ denotes the graph convolution network's features at layer $l$, with $H^{(0)} = X$; $\tilde{D}$ is the degree matrix, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$; $\tilde{A} = A + I$ is the adjacency matrix with self-loops introduced; $W^{(l)}$ is the training parameter; and $\sigma$ is the activation function.
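Propagation rule (1) can be sketched directly in NumPy; ReLU is assumed for the activation σ, and the function name is ours:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: sigma(D~^-1/2 A~ D~^-1/2 H W), with A~ = A + I."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)       # ReLU activation
```

Each output row is a degree-normalized average of the node's own features and its neighbors' features, followed by the shared linear map $W$, which is the message-passing aggregation described above.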
Therefore, the output of the two-layer graph convolutional network is as follows:

$$Z = \mathrm{softmax}\left(\hat{A}\,\sigma\left(\hat{A}\,X\,W^{(0)}\right)W^{(1)}\right),\qquad \hat{A} = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}. \tag{2}$$

The graph convolutional neural network defines the graph convolution operation; it achieves convolution-like feature extraction on unstructured data, and subsequent research builds on it. During node updates, weights are determined by the interrelationship between neighboring nodes and the current node, enhancing the ability to extract meaningful information and attenuating the weight of irrelevant information. Like the graph convolutional neural network, the graph attention network introduces the calculation of attention and adds it to the update operation, with each node's weight determined by its relationship to the central node. The node weights are calculated as shown in the following equation:

$$\alpha_{ij} = \frac{\exp\left(\sigma\left(a^{T}\left[W h_i \,\Vert\, W h_j\right]\right)\right)}{\sum_{k \in N_i}\exp\left(\sigma\left(a^{T}\left[W h_i \,\Vert\, W h_k\right]\right)\right)}. \tag{3}$$

In the above equation, $\alpha_{ij}$ denotes the attention weight of node $j$ with respect to node $i$; $N_i$ denotes the set of nodes adjacent to node $i$; and $h_i$ is the feature of node $i$. The attention value $\alpha_{ij}$, which measures the degree of association between nodes, can be obtained either by learning or by a similarity measure. The attention weights are introduced into the graph convolution process to emphasize the importance of different neighboring nodes to the current node, so the next layer of features is calculated and updated as follows:

$$h_i' = \sigma\left(\sum_{j \in N_i} \alpha_{ij}\, W h_j\right), \tag{4}$$

where $h_j$ is the feature of node $j$ in the current layer of the graph convolution network and $h_i'$ is the feature of node $i$ in the next layer. The graph attention network quantifies the relationships between nodes and introduces them into the update process; this learned relationship plays the role of the adjacency matrix in the graph convolution.
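Equations (3) and (4) can be sketched for a single node as follows; LeakyReLU inside the attention score and ReLU for the outer activation are illustrative assumptions, as is the function name:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_node_update(i, neighbors, H, W, a):
    """Attention weights (3) over node i's neighborhood, then update (4)."""
    Wh = H @ W
    # score each neighbor j via a^T [W h_i || W h_j]
    scores = np.array([leaky_relu(a @ np.concatenate([Wh[i], Wh[j]]))
                       for j in neighbors])
    scores = np.exp(scores - scores.max())      # numerically stable softmax
    alpha = scores / scores.sum()               # eq. (3)
    h_new = np.maximum((alpha[:, None] * Wh[neighbors]).sum(0), 0.0)  # eq. (4)
    return h_new, alpha
```

With a zero attention vector all neighbors receive equal weight, recovering a plain (unnormalized-degree) neighborhood average, which makes the role of the learned coefficients easy to see.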
Because it constructs adjacency matrices from node relationships, it can be applied to graphs without explicit edge concepts, such as graphs describing sample relationships. In essence, the principles of GCN and GAT are similar: the former uses Laplacian matrices and emphasizes the role of graph structure information in graph convolutional networks, while the latter introduces attention coefficients to strengthen the correlation information between nodes. The latter is suitable for a broader range of scenarios, such as inductive tasks, because it computes each node one by one, free from the strong constraints of the graph structure. The interaction enhancement between local information includes the interaction among the internal elements of local target information and of local image information, and the interaction between local target information and local image information. The principle of internal element interaction enhancement is that a subset of elements that are relatively important or that form a common theme can be identified using the interrelationships among the internal features [24]. The principle of interaction enhancement between local target and image information is that both originally correspond to the same scene theme, so each constrains and guides the other: local target information can guide local image information in selecting and fusing a subset of crucial image elements, while local image information can likewise guide local target information in selecting and fusing a subset of critical target elements. The graph convolutional neural network is a prevalent network model, and many algorithms use it as the basis for modeling and solving practical problems, whether in recommendation algorithms, computer vision, or natural language processing.
In this study, we need to enhance the interaction and fusion between local information elements, so we design a practical information fusion module based on a graph convolutional network. First, the graph node features are defined as $R = \{f_1, f_2, \ldots, f_m\}$, $f_i \in \mathbb{R}^d$, where $f_i$ is the feature vector of the $i$-th node and $m$ is the number of nodes. The graph network constructed from local target information elements can be defined as $R_o = \{f_{o1}, f_{o2}, \ldots, f_{op}\}$; the graph network constructed from local image information elements can be described as $R_t = \{f_{t1}, f_{t2}, \ldots, f_{tq}\}$; and the graph network built from both parts together can be represented as $R_{ot} = \{f_1, f_2, \ldots, f_{p+q}\}$. The graph convolution operation in this study is defined as follows:

$$R^{(l)} = \sigma\left(H\,R^{(l-1)}\,W^{(l)}\right),\qquad H = \mathrm{softmax}(M),\qquad M = \left(R^{(l-1)} W_K\right)\left(R^{(l-1)} W_Q\right)^{T}. \tag{5}$$

This study's multimodal local information interaction module consists of two branches, the independent graph convolution branch and the joint graph convolution branch. The independent graph convolution branch applies a graph convolution operation to $R_o$ and $R_t$ separately, which enhances the information elements of the other modality through intermodal attention while preserving the information differences between the two modalities. In contrast, the joint graph convolution branch applies a graph convolution operation to $R_{ot}$, enabling the two modal information streams to automatically learn the interaction model in the same graph network. The design and computation of the two graph convolution branches are described in detail, as shown in Figure 1.

Figure 1 Multimodal local information interaction module.

The independent graph convolution branch consists of $a$ groups of identical computational modules, each of which implements the following computations. First, the local target information graph network $R_o$ and the local image information graph network $R_t$ each perform a graph convolution operation to achieve interactive fusion of information within a single modality. Then, the two unimodal information graph networks perform a crossmodal attention enhancement operation to accomplish the necessary computation and information enhancement between nodes of different modalities. Finally, new graph node information is generated after a fully connected layer FC, with the following modular computational flow:

$$R_o^{(a)} = \mathrm{FC}\left(h_o^{(a)} + \sum_j \alpha_{ij}\, h_{t,j}^{(a)}\right),\qquad h_o^{(a)} = \mathrm{GCN}\left(R_o^{(a-1)}\right) + R_o^{(a-1)},\qquad \alpha_{ij} = \mathrm{softmax}\left(\left(h_{o,i}^{(a)} W_{a,1}\right)\left(h_{t,j}^{(a)} W_{a,2}\right)^{T}\right). \tag{6}$$

### 3.2. Image Super-Resolution Relationship Extraction and Reconstruction Method Model Construction

The core idea of the image super-resolution reconstruction algorithm is to process the low-resolution image with various techniques: detailed information not present in the low-resolution image is recovered through certain algorithms, and a clear, high-resolution image is reconstructed. This section mainly introduces the theoretical basis of image super-resolution reconstruction, some super-resolution reconstruction techniques, and the recognized image quality evaluation criteria for image super-resolution reconstruction; these criteria are used for this study's subsequent experimental results. In computer storage, image resolution refers to the amount of information stored in an image as displayed and stored in a computer [25]. Specifically, it relates to the number of pixel points stored per unit area of the image, and resolution is expressed in PPI (pixels per inch). In general, the more pixels per unit area of an image, the higher the resolution and the larger the image, allowing for a richer representation of detail.
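The resolution arithmetic above is simple to check; the helper names are ours, purely for illustration:

```python
def pixel_count(width, height):
    """Total number of stored pixels for a width x height image."""
    return width * height

def ppi(width_px, width_inches):
    """Pixels per inch along one dimension of the displayed image."""
    return width_px / width_inches
```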
For example, a picture with a resolution of 160∗120 stores 19,200 pixels. Super-resolution image reconstruction algorithms can be divided into two types, for video and for static images, and this study focuses on super-resolution reconstruction algorithms for static images. The original high-resolution image degrades into a low-resolution image through various external factors in the imaging process, and the high-resolution image must be rebuilt: the low-resolution image is processed into a high-resolution one according to specific super-resolution techniques. In this process, the image degradation model describes how high-resolution images degrade into low-resolution ones. The structure of the domain adaptation model based on graph convolutional networks proposed in this study is shown in Figure 2. Overall, we first extract the high-dimensional features of the input data using a pretrained deep convolutional network fine-tuned on the source-domain dataset, or some manually designed feature extraction algorithms. Then, to account for the correlation graph of the data, we obtain the correlation structure between the samples from the extracted features by the k-nearest neighbor (KNN) method, thus introducing the correlations between samples in the source and target domains into the learning model. After that, we apply a graph convolutional network to learn similar feature representations based on the samples and their neighboring samples. Finally, we reduce the difference in distribution between the source and target domains using the maximum mean discrepancy to ensure the transferability of the features.

Figure 2 Domain adaptation model based on graph convolutional network.

Because the traditional CNN cannot represent relational data such as vertices and edges, the graph convolutional neural network was developed to handle such graph data; it is an extension of the CNN to the graph domain.
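The KNN graph-construction step just described can be sketched as follows; Euclidean distance and a symmetrized 0/1 adjacency are our illustrative choices:

```python
import numpy as np

def knn_adjacency(X, k):
    """Connect each feature row of X to its k nearest neighbors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    np.fill_diagonal(d2, np.inf)                          # no self-edges here
    A = np.zeros_like(d2)
    for i, row in enumerate(d2):
        for j in np.argsort(row)[:k]:
            A[i, j] = A[j, i] = 1.0                       # symmetrize
    return A
```

The resulting adjacency matrix is what the graph convolutional network (after adding self-loops and normalizing) consumes, tying the sample-correlation structure of both domains into the model.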
In the training process, the GNN attends to the graph structure, a gating mechanism feeds the graph structure in, and convolution is introduced into the graph structure to learn by extracting spatial features. The GNN that introduces convolution is the GCN, which learns by extracting spatial features. GCN is a graph convolutional neural network, a kind of GNN; the difference lies mainly in using convolutional operators for information aggregation. The structure of the SRCNN model is straightforward: the input image on the left is a low-resolution image enlarged by bicubic interpolation to the same resolution as the actual high-resolution image; however, this input, lacking genuine detail enhancement, is still a low-resolution image, which distinguishes the two. The output channels of the three convolution layers used in the model are, from left to right, 64, 32, and 1. The loss function used in this network is the mean square error, which is given by the following equation:

$$\mathrm{MSE}(X, Y; \theta) = \frac{1}{h \times w} \sum_{i=1}^{h}\sum_{j=1}^{w}\left(Y_{ij} - X_{ij}\right)^{2}, \tag{7}$$

where $X$ denotes the high-resolution image output from the network, $Y$ denotes the actual high-resolution image, $\theta$ denotes the network parameters, and $w$ and $h$ denote the width and height of the output image, respectively. The SRCNN model broadly laid down the structural composition of the whole super-resolution network, and later convolutional networks for super-resolution tasks largely follow the combination of these three modules. As important auxiliary information, the higher the accuracy of depth information, the more accurately it reflects the geometric relationships between viewpoints, which helps resolve the artifacts and distortions that appear in synthesized views.
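The mean-square-error loss in (7) can be sketched as:

```python
import numpy as np

def mse_loss(X, Y):
    """Mean square error between network output X and ground truth Y,
    averaged over all h*w pixels, as in equation (7)."""
    h, w = Y.shape
    return float(((Y - X) ** 2).sum() / (h * w))
```

Minimizing this pixel-wise loss maximizes PSNR but tends to over-smooth textures, which is the motivation for the perceptual and adversarial losses discussed elsewhere in this paper.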
The existing view synthesis methods based on depth information generally share the following problem: the synthesized view is highly dependent on the quality of the depth map, but the predicted depth map suffers from insufficient accuracy because the depth estimation module cannot capture long-range spatial correlations [26]. Therefore, it is essential to obtain effective feature representations that improve depth map quality for subsequent operations. This module can thoroughly learn effective high-resolution feature representations and keeps the feature resolution uniform throughout the process. The multiscale fusion mechanism is designed to fully fuse the relevant features to obtain rich feature representations. This enables the proposed depth estimation module to capture long-range spatial correlation, so the predicted depth map more accurately reflects the spatial distribution of the scene and provides information support for the next operation. The specific structure of the depth estimation module is shown in Figure 3.

Figure 3 Specific structure of the depth estimation module.

To address the computational inefficiency of pre-upsampling, some researchers have proposed performing most of the mapping in low-dimensional space and upsampling last. Unlike pre-upsampling, this class of models replaces the traditional upsampling operation with a learnable upsampling layer at the end of the network. Since these models perform most convolution operations in the low-dimensional space, the time and space costs are significantly reduced, and training and testing are much faster. Progressive upsampling models reduce the learning difficulty by decomposing a complex task into small, simple tasks; such models provide an elegant solution to the multiscale super-resolution problem without adding time and space costs.
Multimodal Graph Convolutional Network Model Construction Convolutional operations can extract structural features of structured data by using convolutional kernels with shared parameters. Single-modality image alignment refers to the floating of two images acquired with the same imaging device. It is mainly applied to the alignment between different MRI-weighted images and the alignment of image sequences, etc. Multimodal image alignment refers to the floating of two images from other imaging devices. Increasing the number of convolutional kernels can obtain multidimensional structural features to characterize the data. For unstructured data such as molecular structure and recommendation system, the information cannot be extracted directly by fixed convolutional kernels because they do not have uniformity. Therefore, the graph neural network (GNN), which simulates convolutional operations to remove features efficiently on unstructured data, emerged and continues to evolve. Like convolution on images, the information of each node is extracted by picking the perceptual field [23]. The most direct way is to aggregate the node whose features are to be removed with its neighbor nodes within a fixed number of hops, based on the idea of message passing to extract parts of the graph for subsequent scenarios such as node classification, graph classification, and edge prediction. GCN has been mathematically rigorous in reasoning and proof. Combining spectral convolution and Chebyshev polynomials and simplifying the operation by constraining k=1 to obtain a first-order linear approximation to the graph spectral convolution, an expression for the graph convolution neural network is derived as follows:(1)hl+1=∑σ+hl−1+hl−wlσ−d+ad,where Hl denotes the graph convolution network at layer lH0=x; D˜ is the degree matrix D˜ii=∑A˜ijj;A˜=A+I denotes the adjacency matrix introducing its information; Wl is the training parameter, and σ is the activation function. 
Therefore, the output of the two-layer graph convolutional network is as follows:(2)z=∑softmaxd−ad−xwowd−1/2ad−σ.The graph convolution neural network defines the graph convolution operation. It can achieve convolution-like feature extraction on unstructured data, and subsequent research on it is done based on graph convolution.During node updates, weights are determined based on the interrelationship between neighboring nodes and the current node, thus enhancing the ability to extract meaningful information and attenuating the weight of irrelevant knowledge. Like the graph convolutional neural network, the graph attention network introduces the calculation of attention. It adds it to the update operation, while the node weight value is determined by its interrelationship with the controller node. The node weights are calculated as shown in the following equation:(3)αij=∑expσ−atwhi+atwhjσatwhi−whk.In the above equation,αij denotes the attention weight of node j with respect to node i; Ni denotes the set of nodes adjacent to node i; hi is the feature of node i; the attention value αij denotes the degree of association between nodes, which can be obtained either by learning or by a similarity measure. The attention weights are introduced into the graph convolution process to emphasize the importance of different neighboring nodes to the current node so that the next layer of feature values can be calculated and updated as follows:(4)hi=∑aij+whjσ−j−ni+σ,where hj is the feature of node j in the current layer of the graph convolution network; hi is the feature of node i in the next layer of the graph convolution network. The graph attention network quantifies and introduces the relationship between nodes into the graph update process, and this relationship is equivalent to the adjacency matrix in the graph convolution a. 
Because GAT constructs adjacency relationships from node interrelationships, it can be applied to graphs without an explicit edge concept, such as graphs describing sample relationships. In essence, the principles of GCN and GAT are similar: the former uses the Laplacian matrix and emphasizes the role of graph structure information in graph convolutional networks, while the latter introduces attention coefficients to enhance the role of correlation information between nodes. The latter is suitable for a broader range of scenarios, such as inductive tasks, because it computes each node one by one and is free from the strong constraints of a fixed graph structure. The interaction enhancement between local information includes the interaction among the internal elements of local target information and of local image information, and the interaction between local target information and local image information. The principle of internal element interaction enhancement is that a subset of elements that are relatively important or that create a common theme can be identified using the interrelationships among the internal features [24]. The principle of interaction enhancement between local target and image information is that both kinds of information originally correspond to the same scene theme, so they constrain and guide each other: local target information can guide local image information in selecting and fusing a subset of crucial image elements, while local image information can likewise guide local target information in selecting and fusing a subset of critical target elements. The graph convolutional neural network is a prevalent network model, and many algorithms use it as the basis for modeling and solving practical problems in recommendation algorithms, computer vision, and natural language processing.
In this study, we need to enhance the interaction and fusion between local information elements, so we design a practical information fusion module based on a graph convolutional network. First, the graph node features are defined as r = (f_1, f_2, …, f_m), where f_i ∈ R^d is the feature vector corresponding to node i and m is the number of nodes. The graph network constructed with local target information elements is defined as r_o = (f_o1, f_o2, …, f_op); the graph network constructed with local image information elements is defined as r_t = (f_t1, f_t2, …, f_tq); and the graph network constructed with both parts together is represented as r_ot = (f_1, f_2, …, f_{p+q}). The graph convolution operation in this study is defined as follows:

(5) r^(l) = σ(H r^(l−1) W^(l)), H_ij = softmax_j((W_1 r_i^(l−1)) · (W_2 r_j^(l−1))),

where the affinity matrix H is computed from pairwise products of linearly transformed node features. This study's multimodal local information interaction module consists of two branches, the independent graph convolution branch and the joint graph convolution branch. The independent graph convolution branch applies graph convolution to r_o and r_t separately, which enhances the information elements of each modality through intermodal attention while preserving the information differences between the two modalities. In contrast, the joint graph convolution branch applies graph convolution to r_ot, enabling the two modal information streams to automatically learn their interaction pattern within the same graph network. The design and computation of the two graph convolution branches are described in detail in Figure 1.

Figure 1 Multimodal local information interaction module.

The independent graph convolution branch consists of a groups of identical computational modules, each of which implements the following computations.
First, the local target information graph network r_o and the local image information graph network r_t each perform a graph convolution operation to achieve interactive fusion of information within a single modality. Then, the two unimodal information graph networks perform a crossmodal attention enhancement operation to accomplish the necessary computation and information enhancement between nodes of the different modalities. Finally, new graph node information is generated after a fully connected layer FC, with the following modular computational flow:

(6) h_o^a = GCN(r_o^{a−1}) + r_o^{a−1}, α_ij = softmax_j((h_{o,i}^a W_1^a) · (h_{t,j}^a W_2^a)), r_o^a = FC(h_o^a + Σ_j α_ij h_t^a).

## 3.2. Image Super-Resolution Relationship Extraction and Reconstruction Method Model Construction

The core idea of the image super-resolution reconstruction algorithm is to process the low-resolution image with various techniques: detailed information not available in the low-resolution image is extracted by some algorithm, and a clear, high-resolution image is reconstructed. This section mainly introduces the theoretical basis of image super-resolution reconstruction, some super-resolution reconstruction techniques, and the recognized image quality evaluation criteria for image super-resolution reconstruction; these criteria are used for this study's subsequent experimental results. Image resolution, as expressed in computer storage, is the resolution of digital images displayed and stored in a computer, and it refers to the amount of information stored in an image [25]. Specifically, it relates to the number of pixel points stored per unit of the image, and resolution is expressed in PPI (pixels per inch). In general, the more pixel dots per unit of an image, the higher the resolution and the larger the image, thus allowing a richer representation of detail. For example, a picture with a resolution of 160 × 120 contains 160 × 120 = 19,200 pixels.
The super-resolution image reconstruction algorithm can be divided into two types, for dynamic (video) images and for static images, and this study focuses on the super-resolution reconstruction algorithm for static images. The original high-resolution image degrades into a low-resolution image due to various external factors in the imaging process, and the high-resolution image must be rebuilt: the low-resolution image is processed into a high-resolution image according to specific super-resolution techniques. In this process, the image degradation model describes how high-resolution images degrade into low-resolution images. The structure of the domain adaptation model based on graph convolutional networks proposed in this study is shown in Figure 2. Overall, we first extract the high-dimensional features of the input data using a pretrained deep convolutional network fine-tuned with the source domain dataset, or some manually designed feature extraction algorithms. Then, to take the correlation graph of the data into account, we obtain the correlation structure between the samples from the extracted features by the k-nearest neighbor (KNN) method, thus introducing the correlations among samples in the source and target domains into the learning model. After that, we apply a graph convolutional network to learn similar feature representations based on the samples and their neighboring samples. Finally, we reduce the difference in distribution between the source and target domains using the maximum mean discrepancy to ensure the transferability of the features.

Figure 2 Domain adaptation model based on graph convolutional network.

Because the traditional convolutional network cannot represent relational data such as vertices and edges, the graph convolutional neural network is used to solve problems on such graph data; it is the extension of convolution to the graph domain.
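The KNN graph-construction step described above can be sketched as follows; the Euclidean metric and the mutual symmetrization are illustrative assumptions, not necessarily the exact choices made by the authors:

```python
import numpy as np

def knn_adjacency(X, k=1):
    """Symmetric k-nearest-neighbor adjacency from sample features X (n, d)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-matches
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(d2[i])[:k]] = 1.0                # connect k nearest
    return np.maximum(A, A.T)                            # symmetrize

# three 1-D samples: the first two are close, the third is far away
X = np.array([[0.0], [1.0], [10.0]])
A = knn_adjacency(X, k=1)
```

The resulting adjacency matrix can then be fed to a GCN layer such as the one in Eq. (1).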
In the training process, the GNN attends to the graph structure, a gating mechanism can be incorporated into it, and convolution is introduced into the graph structure to learn by extracting spatial features. The GNN that introduces convolution is the GCN, which learns by extracting spatial features. GCN is a graph convolutional neural network, a kind of GNN; the difference lies mainly in using convolutional operators for information aggregation. The structure of the SRCNN model is straightforward: the input image on the left is a low-resolution image upscaled by bicubic interpolation, which has the same resolution as the actual high-resolution image; however, since it has undergone no image enhancement, it is still called a low-resolution image to distinguish the two. The output channels of the three convolutional layers used in the model are, from left to right, 64, 32, and 1. The loss function used in this network is the mean square error, which is given by the following equation:

(7) MSE(X, Y; θ) = (1/(h·w)) Σ_{i=1}^{h} Σ_{j=1}^{w} (Y_{i,j} − X_{i,j})²,

where X denotes the high-resolution image output from the network, Y denotes the actual high-resolution image, θ denotes the network parameters, and w and h denote the width and height of the output image, respectively. The proposed model broadly lays down the structural composition of the whole super-resolution network, and all convolutional networks doing super-resolution tasks after it largely follow the combination of these three modules. As important auxiliary information, the higher the accuracy of depth information, the more accurately it can reflect the geometric relationships between viewpoints, which helps to resolve the artifacts and distortions that appear in synthesized views.
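The mean-square-error loss of Eq. (7) reduces to a few lines; the toy 2 × 2 images below are illustrative:

```python
import numpy as np

def mse_loss(X, Y):
    """Pixelwise mean-squared error of Eq. (7): 1/(h*w) * sum (Y_ij - X_ij)^2."""
    h, w = Y.shape
    return ((Y - X) ** 2).sum() / (h * w)

Y = np.array([[1.0, 2.0], [3.0, 4.0]])   # reference high-resolution image
X = np.array([[1.0, 2.0], [3.0, 2.0]])   # network output, one pixel off by 2
assert mse_loss(X, Y) == 1.0             # single error of 2 -> 4 / 4 pixels
```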
The existing view synthesis methods based on depth information generally have the following problem: the synthesized view depends heavily on the quality of the depth map, but the predicted depth map suffers from insufficient accuracy because the depth estimation module cannot capture long-range spatial correlations [26]. Therefore, it is essential to obtain effective feature representations to improve the depth map quality for subsequent operations. This module can thoroughly learn effective high-resolution feature representations and keeps the feature resolution uniform throughout the process. The multiscale fusion mechanism is designed to fuse the relevant features fully and obtain rich feature representations. This enables the proposed depth estimation module to fully capture long-range spatial correlation, so that the predicted depth map can more accurately reflect the spatial distribution of the scene and provide information support for the next operation. The specific structure of the depth estimation module is shown in Figure 3.

Figure 3 Specific structure of the depth estimation module.

To address the computational inefficiency of prior upsampling, some researchers have proposed to perform most of the mapping in low-dimensional space and to upsample only at the end. Unlike prior upsampling, this class of models replaces the traditional upsampling operation with a learnable upsampling layer at the end of the network. Since these models perform most convolution operations in the low-dimensional space, the time and space costs are significantly reduced, and training and testing are much faster. Progressive upsampling models reduce the learning difficulty by decomposing a complex task into small, simple tasks. Such models provide an elegant solution to the multiscale super-resolution problem without adding time and space costs.

## 4. Analysis of Results

### 4.1. Image Super-Resolution Relationship Analysis of Multimodal Graph Convolutional Networks

The image super-resolution task builds on single-image super-resolution: given the most basic original low-resolution image, its neighboring low-resolution image frames are acquired to provide additional information that helps recover the original image. This section proposes a deep neural network module for image reconstruction, the enhanced reconstruction block (ERB). This module redesigns the reconstruction module of the ultradeep image super-resolution model using a convolutional group plus dense connections; it adds skip connections from shallow to deep features while maintaining the existing network depth, to better fit feature extraction and image reconstruction in deep networks. Meanwhile, to improve the deformable convolution in the feature alignment module during image super-resolution model training, a weight normalization layer is wrapped around the convolution operation in the PCD alignment module, and the stability against noise during network training is greatly improved after the replacement [27]. This section uses the classical image super-resolution model EDVR as the module framework based on the above work and proposes a new image super-resolution model, the enhanced reconstruction model for video super resolution (ERM-VSR). In practical experiments, the ERM-VSR super-resolution model presented in this section achieves excellent performance that significantly exceeds that of the baseline EDVR model. With the development of deep learning techniques, the complexity of graph convolutional networks is increasing, and the number of layers of the network is also growing. Deepening the network within a specific range makes it more expressive and its learned features richer.
However, in practical applications, increasing the number of layers of the network does not necessarily lead to better output results. The loss-rate variation curve of the graph convolutional network versus the number of training iterations is shown in Figure 4.

Figure 4 Effect of the number of network layers on the accuracy of the training and testing phases.

During the algorithm validation training on this dataset, it was found that EDVR's feature alignment module, the PCD alignment module, often failed to converge due to excessive offsets. In the subsequent investigation of the reasons for the convergence failure and an in-depth analysis of the training dataset, it was further found that for videos with overly drastic scene switching (usually corresponding to rapid movement of the filming equipment) and camera switching such as off-cuts and jump-cuts in transitions, the PCD alignment module cannot effectively limit the size of the learned motion-vector offset. Once the offset jumps out of the effective range and is input to the deformable convolution, feature extraction fails and the whole feature alignment module loses effectiveness. The performance of graph convolutional neural networks depends on various factors such as network structure and depth. Studying how parameters affect the performance of super-resolution reconstruction networks can effectively guide model design and fully exploit the performance of the networks. Since the network structure is crucial to the algorithm's convergence, this section first conducts experiments on the effect of residual learning on the performance of the RLSR algorithm.
All three experiments used T1-weighted imaging from the BrainWeb dataset as the test set and PSNR as the evaluation index to test the results of the RLSR algorithm for super-resolution reconstruction of anisotropic 3D-MRI images with a resolution of 2 mm × 2 mm × 2 mm. The effects of residual learning, network depth, and width are shown in Figure 5.

Figure 5 Effect of residual learning, network depth, and width. (a) Effect of residual learning. (b) Effect of network width. (c) Effect of network depth.

The best of the interpolation methods is the B-spline interpolation algorithm; still, the PSNR and SSIM of this algorithm are 3.95 dB/0.0059 and 3.36 dB/0.0407 lower than those of the RLSR algorithm for layer thicknesses of 2 mm and 5 mm, respectively. Because the parameters of an interpolation method are fixed, the image is only upsampled based on the spatial information of the pixels, without using any a priori information. The NLM and SC methods exploit the self-similarity and sparsity of the image, respectively, for super-resolution reconstruction, improving the reconstruction effect [28]; still, the PSNR and SSIM of the reconstructed images are not as good as those of RLSR, which is based on a residual-learning deep convolutional neural network. The SRCNN method is driven by many training samples and directly learns the intrinsic mapping relationship between high and low resolutions without relying on artificially designed feature extraction methods. Its super-resolution reconstruction effect is significantly better than the interpolation methods and the NLM and SC algorithms.
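PSNR, the evaluation index used in these experiments, can be computed as below; the peak value of 255 assumes 8-bit images:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    if mse == 0:
        return float('inf')            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((2, 2))
rec = np.full((2, 2), 255.0)           # maximally wrong 8-bit reconstruction
```

A perfect reconstruction gives infinite PSNR; a maximally wrong one gives 0 dB, which bounds the dB differences quoted above.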
Since the RLSR algorithm uses residual learning to alleviate the difficulty of training deep networks faced by SRCNN and effectively improves the nonlinear fitting ability of the network, the quality of super-resolution reconstructed images at a slice thickness of 2 mm is better than that of images reconstructed by the SRCNN and VDSR methods, with PSNR values 1.28 dB and 0.06 dB higher, respectively. The quality of the super-resolution reconstructed 3D-MRI images decreased to different degrees as the slice thickness increased. The SSIM of the 3D-MRI images reconstructed by the RLSR algorithm was 0.004 higher than that of the SRCNN method at a layer thickness of 2 mm, but the difference reached 0.0254 when the layer thickness was increased to 5 mm. These experimental results indicate that the RLSR algorithm achieves good T1-weighted imaging super-resolution reconstruction results and has good robustness across different slice thicknesses.

### 4.2. Implementation of the Multimodal Graph Convolutional Network-Based Approach for Super-Resolution Relation Extraction and Reconstruction of Images

For the overall performance comparison, the number of SUB modules in SUGNet is set to 20, and the output channels of the convolutional layer are set to 64. Considering performance and model parameters, the depth of the backbone branch in the SUB module is set to 3. During training, a randomly cropped 48 × 48 image block is used as the model's input. To avoid overfitting the SUGNet algorithm during training, this section uses data augmentation techniques such as rotation and horizontal and vertical flipping for all fundus datasets. The Adam optimizer is used to train the network parameters with an initial learning rate of 0.0001, and the learning rate is halved every 100 rounds.
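The stated schedule (initial learning rate 1e-4, halved every 100 rounds) is a simple step decay; this helper is an illustrative sketch, not the authors' training code:

```python
def lr_at_epoch(epoch, base_lr=1e-4, step=100):
    """Step decay: start at base_lr and halve it once every `step` rounds."""
    return base_lr * (0.5 ** (epoch // step))

assert lr_at_epoch(0) == 1e-4      # rounds 0-99 use the initial rate
assert lr_at_epoch(100) == 5e-5    # halved after 100 rounds
assert lr_at_epoch(250) == 2.5e-5  # halved twice by round 250
```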
For the same reconstruction factor, the generator loss of the algorithm in this study is lower than that of both SRResNet-V54 and SRGAN. Across reconstruction factors, the generator losses of SRResNet-V54 and SRGAN order from small to large as 4× < 6× < 8×, while for the algorithm in this study the order is 4× ≈ 6× < 8×. This shows that the generator network in this study works well for both 4× and 6× reconstruction, whereas the other two algorithms are suitable only for 4× reconstruction and show larger errors at 6× and 8×. Using the feature matching loss (F-Loss) and Wasserstein distance loss (W-Loss) improves reconstruction quality and mitigates the gradient dispersion that may occur during training. In addition, the multiplexed conditional generator structure and the multiscale discriminator structure make the generator's performance at reconstruction factor 6 almost the same as at factor 4. Therefore, the algorithm in this section can cope with larger reconstruction factors, while the performance of the other algorithms drops sharply as the reconstruction factor increases. The dynamics of the different network loss function values are shown in Figure 6.

Figure 6 Dynamics of different network loss function values.

This study uses a network structure with only one hidden layer, to simplify the model and prevent overfitting; the number of neurons in the hidden layer is kept as small as possible. Meanwhile, the graph convolutional network algorithm uses each node's k nearest neighbors to describe each vertex's local information on the image model. 2D, or two-dimensional, graphics are flat: they contain only the X-axis and the Y-axis, and any three-dimensional sense of light and shadow is drawn artificially to simulate depth.
3D, or three-dimensional, graphics contain, in addition to the horizontal X-axis and vertical Y-axis, a depth Z-axis, so three-dimensional graphics can contain 360 degrees of information. Therefore, as with the 2D reconstruction of images based on graph convolutional networks, determining the number of neurons in each subneural network and the number of k nearest neighbors is also essential for the 3D reconstruction of faces. Accordingly, in this study, from the 2,800 strictly aligned 3D face models obtained during face data generation, 1,000 are randomly selected as the training set and 500 as the test set. First, we test the prediction results of the network under different k values. In the network initialization phase, the weight parameters for the first forward propagation of the generator network can be initialized with the DGP-SRGAN network parameters obtained by pretraining the network with the minimized mean square error (MSE) loss function. Because of this, the subsequent training process adopts "synchronized" alternate iterative training of the generator network and the discriminator network: in a general GAN model, the generator network often learns more slowly than the discriminator network, which causes the parameter updates to end early, so a robust generator model is not obtained. In the training phase of the network, the discriminator network is updated once, followed by one parameter update of the generator network. The super-resolution image output by each forward propagation of the generator network is compared with the original high-resolution image HR to obtain an error signal; this error signal is back-propagated to produce a gradient (or derivative) used to readjust the weight parameters for the subsequent forward propagation.
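The alternating update order described above (discriminator once, then generator once per step) can be sketched with placeholder update callables; `d_step` and `g_step` are hypothetical stand-ins for the real optimizer steps, not the authors' training loop:

```python
def train_epoch(steps, d_step, g_step):
    """Per training step: update the discriminator once, then the generator once."""
    log = []
    for _ in range(steps):
        d_step()          # discriminator update (placeholder callable)
        log.append('D')
        g_step()          # generator update (placeholder callable)
        log.append('G')
    return log

calls = []
log = train_epoch(3, lambda: calls.append('D'), lambda: calls.append('G'))
```

The strict D-then-G alternation is one way to keep the two networks' learning speeds balanced, as the text motivates.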
The discriminator network then compares the output probability score of the input super-resolution generated image against 0 and that of the original high-resolution image HR against 1, and updates the discriminator network parameters by back-propagating the error to create the gradient used for network learning. The results of the network training for the image super-resolution relationship extraction and reconstruction method are shown in Figure 7.

Figure 7 Network training results of the image super-resolution relationship extraction and reconstruction method.

DRCN is equivalent to SRCNN with a deepened network hierarchy; the DRCN network is more expressive and shows more apparent edge details than SRCNN. The SRGAN and the optimized and improved DGP-SRGAN algorithm in this section can reconstruct more texture details than general CNN-based networks because they use the perceptual loss function to guide network training, and compared with the previous algorithms their experimental results show better image visualization and clearer edge details. The proposed DGP-SRGAN has better subjective visual perception quality than the original SRGAN algorithm. The essence of graph convolution is to learn relevant information, so for the network to learn effectively, the neighbors of the sampled samples must be included in the same training step; on the other hand, the distribution difference metric requires that the samples from both domains be as rich as possible and not limited to only some categories. Balancing these two needs within a limited batch size is another critical issue in enhancing the effectiveness of graph convolution in deep learning frameworks.
According to the scheme proposed in this section, updating the relevance graph with the training trick allows the global relevance graph to be updated throughout network training and no longer depend excessively on fine-tuning the features extracted in the network. Class-label and pseudo-class-label sampling ensure, to some extent, the amount of data available for each class of samples when the model is trained in small batches, thus improving the performance of the overall model. The two proposed schemes enable the graph convolution model to be successfully integrated into the deep learning framework for end-to-end learning and to achieve experimental results comparable to cutting-edge algorithms.
This section uses the classical image super-resolution model EDVR as the module framework based on the above work. It proposes a new image super-resolution model—enhanced reconstruction model for video super resolution (ERM-VSR). In practical experiments, the ERM-VSR image super-resolution model presented in this section achieves excellent performance that significantly exceeds that of the baseline EDVR model.With the development of deep learning techniques, the complexity of graph convolutional networks is increasing, and the number of layers of the network is also growing. Deepening the number of layers of the network within a specific range will make the web more expressive and richer in the features learned. However, in practical applications, increasing the number of layers of the network does not necessarily lead to better output results. The loss rate variation curve of the graph convolutional network versus the number of pieces of training is shown in Figure4.Figure 4 Effect of the number of network layers on the accuracy of the training and testing phases.During the algorithm validation training on this dataset, it was found that EDVR’s feature alignment module, PCD alignment module, often failed to converge due to excessive offsets. In the subsequent investigation of the reasons for the network convergence failure and the in-depth analysis of the training dataset, it was further found that for processing videos with too drastic scene switching (usually corresponding to the rapid movement of the filming equipment) and camera switching such as off-cut and jump-cut in transitions, PCD alignment module cannot effectively limit the size of the learned motion vector offset. Once it jumps out of the effective range and is input to the deformable, the motion vector is out of the compelling content. 
It is input to the deformable convolution, leading to the failure of feature extraction and loss of the whole feature alignment module.The performance of graphical convolutional neural networks depends on various factors such as network structure and depth. Studying how parameters affect the performance of super-resolution reconstruction networks can effectively guide the model design. It can fully exploit the performance of the networks. Since the network structure is crucial to the algorithm’s convergence, this section first conducts experiments on the effect of residual learning on the performance of the RLSR algorithm. All three experiments used T1-weighted imaging of the brain web dataset as the test set and PSNR as the evaluation index to test the results of the RLSR algorithm when there was super-resolution reconstruction of anisotropic 3D-MRI images with a resolution of 2mm×2mm×2mm. The effects of residual learning, network depth, and width are shown in Figure5.Figure 5 Effect of residual learning, network depth, and width. (a) Effect of residual learning. (b) Effect of network width. (c) Effect of network depth. (a)(b)(c)The best method among the interpolation methods is the B-spline interpolation algorithm. Still, the PSNR and SSIM of this algorithm are 3.95 dB/0.0059 and 3.36 dB/0.0407 lower than those of the RLSR algorithm for layer thicknesses of 2 mm and 5 mm, respectively. Due to the fixed parameters of the interpolation method, the image is only upsampled based on the spatial information of the pixels without using any a priori information. The NLM and SC methods exploit the self-similarity and sparsity of the image for super-resolution reconstruction, respectively, improving the super-resolution reconstruction effect [28]. Still, the PSNR and SSIM of the reconstructed image are not as good as the RLSR based on the residual learning deep convolutional neural network. 
The SRCNN method is driven by many training samples and directly learns the intrinsic mapping relationship between high and low resolutions without relying on artificially designed feature extraction methods. Its super-resolution reconstruction effect is significantly better than the interpolation method, NLM, and SC algorithms. Since the RLSR algorithm uses residual learning to alleviate the problem of difficult training of deep networks faced by SRCNN and effectively improves the nonlinear fitting ability of the network, the quality of super-resolution reconstructed images at a slice thickness of 2 mm is better than those reconstructed by SRCNN and VDSR methods, with PSNR values 1.28 dB and 0.06 dB higher than those of SRCNN and VDSR method approaches, respectively. The quality of the super-resolution reconstructed 3D-MRI images decreased to different degrees with the increase of the slice layer thickness. The SSIM of the 3D-MRI images reconstructed by the RLSR algorithm was 0.004 higher than that of the SRCNN method when the layer thickness was 2 mm, but the difference reached 0.0254 when the layer thickness was increased to 5 mm. The above experimental results indicate that the RLSR algorithm can achieve good T1-weighted imaging super-resolution reconstruction results and has good robustness for reconstructing different slice thicknesses. ## 4.2. A Multimodal Graph Convolutional Network-Based Approach for Super-Resolution Relation Extraction and Reconstruction of Images Implementation For the overall performance comparison, the number of SUB modules in SUGNet is set to 20, and the output channels of the convolutional layer are set to 64. Considering the performance and model parameters, the depth of the backbone branch in the SUB module is set to 3. During the training period, a randomly cropped 48 × 48 image block is used as the model’s input. 
To avoid overfitting the SUGNet algorithm during training, this section uses data enhancement techniques such as rotation and horizontal and vertical flipping for all fundus data sets. The Adam optimizer is used to train the network parameters with an initial learning rate of 0.0001, and the learning rate is reduced by half for every 100 rounds. For the same reconstruction factor, the generator loss of the algorithm in this study is lower than that of both SRRes Net-V54 and SRGAN. For different reconstruction factors, the generator losses of SRRes Net-V54 and SRGAN are in the order from small to large: 4 ×< 6 ×< 8×, while the order of the algorithm in this study is as follows: 4 ×≈ 6 ×< 8×. It proves that the generator network in this study can be used well for 4× and 6× reconstruction. Still, the other two algorithms are only suitable for 4× reconstruction and have more significant errors for 6× and 8×. Using feature matching loss (F-Loss) and Wasserstein distance loss (W-Loss) can improve the reconstruction quality and solve the gradient dispersion phenomenon that may occur during the training process. In addition, the multiplex conditional generator structure and the multiscale discriminator structure make the generator’s performance in this section almost the same as that of the reconstruction factor 4 when the reconstruction factor is 6. Therefore, the algorithm in this section can cope with more prominent reconstruction factors, while the performance of other algorithms decreases sharply when the reconstruction factor increases. The dynamics of the different network loss function values are shown in Figure6.Figure 6 Dynamics of different network loss function values.This study uses a network structure with only one hidden layer to simplify and prevent overfitting. The number of neurons in the hidden layer is as small as possible. 
Meanwhile, the graph convolutional network algorithm uses each node's k-nearest neighbors to describe each vertex's local information on the image model. 2D (two-dimensional) graphics are flat, containing only X-axis and Y-axis information; any three-dimensional sense of light and shadow must be drawn artificially to simulate depth. 3D (three-dimensional) graphics add a depth Z-axis to the horizontal X-axis and vertical Y-axis, so three-dimensional content can carry information from all 360 degrees. Therefore, as with 2D image reconstruction based on graph convolutional networks, determining the number of neurons in each subneural network and the number of k-nearest neighbors is also essential for 3D face reconstruction. Accordingly, in this study, from the 2,800 strictly aligned 3D face models obtained during face data generation, 1,000 are randomly selected as the training set and 500 as the test set. First, we test the prediction results of the network under different k values. In the network initialization phase, the weight parameters for the first forward propagation of the generator network are initialized with the DGP-SRGAN network parameters obtained by pretraining the network with the minimized mean-squared-error (MSE) loss function. The subsequent training process therefore alternates iterative training of the generator and discriminator networks in a "synchronized" fashion: in a typical GAN model, the generator learns more slowly than the discriminator, which can cause parameter updates to stop prematurely and prevent obtaining a robust generator model. In the training phase of the network, the discriminator network is updated once, followed by one parameter update of the generator network.
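The k-nearest-neighbor graph construction used to describe each vertex's local information can be sketched as below; this is a minimal brute-force illustration over point coordinates (`knn_graph` is a hypothetical helper, not the paper's implementation):

```python
def knn_graph(points, k):
    """Adjacency list linking each vertex to its k nearest
    neighbours by squared Euclidean distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    adj = {}
    for i, p in enumerate(points):
        others = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: d2(p, points[j]),
        )
        adj[i] = others[:k]
    return adj
```

Varying `k` here corresponds to the experiments in the text that test the network's predictions under different k values.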
The super-resolution image output by each forward propagation of the generator network is compared with the original high-resolution image HR to obtain an error signal. This error signal is back-propagated to produce a gradient (or derivative) for learning, which is used to readjust the weight parameters for the subsequent forward propagation. The discriminator network then compares its output probability score for the input super-resolution generated image against 0, and for the original high-resolution image HR against 1, and updates the discriminator network parameters by back-propagating this error to create the gradient used for network learning. The results of the network training for the image super-resolution relationship extraction and reconstruction method are shown in Figure 7.Figure 7 Network training results of image super-resolution relationship extraction and reconstruction method.DRCN is equivalent to SRCNN with a deepened network hierarchy. The DRCN network is more expressive, and its outputs show more apparent edge details than SRCNN's. SRGAN and the optimized DGP-SRGAN algorithm in this section reconstruct more texture details than a general GNN because they use the perceptual loss function to guide network training, and they yield better image visualization and clearer edge details than the earlier algorithms. The proposed DGP-SRGAN has better subjective visual perception quality than the original SRGAN algorithm. The essence of graph convolution is to learn relevant information, so for the network to learn effectively, the neighbors of the sampled samples must be included in the same training step; on the other hand, the distribution-difference metric requires that the samples in both domains be as rich as possible and not limited to only some categories.
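The discriminator's real-vs-generated comparison described above (real image scored against 1, generated image against 0) reduces, in its standard form, to a binary cross-entropy loss; the sketch below assumes that loss form, whereas the paper's full objective also combines W-Loss and F-Loss:

```python
import math

def bce(p, label):
    """Binary cross-entropy for one probability score p in (0, 1)."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def discriminator_loss(p_real, p_fake):
    """Real images are scored against the target 1, generated
    (super-resolved) images against the target 0."""
    return bce(p_real, 1.0) + bce(p_fake, 0.0)
```

A perfect discriminator (p_real = 1, p_fake = 0) drives this loss to zero; an undecided one (both scores 0.5) incurs 2·ln 2 ≈ 1.386.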
Balancing the needs of both within a limited batch size is another critical issue in enhancing the effectiveness of graph convolution in deep learning frameworks. Under the scheme proposed in this section, the relevance-graph update trick allows the global relevance graph to be refreshed throughout network training, so it no longer depends excessively on fine-tuning the features extracted by the network. Class-label and pseudo-class-label sampling ensure, to some extent, an adequate amount of data for each class of samples when the model is trained in small batches, thereby improving the performance of the overall model. Together, the two proposed schemes allow the graph convolution model to be integrated into the deep learning framework for end-to-end learning, achieving experimental results comparable to cutting-edge algorithms. ## 5. Conclusion With the development of deep learning technology, more and more tools have been derived from it, continuously bringing new products and experiences to the public. Many technologies that were previously unlikely to be realized with traditional methods are increasingly entering the average home. Image restoration, a classic task in computer vision, holds a critical position in practical applications. As an essential carrier of information, the quality of an image directly affects its power of expression. Image super-resolution reconstruction aims to recover high-quality images, so it has a wide range of applications in many fields. We conducted comparison experiments on the COCO and Visual Genome datasets in this study. Analysis of the experimental data shows that target detection and recognition models based on graph convolutional networks significantly improve the average accuracy across the whole class of objects.
In this study, the Set5, Set14, BSD100, and Urban100 datasets are used for experiments, and the algorithm is compared with the Bicubic, SRCNN, VDSR, and SRGAN algorithms at reconstruction scales of 2× and 4× to verify its practical effect more fully. The algorithm increases the network's nonlinear representation capability while extracting richer features than single-scale convolutional networks. It finally outputs reconstructed high-resolution images through a deconvolution layer, which recovers more high-frequency information during upsampling. The algorithm is experimentally demonstrated to outperform neural-network algorithms of comparable depth at super-resolution reconstruction. --- *Source: 1016112-2022-09-10.xml*
2022
# Perinatal Pharmacology **Authors:** Karel Allegaert; Vassilios Fanos; Johannes N. van den Anker; Stephanie Laër **Journal:** BioMed Research International (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101620 --- ## Body --- *Source: 101620-2014-04-13.xml*
2014
# Phenolic Profile, Antioxidant Activity, and Enzyme Inhibitory Properties of Limonium delicatulum (Girard) Kuntze and Limonium quesadense Erben **Authors:** A. Ruiz-Riaguas; G. Zengin; K.I. Sinan; C. Salazar-Mendías; E.J. Llorent-Martínez **Journal:** Journal of Chemistry (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1016208 --- ## Abstract In this work, we report the phytochemical composition and bioactive potential of methanolic and aqueous extracts of leaves from Limonium delicatulum (Girard) Kuntze and Limonium quesadense Erben. The characterization and quantitation of individual phytochemicals were performed by liquid chromatography with diode array and electrospray-tandem mass spectrometry detection. Myricetin glycosides were abundant in L. delicatulum, whereas L. quesadense was rich in gallo(epi)catechin-O-gallate. Total phenolics, flavonols, and flavonoids were assayed with conventional methods. Antioxidant and radical scavenging assays (phosphomolybdenum, DPPH, ABTS, CUPRAC, FRAP, and metal chelating activity), as well as enzyme inhibitory assays (acetylcholinesterase, butyrylcholinesterase, tyrosinase, amylase, glucosidase, and lipase), were performed to evaluate the potential bioactivity. The methanolic extracts of both species presented higher phenolic content and bioactivity than the aqueous extracts. Overall, L. quesadense extracts exhibited the most potent activity in most assays, representing a potential source of bioactive compounds for the pharmaceutical and food industries. --- ## Body ## 1. Introduction Plants represent a rich source of many bioactive compounds, particularly polyphenols, which are well known for their high antioxidant activity and various health benefits. As a result, an increasing number of plant species are constantly used in folk medicine.
In fact, several recent studies have focused on the enzyme inhibitory properties of plant extracts as an interesting approach to prevent different chronic diseases, such as inflammation, diabetes mellitus, Alzheimer’s disease, and cancer. Here, we present an investigation concerning two species of the genus Limonium: L. delicatulum (Girard) Kuntze and L. quesadense Erben. Limonium Miller is a genus that belongs to the Plumbaginaceae family, specifically to the Staticoideae subfamily. There are two subgenera (Pteroclados and Limonium) with different sections—depending on the authors—and at least 10 subsections [1, 2]. There are between 350 and 470 species, mainly distributed in the western Mediterranean region [2–4]. This genus comprises perennial species and, rarely, annual herbaceous plants. Limonium species usually grow in arid or semiarid areas, occupying small isolated spaces over gypsic or saline soils. There are numerous local endemic species and, due to their isolation, many of them are threatened and protected species. Some species are cultivated as ornamental plants, whereas others have important medicinal properties [5, 6]. Research on species of this genus has revealed important bioactivity concerning free radical scavenging [7, 8], antioxidant [5, 9–11], anti-inflammatory [10], antibacterial [12], antimicrobial [9], and antiviral [13] properties. The main phenolics identified as responsible for these activities were gallic acid, epigallocatechin gallate, and myricetin and isorhamnetin flavonoids [7, 8, 10, 11, 14]. L. delicatulum is an Iberian-North African endemism [3], growing up to 100 cm tall. Leaves are green, usually ovate to elliptic or obovate (3.5–15 cm length × 2–5 cm width), with 4–10 lateral nerves. It blooms from February to October depending on the altitude, developing shoots of 20–90 cm with spikes of 5–25 mm and spikelets of 4-5 mm. It inhabits coastal and inland saline habitats between 0 and 800 m.a.s.l.
It is not considered a threatened species [15] and its chemical composition and bioactivity have been scarcely studied. Its antioxidant activity has been reported [16, 17], as well as total phenolics, flavonoids, tannins, and antimicrobial activity [17]. However, the detailed phytochemical composition and potential enzyme inhibitory activities have not been reported so far. L. quesadense is endemic to the province of Jaén (southeastern Iberian Peninsula, Spain), growing 35–60 cm tall. Leaves are green-bluish to green-violetish, oblanceolate to spathulate (4–12 cm length × 1.5–3 cm width), with 4 (rarely 6) lateral nerves. It blooms from June to August, developing shoots of 20–50 cm with spikes of 7–20 mm and spikelets of 4.5-5 mm. It takes part in continental halophytic vegetation and gypsophyte scrubs in the Guadiana Menor valley between 500 and 700 m.a.s.l. It is regarded as a threatened plant under the category of “endangered” (EN) [15, 18]. To date, there are no studies concerning the phytochemical composition and bioactivity of this species. Taking into account the lack of information concerning the two target species—as well as the reports of the bioactivity of other Limonium species—this research aims at providing information concerning the phenolic composition of leaves of L. delicatulum and L. quesadense, examining their antioxidant activity (radical scavenging, reducing power, and metal chelating) and enzyme inhibitory properties (against acetylcholinesterase, butyrylcholinesterase, tyrosinase, amylase, glucosidase, and lipase). ## 2. Materials and Methods ### 2.1. Sample Preparation Leaves of L. quesadense and L. delicatulum were collected at the Native Flora Garden of the University of Jaén (Jaén, Andalusia, Spain; 37°47′18.879″N 3°46′31.583″W, 427 m a.s.l.) in September 2018. Samples are stored at the Herbarium of the University of Jaén. Photographs of both species are shown in Figure 1.Figure 1 Photographs of (a) L. delicatulum and (b) L. quesadense.
(a) (b)The taxonomical classification was confirmed by botanist Dr. Carlos Salazar-Mendías. Samples were washed with Milli-Q water and extracted by two different procedures:(i) Ultrasound-assisted extraction with MeOH: leaves were lyophilized (ModulyoD-23, Thermo Savant; Waltham, MA USA) and powdered; 2.5 g of sample was extracted with 50 mL of MeOH in an ultrasonic liquid processor (Qsonica Sonicator; Newtown, CT, USA; power of 55 W and frequency of 20 kHz) at 50% power for 10 min.(ii) Decoction: 2.5 g of sample (fresh and powdered) was extracted with 150 mL of boiling Milli-Q water for 30 minutes.Both extracts were filtered through Whatman No. 1 filters, and the solvent was evaporated under reduced pressure in a Hei-Vap Precision Rotary Evaporator (Heidolph; Schwabach; Germany) at 40°C. Dried extracts (DE) were stored at −20°C until analysis. ### 2.2. HPLC Analysis Dried extracts (5–10 mg) were redissolved in 1 mL of MeOH and filtered with 0.45μm nylon filters, and 10 μL of the sample was injected. The HPLC system was an Agilent Series 1100 with a G1315B diode array detector. The separation of the compounds was performed with a reversed-phase Luna Omega Polar C18 column (150 × 3.0 mm and 5 µm particle size; Phenomenex) with a Polar C18 Security Guard cartridge (Phenomenex) of 4 × 3.0 mm. The HPLC system was connected to an ion trap mass spectrometer (Esquire 6000, Bruker Daltonics) with an electrospray interface. Chromatographic conditions have been previously detailed [19]. All standards required to perform phenolic quantitation were purchased from Sigma-Aldrich (Madrid, Spain); individual stock solutions were prepared in methanol (MeOH, LC-MS grade, >99.9%; Sigma). LC-MS grade acetonitrile (CH3CN, 99%; LabScan; Dublin, Ireland) and ultrapure water (Milli-Q Water Purification System; Millipore; Milford, MA, USA) were also used. ### 2.3. 
Total Phenolic Content (TPC) and Total Flavonoid Content (TFC) TPC and TFC were determined using the Folin–Ciocalteu and AlCl3 assays, respectively [20]. Results were expressed as gallic acid equivalents (mg GAEs/g extract) and rutin equivalents (mg REs/g extract) for the respective assays. ### 2.4. Determination of Antioxidant and Enzyme Inhibitory Effects The metal chelating, phosphomolybdenum, FRAP, CUPRAC, ABTS, and DPPH activities of the extracts were assessed following the methods described by Uysal et al. [20]. The antioxidant activities were reported as trolox equivalents, whereas EDTA was used for metal chelating assay. The possible inhibitory effects of the extracts against cholinesterases (AChE (E.C. 3.1.1.7) from Electrophorus electricus (electric eel) and BChE (E.C. 3.1.1.8) from equine serum, by Ellman’s method), tyrosinase (from mushroom, E.C. 1.14.18.1), α-amylase (from porcine pancreas, E.C. 3.2.1.1), α-glucosidase (E.C. 3.2.1.20), and lipase (from porcine pancreas, E.C 3.1.1.3) were evaluated using standard in vitro bioassays [21]. ### 2.5. Data Analysis Bioactive compounds and biological activity data were prepared for univariate and multivariate statistical analysis. Firstly, one-way ANOVA followed by a post hoc test, namely, Tukey’s multiple range test, was performed in Xlstat 2018 to investigate significant differences (p<0.05) between the studied samples. The data were subjected to unsupervised multivariate analyses (PCA and HCA) using R software v. 3.5.1 for the discrimination between the extracts and their classification according to biological activities. Finally, the relationships between biological activities and phenolic classes were assessed based on Pearson’s correlation coefficients. ## 3. Results and Discussion ### 3.1.
Phytochemical Characterization The characterization of phytochemicals was performed by HPLC-ESI-MSn using negative and positive ion modes. Base peak chromatograms are shown in Figure 2, whereas the characterization of compounds is detailed in Table 1.Figure 2 HPLC-ESI-MSn base peak chromatograms of the methanolic extracts of (a) L. delicatulum and (b) L. quesadense. (a) (b)Table 1 Characterization of phenolics inL. delicatulum and L. quesadense extracts. No. tR(min) [M-H]−m/z m/z (% base peak) Assigned identification L. delicatulum L. quesadense 1 1.7 377 MS2 [377]: 341 (100)MS3 [377⟶341]: 179 (100), 131 (14), 113 (18)MS4 [377⟶341⟶179]: 161 (42), 143 (93), 119 (100) Disaccharide (HCl adduct) ✓ ✓ 2 2.0 191 MS2 [191]: 173 (25), 127 (19), 111 (100) Citric acid ✓b 3 3.1 411 MS2 [411]: 331 (7), 241 (100), 169 (14), 125 (6) Galloyl hexoside (sulfate adduct) ✓ 4 3.3 169 MS2 [169]: 125 (100) Gallic acid ✓b ✓b 5 3.6 439 MS2 [439]: 241 (100)MS3 [439⟶241]: 223 (96), 139 (100), 165 (16) Unknown ✓ 6 4.3 379 MS2 [379]: 379 (100), 241 (20) Unknown ✓ 7 4.5 325 MS2 [325]: 169 (100), 125 (9)MS3 [325⟶169]: 125 (100) Gallic acid derivative ✓ ✓ 8 5.6 365 MS2 [365]: 321 (68), 153 (100)MS3 [365⟶153]: 109 (100), 108 (61) Protocatechuic acid derivative ✓ 9 8.0 761 MS2 [761]: 609 (89), 593 (96), 575 (74), 423 (100)MS3 [761⟶423]: 297 (50), 283 (100), 243 (28) Prodelphinidin dimer B-type gallate (2 units (epi)GC) ✓a ✓ 10 9.1 443 MS2 [443]: 275 (24), 245 (100), 167 (27) Unknown ✓ 11 9.5 449 (+) MS2 [449]:431 (8), 288(18), 287(100) Cyanidin 3-glucoside ✓a 12 10.4 759 MS2 [759]: 481 (100), 423 (96), 301 (87)MS3 [759⟶423]: 297 (84), 283 (50), 243 (100) Unknown ✓ 13 11.0 363 MS2 [363]: 363 (100), 241 (10) Unknown ✓ 14 11.0 457 MS2 [457]: 331 (20), 305 (17), 193 (85), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓a 15 12.3 913 MS2 [913]: 761 (100), 423 (60)MS3 [913⟶761]: 423 (100), 609 (90), 305 (49)MS4 [913⟶761⟶423]: 297 (100), 253 (49), 405 (33) 
Gallo(epi)catechin-O-gallate dimer ✓ 16 13.3 457 MS2 [457]: 331 (22), 305 (13), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓ ✓ 17 15.0 631 MS2 [631]: 479 (80), 317 (100)MS3 [631⟶479]: 317 (100), 316 (91), 179 (9)MS4 [631⟶479⟶317]: 271 (100), 179 (54), 151 (18) Myricetin-galloyl-hexoside ✓ ✓ 18 15.4 457 MS2 [457]: 331 (16), 305 (15), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓b 19 16.2 625 MS2 [625]: 317 (100), 316 (89)MS3 [625⟶317]: 271 (100), 179 (48), 151 (26) Myricetin-O-rutinoside ✓ ✓ 20 16.6 479 MS2 [479]: 317 (79), 316 (100), 179 (5)MS3 [479⟶317]: 271 (100), 179 (67), 151 (16) Myricetin-O-hexoside ✓ ✓ 21 18.5 615 MS2 [615]: 463 (100), 301 (33)MS3 [615⟶463]: 301 (100)MS4 [615⟶463⟶301]: 179 (75), 151 (100) Quercetin-galloyl-hexoside ✓ ✓b 22 19.2 609 MS2 [609]: 301 (100)MS3 [609⟶301]: 271 (98), 179 (76), 151 (100) Rutin ✓ 23 19.9 463 MS2 [463]: 317 (77), 316 (100), 179 (10)MS3 [463⟶316]: 271 (100), 179 (60), 151 (18) Myricetin-O-deoxyhexoside ✓ ✓ 24 20.6 463 MS2 [463]: 301 (100), 151 (5)MS3 [463⟶301]: 255 (21), 179 (70), 151 (100) Quercetin-O-hexoside ✓ ✓ 25 20.6 465 (+) MS2 [465]: 303 (100)MS3 [465⟶303]: 257 (100), 137 (39) Delphinidin-O-hexoside ✓ 26 21.0 659 MS2 [659]: 317 (85), 316 (100)MS3 [659⟶316]: 271 (100), 179 (74), 151 (47) Myricetin derivative ✓a 27 22.2 497 MS2 [497]: 417 (100)MS3 [497⟶417]: 371 (14), 181 (100), 166 (50), 151 (24)MS4 [497⟶417⟶181]: 166 (100) Syringaresinol (sulfate adduct) ✓ 28 23.0 593 MS2 [593]: 286 (20), 285 (100)MS3 [595⟶285]: 257 (100), 239 (76), 213 (75), 151 (45) Kaempferol-O-rutinoside ✓ 29 23.5 599 MS2 [599]: 313 (100), 285 (94)MS3 [599⟶313]: 169 (100) Kaempferol-O-galloyl-hexoside ✓ 30 24.1 549 MS2 [549]: 505 (100)MS3 [549⟶505]: 317 (44), 316 (100)MS4 [549⟶505⟶316]: 271 (100), 179 (57), 151 (16) Myricetin derivative ✓a ✓ 31 24.6 447 MS2 [447]: 301 (100), 179 (5)MS3 [447⟶301]: 179 (91), 151 (100) Quercetin-O-deoxyhexoside ✓ ✓ 32 24.7 303 (+) MS2 [303]: 257 (100) 
Delphinidin ✓a 33 24.9 477 MS2 [477]: 331 (100)MS3 [477⟶331]: 316 (100), 315 (57) Mearnsetin-O-deoxyhexoside ✓a 34 27.2 437 MS2 [437]: 357 (100), 151 (39)MS3 [437⟶357]: 342 (48), 151 (100), 136 (40) Pinoresinol (sulfate adduct) ✓ 35 28.0 615 MS2 [615]: 317 (100)MS3 [615⟶317]: 179 (100), 151 (40) Myricetin-O-galloyl-deoxyhexoside ✓a ✓ 36 28.6 615 MS2 [615]: 317 (100)MS3 [615⟶317]: 179 (100), 151 (54) Myricetin-O-galloyl-deoxyhexoside ✓a ✓ 37 28.6 431 MS2 [431]: 286 (17), 285 (100), 284 (25), 255 (10)MS3 [431⟶285]: 257 (82), 255 (100), 197 (39) Kaempferol-O-deoxyhexoside ✓ 38 29.5 533 MS2 [533]: 489 (100)MS3 [533⟶489]: 447 (19), 301 (100)MS4 [533⟶489⟶301]: 271 (100), 179 (22), 151 (42) Quercetin derivative ✓a ✓ 39 33.4 599 MS2 [599]: 301 (100)MS3 [599⟶301]: 179 (81), 151 (100)MS4 [599⟶301⟶179]: 151 (100) Quercetin-O-galloyl-deoxyhexoside ✓ ✓ aOnly in MeOH extract; b only in H2O extract.Compound1 was identified as the HCl adduct of a disaccharide (dihexoside) due to its fragmentation pattern [22]. Compound 2, with deprotonated molecular ion at m/z 191 and base peak at m/z 111, was identified as citric acid.Compound4, with [M-H]- at m/z 169 and base peak at m/z 125, was identified as gallic acid by comparison with an analytical standard. Compound 3, after the loss of 80 Da (sulfate moiety), displayed fragment ions at m/z 331, 169, and 125, typical of galloyl hexoside. Compound 7 also presented gallic acid in its fragmentation pattern and was tentatively characterized as a derivative. Compound 9 was tentatively characterized as prodelphinidin dimer B-type gallate (2 units of gallo(epi)catechin) based on bibliographic information [23].Compounds14, 16, and 18 presented [M-H]- at m/z 457 and fragment ions at m/z 331, 305, 169, and 125, consistent with gallo(epi)catechin-O-gallate isomers [23]. 
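The neutral-loss reasoning used throughout this characterization (a fixed mass difference between the deprotonated molecular ion and a fragment identifies the lost moiety) can be sketched as a simple lookup; the mapping and the `assign_loss` helper are illustrative, not the authors' software:

```python
# Neutral losses (Da) discussed in the text and the moieties they indicate.
NEUTRAL_LOSSES = {
    152: "galloyl moiety",
    146: "deoxyhexoside",
    162: "hexoside",
    308: "rutinoside",
}

def assign_loss(parent_mz, fragment_mz, tol=0.5):
    """Match an observed neutral loss against the table above;
    returns None if no entry lies within the mass tolerance."""
    loss = parent_mz - fragment_mz
    for mass, moiety in NEUTRAL_LOSSES.items():
        if abs(loss - mass) <= tol:
            return moiety
    return None
```

For instance, myricetin-O-hexoside (compound 20) fragments from [M-H]− at m/z 479 to the aglycone at m/z 317, a loss of 162 Da assigned to a hexoside.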
Compound 15 was characterized as a dimer. Compound 8 exhibited fragment ions at m/z 153 and 109/108, which corresponded to protocatechuic acid (comparison with an analytical standard), so it was characterized as a derivative. Compound 11, identified using positive ion mode, suffered the neutral loss of 162 Da, yielding cyanidin at m/z 287, so it was characterized as cyanidin 3-glucoside [24]. Several myricetin derivatives were characterized in the analyzed extracts. In all of them, myricetin was observed at m/z 317 (main fragment ions at m/z 179 and 151). The following neutral losses were observed in compounds 17, 19, 20, 23, 35, and 36: 152 Da (galloyl moiety), 146 Da (deoxyhexoside), 162 Da (hexoside), 308 Da (rutinoside). We could not elucidate the exact structure of compounds 26 and 30, so they were characterized as myricetin derivatives. Following the same neutral losses as myricetin, several quercetin (21, 22, 24, 31, 38, and 39) and kaempferol (28, 29, and 37) derivatives were characterized. Quercetin and kaempferol aglycones were detected at m/z 301 and 285, respectively. Compound 27 suffered the neutral loss of 80 Da (sulfate) to yield the lignan syringaresinol at m/z 417, which was identified by the fragment ions at m/z 181, 166, and 151 [25]. Compound 34 was also characterized as a sulfate adduct of a lignan, pinoresinol [25]. Compound 32 was characterized as delphinidin due to the 303⟶257 fragmentation using positive ion mode. With an additional hexoside moiety, 25 was characterized as delphinidin-O-hexoside [24]. Finally, compound 33, with deprotonated molecular ion at m/z 477, suffered the neutral loss of 146 Da (deoxyhexoside) to yield mearnsetin at m/z 331 (main fragment ion at m/z 316) [26]. ### 3.2. Quantitation of Phenolic Compounds We quantified 16 compounds in the methanolic and aqueous extracts of L. quesadense and L. delicatulum. The results are summarised in Table 2.
## 3.1. Phytochemical Characterization

The characterization of phytochemicals was performed by HPLC-ESI-MSn using negative and positive ion modes. Base peak chromatograms are shown in Figure 2, whereas the characterization of compounds is detailed in Table 1.

Figure 2 HPLC-ESI-MSn base peak chromatograms of the methanolic extracts of (a) L. delicatulum and (b) L. quesadense. (a) (b)

Table 1 Characterization of phenolics in L. delicatulum and L. quesadense extracts. No. tR(min) [M-H]−m/z m/z (% base peak) Assigned identification L. delicatulum L.
quesadense 1 1.7 377 MS2 [377]: 341 (100)MS3 [377⟶341]: 179 (100), 131 (14), 113 (18)MS4 [377⟶341⟶179]: 161 (42), 143 (93), 119 (100) Disaccharide (HCl adduct) ✓ ✓ 2 2.0 191 MS2 [191]: 173 (25), 127 (19), 111 (100) Citric acid ✓b 3 3.1 411 MS2 [411]: 331 (7), 241 (100), 169 (14), 125 (6) Galloyl hexoside (sulfate adduct) ✓ 4 3.3 169 MS2 [169]: 125 (100) Gallic acid ✓b ✓b 5 3.6 439 MS2 [439]: 241 (100)MS3 [439⟶241]: 223 (96), 139 (100), 165 (16) Unknown ✓ 6 4.3 379 MS2 [379]: 379 (100), 241 (20) Unknown ✓ 7 4.5 325 MS2 [325]: 169 (100), 125 (9)MS3 [325⟶169]: 125 (100) Gallic acid derivative ✓ ✓ 8 5.6 365 MS2 [365]: 321 (68), 153 (100)MS3 [365⟶153]: 109 (100), 108 (61) Protocatechuic acid derivative ✓ 9 8.0 761 MS2 [761]: 609 (89), 593 (96), 575 (74), 423 (100)MS3 [761⟶423]: 297 (50), 283 (100), 243 (28) Prodelphinidin dimer B-type gallate (2 units (epi)GC) ✓a ✓ 10 9.1 443 MS2 [443]: 275 (24), 245 (100), 167 (27) Unknown ✓ 11 9.5 449 (+) MS2 [449]:431 (8), 288(18), 287(100) Cyanidin 3-glucoside ✓a 12 10.4 759 MS2 [759]: 481 (100), 423 (96), 301 (87)MS3 [759⟶423]: 297 (84), 283 (50), 243 (100) Unknown ✓ 13 11.0 363 MS2 [363]: 363 (100), 241 (10) Unknown ✓ 14 11.0 457 MS2 [457]: 331 (20), 305 (17), 193 (85), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓a 15 12.3 913 MS2 [913]: 761 (100), 423 (60)MS3 [913⟶761]: 423 (100), 609 (90), 305 (49)MS4 [913⟶761⟶423]: 297 (100), 253 (49), 405 (33) Gallo(epi)catechin-O-gallate dimer ✓ 16 13.3 457 MS2 [457]: 331 (22), 305 (13), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓ ✓ 17 15.0 631 MS2 [631]: 479 (80), 317 (100)MS3 [631⟶479]: 317 (100), 316 (91), 179 (9)MS4 [631⟶479⟶317]: 271 (100), 179 (54), 151 (18) Myricetin-galloyl-hexoside ✓ ✓ 18 15.4 457 MS2 [457]: 331 (16), 305 (15), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓b 19 16.2 625 MS2 [625]: 317 (100), 316 (89)MS3 [625⟶317]: 271 (100), 179 (48), 151 (26) Myricetin-O-rutinoside ✓ ✓ 20 16.6 479 MS2 
[479]: 317 (79), 316 (100), 179 (5)MS3 [479⟶317]: 271 (100), 179 (67), 151 (16) Myricetin-O-hexoside ✓ ✓ 21 18.5 615 MS2 [615]: 463 (100), 301 (33)MS3 [615⟶463]: 301 (100)MS4 [615⟶463⟶301]: 179 (75), 151 (100) Quercetin-galloyl-hexoside ✓ ✓b 22 19.2 609 MS2 [609]: 301 (100)MS3 [609⟶301]: 271 (98), 179 (76), 151 (100) Rutin ✓ 23 19.9 463 MS2 [463]: 317 (77), 316 (100), 179 (10)MS3 [463⟶316]: 271 (100), 179 (60), 151 (18) Myricetin-O-deoxyhexoside ✓ ✓ 24 20.6 463 MS2 [463]: 301 (100), 151 (5)MS3 [463⟶301]: 255 (21), 179 (70), 151 (100) Quercetin-O-hexoside ✓ ✓ 25 20.6 465 (+) MS2 [465]: 303 (100)MS3 [465⟶303]: 257 (100), 137 (39) Delphinidin-O-hexoside ✓ 26 21.0 659 MS2 [659]: 317 (85), 316 (100)MS3 [659⟶316]: 271 (100), 179 (74), 151 (47) Myricetin derivative ✓a 27 22.2 497 MS2 [497]: 417 (100)MS3 [497⟶417]: 371 (14), 181 (100), 166 (50), 151 (24)MS4 [497⟶417⟶181]: 166 (100) Syringaresinol (sulfate adduct) ✓ 28 23.0 593 MS2 [593]: 286 (20), 285 (100)MS3 [595⟶285]: 257 (100), 239 (76), 213 (75), 151 (45) Kaempferol-O-rutinoside ✓ 29 23.5 599 MS2 [599]: 313 (100), 285 (94)MS3 [599⟶313]: 169 (100) Kaempferol-O-galloyl-hexoside ✓ 30 24.1 549 MS2 [549]: 505 (100)MS3 [549⟶505]: 317 (44), 316 (100)MS4 [549⟶505⟶316]: 271 (100), 179 (57), 151 (16) Myricetin derivative ✓a ✓ 31 24.6 447 MS2 [447]: 301 (100), 179 (5)MS3 [447⟶301]: 179 (91), 151 (100) Quercetin-O-deoxyhexoside ✓ ✓ 32 24.7 303 (+) MS2 [303]: 257 (100) Delphinidin ✓a 33 24.9 477 MS2 [477]: 331 (100)MS3 [477⟶331]: 316 (100), 315 (57) Mearnsetin-O-deoxyhexoside ✓a 34 27.2 437 MS2 [437]: 357 (100), 151 (39)MS3 [437⟶357]: 342 (48), 151 (100), 136 (40) Pinoresinol (sulfate adduct) ✓ 35 28.0 615 MS2 [615]: 317 (100)MS3 [615⟶317]: 179 (100), 151 (40) Myricetin-O-galloyl-deoxyhexoside ✓a ✓ 36 28.6 615 MS2 [615]: 317 (100)MS3 [615⟶317]: 179 (100), 151 (54) Myricetin-O-galloyl-deoxyhexoside ✓a ✓ 37 28.6 431 MS2 [431]: 286 (17), 285 (100), 284 (25), 255 (10)MS3 [431⟶285]: 257 (82), 255 (100), 197 (39) 
Kaempferol-O-deoxyhexoside ✓ 38 29.5 533 MS2 [533]: 489 (100)MS3 [533⟶489]: 447 (19), 301 (100)MS4 [533⟶489⟶301]: 271 (100), 179 (22), 151 (42) Quercetin derivative ✓a ✓ 39 33.4 599 MS2 [599]: 301 (100)MS3 [599⟶301]: 179 (81), 151 (100)MS4 [599⟶301⟶179]: 151 (100) Quercetin-O-galloyl-deoxyhexoside ✓ ✓ aOnly in MeOH extract; b only in H2O extract.

Compound 1 was identified as the HCl adduct of a disaccharide (dihexoside) due to its fragmentation pattern [22]. Compound 2, with deprotonated molecular ion at m/z 191 and base peak at m/z 111, was identified as citric acid.

Compound 4, with [M-H]- at m/z 169 and base peak at m/z 125, was identified as gallic acid by comparison with an analytical standard. Compound 3, after the loss of 80 Da (sulfate moiety), displayed fragment ions at m/z 331, 169, and 125, typical of galloyl hexoside. Compound 7 also presented gallic acid in its fragmentation pattern and was tentatively characterized as a derivative. Compound 9 was tentatively characterized as prodelphinidin dimer B-type gallate (2 units of gallo(epi)catechin) based on bibliographic information [23].

Compounds 14, 16, and 18 presented [M-H]- at m/z 457 and fragment ions at m/z 331, 305, 169, and 125, consistent with gallo(epi)catechin-O-gallate isomers [23]. Compound 15 was characterized as a dimer.

Compound 8 exhibited fragment ions at m/z 153 and 109/108, which corresponded to protocatechuic acid (comparison with an analytical standard), so it was characterized as a derivative.

Compound 11, identified using positive ion mode, suffered the neutral loss of 162 Da, yielding cyanidin at m/z 287, so it was characterized as cyanidin 3-glucoside [24].

Several myricetin derivatives were characterized in the analyzed extracts. In all of them, myricetin was observed at m/z 317 (main fragment ions at m/z 179 and 151). The following neutral losses were observed in compounds 17, 19, 20, 23, 35, and 36: 152 Da (galloyl moiety), 146 Da (deoxyhexoside), 162 Da (hexoside), and 308 Da (rutinoside).
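The neutral-loss assignments described here reduce to a lookup from precursor-to-fragment mass differences to moieties. A minimal sketch (our own illustration, not the authors' software; only the loss values and example m/z pairs are taken from the text and Table 1):

```python
# Nominal neutral losses (Da) reported in the text and the moieties they indicate.
NEUTRAL_LOSSES = {
    152: "galloyl",
    146: "deoxyhexoside",
    162: "hexoside",
    308: "rutinoside",
}

def assign_moiety(precursor_mz: int, fragment_mz: int) -> str:
    """Map a precursor-to-fragment mass difference to a moiety, if known."""
    return NEUTRAL_LOSSES.get(precursor_mz - fragment_mz, "unknown")

# Examples consistent with Table 1 (myricetin aglycone at m/z 317):
print(assign_moiety(479, 317))  # hexoside      (compound 20, myricetin-O-hexoside)
print(assign_moiety(463, 317))  # deoxyhexoside (compound 23)
print(assign_moiety(625, 317))  # rutinoside    (compound 19)
print(assign_moiety(631, 479))  # galloyl       (compound 17, galloyl-hexoside step)
```

In practice an m/z tolerance window would be applied rather than exact integer matching, but the lookup logic is the same.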
We could not elucidate the exact structure of compounds 26 and 30, so they were characterized as myricetin derivatives.

Following the same neutral losses as myricetin, several quercetin (21, 22, 24, 31, 38, and 39) and kaempferol (28, 29, and 37) derivatives were characterized. Quercetin and kaempferol aglycones were detected at m/z 301 and 285, respectively.

Compound 27 suffered the neutral loss of 80 Da (sulfate) to yield the lignan syringaresinol at m/z 417, which was identified by the fragment ions at m/z 181, 166, and 151 [25]. Compound 34 was also characterized as a sulfate adduct of a lignan, pinoresinol [25].

Compound 32 was characterized as delphinidin due to the 303⟶257 fragmentation using positive ion mode. With an additional hexoside moiety, compound 25 was characterized as delphinidin-O-hexoside [24].

Finally, compound 33, with deprotonated molecular ion at m/z 477, suffered the neutral loss of 146 Da (deoxyhexoside) to yield mearnsetin at m/z 331 (main fragment ion at m/z 316) [26].

## 3.2. Quantitation of Phenolic Compounds

We quantified 16 compounds in the methanolic and aqueous extracts of L. quesadense and L. delicatulum. The results are summarised in Table 2. Total individual phenolic content (TIPC) was defined as the sum of all the individual compounds that were quantified by HPLC-DAD (phenolic acids at 320 nm and flavonoids at 350 nm).

Table 2 Contents (mg g−1 DE) of the main phenolic compounds present in L. delicatulum and L. quesadense extracts.

| No. | Assigned identification | L. delicatulum MeOH | L. delicatulum H2O | L. quesadense MeOH | L. quesadense H2O |
|---|---|---|---|---|---|
| | *Phenolic acids* | | | | |
| 3 | Galloyl hexoside | 0.67 ± 0.01 | 1.91 ± 0.1 | — | — |
| 7 | Gallic acid derivative | 0.84 ± 0.02 | — | — | — |
| | Total | 1.51 ± 0.02 | 1.9 ± 0.1 | — | — |
| | *Flavonoids* | | | | |
| 9 | Prodelphinidin dimer | 0.58 ± 0.01c | — | 5.10 ± 0.01b | 6.3 ± 0.3a |
| 15 | Gallo(epi)catechin-O-gallate dimer | — | — | 10.0 ± 0.7 | — |
| 16 | Gallo(epi)catechin-O-gallate | 1.7 ± 0.2c | — | 26 ± 1a | 15.0 ± 0.6b |
| 17 | Myricetin-galloyl-hexoside | 1.2 ± 0.1a | — | 0.89 ± 0.07b | 0.69 ± 0.05c |
| 18 | Gallo(epi)catechin-O-gallate | — | — | — | 2.68 ± 0.05 |
| 19 + 20 | Myricetin glycosides | 4.40 ± 0.01a | 0.30 ± 0.02d | 2.22 ± 0.06b | 1.41 ± 0.08c |
| 23 | Myricetin-O-deoxyhexoside | 5.1 ± 0.1b | 0.64 ± 0.05d | 7.5 ± 0.4a | 4.1 ± 0.1c |
| 24 | Quercetin-O-hexoside | 0.54 ± 0.05a | 0.21 ± 0.02b | — | 0.21 ± 0.02b |
| 30 | Myricetin derivative | — | — | 0.50 ± 0.07 | 0.39 ± 0.02 |
| 31 | Quercetin-O-deoxyhexoside | 0.93 ± 0.04b | — | 1.37 ± 0.06a | 0.75 ± 0.04c |
| 35 | Myricetin-O-galloyl-deoxyhexoside | 0.56 ± 0.03a | — | 0.48 ± 0.01b | 0.24 ± 0.01c |
| 36 | Myricetin-O-galloyl-deoxyhexoside | 0.28 ± 0.01 | — | — | 0.25 ± 0.01 |
| 39 | Quercetin-O-galloyl-deoxyhexoside | 0.18 ± 0.01 | — | — | — |
| | Total | 15.5 ± 0.3c | 1.15 ± 0.05d | 54 ± 1a | 32.0 ± 0.7b |
| | TIPC | 17.0 ± 0.3c | 3.1 ± 0.1d | 54 ± 1a | 32.0 ± 0.7b |

Values are means ± SD of three parallel measurements. Means in the same line not sharing the same letter are significantly different (p<0.05).

L. quesadense presented higher TIPC (54 and 32 mg/g DE for MeOH and H2O extracts, respectively) than L. delicatulum (17 and 3.1 mg/g DE for MeOH and H2O extracts, respectively). In both cases, methanol extracts presented the highest concentration of phenolics, owing to the higher solubility of flavonoids in MeOH compared to water. The most abundant compounds in the L. delicatulum MeOH extract were myricetin glycosides (compounds 19, 20, and 23), which have been previously reported as bioactive compounds in L. densiflorum [10]. On the other hand, the most abundant compounds in L. quesadense extracts were 15 and 16 (the gallo(epi)catechin-O-gallate dimer and monomer, respectively), followed by myricetin glycosides.
Epigallocatechin gallate has been previously reported as potentially the main compound responsible for the bioactive properties of L. brasiliense and L. algarvense [11, 14].

## 3.3. Antioxidant Properties

The majority of plant-based aromatic natural products are phenols, which comprise flavonoids, tannins, flavonols, flavanols, and anthocyanins, among others. In the present study, the prepared extracts were screened for the presence of phenolics, flavonols, and flavonoids. All extracts showed a good level of phenolics, followed by flavonols and flavonoids (Table 3), with higher levels observed in L. quesadense than in L. delicatulum. In addition, MeOH extracts presented higher levels of phenolics than aqueous extracts. These results are in agreement with the TIPC quantified by chromatography. The methanolic extract of L. quesadense possessed the highest total phenolic content (172 ± 4 mg GAE/g DE) and flavonol content (74 ± 3 mg RE/g DE). However, flavonoids were most abundant in the MeOH extract of L. delicatulum (42.1 ± 0.8 mg RE/g DE).

Table 3 Total bioactive components, total antioxidant capacity (by phosphomolybdenum assay), and radical scavenging abilities of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | Total phenolic content (mg GAE/g DE) | Total flavonol content (mg RE/g DE) | Total flavonoid content (mg RE/g DE) | Phosphomolybdenum (mmol TE/g DE) | ABTS (mg TE/g DE) | DPPH (mg TE/g DE) |
|---|---|---|---|---|---|---|---|
| L. delicatulum | MeOH | 151 ± 1b | 55 ± 2b | 42.1 ± 0.8a | 4.5 ± 0.6a | 360 ± 10b | 470 ± 10b |
| L. delicatulum | H2O | 31.1 ± 0.4c | 1.08 ± 0.03d | 5.80 ± 0.09d | 0.67 ± 0.04b | 53 ± 8d | 56 ± 1d |
| L. quesadense | MeOH | 172 ± 4a | 74 ± 3a | 30.8 ± 0.5b | 5.1 ± 0.4a | 510 ± 30a | 620 ± 10a |
| L. quesadense | H2O | 152 ± 1b | 10.4 ± 0.2c | 12.98 ± 0.07c | 4.6 ± 0.7a | 248 ± 6c | 428 ± 8c |

Values expressed are means ± SD of three parallel measurements. GAE: gallic acid equivalent; RE: rutin equivalent; TE: trolox equivalent. Means in the same column not sharing the same letter are significantly different (p<0.05).

Moreover, a series of antioxidant assays were conducted on the extracts of both Limonium species, namely, total antioxidant capacity (phosphomolybdenum), radical scavenging (ABTS and DPPH), reducing power (CUPRAC and FRAP), and metal chelating. These results are presented in Tables 3 and 4.

Table 4 Reducing power and metal chelating abilities of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | CUPRAC (mg TE/g DE) | FRAP (mg TE/g DE) | Metal chelating activity (mg EDTAE/g DE) |
|---|---|---|---|---|
| L. delicatulum | MeOH | 853 ± 5b | 470 ± 10b | 26.74 ± 0.01b |
| L. delicatulum | H2O | 94 ± 2d | 62.5 ± 0.6d | 28.43 ± 0.01a |
| L. quesadense | MeOH | 940 ± 10a | 520 ± 10a | 19.43 ± 0.01d |
| L. quesadense | H2O | 640 ± 10c | 431 ± 5c | 22.30 ± 0.01c |

Values expressed are means ± SD of three parallel measurements. TE: trolox equivalent; EDTAE: EDTA equivalent. Means in the same column not sharing the same letter are significantly different (p<0.05).

In terms of total antioxidant capacity, the most potent extract was the methanolic extract of L. quesadense (5.1 ± 0.4 mmol TE/g DE). However, it is essential to point out that there is no statistical difference between this extract, the aqueous extract of L. quesadense, and the methanolic extract of L. delicatulum (all sharing the letter "a" in Table 3). Interestingly, the methanolic extract of L. quesadense displayed the highest activity in the reducing power and radical scavenging assays: ABTS (510 ± 30 mg TE/g DE), DPPH (620 ± 10 mg TE/g DE), CUPRAC (940 ± 10 mg TE/g DE), and FRAP (520 ± 10 mg TE/g DE). The most abundant flavonoid identified in this extract was gallo(epi)catechin-O-gallate (26 ± 1 mg g−1 DE). Thus, it can be extrapolated that this compound, along with its dimer, might be mainly responsible for the observed antioxidant properties.

In contrast to the aforementioned antioxidant assays, the metal chelating assay identified the aqueous extract of L. delicatulum as the most effective metal chelator, with a significant activity of 28.43 ± 0.01 mg EDTAE/g DE. Considering the quantitation of phenolic compounds in all extracts (Table 2), there is a good correlation between the antioxidant results and the quantified polyphenols: the methanolic extract of L. quesadense, which HPLC quantitation showed to be the richest in flavonoids (54 ± 1 mg g−1 DE), also presented the strongest antioxidant properties. In this sense, the observed antioxidant properties could be attributed mainly to the presence of compounds containing galloyl moieties, such as the gallo(epi)catechin-O-gallate dimer. Our findings are also supported by several researchers [27, 28].

## 3.4. Enzyme Inhibitory Effects

Enzymes are major targets for controlling constantly emerging global health issues [29]. As an example, tyrosinase is an important enzyme involved in melanogenesis, the process during which the pigment melanin is produced [30]. However, kojic acid, the tyrosinase inhibitor used extensively by the pharmaceutical and cosmetic industries, presents various side effects [31]. Furthermore, the drug orlistat, the only clinically approved pharmacologic agent against pancreatic lipase, is also associated with considerable side effects [32]. Thus, there is a dire need to search for new and safer enzyme inhibitors for future pharmaceutical development. Accordingly, the present study is in line with this trend and screened the prepared extracts from the two Limonium species against α-amylase, glucosidase, acetylcholinesterase (AChE), butyrylcholinesterase (BChE), tyrosinase, and lipase.

The tyrosinase inhibition shown by the methanolic extract of L. delicatulum (155.87 ± 0.01 mg KAE/g DE) and the lipase inhibition shown by the methanolic extract of L. quesadense (65 ± 7 mg OE/g DE) seemed especially promising.
The methanolic and aqueous extracts of L. delicatulum and L. quesadense were screened for their inhibitory activities on both AChE and BChE (Table 5).

Table 5 Enzyme inhibitory properties of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | AChE inhibition (mg GALAE/g DE) | BChE inhibition (mg GALAE/g DE) | Tyrosinase (mg KAE/g DE) | Amylase (mmol ACAE/g DE) | Glucosidase (mmol ACAE/g DE) | Lipase (mg OE/g DE) |
|---|---|---|---|---|---|---|---|
| L. delicatulum | MeOH | 4.8 ± 0.7a | 3.5 ± 0.4a | 155.87 ± 0.01a | 0.95 ± 0.03b | 2.70 ± 0.01c | 27 ± 4b |
| L. delicatulum | H2O | 1.0 ± 0.2c | na | 18.87 ± 0.01d | 0.08 ± 0.00c | 2.74 ± 0.01a | na |
| L. quesadense | MeOH | 4.3 ± 0.2a | 2.63 ± 0.02b | 155.27 ± 0.01b | 1.00 ± 0.02b | 2.72 ± 0.01b | 65 ± 7a |
| L. quesadense | H2O | 1.7 ± 0.2b | 0.86 ± 0.01c | 135.34 ± 0.01c | 1.5 ± 0.3a | na | na |

Values expressed are means ± SD of three parallel measurements. GALAE: galantamine equivalent; KAE: kojic acid equivalent; ACAE: acarbose equivalent; OE: orlistat equivalent; na: not active. Means in the same column not sharing the same letter are significantly different (p<0.05).

We observed that the methanolic extract of L. delicatulum exhibited the highest AChE inhibition (4.8 ± 0.7 mg GALAE/g DE); nevertheless, there is no statistical difference between this extract and the methanolic extract of L. quesadense. Hence, these two extracts represent the most potent cholinesterase inhibitors. Furthermore, the aqueous extract of L. quesadense was the most active inhibitor of α-amylase (1.5 ± 0.3 mmol ACAE/g DE). In the glucosidase assay, the aqueous extract of L. delicatulum was the most potent (2.74 ± 0.01 mmol ACAE/g DE), followed by the methanolic extracts of L. quesadense (2.72 ± 0.01 mmol ACAE/g DE) and L. delicatulum (2.70 ± 0.01 mmol ACAE/g DE). Further data collected in this study showed that the methanolic extract of L. quesadense was the most effective lipase inhibitor.
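The per-assay comparisons above can be reproduced mechanically from the Table 5 means. A minimal sketch (our own helper, not from the paper; it ignores the statistical grouping letters, so extracts that tie within experimental error are not distinguished):

```python
# Subset of Table 5 means; None marks "na" (not active) extracts.
TABLE5 = {
    ("L. delicatulum", "MeOH"): {"AChE": 4.8, "glucosidase": 2.70, "lipase": 27},
    ("L. delicatulum", "H2O"):  {"AChE": 1.0, "glucosidase": 2.74, "lipase": None},
    ("L. quesadense", "MeOH"):  {"AChE": 4.3, "glucosidase": 2.72, "lipase": 65},
    ("L. quesadense", "H2O"):   {"AChE": 1.7, "glucosidase": None, "lipase": None},
}

def most_active(assay: str):
    """Return the (species, solvent) pair with the highest mean for an assay,
    skipping inactive ("na") extracts."""
    active = {k: v[assay] for k, v in TABLE5.items() if v[assay] is not None}
    return max(active, key=active.get)

print(most_active("glucosidase"))  # L. delicatulum H2O, as stated in the text
print(most_active("lipase"))       # L. quesadense MeOH
```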
A substantial number of reports have shown that several plant metabolites are prospective pancreatic lipase inhibitors. In particular, the galloyl moiety of flavan-3-ols is projected to be essential for lipase inhibition [33]. Indeed, the methanolic extract of L. quesadense contained the highest levels of gallo(epi)catechin-O-gallate and its dimer, as well as myricetin-O-hexoside (Table 2), which might be linked to its significant lipase activity. It is noteworthy that although the methanolic extract of L. quesadense possessed the highest content of bioactive components, it did not show the most significant activity in all enzymatic assays. These results indicate that there may not always be a correlation between polyphenol contents and enzyme inhibition assays.

## 3.5. Unsupervised Multivariate Data Analysis of Biological Activities of Limonium Extracts

The analysis of Limonium species extracts encompassing 12 biological activities justified the use of multivariate data analysis tools. Thus, with the help of unsupervised PCA and hierarchical clustering analysis, the biological activity data allowed for discrimination between the different extracts.

The first two principal components explained 73.1% and 19.5% of the total variance, respectively, so these two components alone account for over 90% (92.6%) of the information in the original data. As presented in Figure 3, the extracts were clearly classified into three clusters. Likewise, hierarchical cluster analysis (HCA), based on the Euclidean similarity measure with Ward's method as the linkage rule, confirmed the PCA results.

Figure 3 Relationship between total bioactive compounds and biological activities and multivariate analysis of Limonium species. (a) Pearson's correlation heatmap; (b) scree plot of explained variance by PCA components; (c) score plot of principal components 1 and 2; (d) hierarchical cluster dendrogram of extracts. The color box indicates the standardized biological activities of each extract. The red, yellow, and blue colors indicate high, middle, and low bioactivity, respectively. (a) (b) (c) (d)

The MeOH extracts of the two studied species clustered closely, suggesting that both extracts have similar properties across all evaluated biological activities. In contrast, the H2O extracts differed markedly, allowing better discrimination between the two species. Accordingly, L. delicatulum and L. quesadense had different properties in all biological activities except AChE, BChE, and lipase; the most significant differences between the two species were obtained with the metal chelating activity (MCA) and amylase assays. Therefore, it can be concluded that L. delicatulum was the most active in MCA, while L. quesadense showed better amylase inhibitory activity.

## 4. Conclusions

The phenolic composition and bioactive properties of leaves of L. delicatulum and L. quesadense have been examined. L. delicatulum was rich in myricetin glycosides, whereas some of the most abundant compounds in L. quesadense were gallo(epi)catechin-O-gallate and its dimer. The presence of these compounds has been previously reported in other Limonium species, and they have been suggested to be mainly responsible for the bioactivity of Limonium extracts. In general, methanolic extracts presented the highest amounts of phenolics along with the highest bioactive properties, although the most potent activities were observed in L. quesadense leaves. Both the antioxidant activity and the inhibitory properties against several key enzymes were evaluated. The overall results indicate that leaves of L. quesadense may represent an interesting source of bioactive compounds. As L. quesadense is a threatened plant that is not currently protected by law, its cultivation on gypsic soils could be tested, with the permission of the authorities, using seeds collected in the wild.
It could also provide an economic stimulus for the population of the semiarid areas of Jaén province.

---
# Phenolic Profile, Antioxidant Activity, and Enzyme Inhibitory Properties of Limonium delicatulum (Girard) Kuntze and Limonium quesadense Erben

**Authors:** A. Ruiz-Riaguas; G. Zengin; K.I. Sinan; C. Salazar-Mendías; E.J. Llorent-Martínez

**Journal:** Journal of Chemistry (2020)

**Category:** Chemistry and Chemical Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2020/1016208
---

## Abstract

In this work, we report the phytochemical composition and bioactive potential of methanolic and aqueous extracts of leaves from Limonium delicatulum (Girard) Kuntze and Limonium quesadense Erben. The characterization and quantitation of individual phytochemicals were performed by liquid chromatography with diode array and electrospray-tandem mass spectrometry detection. Myricetin glycosides were abundant in L. delicatulum, whereas L. quesadense was rich in gallo(epi)catechin-O-gallate. Total phenolics, flavonols, and flavonoids were assayed with conventional methods. Antioxidant and radical scavenging assays (phosphomolybdenum, DPPH, ABTS, CUPRAC, FRAP, and metal chelating activity), as well as enzyme inhibitory assays (acetylcholinesterase, butyrylcholinesterase, tyrosinase, amylase, glucosidase, and lipase), were performed to evaluate the potential bioactivity. The methanolic extracts of both species presented higher phenolic content and bioactivity than the aqueous extracts. Overall, L. quesadense extracts exhibited the most potent activity in most assays, representing a potential source of bioactive compounds for the pharmaceutical and food industries.

---

## Body

## 1. Introduction

Plants represent a rich source of many bioactive compounds, particularly polyphenols, which are well known for their high antioxidant activity and various health benefits. As a result, an increasing number of plant species are constantly used in folk medicine. In fact, several recent studies have focused on the enzyme inhibitory properties of plant extracts as an interesting approach to prevent different chronic diseases, such as inflammation, diabetes mellitus, Alzheimer's disease, and cancer. Here, we present an investigation concerning two species of the genus Limonium: L. delicatulum (Girard) Kuntze and L. quesadense Erben.

Limonium Miller is a genus that belongs to the Plumbaginaceae family, specifically to the Staticoideae subfamily.
There are two subgenera (Pteroclados and Limonium) with different sections, depending on the authors, and at least 10 subsections [1, 2]. The genus comprises between 350 and 470 species, mainly distributed in the western Mediterranean region [2–4], mostly perennial and, rarely, annual herbaceous plants. Limonium species usually grow in arid or semiarid areas, occupying small isolated spaces over gypsic or saline soils. There are numerous local endemics and, owing to their isolation, many of them are threatened and protected species. Some species are cultivated as ornamental plants, whereas others have important medicinal properties [5, 6]. Research on species of this genus has revealed important bioactivity, including free radical scavenging [7, 8], antioxidant [5, 9–11], anti-inflammatory [10], antibacterial [12], antimicrobial [9], and antiviral [13] properties. The main phenolics identified as responsible for these activities were gallic acid, epigallocatechin gallate, and myricetin and isorhamnetin flavonoids [7, 8, 10, 11, 14].

L. delicatulum is an Iberian-North African endemism [3], growing up to 100 cm. Leaves are green, usually ovate to elliptic or obovate (3.5–15 cm length × 2–5 cm width), with 4–10 lateral nerves. It blooms from February to October depending on the altitude, developing shoots of 20–90 cm with spikes of 5–25 mm and spikelets of 4–5 mm. It inhabits coastal and inland saline habitats between 0 and 800 m a.s.l. It is not considered a threatened species [15], and its chemical composition and bioactivity have been scarcely studied. Its antioxidant activity has been reported [16, 17], as well as its total phenolics, flavonoids, tannins, and antimicrobial activity [17]. However, its detailed phytochemical composition and potential enzyme inhibitory activities have not been reported so far.

L. quesadense is endemic to the province of Jaén (southeastern Iberian Peninsula, Spain), growing 35–60 cm tall.
Leaves are green-bluish to green-violetish, oblanceolate to spathulate (4–12 cm length × 1.5–3 cm width), with 4 (rarely 6) lateral nerves. It blooms from June to August, developing shoots of 20–50 cm with spikes of 7–20 mm and spikelets of 4.5–5 mm. It grows in continental halophytic vegetation and gypsophilous scrubs in the Guadiana Menor valley between 500 and 700 m a.s.l. It is regarded as a threatened plant under the category "endangered" (EN) [15, 18]. To date, there are no studies concerning the phytochemical composition and bioactivity of this species.

Taking into account the lack of information on the two target species, as well as the reported bioactivity of other Limonium species, this research aims at providing information on the phenolic composition of leaves of L. delicatulum and L. quesadense, examining their antioxidant activity (radical scavenging, reducing power, and metal chelating) and enzyme inhibitory properties (against acetylcholinesterase, butyrylcholinesterase, tyrosinase, amylase, glucosidase, and lipase).

## 2. Materials and Methods

### 2.1. Sample Preparation

Leaves of L. quesadense and L. delicatulum were collected at the Native Flora Garden of the University of Jaén (Jaén, Andalusia, Spain; 37°47′18.879″N 3°46′31.583″W, 427 m a.s.l.) in September 2018. Samples are stored at the Herbarium of the University of Jaén. Photographs of both species are shown in Figure 1.

Figure 1 Photographs of (a) L. delicatulum and (b) L. quesadense.

The taxonomical classification was confirmed by botanist Dr. Carlos Salazar-Mendías.
Samples were washed with Milli-Q water and extracted by two different procedures:

(i) Ultrasound-assisted extraction with MeOH: leaves were lyophilized (ModulyoD-23, Thermo Savant; Waltham, MA, USA) and powdered; 2.5 g of sample was extracted with 50 mL of MeOH in an ultrasonic liquid processor (Qsonica Sonicator; Newtown, CT, USA; power of 55 W and frequency of 20 kHz) at 50% power for 10 min.

(ii) Decoction: 2.5 g of sample (fresh and powdered) was extracted with 150 mL of boiling Milli-Q water for 30 minutes.

Both extracts were filtered through Whatman No. 1 filters, and the solvent was evaporated under reduced pressure in a Hei-Vap Precision Rotary Evaporator (Heidolph; Schwabach, Germany) at 40°C. Dried extracts (DE) were stored at −20°C until analysis.

### 2.2. HPLC Analysis

Dried extracts (5–10 mg) were redissolved in 1 mL of MeOH and filtered through 0.45 μm nylon filters, and 10 μL of the sample was injected. The HPLC system was an Agilent Series 1100 with a G1315B diode array detector. The separation of the compounds was performed with a reversed-phase Luna Omega Polar C18 column (150 × 3.0 mm, 5 µm particle size; Phenomenex) with a 4 × 3.0 mm Polar C18 Security Guard cartridge (Phenomenex). The HPLC system was connected to an ion trap mass spectrometer (Esquire 6000, Bruker Daltonics) with an electrospray interface. Chromatographic conditions have been previously detailed [19]. All standards required for phenolic quantitation were purchased from Sigma-Aldrich (Madrid, Spain); individual stock solutions were prepared in methanol (MeOH, LC-MS grade, >99.9%; Sigma). LC-MS grade acetonitrile (CH3CN, 99%; LabScan; Dublin, Ireland) and ultrapure water (Milli-Q Water Purification System; Millipore; Milford, MA, USA) were also used.

### 2.3. Total Phenolic Content (TPC) and Total Flavonoid Content (TFC)

TPC and TFC were determined using the Folin–Ciocalteu and AlCl3 assays, respectively [20].
Results were expressed as gallic acid equivalents (mg GAE/g extract) and rutin equivalents (mg RE/g extract) for the respective assays.

### 2.4. Determination of Antioxidant and Enzyme Inhibitory Effects

The metal chelating, phosphomolybdenum, FRAP, CUPRAC, ABTS, and DPPH activities of the extracts were assessed following the methods described by Uysal et al. [20]. The antioxidant activities were reported as trolox equivalents, whereas EDTA was used as the standard for the metal chelating assay. The possible inhibitory effects of the extracts against cholinesterases (AChE (E.C. 3.1.1.7) from Electrophorus electricus (electric eel) and BChE (E.C. 3.1.1.8) from equine serum, by Ellman's method), tyrosinase (from mushroom, E.C. 1.14.18.1), α-amylase (from porcine pancreas, E.C. 3.2.1.1), α-glucosidase (E.C. 3.2.1.20), and lipase (from porcine pancreas, E.C. 3.1.1.3) were evaluated using standard in vitro bioassays [21].

### 2.5. Data Analysis

Bioactive compounds and biological activity data were subjected to univariate and multivariate statistical analysis. First, one-way ANOVA followed by Tukey's multiple range post hoc test was performed in Xlstat 2018 to identify significant differences (p < 0.05) between the studied samples. The data were then subjected to unsupervised multivariate analyses (PCA and HCA) in R software v. 3.5.1 to discriminate between the extracts and classify them according to their biological activities. Finally, the relationships between biological activities and phenolic classes were assessed through Pearson's correlation coefficients.

## 3. Results and Discussion

### 3.1. Phytochemical Characterization

The characterization of phytochemicals was performed by HPLC-ESI-MSn using negative and positive ion modes.
Base peak chromatograms are shown in Figure 2, whereas the characterization of compounds is detailed in Table 1.

Figure 2 HPLC-ESI-MSn base peak chromatograms of the methanolic extracts of (a) L. delicatulum and (b) L. quesadense.

Table 1 Characterization of phenolics in L. delicatulum and L. quesadense extracts. No. tR(min) [M-H]− m/z m/z (% base peak) Assigned identification L. delicatulum L. quesadense 1 1.7 377 MS2 [377]: 341 (100)MS3 [377⟶341]: 179 (100), 131 (14), 113 (18)MS4 [377⟶341⟶179]: 161 (42), 143 (93), 119 (100) Disaccharide (HCl adduct) ✓ ✓ 2 2.0 191 MS2 [191]: 173 (25), 127 (19), 111 (100) Citric acid ✓b 3 3.1 411 MS2 [411]: 331 (7), 241 (100), 169 (14), 125 (6) Galloyl hexoside (sulfate adduct) ✓ 4 3.3 169 MS2 [169]: 125 (100) Gallic acid ✓b ✓b 5 3.6 439 MS2 [439]: 241 (100)MS3 [439⟶241]: 223 (96), 139 (100), 165 (16) Unknown ✓ 6 4.3 379 MS2 [379]: 379 (100), 241 (20) Unknown ✓ 7 4.5 325 MS2 [325]: 169 (100), 125 (9)MS3 [325⟶169]: 125 (100) Gallic acid derivative ✓ ✓ 8 5.6 365 MS2 [365]: 321 (68), 153 (100)MS3 [365⟶153]: 109 (100), 108 (61) Protocatechuic acid derivative ✓ 9 8.0 761 MS2 [761]: 609 (89), 593 (96), 575 (74), 423 (100)MS3 [761⟶423]: 297 (50), 283 (100), 243 (28) Prodelphinidin dimer B-type gallate (2 units (epi)GC) ✓a ✓ 10 9.1 443 MS2 [443]: 275 (24), 245 (100), 167 (27) Unknown ✓ 11 9.5 449 (+) MS2 [449]:431 (8), 288(18), 287(100) Cyanidin 3-glucoside ✓a 12 10.4 759 MS2 [759]: 481 (100), 423 (96), 301 (87)MS3 [759⟶423]: 297 (84), 283 (50), 243 (100) Unknown ✓ 13 11.0 363 MS2 [363]: 363 (100), 241 (10) Unknown ✓ 14 11.0 457 MS2 [457]: 331 (20), 305 (17), 193 (85), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓a 15 12.3 913 MS2 [913]: 761 (100), 423 (60)MS3 [913⟶761]: 423 (100), 609 (90), 305 (49)MS4 [913⟶761⟶423]: 297 (100), 253 (49), 405 (33) Gallo(epi)catechin-O-gallate dimer ✓ 16 13.3 457 MS2 [457]: 331 (22), 305 (13), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓ ✓ 17 15.0 631
MS2 [631]: 479 (80), 317 (100)MS3 [631⟶479]: 317 (100), 316 (91), 179 (9)MS4 [631⟶479⟶317]: 271 (100), 179 (54), 151 (18) Myricetin-galloyl-hexoside ✓ ✓ 18 15.4 457 MS2 [457]: 331 (16), 305 (15), 169 (100)MS3 [457⟶169]: 125 (100) Gallo(epi)catechin-O-gallate isomer ✓b 19 16.2 625 MS2 [625]: 317 (100), 316 (89)MS3 [625⟶317]: 271 (100), 179 (48), 151 (26) Myricetin-O-rutinoside ✓ ✓ 20 16.6 479 MS2 [479]: 317 (79), 316 (100), 179 (5)MS3 [479⟶317]: 271 (100), 179 (67), 151 (16) Myricetin-O-hexoside ✓ ✓ 21 18.5 615 MS2 [615]: 463 (100), 301 (33)MS3 [615⟶463]: 301 (100)MS4 [615⟶463⟶301]: 179 (75), 151 (100) Quercetin-galloyl-hexoside ✓ ✓b 22 19.2 609 MS2 [609]: 301 (100)MS3 [609⟶301]: 271 (98), 179 (76), 151 (100) Rutin ✓ 23 19.9 463 MS2 [463]: 317 (77), 316 (100), 179 (10)MS3 [463⟶316]: 271 (100), 179 (60), 151 (18) Myricetin-O-deoxyhexoside ✓ ✓ 24 20.6 463 MS2 [463]: 301 (100), 151 (5)MS3 [463⟶301]: 255 (21), 179 (70), 151 (100) Quercetin-O-hexoside ✓ ✓ 25 20.6 465 (+) MS2 [465]: 303 (100)MS3 [465⟶303]: 257 (100), 137 (39) Delphinidin-O-hexoside ✓ 26 21.0 659 MS2 [659]: 317 (85), 316 (100)MS3 [659⟶316]: 271 (100), 179 (74), 151 (47) Myricetin derivative ✓a 27 22.2 497 MS2 [497]: 417 (100)MS3 [497⟶417]: 371 (14), 181 (100), 166 (50), 151 (24)MS4 [497⟶417⟶181]: 166 (100) Syringaresinol (sulfate adduct) ✓ 28 23.0 593 MS2 [593]: 286 (20), 285 (100)MS3 [595⟶285]: 257 (100), 239 (76), 213 (75), 151 (45) Kaempferol-O-rutinoside ✓ 29 23.5 599 MS2 [599]: 313 (100), 285 (94)MS3 [599⟶313]: 169 (100) Kaempferol-O-galloyl-hexoside ✓ 30 24.1 549 MS2 [549]: 505 (100)MS3 [549⟶505]: 317 (44), 316 (100)MS4 [549⟶505⟶316]: 271 (100), 179 (57), 151 (16) Myricetin derivative ✓a ✓ 31 24.6 447 MS2 [447]: 301 (100), 179 (5)MS3 [447⟶301]: 179 (91), 151 (100) Quercetin-O-deoxyhexoside ✓ ✓ 32 24.7 303 (+) MS2 [303]: 257 (100) Delphinidin ✓a 33 24.9 477 MS2 [477]: 331 (100)MS3 [477⟶331]: 316 (100), 315 (57) Mearnsetin-O-deoxyhexoside ✓a 34 27.2 437 MS2 [437]: 357 (100), 151 (39)MS3 [437⟶357]: 342 
(48), 151 (100), 136 (40) Pinoresinol (sulfate adduct) ✓ 35 28.0 615 MS2 [615]: 317 (100)MS3 [615⟶317]: 179 (100), 151 (40) Myricetin-O-galloyl-deoxyhexoside ✓a ✓ 36 28.6 615 MS2 [615]: 317 (100)MS3 [615⟶317]: 179 (100), 151 (54) Myricetin-O-galloyl-deoxyhexoside ✓a ✓ 37 28.6 431 MS2 [431]: 286 (17), 285 (100), 284 (25), 255 (10)MS3 [431⟶285]: 257 (82), 255 (100), 197 (39) Kaempferol-O-deoxyhexoside ✓ 38 29.5 533 MS2 [533]: 489 (100)MS3 [533⟶489]: 447 (19), 301 (100)MS4 [533⟶489⟶301]: 271 (100), 179 (22), 151 (42) Quercetin derivative ✓a ✓ 39 33.4 599 MS2 [599]: 301 (100)MS3 [599⟶301]: 179 (81), 151 (100)MS4 [599⟶301⟶179]: 151 (100) Quercetin-O-galloyl-deoxyhexoside ✓ ✓ aOnly in MeOH extract; b only in H2O extract.

Compound 1 was identified as the HCl adduct of a disaccharide (dihexoside) due to its fragmentation pattern [22]. Compound 2, with a deprotonated molecular ion at m/z 191 and base peak at m/z 111, was identified as citric acid.

Compound 4, with [M-H]− at m/z 169 and base peak at m/z 125, was identified as gallic acid by comparison with an analytical standard. Compound 3, after the loss of 80 Da (sulfate moiety), displayed fragment ions at m/z 331, 169, and 125, typical of galloyl hexoside. Compound 7 also presented gallic acid in its fragmentation pattern and was tentatively characterized as a derivative. Compound 9 was tentatively characterized as a prodelphinidin dimer B-type gallate (2 units of gallo(epi)catechin) based on bibliographic information [23].

Compounds 14, 16, and 18 presented [M-H]− at m/z 457 and fragment ions at m/z 331, 305, 169, and 125, consistent with gallo(epi)catechin-O-gallate isomers [23].
Compound 15 was characterized as a dimer.

Compound 8 exhibited fragment ions at m/z 153 and 109/108, which correspond to protocatechuic acid (comparison with an analytical standard), so it was characterized as a derivative.

Compound 11, identified using positive ion mode, suffered the neutral loss of 162 Da, yielding cyanidin at m/z 287, so it was characterized as cyanidin 3-glucoside [24].

Several myricetin derivatives were characterized in the analyzed extracts. In all of them, myricetin was observed at m/z 317 (main fragment ions at m/z 179 and 151). The following neutral losses were observed in compounds 17, 19, 20, 23, 35, and 36: 152 Da (galloyl moiety), 146 Da (deoxyhexoside), 162 Da (hexoside), and 308 Da (rutinoside). We could not elucidate the exact structure of compounds 26 and 30, so they were characterized as myricetin derivatives.

Applying the same neutral losses as for myricetin, several quercetin (21, 22, 24, 31, 38, and 39) and kaempferol (28, 29, and 37) derivatives were characterized. The quercetin and kaempferol aglycones were detected at m/z 301 and 285, respectively.

Compound 27 suffered the neutral loss of 80 Da (sulfate) to yield the lignan syringaresinol at m/z 417, which was identified by the fragment ions at m/z 181, 166, and 151 [25]. Compound 34 was also characterized as a sulfate adduct of a lignan, pinoresinol [25].

Compound 32 was characterized as delphinidin due to the 303⟶257 fragmentation in positive ion mode. With an additional hexoside moiety, compound 25 was characterized as delphinidin-O-hexoside [24].

Finally, compound 33, with a deprotonated molecular ion at m/z 477, suffered the neutral loss of 146 Da (deoxyhexoside) to yield mearnsetin at m/z 331 (main fragment ion at m/z 316) [26].

### 3.2. Quantitation of Phenolic Compounds

We quantified 16 compounds in the methanolic and aqueous extracts of L. quesadense and L. delicatulum. The results are summarised in Table 2.
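The glycoside assignments described above follow a simple neutral-loss rule: the mass difference between the deprotonated molecule and the aglycone fragment identifies the attached moiety (152 Da galloyl, 146 Da deoxyhexoside, 162 Da hexoside, 308 Da rutinoside). A minimal sketch of this lookup, with hypothetical helper names (this is an illustration, not the authors' software):

```python
# Neutral losses (Da) cited in the text for flavonoid conjugates.
NEUTRAL_LOSSES = {
    152: "galloyl",
    146: "deoxyhexoside",
    162: "hexoside",
    308: "rutinoside",
}

# Aglycone fragments reported in negative ion mode.
AGLYCONES = {317: "myricetin", 301: "quercetin", 285: "kaempferol"}

def annotate(precursor_mz: int, aglycone_mz: int) -> str:
    """Tentative annotation from one [M-H]- -> aglycone transition."""
    aglycone = AGLYCONES.get(aglycone_mz, "unknown aglycone")
    moiety = NEUTRAL_LOSSES.get(precursor_mz - aglycone_mz, "unknown moiety")
    return f"{aglycone}-O-{moiety}"

# Compound 23: [M-H]- 463 -> 317 (loss of 146 Da)
print(annotate(463, 317))  # myricetin-O-deoxyhexoside
# Compound 19: [M-H]- 625 -> 317 (loss of 308 Da)
print(annotate(625, 317))  # myricetin-O-rutinoside
```

The same two tables reproduce the quercetin and kaempferol assignments (e.g., 447 → 301 gives quercetin-O-deoxyhexoside, compound 31).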
Total individual phenolic content (TIPC) was defined as the sum of all the individual compounds quantified by HPLC-DAD (phenolic acids at 320 nm and flavonoids at 350 nm).

Table 2 Contents (mg g−1 DE) of the main phenolic compounds present in L. delicatulum and L. quesadense extracts.

| No. | Assigned identification | L. delicatulum MeOH | L. delicatulum H2O | L. quesadense MeOH | L. quesadense H2O |
|---|---|---|---|---|---|
| | **Phenolic acids** | | | | |
| 3 | Galloyl hexoside | 0.67 ± 0.01 | 1.91 ± 0.1 | — | — |
| 7 | Gallic acid derivative | 0.84 ± 0.02 | — | — | — |
| | Total | 1.51 ± 0.02 | 1.9 ± 0.1 | | |
| | **Flavonoids** | | | | |
| 9 | Prodelphinidin dimer | 0.58 ± 0.01c | — | 5.10 ± 0.01b | 6.3 ± 0.3a |
| 15 | Gallo(epi)catechin-O-gallate dimer | — | — | 10.0 ± 0.7 | — |
| 16 | Gallo(epi)catechin-O-gallate | 1.7 ± 0.2c | — | 26 ± 1a | 15.0 ± 0.6b |
| 17 | Myricetin-galloyl-hexoside | 1.2 ± 0.1a | — | 0.89 ± 0.07b | 0.69 ± 0.05c |
| 18 | Gallo(epi)catechin-O-gallate | — | — | — | 2.68 ± 0.05 |
| 19 + 20 | Myricetin glycosides | 4.40 ± 0.01a | 0.30 ± 0.02d | 2.22 ± 0.06b | 1.41 ± 0.08c |
| 23 | Myricetin-O-deoxyhexoside | 5.1 ± 0.1b | 0.64 ± 0.05d | 7.5 ± 0.4a | 4.1 ± 0.1c |
| 24 | Quercetin-O-hexoside | 0.54 ± 0.05a | 0.21 ± 0.02b | — | 0.21 ± 0.02b |
| 30 | Myricetin derivative | — | — | 0.50 ± 0.07 | 0.39 ± 0.02 |
| 31 | Quercetin-O-deoxyhexoside | 0.93 ± 0.04b | — | 1.37 ± 0.06a | 0.75 ± 0.04c |
| 35 | Myricetin-O-galloyl-deoxyhexoside | 0.56 ± 0.03a | — | 0.48 ± 0.01b | 0.24 ± 0.01c |
| 36 | Myricetin-O-galloyl-deoxyhexoside | 0.28 ± 0.01 | — | — | 0.25 ± 0.01 |
| 39 | Quercetin-O-galloyl-deoxyhexoside | 0.18 ± 0.01 | — | — | — |
| | Total | 15.5 ± 0.3c | 1.15 ± 0.05d | 54 ± 1a | 32.0 ± 0.7b |
| | **TIPC** | 17.0 ± 0.3c | 3.1 ± 0.1d | 54 ± 1a | 32.0 ± 0.7b |

Values are means ± SD of three parallel measurements. Means in the same row not sharing the same letter are significantly different (p < 0.05).

L. quesadense presented a higher TIPC (54 and 32 mg/g DE for the MeOH and H2O extracts, respectively) than L. delicatulum (17 and 3.1 mg/g DE for the MeOH and H2O extracts, respectively). In both cases, the methanolic extracts presented the highest concentration of phenolics, owing to the higher solubility of flavonoids in MeOH than in water. The most abundant compounds in L.
delicatulum MeOH extract were myricetin glycosides (compounds 19, 20, and 23), which have been previously reported as bioactive compounds in L. densiflorum [10]. On the other hand, the most abundant compounds in the L. quesadense extracts were 15 and 16 (a gallo(epi)catechin-O-gallate dimer and monomer), followed by myricetin glycosides. Epigallocatechin gallate has been previously reported as the compound potentially mainly responsible for the bioactive properties of L. brasiliense and L. algarvense [11, 14].

### 3.3. Antioxidant Properties

The majority of plant-based aromatic natural products are phenols, which comprise flavonoids, tannins, flavonols, flavanols, and anthocyanins, among others. In the present study, the prepared extracts were screened for the presence of phenolics, flavonols, and flavonoids. All extracts showed a good level of phenolics, followed by flavonols and flavonoids (Table 3), with higher levels in L. quesadense than in L. delicatulum. In addition, the MeOH extracts presented higher levels of phenolics than the aqueous extracts. These results are in agreement with the TIPC quantified by chromatography. The methanolic extract of L. quesadense possessed the highest TPC (172 ± 4 mg GAE/g DE) and flavonol content (74 ± 3 mg RE/g DE). However, flavonoids were most abundant in the MeOH extract of L. delicatulum (42.1 ± 0.8 mg RE/g DE).

Table 3 Total bioactive components, total antioxidant capacity (by phosphomolybdenum assay), and radical scavenging abilities of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | Total phenolic content (mg GAE/g DE) | Total flavonol content (mg RE/g DE) | Total flavonoid content (mg RE/g DE) | Phosphomolybdenum (mmol TE/g DE) | ABTS (mg TE/g DE) | DPPH (mg TE/g DE) |
|---|---|---|---|---|---|---|---|
| L. delicatulum | MeOH | 151 ± 1b | 55 ± 2b | 42.1 ± 0.8a | 4.5 ± 0.6a | 360 ± 10b | 470 ± 10b |
| L. delicatulum | H2O | 31.1 ± 0.4c | 1.08 ± 0.03d | 5.80 ± 0.09d | 0.67 ± 0.04b | 53 ± 8d | 56 ± 1d |
| L. quesadense | MeOH | 172 ± 4a | 74 ± 3a | 30.8 ± 0.5b | 5.1 ± 0.4a | 510 ± 30a | 620 ± 10a |
| L. quesadense | H2O | 152 ± 1b | 10.4 ± 0.2c | 12.98 ± 0.07c | 4.6 ± 0.7a | 248 ± 6c | 428 ± 8c |

Values expressed are means ± SD of three parallel measurements. GAE: gallic acid equivalent; RE: rutin equivalent; TE: trolox equivalent. Means in the same column not sharing the same letter are significantly different (p < 0.05).

Moreover, a series of antioxidant assays were conducted on the extracts of both Limonium species, namely, total antioxidant capacity (phosphomolybdenum), radical scavenging (ABTS and DPPH), reducing power (CUPRAC and FRAP), and metal chelating. These results are presented in Tables 3 and 4.

Table 4 Reducing power and metal chelating abilities of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | CUPRAC (mg TE/g DE) | FRAP (mg TE/g DE) | Metal chelating activity (mg EDTAE/g DE) |
|---|---|---|---|---|
| L. delicatulum | MeOH | 853 ± 5b | 470 ± 10b | 26.74 ± 0.01b |
| L. delicatulum | H2O | 94 ± 2d | 62.5 ± 0.6d | 28.43 ± 0.01a |
| L. quesadense | MeOH | 940 ± 10a | 520 ± 10a | 19.43 ± 0.01d |
| L. quesadense | H2O | 640 ± 10c | 431 ± 5c | 22.30 ± 0.01c |

Values expressed are means ± SD of three parallel measurements. TE: trolox equivalent; EDTAE: EDTA equivalent. Means in the same column not sharing the same letter are significantly different (p < 0.05).

In terms of total antioxidant capacity, the most potent extract was the methanolic extract of L. quesadense (5.1 ± 0.4 mmol TE/g DE), although it is essential to point out that, according to the grouping letters in Table 3, there is no statistical difference between this extract, the methanolic extract of L. delicatulum, and the aqueous extract of L. quesadense. Interestingly, the methanolic extract of L. quesadense displayed the highest activity in the reducing power and radical scavenging assays, with significant activity in the ABTS (510 ± 30 mg TE/g DE), DPPH (620 ± 10 mg TE/g DE), CUPRAC (940 ± 10 mg TE/g DE), and FRAP (520 ± 10 mg TE/g DE) assays. The most abundant flavonoid identified in the methanolic extract of L. quesadense was gallo(epi)catechin-O-gallate (26 ± 1 mg·g−1 DE).
Thus, it can be extrapolated that this compound, along with its dimer, might be mainly responsible for the observed antioxidant properties.

In contrast to the aforementioned antioxidant assays, the metal chelating assay identified the aqueous extract of L. delicatulum as the most effective metal chelator, with a significant activity of 28.43 ± 0.01 mg EDTAE/g DE. Considering the quantitation of phenolic compounds in all extracts (Table 2), there is a good correlation between the antioxidant results and the quantitation of polyphenols. Indeed, HPLC quantitation (TIPC) showed the methanolic extract of L. quesadense to be the richest in flavonoids (54 ± 1 mg·g−1 DE), and, as discussed above, this extract presented the strongest antioxidant properties. In this sense, the observed antioxidant properties could be attributed mainly to the presence of compounds containing galloyl moieties, such as the gallo(epi)catechin-O-gallate dimer. Our findings are also supported by previous reports [27, 28].

### 3.4. Enzyme Inhibitory Effects

Enzymes are key targets for controlling constantly emerging global health issues [29]. For example, tyrosinase is an important enzyme involved in melanogenesis, the process during which the pigment melanin is produced [30]. However, kojic acid, the tyrosinase inhibitor widely used by the pharmaceutical and cosmetic industries, presents various side effects [31]. Furthermore, orlistat, the only clinically approved pharmacologic agent against pancreatic lipase, is associated with considerable side effects [32]. Thus, there is a dire need to search for new and safer enzyme inhibitors for future pharmaceutical development.
Accordingly, the present study is in line with this trend and screened the prepared extracts from the two Limonium species against α-amylase, glucosidase, acetylcholinesterase (AChE), butyrylcholinesterase (BChE), tyrosinase, and lipase.

The tyrosinase inhibitors in the methanolic extract of L. delicatulum (155.87 ± 0.01 mg KAE/g DE) and the lipase inhibitors in the methanolic extract of L. quesadense (65 ± 7 mg OE/g DE) seemed promising candidates. The methanolic and aqueous extracts of L. delicatulum and L. quesadense were screened for their inhibitory activities on both AChE and BChE (Table 5).

Table 5 Enzyme inhibitory properties of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | AChE inhibition (mg GALAE/g DE) | BChE inhibition (mg GALAE/g DE) | Tyrosinase (mg KAE/g DE) | Amylase (mmol ACAE/g DE) | Glucosidase (mmol ACAE/g DE) | Lipase (mg OE/g DE) |
|---|---|---|---|---|---|---|---|
| L. delicatulum | MeOH | 4.8 ± 0.7a | 3.5 ± 0.4a | 155.87 ± 0.01a | 0.95 ± 0.03b | 2.70 ± 0.01c | 27 ± 4b |
| L. delicatulum | H2O | 1.0 ± 0.2c | na | 18.87 ± 0.01d | 0.08 ± 0.00c | 2.74 ± 0.01a | na |
| L. quesadense | MeOH | 4.3 ± 0.2a | 2.63 ± 0.02b | 155.27 ± 0.01b | 1.00 ± 0.02b | 2.72 ± 0.01b | 65 ± 7a |
| L. quesadense | H2O | 1.7 ± 0.2b | 0.86 ± 0.01c | 135.34 ± 0.01c | 1.5 ± 0.3a | na | na |

Values expressed are means ± SD of three parallel measurements. GALAE: galantamine equivalent; KAE: kojic acid equivalent; ACAE: acarbose equivalent; OE: orlistat equivalent; na: not active. Means in the same column not sharing the same letter are significantly different (p < 0.05).

We observed that the methanolic extract of L. delicatulum exhibited the highest AChE inhibition (4.8 ± 0.7 mg GALAE/g DE); nevertheless, there is no statistical difference between this extract and the methanolic extract of L. quesadense. Hence, these two extracts represent the most potent cholinesterase inhibitors. Furthermore, the aqueous extract of L. quesadense was the most active inhibitor of α-amylase (1.5 ± 0.3 mmol ACAE/g DE).
On the other hand, in the glucosidase assay, the aqueous extract of L. delicatulum was the most potent (2.74 ± 0.01 mmol ACAE/g DE), followed by the methanolic extract of L. quesadense (2.72 ± 0.01 mmol ACAE/g DE) and the methanolic extract of L. delicatulum (2.70 ± 0.01 mmol ACAE/g DE). Further data collected in this study showed that the methanolic extract of L. quesadense was the most effective lipase inhibitor. A substantial number of reports show that several plant metabolites are prospective pancreatic lipase inhibitors; in particular, the galloyl moiety of flavan-3-ols is thought to be essential for lipase inhibition [33]. Indeed, the methanolic extract of L. quesadense contained the highest levels of gallo(epi)catechin-O-gallate and its dimer, as well as myricetin-O-hexoside (Table 2), which might be linked to its significant lipase inhibition. It is noteworthy that although the methanolic extract of L. quesadense possessed the highest level of bioactive components, it did not show the strongest activity in every enzymatic assay. These results indicate that polyphenol content and enzyme inhibition are not always correlated.

### 3.5. Unsupervised Multivariate Data Analysis of Biological Activities of Limonium Extracts

The analysis of the Limonium species extracts, encompassing 12 biological activities, justified the use of multivariate data analysis tools. With the help of unsupervised PCA and hierarchical clustering analysis, the biological activity data allowed discrimination between the different extracts.

The first two principal components explained 73.1% and 19.5% of the total variance, respectively, so these two components alone captured 92.6% of the information in the original data. As presented in Figure 3, the extracts were clearly classified into three clusters.
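The explained-variance figures reported above come directly from a PCA of the activity matrix (extracts × assays). The study used R v. 3.5.1; the sketch below is an illustrative reconstruction with placeholder data, not the authors' script, showing how the per-component variance ratios are obtained from the singular values of the autoscaled matrix:

```python
import numpy as np

# Placeholder data: 4 extracts x 12 biological activities (the real matrix
# held the assay results of Tables 3-5; random values stand in here).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 12))

# Autoscale (mean-center and unit-variance per assay), the usual PCA pretreatment.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Singular value decomposition; squared singular values give component variances.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)   # explained-variance ratio per component

print(np.round(explained[:2], 3))  # shares of PC1 and PC2
```

With the study's data these first two ratios were 0.731 and 0.195; the scores `U[:, :2] * s[:2]` would reproduce the kind of PC1-PC2 score plot shown in Figure 3(c).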
Likewise, hierarchical cluster analysis (HCA), based on Euclidean distance with Ward's linkage between the extracts, confirmed the PCA results.

Figure 3 Relationship between total bioactive compounds and biological activities and multivariate analysis of Limonium species: (a) Pearson's correlation heatmap; (b) scree plot of explained variance by PCA components; (c) score plot of principal components 1 and 2; (d) hierarchical cluster dendrogram of extracts. The color box indicates the standardized biological activities of each extract; red, yellow, and blue indicate high, middle, and low bioactivity, respectively.

The MeOH extracts of the two studied species clustered closely, suggesting that both have similar profiles across all evaluated biological activities. In contrast, the H2O extracts differed markedly, allowing better discrimination between the two species. Accordingly, L. delicatulum and L. quesadense had different properties in all biological assays except AChE, BChE, and lipase; the most significant difference between the two species was obtained with the metal chelating (MCA) and amylase assays. Therefore, it can be concluded that L. delicatulum was more active in the MCA assay, while L. quesadense showed better amylase inhibitory activity.
Kaempferol-O-deoxyhexoside ✓ 38 29.5 533 MS2 [533]: 489 (100)MS3 [533⟶489]: 447 (19), 301 (100)MS4 [533⟶489⟶301]: 271 (100), 179 (22), 151 (42) Quercetin derivative ✓a ✓ 39 33.4 599 MS2 [599]: 301 (100)MS3 [599⟶301]: 179 (81), 151 (100)MS4 [599⟶301⟶179]: 151 (100) Quercetin-O-galloyl-deoxyhexoside ✓ ✓ aOnly in MeOH extract; b only in H2O extract.Compound1 was identified as the HCl adduct of a disaccharide (dihexoside) due to its fragmentation pattern [22]. Compound 2, with deprotonated molecular ion at m/z 191 and base peak at m/z 111, was identified as citric acid.Compound4, with [M-H]- at m/z 169 and base peak at m/z 125, was identified as gallic acid by comparison with an analytical standard. Compound 3, after the loss of 80 Da (sulfate moiety), displayed fragment ions at m/z 331, 169, and 125, typical of galloyl hexoside. Compound 7 also presented gallic acid in its fragmentation pattern and was tentatively characterized as a derivative. Compound 9 was tentatively characterized as prodelphinidin dimer B-type gallate (2 units of gallo(epi)catechin) based on bibliographic information [23].Compounds14, 16, and 18 presented [M-H]- at m/z 457 and fragment ions at m/z 331, 305, 169, and 125, consistent with gallo(epi)catechin-O-gallate isomers [23]. Compound 15 was characterized as a dimer.Compound8 exhibited fragment ions at m/z 153 and 109/108, which corresponded to protocatechuic acid (comparison with an analytical standard), so it was characterized as a derivative.Compound11, identified using positive ion mode, suffered the neutral loss of 162 Da, yielding cyanidin at m/z 287, so it was characterized as cyanidin 3-glucoside [24].Several myricetin derivatives were characterized in the analyzed extracts. In all of them, myricetin was observed atm/z 317 (main fragment ions at m/z 179 and 151). The following neutral losses were observed in compounds 17, 19, 20, 23, 35, and 36: 152 Da (galloyl moiety), 146 Da (deoxyhexoside), 162 Da (hexoside), 308 Da (rutinoside). 
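The glycoside assignments above follow a simple neutral-loss logic: the difference between the precursor ion and the aglycone fragment identifies the attached moiety. A minimal sketch of that reasoning (a hypothetical helper for illustration, not the authors' software):

```python
# Common neutral losses (Da) and the moieties they indicate, as used in the
# assignments above. Nominal masses, for illustration only.
NEUTRAL_LOSSES = {
    152: "galloyl",
    146: "deoxyhexoside",
    162: "hexoside",
    308: "rutinoside",
}

def assign_moiety(precursor_mz, aglycone_mz, tol=0.5):
    """Return the moiety whose neutral loss matches precursor - aglycone."""
    loss = precursor_mz - aglycone_mz
    for ref, name in NEUTRAL_LOSSES.items():
        if abs(loss - ref) <= tol:
            return name
    return "unknown"

# Myricetin aglycone is observed at m/z 317 ([M-H]-):
print(assign_moiety(479, 317))  # hexoside      (compound 20, m/z 479 -> 317)
print(assign_moiety(463, 317))  # deoxyhexoside (compound 23)
print(assign_moiety(625, 317))  # rutinoside    (compound 19)
```

In practice such assignments are cross-checked against MS3 fragmentation and standards, as done in the paper; the lookup above only captures the first-pass neutral-loss step.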
We could not elucidate the exact structure of compounds 26 and 30, so they were characterized as myricetin derivatives. Following the same neutral losses as myricetin, several quercetin (21, 22, 24, 31, 38, and 39) and kaempferol (28, 29, and 37) derivatives were characterized. Quercetin and kaempferol aglycones were detected at m/z 301 and 285, respectively. Compound 27 suffered the neutral loss of 80 Da (sulfate) to yield the lignan syringaresinol at m/z 417, which was identified by the fragment ions at m/z 181, 166, and 151 [25]. Compound 34 was also characterized as a sulfate adduct of a lignan, pinoresinol [25]. Compound 32 was characterized as delphinidin due to the 303⟶257 fragmentation using positive ion mode. With an additional hexoside moiety, 25 was characterized as delphinidin-O-hexoside [24]. Finally, compound 33, with deprotonated molecular ion at m/z 477, suffered the neutral loss of 146 Da (deoxyhexoside) to yield mearnsetin at m/z 331 (main fragment ion at m/z 316) [26].

## 3.2. Quantitation of Phenolic Compounds

We quantified 16 compounds in the methanolic and aqueous extracts of L. quesadense and L. delicatulum. The results are summarised in Table 2. Total individual phenolic content (TIPC) was defined as the sum of all the individual compounds that were quantified by HPLC-DAD (phenolic acids at 320 nm and flavonoids at 350 nm).

Table 2 Contents (mg g−1 DE) of the main phenolic compounds present in L. delicatulum and L. quesadense extracts.

| No. | Assigned identification | L. delicatulum MeOH | L. delicatulum H2O | L. quesadense MeOH | L. quesadense H2O |
|---|---|---|---|---|---|
| **Phenolic acids** | | | | | |
| 3 | Galloyl hexoside | 0.67 ± 0.01 | 1.91 ± 0.1 | — | — |
| 7 | Gallic acid derivative | 0.84 ± 0.02 | — | — | — |
| | Total | 1.51 ± 0.02 | 1.9 ± 0.1 | — | — |
| **Flavonoids** | | | | | |
| 9 | Prodelphinidin dimer | 0.58 ± 0.01c | — | 5.10 ± 0.01b | 6.3 ± 0.3a |
| 15 | Gallo(epi)catechin-O-gallate dimer | — | — | 10.0 ± 0.7 | — |
| 16 | Gallo(epi)catechin-O-gallate | 1.7 ± 0.2c | — | 26 ± 1a | 15.0 ± 0.6b |
| 17 | Myricetin-galloyl-hexoside | 1.2 ± 0.1a | — | 0.89 ± 0.07b | 0.69 ± 0.05c |
| 18 | Gallo(epi)catechin-O-gallate | — | — | — | 2.68 ± 0.05 |
| 19 + 20 | Myricetin glycosides | 4.40 ± 0.01a | 0.30 ± 0.02d | 2.22 ± 0.06b | 1.41 ± 0.08c |
| 23 | Myricetin-O-deoxyhexoside | 5.1 ± 0.1b | 0.64 ± 0.05d | 7.5 ± 0.4a | 4.1 ± 0.1c |
| 24 | Quercetin-O-hexoside | 0.54 ± 0.05a | 0.21 ± 0.02b | — | 0.21 ± 0.02b |
| 30 | Myricetin derivative | — | — | 0.50 ± 0.07 | 0.39 ± 0.02 |
| 31 | Quercetin-O-deoxyhexoside | 0.93 ± 0.04b | — | 1.37 ± 0.06a | 0.75 ± 0.04c |
| 35 | Myricetin-O-galloyl-deoxyhexoside | 0.56 ± 0.03a | — | 0.48 ± 0.01b | 0.24 ± 0.01c |
| 36 | Myricetin-O-galloyl-deoxyhexoside | 0.28 ± 0.01 | — | — | 0.25 ± 0.01 |
| 39 | Quercetin-O-galloyl-deoxyhexoside | 0.18 ± 0.01 | — | — | — |
| | Total | 15.5 ± 0.3c | 1.15 ± 0.05d | 54 ± 1a | 32.0 ± 0.7b |
| | TIPC | 17.0 ± 0.3c | 3.1 ± 0.1d | 54 ± 1a | 32.0 ± 0.7b |

Values are means ± SD of three parallel measurements. Means in the same line not sharing the same letter are significantly different (p<0.05).

L. quesadense presented higher TIPC (54 and 32 mg/g DE for the MeOH and H2O extracts, respectively) than L. delicatulum (17 and 3.1 mg/g DE for the MeOH and H2O extracts, respectively). In both cases, methanolic extracts presented the highest concentration of phenolics, owing to the greater solubility of flavonoids in MeOH compared with water. The most abundant compounds in the L. delicatulum MeOH extract were myricetin glycosides (compounds 19, 20, and 23), which have been previously reported as bioactive compounds in L. densiflorum [10]. On the other hand, the most abundant compounds in the L. quesadense extracts were 15 and 16 (gallo(epi)catechin-O-gallate dimer and monomer, respectively), followed by myricetin glycosides.
Epigallocatechin gallate has previously been reported as potentially the main compound responsible for the bioactive properties of L. brasiliense and L. algarvense [11, 14].

## 3.3. Antioxidant Properties

The majority of plant-based aromatic natural products are phenols, which comprise flavonoids, tannins, flavonols, flavanols, and anthocyanins, among others. In the present study, the different prepared extracts were screened for the presence of phenolics, flavonols, and flavonoids. Indeed, all extracts showed a good level of phenolics, followed by flavonols and flavonoids (Table 3), with higher levels observed in L. quesadense than in L. delicatulum. In addition, MeOH extracts presented higher levels of phenolics than aqueous extracts. These results are in agreement with the TIPC quantified by chromatography. The methanolic extract of L. quesadense possessed the highest TPC (172 ± 4 mg GAE/g·DE) and flavonol content (74 ± 3 mg RE/g·DE). However, flavonoids were most abundant in the MeOH extract of L. delicatulum (42.1 ± 0.8 mg RE/g·DE).

Table 3 Total bioactive components, total antioxidant capacity (by phosphomolybdenum assay), and radical scavenging abilities of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | Total phenolic content (mg GAE/g·DE) | Total flavonol content (mg RE/g·DE) | Total flavonoid content (mg RE/g·DE) | Phosphomolybdenum (mmol TE/g·DE) | ABTS (mg TE/g·DE) | DPPH (mg TE/g·DE) |
|---|---|---|---|---|---|---|---|
| L. delicatulum | MeOH | 151 ± 1b | 55 ± 2b | 42.1 ± 0.8a | 4.5 ± 0.6a | 360 ± 10b | 470 ± 10b |
| L. delicatulum | H2O | 31.1 ± 0.4c | 1.08 ± 0.03d | 5.80 ± 0.09d | 0.67 ± 0.04b | 53 ± 8d | 56 ± 1d |
| L. quesadense | MeOH | 172 ± 4a | 74 ± 3a | 30.8 ± 0.5b | 5.1 ± 0.4a | 510 ± 30a | 620 ± 10a |
| L. quesadense | H2O | 152 ± 1b | 10.4 ± 0.2c | 12.98 ± 0.07c | 4.6 ± 0.7a | 248 ± 6c | 428 ± 8c |

Values expressed are means ± SD of three parallel measurements. GAE: gallic acid equivalent; RE: rutin equivalent; TE: trolox equivalent. Means in the same column not sharing the same letter are significantly different (p<0.05).

Moreover, a series of antioxidant assays was conducted on the extracts of both Limonium species, namely, total antioxidant capacity (phosphomolybdenum), radical scavenging (ABTS and DPPH), reducing power (CUPRAC and FRAP), and metal chelating. These results are presented in Tables 3 and 4.

Table 4 Reducing power and metal chelating abilities of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | CUPRAC (mg TE/g·DE) | FRAP (mg TE/g·DE) | Metal chelating activity (mg EDTAE/g·DE) |
|---|---|---|---|---|
| L. delicatulum | MeOH | 853 ± 5b | 470 ± 10b | 26.74 ± 0.01b |
| L. delicatulum | H2O | 94 ± 2d | 62.5 ± 0.6d | 28.43 ± 0.01a |
| L. quesadense | MeOH | 940 ± 10a | 520 ± 10a | 19.43 ± 0.01d |
| L. quesadense | H2O | 640 ± 10c | 431 ± 5c | 22.30 ± 0.01c |

Values expressed are means ± SD of three parallel measurements. TE: trolox equivalent; EDTAE: EDTA equivalent. Means in the same column not sharing the same letter are significantly different (p<0.05).

In terms of total antioxidant capacity, the most potent extract was the methanolic extract of L. quesadense (5.1 ± 0.4 mmol TE/g·DE). However, it is essential to point out that there is no statistical difference between this extract and the aqueous and methanolic extracts of L. delicatulum. Interestingly, the methanolic extract of L. quesadense displayed the highest activity in the reducing power and radical scavenging assays: ABTS (510 ± 30 mg TE/g·DE), DPPH (620 ± 10 mg TE/g·DE), CUPRAC (940 ± 10 mg TE/g·DE), and FRAP (520 ± 10 mg TE/g·DE). The most abundant flavonoid identified in the methanolic extract of L. quesadense was gallo(epi)catechin-O-gallate (26 ± 1 mg·g−1·DE). Thus, it can be extrapolated that this compound, along with its dimer, might be mainly responsible for the observed antioxidant properties.

In contrast to the aforementioned antioxidant assays, the results obtained in the metal chelating assay classified the aqueous extract of L.
delicatulum as the most effective metal chelator, with a significant activity of 28.43 ± 0.01 mg EDTAE/g·DE. Considering the quantitation analysis of phenolic compounds in all extracts (Table 2), there is a good correlation between the antioxidant results and the quantitation of polyphenols. Notably, the total individual phenolic content (TIPC) quantified by HPLC identified the methanolic extract of L. quesadense as the richest in flavonoids (54 ± 1 mg·g−1 DE), and, as discussed above, this extract presented the highest antioxidant properties. In this sense, the observed antioxidant properties could be attributed mainly to the presence of compounds containing galloyl moieties, such as the gallo(epi)catechin-O-gallate dimer. Our findings are also supported by several previous reports [27, 28].

## 3.4. Enzyme Inhibitory Effects

Enzymes are the main targets for controlling constantly emerging global health issues [29]. As an example, tyrosinase is an important enzyme involved in the melanogenesis process, during which the pigment melanin is produced [30]. However, the tyrosinase inhibitor kojic acid, which is used extensively by the pharmaceutical and cosmetic industries, presents various side effects [31]. Furthermore, the drug orlistat, which is the only clinically approved pharmacologic agent against pancreatic lipase, is associated with considerable side effects [32]. Thus, there is a dire need to search for new and safer enzymatic inhibitors for future pharmaceutical development. Accordingly, the present study screened the prepared extracts from the two Limonium species against α-amylase, glucosidase, acetylcholinesterase (AChE), butyrylcholinesterase (BChE), tyrosinase, and lipase. Tyrosinase inhibition by the methanolic extract of L. delicatulum (155.87 ± 0.01 mg KAE/g·DE) and lipase inhibition by the methanolic extract of L. quesadense (65 ± 7 mg OE/g·DE) seemed particularly promising.
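The letter-based groupings reported in the tables denote means that differ at p < 0.05. As a minimal sketch of how such a comparison works (with hypothetical triplicate values, since the raw replicates are not published here), the group means can be compared by a one-way ANOVA:

```python
# Stdlib-only one-way ANOVA sketch for the letter-based mean comparisons.
# The triplicate readings below are hypothetical illustrations (mg GAE/g DE);
# the real replicate values are not reported in the paper.

def one_way_anova_F(*groups):
    """Return the F statistic of a one-way ANOVA over the given groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

ld_meoh = [150.0, 151.0, 152.0]  # L. delicatulum, MeOH (hypothetical)
ld_h2o = [30.7, 31.1, 31.5]      # L. delicatulum, H2O (hypothetical)
lq_meoh = [168.0, 172.0, 176.0]  # L. quesadense, MeOH (hypothetical)

F = one_way_anova_F(ld_meoh, ld_h2o, lq_meoh)
# The tabulated critical value F(0.05; df1=2, df2=6) is about 5.14, so
# F above that threshold means at least one group mean differs at p < 0.05.
print(F > 5.14)
```

In practice, the letters themselves come from a post hoc pairwise test (e.g., Tukey's HSD) that groups together means which are not significantly different; the ANOVA above is only the omnibus step.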
The methanolic and aqueous extracts of L. delicatulum and L. quesadense were screened for their inhibitory activities on both AChE and BChE (Table 5).

Table 5 Enzyme inhibitory properties of L. delicatulum and L. quesadense extracts.

| Plant species | Solvent | AChE inhibition (mg GALAE/g·DE) | BChE inhibition (mg GALAE/g·DE) | Tyrosinase (mg KAE/g·DE) | Amylase (mmol ACAE/g·DE) | Glucosidase (mmol ACAE/g·DE) | Lipase (mg OE/g·DE) |
|---|---|---|---|---|---|---|---|
| L. delicatulum | MeOH | 4.8 ± 0.7a | 3.5 ± 0.4a | 155.87 ± 0.01a | 0.95 ± 0.03b | 2.70 ± 0.01c | 27 ± 4b |
| L. delicatulum | H2O | 1.0 ± 0.2c | na | 18.87 ± 0.01d | 0.08 ± 0.00c | 2.74 ± 0.01a | na |
| L. quesadense | MeOH | 4.3 ± 0.2a | 2.63 ± 0.02b | 155.27 ± 0.01b | 1.00 ± 0.02b | 2.72 ± 0.01b | 65 ± 7a |
| L. quesadense | H2O | 1.7 ± 0.2b | 0.86 ± 0.01c | 135.34 ± 0.01c | 1.5 ± 0.3a | na | na |

Values expressed are means ± SD of three parallel measurements. GALAE: galantamine equivalent; KAE: kojic acid equivalent; ACAE: acarbose equivalent; OE: orlistat equivalent; na: not active. Means in the same column not sharing the same letter are significantly different (p<0.05).

We observed that the methanolic extract of L. delicatulum exhibited the highest AChE inhibition (4.8 ± 0.7 mg GALAE/g·DE); nevertheless, there is no statistical difference between this extract and the methanolic extract of L. quesadense. Hence, these two extracts are the most potent cholinesterase inhibitors. Furthermore, the aqueous extract of L. quesadense was the most active inhibitor of α-amylase (1.5 ± 0.3 mmol ACAE/g·DE). In terms of the glucosidase assay, the aqueous extract of L. delicatulum was the most potent (2.74 ± 0.01 mmol ACAE/g·DE), followed by the methanolic extract of L. quesadense (2.72 ± 0.01 mmol ACAE/g·DE) and the methanolic extract of L. delicatulum (2.70 ± 0.01 mmol ACAE/g·DE). Further data collected in the present study showed that the methanolic extract of L. quesadense was the most effective lipase inhibitor.
A substantial body of reports shows that several plant metabolites are prospective pancreatic lipase inhibitors. In particular, the galloyl moiety of flavan-3-ols is thought to be essential for lipase inhibition [33]. Indeed, the methanolic extract of L. quesadense contained the highest levels of gallo(epi)catechin-O-gallate and its dimer, as well as myricetin-O-hexoside (Table 2), which might be linked to its significant lipase activity. It is noteworthy that although the methanolic extract of L. quesadense possessed the highest levels of bioactive components, it did not show the most significant activity in all enzymatic assays. These results indicate that there is not always a correlation between polyphenol contents and enzymatic inhibition assays.

## 3.5. Unsupervised Multivariate Data Analysis of Biological Activities of Limonium Extracts

The analysis of the Limonium species extracts, encompassing 12 biological activities, justified the employment of multivariate data analysis tools. Thus, with the help of unsupervised PCA and hierarchical clustering analysis, the biological activity data allowed for discrimination between the different extracts. The first two principal components explained 73.1% and 19.5% of the total variance, respectively, so these two components alone captured over 90% of the information in the original data. As presented in Figure 3, the extracts were clearly classified into three clusters. Likewise, hierarchical cluster analysis (HCA), based on the Euclidean similarity measure with Ward's method as the linkage rule, confirmed the PCA results.

Figure 3 Relationship between total bioactive compounds and biological activities and multivariate analysis of Limonium species. (a) Pearson's correlation heatmap; (b) scree plot of explained variance by PCA components; (c) score plot of principal components 1 and 2; (d) hierarchical cluster dendrogram of extracts.
The color box indicates the standardized biological activities of each extract. The red, yellow, and blue colors indicate high, middle, and low bioactivity, respectively.

The MeOH extracts of the two studied species clustered closely together, suggesting that both extracts have similar properties across all evaluated biological activities. In contrast, the H2O extracts differed markedly, allowing better discrimination between the two species. Accordingly, L. delicatulum and L. quesadense had different properties in all biological activities except AChE, BChE, and lipase; the most pronounced differences between the two species were obtained with the metal chelating activity (MCA) and amylase assays. Therefore, it can be concluded that L. delicatulum was most active in MCA, while L. quesadense showed better amylase inhibitory activity.

## 4. Conclusions

The phenolic composition and bioactive properties of the leaves of L. delicatulum and L. quesadense have been examined. L. delicatulum was rich in myricetin glycosides, whereas some of the most abundant compounds in L. quesadense were gallo(epi)catechin-O-gallate and its dimer. The presence of these compounds has been previously reported in other Limonium species, and they have been suggested as mainly responsible for the bioactivity of Limonium extracts. In general, methanolic extracts presented the highest amounts of phenolics, along with the strongest bioactive properties, and the most potent activities were observed in L. quesadense leaves. Not only the antioxidant activity but also the inhibitory properties against several key enzymes were evaluated. The overall results indicate that leaves of L. quesadense may represent an interesting source of bioactive compounds. As L. quesadense is a threatened plant that is not currently protected by law, its cultivation on gypsic soils could be tested, under the permission of the authorities, by using seeds collected in the wild.
It could also provide an economic stimulus for the population of semiarid areas in Jaén province. --- *Source: 1016208-2020-03-11.xml*
2020
# The Influence of Negative Pressure and of the Harvesting Site on the Characteristics of Human Adipose Tissue-Derived Stromal Cells from Lipoaspirates

**Authors:** Martina Travnickova; Julia Pajorova; Jana Zarubova; Nikola Krocilova; Martin Molitor; Lucie Bacakova
**Journal:** Stem Cells International (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1016231

---

## Abstract

Background. Adipose tissue-derived stromal cells (ADSCs) have great potential for cell-based therapies, including tissue engineering. However, various factors can influence the characteristics of isolated ADSCs. Methods. We studied the influence of the harvesting site, i.e., inner thigh (n=3), outer thigh (n=7), and abdomen (n=9), and of negative pressure, i.e., low (-200 mmHg) and high (-700 mmHg), on the characteristics of isolated ADSCs. We counted the initial yields of attached cells after isolation. In the subsequent passage, we studied the number, viability, diameter, doubling time, mitochondrial activity, and CD surface markers of the isolated ADSCs. Results. We observed higher initial cell yields from the outer thigh region than from the abdomen region. Negative pressure did not influence the cell yields from the outer thigh region, whereas the yields from the abdomen region were higher under high negative pressure than under low negative pressure. In the subsequent passage, in general, no significant relationship was identified between the negative pressure used and the ADSC characteristics. No significant difference was observed between the characteristics of thigh ADSCs and abdomen ADSCs. Only on day 1 was the diameter significantly larger in outer thigh ADSCs than in abdomen ADSCs. Moreover, we noted a tendency of thigh ADSCs (i.e., inner thigh + outer thigh) to reach a higher cell number on day 7. Discussion. The harvesting site and negative pressure can potentially influence initial cell yields from lipoaspirates.
However, for subsequent in vitro culturing and for use in tissue engineering, it seems that the harvesting site and the level of negative pressure do not have a crucial or limiting effect on basic ADSC characteristics.

---

## Body

## 1. Background

Stem cells of various origins are fundamental elements for cell-based therapies in regenerative medicine, particularly for tissue engineering. Nowadays, tissue engineering tends to use stem cells that (1) are pluripotent or multipotent, (2) can be routinely harvested in large quantities, and (3) are surrounded by fewer ethical issues than other types. Mesenchymal stromal cells (MSCs) are multipotent plastic-adherent fibroblast-like cells. They can be harvested predominantly from adult organs and tissues, e.g., bone marrow, peripheral blood, adipose tissue, skin, skeletal muscle, dental pulp, brain, and endometrium [1]. Extrafoetal tissues, such as the placenta, umbilical cord tissue, amniotic membrane, and amniotic fluid, can also serve as sources of MSCs. The characteristics and the differentiation of bone marrow-derived stromal cells (BMSCs) have been widely studied, as they were the first MSCs to be described. BMSCs provide favourable differentiation characteristics. However, the BMSC harvesting procedure is uncomfortable for donors, whereas adipose tissue-derived stromal cells (ADSCs) provide similar yields of isolated cells, together with greater subsequent proliferation capacity [2]. In recent years, ADSCs have become an ideal target for tissue engineering and cell-based therapies. A relatively easy harvesting procedure and the multipotent characteristics of ADSCs make these stromal cells suitable for various uses [3]. The possibility of autologous application in cell-based therapies is a further advantage of ADSCs.

The methods for isolating ADSCs from adipose tissue can be divided into enzymatic and nonenzymatic approaches [4, 5].
Until now, enzymatic digestion using collagenase has been the most widely performed procedure. However, newer alternative nonenzymatic techniques (e.g., vibration and centrifuging) can also be applied, especially for clinical purposes [6]. After enzymatic digestion and centrifugation, three separated parts are obtained, namely, the upper oily part containing adipocytes, the middle part consisting of digested tissue, and the reddish stromal vascular fraction (SVF) pellet at the bottom [7]. The SVF part is a mixture of distinct cell types consisting of ADSCs and variably also of pericytes, preadipocytes, endothelial precursor cells, endothelial cells, macrophages, smooth muscle cells, fibroblasts, and lymphocytes [5].A large number and range of studies focused on obtaining ADSCs have been published. The studies have investigated various fat-harvesting procedures, cell isolation procedures, and donor factors. All these factors can influence the viability, the yields, and the subsequent proliferation and differentiation of the isolated cells. Tumescent liposuction is used as one of the easiest procedures for harvesting adipose tissue. The negative pressure (vacuum) that is used during the liposuction procedure is an important factor that influences the quality and the amount of harvested tissue. Lee et al. studied the effect of different negative pressures (i.e., -381 mmHg and -635 mmHg) on fat grafting [8]. In their in vivo study, no significant differences in the weight or in the histology of the fat grafts were observed; moreover, higher negative pressure did not affect the viability of the fat grafts [8]. Similarly, in a study by Charles-de-Sá et al., no significant differences, either in the viability of the adipocytes or in the number of MSCs, were found in adipose tissue obtained under various negative pressures [9]. However, other studies have reported a significant influence of negative pressure on cell characteristics. Mojallal et al. 
measured greater cell yields in adipose tissue harvested under a lower negative pressure (-350 mmHg) than under a higher negative pressure (-700 mmHg) [10]. Similarly, Chen et al. reported more than 2-fold higher cell numbers in SVF isolated from adipose tissue harvested under a lower negative pressure (−225 ± 37 mmHg) than under a higher negative pressure (−410 ± 37 mmHg) [11]. They also reported faster cell growth and higher secretion of some growth factors in cells obtained under lower negative pressure in the initial passages [11].

The harvesting site of the superficial adipose tissue seems to be another important donor factor potentially influencing the viability and the proliferation of the isolated cells. Jurgens et al. compared the numbers of cells isolated from the abdomen area and from the hip/thigh area. They found a significantly higher frequency of ADSCs in SVF isolates derived from the abdomen area, but no significant differences were found in the absolute numbers of nucleated cells [12]. However, the osteogenic and chondrogenic differentiation capacity of the ADSCs was not affected by the harvesting site [12]. Padoin et al. observed higher cell yields from the lower abdomen and from the inner thigh than from other liposuction areas (i.e., upper abdomen, flank, trochanteric area, and knee) [13]. Differences in the viability and amount of SVF and in the numbers of ADSCs after culturing were also studied by Tsekouras and coworkers. In their study, the SVF from the outer thigh exhibited higher cell numbers [14]. This tendency also continued in subsequent cell culturing, where the outer and inner thigh samples both showed higher numbers of ADSCs than the abdomen, waist, or inner knee samples.
Other studies reported no statistically significant differences in the volumes of fat grafts [15, 16] or in adipocyte viability [17] among the donor sites.

Besides the negative pressure during liposuction and the donor harvesting site, different harvesting procedures [18] and other individual donor factors have been found to influence the viability, proliferation, and differentiation characteristics of ADSCs. Further factors include body mass index (BMI), age, gender, intercurrent diseases, such as diabetes mellitus, and radiotherapy and drug treatment [19].

There is a need to investigate and confirm the best harvesting conditions for ADSCs, which could help to bring them into routine use in clinical practice. Until now, studies have not been uniform and have focused predominantly on different cell types (adipocytes, preadipocytes, total SVF). The potential differences in the characteristics of ADSCs seem to be nonnegligible and need to be further clarified for future use in tissue engineering. The objective of our study was to investigate the influence of negative pressure during liposuction and of the donor site on the yields of initially attached cells and on the subsequent cell proliferation, achieved cell numbers, cell viability, diameter, and phenotypic markers of isolated ADSCs cultured in in vitro conditions.

## 2. Materials and Methods

### 2.1. Group of Donors and Liposuction Procedure

A comparative study was performed on samples of subcutaneous adipose tissue from 15 healthy donors, after informed consent, at Hospital Na Bulovce in Prague. The group of fourteen females (n=14) and one male (n=1) underwent tumescent liposuction, whereby adipose tissue from the inner thigh (n=3), from the outer thigh (n=7), and from the abdomen (n=9) was harvested.
Harvesting was conducted in compliance with the tenets of the Declaration of Helsinki on experiments involving human tissues and under ethical approval issued by the Ethics Committee of Hospital Na Bulovce in Prague (August 21, 2014). The liposuctions were performed under sterile conditions, using tumescence. The tumescent solution consisted of 1000 mL of physiological saline with 1 mL of adrenaline (1:200,000) and 20 mL of 8.4% bicarbonate. In order to protect the harvested stromal cells from possible toxicity, no local anaesthetics were used. We used a liposuction machine (MEDELA dominant) that enabled continuous negative pressure to be set, and we utilized negative pressures of -200 mmHg and -700 mmHg. Superficial fat tissue was harvested using a Coleman-style blunt cannula with 4 holes and an inner diameter of 3 mm. Both low negative pressure (i.e., -200 mmHg) and high negative pressure (i.e., -700 mmHg) were used during liposuction in the selected harvesting sites for each donor. Specifically, in the abdominal region, low pressure was used on one side of the abdomen, while high pressure was applied on the opposite side. Similarly, in the outer and inner thigh regions, low pressure was applied on one leg and high pressure was applied on the contralateral leg (Scheme 1). Different cannulas and vacuum suction containers were used for low and high pressure harvesting, to prevent contamination of the low pressure harvesting material with the high pressure harvesting material and vice versa. The age range of the donors was 26–53 years (mean age 37.8 ± 7.8 years), and the BMI range was 19.60–36.17 kg/m2 (mean BMI 25.44 ± 4.37 kg/m2) (Table 1). The donors did not suffer from diabetes or from hypertension, and they were not tobacco users.

Scheme 1 Scheme of the experiment. Sites in the abdomen, the inner thigh, and the outer thigh where liposuction at low negative pressure (-200 mmHg) and at high negative pressure (-700 mmHg) was performed.
After the cell isolation, the initial yields of attached cells were counted. In subsequent passages, the number, viability, diameter, doubling time, mitochondrial activity (all in passage 1), and CD surface markers (passage 2) of isolated ADSCs were evaluated.

Table 1: Donors included in our study. The group of 14 females (n=14) and one male (n=1; abdomen site) underwent tumescent liposuction, in which adipose tissue was harvested from the inner thigh (n=3), from the outer thigh (n=7), and from the abdomen (n=9). In each harvesting site, the lipoaspirate was obtained both under low and under high negative pressure.

| Donor site | Age (years) | BMI (kg/m2) | No. of samples |
| --- | --- | --- | --- |
| Inner thigh | 42.0 ± 4.6 | 27.70 ± 7.40 | 3 |
| Outer thigh | 35.4 ± 7.8 | 23.56 ± 2.45 | 7 |
| Abdomen | 38.3 ± 8.6 | 25.06 ± 4.08 | 9 |
| Together | 37.8 ± 7.8 | 25.44 ± 4.37 | 19 samples from 15 donors |

### 2.2. Isolation of ADSCs

The isolation procedure was performed on fresh lipoaspirates (within 2 hours after the liposuction procedure) according to the isolation protocol by Estes et al. [7], with some slight modifications, as described in our previous study [20]. In brief, the lipoaspirates were washed several times with phosphate-buffered saline (PBS; Sigma-Aldrich). Then, the lipoaspirate was digested for 1 hour at 37°C, using PBS containing 1% (wt/vol) bovine serum albumin (BSA; Sigma-Aldrich) and 0.1% (wt/vol) type I collagenase (Worthington). After the digestion procedure, the tissue was centrifuged, and the upper and middle layers were aspirated. The obtained SVF was washed three times. A filter with pores 100 μm in size (Cell Strainer, BD Falcon) was additionally used to filter the cell suspension of SVF right before seeding into culture flasks (75 cm2, TPP, Switzerland) at a density of 0.16 mL of original lipoaspirate/cm2.
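The seeding density above fixes the ratio of lipoaspirate volume to culture surface. As a minimal sketch (the 60 mL harvest volume below is hypothetical, not a value from the study), the density can be converted into per-flask volumes and flask counts:

```python
# Convert a lipoaspirate volume into 75 cm^2 flask requirements at the
# seeding density used above (0.16 mL of original lipoaspirate per cm^2).
import math

SEEDING_ML_PER_CM2 = 0.16
FLASK_AREA_CM2 = 75.0

def flask_plan(lipoaspirate_ml: float) -> tuple[float, int]:
    """Return (mL seeded per flask, number of flasks needed)."""
    ml_per_flask = SEEDING_ML_PER_CM2 * FLASK_AREA_CM2  # 0.16 * 75 = 12 mL
    return ml_per_flask, math.ceil(lipoaspirate_ml / ml_per_flask)

ml_per_flask, n_flasks = flask_plan(60.0)  # hypothetical 60 mL harvest
print(ml_per_flask, n_flasks)
```

For a 60 mL harvest this gives 12 mL per flask and five flasks, which is only arithmetic on the stated density, not part of the published protocol.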
The isolated cells were cultured in Dulbecco’s modified Eagle medium (DMEM; Gibco), supplemented with 10% (vol/vol) foetal bovine serum (FBS; Gibco), gentamicin (40 μg/mL; LEK), and recombinant human basic fibroblast growth factor (FGF2; 10 ng/mL; GenScript). The primary cells, referred to as “passage 0,” were cultured until they reached 70%–80% confluence. Then, the cells were passaged.

For the experiments that followed (Scheme 1), the cells isolated from the lipoaspirate harvested under low negative pressure (i.e., -200 mmHg) are referred to as “low,” and the cells isolated from the lipoaspirate harvested under high negative pressure (i.e., -700 mmHg) are referred to as “high.” The compared groups of cells are referred to as low inner thigh (low I thigh), high inner thigh (high I thigh), low outer thigh (low O thigh), high outer thigh (high O thigh), low abdomen, and high abdomen.

### 2.3. Yields of Initially Attached Cells

For the primary culture of isolated cells, as mentioned above, the seeding density was 0.16 mL of original lipoaspirate/cm2. On day 1 after isolation and seeding (passage 0), the culture medium was replaced with fresh medium, and the unattached cells were washed away. Then, the cell yields per 1 mL of lipoaspirate were counted from the number of attached cells, because only these cells are relevant for potential use in tissue engineering. Microphotographs of 4 to 6 randomly chosen microscopic fields for each sample were taken with a phase-contrast microscope and were analysed by manual cell counting. Then, the number of attached cells was compared depending on the negative pressure or on the harvesting site.

### 2.4. Cell Number, Viability, Diameter, and Doubling Time

The cells from each donor, harvested under low and high negative pressure within the corresponding areas in the abdomen or in the thigh, were cultured and then analysed.
The isolated cells in passage 1 were seeded into 12-well tissue culture polystyrene plates (TPP, Switzerland; well diameter 2.1 cm) at a density of 14,000 cells/cm2 (i.e., 50,000 cells/well) and were cultivated in DMEM + 10% (vol/vol) FBS + 10 ng/mL FGF2 for 7 days. The volume of the cell culture medium was 3 mL/well. The cells were cultivated in a humidified air atmosphere with 5% CO2 at a temperature of 37°C. On days 1, 3, and 7, the cells were washed with PBS and were then detached by incubation with Trypsin-EDTA Solution (Sigma-Aldrich) for 4 minutes at 37°C. The effect of the Trypsin-EDTA solution was subsequently inhibited by adding a medium with FBS, and the cells were resuspended. The number, the viability, and the diameter of the detached cells in each well were measured using a Vi-CELL XR Cell Viability Analyzer (Beckman Coulter). In this analyser, the cell viability is evaluated by a trypan blue exclusion test. Five to eight independent samples were analysed for each experimental group of each donor at each time interval. The cell population doubling time (DT) was calculated from the ADSC numbers, according to the following equation: DT = t × ln 2 / (ln N − ln N0), where t represents the duration of culture, N represents the number of cells on day 3, and N0 represents the number of cells on day 1.

### 2.5. Cell Mitochondrial Activity

The activity of mitochondrial enzymes is generally measured in order to estimate the cell proliferation activity. The isolated cells in passage 1 were seeded into 24-well tissue culture polystyrene plates (TPP, Switzerland; well diameter 1.5 cm) at a density of 14,000 cells/cm2 (i.e., 25,000 cells/well) and were cultivated in DMEM + 10% FBS + 10 ng/mL FGF2 for 7 days. The volume of cell culture medium was 1.5 mL/well. On days 3 and 7, a CellTiter 96® Aqueous One Solution Cell Proliferation Assay (MTS; Promega Corporation) was performed according to the manufacturer’s protocol.
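The doubling-time formula in Section 2.4 groups as DT = t × ln 2 / (ln N − ln N0). A short numerical sketch with hypothetical cell counts (not measured data) makes the grouping explicit: a culture that quadruples over 48 hours doubles every 24 hours.

```python
import math

def doubling_time(t_hours: float, n_final: float, n_initial: float) -> float:
    """Population doubling time: DT = t * ln 2 / (ln N - ln N0)."""
    return t_hours * math.log(2) / (math.log(n_final) - math.log(n_initial))

# Hypothetical counts: 50,000 cells on day 1 growing to 200,000 by day 3.
dt = doubling_time(48.0, n_final=200_000, n_initial=50_000)
print(round(dt, 2))  # ~24.0 h: the culture quadrupled, i.e. doubled twice
```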
In brief, the principle of the MTS assay is based on a colorimetric change of the yellow tetrazolium salt to brown formazan, brought about by the activity of mitochondrial enzymes. The absorbance was measured at a wavelength of 490 nm, using a VersaMax ELISA microplate reader (Molecular Devices LLC). Five to six independent samples were measured for each experimental group at each time interval.

### 2.6. Flow Cytometry

In passage 2, the cells were characterised by flow cytometry, using antibodies against specific surface CD markers. An evaluation was made of the percentage of cells in the population that carried standard markers of ADSCs, i.e., CD105 (also referred to as endoglin, a membrane glycoprotein which is part of the TGF-β receptor complex), CD90 (Thy-1, a thymocyte antigen belonging to the immunoglobulin superfamily), and CD73 (ecto-5′-nucleotidase, a glycosylphosphatidylinositol-anchored membrane protein). Other evaluated markers included CD29 (integrin β1, a component of receptors for collagen and fibronectin), CD146 (a melanoma cell adhesion molecule, a receptor for laminin), CD31 (also referred to as platelet-endothelial cell adhesion molecule-1, PECAM-1), and the hematopoietic cell markers CD34 and CD45 [3]. In brief, the cells were washed with PBS and were incubated with Trypsin-EDTA for 4 minutes at 37°C. Subsequently, the medium with FBS was added and the cells were centrifuged (5 min, 300 g). The supernatant was aspirated off, and the cells were resuspended in PBS with 0.5% (wt/vol) BSA (Sigma-Aldrich). The cells were divided into equal aliquots (i.e., 250,000 cells/aliquot). FITC-, Alexa488-, Alexa647-, or PE-conjugated monoclonal antibodies, i.e., against CD105, CD45 (Exbio Praha), CD90 (BD Biosciences), CD73, CD146, CD31 (BioLegend), CD29, and CD34 (Invitrogen), were added separately to the aliquots. The aliquots were incubated with the antibodies for 30 minutes at 4°C in the dark.
Next, the stained cells were washed three times with PBS with 0.5% (wt/vol) BSA and were analysed with the Accuri C6 Flow Cytometer System (BD Biosciences). In each aliquot, 20,000 events were recorded for each CD surface marker.

### 2.7. Microscopy Techniques

Phase-contrast microscopy was used to visualise the process of attachment, spreading, and growth in native ADSCs after isolation (passage 0). The immunofluorescence staining of CD surface markers was performed on native adhering ADSCs (passage 2) using PE-CD90 (BD Science) and Alexa488-CD29 (Invitrogen) antibodies. Cell nuclei in native cells were counterstained with Hoechst 33342 (Sigma-Aldrich) for 30 minutes at room temperature in the dark. An Olympus IX71 microscope (objective magnification 10x or 20x) was used to take representative images.

### 2.8. Statistical Analysis

First, to evaluate the significance of different negative pressures, the observed data (i.e., initial cell yields, later cell numbers, and mitochondrial activity) were presented as the ratio of low-pressure cells to high-pressure cells for each donor. The Mann-Whitney Rank Sum test was used to test the equality of the medians of the ratios on different days of the experiment. Second, an unpaired two-sample t-test (for parametric data) or a Mann-Whitney Rank Sum test (for nonparametric data) was used to test the significance of the differences between the outer thigh area and the abdomen area. The inner thigh region was not statistically compared with the other harvesting sites due to the relatively small group of samples (i.e., from only 3 patients). All the measured data were tested for normality according to the Kolmogorov-Smirnov test. Data which showed a Gaussian distribution are expressed as mean ± SD. However, due to the small sample size and the wide dispersion among the donors, some of the data did not show a Gaussian distribution. The nonparametric data are expressed as the median and the interquartile range (IQ range).
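The first step of this analysis, forming per-donor low-to-high ratios and summarising them by their median, can be sketched in plain Python. The counts below are hypothetical and the actual tests were run in SigmaStat; this only illustrates the ratio construction.

```python
# Per-donor ratio of low-pressure to high-pressure cell counts, summarised
# by the median, as in the first analysis step above. Counts are hypothetical.
from statistics import median

# (low-pressure count, high-pressure count) per donor for one site:
counts = [(80_000, 100_000), (95_000, 110_000), (70_000, 90_000),
          (88_000, 105_000), (60_000, 85_000)]
ratios = [low / high for low, high in counts]

# A median near 1.0 suggests no pressure effect; a median well below 1.0
# suggests higher yields from the high-pressure lipoaspirate.
print(round(median(ratios), 2))  # 0.8 for these illustrative counts
```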
The statistical analysis was performed using SigmaStat Software (Systat Software Inc., USA); p<0.001 (for flow cytometry) or p<0.05 (for all other methods) was considered statistically significant. The plots were generated in R.

## 3. Results

### 3.1. Growth of Cells after Isolation and Cell Yields

In passage 0, we observed slight differences in the range of cell adhesion and growth among the cells harvested from various donors. However, the cells from all donors usually reached 70%–80% confluence by day 10. Figure 1 shows representative images of the process of adhesion and growth in ADSCs after isolation from the same patient. On day 1 after isolation, the number of attached cells per 1 mL of lipoaspirate was counted in each sample. The ratio of attached low-pressure cells to attached high-pressure cells for each donor showed a median level near 1.0 for the outer thigh region, indicating a similar number of attached cells for both pressures (Figure 2(a)). However, the median level of this ratio (0.79) was significantly lower for the abdomen region (Figure 2(a)), which indicates higher cell yields from high-pressure lipoaspirates from this harvesting site. We observed a significantly higher (2- to 3-fold) number of attached cells from the outer thigh region than from the abdomen region (Figure 2(b)).
The inner thigh region was not statistically compared with the other harvesting sites due to the relatively small group of samples.

Figure 1: The process of attachment, spreading, and growth in ADSCs from the same patient on days 2, 5, and 7 after isolation. The ADSCs were isolated from the inner thigh area and from the abdomen area, under low negative pressure (-200 mmHg) and under high negative pressure (-700 mmHg). Passage 0. Scale bar 200 μm. Representative images are shown.

Figure 2: Cell yields counted from the number of initially attached cells. (a) The ratio of the number of low-pressure cells to the number of high-pressure cells for each donor on day 1 after isolation; passage 0. p<0.05 (∗) is for harvesting area significance testing (outer thigh vs. abdomen). (b) The number of attached cells per 1 mL of lipoaspirate; passage 0. p<0.05 (∗) is for harvesting area significance testing (outer thigh vs. abdomen). The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

### 3.2. Cell Number

The number of cells obtained from the corresponding areas of the abdomen or the thigh under low negative pressure and under high negative pressure for the same donor was measured on days 1, 3, and 7. The ratio of the number of low-pressure cells to the number of high-pressure cells on a specific day of the culture from each donor showed median levels near 1.0 in cells from the inner thigh, outer thigh, and abdomen areas (Figure 3). There were no statistical differences in cell numbers between the outer thigh and abdomen areas on days 1 and 3 (Figure 4). When the groups of cells from the inner thigh and the outer thigh were evaluated together, we observed a higher cell number in thigh ADSCs than in abdomen ADSCs (p=0.048) on day 7 (Figure 4).

Figure 3: The influence of negative pressure on the number of ADSCs.
The ratio of the number of low-pressure cells to the number of high-pressure cells for each donor. The measurements were performed on ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area on day 1 (1D), day 3 (3D), and day 7 (7D); passage 2. No significant differences among the groups were observed.

Figure 4: The number of ADSCs. The ADSCs were harvested under low pressure and under high pressure from the inner thigh area, the outer thigh area, and the abdomen area. Days 1, 3, and 7; passage 2. On day 7, the thigh ADSCs (i.e., inner thigh + outer thigh) reached significantly higher (p=0.048) cell numbers than the abdomen ADSCs. p<0.05 (∗) is for harvesting area significance testing.

### 3.3. Doubling Time

The doubling time was calculated between days 1 and 3 (i.e., 48 hours of cell culture). There were similar median values in all sample groups, from 24.99 hours (low abdomen) to 28.65 hours (high inner thigh) (Figure 5). No significant differences were observed between the sample groups.

Figure 5: Population doubling time. Population doubling time of low-pressure ADSCs and high-pressure ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area. No significant differences were observed among the groups investigated here.

### 3.4. Viability and Diameter

No significant differences were found in the viability of the cells, measured by the trypan blue exclusion test, on day 1 (from 88.0% for low abdomen to 93.6% for low outer thigh), on day 3 (from 93.5% for high abdomen to 96.6% for high outer thigh), or on day 7 (from 90.3% for high inner thigh to 95.9% for high outer thigh) (Table 2). We observed a significantly larger diameter of outer thigh ADSCs than of abdomen ADSCs (p=0.038) on day 1. However, no significant differences in diameter were observed on day 3 or on day 7 (Table 3).

Table 2: The viability of ADSCs.
The viability (%) of ADSCs harvested under low pressure and under high pressure from the inner thigh area, from the outer thigh area, and from the abdomen area on days 1, 3, and 7 in passage 2. No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 1 median | Day 1 IQ range | Day 3 median | Day 3 IQ range | Day 7 median | Day 7 IQ range |
| --- | --- | --- | --- | --- | --- | --- |
| Low I thigh | 91.7 | 90.5–94.6 | 96.2 | 93.1–96.5 | 93.2 | 92.7–95.8 |
| High I thigh | 91.9 | 89.9–93.2 | 94.8 | 92.7–95.3 | 90.3 | 88.6–94.6 |
| Low O thigh | 93.6 | 84.8–95.8 | 93.8 | 91.2–96.5 | 95.2 | 89.4–96.9 |
| High O thigh | 93.5 | 80.7–94.5 | 96.6 | 94.3–97.3 | 95.9 | 95.6–97.0 |
| Low abdomen | 88.0 | 87.5–90.0 | 94.6 | 90.8–95.3 | 94.8 | 91.7–97.1 |
| High abdomen | 92.4 | 90.3–93.7 | 93.5 | 92.4–95.0 | 95.2 | 93.2–97.0 |

Table 3: The diameter of ADSCs (mean ± SD, in microns). The diameter of ADSCs was measured using the Vi-CELL XR Cell Counter on days 1, 3, and 7; p<0.05 (∗) is for harvesting area significance testing (i.e., outer thigh and abdomen). The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 1 | Day 3 | Day 7 |
| --- | --- | --- | --- |
| Low I thigh | 16.76 ± 0.74 | 14.67 ± 0.13 | 12.55 ± 0.38 |
| High I thigh | 15.73 ± 0.39 | 14.92 ± 0.26 | 12.91 ± 1.18 |
| Low O thigh | 16.32 ± 0.82 | 14.71 ± 1.46 | 12.97 ± 0.49 |
| High O thigh | 17.04 ± 0.80 | 14.52 ± 1.10 | 13.77 ± 0.68 |
| Low abdomen | 15.67 ± 1.33 | 14.52 ± 1.48 | 13.39 ± 0.87 |
| High abdomen | 15.70 ± 1.27 | 14.93 ± 1.43 | 13.86 ± 1.10 |

### 3.5. Cell Mitochondrial Activity

The activity of mitochondrial enzymes in ADSCs, considered as an indirect indicator of cell proliferation activity, was measured on days 3 and 7 after seeding.
The ratio of the mitochondrial activity of the low-pressure cells to the mitochondrial activity of the high-pressure cells on a specific day of the culture from each donor revealed median levels near 1.0 in cells from the inner thigh, outer thigh, and abdomen areas, and no significant differences were observed between the low-pressure cells and the high-pressure cells (Figure 6). Similarly, there were no significant differences in the mitochondrial activity of cells from different donor sites on day 3 (Table 4). On day 7, we observed a tendency toward lower mitochondrial activity of inner thigh ADSCs than of the other harvesting sites; however, no statistical analysis was performed due to the relatively small sample size.

Figure 6: The influence of negative pressure on the mitochondrial activity of ADSCs. The ratio of the mitochondrial activity of low-pressure cells to the mitochondrial activity of high-pressure cells obtained for each donor. The measurements were performed on ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area on day 3 (3D) and on day 7 (7D). No significant differences among the observed groups were found.

Table 4: The cell mitochondrial activity of ADSCs, measured on days 3 and 7 (absorbance; mean ± SD). No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 3 | Day 7 |
| --- | --- | --- |
| Low I thigh | 0.38 ± 0.25 | 0.41 ± 0.22 |
| High I thigh | 0.34 ± 0.29 | 0.41 ± 0.31 |
| Low O thigh | 0.53 ± 0.17 | 0.69 ± 0.14 |
| High O thigh | 0.52 ± 0.24 | 0.62 ± 0.10 |
| Low abdomen | 0.51 ± 0.25 | 0.71 ± 0.29 |
| High abdomen | 0.47 ± 0.24 | 0.69 ± 0.29 |

### 3.6. Flow Cytometry

The percentage of cells positive for typical markers of mesenchymal stromal cells, i.e., CD105, CD90, CD73, and CD29, was very high in ADSCs obtained from all tested sources. No significant differences were found in the presence of these markers in cells obtained from lipoaspirates taken at different negative pressures and from different harvesting sites (Table 5). However, slightly lower and more variable values were obtained in abdomen-derived ADSCs. Representative images of CD90 and CD29 immunostaining are shown in Figure 7(b). We also observed variability in the percentage of CD146+ cells among the donors (from 3.9% in low inner thigh and low outer thigh to 10.9% in low abdomen) (Figure 7(a)). This variability was slightly higher in ADSCs from the abdomen area and was not dependent on negative pressure. The percentage of cells bearing hematopoietic and endothelial cell markers, namely, CD45, CD34, and CD31, was very low and showed no significant differences between cells obtained at different negative pressures and from different donor sites (Table 5).

Table 5: The percentage of CD surface markers in ADSCs. The percentage of CD105-, CD90-, CD73-, CD29-, CD146-, CD45-, CD31-, and CD34-positive ADSCs, given as median (IQ range). No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.
Group of cells CD markers (% positive cells) CD105 CD90 CD73 CD29 Median IQ range Median IQ range Median IQ range Median IQ range Low I thigh 99.9 99.2-99.9 99.5 99.5-99.7 99.9 99.8-100 99.8 99.2-100 High I thigh 99.9 94.1-99.9 99.6 99.3-99.8 99.9 99.9-100 99.8 99.8-100 Low O thigh 99.9 98.3-100 99.6 99.2-99.9 100 99.9-100 99.8 99.8-100 High O thigh 99.9 96.2-99.9 99.6 99.2-99.9 100 99.9-100 99.9 99.8-100 Low abdomen 99.5 82.3-99.9 99.4 97.5-99.8 99.8 99.6-99.9 99.6 90.5-99.8 High abdomen 98.9 89.1-99.8 99.5 97.3-99.8 99.8 99.6-99.9 99.6 95.1-99.8 Group of cells CD markers (% positive cells) CD146 CD45 CD31 CD34 Median IQ range Median IQ range Median IQ range Median IQ range Low I thigh 3.9 2.9-4.5 4.3 3.9-4.7 0.5 0.3-1.0 0.4 0.3-0.7 High I thigh 6.0 2.5-7.4 3.3 2.9-4.0 0.8 0.4-1.0 0.8 0.4-1.7 Low O thigh 3.9 1.3-5.2 1.8 1.5-6.9 0.5 0.2-0.7 1.1 0.4-6.3 High O thigh 5.4 2.7-24.6 1.8 1.1-12.6 0.6 0.1-2.4 0.9 0.3-6.1 Low abdomen 10.9 3.4-35.4 5.2 4.5-7.5 0.3 0.2-0.5 1.0 0.5-1.6 High abdomen 4.7 2.6-28.5 4.1 3.1-5.4 0.4 0.2-1.0 0.9 0.5-1.7Figure 7 (a) The percentage of CD146-positive cells in each group of cells. No significant differences among the harvesting sites were observed. (b) The immunofluorescence staining of CD29 and CD90 in ADSCs. Cell nuclei are counterstained with Hoechst 33342. Olympus microscope IX71. Scale bar 200μm (CD29) and 100 μm (CD90). (a) (b) ## 3.1. Growth of Cells after Isolation and Cell Yields In passage 0, we observed slight differences in the range of cell adhesion and growth among the cells harvested from various donors. However, the cells from all donors usually reached 70% or 80% confluence by day 10. Figure1 shows representative images of the process of adhesion and growth in ADSCs after isolation from the same patient. On day 1 after isolation, the number of attached cells per 1 mL of lipoaspirate was counted in each sample. 
The ratio of attached low-pressure cells to attached high-pressure cells for each donor showed a median level near 1.0 for the outer thigh region, which means a similar number of attached cells for both pressures (Figure 2(a)). However, the median level of this ratio (0.79) was significantly lower for the abdomen region (Figure 2(a)), which indicates higher cell yields from high-pressure lipoaspirates from this harvesting site. We observed a significantly (2-fold to 3-fold) higher number of attached cells from the outer thigh region than from the abdomen region (Figure 2(b)). The inner thigh region was not statistically compared with the other harvesting sites due to the relatively small group of samples.

Figure 1 The process of attachment, spreading, and growth in ADSCs from the same patient on days 2, 5, and 7 after isolation. The ADSCs were isolated from the inner thigh area and from the abdomen area, under low negative pressure (-200 mmHg) and under high negative pressure (-700 mmHg). Passage 0. Scale bar 200 μm. Representative images are shown.

Figure 2 Cell yields counted from the number of initially attached cells. (a) The ratio of the number of low-pressure cells to the number of high-pressure cells for each donor on day 1 after isolation; passage 0. p<0.05 (∗) is for harvesting area (outer thigh vs. abdomen) significance testing. (b) The number of attached cells per 1 mL of lipoaspirate; passage 0. p<0.05 (∗) is for harvesting area significance testing (outer thigh vs. abdomen). The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

## 3.2. Cell Number

The number of cells obtained from the corresponding areas of the abdomen or the thigh under low negative pressure and under high negative pressure for the same donor was measured on days 1, 3, and 7.
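The per-donor comparison used throughout these sections (the low-pressure count divided by the high-pressure count for the same donor, summarized by the median across donors) can be sketched in a few lines. The counts below are illustrative placeholders, not the study's raw data.

```python
from statistics import median

# Hypothetical day-1 attached-cell counts per 1 mL of lipoaspirate
# (illustrative placeholders, not the study's raw data).
# Each pair is (low-pressure count, high-pressure count) for one donor.
abdomen_counts = [(150_000, 190_000), (120_000, 160_000), (140_000, 170_000)]

# Ratio of low-pressure to high-pressure yield for each donor;
# a median near 1.0 means negative pressure made little difference.
ratios = [low / high for low, high in abdomen_counts]
print(f"median low/high ratio: {median(ratios):.2f}")
```

A median clearly below 1.0, as in the abdomen group above, corresponds to higher yields under high negative pressure.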
The ratio of the number of low-pressure cells to the number of high-pressure cells on a specific day of the culture from each donor showed median levels near 1.0 in cells from the inner thigh, outer thigh, and abdomen areas (Figure 3). There were no statistical differences in cell numbers between the outer thigh and abdomen areas on days 1 and 3 (Figure 4). When the groups of cells from the inner thigh and the outer thigh were evaluated together, we observed a higher cell number in thigh ADSCs than in abdomen ADSCs (p=0.048) on day 7 (Figure 4).

Figure 3 The influence of negative pressure on the number of ADSCs. The ratio of the number of low-pressure cells to the number of high-pressure cells for each donor. The measurements were performed on ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area on day 1 (1D), day 3 (3D), and day 7 (7D); passage 2. No significant differences among the groups were observed.

Figure 4 The number of ADSCs. The ADSCs were harvested under low pressure and under high pressure from the inner thigh area, the outer thigh area, and the abdomen area. Days 1, 3, and 7; passage 2. On day 7, the thigh ADSCs (i.e., inner thigh+outer thigh) reached significantly higher (p=0.048) cell numbers than the abdomen ADSCs. p<0.05 (∗) is for harvesting area significance testing.

## 3.3. Doubling Time

The doubling time was calculated between days 1 and 3 (i.e., 48 hours of cell culture). There were similar median values in all sample groups, from 24.99 hours (low abdomen) to 28.65 hours (high inner thigh) (Figure 5). No significant differences were observed between the sample groups.

Figure 5 Population doubling time. Population doubling time of low-pressure ADSCs and high-pressure ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area. No significant differences were observed among the groups investigated here.
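The doubling time between two counting days can be computed from two cell counts under an exponential-growth assumption. This is a minimal sketch with made-up counts, not the study's data.

```python
import math

def doubling_time(n_start: float, n_end: float, hours: float) -> float:
    """Population doubling time, assuming exponential growth over the interval."""
    return hours * math.log(2) / math.log(n_end / n_start)

# Illustrative counts (not the study's raw data): cells counted on day 1
# and on day 3, i.e., over 48 hours of culture, as in Section 3.3.
dt = doubling_time(n_start=50_000, n_end=190_000, hours=48)
print(f"doubling time: {dt:.2f} h")
```

With these placeholder counts the result comes out near 25 hours, the same order as the medians reported above.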
## 3.4. Viability and Diameter

No significant differences were found in the viability of the cells, measured by the trypan blue exclusion test, on day 1 (from 88.0% for low abdomen to 93.6% for low outer thigh), on day 3 (from 93.5% for high abdomen to 96.6% for high outer thigh), and on day 7 (from 90.3% for high inner thigh to 95.9% for high outer thigh) (Table 2). We observed a significantly larger diameter of outer thigh ADSCs than of abdomen ADSCs (p=0.038) on day 1. However, no significant differences in diameter were observed on day 3 and on day 7 (Table 3).

Table 2 The viability of ADSCs (%) harvested under low pressure and under high pressure from the inner thigh area, from the outer thigh area, and from the abdomen area on days 1, 3, and 7 in passage 2. No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size. Values are median (IQ range).

| Group of cells | Day 1 | Day 3 | Day 7 |
|---|---|---|---|
| Low I thigh | 91.7 (90.5-94.6) | 96.2 (93.1-96.5) | 93.2 (92.7-95.8) |
| High I thigh | 91.9 (89.9-93.2) | 94.8 (92.7-95.3) | 90.3 (88.6-94.6) |
| Low O thigh | 93.6 (84.8-95.8) | 93.8 (91.2-96.5) | 95.2 (89.4-96.9) |
| High O thigh | 93.5 (80.7-94.5) | 96.6 (94.3-97.3) | 95.9 (95.6-97.0) |
| Low abdomen | 88.0 (87.5-90.0) | 94.6 (90.8-95.3) | 94.8 (91.7-97.1) |
| High abdomen | 92.4 (90.3-93.7) | 93.5 (92.4-95.0) | 95.2 (93.2-97.0) |

Table 3 The diameter of ADSCs (microns), measured using the Vi-CELL XR Cell Counter on days 1, 3, and 7; p<0.05 (∗) is for harvesting area significance testing (i.e., outer thigh and abdomen). The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.
| Group of cells | Day 1 (mean±SD) | Day 3 (mean±SD) | Day 7 (mean±SD) |
|---|---|---|---|
| Low I thigh | 16.76±0.74 | 14.67±0.13 | 12.55±0.38 |
| High I thigh | 15.73±0.39 | 14.92±0.26 | 12.91±1.18 |
| Low O thigh | 16.32±0.82 | 14.71±1.46 | 12.97±0.49 |
| High O thigh | 17.04±0.80 | 14.52±1.10 | 13.77±0.68 |
| Low abdomen | 15.67±1.33 | 14.52±1.48 | 13.39±0.87 |
| High abdomen | 15.70±1.27 | 14.93±1.43 | 13.86±1.10 |

## 3.5. Cell Mitochondrial Activity

The activity of mitochondrial enzymes in ADSCs, considered as an indirect indicator of cell proliferation activity, was measured on days 3 and 7 after seeding. The ratio of the mitochondrial activity of the low-pressure cells to the mitochondrial activity of the high-pressure cells on a specific day of the culture from each donor revealed median levels near 1.0 in cells from the inner thigh, outer thigh, and abdomen areas, and no significant differences were observed between the low-pressure cells and the high-pressure cells (Figure 6). Similarly, there were no significant differences in the mitochondrial activity of cells from different donor sites on day 3 (Table 4). On day 7, we observed a tendency toward lower mitochondrial activity of inner thigh ADSCs than of the other harvesting sites; however, no statistical analysis was performed due to the relatively small sample size.

Figure 6 The influence of negative pressure on the mitochondrial activity of ADSCs. The ratio of the mitochondrial activity of low-pressure cells to the mitochondrial activity of high-pressure cells obtained for each donor. The measurements were performed on ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area on day 3 (3D) and on day 7 (7D). No significant differences were observed among the groups.

Table 4 The cell mitochondrial activity of ADSCs measured on days 3 and 7. No significant difference was observed between the outer thigh and the abdomen harvesting sites.
The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 3 absorbance (mean±SD) | Day 7 absorbance (mean±SD) |
|---|---|---|
| Low I thigh | 0.38±0.25 | 0.41±0.22 |
| High I thigh | 0.34±0.29 | 0.41±0.31 |
| Low O thigh | 0.53±0.17 | 0.69±0.14 |
| High O thigh | 0.52±0.24 | 0.62±0.10 |
| Low abdomen | 0.51±0.25 | 0.71±0.29 |
| High abdomen | 0.47±0.24 | 0.69±0.29 |

## 3.6. Flow Cytometry

The percentage of cells positive for typical markers of mesenchymal stromal cells, i.e., CD105, CD90, CD73, and CD29, was very high in ADSCs obtained from all tested sources. No significant differences were found in the presence of these markers in cells obtained from lipoaspirates taken at different negative pressures and from different harvesting sites (Table 5). However, slightly lower and more variable values were obtained in abdomen-derived ADSCs. Representative images of CD90 and CD29 immunostaining are shown in Figure 7(b). We also observed variability in the percentage of CD146+ cells among the donors (from 3.9% in low inner thigh and low outer thigh to 10.9% in low abdomen) (Figure 7(a)). This variability was slightly higher in ADSCs from the abdomen area and was not dependent on negative pressure. The percentage of cells bearing hematopoietic and endothelial cell markers, namely, CD45, CD34, and CD31, was very low and showed no significant differences between cells obtained at different negative pressures and from different donor sites (Table 5).

Table 5 The percentage of CD105-, CD90-, CD73-, CD29-, CD146-, CD45-, CD31-, and CD34-positive ADSCs. No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.
Values are % positive cells: median (IQ range).

| Group of cells | CD105 | CD90 | CD73 | CD29 |
|---|---|---|---|---|
| Low I thigh | 99.9 (99.2-99.9) | 99.5 (99.5-99.7) | 99.9 (99.8-100) | 99.8 (99.2-100) |
| High I thigh | 99.9 (94.1-99.9) | 99.6 (99.3-99.8) | 99.9 (99.9-100) | 99.8 (99.8-100) |
| Low O thigh | 99.9 (98.3-100) | 99.6 (99.2-99.9) | 100 (99.9-100) | 99.8 (99.8-100) |
| High O thigh | 99.9 (96.2-99.9) | 99.6 (99.2-99.9) | 100 (99.9-100) | 99.9 (99.8-100) |
| Low abdomen | 99.5 (82.3-99.9) | 99.4 (97.5-99.8) | 99.8 (99.6-99.9) | 99.6 (90.5-99.8) |
| High abdomen | 98.9 (89.1-99.8) | 99.5 (97.3-99.8) | 99.8 (99.6-99.9) | 99.6 (95.1-99.8) |

| Group of cells | CD146 | CD45 | CD31 | CD34 |
|---|---|---|---|---|
| Low I thigh | 3.9 (2.9-4.5) | 4.3 (3.9-4.7) | 0.5 (0.3-1.0) | 0.4 (0.3-0.7) |
| High I thigh | 6.0 (2.5-7.4) | 3.3 (2.9-4.0) | 0.8 (0.4-1.0) | 0.8 (0.4-1.7) |
| Low O thigh | 3.9 (1.3-5.2) | 1.8 (1.5-6.9) | 0.5 (0.2-0.7) | 1.1 (0.4-6.3) |
| High O thigh | 5.4 (2.7-24.6) | 1.8 (1.1-12.6) | 0.6 (0.1-2.4) | 0.9 (0.3-6.1) |
| Low abdomen | 10.9 (3.4-35.4) | 5.2 (4.5-7.5) | 0.3 (0.2-0.5) | 1.0 (0.5-1.6) |
| High abdomen | 4.7 (2.6-28.5) | 4.1 (3.1-5.4) | 0.4 (0.2-1.0) | 0.9 (0.5-1.7) |

Figure 7 (a) The percentage of CD146-positive cells in each group of cells. No significant differences among the harvesting sites were observed. (b) The immunofluorescence staining of CD29 and CD90 in ADSCs. Cell nuclei are counterstained with Hoechst 33342. Olympus microscope IX71. Scale bar 200 μm (CD29) and 100 μm (CD90).

## 4. Discussion

A set of experiments was performed to reveal the influence of negative pressure and harvesting site on the characteristics of isolated ADSCs from a number of donors. For future use in tissue engineering, we were mainly interested in significant differences in the basic adhesion and growth characteristics of ADSCs in passages 1 and 2 after isolation. Our study provided an opportunity to compare isolated cells from the same topographic area that had been harvested under low negative pressure and under high negative pressure from each donor.
In passage 0, we observed slight differences in the rate of attachment and spreading and in the growth of the ADSCs from different donors after the cells had been isolated. These initial interdonor differences may have been caused by differences in ADSC frequency in the obtained SVF cells. Varying frequencies of ADSCs, determined by a colony-forming unit assay and/or by a limiting dilution assay, have been found in adipose tissue harvested from various donor sites [12] or when different harvesting procedures are used [21]. Specifically, Jurgens et al. observed a significantly higher frequency of ADSCs isolated from adipose tissue harvested from the abdomen region than from the hip/thigh region [12]. Oedayrajsingh-Varma et al. observed a significantly higher frequency of ADSCs isolated from adipose tissue obtained by resection and tumescent liposuction than from tissue obtained by ultrasound-assisted liposuction [21]. In those studies, the absolute number of nucleated cells in the harvested adipose tissue and the number of viable cells in the stromal vascular fraction were not affected by the anatomical site or by the type of surgical procedure. However, in other studies, the anatomical site did have an influence on the total SVF and on the ADSC yields. Iyyanki et al. observed significantly higher total SVF yields from the abdominal harvesting site than from the flank and axilla harvesting sites; however, the ADSC yields did not differ significantly [18]. In a study by Fraser et al., the abdomen-adipocyte yield was 1.7-fold higher than the hip-adipocyte yield, and the adipocyte yields displayed large donor-to-donor variability [22]. However, neither the nucleated cell yields nor the preadipocyte yields differed significantly [22]. A large range of ADSC yields among donors was also observed, and no statistical differences were found between the abdomen, the thigh, and the mammary areas [21].
By contrast, our study showed a potential influence of the harvesting site, as we observed a higher number of attached cells per 1 mL of lipoaspirate for the outer thigh area than for the abdomen area on day 1 after isolation in in vitro culture. The differing results concerning the influence of the harvesting site on cell yields may reflect differences in the target cell populations studied in different papers. For plastic surgery purposes, the cell yields of all nucleated cells, adipocytes, preadipocytes, and SVF are also a subject of interest. However, tissue engineering focuses more on the yields of adherent ADSCs that can be further proliferated and/or differentiated.

The total number of harvested cells can also be influenced by the level of negative pressure used during the liposuction procedure. In a study by Mojallal et al., a lower negative pressure (-350 mmHg) during liposuction resulted in higher SVF yields than a higher negative pressure (-700 mmHg) [10]. Similarly, in a more recent study by Cheriyan et al., higher counts and higher viability of adipocytes were found in lipoaspirates obtained at a lower negative pressure (-250 mmHg) than at a higher negative pressure (-760 mmHg) [23]. However, each of these studies was performed on three patients only. In our study, the number of attached cells after the isolation was similar for low- and high-pressure cells from the outer thigh region, whereas the abdomen region was characterised by initially higher yields of attached cells under high pressure.

Although the initial SVF yields, adipocyte yields, and ADSC frequency in lipoaspirates can vary, later differences during in vitro ADSC culturing were of particular interest to us. Our study was focused on the number, the mitochondrial activity, and the viability of the ADSCs in subsequent passaging. We observed similar cell numbers and mitochondrial activity independently of low or high negative pressure for a specific region.
This means that the subsequent proliferation of ADSCs was not affected by the negative pressure used during the liposuction procedure. Chen et al. observed initially higher proliferation activity (assessed by Cell Counting Kit-8) in lower negative pressure SVF cells than in higher negative pressure SVF cells from the abdominal area in passages 1 and 2 [11]. However, these significant differences did not appear in passage 3 [11]. Similarly, our results could also provide support for the theory that the differences in proliferation activity between low-pressure cells and high-pressure cells become less noticeable after passaging during in vitro cultivation. Interestingly, other researchers have reported that different apparatuses and different levels of negative pressure during liposuction do not influence the percentage and the viability of adipocytes and isolated mesenchymal stromal cells [9]. The discrepancies among the comparative studies may also have arisen because different cell populations were being studied. That is, negative pressure techniques may have a bigger effect on adipocytes, due to their larger size, while they may have only a minimal effect on smaller cells, including progenitor cells [22]. It is therefore necessary to consider carefully which types of cells from adipose tissue are to be harvested and used. In our study, the outer thigh ADSCs were bigger in diameter in the cell suspension on day 1 after seeding than the abdomen ADSCs. However, the cells were of similar diameters on days 3 and 7.

The function and the representation of cell types in adipose tissue vary among the topographic regions. Preadipocytes and ADSCs obtained from subcutaneous, mesenteric, omental, or intrathoracic fat depots display distinct expression profiles and differentiation capacity [24, 25]. Subcutaneous fat depots are easier to obtain than other fat depots.
Although the morphology of subcutaneous and visceral fat did not differ significantly, the harvested subcutaneous ADSCs displayed significantly higher cell numbers, a shorter doubling time, and higher CD146 expression than visceral ADSCs in later passages [26]. Moreover, within the subcutaneous depots, superficial depots seem to yield cells with better stemness and multipotency characteristics than deep subcutaneous depots [27]. Until now, the harvesting site of fat depots has usually been selected on the basis of actual need or choice. However, the particular anatomic source of the harvested adipose tissue can play a role in further reconstructive surgery and cell-based therapies. The cells from different fat depots express different homeobox (Hox) genes. This supports the idea that they are of different embryonic origin, and so the donor and the host adipose tissue sites need to be carefully matched [28]. Kouidhi et al. compared the gene expression of human knee ADSCs with chin ADSCs [29]. They found more enhanced expression of Pax3 (i.e., a neural crest marker) in chin ADSCs than in knee ADSCs, whereas the expression of most of the Hox genes that are typical for the mesodermal environment was higher in knee ADSCs than in chin ADSCs. In later passages, chin ADSCs also displayed higher self-renewal potential [29].

In our study, we obtained similar numbers and similar viability of ADSCs from the inner thigh area, the outer thigh area, and the abdomen area on days 1 and 3. Thus, our results are in accordance with studies by other researchers, in which similar growth kinetics were found in ADSCs from the abdomen area and from the hip/thigh area [12, 30]. However, with similar cell numbers on days 1 and 3, we observed a tendency of thigh ADSCs (inner thigh+outer thigh) to reach higher values than abdomen ADSCs on day 7 (p=0.048).
It therefore seems that there may be a significant difference in later cell numbers between the harvesting sites for most of the patients included in our study, though we observed large variation among the donors. Interestingly, we also observed a tendency toward lower mitochondrial activity of inner thigh ADSCs than of outer thigh ADSCs and abdomen ADSCs on day 7. These results may correspond with the slightly higher cell numbers of inner thigh ADSCs on day 7, when the cells had already reached confluence and had reduced their proliferation activity. However, the smaller number of inner thigh ADSC samples compared with the other groups (i.e., outer thigh ADSCs and abdomen ADSCs) may also have affected the results. The harvesting site can also influence the colony-forming unit (CFU) count in isolated ADSCs. Fraser et al. observed that the CFU count was higher in hip ADSCs than in abdomen ADSCs [22]. This finding could be in accordance with a higher proliferation rate of hip/thigh ADSCs in later time intervals of the culture [22].

During our experiments, we observed a nonparametric distribution of the donors' data. The interdonor variabilities that were not dependent on the harvesting site or on negative pressure may have been caused by other donor factors. Age and BMI are other factors known to play a considerable role in SVF and ADSC yields and characteristics [19]. However, research findings regarding the influence of age and BMI on ADSC yields are often contradictory [31–33]. For example, in the study by de Girolamo et al., the cellular yield of ADSCs was significantly greater from older patients than from younger patients [31], while in the study by Faustini et al., the patient's age seemed not to influence the cell yield [32]. Significant donor-to-donor variability has also been reported in multilineage differentiation capacity, self-renewal capacity, and immunomodulatory cytokine secretion [34].
Although some of these variabilities can be explained by a medical history of breast cancer and subsequent treatment, there were also significant differences among donors who had not been diagnosed with cancer [34]. Atherosclerosis is another donor factor which can alter the secretome and reduce the immunomodulatory capacity of ADSCs due to impaired mitochondrial functions [35]. In addition, ADSCs isolated from patients with renovascular disease exhibited a higher level of DNA damage and a lower migratory capacity than ADSCs from healthy donors [36]. In another study, ADSCs isolated from patients suffering from scleroderma, an autoimmune connective tissue disease, showed a lower proliferation rate and a lower migration capacity than the control ADSCs from healthy donors [37].

Many papers have reported on various donor-to-donor factors that have a potential impact on the characteristics of mesenchymal stromal cells. In addition, it seems that there are many cell-to-cell variations within the same donor. This cell-to-cell heterogeneity can be manifested both in vitro and in vivo by interclonal functional and molecular variation, e.g., variable differentiation capacity, coexisting fast-growing and slow-growing clones, and other differences in the proteome and the transcriptome [38]. The percentage of various clones in MSCs develops and changes during cell passaging. Even within a single MSC clone, there is a growing body of evidence that intraclonal heterogeneity alters cell behaviour and characteristics [38].

In most of the donors, we confirmed a high level of positivity of the isolated cells for CD105, CD90, CD73, and CD29 (>80% in ADSCs) and a low level of positivity or absence of CD45, CD31, and CD34 (≤2% in ADSCs), according to the guidelines for characterizing ADSCs [39]. We observed no significant differences in the presence of CD markers depending on negative pressure or on the harvesting site.
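The marker criteria cited above from the guidelines (>80% positive for the stromal markers, ≤2% for the hematopoietic/endothelial markers) can be expressed as a simple check. The function name and the example values below are hypothetical, not taken from the study.

```python
# Hypothetical helper illustrating the cited criteria [39]: stromal markers
# > 80% positive, hematopoietic/endothelial markers <= 2% positive.
POSITIVE_MARKERS = ("CD105", "CD90", "CD73", "CD29")
NEGATIVE_MARKERS = ("CD45", "CD31", "CD34")

def meets_adsc_criteria(percent_positive: dict) -> bool:
    ok_pos = all(percent_positive[m] > 80.0 for m in POSITIVE_MARKERS)
    ok_neg = all(percent_positive[m] <= 2.0 for m in NEGATIVE_MARKERS)
    return ok_pos and ok_neg

# Example values loosely echoing the low outer thigh medians in Table 5.
sample = {"CD105": 99.9, "CD90": 99.6, "CD73": 100.0, "CD29": 99.8,
          "CD45": 1.8, "CD31": 0.5, "CD34": 1.1}
print(meets_adsc_criteria(sample))  # → True
```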
Our results are in accordance with those reported by other researchers, who have found no differences in CD markers in SVF harvested from different sites [12, 14, 30]. In another study, the presence of pericytes, progenitor endothelial cells, preadipocyte cells, and mesenchymal cells in SVF was not influenced by different negative pressures [9]. In addition, in the study by Chen et al., where higher negative pressure had a negative influence on yields, on growth, and on the secretion of growth factors, no differences in CD markers were found [11]. Interestingly, we observed variability in the presence of CD146 among the donors. The presence of CD146+ cells in subcutaneous depots was also not negligible in a study by Lee et al. [26]. CD146 positivity can be a sign of pericytes. Pericytes are cells in contact with small vessels in the adipose tissue, and they are also present in the harvested SVF [40]. The origin of the pericyte is not the only possible explanation. For a review of other theories explaining the presence of CD146, see [41]. In MSCs, high expression of CD146 is associated with a commitment towards vascular smooth muscle cell lineage [42]. This commitment could be interesting for vascular tissue engineering, when differentiating ADSCs towards vascular smooth muscle cells is required. CD146+ cells in combination with human umbilical vein endothelial cells (HUVECs) were also reported to support the formation and the elongation of capillary-like tubular structures [26]. Lee et al. also observed greater proliferation of CD146+ cells than of CD146- cells; however, the percentage of CD146+ cells in an ADSC culture decreased with subsequent subculturing [26]. It seems that the CD146 expression among ADSCs is relatively heterogeneous and could play an important role in potential specific tissue engineering applications. 
The presence of other hematopoietic and endothelial cell markers (e.g., CD34, CD45, and CD31) can influence future therapies using SVF or ADSCs. The optimal ratio of ADSCs and hematopoietic stem cell progenitors in isolated SVF, defined by specific CD surface markers, seems to be the key to successful stem cell therapies [43].

### 4.1. Limitation

The first limitation of our study is the relatively small sample size, with uneven numbers of samples from each donor site (i.e., inner thigh (n=3), outer thigh (n=7), and abdomen (n=9)). Because the inner thigh group had the smallest sample size, we did not make a statistical comparison between this group of cells and outer thigh ADSCs or abdomen ADSCs. A greater number of donors would be desirable. However, we assume that for ADSC characterization under in vitro culture conditions and for later tissue engineering purposes, the sample size is sufficient.

The second limitation of the study is that it was primarily focused on negative pressure and on the harvesting site and not on other patient factors, such as age, gender, or BMI; these other characteristics were therefore not completely uniform among the donors. Nevertheless, the studied groups showed similar age and BMI parameters with normal data distribution.

The third limitation of the study is that it was focused on the later use of ADSCs in tissue engineering. Therefore, we characterized only the fraction of isolated ADSCs that adhered to the plastic culture flasks. The yields of ADSCs were counted after they had adhered to the flasks, and their characteristics (cell proliferation, flow cytometry analysis of surface markers) were studied in subsequent passages. No other cell types (i.e., adipocytes or all nucleated cells) were analysed in this study with respect to their yields or their viability.
The conclusions concerning the influence of negative pressure and harvesting site therefore refer only to plastic-adherent ADSCs.

To characterize the ADSCs in in vitro culture conditions, we chose passage 1 and passage 2, depending on the specific analyses. These passages were the same for all analysed ADSCs. However, the growth dynamics of cells is known to vary from passage to passage, and this variability can also be specific to each isolated ADSC population.

## 5. Conclusion

In our study, we observed a significantly higher number of initially attached cells per 1 mL of lipoaspirate for the outer thigh region than for the abdomen region on day 1 after isolation. Different negative pressure was not a key determinant of cell yields for the outer thigh region, whereas high negative pressure had a positive influence on the cell yields of the abdomen region. However, for the subsequent culturing, no significant relationship was identified between the characteristics of isolated ADSCs and the level of negative pressure used during liposuction. In addition, the harvesting site influenced the ADSCs only mildly in some parameters on specific days of the culture (i.e., diameter on day 1). In general, no significant influence of the harvesting site was observed on the cell number, mitochondrial activity, viability, diameter, or the presence of CD markers. Thigh ADSCs reached a higher cell number than abdomen ADSCs on day 7 only in cases where cells from the inner thigh and outer thigh areas were evaluated together. However, we observed donor-to-donor variability in initial adhesion, in absolute cell numbers, and in the expression of some CD markers. Thus, our results suggest that donor-to-donor differences may be affected not only by the harvesting site and by negative pressure but also by other factors.
For subsequent in vitro culturing and use in tissue engineering, it seems that the harvesting site and the level of negative pressure do not have a crucial or limiting effect on basic ADSC characteristics. Nevertheless, it is necessary to make a thorough investigation of the area from which the ADSCs are to be harvested and the specific liposuction procedure that is to be used, with reference to the purpose for which the adipose tissue is being harvested.

---

*Source: 1016231-2020-02-10.xml*
# The Influence of Negative Pressure and of the Harvesting Site on the Characteristics of Human Adipose Tissue-Derived Stromal Cells from Lipoaspirates

**Authors:** Martina Travnickova; Julia Pajorova; Jana Zarubova; Nikola Krocilova; Martin Molitor; Lucie Bacakova

**Journal:** Stem Cells International (2020)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2020/1016231
---

## Abstract

Background. Adipose tissue-derived stromal cells (ADSCs) have great potential for cell-based therapies, including tissue engineering. However, various factors can influence the characteristics of isolated ADSCs. Methods. We studied the influence of the harvesting site, i.e., inner thigh (n=3), outer thigh (n=7), and abdomen (n=9), and of negative pressure, i.e., low (-200 mmHg) and high (-700 mmHg), on the characteristics of isolated ADSCs. We counted the initial yields of attached cells after isolation. In the subsequent passage, we studied the number, viability, diameter, doubling time, mitochondrial activity, and CD surface markers of the isolated ADSCs. Results. We revealed higher initial cell yields from the outer thigh region than from the abdomen region. Negative pressure did not influence the cell yields from the outer thigh region, whereas the yields from the abdomen region were higher under high negative pressure than under low negative pressure. In the subsequent passage, in general, no significant relationship was identified between the different negative pressures and the ADSC characteristics. No significant difference was observed between the characteristics of thigh ADSCs and abdomen ADSCs. Only on day 1 was the diameter significantly bigger in outer thigh ADSCs than in abdomen ADSCs. Moreover, we noted a tendency of thigh ADSCs (i.e., inner thigh+outer thigh) to reach a higher cell number on day 7. Discussion. The harvesting site and negative pressure can potentially influence the initial cell yields from lipoaspirates. However, for subsequent in vitro culturing and for use in tissue engineering, it seems that the harvesting site and the level of negative pressure do not have a crucial or limiting effect on basic ADSC characteristics.

---

## Body

## 1. Background

Stem cells of various origin are fundamental elements for cell-based therapies in regenerative medicine, particularly for tissue engineering.
Nowadays, tissue engineering tends to use stem cells that (1) are pluripotent or multipotent, (2) can be routinely harvested in large quantities, and (3) are surrounded by fewer ethical issues than other types. Mesenchymal stromal cells (MSCs) are multipotent plastic-adherent fibroblast-like cells. They can be harvested predominantly from adult organs and tissues, e.g., bone marrow, peripheral blood, adipose tissue, skin, skeletal muscle, dental pulp, brain, and endometrium [1]. Not only adult tissues but also extrafoetal tissues, such as placenta, umbilical cord tissue, amniotic membrane, and amniotic fluid, can serve as sources of MSCs. The characteristics and the differentiation of bone marrow-derived stromal cells (BMSCs) have been widely studied, as they were the first MSCs to be described. BMSCs provide favourable differentiation characteristics. However, the BMSC harvesting procedure is uncomfortable for donors, and adipose tissue-derived stromal cells (ADSCs) provide similar yields of isolated cells, together with greater subsequent proliferation capacity [2]. In recent years, ADSCs have become an ideal target for tissue engineering and cell-based therapies. A relatively easy harvesting procedure and the multipotent characteristics of ADSCs make these stromal cells suitable for various uses [3]. The possibility of autologous application in cell-based therapies can be a further advantage of ADSCs. The methods for isolating ADSCs from adipose tissue can be divided into enzymatic and nonenzymatic approaches [4, 5]. Until now, enzymatic digestion using collagenase has been the most widely performed procedure. However, newer alternative nonenzymatic techniques (e.g., vibration and centrifuging) can also be applied, especially for clinical purposes [6].
After enzymatic digestion and centrifugation, three separated parts are obtained, namely, the upper oily part containing adipocytes, the middle part consisting of digested tissue, and the reddish stromal vascular fraction (SVF) pellet at the bottom [7]. The SVF part is a mixture of distinct cell types consisting of ADSCs and variably also of pericytes, preadipocytes, endothelial precursor cells, endothelial cells, macrophages, smooth muscle cells, fibroblasts, and lymphocytes [5].A large number and range of studies focused on obtaining ADSCs have been published. The studies have investigated various fat-harvesting procedures, cell isolation procedures, and donor factors. All these factors can influence the viability, the yields, and the subsequent proliferation and differentiation of the isolated cells. Tumescent liposuction is used as one of the easiest procedures for harvesting adipose tissue. The negative pressure (vacuum) that is used during the liposuction procedure is an important factor that influences the quality and the amount of harvested tissue. Lee et al. studied the effect of different negative pressures (i.e., -381 mmHg and -635 mmHg) on fat grafting [8]. In their in vivo study, no significant differences in the weight or in the histology of the fat grafts were observed; moreover, higher negative pressure did not affect the viability of the fat grafts [8]. Similarly, in a study by Charles-de-Sá et al., no significant differences, either in the viability of the adipocytes or in the number of MSCs, were found in adipose tissue obtained under various negative pressures [9]. However, other studies have reported a significant influence of negative pressure on cell characteristics. Mojallal et al. measured greater cell yields in adipose tissue harvested under a lower negative pressure (-350 mmHg) than under a higher negative pressure (-700 mmHg) [10]. Similarly, Chen et al. 
reported more than 2-fold higher cell numbers in SVF isolated from adipose tissue harvested under a lower negative pressure (−225 ± 37 mmHg) than under a higher negative pressure (−410 ± 37 mmHg) [11]. They also reported faster cell growth and higher secretion of some growth factors in cells obtained under lower negative pressure in the initial passages [11]. The harvesting site of the superficial adipose tissue seems to be another important donor factor potentially influencing the viability and the proliferation of the isolated cells. Jurgens et al. compared the numbers of cells isolated from the abdomen area and from the hip/thigh area. They found a significantly higher frequency of ADSCs in SVF isolates derived from the abdomen area, but no significant differences were found in the absolute numbers of nucleated cells [12]. However, the osteogenic and chondrogenic differentiation capacity of the ADSCs was not affected by the harvesting site [12]. Padoin et al. observed higher cell yields from the lower abdomen and from the inner thigh than from other liposuction areas (i.e., upper abdomen, flank, trochanteric area, and knee) [13]. Differences in the viability and amount of SVF, and in the numbers of ADSCs after culturing, were also studied by Tsekouras and coworkers. In their study, the SVF from the outer thigh exhibited higher cell numbers [14]. This tendency also continued in subsequent cell culturing, where the outer and inner thigh samples both showed higher numbers of ADSCs than the abdomen, waist, or inner knee samples. Other studies reported no statistically significant differences in the volumes of fat grafts [15, 16] or in adipocyte viability [17] according to the donor sites. Not only the negative pressure during liposuction and the donor harvesting site but also different harvesting procedures [18] and other individual donor factors have been found to influence the viability, proliferation, and differentiation characteristics of ADSCs.
Further factors include body mass index (BMI), age, gender, intercurrent diseases, such as diabetes mellitus, and also radiotherapy and drug treatment [19]. There is a need to investigate and confirm the best harvesting conditions for ADSCs, which could help to bring them into routine use in clinical practice. Until now, studies have not been uniform and have focused predominantly on different cell types (adipocytes, preadipocytes, total SVF). The potential differences in the characteristics of ADSCs seem to be nonnegligible and need to be further clarified for future use in tissue engineering. The objective of our study was to investigate the influence of negative pressure during liposuction and also of the donor site on the yields of initially attached cells and on subsequent cell proliferation, achieved cell numbers, cell viability, diameter, and phenotypic markers of isolated ADSCs when cultured in in vitro conditions. ## 2. Materials and Methods ### 2.1. Group of Donors and Liposuction Procedure A comparative study was performed on samples of subcutaneous adipose tissue from 15 healthy donors after informed consent at Hospital Na Bulovce in Prague. The group of females (n=14) and one male (n=1) underwent tumescent liposuction, whereby adipose tissue from the inner thigh (n=3), from the outer thigh (n=7), and from the abdomen (n=9) was harvested. Harvesting was conducted in compliance with the tenets of the Declaration of Helsinki on experiments involving human tissues and under ethical approval issued by the Ethics Committee of Hospital Na Bulovce in Prague (August 21, 2014). The liposuctions were performed under sterile conditions, using tumescence. The tumescent solution contained 1000 mL of physiological solution with 1 mL of adrenaline (1 : 200,000) and 20 mL of 8.4% bicarbonate. In order to protect the harvested stromal cells from possible toxicity, no local anaesthetics were used.
We used a liposuction machine (MEDELA dominant) that enabled continuous negative pressure to be set, and we utilized negative pressures of -200 mmHg and -700 mmHg. Superficial fat tissue was harvested using a Coleman Style blunt cannula with 4 holes and an inner diameter of 3 mm. Both low negative pressure (i.e., -200 mmHg) and high negative pressure (i.e., -700 mmHg) were used during liposuction in selected harvesting sites for each donor. Specifically, in the abdominal region, low pressure was used on one side of the abdomen, while high pressure was applied on the opposite side of the abdomen. Similarly, in the outer and inner thigh regions, low pressure was applied on one leg and high pressure was applied on the contralateral leg (Scheme 1). A different cannula and vacuum suction container were used for low and high pressure harvesting to prevent contamination of low pressure harvesting material with high pressure harvesting material and vice versa. The age range of the donors was 26–53 years (mean age 37.8±7.8 years) and the BMI range was 19.60–36.17 kg/m2 (mean BMI 25.44±4.37 kg/m2) (Table 1). The donors did not suffer from diabetes or from hypertension, and they were not tobacco users. Scheme 1 Scheme of the experiment. Sites in the abdomen, the inner thigh, and the outer thigh where liposuction at low negative pressure (-200 mmHg) and at high negative pressure (-700 mmHg) was performed. After the cell isolation, the initial yields of attached cells were counted. In subsequent passages, the number, viability, diameter, doubling time, mitochondrial activity (all in passage 1), and CD surface markers (passage 2) of isolated ADSCs were evaluated. Table 1 Donors included in our study. The group of females (n=14) and one male (n=1; abdomen site) underwent tumescent liposuction, in which adipose tissue was harvested from the inner thigh (n=3), from the outer thigh (n=7), and from the abdomen (n=9).
In each harvesting site, the lipoaspirate was obtained both under low and under high negative pressure.

| Donor site | Age (years) | BMI (kg/m2) | No. of samples |
|---|---|---|---|
| Inner thigh | 42.0±4.6 | 27.70±7.40 | 3 |
| Outer thigh | 35.4±7.8 | 23.56±2.45 | 7 |
| Abdomen | 38.3±8.6 | 25.06±4.08 | 9 |
| Together | 37.8±7.8 | 25.44±4.37 | 19 samples from 15 donors |

### 2.2. Isolation of ADSCs The isolation procedure was performed on fresh lipoaspirates (within 2 hours after the liposuction procedure) according to the isolation protocol by Estes et al. [7]. However, we made some slight modifications, as described in our previous study [20]. In brief, the lipoaspirates were washed several times with phosphate-buffered saline (PBS; Sigma-Aldrich). Then, the lipoaspirate was digested, using PBS containing 1% (wt/vol) bovine serum albumin (BSA; Sigma-Aldrich) and type I collagenase 0.1% (wt/vol) (Worthington) for 1 hour at a temperature of 37°C. After the digestion procedure, the tissue was centrifuged, and the upper and middle layers were aspirated. The obtained SVF was washed three times. A filter with pores 100 μm in size (Cell Strainer, BD Falcon) was additionally used to filter the cell suspension of SVF right before seeding into culture flasks (75 cm2, TPP, Switzerland) in a density of 0.16 mL of original lipoaspirate/cm2. The isolated cells were cultured in Dulbecco’s modified Eagle medium (DMEM; Gibco), supplemented with 10% (vol/vol) foetal bovine serum (FBS; Gibco), gentamicin (40 μg/mL; LEK), and recombinant human fibroblast growth factor basic (FGF2; 10 ng/mL; GenScript). The primary cells, referred to as “passage 0,” were cultured until they reached 70%–80% confluence.
Then, the cells were passaged. For the experiments that followed (Scheme 1), the cells isolated from the lipoaspirate harvested under low negative pressure (i.e., -200 mmHg) are referred to as “low,” and the cells isolated from the lipoaspirate harvested under high negative pressure (i.e., -700 mmHg) are referred to as “high.” The compared groups of cells are referred to as low inner thigh (low I thigh), high inner thigh (high I thigh), low outer thigh (low O thigh), high outer thigh (high O thigh), low abdomen, and high abdomen. ### 2.3. Yields of Initially Attached Cells For the primary culture of isolated cells, as mentioned above, the seeding density was 0.16 mL of original lipoaspirate/cm2. On day 1 after isolation and seeding (passage 0), the culture medium was replaced with fresh medium, and the unattached cells were washed away. Then, the cell yields per 1 mL of lipoaspirate were counted from the number of attached cells, because only these cells are relevant for potential use in tissue engineering. Microphotographs of 4 to 6 randomly chosen microscopic fields for each sample were taken with a phase-contrast microscope and were analysed by manual cell counting. Then, the number of attached cells was compared depending on the negative pressure or on the harvesting site. ### 2.4. Cell Number, Viability, Diameter, and Doubling Time The cells from each donor, harvested under low and high negative pressure within the corresponding areas in the abdomen or in the thigh, were cultured and then analysed. The isolated cells in passage 1 were seeded into 12-well tissue culture polystyrene plates (TPP, Switzerland; well diameter 2.1 cm) in a density of 14,000 cells/cm2 (i.e., 50,000 cells/well) and were cultivated in DMEM+10% (vol/vol) FBS+10 ng/mL FGF2 for 7 days. The volume of the cell culture medium was 3 mL/well. The cells were cultivated in a humidified air atmosphere with 5% CO2 at a temperature of 37°C.
On days 1, 3, and 7, the cells were washed with PBS and were then detached by incubation with Trypsin-EDTA Solution (Sigma-Aldrich) for 4 minutes at 37°C. The effect of the Trypsin-EDTA solution was subsequently inhibited by adding a medium with FBS, and the cells were resuspended. The number, the viability, and the diameter of the detached cells in each well were measured using a Vi-CELL XR Cell Viability Analyzer (Beckman Coulter). In this analyser, the cell viability is evaluated by a trypan blue exclusion test. From 5 to 8 independent samples were analysed for each experimental group of a donor in each time interval. The cell population doubling time (DT) was calculated from the ADSC numbers, according to the following equation: DT = (t × ln 2)/(ln N − ln N0), where t represents the duration of culture, N represents the number of cells on day 3, and N0 represents the number of cells on day 1. ### 2.5. Cell Mitochondrial Activity The activity of mitochondrial enzymes is generally measured in order to estimate the cell proliferation activity. The isolated cells in passage 1 were seeded into 24-well tissue culture polystyrene plates (TPP, Switzerland; well diameter 1.5 cm) in a density of 14,000 cells/cm2 (i.e., 25,000 cells/well) and were cultivated in DMEM+10% FBS+10 ng/mL FGF2 for 7 days. The volume of cell culture medium was 1.5 mL/well. On days 3 and 7, a CellTiter 96® Aqueous One Solution Cell Proliferation Assay (MTS; Promega Corporation) was performed according to the manufacturer’s protocol. In brief, the principle of the MTS assay is based on a colorimetric change of the yellow tetrazolium salt to brown formazan. This change is brought about by the activity of mitochondrial enzymes. The absorbance was measured at a wavelength of 490 nm, using a VersaMax ELISA microplate reader (Molecular Devices LLC). From 5 to 6 independent samples were measured for each experimental group in each time interval. ### 2.6.
Flow Cytometry In passage 2, the cells were characterised by flow cytometry, using antibodies against specific surface CD markers. An evaluation was made of the percentage of cells in the population that contained standard markers of ADSCs, i.e., CD105 (also referred to as endoglin, a membrane glycoprotein which is part of the TGF-β receptor complex), CD90 (Thy-1, a thymocyte antigen belonging to the immunoglobulin superfamily), and CD73 (ecto-5′-nucleotidase, a glycosylphosphatidylinositol-anchored membrane protein). Other evaluated markers included CD29 (integrin β1, a component of receptors for collagen and fibronectin), CD146 (a melanoma cell adhesion molecule, a receptor for laminin), CD31 (also referred to as platelet-endothelial cell adhesion molecule-1, PECAM-1), and hematopoietic cell markers CD34 and CD45 [3]. In brief, the cells were washed with PBS and were incubated with Trypsin-EDTA for 4 minutes at 37°C. Subsequently, the medium with FBS was added and the cells were centrifuged (5 min, 300 g). The supernatant was aspirated off, and the cells were resuspended in PBS with 0.5% (wt/vol) BSA (Sigma-Aldrich). The cells were equally divided into aliquots (i.e., 250,000 cells/aliquot). FITC-, Alexa488-, Alexa647-, or PE-conjugated monoclonal antibodies, i.e., against CD105, CD45 (Exbio Praha), CD90 (BD Biosciences), CD73, CD146, CD31 (BioLegend), CD29 and CD34 (Invitrogen), were added separately into aliquots. The aliquots were incubated with the antibodies for 30 minutes at 4°C in dark conditions. Next, the stained cells were washed three times with PBS with 0.5% (wt/vol) BSA and were analysed with the Accuri C6 Flow Cytometer System (BD Biosciences). In each aliquot, 20,000 events were recorded for each CD surface marker. ### 2.7. Microscopy Techniques Phase-contrast microscopy was used to visualise the process of attachment, spreading, and growth in native ADSCs after isolation (passage 0).
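Stepping back to the doubling-time equation of Section 2.4, DT = t × ln 2 / (ln N − ln N0): a minimal sketch of the calculation, using made-up cell counts rather than the study's data (the function name and numbers are illustrative assumptions), could look like this.

```python
import math

def doubling_time(t, n, n0):
    """Population doubling time DT = t * ln(2) / (ln(N) - ln(N0)),
    where t is the culture duration (here in hours), n0 is the cell
    count at the start of the interval, and n is the count at its end."""
    return t * math.log(2) / (math.log(n) - math.log(n0))

# Hypothetical counts: 50,000 cells on day 1 grow to 200,000 by day 3,
# i.e., two doublings over 48 hours of culture, so DT comes out at 24 h.
print(doubling_time(48, 200_000, 50_000))  # 24.0 (up to rounding)
```

Note that the formula only uses the counts at the two ends of the interval; it assumes exponential growth over those 48 hours.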
The immunofluorescence staining of CD surface markers was performed on native adhering ADSCs (passage 2) using PE-CD90 (BD Science) and Alexa488-CD29 (Invitrogen) antibodies. Cell nuclei in native cells were counterstained with Hoechst 33342 (Sigma-Aldrich) for 30 minutes at room temperature in the dark. Olympus microscope IX71 (objective magnification 10x or 20x) was used to take representative images. ### 2.8. Statistical Analysis First, to evaluate the significance of different negative pressures, the observed data (i.e., initial cell yields, later cell numbers, and mitochondrial activity) were presented as the ratio of low-pressure cells to high-pressure cells for each donor. The Mann-Whitney Rank Sum test was used to test the equality of the medians of the ratios on different days of the experiment. Second, an unpaired two-sample t-test (for parametric data) or a Mann-Whitney Rank Sum test (for nonparametric data) was used to test the significance of the differences between the outer thigh area and the abdomen area. The inner thigh region was not statistically compared with other harvesting sites due to a relatively small group of samples (i.e., from only 3 patients). All the measured data were tested for normality according to the Kolmogorov-Smirnov test. Data which showed a Gaussian distribution are expressed as mean±SD. However, due to the small sample size and the wide dispersion among the donors, some of the data did not show a Gaussian distribution. The nonparametric data are expressed as the median and the interquartile range (IQ range). The statistical analysis was performed using SigmaStat Software (Systat Software Inc., USA); p<0.001 (for flow cytometry) or p<0.05 (for all other methods) was considered statistically significant. The plots were generated in R (programming language). ## 3. Results ### 3.1. Growth of Cells after Isolation and Cell Yields In passage 0, we observed slight differences in the range of cell adhesion and growth among the cells harvested from various donors. However, the cells from all donors usually reached 70% or 80% confluence by day 10. Figure 1 shows representative images of the process of adhesion and growth in ADSCs after isolation from the same patient. On day 1 after isolation, the number of attached cells per 1 mL of lipoaspirate was counted in each sample. The ratio of attached low-pressure cells to attached high-pressure cells for each donor showed a median level near 1.0 for the outer thigh region, which means a similar number of attached cells for both pressures (Figure 2(a)). However, the median level of this ratio (0.79) was significantly lower for the abdomen region (Figure 2(a)), which indicates higher cell yields from high-pressure lipoaspirates from this harvesting site. We observed a significant, 2-fold to 3-fold higher number of attached cells from the outer thigh region than from the abdomen region (Figure 2(b)). The inner thigh region was not statistically compared with other harvesting sites due to the relatively small group of samples. Figure 1 The process of attachment, spreading, and growth in ADSCs from the same patient on days 2, 5, and 7 after isolation.
The ADSCs were isolated from the inner thigh area and from the abdomen area, under low negative pressure (-200 mmHg) and under high negative pressure (-700 mmHg). Passage 0. Scale bar 200 μm. Representative images are shown.

Figure 2: Cell yields counted from the number of initially attached cells. (a) The ratio of the number of low-pressure cells to the number of high-pressure cells for each donor on day 1 after isolation; passage 0. p<0.05 (∗) is for harvesting area (outer thigh vs. abdomen) significance testing. (b) The number of attached cells per 1 mL of lipoaspirate; passage 0. p<0.05 (∗) is for harvesting area significance testing (outer thigh vs. abdomen). The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

### 3.2. Cell Number

The number of cells obtained from the corresponding areas of the abdomen or the thigh under low negative pressure and under high negative pressure for the same donor was measured on days 1, 3, and 7. The ratio of the number of low-pressure cells to the number of high-pressure cells on a specific day of the culture from each donor showed median levels near 1.0 in cells from the inner thigh, outer thigh, and abdomen areas (Figure 3). There were no statistical differences in cell numbers between the outer thigh and abdomen areas on days 1 and 3 (Figure 4). When the groups of cells from the inner thigh and the outer thigh were evaluated together, we observed a higher cell number in thigh ADSCs than in abdomen ADSCs (p=0.048) on day 7 (Figure 4).

Figure 3: The influence of negative pressure on the number of ADSCs. The ratio of the number of low-pressure cells to the number of high-pressure cells for each donor. The measurements were performed on ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area on day 1 (1D), day 3 (3D), and day 7 (7D); passage 2.
No significant differences among the groups were observed.

Figure 4: The number of ADSCs. The ADSCs were harvested under low pressure and under high pressure from the inner thigh area, the outer thigh area, and the abdomen area. Days 1, 3, and 7; passage 2. On day 7, the thigh ADSCs (i.e., inner thigh + outer thigh) reached significantly higher (p=0.048) cell numbers than the abdomen ADSCs. p<0.05 (∗) is for harvesting area significance testing.

### 3.3. Doubling Time

The doubling time was calculated between days 1 and 3 (i.e., 48 hours of cell culture). There were similar median values in all sample groups, from 24.99 hours (low abdomen) to 28.65 hours (high inner thigh) (Figure 5). No significant differences were observed between the sample groups.

Figure 5: Population doubling time. Population doubling time of low-pressure ADSCs and high-pressure ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area. No significant differences were observed among the groups investigated here.

### 3.4. Viability and Diameter

No significant differences were found in the viability of the cells, measured by the trypan blue exclusion test, on day 1 (from 88.0% for low abdomen to 93.6% for low outer thigh), on day 3 (from 93.5% for high abdomen to 96.6% for high outer thigh), and on day 7 (from 90.3% for high inner thigh to 95.9% for high outer thigh) (Table 2). We observed a significantly larger diameter of outer thigh ADSCs than of abdomen ADSCs (p=0.038) on day 1. However, no significant differences in diameter were observed on day 3 and on day 7 (Table 3).

Table 2: The viability of ADSCs. The viability of ADSCs harvested under low pressure and under high pressure from the inner thigh area, from the outer thigh area, and from the abdomen area on days 1, 3, and 7 in passage 2. No significant difference was observed between the outer thigh and the abdomen harvesting sites.
The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 1 median (%) | Day 1 IQ range | Day 3 median (%) | Day 3 IQ range | Day 7 median (%) | Day 7 IQ range |
| --- | --- | --- | --- | --- | --- | --- |
| Low I thigh | 91.7 | 90.5-94.6 | 96.2 | 93.1-96.5 | 93.2 | 92.7-95.8 |
| High I thigh | 91.9 | 89.9-93.2 | 94.8 | 92.7-95.3 | 90.3 | 88.6-94.6 |
| Low O thigh | 93.6 | 84.8-95.8 | 93.8 | 91.2-96.5 | 95.2 | 89.4-96.9 |
| High O thigh | 93.5 | 80.7-94.5 | 96.6 | 94.3-97.3 | 95.9 | 95.6-97.0 |
| Low abdomen | 88.0 | 87.5-90.0 | 94.6 | 90.8-95.3 | 94.8 | 91.7-97.1 |
| High abdomen | 92.4 | 90.3-93.7 | 93.5 | 92.4-95.0 | 95.2 | 93.2-97.0 |

Table 3: The diameter of ADSCs. The diameter of ADSCs was measured using the Vi-CELL XR Cell Counter on days 1, 3, and 7; p<0.05 (∗) is for harvesting area significance testing (i.e., outer thigh and abdomen). The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 1 (mean ± SD, microns) | Day 3 (mean ± SD, microns) | Day 7 (mean ± SD, microns) |
| --- | --- | --- | --- |
| Low I thigh | 16.76 ± 0.74 | 14.67 ± 0.13 | 12.55 ± 0.38 |
| High I thigh | 15.73 ± 0.39 | 14.92 ± 0.26 | 12.91 ± 1.18 |
| Low O thigh | 16.32 ± 0.82 | 14.71 ± 1.46 | 12.97 ± 0.49 |
| High O thigh | 17.04 ± 0.80 | 14.52 ± 1.10 | 13.77 ± 0.68 |
| Low abdomen | 15.67 ± 1.33 | 14.52 ± 1.48 | 13.39 ± 0.87 |
| High abdomen | 15.70 ± 1.27 | 14.93 ± 1.43 | 13.86 ± 1.10 |

### 3.5. Cell Mitochondrial Activity

The activity of mitochondrial enzymes in ADSCs, considered as an indirect indicator of cell proliferation activity, was measured on days 3 and 7 after seeding. The ratio of the mitochondrial activity of the low-pressure cells to the mitochondrial activity of the high-pressure cells on a specific day of the culture from each donor revealed median levels near 1.0 in cells from the inner thigh, outer thigh, and abdomen areas, and no significant differences were observed between the low-pressure cells and the high-pressure cells (Figure 6).
Similarly, there were no significant differences in the mitochondrial activity of cells from different donor sites on day 3 (Table 4). On day 7, we observed a tendency toward lower mitochondrial activity of inner thigh ADSCs than of the other harvesting sites; however, no statistical analysis was performed due to the relatively small sample size.

Figure 6: The influence of negative pressure on the mitochondrial activity of ADSCs. The ratio of the mitochondrial activity of low-pressure cells to the mitochondrial activity of high-pressure cells obtained for each donor. The measurements were performed on ADSCs from the inner thigh (n=3) area, from the outer thigh (n=7) area, and from the abdomen (n=9) area on day 3 (3D) and on day 7 (7D). No significant differences were observed among the groups.

Table 4: The cell mitochondrial activity of ADSCs. The cell mitochondrial activity of ADSCs measured on days 3 and 7. No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | Day 3 (mean ± SD, absorbance) | Day 7 (mean ± SD, absorbance) |
| --- | --- | --- |
| Low I thigh | 0.38 ± 0.25 | 0.41 ± 0.22 |
| High I thigh | 0.34 ± 0.29 | 0.41 ± 0.31 |
| Low O thigh | 0.53 ± 0.17 | 0.69 ± 0.14 |
| High O thigh | 0.52 ± 0.24 | 0.62 ± 0.10 |
| Low abdomen | 0.51 ± 0.25 | 0.71 ± 0.29 |
| High abdomen | 0.47 ± 0.24 | 0.69 ± 0.29 |

### 3.6. Flow Cytometry

The percentage of cells positive for typical markers of mesenchymal stromal cells, i.e., CD105, CD90, CD73, and CD29, was very high in ADSCs obtained from all tested sources. No significant differences were found in the presence of these markers in cells obtained from lipoaspirates taken at different negative pressures and from different harvesting sites (Table 5). However, slightly lower and more variable values were obtained in abdomen-derived ADSCs. Representative images of CD90 and CD29 immunostaining are shown in Figure 7(b).
We also observed variability in the percentage of CD146+ cells among the donors (from 3.9% in low inner thigh and low outer thigh to 10.9% in low abdomen) (Figure 7(a)). This variability was slightly higher in ADSCs from the abdomen area and was not dependent on negative pressure. The percentage of cells bearing hematopoietic and endothelial cell markers, namely, CD45, CD34, and CD31, was very low and showed no significant differences between cells obtained at different negative pressures and from different donor sites (Table 5).

Table 5: The percentage of CD surface markers in ADSCs. The percentage of CD105-, CD90-, CD73-, CD29-, CD146-, CD45-, CD31-, and CD34-positive ADSCs, given as the median and IQ range of the percentage of positive cells. No significant difference was observed between the outer thigh and the abdomen harvesting sites. The inner thigh region was not statistically compared to the outer thigh and abdomen regions due to the relatively small sample size.

| Group of cells | CD105 median | CD105 IQ range | CD90 median | CD90 IQ range | CD73 median | CD73 IQ range | CD29 median | CD29 IQ range |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Low I thigh | 99.9 | 99.2-99.9 | 99.5 | 99.5-99.7 | 99.9 | 99.8-100 | 99.8 | 99.2-100 |
| High I thigh | 99.9 | 94.1-99.9 | 99.6 | 99.3-99.8 | 99.9 | 99.9-100 | 99.8 | 99.8-100 |
| Low O thigh | 99.9 | 98.3-100 | 99.6 | 99.2-99.9 | 100 | 99.9-100 | 99.8 | 99.8-100 |
| High O thigh | 99.9 | 96.2-99.9 | 99.6 | 99.2-99.9 | 100 | 99.9-100 | 99.9 | 99.8-100 |
| Low abdomen | 99.5 | 82.3-99.9 | 99.4 | 97.5-99.8 | 99.8 | 99.6-99.9 | 99.6 | 90.5-99.8 |
| High abdomen | 98.9 | 89.1-99.8 | 99.5 | 97.3-99.8 | 99.8 | 99.6-99.9 | 99.6 | 95.1-99.8 |

| Group of cells | CD146 median | CD146 IQ range | CD45 median | CD45 IQ range | CD31 median | CD31 IQ range | CD34 median | CD34 IQ range |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Low I thigh | 3.9 | 2.9-4.5 | 4.3 | 3.9-4.7 | 0.5 | 0.3-1.0 | 0.4 | 0.3-0.7 |
| High I thigh | 6.0 | 2.5-7.4 | 3.3 | 2.9-4.0 | 0.8 | 0.4-1.0 | 0.8 | 0.4-1.7 |
| Low O thigh | 3.9 | 1.3-5.2 | 1.8 | 1.5-6.9 | 0.5 | 0.2-0.7 | 1.1 | 0.4-6.3 |
| High O thigh | 5.4 | 2.7-24.6 | 1.8 | 1.1-12.6 | 0.6 | 0.1-2.4 | 0.9 | 0.3-6.1 |
| Low abdomen | 10.9 | 3.4-35.4 | 5.2 | 4.5-7.5 | 0.3 | 0.2-0.5 | 1.0 | 0.5-1.6 |
| High abdomen | 4.7 | 2.6-28.5 | 4.1 | 3.1-5.4 | 0.4 | 0.2-1.0 | 0.9 | 0.5-1.7 |

Figure 7: (a) The percentage of
CD146-positive cells in each group of cells. No significant differences among the harvesting sites were observed. (b) The immunofluorescence staining of CD29 and CD90 in ADSCs. Cell nuclei are counterstained with Hoechst 33342. Olympus microscope IX71. Scale bar 200 μm (CD29) and 100 μm (CD90).

## 4. Discussion

A set of experiments was performed to reveal the influence of negative pressure and of harvesting site on the characteristics of ADSCs isolated from a number of donors. For future use in tissue engineering, we were mainly interested in significant differences in the basic adhesion and growth characteristics of ADSCs in passages 1 and 2 after isolation. Our study provided an opportunity to compare isolated cells from the same topographic area that had been harvested under low negative pressure and under high negative pressure from each donor. In passage 0, we observed slight differences in the rate of attachment and spreading and in the growth of the ADSCs of the donors after the cells had been isolated. These initial interdonor differences may have been caused by differences in ADSC frequency in the obtained SVF cells. Varying frequencies of ADSCs, determined by a colony-forming unit assay and/or by a limiting dilution assay, have been found in adipose tissue harvested from various donor sites [12] or when different harvesting procedures are used [21]. Specifically, Jurgens et al. observed a significantly higher frequency of ADSCs isolated from adipose tissue harvested from the abdomen region than from the hip/thigh region [12]. Oedayrajsingh-Varma et al. observed a significantly higher frequency of ADSCs isolated from adipose tissue obtained by resection and tumescent liposuction than from tissue obtained by ultrasound-assisted liposuction [21].
In those studies, the absolute number of nucleated cells in the harvested adipose tissue and the number of viable cells in the stromal vascular fraction were not affected by the anatomical site or by the type of surgical procedure. However, in other studies, the anatomical site did have an influence on the total SVF and on the ADSC yields. Iyyanki et al. observed significantly higher total SVF yields from the abdominal harvesting site than from the flank and axilla harvesting sites; however, the ADSC yields did not differ significantly [18]. In a study by Fraser et al., the abdomen-adipocyte yield was 1.7-fold higher than the hip-adipocyte yield, and the adipocyte yields displayed large donor-to-donor variability [22]. However, neither the nucleated cell yields nor the preadipocyte yields differed significantly [22]. A large range of ADSC yields among donors was also observed, and no statistical differences were found between the abdomen, the thigh, and the mammary areas [21]. By contrast, our study showed a potential influence of harvesting site, as we observed a higher number of attached cells per 1 mL of lipoaspirate for the outer thigh area than for the abdomen area on day 1 after isolation in in vitro culture. The differing results concerning the influence of harvesting site on cell yields may stem from differences in the target cell populations studied in different papers. For plastic surgery purposes, the cell yields of all nucleated cells, adipocytes, preadipocytes, and SVF are also a subject of interest. However, tissue engineering focuses more on the yields of adherent ADSCs that can be further proliferated and/or differentiated.

The total number of harvested cells can also be influenced by the level of negative pressure used during the liposuction procedure. In a study by Mojallal et al., a lower negative pressure (-350 mmHg) during liposuction resulted in higher SVF yields than a higher negative pressure (-700 mmHg) [10].
Similarly, in a more recent study by Cheriyan et al., higher counts and higher viability of adipocytes were found in lipoaspirates obtained at a lower negative pressure (-250 mmHg) than at a higher negative pressure (-760 mmHg) [23]. However, each of these studies was performed on three patients only. In our study, the number of attached cells after isolation was similar for low- and high-pressure cells from the outer thigh region, whereas the abdomen region was characterised by initially higher yields of attached cells for high pressure.

Although the initial SVF yields, adipocyte yields, and ADSC frequency in lipoaspirates can vary, later differences during in vitro ADSC culturing were of particular interest to us. Our study was focused on the number, the mitochondrial activity, and the viability of the ADSCs in subsequent passaging. We observed similar cell numbers and mitochondrial activity regardless of whether low or high negative pressure had been used for a specific region. This means that the subsequent proliferation of ADSCs was not affected by the negative pressure used during the liposuction procedure. Chen et al. observed initially higher proliferation activity (assessed by Cell Counting Kit-8) in lower-negative-pressure SVF cells than in higher-negative-pressure SVF cells from the abdominal area in passages 1 and 2 [11]. However, these significant differences did not appear in passage 3 [11]. Similarly, our results could also provide support for the theory that the differences in proliferation activity between low-pressure cells and high-pressure cells become less noticeable after passaging during in vitro cultivation. Interestingly, other researchers have reported that different apparatuses and different levels of negative pressure during liposuction do not influence the percentage and the viability of adipocytes and isolated mesenchymal stromal cells [9].
The discrepancies among the comparative studies may also have arisen because different cell populations were being studied. That is, negative pressure techniques may have a bigger effect on adipocytes, due to their larger size, while they may have only a minimal effect on smaller cells, including progenitor cells [22]. It is therefore necessary to consider carefully which types of cells from adipose tissue are to be harvested and used. In our study, the outer thigh ADSCs were larger in diameter in the cell suspension on day 1 after seeding than the abdomen ADSCs. However, the cells were of similar diameters on days 3 and 7.

The function and the representation of cell types in adipose tissue vary among the topographic regions. Preadipocytes and ADSCs obtained from subcutaneous, mesenteric, omental, or intrathoracic fat depots display distinct expression profiles and differentiation capacity [24, 25]. Subcutaneous fat depots are easier to obtain than other fat depots. Although the morphology of subcutaneous and visceral fat did not differ significantly, the harvested subcutaneous ADSCs displayed significantly higher cell numbers, a shorter doubling time, and higher CD146 expression than visceral ADSCs in later passages [26]. Moreover, within the subcutaneous depots, superficial depots seem to yield cells with better stemness and multipotency characteristics than deep subcutaneous depots [27]. Until now, the harvesting site of fat depots has usually been selected on the basis of actual need or choice. However, the particular anatomic source of the harvested adipose tissue can play a role in further reconstructive surgery and cell-based therapies. The cells from different fat depots express different homeobox (Hox) genes. This supports the idea that they are of different embryonic origin, and so the donor and the host adipose tissue sites need to be carefully matched [28]. Kouidhi et al. compared the gene expression of human knee ADSCs with chin ADSCs [29].
They found higher expression of Pax3 (i.e., a neural crest marker) in chin ADSCs than in knee ADSCs, whereas the expression of most of the Hox genes that are typical for the mesodermal environment was higher in knee ADSCs than in chin ADSCs. In later passages, chin ADSCs also displayed higher self-renewal potential [29]. In our study, we obtained similar numbers and similar viability of ADSCs from the inner thigh area, the outer thigh area, and the abdomen area on days 1 and 3. Thus, our results are in accordance with studies by other researchers, in which similar growth kinetics were found in ADSCs from the abdomen area and from the hip/thigh area [12, 30]. However, with similar cell numbers on days 1 and 3, we observed a tendency of thigh ADSCs (inner thigh + outer thigh) to reach higher values than abdomen ADSCs on day 7 (p=0.048). It therefore seems that there may be a significant difference in later cell numbers between the harvesting sites for most of the patients included in our study, though we observed large variation among the donors. Interestingly, we also observed a tendency toward lower mitochondrial activity of inner thigh ADSCs than of outer thigh ADSCs and abdomen ADSCs on day 7. These results may correspond with the slightly higher cell numbers of inner thigh ADSCs on day 7, when the cells had already reached confluence and had reduced their proliferation activity. However, the smaller number of inner thigh ADSC samples than in the other groups (i.e., outer thigh ADSCs and abdomen ADSCs) may also have affected the results. The harvesting site can also influence the colony-forming unit (CFU) count in isolated ADSCs. Fraser et al. observed that the CFU count was higher in hip ADSCs than in abdomen ADSCs [22]. This finding could be in accordance with a higher proliferation rate of hip/thigh ADSCs at later time intervals of the culture [22].

During our experiments, we observed a nonparametric distribution of the donors' data.
The interdonor variabilities that were not dependent on the harvesting site or on negative pressure may have been caused by other donor factors. Age and BMI are other factors known to play a considerable role in SVF and ADSC yields and characteristics [19]. However, research findings regarding the influence of age and BMI on ADSC yields are often contradictory [31–33]. For example, in the study by de Girolamo et al., the cellular yield of ADSCs was significantly greater from older patients than from younger patients [31], while in the study by Faustini et al., the patient's age seemed not to influence the cell yield [32]. Significant donor-to-donor variability has also been reported in multilineage differentiation capacity, self-renewal capacity, and immunomodulatory cytokine secretion [34]. Although some of these variabilities can be explained by a medical history of breast cancer and subsequent treatment, there were also significant differences among donors who had not been diagnosed with cancer [34]. Atherosclerosis is another donor factor which can alter the secretome and reduce the immunomodulatory capacity of ADSCs due to impaired mitochondrial functions [35]. In addition, ADSCs isolated from patients with renovascular disease exhibited a higher level of DNA damage and a lower migratory capacity than ADSCs from healthy donors [36]. In another study, ADSCs isolated from patients suffering from scleroderma, an autoimmune connective tissue disease, showed a lower proliferation rate and a lower migration capacity than the control ADSCs from healthy donors [37].

Many papers have reported on various donor-to-donor factors that have a potential impact on the characteristics of mesenchymal stromal cells. In addition, it seems that there are many cell-to-cell variations within the same donor.
This cell-to-cell heterogeneity can be manifested both in vitro and in vivo by interclonal functional and molecular variation, e.g., variable differentiation capacity, coexisting fast-growing and slow-growing clones, and other differences in the proteome and transcriptome [38]. The percentage of various clones in MSCs develops and changes during cell passaging. Even within a single MSC clone, there is a growing body of evidence that intraclonal heterogeneity alters cell behaviour and characteristics [38]. In most of the donors, we confirmed a high level of positivity of the isolated cells for CD105, CD90, CD73, and CD29 (>80% in ADSCs) and a low level of positivity or absence of CD45, CD31, and CD33 (≤2% in ADSCs), according to the guidelines for characterizing ADSCs [39]. We observed no significant differences in the presence of CD markers depending on negative pressure or on harvesting site. Our results are in accordance with those reported by other researchers, who have found no differences in CD markers in SVF harvested from different sites [12, 14, 30]. In another study, the presence of pericytes, progenitor endothelial cells, preadipocyte cells, and mesenchymal cells in SVF was not influenced by different negative pressures [9]. In addition, in the study by Chen et al., where higher negative pressure had a negative influence on yields, on growth, and on the secretion of growth factors, no differences in CD markers were found [11]. Interestingly, we observed variability in the presence of CD146 among the donors. The presence of CD146+ cells in subcutaneous depots was also not negligible in a study by Lee et al. [26]. CD146 positivity can be a sign of pericytes. Pericytes are cells in contact with small vessels in the adipose tissue, and they are also present in the harvested SVF [40]. A pericyte origin is not the only possible explanation, however. For a review of other theories explaining the presence of CD146, see [41].
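The characterization thresholds used above (>80% positivity for CD105, CD90, CD73, and CD29; ≤2% for CD45, CD31, and CD33) amount to a simple acceptance rule. A minimal sketch of that rule, using hypothetical flow-cytometry percentages rather than our measured data:

```python
# Marker thresholds taken from the characterization guidelines cited in the text [39].
POSITIVE_MARKERS = {"CD105", "CD90", "CD73", "CD29"}  # expected > 80% positive
NEGATIVE_MARKERS = {"CD45", "CD31", "CD33"}           # expected <= 2% positive

def meets_adsc_criteria(percent_positive):
    """Return True if a sample's marker percentages satisfy both thresholds."""
    return (all(percent_positive[m] > 80.0 for m in POSITIVE_MARKERS)
            and all(percent_positive[m] <= 2.0 for m in NEGATIVE_MARKERS))

# Hypothetical sample (illustrative values only).
sample = {"CD105": 95.2, "CD90": 98.7, "CD73": 91.4, "CD29": 99.1,
          "CD45": 0.8, "CD31": 1.2, "CD33": 0.3}
ok = meets_adsc_criteria(sample)
```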
In MSCs, high expression of CD146 is associated with a commitment towards the vascular smooth muscle cell lineage [42]. This commitment could be interesting for vascular tissue engineering, when differentiating ADSCs towards vascular smooth muscle cells is required. CD146+ cells in combination with human umbilical vein endothelial cells (HUVECs) were also reported to support the formation and elongation of capillary-like tubular structures [26]. Lee et al. also observed greater proliferation of CD146+ cells than of CD146- cells; however, the percentage of CD146+ cells in an ADSC culture decreased with subsequent subculturing [26]. It seems that CD146 expression among ADSCs is relatively heterogeneous and could play an important role in potential specific tissue engineering applications. The presence of other hematopoietic and endothelial cell markers (e.g., CD34, CD45, and CD31) can influence future therapies using SVF or ADSCs. The optimal ratio of ADSCs to hematopoietic stem cell progenitors in isolated SVF, defined by specific CD surface markers, seems to be the key to successful stem cell therapies [43]. ### 4.1. Limitation The first limitation of our study is the relatively small sample size, with uneven numbers of samples from each donor site (i.e., inner thigh (n=3), outer thigh (n=7), and abdomen (n=9)). Because of the small sample size for inner thigh ADSCs, we did not make a statistical comparison between this group of cells and outer thigh ADSCs or abdomen ADSCs. A greater number of donors would be desirable. However, we assume that for ADSC characterization under in vitro culture conditions and for later tissue engineering purposes, the sample size is sufficient. The second limitation of the study is that it was primarily focused on negative pressure and on the harvesting site and not on other patient factors, such as age, gender, or BMI; these other characteristics were therefore not completely uniform among the donors.
Nevertheless, the studied groups showed similar age and BMI parameters with normal data distribution. The third limitation of the study is that it was focused on the later use of ADSCs in tissue engineering. Therefore, we characterized only the fraction of isolated ADSCs that adhered to the plastic culture flasks. The yields of ADSCs were counted after they had adhered to the flasks, and their characteristics (cell proliferation, flow cytometry analysis of surface markers) were studied in subsequent passages. No other cell types (i.e., adipocytes or all nucleated cells) were analysed in this study with respect to their yields or their viability. The conclusions concerning the influence of negative pressure and harvesting site therefore refer only to plastic-adherent ADSCs. To characterize the ADSCs in in vitro culture conditions, we chose passage 1 and passage 2, depending on the specific analyses. These passages were the same for all analysed ADSCs. However, the growth dynamics of the cells is known to vary from passage to passage, and this variability can also be specific to each isolated ADSC population. ## 5. Conclusion In our study, we observed a significantly higher number of initially attached cells per 1 mL of lipoaspirate for the outer thigh region than for the abdomen region on day 1 after isolation. The level of negative pressure was not a key determinant of cell yields for the outer thigh region, whereas high negative pressure had a positive influence on the cell yields from the abdomen region. However, for the subsequent culturing, no significant relationship was identified between the characteristics of isolated ADSCs and the level of negative pressure used during liposuction. In addition, the harvesting site influenced the ADSCs only mildly in some parameters on specific days of the culture (i.e., diameter on day 1).
In general, no significant influence of the harvesting site was observed on the cell number, mitochondrial activity, viability, diameter, or the presence of CD markers. Thigh ADSCs reached a higher cell number than abdomen ADSCs on day 7 only when cells from the inner thigh and outer thigh areas were evaluated together. However, we observed donor-to-donor variability in initial adhesion, in absolute cell numbers, and in the expression of some CD markers. Thus, our results could suggest that donor-to-donor differences may be affected not only by the harvesting site and by negative pressure but also by other factors. For subsequent in vitro culturing and use in tissue engineering, it seems that the harvesting site and the level of negative pressure do not have a crucial or limiting effect on basic ADSC characteristics. Nevertheless, it is necessary to investigate thoroughly the area from which ADSCs are to be harvested and the specific liposuction procedure that is to be used, with reference to the purpose for which the adipose tissue is being harvested. --- *Source: 1016231-2020-02-10.xml*
2020
# Gastrodia elata Ameliorates High-Fructose Diet-Induced Lipid Metabolism and Endothelial Dysfunction **Authors:** Min Chul Kho; Yun Jung Lee; Jeong Dan Cha; Kyung Min Choi; Dae Gill Kang; Ho Sub Lee **Journal:** Evidence-Based Complementary and Alternative Medicine (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101624 --- ## Abstract Overconsumption of fructose results in dyslipidemia, hypertension, and impaired glucose tolerance, which are documented correlates of metabolic syndrome. Gastrodia elata, a widely used traditional herbal medicine, has been reported to have anti-inflammatory and antidiabetic activities. Thus, this study examined whether an ethanol extract of Gastrodia elata Blume (EGB) attenuates impaired lipid metabolism and endothelial dysfunction in a high-fructose (HF) diet animal model. Rats were fed the 65% HF diet with/without EGB 100 mg/kg/day for 8 weeks. Treatment with EGB significantly suppressed the increases in epididymal fat weight, blood pressure, and plasma triglyceride and total cholesterol levels, and improved oral glucose tolerance. In addition, EGB markedly prevented the increase in adipocyte size and the hepatic accumulation of triglycerides. EGB ameliorated endothelial dysfunction by downregulating endothelin-1 (ET-1) and adhesion molecules in the aorta. Moreover, EGB significantly restored the impaired vasorelaxation response to acetylcholine and the levels of endothelial nitric oxide synthase (eNOS) expression, and markedly upregulated phosphorylation of AMP-activated protein kinase (AMPK)α in the liver, muscle, and fat. These results indicate that EGB ameliorates dyslipidemia, hypertension, and insulin resistance as well as impaired vascular endothelial function in HF diet rats. Taken together, EGB may be a beneficial therapeutic approach for metabolic syndrome. --- ## Body ## 1.
Introduction Metabolic syndrome, a worldwide issue, is characterized by insulin resistance, impaired glucose tolerance and/or hyperglycemia, high blood serum triglycerides, a low concentration of high-density lipoprotein (HDL) cholesterol, high blood pressure, and central obesity. The association of 3 (or more) of these factors leads to increased morbidity and mortality from several predominant diseases such as type 2 diabetes, cancer, and cardiovascular diseases including atherosclerosis, myocardial infarction, and stroke [1, 2]. Fructose is a ketose isomer of glucose. It is promptly absorbed and rapidly metabolized by the liver. In recent decades, the westernization of diets has resulted in significant increases in added fructose and an enormous rise in typical daily fructose consumption [3]. The exposure of the liver to such large amounts of fructose leads to rapid stimulation of lipogenesis and triglyceride accumulation, which in turn leads to reduced insulin sensitivity and hepatic insulin resistance/glucose intolerance [4]. Thus, a high-fructose diet induces a well-characterised metabolic syndrome, generally resulting in hypertension, dyslipidaemia, and low levels of HDL-cholesterol [5]. Recent studies suggest that high fructose intake may be an important risk factor for the development of fatty liver [6]. Rats are commonly used as a model to mimic human diseases, including metabolic syndrome [7]. Similarly, emerging data suggest that experiments on fructose-fed rats tend to reproduce some of the changes associated with metabolic syndrome, such as altered lipid metabolism, fatty liver, hypertension, obesity, and dyslipidemia [8]. Gastrodia elata Blume is a traditional herbal medicine in Korea, China, and Japan, which has been used for the treatment of headaches, hypertension, rheumatism, and cardiovascular diseases [9].
Several major physiologically active substances have been identified from Gastrodia elata Blume, such as gastrodin, vanillyl alcohol, vanillin, glycoprotein, p-hydroxybenzyl alcohol, and polysaccharides including alpha-D-glucan [10–12]. Our previous studies showed that Gastrodia elata Blume exhibits anti-inflammatory and antiatherosclerotic properties by inhibiting the expression of proinflammatory cytokines in vascular endothelial cells [13, 14]. However, the effect of an ethanol extract of Gastrodia elata Blume on a high-fructose (HF) diet animal model has not yet been reported. Thus, the present study was designed to determine whether an ethanol extract of Gastrodia elata Blume (EGB) improves high-fructose diet-induced impaired lipid metabolism and endothelial dysfunction. ## 2. Materials and Methods ### 2.1. Preparation of Gastrodia elata Blume The Gastrodia elata Blume was purchased from the Herbal Medicine Co-operative Association, Iksan, Jeonbuk Province, Korea, in May 2012. A voucher specimen (no. HBJ1041) was deposited in the herbarium of the Professional Graduate School of Oriental Medicine, Wonkwang University, Iksan, Jeonbuk, South Korea. The dried Gastrodia elata Blume (400 g) was extracted with 4 L of 95% ethanol at room temperature for 1 week. The extract was filtered through Whatman no. 3 filter paper (Whatman International Ltd., England) and concentrated using a rotary evaporator. The resulting extract (12.741 g) was lyophilized using a freeze drier and stored until required. ### 2.2. Animal Experiments and Diet All experimental procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Utilization Committee for Medical Science of Wonkwang University. Seven-week-old male Sprague-Dawley (SD) rats were obtained from Samtako (Osan, Korea).
Rats were kept in a room automatically maintained at a temperature of 23 ± 2°C and a humidity of 50~60%, with a 12-h light/dark cycle throughout the experiments. After 1 week of acclimatization, animals were randomly divided into three groups (n = 10 per group). The control group (Cont.) was fed a regular diet, the high-fructose group (HF) was fed a 65% fructose diet (Research Diet, USA), and the third group (HF + EGB) was fed 65% fructose along with a single daily oral dose of 100 mg/kg of EGB for a period of 8 weeks. The regular diet was composed of 50% starch, 21% protein, 4% fat, and a standard vitamin and mineral mix. The high-fructose diet was composed of 65% fructose, 20% protein, 10% fat, and a standard vitamin and mineral mix. ### 2.3. Blood and Tissue Sampling At the end of the experiments, the aorta, liver, adipose tissue (epididymal fat pads), and muscle were separated, rinsed with cold saline, and frozen until analysis. Plasma was obtained from the coagulated blood by centrifugation at 3,000 rpm for 15 min at 4°C. The separated plasma was frozen at −80°C until analysis. ### 2.4. Measurements of Blood Pressure Systolic blood pressure (SBP) was determined using the noninvasive tail-cuff plethysmography method and recorded with an automatic sphygmotonograph (MK2000; Muromachi Kikai, Tokyo, Japan). SBP was measured at week 1, week 3, and week 7. At least seven determinations were made in every session. Values were presented as the mean ± SEM of five measurements. ### 2.5. Analysis of Plasma Lipids The levels of triglyceride in plasma were measured using commercial kits (ARKRAY, Inc., Minami-ku, Kyoto, Japan). The levels of high-density lipoprotein (HDL) cholesterol, total cholesterol, and LDL-cholesterol in plasma were measured using an HDL and LDL assay kit (E2HL-100, BioAssay Systems). ### 2.6.
Estimation of Blood Glucose and Oral Glucose Tolerance Test The blood glucose concentration was measured in samples obtained from the tail vein using a glucometer (OneTouch Ultra) and test strips (LifeScan Inc., CA, USA). The oral glucose tolerance test (OGTT) was performed 2 days apart at 7 weeks. For the OGTT, briefly, basal blood glucose concentrations were measured after 10~12 h of overnight food deprivation; the glucose solution (2 g/kg body weight) was then immediately administered via oral gavage, and four more tail-vein blood samples were taken at 30, 60, 90, and 120 min after glucose administration. ### 2.7. Preparation of Carotid Artery and Measurement of Vascular Reactivity The carotid arteries of the rats were rapidly and carefully isolated and placed into cold Krebs solution of the following composition (mM): NaCl 118, KCl 4.7, MgSO4 1.1, KH2PO4 1.2, CaCl2 1.5, NaHCO3 25, glucose 10, pH 7.4. The carotid arteries were cleaned of connective tissue and fat and cut into rings of approximately 3 mm in length. All dissecting procedures were carried out with care to protect the endothelium from accidental damage. The carotid artery rings were suspended by means of two L-shaped stainless-steel wires inserted into the lumen in a tissue bath containing Krebs solution at 37°C, aerated with 95% O2 and 5% CO2. The isometric forces of the rings were measured using a Grass FT 03 force displacement transducer connected to a Model 7E polygraph recording system (Grass Technologies, Quincy, MA, USA). In the carotid artery rings of rats, a passive stretch of 1 g was determined to be the optimal tension for maximal responsiveness to phenylephrine (10−6 M). The preparations were allowed to equilibrate for approximately 1 h with an exchange of Krebs solution every 10 min.
The relaxant effects of acetylcholine (ACh, 10−9~10−6 M) and sodium nitroprusside (SNP, 10−10~10−5 M) were studied in carotid artery rings constricted submaximally with phenylephrine (10−6 M). ### 2.8. Western Blot Analysis in the Rat Aorta, Liver, Muscle, and Fat Homogenates of the aorta, liver, muscle, and fat tissues were prepared in ice-cold buffer containing 250 mM sucrose, 1 mM EDTA, 0.1 mM phenylmethylsulfonyl fluoride, and 20 mM potassium phosphate buffer (pH 7.6). The homogenates were centrifuged at 8,000 rpm for 10 min at 4°C, the supernatant was centrifuged at 13,000 rpm for 5 min at 4°C, and the resulting supernatant was used as the cytosolic fraction for protein analysis. The recovered proteins were separated by 10% SDS-polyacrylamide gel electrophoresis and electrophoretically transferred to nitrocellulose membranes. Membranes were blocked with 5% BSA in Tris-buffered saline containing 0.05% Tween 20 (TBS-T) for 1 h. The antibodies against ICAM-1, VCAM-1, E-selectin, eNOS, ET-1 (in aorta), AMPK, and p-AMPK (in liver, muscle, and fat) were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA, USA). The nitrocellulose membranes were incubated overnight at 4°C with the primary antibodies. The blots were washed several times with TBS-T and incubated with horseradish peroxidase-conjugated secondary antibody for 1 h, and the immunoreactive bands were then visualized using enhanced chemiluminescence (Amersham, Buckinghamshire, UK). The bands were analyzed densitometrically using a Chemi-Doc image analyzer (Bio-Rad, Hercules, CA, USA). ### 2.9. Histopathological Staining of Aorta, Epididymal Fat, and Liver Aortic tissues were fixed in 10% (v/v) formalin in 0.01 M phosphate-buffered saline (PBS) for 2 days, with a change of formalin solution every day to remove traces of blood from the tissue. The tissue samples were dehydrated and embedded in paraffin, and thin sections (6 μm) of the aortic arch in each group were then cut and stained with hematoxylin and eosin (H&E).
Epididymal fat and liver tissues were fixed by immersion in 4% paraformaldehyde for 48 h at 4°C and incubated with 30% sucrose for 2 days. Each fat and liver sample was embedded in OCT compound (Tissue-Tek, Sakura Finetek, Torrance, CA, USA), frozen in liquid nitrogen, and stored at −80°C. Frozen sections were cut with a Shandon Cryotome SME (Thermo Electron Corporation, Pittsburgh, PA, USA) and placed on poly-L-lysine-coated slides. Epididymal fat sections were stained with H&E. Liver sections were assessed using Oil Red O staining. For quantitative histopathological comparisons, each section was analysed using AxioVision 4 imaging/archiving software. ### 2.10. Immunohistochemical Staining of Aortic Tissues Paraffin sections for immunohistochemical staining were placed on poly-L-lysine-coated slides (Fisher Scientific, Pittsburgh, PA, USA). Slides were immunostained with Invitrogen HistoStain-SP kits using the labeled streptavidin-biotin (LAB-SA) method. After antigen retrieval, slides were immersed in 3% hydrogen peroxide for 10 min at room temperature to block endogenous peroxidase activity and rinsed with PBS. After being rinsed, slides were incubated with 10% nonimmune goat serum for 10 min at room temperature and then incubated with primary antibodies against ICAM-1, VCAM-1, and E-selectin (1:200; Santa Cruz, CA, USA) in humidified chambers overnight at 4°C. All slides were then incubated with biotinylated secondary antibody for 20 min at room temperature and then with horseradish peroxidase-conjugated streptavidin for 20 min at room temperature. Peroxidase activity was visualized with the 3,3′-diaminobenzidine (DAB; Novex, CA) substrate-chromogen system, with hematoxylin counterstaining (Zymed, CA, USA). For quantitative analysis, the average score of 10~20 randomly selected areas was calculated using the NIH image analysis software ImageJ (NIH, Bethesda, MD, USA). ### 2.11. Statistical Analysis All the experiments were repeated at least three times.
The results were expressed as mean ± SD or mean ± SE. The data were analyzed using the SIGMAPLOT 10.0 program. Student's t-test was used to determine significant differences; P < 0.05 was considered statistically significant. ## 3. Results ### 3.1. Characteristics of Experimental Animals During the entire experimental period, all groups showed a significant increase in body weight. There was no significant change in body weight after 8 weeks of fructose feeding in the HF group. However, the EGB-treated group showed a significant decrease in body weight (439.8 ± 26.5 versus 402.5 ± 22.1, P < 0.05) (Table 1).
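Group comparisons such as the body-weight result above rely on Student's t-test (Section 2.11). A minimal sketch of the underlying computation, using made-up body-weight values rather than the study's raw data:

```python
from math import sqrt
from statistics import mean, stdev

def student_t(sample_a, sample_b):
    """Two-sample Student's t statistic (unpaired, equal-variance)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    # Pooled variance combines the spread of both groups.
    sp2 = ((na - 1) * stdev(sample_a) ** 2
           + (nb - 1) * stdev(sample_b) ** 2) / (na + nb - 2)
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical terminal body weights (g); n = 5 per group for illustration.
hf = [441.0, 438.5, 444.2, 437.8, 440.1]
hf_egb = [403.5, 401.2, 405.8, 399.9, 402.6]
t = student_t(hf, hf_egb)  # positive t: HF mean exceeds HF + EGB mean
```

The resulting t statistic would then be compared against the t-distribution with n_a + n_b − 2 degrees of freedom to obtain the P value reported in the tables.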
Moreover, the HF diet resulted in a significant increase in epididymal fat pad weight, which was 60.8 ± 17.4% higher in the HF diet group than in the control group. However, treatment with EGB significantly reduced the epididymal fat pad weight (57.5 ± 7.3%) compared with the HF diet group (Table 1).

Table 1: Effect of EGB on body weight, epididymal fat pads, and blood glucose.

| | Control | HF | HF + EGB |
|---|---|---|---|
| Initial BW (g) | 245.8 ± 7.6 | 244.4 ± 7.4 | 244.4 ± 9.0 |
| Terminal BW (g) | 449.4 ± 28.9 | 439.8 ± 26.5 | 402.5 ± 22.1 # |
| Epididymal fat pads weight (g) | 2.5 ± 0.7 | 3.9 ± 1.2 ** | 2.5 ± 0.5 ## |
| Blood glucose (mg/dL) | 94.63 ± 6.48 | 99.50 ± 7.30 | 96.70 ± 8.54 |

Values were expressed as mean ± SD (n = 10). **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF. HF: high fructose; HF + EGB: high-fructose diet with EGB; BW: body weight.

### 3.2. Effect of EGB on Blood Pressure At the beginning of the experimental feeding period, the levels of systolic blood pressure in all groups were approximately 95~100 mmHg, as measured by the tail-cuff technique. After 4 weeks, the systolic blood pressure of the HF group was significantly higher than that of the control group (P < 0.01). However, the systolic blood pressure of the EGB group was significantly lower than that of the HF group throughout the experimental period (136.71 ± 1.24 versus 116.4 ± 1.21, P < 0.01) (Figure 1(a)). Effects of EGB on systolic blood pressure (a) and oral glucose tolerance test (b). Values were expressed as mean ± SE (n = 10). *P < 0.05, **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF. (a) (b) ### 3.3. Effect of EGB on Blood Glucose Level and Oral Glucose Tolerance Test Plasma blood glucose levels were not statistically different in HF diet rats with chronic treatment of EGB (Table 1). The oral glucose tolerance test was carried out to check insulin resistance in high-fructose diet rats after 8 weeks.
The results showed that the HF diet group maintained significantly increased blood glucose levels at 30, 60, and 90 min (P < 0.01) and at 120 min (P < 0.05). However, plasma glucose levels under EGB treatment were significantly decreased at 30 and 90 min compared with the HF diet group (P < 0.05) (Figure 1(b)).

### 3.4. Effect of EGB on Plasma Lipids

The group fed a HF diet displayed increased plasma triglyceride, total cholesterol, and LDL-c levels; however, EGB treatment significantly decreased plasma triglyceride (272.67 ± 107.0 versus 177.33 ± 59.6, P < 0.05), total cholesterol (102.94 ± 19.7 versus 67.79 ± 5.8, P < 0.01), and LDL-c levels (44.56 ± 8.1 versus 24.28 ± 3.1, P < 0.01). Besides, plasma HDL-c levels in the EGB group increased compared with the HF diet group (16.02 ± 2.9 versus 20.2 ± 2.2, P < 0.05) (Table 2).

Table 2: Effect of EGB on plasma lipid levels.

| | Control | HF | HF + EGB |
|---|---|---|---|
| T-Cho (mg/dL) | 67.86 ± 7.6 | 102.94 ± 19.7 ** | 67.79 ± 5.8 ## |
| TG (mg/dL) | 83.83 ± 16.4 | 272.67 ± 107.0 ** | 177.33 ± 59.6 # |
| HDL-c (mg/dL) | 13.75 ± 1.3 | 16.02 ± 2.9 | 20.2 ± 2.2 # |
| LDL-c (mg/dL) | 28.37 ± 3.9 | 44.56 ± 8.1 ** | 24.28 ± 3.1 ## |

Values are expressed as mean ± SD (n = 10). **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF. HF: high fructose; HF + EGB: high-fructose diet with EGB; T-Cho: total cholesterol; TG: triglyceride; HDL-c: high-density lipoprotein cholesterol; LDL-c: low-density lipoprotein cholesterol.

### 3.5. Effect of EGB on Vascular Tension

Vascular responses to ACh, an endothelium-dependent vasodilator (1 × 10−9 to 1 × 10−6 M), and SNP, an endothelium-independent vasodilator (1 × 10−10 to 1 × 10−7 M), were measured in the carotid artery. ACh-induced relaxation of carotid artery rings was significantly decreased in the HF diet group compared with the control group (1 × 10−7.5 to 1 × 10−6 M, P < 0.05).
However, the impairment of vasorelaxation was remarkably attenuated by treatment with EGB (1 × 10−8.5 to 1 × 10−6.5 M, P < 0.01; 1 × 10−6 M, P < 0.05) (Figure 2(a)). On the other hand, SNP-induced relaxation of carotid artery rings showed no significant difference among the groups (Figure 2(b)). Figure 2: Effect of EGB on relaxation of carotid arteries. Cumulative concentration-response curves to acetylcholine (ACh), an endothelium-dependent vasodilator (a), and sodium nitroprusside (SNP), an endothelium-independent vasodilator (b), in phenylephrine-precontracted carotid arteries from experimental rats. Values are expressed as mean ± SE (n = 5). *P < 0.05 versus Cont.; #P < 0.05, ##P < 0.01 versus HF.

### 3.6. Effect of EGB on the Morphology of Aorta and Epididymal Fat Pads

EGB effectively decreased blood pressure and attenuated the impairment of vasorelaxation. Thus, we examined histological changes in the thoracic aorta by H&E staining. Figure 3 shows that the thoracic aorta of the HF diet group exhibited roughened endothelial layers and increased tunica intima-media thickness compared with the control group (+24.13%, P < 0.01). However, EGB treatment preserved the smooth character of the intimal endothelial layers and significantly decreased tunica intima-media thickness in aortic sections (−16.10%, P < 0.01) (Figures 3(a) and 3(c)). Figure 3: Effects of EGB on aortic wall and adipocytes in HF diet rats. Representative microscopic photographs of H&E-stained sections of the thoracic aorta (a) and epididymal fat pads (b) in HF diet rats. Lower panels indicate the thickness of the intima-media (c) and the size of adipose cells (d) (magnification ×400). Values are expressed as mean ± SE (n = 5). **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF. Because EGB effectively reduced epididymal fat pads weight, we prepared frozen sections of epididymal fat pads and stained them with H&E. Adipocytes showed hypertrophy induced by the HF diet compared with the control group (+40.97%, P < 0.01).
However, EGB treatment significantly decreased the hypertrophy of adipocytes (−13.04%, P < 0.05) (Figures 3(b) and 3(d)).

### 3.7. Effect of EGB on the Hepatic Lipids

To investigate fat accumulation in the liver across all experimental groups, we prepared frozen liver sections and stained them with Oil Red O. Lipid droplets were detected in the HF diet group. However, with EGB treatment the number of lipid droplets significantly decreased compared with the HF diet group (Figure 4). Figure 4: Effect of EGB on fatty liver in HF diet rats. Representative microscopic photographs of Oil Red O-stained sections of the liver.

### 3.8. Effect of EGB on the Expression Levels of Adhesion Molecules, eNOS, and ET-1 in Aorta

Protein expression levels of VCAM-1, ICAM-1, E-selectin, eNOS, and ET-1 in the aorta were determined by western blotting. Adhesion molecule (VCAM-1, ICAM-1, and E-selectin) and ET-1 protein levels were increased in the HF diet group compared with the control group, whereas EGB treatment significantly decreased these protein levels compared with the HF diet group. Moreover, we examined eNOS expression to evaluate vascular endothelial function. The eNOS protein level decreased in the HF diet group compared with the control group; however, EGB treatment increased it compared with the HF diet group (Figure 5). Figure 5: Effects of EGB on the expression of adhesion molecules, eNOS, and ET-1 in the aorta of HF diet rats. Each electrophoretogram is representative of the results from three individual experiments. Immunohistochemistry was performed to determine the in situ expression of adhesion molecules in the aortic wall. Expression of adhesion molecules such as VCAM-1, ICAM-1, and E-selectin was increased in the HF diet group (P < 0.01); however, EGB treatment significantly decreased these protein levels (VCAM-1 and ICAM-1, P < 0.01; E-selectin,
P < 0.05) (Figure 6). Figure 6: Effects of EGB on VCAM-1 (a), ICAM-1 (b), and E-selectin (c) immunoreactivity in aortic tissues of HF diet rats. Representative immunohistochemistry (left) and quantifications (right) are shown. Values are expressed as mean ± SE. **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF.

### 3.9. Effect of EGB on the Expression Levels of AMPK in Liver, Muscle, and Fat Tissues

Because EGB effectively suppressed the development of impaired glucose tolerance, dyslipidemia, fatty liver, and endothelial dysfunction, the expression of AMPK was examined in liver, muscle, and fat tissues. AMPK expression was significantly decreased in the HF diet group; however, EGB treatment increased it in liver, muscle, and fat tissues (Figure 7). Figure 7: Effects of EGB on the expression of AMPK and p-AMPK in the liver (a), muscle (b), and fat (c) of HF diet rats. Each electrophoretogram is representative of the results from three individual experiments.
## 4. Discussion

Herb, Acupuncture, and Natural Medicine (HAN), one of the most ancient and revered forms of healing, has been used to diagnose, treat, and prevent disease for over 3,000 years. HAN is now used worldwide as an effective means of overcoming disease. Gastrodia elata is a well-known traditional Korean medicinal herb, used specifically for promoting blood circulation to remove blood stasis. In the present study, we provided evidence for the beneficial effect of Gastrodia elata on lipid metabolism and endothelial dysfunction in a high-fructose-induced metabolic syndrome rat model. Fructose is a lipogenic component; its consumption promotes the development of an atherogenic lipid profile and elevation of postprandial hypertriglyceridemia [15, 16]. In addition, HF diet animals develop hypertriglyceridemia, obesity, impaired glucose tolerance, fatty liver, increased SBP, and vascular remodeling [17, 18]. In the present study, the HF diet clearly increased visceral epididymal fat pads weight, resulting from increases in triglyceride and LDL cholesterol. Treatment with EGB lowered epididymal fat pads weight, triglyceride, and LDL cholesterol levels, whereas it elevated HDL cholesterol levels, which assist lipid metabolism.
Thus, EGB improves lipid metabolism through the decrease of triglyceride and LDL cholesterol. Although epididymal fat pads weight increased, body weight did not differ between the control diet and HF diet groups. We suppose that the experimental period should be longer than the present 8 weeks for body weight to increase. Nevertheless, EGB appears effective against obesity in HF diet rats, since it significantly decreased the HF diet-induced increase in body weight. In addition, the disorder of lipid levels induced by the HF diet was associated with aortic lesions. Histological analysis demonstrated that the endothelial layers were rougher in aortic sections of HF diet rats, associated with a trend towards increased development of atherosclerosis. Intima-media thickness of the thoracic aorta has been shown to correlate with the prognosis and extent of coronary artery disease [19]. EGB treatment maintained smooth and soft intimal endothelial layers and decreased intima-media thickness in aortic sections of HF diet rats. Dyslipidemia, impaired glucose tolerance, and fatty liver are major features associated with metabolic syndrome in HF diet rats [19, 20]. Fructose induces impaired glucose tolerance via the elevation of plasma triglyceride levels. In addition, a previous study demonstrated that an elevated-fructose diet associated with impaired glucose tolerance and endothelial dysfunction precedes the development of hypertension [21]. Impaired glucose tolerance plays an important role in the development of such abnormalities as insulin resistance, type 2 diabetes, and dyslipidemia [22]. Similarly, the HF diet induced impaired glucose tolerance and dyslipidemia, whereas EGB treatment improved impaired glucose tolerance with amelioration of dyslipidemia. In addition, EGB significantly suppressed the increase in adipocyte size and fatty liver.
Thus, these results suggest that EGB may be useful for suppressing the development of atherosclerotic lesions and obesity and for ameliorating lipid metabolism in a metabolic syndrome model. Endothelial dysfunction plays an important role in hypertension, vascular inflammation, other cardiovascular diseases, and metabolic syndrome [23, 24]. In this experimental model, the expression of ET-1 and of inducible adhesion molecules such as ICAM-1, VCAM-1, and E-selectin in the arterial wall represents a key event in the development of atherosclerosis. EGB ameliorated vascular inflammation by downregulation of ET-1 as well as ICAM-1, VCAM-1, and E-selectin expression in the thoracic aorta. Several studies have shown that lowering of blood pressure and endothelial function are related to an increase of eNOS reactivity, thereby increasing the production of NO, which acts as a strong vasodilator [25, 26]. In the present study, EGB upregulated eNOS levels in the aorta and recovered the HF diet-induced impairment of endothelium-dependent vasorelaxation. However, endothelium-independent vasodilator-induced vasorelaxation was not affected by EGB. These results suggest that the hypotensive effect of EGB is mediated by the endothelium-dependent NO/cGMP pathway. Histological study revealed that EGB suppressed vascular inflammation compatible with the processes of atherosclerosis. In fact, endothelial dysfunction was initially identified as impaired vasodilation to specific stimuli such as ACh or bradykinin; therefore, improvement of endothelial function is predicted to regulate lipid homeostasis [27]. Thus, the antihypertensive and antivascular inflammatory effects of EGB contribute to its beneficial effects on endothelial function and lipid metabolism in metabolic syndrome. To clarify the mechanism by which EGB suppresses the development of visceral obesity, impaired glucose tolerance, dyslipidemia, and fatty liver, the study focused on the expression of AMP-activated protein kinase (AMPK).
There is a strong correlation between a low activation state of AMPK and metabolic disorders associated with insulin resistance, fat deposition, and dyslipidemia [28–30]. AMPK is a key regulator of glucose and lipid metabolism. In the liver and muscle, activation of AMPK results in enhanced fatty acid oxidation and decreased production of glucose, cholesterol, and triglycerides [31]. Recently, Misra reported that the suspected role of AMPK makes it a promising tool to prevent and/or treat metabolic disorders [32]. Also, activation of the AMPK signaling pathway is associated with eNOS regulation and alteration of the systemic endothelin pathway in fructose-diet animal models [25]. AMPK is required for adiponectin-, thrombin-, and histamine-induced eNOS phosphorylation and subsequent NO production in the endothelium [33]. Consistently, our study showed that EGB markedly induced not only phosphorylation of AMPKα in the liver, muscle, and fat, but also elevation of eNOS levels in the aorta. It could be hypothesized that EGB engages AMPK-mediated eNOS pathways, which could in turn reverse HF diet-induced metabolic disorders.

## 5. Conclusion

These results suggest that EGB ameliorates lipid metabolism, impaired glucose tolerance, hypertension, and endothelial dysfunction in HF diet-induced metabolic syndrome, at least in part, via activation of AMPK and the eNOS/NO pathway. Therefore, Gastrodia elata Blume might be a beneficial therapeutic approach for metabolic syndrome.

---
# Gastrodia elata Ameliorates High-Fructose Diet-Induced Lipid Metabolism and Endothelial Dysfunction

**Authors:** Min Chul Kho; Yun Jung Lee; Jeong Dan Cha; Kyung Min Choi; Dae Gill Kang; Ho Sub Lee

**Journal:** Evidence-Based Complementary and Alternative Medicine (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2014/101624
---

## Abstract

Overconsumption of fructose results in dyslipidemia, hypertension, and impaired glucose tolerance, which have a documented correlation with metabolic syndrome. Gastrodia elata, a widely used traditional herbal medicine, has been reported to have anti-inflammatory and antidiabetic activities. Thus, this study examined whether an ethanol extract of Gastrodia elata Blume (EGB) attenuates impaired lipid metabolism and endothelial dysfunction in a high-fructose (HF) diet animal model. Rats were fed a 65% HF diet with or without EGB 100 mg/kg/day for 8 weeks. Treatment with EGB significantly suppressed the increments of epididymal fat weight, blood pressure, plasma triglyceride and total cholesterol levels, and oral glucose intolerance. In addition, EGB markedly prevented the increase of adipocyte size and hepatic accumulation of triglycerides. EGB ameliorated endothelial dysfunction by downregulation of endothelin-1 (ET-1) and adhesion molecules in the aorta. Moreover, EGB significantly recovered the impairment of vasorelaxation to acetylcholine and the level of endothelial nitric oxide synthase (eNOS) expression, and markedly upregulated phosphorylation of AMP-activated protein kinase (AMPK)α in the liver, muscle, and fat. These results indicate that EGB ameliorates dyslipidemia, hypertension, and insulin resistance as well as impaired vascular endothelial function in HF diet rats. Taken together, EGB may be a beneficial therapeutic approach for metabolic syndrome.

---

## Body

## 1. Introduction

Metabolic syndrome, a worldwide issue, is characterized by insulin resistance, impaired glucose tolerance and/or hyperglycemia, high blood serum triglycerides, a low concentration of high-density lipoprotein (HDL) cholesterol, high blood pressure, and central obesity.
The association of three (or more) of these factors leads to increased morbidity and mortality from several predominant diseases such as type 2 diabetes, cancer, and cardiovascular diseases including atherosclerosis, myocardial infarction, and stroke [1, 2]. Fructose is an isomer of glucose with the hydroxyl group on carbon-4 reversed in position. It is promptly absorbed and rapidly metabolized by the liver. In recent decades, westernization of diets has resulted in significant increases in added fructose and an enormous rise in typical daily fructose consumption [3]. Exposure of the liver to such a rising fructose load leads to rapid stimulation of lipogenesis and triglyceride accumulation, which in turn leads to reduced insulin sensitivity and hepatic insulin resistance/glucose intolerance [4]. Thus, a high-fructose diet induces a well-characterised metabolic syndrome, generally resulting in hypertension, dyslipidaemia, and a low level of HDL cholesterol [5]. Recent studies suggest that high fructose intake may be an important risk factor for the development of fatty liver [6]. Rats are commonly used as a model to mimic human diseases, including metabolic syndrome [7]. Similarly, emerging data suggest that experiments on fructose-diet rats tend to produce some of the changes associated with metabolic syndrome, such as altered lipid metabolism, fatty liver, hypertension, obesity, and dyslipidemia [8]. Gastrodia elata Blume is a traditional herbal medicine in Korea, China, and Japan, which has been used for the treatment of headaches, hypertension, rheumatism, and cardiovascular diseases [9]. Several major physiological substances have been identified from Gastrodia elata Blume, such as gastrodin, vanillyl alcohol, vanillin, glycoprotein, p-hydroxybenzyl alcohol, and polysaccharides including alpha-D-glucan [10–12].
Our previous studies showed that Gastrodia elata Blume exhibits anti-inflammatory and antiatherosclerotic properties by inhibiting the expression of proinflammatory cytokines in vascular endothelial cells [13, 14]. However, the effect of an ethanol extract of Gastrodia elata Blume in a high-fructose (HF) diet animal model has not yet been reported. Thus, the present study was designed to determine whether an ethanol extract of Gastrodia elata Blume (EGB) improves high-fructose diet-induced impairment of lipid metabolism and endothelial dysfunction.

## 2. Materials and Methods

### 2.1. Preparation of Gastrodia elata Blume

The Gastrodia elata Blume was purchased from the Herbal Medicine Co-operative Association, Iksan, Jeonbuk Province, Korea, in May 2012. A voucher specimen (no. HBJ1041) was deposited in the herbarium of the Professional Graduate School of Oriental Medicine, Wonkwang University, Iksan, Jeonbuk, South Korea. The dried Gastrodia elata Blume (400 g) was extracted with 4 L of 95% ethanol at room temperature for 1 week. The extract was filtered through Whatman no. 3 filter paper (Whatman International Ltd., England) and concentrated using a rotary evaporator. The resulting extract (12.741 g) was lyophilized using a freeze drier and retained until required.

### 2.2. Animal Experiments and Diet

All experimental procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Utilization Committee for Medical Science of Wonkwang University. Seven-week-old male Sprague-Dawley (SD) rats were obtained from Samtako (Osan, Korea). Rats were kept in a room automatically maintained at a set temperature (23 ± 2°C), humidity (50~60%), and 12-h light/dark cycle throughout the experiments. After 1 week of acclimatization, animals were randomly divided into three groups (n = 10 per group). The control group (Cont.)
was fed a regular diet, the high-fructose group (HF) was fed a 65% fructose diet (Research Diet, USA), and the third group (HF + EGB) was fed 65% fructose along with a single dose of 100 mg/kg/day of EGB orally for a period of 8 weeks. The regular diet was composed of 50% starch, 21% protein, 4% fat, and standard vitamin and mineral mix. The high-fructose diet was composed of 65% fructose, 20% protein, 10% fat, and standard vitamin and mineral mix.

### 2.3. Blood and Tissue Sampling

At the end of the experiments, the aorta, liver, adipose tissue (epididymal fat pads), and muscle were separated, rinsed with cold saline, and frozen until analysis. Plasma was obtained from coagulated blood by centrifugation at 3,000 rpm for 15 min at 4°C. The separated plasma was frozen at −80°C until analysis.

### 2.4. Measurements of Blood Pressure

Systolic blood pressure (SBP) was determined by the noninvasive tail-cuff plethysmography method and recorded with an automatic sphygmotonograph (MK2000; Muromachi Kikai, Tokyo, Japan). SBP was measured at week 1, week 3, and week 7. At least seven determinations were made in every session, and values are presented as the mean ± SEM of five measurements.

### 2.5. Analysis of Plasma Lipids

The levels of triglyceride in plasma were measured using commercial kits (ARKRAY, Inc., Minami-ku, Kyoto, Japan). The levels of high-density lipoprotein (HDL) cholesterol, total cholesterol, and LDL cholesterol in plasma were measured using an HDL and LDL assay kit (E2HL-100, BioAssay Systems).

### 2.6. Estimation of Blood Glucose and Oral Glucose Tolerance Test

Blood glucose concentration was measured in samples obtained from the tail vein using a glucometer (OneTouch Ultra) and test strips (LifeScan Inc., CA, USA). The oral glucose tolerance test (OGTT) was performed 2 days apart at 7 weeks.
For the OGTT, briefly, basal blood glucose concentrations were measured after 10~12 h of overnight food deprivation; then a glucose solution (2 g/kg body weight) was administered via oral gavage, and four more tail vein blood samples were taken at 30, 60, 90, and 120 min after glucose administration.

### 2.7. Preparation of Carotid Artery and Measurement of Vascular Reactivity

The carotid arteries of the rats were rapidly and carefully isolated and placed into cold Krebs solution of the following composition (mM): NaCl 118, KCl 4.7, MgSO4 1.1, KH2PO4 1.2, CaCl2 1.5, NaHCO3 25, glucose 10, pH 7.4. The carotid arteries were cleaned of connective tissue and fat and cut into rings of approximately 3 mm in length. All dissecting procedures were carried out with care to protect the endothelium from accidental damage. The carotid artery rings were suspended by means of two L-shaped stainless-steel wires inserted into the lumen in a tissue bath containing Krebs solution at 37°C and aerated with 95% O2 and 5% CO2. The isometric forces of the rings were measured using a Grass FT 03 force displacement transducer connected to a Model 7E polygraph recording system (Grass Technologies, Quincy, MA, USA). In the carotid artery rings, a passive stretch of 1 g was determined to be the optimal tension for maximal responsiveness to phenylephrine (10−6 M). The preparations were allowed to equilibrate for approximately 1 h with an exchange of Krebs solution every 10 min. The relaxant effects of acetylcholine (ACh, 10−9~10−6 M) and sodium nitroprusside (SNP, 10−10~10−5 M) were studied in carotid artery rings constricted submaximally with phenylephrine (10−6 M).

### 2.8. Western Blot Analysis in the Rat Aorta, Liver, Muscle, and Fat

Aorta, liver, muscle, and fat tissue homogenates were prepared in ice-cold buffer containing 250 mM sucrose, 1 mM EDTA, 0.1 mM phenylmethylsulfonyl fluoride, and 20 mM potassium phosphate buffer (pH 7.6).
The homogenates were centrifuged at 8,000 rpm for 10 min at 4°C; the supernatant was then centrifuged at 13,000 rpm for 5 min at 4°C and used as the cytosolic fraction for protein analysis. The recovered proteins were separated by 10% SDS-polyacrylamide gel electrophoresis and electrophoretically transferred to nitrocellulose membranes. Membranes were blocked with 5% BSA in 0.05% Tween 20-Tris-buffered saline (TBS-T) for 1 h. The antibodies against ICAM-1, VCAM-1, E-selectin, eNOS, and ET-1 (in aorta) and against AMPK and p-AMPK (in liver, muscle, and fat) were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA, USA). The nitrocellulose membranes were incubated overnight at 4°C with the primary antibodies. The blots were washed several times with TBS-T and incubated with horseradish peroxidase-conjugated secondary antibody for 1 h, and the immunoreactive bands were then visualized by enhanced chemiluminescence (Amersham, Buckinghamshire, UK). The bands were analyzed densitometrically using a Chemi-doc image analyzer (Bio-Rad, Hercules, CA, USA).

### 2.9. Histopathological Staining of Aorta, Epididymal Fat, and Liver

Aortic tissues were fixed in 10% (v/v) formalin in 0.01 M phosphate-buffered saline (PBS) for 2 days, with a change of formalin solution every day to remove traces of blood from the tissue. The tissue samples were dehydrated and embedded in paraffin, and thin sections (6 μm) of the aortic arch in each group were then cut and stained with hematoxylin and eosin (H&E). Epididymal fat and liver tissues were fixed by immersion in 4% paraformaldehyde for 48 h at 4°C and incubated with 30% sucrose for 2 days. Each fat and liver sample was embedded in OCT compound (Tissue-Tek, Sakura Finetek, Torrance, CA, USA), frozen in liquid nitrogen, and stored at −80°C. Frozen sections were cut with a Shandon Cryotome SME (Thermo Electron Corporation, Pittsburgh, PA, USA) and placed on poly-L-lysine-coated slides. Epididymal fat sections were stained with H&E.
Liver sections were assessed by using Oil Red O staining. For quantitative histopathological comparisons, each section was determined by Axiovision 4 Imaging/Archiving software. ### 2.10. Immunihistochemical Staining of Aortic Tissues Paraffin sections for immunohistochemical staining were placed on poly-L-lysine-coated slide (Fisher scientific, Pittsburgh, PA, USA). Slides were immunostained by Invitrogen’s HISOTO-STAIN-SP kits using the Labeled-(strept) Avidin-Biotin (LAB-SA) method. After antigen retrieval, slides were immersed in 3% hydrogen peroxide for 10 min at room temperature to block endogenous peroxidase activity and rinsed with PBS. After being rinsed, slides were incubated with 10% nonimmune goat serum for 10 min at room temperature and incubated with primary antibodies of ICAM-1, VCAM-1, and E-selectin (1:200; Santa Cruz, CA, USA) in humidified chambers overnight at 4°C. All slides were then incubated with biotinylated secondary antibody for 20 min at room temperature and then incubated with horseradish peroxidase-conjugated streptavidin for 20 min at room temperature. Peroxidase activity was visualized by 3,3′-Diaminobenzidine (DAB; Novex, CA) substrate-chromogen system, counterstaining with hematoxylin (Zymed, CA, USA). For quantitative analysis, the average score of 10~20 randomly selected area was calculated by using NIH Image analysis software, Image J (NIH, Bethesda, MD, USA). ### 2.11. Statistical Analysis All the experiments were repeated at least three times. The results were expressed as a mean ±SD or mean ±SE. The data was analyzed using SIGMAPLOT 10.0 program. The Student’st-test was used to determine any significant differences. P < 0.05 was considered as statistically significant. ## 2.1. Preparation ofGastrodia elata Blume TheGastrodia elataBlume was purchased from the Herbal Medicine Co-operative Association, Iksan, Jeonbuk Province, Korea, in May 2012. A voucher specimen (no. 
HBJ1041) was deposited in the herbarium of the Professional Graduate School of Oriental Medicine, Wonkwang University, Iksan, Jeonbuk, South Korea. The dried Gastrodia elataBlume (400 g) was extracted with 4 L of 95% ethanol at room temperature for 1 week. The extract was filtered through Whatman no. 3 filter paper (Whatman International Ltd., England) and concentrated using rotary evaporator. The resulting extract (12.741 g) was lyophilized by using a freeze drier and retained until required. ## 2.2. Animal Experiments and Diet All experimental procedures were carried out in accordance with the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Utilization Committee for Medical Science of Wonkwang University. Seven week-old male Sprague-Dawley (SD) rats were obtained from Samtako (Osan, Korea). Rats were kept in a room automatically maintained at a temperature (23 ± 2°C), humidity (50~60%), and 12-h light/dark cycle throughout the experiments. After 1 week of acclimatization, animals were randomly divided into three groups (n = 10 per group). Control group (Cont.) was fed regular diet, high-fructose group (HF) was fed 65% fructose diet (Research Diet, USA), and the third group (HF + EGB) was fed with 65% fructose along with a single dose of 100 mg/kg/day of EGB orally for a period of 8 weeks. The regular diet was composed of 50% starch, 21% protein, 4% fat, and standard vitamins and mineral mix. The high-fructose diet was composed of 65% fructose, 20% protein, 10% fat, and standard vitamins and mineral mix. ## 2.3. Blood and Tissue Sampling At the end of the experiments, the aorta, liver, adipose tissue (epididymal fat pads), and muscle were separated and frozen until analysis after being rinsed with cold saline. The plasma was obtained from the coagulated blood by centrifugation at 3,000 rpm 15 min at 4°C. The separation of plasma was frozen at −80°C until analysis. ## 2.4. 
Measurements of Blood Pressure Systolic blood pressure (SBP) was determined by using noninvasive tail-cuff plethysmography method and recorded with an automatic sphygmotonography (MK2000; Muromachi Kikai, Tokyo, Japan). The systolic blood pressure (SBP) was measured at week 1, week 3, and week 7, respectively. At least seven determinations were made in every session. Values were presented as the mean ± SEM of five measurements. ## 2.5. Analysis of Plasma Lipids The levels of triglyceride in plasma were measured by using commercial kits (ARKRAY, Inc., MINAMI-KU, KYOTO, Japan). The levels of high-density lipoprotein (HDL)-cholesterol, total cholesterol, and LDL-cholesterol in plasma were measured by using HDL and LDL assay kit (E2HL-100, BioAssay Systems). ## 2.6. Estimation of Blood Glucose and Oral Glucose Tolerance Test The concentration of glucose in blood was measured which was obtained from tail vein using glucometer (Onetouch Ultra) and Test Strip (Life Scan Inc., CA, USA), respectively.The oral glucose tolerance test (OGTT) was performed 2 days apart at 7 weeks. For the OGTT, briefly, basal blood glucose concentrations were measured after 10~12 h of overnight food privation; then the glucose solution (2 g/kg body weight) was immediately administered via oral gavage, and fourth more tail vein blood samples were taken at 30, 60, 90, and 120 min after glucose administration. ## 2.7. Preparation of Carotid Artery and Measurement of Vascular Reactivity The carotid arteries of the rats were rapidly and carefully isolated and placed into cold Kreb’s solution of the following composition (mM): NaCl 118, KCl 4.7, MgSO4 1.1, KH2PO4 1.2, CaCl 1.5, NaHCO3 25, glucose 10, and pH 7.4. The carotid arteries were removed to connective tissue and fat and cut into rings of approximately 3 mm in length. All dissecting procedures were carried out for caring to protect the endothelium from accidental damage. 
The carotid artery rings were suspended by means of two L-shaped stainless-steel wires inserted into the lumen in a tissue bath containing Kreb’s solution at 37°C and aerated with 95% O2 and 5% CO2. The isometric forces of the rings were measured by using a Grass FT 03 force displacement transducer connected to a Model 7E polygraph recording system (Grass Technologies, Quincy, MA, USA). In the carotid artery rings of rats, a passive stretch of 1 g was determined to be optimal tension for maximal responsiveness to phenylephrine (10−6 M). The preparations were allowed to equilibrate for approximately 1 h with an exchange of Kreb’s solution every 10 min. The relaxant effects of acetylcholine (ACh, 10−9~10−6 M) and sodium nitroprusside (SNP, 10−10~10−5 M) were studied in carotid artery rings constricted submaximally with phenylephrine (10−6 M). ## 2.8. Western Blot Analysis in the Rat Aorta, Liver, Muscle, and Fat The aorta, liver muscle, and fat tissues homogenate were prepared in ice-cold buffer containing 250 mM sucrose, 1 mM EDTA, 0.1 mM phenylmethylsufonyl fluoride, and 20 mM potassium phosphate buffer (pH 7.6). The homogenates were then centrifuged at 8,000 rpm for 10 min at 4°C, and the supernatant was centrifuged at 13,000 rpm for 5 min at 4°C, and as a cytosolic fraction for the analysis of protein. The recovered proteins were separated by 10% SDS-polyacrylamide gel electrophoresis and electrophoresis transferred to nitrocellulose membranes. Membranes were blocked by 5% BSA powder in 0.05% Tween 20-Tris-bufferd saline (TBS-T) for 1 h. The antibodies against ICAM-1, VCAM-1, E-selectin, eNOS, ET-1 (in aorta), AMPK, and p-AMPK (in liver, muscle, and fat) were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA, USA). The nitrocellulose membranes were incubated overnight at 4°C with protein antibodies. 
The blots were washed several times with TBS-T and incubated with horseradish peroxidase-conjugated secondary antibody for 1 h, and then the immunoreactive bands were visualized by using enhanced chemiluminescence (Amersham, Buchinghamshire, UK). The bands were analyzed densitometrically by using a Chemi-doc image analyzer (Bio-Rad, Hercules, CA, USA). ## 2.9. Histopathological Staining of Aorta, Epididymal Fat, and Liver Aortic tissues were fixed in 10% (v/v) formalin in 0.01 M phosphate buffered saline (PBS) for 2 days with change of formalin solution every day to remove traces of blood from tissue. The tissue samples were dehydrated and embedded in paraffin, and then thin sections (6μm) of the aortic arch in each group were cut and stained with hematoxylin and eosin (H&E). Epididymal fat and liver tissues were fixed by immersion in 4% paraformaldehyde for 48 h at 4°C and incubated with 30% sucrose for 2 days. Each fat and liver was embedded in OCT compound (Tissue-Tek, Sakura Finetek, Torrance, CA, USA), frozen in liquid nitrogen, and stored at −80°C. Frozen sections were cut with a Shandon Cryotome SME (Thermo Electron Corporation, Pittsburg, PA, USA) and placed on poly-L-lysine-coated slide. Epididymal fat sections were stained with H&E. Liver sections were assessed by using Oil Red O staining. For quantitative histopathological comparisons, each section was determined by Axiovision 4 Imaging/Archiving software. ## 2.10. Immunihistochemical Staining of Aortic Tissues Paraffin sections for immunohistochemical staining were placed on poly-L-lysine-coated slide (Fisher scientific, Pittsburgh, PA, USA). Slides were immunostained by Invitrogen’s HISOTO-STAIN-SP kits using the Labeled-(strept) Avidin-Biotin (LAB-SA) method. After antigen retrieval, slides were immersed in 3% hydrogen peroxide for 10 min at room temperature to block endogenous peroxidase activity and rinsed with PBS. 
After being rinsed, slides were incubated with 10% nonimmune goat serum for 10 min at room temperature and incubated with primary antibodies of ICAM-1, VCAM-1, and E-selectin (1:200; Santa Cruz, CA, USA) in humidified chambers overnight at 4°C. All slides were then incubated with biotinylated secondary antibody for 20 min at room temperature and then incubated with horseradish peroxidase-conjugated streptavidin for 20 min at room temperature. Peroxidase activity was visualized by 3,3′-Diaminobenzidine (DAB; Novex, CA) substrate-chromogen system, counterstaining with hematoxylin (Zymed, CA, USA). For quantitative analysis, the average score of 10~20 randomly selected area was calculated by using NIH Image analysis software, Image J (NIH, Bethesda, MD, USA). ## 2.11. Statistical Analysis All the experiments were repeated at least three times. The results were expressed as a mean ±SD or mean ±SE. The data was analyzed using SIGMAPLOT 10.0 program. The Student’st-test was used to determine any significant differences. P < 0.05 was considered as statistically significant. ## 3. Results ### 3.1. Characteristics of Experimental Animals During the entire experimental period, all groups showed significant increase in body weight. There was no significant change in body weight after 8 weeks of fructose feeding in HF group. However, treatment of EGB group showed significant decrease in body weight (439.8 ± 26.5 versus 402.5 ± 22.1, P < 0.05) (Table 1). Moreover, HF diet results in a significant increase in epididymal fat pads weight. The weight of epididymal fat pads was 60.8 ± 17.4% higher than that of the HF diet group compared with control group. However, treatment of EGB group significantly reduced the epididymal fat pads weight (57.5 ± 7.3%) compared with HF diet group (Table 1).Table 1 Effect of EGB on body weight, epididymal fat pads, and blood glucose. 
| Groups | Control | HF | HF + EGB |
| --- | --- | --- | --- |
| Initial BW (g) | 245.8 ± 7.6 | 244.4 ± 7.4 | 244.4 ± 9.0 |
| Terminal BW (g) | 449.4 ± 28.9 | 439.8 ± 26.5 | 402.5 ± 22.1 # |
| Epididymal fat pads weight (g) | 2.5 ± 0.7 | 3.9 ± 1.2 ** | 2.5 ± 0.5 ## |
| Blood glucose (mg/dL) | 94.63 ± 6.48 | 99.50 ± 7.30 | 96.70 ± 8.54 |

Values were expressed as mean ± SD (n = 10). **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF. HF: high fructose; HF + EGB: high-fructose diet with EGB; BW: body weight.

### 3.2. Effect of EGB on Blood Pressure

At the beginning of the experimental feeding period, the systolic blood pressure in all groups was approximately 95~100 mmHg, as measured by the tail-cuff technique. After 4 weeks, the systolic blood pressure of the HF group was significantly higher than that of the control group (P < 0.01). However, that of the EGB group was significantly lower than that of the HF group throughout the experimental period (136.71 ± 1.24 versus 116.4 ± 1.21 mmHg, P < 0.01) (Figure 1(a)).

Figure 1: Effects of EGB on systolic blood pressure (a) and oral glucose tolerance test (b). Values were expressed as mean ± SE (n = 10). *P < 0.05, **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF.

### 3.3. Effect of EGB on Blood Glucose Level and Oral Glucose Tolerance Test

Plasma glucose levels were not statistically different in HF diet rats with chronic treatment of EGB (Table 1). The oral glucose tolerance test was carried out to check insulin resistance in high-fructose diet rats after 8 weeks. The results showed that the HF diet group maintained a significant increase in blood glucose levels at 30, 60, and 90 min (P < 0.01) and at 120 min (P < 0.05). However, the plasma glucose levels with EGB treatment were significantly decreased at 30 and 90 min as compared with the HF diet group (P < 0.05) (Figure 1(b)).

### 3.4. Effect of EGB on Plasma Lipids

The group fed a HF diet displayed increased plasma triglyceride, total cholesterol, and LDL-c levels; however, EGB treatment significantly decreased the plasma triglyceride levels (272.67 ± 107.0 versus 177.33 ± 59.6 mg/dL, P < 0.05), total cholesterol levels (102.94 ± 19.7 versus 67.79 ± 5.8 mg/dL, P < 0.01), and LDL-c levels (44.56 ± 8.1 versus 24.28 ± 3.1 mg/dL, P < 0.01). Besides, the plasma HDL-c levels in the EGB group increased compared with the HF diet group (16.02 ± 2.9 versus 20.2 ± 2.2 mg/dL, P < 0.05) (Table 2).

Table 2: Effect of EGB on plasma lipid levels.

| Groups | Control | HF | HF + EGB |
| --- | --- | --- | --- |
| T-Cho (mg/dL) | 67.86 ± 7.6 | 102.94 ± 19.7 ** | 67.79 ± 5.8 ## |
| TG (mg/dL) | 83.83 ± 16.4 | 272.67 ± 107.0 ** | 177.33 ± 59.6 # |
| HDL-c (mg/dL) | 13.75 ± 1.3 | 16.02 ± 2.9 | 20.2 ± 2.2 # |
| LDL-c (mg/dL) | 28.37 ± 3.9 | 44.56 ± 8.1 ** | 24.28 ± 3.1 ## |

Values were expressed as mean ± SD (n = 10). **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF. HF: high fructose; HF + EGB: high-fructose diet with EGB; T-Cho: total cholesterol; TG: triglyceride; HDL-c: high-density lipoprotein cholesterol; LDL-c: low-density lipoprotein cholesterol.

### 3.5. Effect of EGB on Vascular Tension

Vascular responses to ACh, an endothelium-dependent vasodilator (1 × 10−9 to 1 × 10−6 M), and SNP, an endothelium-independent vasodilator (1 × 10−10 to 1 × 10−7 M), were measured in the carotid artery. ACh-induced relaxation of carotid artery rings was significantly decreased in the HF diet group compared with the control group (1 × 10−7.5 to 1 × 10−6 M, P < 0.05). However, this impairment of vasorelaxation was remarkably attenuated by treatment with EGB (1 × 10−8.5 to 1 × 10−6.5 M, P < 0.01; 1 × 10−6 M, P < 0.05) (Figure 2(a)). On the other hand, SNP-induced relaxation of carotid artery rings showed no significant difference among the groups (Figure 2(b)).

Figure 2: Effect of EGB on relaxation of carotid arteries. Cumulative concentration-response curves to acetylcholine (ACh), an endothelium-dependent vasodilator (a), and sodium nitroprusside (SNP), an endothelium-independent vasodilator (b), in phenylephrine-precontracted carotid arteries from experimental rats. Values were expressed as mean ± SE (n = 5). *P < 0.05 versus Cont.; #P < 0.05, ##P < 0.01 versus HF.

### 3.6. Effect of EGB on the Morphology of Aorta and Epididymal Fat Pads

EGB effectively decreased blood pressure and attenuated the impairment of vasorelaxation. Thus, we examined histological changes in the thoracic aorta by H&E staining. Figure 3 shows that the thoracic aorta of the HF diet group revealed roughened endothelial layers and increased tunica intima-media thickness compared with the control group (+24.13%, P < 0.01). However, the EGB-treated group significantly maintained the smooth character of the intimal endothelial layers and showed decreased tunica intima-media thickness in aortic sections (−16.10%, P < 0.01) (Figures 3(a) and 3(c)).

Figure 3: Effects of EGB on aortic wall and adipocytes in HF diet rats. Representative microscopic photographs of H&E-stained sections of the thoracic aorta (a) and epididymal fat pads (b) in HF diet rats. The lower panels indicate the length of the intima-media (c) and the size of adipose cells (d) (magnification ×400). Values were expressed as mean ± SE (n = 5). **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF.

Because EGB effectively reduced the epididymal fat pad weight, we prepared frozen sections of epididymal fat pads and stained them with H&E. Adipocyte hypertrophy was induced by the HF diet compared with the control group (+40.97%, P < 0.01). However, EGB treatment significantly decreased the hypertrophy of adipocytes (−13.04%, P < 0.05) (Figures 3(b) and 3(d)).

### 3.7. Effect of EGB on the Hepatic Lipids

To investigate fat accumulation in the liver in all experimental groups, we prepared frozen sections of liver and stained them with Oil Red O. Lipid droplets were detected in the HF diet group. However, EGB treatment significantly decreased the number of lipid droplets compared with the HF diet group (Figure 4).

Figure 4: Effect of EGB on fatty liver in HF diet rats. Representative microscopic photographs of Oil Red O-stained sections of the liver.

### 3.8. Effect of EGB on the Expression Levels of Adhesion Molecules, eNOS, and ET-1 in Aorta

Protein expression levels of VCAM-1, ICAM-1, E-selectin, eNOS, and ET-1 in the aorta were determined by western blotting. The adhesion molecule (VCAM-1, ICAM-1, and E-selectin) and ET-1 protein levels were increased in the HF diet group compared with the control group. However, EGB treatment significantly decreased these expression levels compared with the HF diet group. Moreover, we examined eNOS expression levels to evaluate vascular endothelial function. The eNOS protein levels were decreased in the HF diet group compared with the control group. However, EGB treatment increased the eNOS expression levels compared with the HF diet group (Figure 5).

Figure 5: Effects of EGB on the expression of adhesion molecules, eNOS, and ET-1 in the aorta of HF diet rats. Each electrophoretogram is representative of the results from three individual experiments.

Immunohistochemistry was performed to determine the direct expression of adhesion molecules in the aortic wall. The expression of adhesion molecules such as VCAM-1, ICAM-1, and E-selectin was increased in the HF diet group (P < 0.01); however, EGB treatment significantly decreased these expression levels (VCAM-1 and ICAM-1, P < 0.01; E-selectin, P < 0.05) (Figure 6).

Figure 6: Effects of EGB on VCAM-1 (a), ICAM-1 (b), and E-selectin (c) immunoreactivity in aortic tissues of HF diet rats. Representative immunohistochemistry (left) and quantifications (right) are shown. Values were expressed as mean ± SE. **P < 0.01 versus Cont.; #P < 0.05, ##P < 0.01 versus HF.

### 3.9. Effect of EGB on the Expression Levels of AMPK in Liver, Muscle, and Fat Tissues

Because EGB effectively suppressed the development of impaired glucose tolerance, dyslipidemia, fatty liver, and endothelial dysfunction, the expression of AMPK was examined in liver, muscle, and fat tissues. The expression of AMPK was significantly decreased in the HF diet group. However, EGB treatment increased the expression levels in liver, muscle, and fat tissues (Figure 7).

Figure 7: Effects of EGB on the expression of AMPK and p-AMPK in the liver (a), muscle (b), and fat (c) of HF diet rats. Each electrophoretogram is representative of the results from three individual experiments.

## 4. Discussion

Herb, Acupuncture, and Natural Medicine (HAN), one of the most ancient and revered forms of healing, has been used to diagnose, treat, and prevent disease for over 3,000 years. HAN is now used worldwide as an effective means of overcoming disease. Gastrodia elata is a well-known traditional Korean medicinal herb, used specifically for promoting blood circulation to remove blood stasis. In the present study, we provide evidence for the beneficial effect of Gastrodia elata on lipid metabolism and endothelial dysfunction in a high fructose-induced metabolic syndrome rat model.

Fructose is a lipogenic component; its consumption promotes the development of an atherogenic lipid profile and elevation of postprandial hypertriglyceridemia [15, 16]. In addition, HF diet animals develop hypertriglyceridemia, obesity, impaired glucose tolerance, fatty liver, increased SBP, and vascular remodeling [17, 18]. In the present study, the HF diet clearly increased visceral epididymal fat pad weight, resulting from the increases in triglyceride and LDL cholesterol. Treatment with EGB lowered epididymal fat pad weight, triglyceride, and LDL cholesterol levels, whereas it elevated HDL cholesterol levels, which assist lipid metabolism. Thus, EGB improves lipid metabolism through the decrease of triglyceride and LDL cholesterol. Although epididymal fat pad weight increased, body weight did not differ between the control diet and HF diet groups. We suppose that the experimental period should be longer than the present 8 weeks for body weight to increase.
Even so, EGB appears to be effective against obesity in HF diet rats, since EGB significantly decreased the HF diet-induced increase in body weight.In addition, the disorder of lipid levels induced by the HF diet was associated with aortic lesions. Histological analysis demonstrated that the endothelial layers were rougher in aortic sections of HF diet rats, associated with a trend towards increased development of atherosclerosis. Intima-media thickness of the thoracic aorta has been shown to correlate with the prognosis and extent of coronary artery disease [19]. Treatment with EGB maintained smooth and soft intimal endothelial layers and decreased intima-media thickness in aortic sections of HF diet rats.Dyslipidemia, impaired glucose tolerance, and fatty liver are major features associated with metabolic syndrome in HF diet rats [19, 20]. Fructose induces impaired glucose tolerance via the elevation of plasma triglyceride levels. In addition, a previous study demonstrated that an elevated fructose diet is associated with impaired glucose tolerance and endothelial dysfunction preceding the development of hypertension [21]. Impaired glucose tolerance plays an important role in the development of such abnormalities as insulin resistance, type 2 diabetes, and dyslipidemia [22]. Similarly, the HF diet induced impaired glucose tolerance and dyslipidemia, whereas treatment with EGB improved impaired glucose tolerance together with amelioration of dyslipidemia. In addition, EGB significantly suppressed the increase in adipocyte size and fatty liver. Thus, these results suggest that EGB may be useful to suppress the development of atherosclerotic lesions and obesity and to ameliorate lipid metabolism in this metabolic syndrome model.Endothelial dysfunction plays an important role in hypertension, vascular inflammation, other cardiovascular diseases, and metabolic syndrome [23, 24].
In this experimental model, the expression of ET-1 and of inducible adhesion molecules such as ICAM-1, VCAM-1, and E-selectin in the arterial wall represents a key event in the development of atherosclerosis. EGB ameliorated vascular inflammation by downregulation of ET-1 as well as of ICAM-1, VCAM-1, and E-selectin expression in the thoracic aorta. Several studies have shown that lowering of blood pressure and endothelial function are related to an increase of eNOS reactivity, thereby increasing production of NO, which acts as a strong vasodilator [25, 26]. In the present study, EGB upregulated eNOS levels in the aorta and recovered the HF diet-induced impairment of endothelium-dependent vasorelaxation. However, endothelium-independent vasodilator-induced vasorelaxation was not affected by EGB. These results suggest that the hypotensive effect of EGB is mediated by the endothelium-dependent NO/cGMP pathway. Histological study revealed that EGB suppressed vascular inflammation compatible with the processes of atherosclerosis. In fact, endothelial dysfunction was initially identified as impaired vasodilation to specific stimuli such as ACh or bradykinin; therefore, improvement of endothelial function is predicted to regulate lipid homeostasis [27]. Thus, the antihypertensive and antivascular inflammatory effects of EGB contribute to its beneficial effects on endothelial function and lipid metabolism in metabolic syndrome.To clarify the mechanism by which EGB suppresses the development of visceral obesity, impaired glucose tolerance, dyslipidemia, and fatty liver, the study focused on the expression of AMP-activated protein kinase (AMPK). There is a strong correlation between a low activation state of AMPK and metabolic disorders associated with insulin resistance, fat deposition, and dyslipidemia [28–30]. AMPK is a key regulator of glucose and lipid metabolism.
In the liver and muscle, activation of AMPK results in enhanced fatty acid oxidation and decreased production of glucose, cholesterol, and triglycerides [31]. Recently, Misra reported that the suspected role of AMPK makes it a promising tool to prevent and/or treat metabolic disorders [32]. Also, activation of the AMPK signaling pathway is associated with eNOS regulation and alteration of the systemic endothelin pathway in fructose diet animal models [25]. AMPK is required for adiponectin-, thrombin-, and histamine-induced eNOS phosphorylation and subsequent NO production in endothelium [33]. Consistently, our study showed that EGB markedly induced not only phosphorylation of AMPKα in the liver, muscle, and fat, but also activation of eNOS in the aorta. It could be hypothesized that EGB activates AMPK-mediated eNOS pathways, which could in turn recover HF diet-induced metabolic disorders. ## 5. Conclusion These results suggest that EGB ameliorates lipid metabolism, impaired glucose tolerance, hypertension, and endothelial dysfunction in HF diet-induced metabolic syndrome, at least in part, via activation of AMPK and the eNOS/NO pathway. Therefore, Gastrodia elata Blume might be a beneficial therapeutic approach for metabolic syndrome. --- *Source: 101624-2014-02-26.xml*
2014
# Modified Integral Homotopy Expansive Method to Find Power Series Solutions of Linear Ordinary Differential Equations about Ordinary Points **Authors:** Uriel Filobello-Nino; Hector Vazquez-Leal; Jesus Huerta-Chua; Victor Manuel Jimenez-Fernandez; Agustin L. Herrera-May; Darwin Mayorga-Cruz **Journal:** Discrete Dynamics in Nature and Society (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1016251 --- ## Abstract This article presents the Modified Integral Homotopy Expansive Method (MIHEM), which is utilized to find power series solutions of linear ordinary differential equations about ordinary points. This method is a modification of the integral homotopy expansive method. The proposal consists in providing a versatile, easy to employ, and systematic method. Thus, we will see that MIHEM requires only elementary integrations and that the initial function is always the same for linear ordinary differential equations of the same order, which helps to ease the procedure. Therefore, it is expected that this article contributes to changing the idea that an effective method has to be long and difficult, as is the case of the Power Series Method (PSM). The method expresses a differential equation as an integral equation, and the integrand of that equation in terms of a homotopy. We will see along this work the convenience of this procedure. --- ## Body ## 1. Introduction The subject of nonlinear differential equations is important because many physical phenomena are described by this type of equation; however, finding solutions for such equations is usually a difficult task.
Hence the importance of developing approximate methods to find their solutions, such as variational approaches [1, 2], the tanh method [3], the exp-function method [4, 5], Adomian's decomposition method [6, 7], parameter expansion [8], the homotopy perturbation method [9–12], the homotopy analysis method [13, 14], the perturbation method [15–17], and the integral homotopy expansive method [18], among many others. In general, these equations are still an open research topic.In contrast, finding solutions of linear differential equations with variable coefficients is considered a closed subject, because their theory was established a long time ago. Most of the time, the methods to find solutions for these equations are presented in terms of power series, because it is difficult to find exact solutions. We will see that those solutions are established about both ordinary points and singular points. As a matter of fact, the case of so-called regular singular points is important because a Frobenius series solution always exists for these equations, despite the difficulties that singular points carry [19–21]. Most of the time, the search for solutions of linear differential equations is concerned with obtaining approximate solutions rather than with an alternative proposal for obtaining power series solutions as an alternative to basic PSM. Thus, [22] obtained an exact solution for the two cases corresponding to the values n=0 and n=1 of a parameter of the singular Lane-Emden equation (LEE), which is related to a great number of phenomena in physics, such as stellar structure, among others. We note that for the above-mentioned cases LEE is a linear ordinary differential equation with variable coefficients. Although [22] was originally conceived to find approximations for nonlinear differential equations, the above-mentioned article obtained solutions about the regular singular point x=0.
We note that for cases different from n=0, 1, and 5, the Lane-Emden equation is nonlinear. On the other hand, [23] proposed the Adomian decomposition method and the differential transform method in order to solve once more the Lane-Emden equation for the same values n=0 and n=1. Although these methods solved the problem again, they were long in comparison with [22]. As a matter of fact, the Adomian decomposition method and the homotopy analysis method (HAM) are usually long and cumbersome, although effective. On the other hand, [13] proposed the HAM method in order to find analytical solutions, but for the case of linear systems of partial differential equations, with good precision. Besides, [24] found an analytical approximate solution for the important Bessel linear differential equation of order zero by using the exponentially fitted collocation approximation method. On the other hand, [25] proposed the spectral method in order to solve linear second order differential equations with constant coefficients, and [9] solved, among others, the important case of linear Euler-Lagrange equations. Subsequently, [12] also solved variational problems by using the Laplace Transform Homotopy Perturbation Method; besides nonlinear problems, it found solutions for linear problems about regular singular points. In all these cases, the goal was to obtain a solution for the differential equation to be solved, but not to provide an alternative method to get power series solutions.In particular, this article is concerned with obtaining the general solution of a linear ODE about an ordinary point through the use of power series. Although [26] employed HPM and nonlinearities distribution HPM with the objective of finding power series solutions for linear ODEs, that article focused on the case of initial value problems due to the nature of HPM. It is well known that the basic HPM method requires knowledge of the initial conditions of the problem to be solved.
This work not only seeks to provide a general solution for linear equations about ordinary points, but also provides a proof that its series solutions are equivalent to those obtained by PSM. This point is important because, as is well known, although PSM is usually long and cumbersome, there are no alternatives that simplify the procedure. This work proposes an alternative method, the Modified Integral Homotopy Expansive Method (MIHEM). We will see that MIHEM is relatively easy to use and provides an adequate alternative method, based on the solution of elementary integrals, for finding general series solutions in the case of ordinary points.The paper is organized as follows. Section 2 introduces a brief review of linear differential equations with variable coefficients. Section 3 presents the basic idea of the proposed MIHEM method. In Section 4, we apply MIHEM with the purpose of finding power series solutions of linear differential equations about ordinary points. The main results obtained in this work are discussed in Section 5. Section 6 has to do with future work as a continuation of this method, and with the conclusions. Finally, the Appendix presents the equivalence of PSM and MIHEM. ## 2. Linear Differential Equations As found in the literature, linear differential equations with constant coefficients have exact analytical solutions [19, 20]; in contrast, the methods to find solutions of linear differential equations with variable coefficients are based on infinite series expansions.To start, consider a differential equation of the form:(1) $y''(x) + p(x)y'(x) + q(x)y(x) = 0$.If the functions $p(x)$ and $q(x)$ are analytic at $x = x_0$ (that is, they can be represented by means of a power series in $x - x_0$ with positive convergence radius [19, 20]), then $x = x_0$ is an ordinary point of the differential equation; otherwise, the point $x = x_0$ is considered a singular point [19]. This leads us to present two methods for the solution of (1). ### 2.1.
Power Series The simplest case occurs when the solutions of (1) are expressed in the neighbourhood of an ordinary point $x_0$ [19, 20]. For this case, the solutions are sought in the power series form(2) $y = \sum_{n=0}^{\infty} c_n (x - x_0)^n$, where the $c_n$ are unknown coefficients, which are determined by substituting (2) into the equation to be solved. This work is concerned with the case of ordinary points. ### 2.2. Frobenius Series Singular points can be classified into regular and irregular [19]. The Frobenius method, for the case of regular singular points, allows us to find power series solutions of the form(3) $y = \sum_{n=0}^{\infty} c_n (x - x_0)^{n+r}$, where $r$ is a parameter to be determined besides the coefficients $c_n$.In accordance with the theory of linear differential equations, the general solution of (1), for the case of ordinary points, can be expressed as the superposition of two linearly independent series of the form (2), and, for the simpler case of regular singular points, of the form (3). An important result that accounts for the nature of solutions near ordinary points is contained in the following fundamental theorem [20].Theorem 1. Let $x_0$ be an ordinary point of the differential equation (1) and $a_0$, $a_1$ arbitrary constants. Then there is a unique function $y(x)$ that is analytic at $x_0$, is a solution of (1) in a certain neighbourhood of this point, and also satisfies the initial conditions $y(x_0) = a_0$, $y'(x_0) = a_1$. Moreover, if the power series expansions of $P(x)$ and $Q(x)$ are valid in an interval $|x - x_0| < R$, $R > 0$, then the power series expansion of this solution is also valid on the same interval [20]. We will see later the importance of this theorem and its relation with the results obtained in this article. Next, we state the following fundamental theorem for non-homogeneous second order linear differential equations, which will prove useful in the next section [20].Theorem 2. Let $P(x)$, $Q(x)$, and $R(x)$ be continuous functions on an interval $a \le x \le b$.
If $x_0$ is any point in this interval, and $y_0$ and $y_0'$ are any numbers whatever, then the initial value problem $y''(x) + P(x)y'(x) + Q(x)y(x) = R(x)$, $y(x_0) = y_0$, $y'(x_0) = y_0'$, has one and only one solution $y = y(x)$ on the interval $a \le x \le b$ [20]. ## 3. MIHEM Method The contribution of this work is focused on the case of ordinary points, employing a novel method, the Modified Integral Homotopy Expansive Method (MIHEM). We will see that the MIHEM method is easy to use and provides an adequate way to find general series solutions in the case where the solutions of (1) are expressed in the neighbourhood of an ordinary point $x_0$. Initially, the integral homotopy expansive method (IHEM) [18] was conceived above all as a method to get approximate and exact solutions of ordinary differential equations. This work will show the manner in which MIHEM modifies IHEM in order to widen the application of the integral homotopy expansive method so as to solve linear differential equations. In summary, MIHEM proceeds as follows. Given a linear ordinary differential equation, MIHEM expresses it as an integral equation, which in a certain sense is solved for $y$. At this step, the integrand of the equation is expressed in terms of a homotopy, and it is assumed that $y$ is expressed as a power series in the homotopy parameter $p$ [10, 11]. At this point, unlike in IHEM, we will take the initial conditions of the problem to be zero (in case the problem provided such conditions) and will propose as initial function $y_0(x) = A + Bx$, where $A$ and $B$ will turn out to be the arbitrary constants of the general solution.
The rest of the procedure is the same as in the IHEM method [18]: equating the coefficients of identical powers of $p$ on both sides of the integral equation, it is possible to determine a sequence of unknown functions which determines the MIHEM solution.To understand how MIHEM works, consider a general nonlinear differential equation, which can be expressed as [10, 11](4) $L(u) + f(x) = 0$, $x \in \Omega$, with the following boundary conditions(5) $F(u, \partial u / \partial n) = 0$, $x \in \Gamma$, where $F$ is a boundary operator, $f(x)$ is a given analytic function, $\Gamma$ is the boundary of the domain $\Omega$, and $L$ is a linear operator.From (4), we solve for $u$ in terms of the integral equation:(6) $u(x) = A + Bx + \int_{x_0}^{x}\!\int_{x_0}^{t} \left[-a_1(s)u'(s) - a_0(s)u(s) - f(s)\right] ds\, dt$.In (6) it is assumed that $L$ is a second order linear operator corresponding to the general linear differential equation of the form(7) $u''(x) + a_1(x)u'(x) + a_0(x)u(x) + f(x) = 0$.According to the method, a homotopy is introduced, assuming that the unknown function $u$ is expressed as a power series in the homotopy parameter $p$:(8) $u(x) = v_0(x) + v_1(x)p + v_2(x)p^2 + \dots$Besides, another assumption of the proposed method, unlike the IHEM method [18], is that the integral version of the problem does not contain the initial conditions outside the integral, so that (6) is rewritten as(9) $u(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} \left[-a_1(s)u'(s) - a_0(s)u(s) - f(s)\right] ds\, dt$.From all the above, we propose the following iterative process(10) $\sum_{n=0}^{\infty} p^n \nu_n(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} \left\{(1-p)w(s) + p\left[-a_1(s)\sum_{n=0}^{\infty} p^n \nu_n'(s) - a_0(s)\sum_{n=0}^{\infty} p^n \nu_n(s) - f(s)\right]\right\} ds\, dt$, where $p$ is the homotopy parameter belonging to the interval $[0,1]$, and $w(x)$ is a function introduced using the flexibility of the homotopy method.
Although (10) could be useful, the final version of MIHEM is obtained by taking $w(x) = 0$, in such a way that(11) $\sum_{n=0}^{\infty} p^n \nu_n(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} p\left[-a_1(s)\sum_{n=0}^{\infty} p^n \nu_n'(s) - a_0(s)\sum_{n=0}^{\infty} p^n \nu_n(s) - f(s)\right] ds\, dt$.In accordance with the proposed method, we propose as initial function(12) $y_0 = A + Bx$, where $A$ and $B$ will turn out to be the arbitrary constants of the general solution.Then, after equating identical powers of $p$, the values of the sequence $\nu_0, \nu_1, \nu_2, \dots$ can be found by solving in a systematic way the integrals arising at the different orders:(13) $\nu_0(x) = A + Bx$, $\nu_1(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} \left[-a_1(s)\nu_0'(s) - a_0(s)\nu_0(s) - f(s)\right] ds\, dt$, $\nu_2(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} \left[-a_1(s)\nu_1'(s) - a_0(s)\nu_1(s)\right] ds\, dt$, $\nu_3(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} \left[-a_1(s)\nu_2'(s) - a_0(s)\nu_2(s)\right] ds\, dt$, …, $\nu_j(x) = \int_{x_0}^{x}\!\int_{x_0}^{t} \left[-a_1(s)\nu_{j-1}'(s) - a_0(s)\nu_{j-1}(s)\right] ds\, dt$, ….In this way, to get an approximate or exact solution of (7), the results of (13) are substituted into (8) and, taking the limit $p \to 1$, the following solution is obtained [10, 11]:(14) $U(x) = v_0(x) + v_1(x) + v_2(x) + v_3(x) + \dots$For the case of first order linear differential equations the procedure is very similar, but instead of (12) the initial function is $y_0 = A$.In order to prove that (14) is indeed a solution of (9), we have to show that the function $U(x)$ is continuous. We will appeal to the following argument, based on Theorem 2 already mentioned, and we restrict our considerations to the conditions imposed by this theorem. We will assume that the coefficient functions $a_1(x)$, $a_0(x)$, and $f(x)$ of (7) are continuous on an interval $a \le x \le b$. Then, given a point $x_0$ in this interval and any numbers $y_0$ and $y_0'$, the ODE (7) subject to $y(x_0) = y_0$ and $y'(x_0) = y_0'$ has one and only one solution $y = y(x)$ in $a \le x \le b$ (we note that many times the above-mentioned interval can be extended to $(-\infty, \infty)$ [20]). The proposed MIHEM method expresses (7) in terms of the integral equation (9), and the solution of the problem is expressed through the iterative process proposed by (10). We begin from the initial function (12), where $A$ and $B$ correspond to the initial conditions $y(x_0) = A$ and $y'(x_0) = B$. Next, the first iteration of the process, corresponding to $\nu_1(x)$ (see (13)), is given by integrals of $a_1(x)$, $a_0(x) \cdot y_0(x)$, and $f(x)$.
Since $y_0(x)$ is a polynomial function, in principle these integrals provide continuous functions. On the other hand, given that $a_1(x)$, $a_0(x)$, and $f(x)$ are assumed to be continuous in $a \le x \le b$, it is possible to express them in terms of power series in $x - x_0$, assuming that the above-mentioned functions are analytic at $x_0$ with positive convergence radius $R$. From the properties of these series in their convergence interval, the operations mentioned above, even the integrals involved, are valid for $|x - x_0| < R$. As a consequence, it is possible to make the necessary rearrangements to express the function $\nu_1(x)$ as a convergent series for the values $|x - x_0| < R$, and for the same reason $\nu_1(x)$ is continuous. Similar arguments apply to the other integrals expressed by $\nu_2(x), \nu_3(x), \dots$ Thus, for instance, $\nu_2(x)$ is obtained by integrating the products of series that emanate from $a_1(x) \cdot \nu_1'(x)$ and $a_0(x) \cdot \nu_1(x)$.From the above argument, these operations are valid in $a \le x \le b$. As a matter of fact, the differentiation $\nu_1'(x)$ is a valid operation inside the convergence interval, and the products of convergent series mentioned above are performed by applying the distributive property and grouping like terms, in such a way that convergent series result for $|x - x_0| < R$, even after performing the integrations required by MIHEM. Consequently, $\nu_2(x)$ is represented by a convergent series in $|x - x_0| < R$ and for the same reason is a continuous function on the same interval. It is possible to continue in this way and note that (14) (after applying the limit $p \to 1$) is expressed as a sum of series convergent in this interval, which for the same reason can be rearranged and added term-wise. Thus, we finally get a convergent series valid in $|x - x_0| < R$ which represents a continuous function.
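When the coefficient functions are polynomials, the iterative process (13)–(14) reduces to exact rational arithmetic on truncated coefficient lists, which makes it easy to sketch mechanically. The following is our own minimal illustration, not code from the paper; the helper names (`mihem_series`, `integrate`, etc.) are ours, and the test equation $y'' + y = 0$ is chosen only because its series solutions are well known:

```python
from fractions import Fraction as F

def integrate(c):
    # term-wise antiderivative with zero constant: x^n -> x^(n+1)/(n+1)
    return [F(0)] + [ci / F(i + 1) for i, ci in enumerate(c)]

def derive(c):
    # term-wise derivative: x^n -> n*x^(n-1)
    return [ci * i for i, ci in enumerate(c)][1:] or [F(0)]

def mul(a, b, N):
    # product of two truncated series, keeping degrees < N
    out = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

def add(a, b):
    g = lambda c, i: c[i] if i < len(c) else F(0)
    return [g(a, i) + g(b, i) for i in range(max(len(a), len(b)))]

def mihem_series(a1, a0, f, A, B, iters=6, N=12):
    """Iterate (13): nu_j = double integral of
    [-a1*nu'_{j-1} - a0*nu_{j-1} - (f, first iteration only)],
    starting from nu_0 = A + B*x, then sum the nu_j as in (14).
    a1, a0, f are polynomial coefficient lists (constant term first)."""
    nu = [F(A), F(B)]          # initial function A + Bx, eq. (12)
    total = nu[:]
    for j in range(1, iters + 1):
        integrand = add(mul([-c for c in a1], derive(nu), N),
                        mul([-c for c in a0], nu, N))
        if j == 1:
            integrand = add(integrand, [-F(c) for c in f])
        nu = integrate(integrate(integrand))[:N]
        total = add(total, nu)
    return total[:N]
```

For $y'' + y = 0$ (that is, `a1=[0]`, `a0=[1]`, `f=[0]`) with $(A, B) = (1, 0)$, the summed iterates reproduce the Maclaurin coefficients of $\cos x$, and $(A, B) = (0, 1)$ gives those of $\sin x$, in line with Theorem 1 for this ordinary point.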
Given that, in accordance with Theorem 2, the posed second order inhomogeneous problem possesses a unique solution, the MIHEM solution (14) is a continuous function that indeed represents the solution of the proposed problem under the conditions established in Theorem 2, and for the same reason it is also a solution of (9), assuming the conditions of the aforementioned theorem hold.To ease the application of MIHEM, the homotopy technique allows us to introduce the homotopy parameter $p$ into the coefficient functions $a_1(x)$, $a_0(x)$, and $f(x)$, that is, $a_1(px)$, $a_0(px)$, and $f(px)$, in the iterative process given by (10). This procedure is especially useful when one of these functions is not a polynomial [19]. Example 4 shows this procedure. ## 4. Application of MIHEM to Find Power Series Solutions to Linear Differential Equations for the Case of Ordinary Points Next, we exemplify the use of the MIHEM method with the aim of solving linear differential equations for the case of ordinary points.Example 1. Obtain the general solution of the following linear first order differential equation:(15) $y' + 2xy = 0$. This example compares in detail the MIHEM method and the Power Series Method (PSM). Since the point $x_0 = 0$ is an ordinary point of (15), the Power Series Method is appropriate for obtaining the solution of (15). ### 4.1.
Power Series Method In accordance with PSM, we assume a solution of the form (see (2)) [19]:(16) $y = \sum_{n=0}^{\infty} c_n x^n$. Next, we substitute (16) into (15) to obtain(17) $\sum_{n=1}^{\infty} n c_n x^{n-1} + \sum_{n=0}^{\infty} 2 c_n x^{n+1} = 0$. With the purpose of adding the sums, we rewrite (17) in the following manner:(18) $c_1 + \sum_{n=2}^{\infty} n c_n x^{n-1} + \sum_{n=0}^{\infty} 2 c_n x^{n+1} = 0$. Following the power series algorithm, we change the dummy index as follows: in the first sum we substitute $k = n - 1$, and in the second one $k = n + 1$, so that (18) is written as(19) $c_1 + \sum_{k=1}^{\infty} (k+1) c_{k+1} x^k + \sum_{k=1}^{\infty} 2 c_{k-1} x^k = 0$. After adding the sums, we rewrite (19) in the compact form(20) $c_1 + \sum_{k=1}^{\infty} \left[(k+1) c_{k+1} + 2 c_{k-1}\right] x^k = 0$; thus, equating to zero the coefficients of the different powers of $x$, we get(21) $c_1 = 0$,(22) $(k+1) c_{k+1} + 2 c_{k-1} = 0$, for $k = 1, 2, 3, \dots$ Equation (22) is called the recurrence relation, which is employed to determine the $c_k$ coefficients. Given that $k + 1 \neq 0$, (22) can be written as(23) $c_{k+1} = -\dfrac{2 c_{k-1}}{k+1}$, for $k = 1, 2, 3, \dots$ After iterating (23), we obtain the values(24) $k=1$: $c_2 = -c_0$; $k=2$: $c_3 = 0$; $k=3$: $c_4 = -\frac{2c_2}{4} = \frac{c_0}{2!}$; $k=4$: $c_5 = 0$; $k=5$: $c_6 = -\frac{2c_4}{6} = -\frac{c_0}{3!}$; $k=6$: $c_7 = 0$; $k=7$: $c_8 = -\frac{2c_6}{8} = \frac{c_0}{4!}$; … Therefore, after substituting (24) into (16), we get(25) $y(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + \dots = c_0 - c_0 x^2 + \frac{c_0}{2!}x^4 - \frac{c_0}{3!}x^6 + \frac{c_0}{4!}x^8 - \dots$, or(26) $y(x) = c_0\left(1 - x^2 + \frac{1}{2!}x^4 - \frac{1}{3!}x^6 + \frac{1}{4!}x^8 - \dots\right)$, where $c_0$ is arbitrary, and for the same reason we have found the general solution of (15).As a matter of fact, we recognize the series (26) as $e^{-x^2}$, so the general solution can be written as(27) $y(x) = c_0 e^{-x^2}$. We note that this procedure is long and cumbersome even for the case of a first order linear differential equation. ### 4.2.
MIHEM Method In order to employ the proposed method, we express (15) as an integral equation:(28) $y(x) = -\int_0^x 2s\, y(s)\, ds$.In accordance with MIHEM, we express (28) as follows:(29) $\sum_{n=0}^{\infty} p^n \nu_n(x) = -2p \int_0^x s \sum_{n=0}^{\infty} p^n \nu_n(s)\, ds$.Given that the differential equation is of first order, we propose as initial function the constant(30) $\nu_0(x) = A$.In such a way that, iterating (29), we get(31) $\nu_1(x) = -2\int_0^x s\, \nu_0(s)\, ds = -2\int_0^x sA\, ds$; thus,(32) $\nu_1(x) = -Ax^2$, $\nu_2(x) = -2\int_0^x s\, \nu_1(s)\, ds = 2\int_0^x A s^3\, ds$; therefore,(33) $\nu_2(x) = \dfrac{Ax^4}{2}$, $\nu_3(x) = -2\int_0^x s\, \nu_2(s)\, ds = -2\int_0^x \dfrac{A s^5}{2}\, ds$, $\nu_3(x) = -\dfrac{Ax^6}{6}$, $\nu_4(x) = -2\int_0^x s\, \nu_3(s)\, ds = 2\int_0^x \dfrac{A s^7}{6}\, ds$; integrating,(34) $\nu_4(x) = \dfrac{Ax^8}{24}$.After substituting (31)–(34) into (14) we get(35) $y(x) = A\left(1 - x^2 + \frac{1}{2!}x^4 - \frac{1}{3!}x^6 + \frac{1}{4!}x^8 - \dots\right)$; given that $A$ is arbitrary, (35) is a general solution of (15). As a matter of fact, this result is the same as the one obtained by the Power Series Method (27). We emphasize the ease with which MIHEM yields (35).Example 2. Obtain the general solution of the following linear second order differential equation:(36) $y'' - (1+x)y = 0$. This example will compare in detail the MIHEM and the Power Series Method [19]. Given that the point $x_0 = 0$ is an ordinary point of (36), we will employ the Power Series Method in order to obtain a solution. In accordance with PSM, we assume a solution of the form (16):(37) $y = \sum_{n=0}^{\infty} c_n x^n$. Since the PSM procedure is extremely long, we provide a summary of it. By substituting (37) into (36) we get, on the one hand,(38) $c_2 = \dfrac{c_0}{2}$, and on the other hand the following recurrence relation(39) $c_{k+2} = \dfrac{c_k + c_{k-1}}{(k+1)(k+2)}$, $k = 1, 2, 3, \dots$ The procedure that yields (38) and (39) is similar to, but more complicated than, the one that yielded (21) and (23). We note that, unlike (23), (39) is a recurrence relation with three terms. By iterating (39) and choosing $c_1 = 0$, we obtain the coefficients(40) $c_1 = 0$, $c_2 = \dfrac{c_0}{2}$, $c_3 = \dfrac{c_0}{6}$, $c_4 = \dfrac{c_0}{24}$, $c_5 = \dfrac{c_0}{30}$, …, which are substituted into (37) to get(41) $y_1(x) = c_0\left(1 + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \frac{1}{24}x^4 + \frac{1}{30}x^5 + \dots\right)$.
Iterating a second time, by choosing $c_0 = 0$ we get the coefficients(42) $c_0 = 0$, $c_2 = 0$, $c_3 = \dfrac{c_1}{6}$, $c_4 = \dfrac{c_1}{12}$, $c_5 = \dfrac{c_1}{120}$, …; substituting these coefficients into (37), we obtain(43) $y_2(x) = c_1\left(x + \frac{1}{6}x^3 + \frac{1}{12}x^4 + \frac{1}{120}x^5 + \dots\right)$. From (41) and (43), the general solution of (36) is expressed as(44) $y(x) = c_0\left(1 + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \frac{1}{24}x^4 + \frac{1}{30}x^5 + \dots\right) + c_1\left(x + \frac{1}{6}x^3 + \frac{1}{12}x^4 + \frac{1}{120}x^5 + \dots\right)$. In accordance with Theorem 1, the series (41) and (43) converge for all $x$. As a matter of fact, this process is long and hard for most applications. ### 4.3. MIHEM Method In order to employ the proposed method, we express (36) as an integral equation:(45) $y(x) = \int_0^x\!\int_0^t (1+s)\, y(s)\, ds\, dt$.In accordance with MIHEM, we express (45) as follows:(46) $\sum_{n=0}^{\infty} p^n \nu_n(x) = p\int_0^x\!\int_0^t (1+s) \sum_{n=0}^{\infty} p^n \nu_n(s)\, ds\, dt$.Given that the differential equation to be solved is of second order, we propose as initial function(47) $y_0 = A + Bx$; in such a way that, after iterating (46),(48) $\nu_1(x) = \int_0^x\!\int_0^t (1+s)\, \nu_0(s)\, ds\, dt$; thus, by substituting (47) into (48) we get(49) $\nu_1(x) = \int_0^x\!\int_0^t (1+s)(A+Bs)\, ds\, dt$.After performing the elementary successive integrals in (49), we obtain(50) $\nu_1(x) = \dfrac{Ax^2}{2} + (A+B)\dfrac{x^3}{6} + \dfrac{Bx^4}{12}$.On the other hand,(51) $\nu_2(x) = \int_0^x\!\int_0^t (1+s)\, \nu_1(s)\, ds\, dt$.By substituting (50) into (51), and after performing basic integrals, we get(52) $\nu_2(x) = A\left(\frac{x^4}{24} + \frac{x^5}{30} + \frac{x^6}{180} + \dots\right) + B\left(\frac{x^5}{120} + \frac{x^6}{120} + \frac{x^7}{504} + \dots\right)$.After substituting (47), (50), and (52) into (14), we get(53) $y(x) = A\left(1 + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \frac{1}{24}x^4 + \frac{1}{30}x^5 + \dots\right) + B\left(x + \frac{1}{6}x^3 + \frac{1}{12}x^4 + \frac{1}{120}x^5 + \dots\right)$.We note that (44) and (53) are the same. Nevertheless, we emphasize the ease with which MIHEM obtains the same result, employing only two iterations and elementary integrals in a systematic way.Example 3. This example considers the Hermite differential equation, which is important for its applications in physics [21]. This example will employ the MIHEM method, not only to solve the Hermite differential equation but also to provide a deeper analysis of this problem. The differential equation to be solved is(54) $y'' - 2xy' + 2\alpha y = 0$, where $\alpha$ is a constant. In order to employ the proposed method, we express (54) as an integral equation:(55) $y(x) = \int_0^x\!\int_0^t \left[2s\, y'(s) - 2\alpha\, y(s)\right] ds\, dt$.
In accordance with MIHEM, we express (55) as follows:(56) $\sum_{n=0}^{\infty} p^n \nu_n(x) = p\int_0^x\!\int_0^t \left[2s \sum_{n=0}^{\infty} p^n \nu_n'(s) - 2\alpha \sum_{n=0}^{\infty} p^n \nu_n(s)\right] ds\, dt$. Given that (54) is a second order linear differential equation, we propose as initial function(57) $y_0 = A + Bx$, so that, iterating (56), we get the following elementary integral:(58) $\nu_1(x) = \int_0^x\!\int_0^t \left[2s\, \nu_0'(s) - 2\alpha\, \nu_0(s)\right] ds\, dt$; thus, by substituting (57) into (58) and performing the indicated integrations, we get(59) $\nu_1(x) = 2B(1-\alpha)\dfrac{x^3}{6} - \alpha A x^2$. On the other hand, the second iteration results in(60) $\nu_2(x) = \int_0^x\!\int_0^t \left[2s\, \nu_1'(s) - 2\alpha\, \nu_1(s)\right] ds\, dt$; from (59) and (60), after performing some elementary operations, we obtain(61) $\nu_2(x) = A\,\dfrac{2^2\alpha(\alpha-2)}{4!}x^4 + B\,\dfrac{2^2(\alpha-1)(\alpha-3)}{5!}x^5$. On the other hand,(62) $\nu_3(x) = \int_0^x\!\int_0^t \left[2s\, \nu_2'(s) - 2\alpha\, \nu_2(s)\right] ds\, dt$; substituting (61) into (62) yields(63) $\nu_3(x) = -A\,\dfrac{2^3\alpha(\alpha-2)(\alpha-4)}{6!}x^6 - B\,\dfrac{2^3(\alpha-1)(\alpha-3)(\alpha-5)}{7!}x^7$. Therefore, the general solution of (54) is obtained by substituting (57), (59), (61), and (63) into (14):(64) $y(x) = A\,y_1(x) + B\,y_2(x)$, where(65) $y_1(x) = 1 - \frac{2\alpha}{2!}x^2 + \frac{2^2\alpha(\alpha-2)}{4!}x^4 - \frac{2^3\alpha(\alpha-2)(\alpha-4)}{6!}x^6 + \dots$,(66) $y_2(x) = x - \frac{2(\alpha-1)}{3!}x^3 + \frac{2^2(\alpha-1)(\alpha-3)}{5!}x^5 - \frac{2^3(\alpha-1)(\alpha-3)(\alpha-5)}{7!}x^7 + \dots$ Given that this same result is obtained by the Power Series Method [20], then, in accordance with Theorem 1, both series (65) and (66) converge for all $x$. The case of non-negative integer $\alpha$ is particularly relevant for the following. For each such value of $\alpha$, one of these series terminates and results in a polynomial, while the other is an infinite series. From (65) and (66) it is clear that $y_1(x)$ will be a polynomial if $\alpha$ is even, and $y_2(x)$ will be a polynomial if $\alpha$ is odd. Taking successively the values $\alpha = 0, 1, 2, 3, 4, 5$, we obtain the polynomials:(67) $p_0(x) = 1$, $p_1(x) = x$, $p_2(x) = 1 - 2x^2$, $p_3(x) = x - \frac{2}{3}x^3$, $p_4(x) = 1 - 4x^2 + \frac{4}{3}x^4$, $p_5(x) = x - \frac{4}{3}x^3 + \frac{4}{15}x^5$. The so-called Hermite polynomials are obtained by considering constant multiples of the polynomials (67) with the property that the term containing the highest power of $x$ is of the form $2^n x^n$. From (67) it is possible to obtain some Hermite polynomials (denoted $H_n(x)$) as follows [21]:(68) $H_0(x) = 1$, $H_1(x) = 2x$, $H_2(x) = 4x^2 - 2$, $H_3(x) = 8x^3 - 12x$, $H_4(x) = 16x^4 - 48x^2 + 12$, $H_5(x) = 32x^5 - 160x^3 + 120x$, ….
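The termination of (65) and (66) for non-negative integer $\alpha$, and the rescaling to (68), can be checked mechanically. The sketch below is our own illustration (the function name is ours): substituting (2) into (54) gives the standard PSM recurrence $c_{k+2} = \frac{(2k - 2\alpha)\,c_k}{(k+1)(k+2)}$, which terminates at degree $\alpha$; rescaling the leading term to $2^\alpha x^\alpha$ then reproduces the Hermite polynomials:

```python
from fractions import Fraction as F

def hermite_series_poly(alpha):
    """Terminating series solution of y'' - 2xy' + 2*alpha*y = 0 for a
    non-negative integer alpha, as a coefficient list (constant term first),
    rescaled so the x^alpha coefficient equals 2^alpha (Hermite convention)."""
    c = [F(0)] * (alpha + 1)
    c[alpha % 2] = F(1)                     # start from y1 (even) or y2 (odd)
    for k in range(alpha % 2, alpha - 1, 2):
        # PSM recurrence obtained by substituting (2) into (54)
        c[k + 2] = F(2 * k - 2 * alpha) * c[k] / ((k + 1) * (k + 2))
    scale = F(2) ** alpha / c[alpha]        # force leading term 2^alpha x^alpha
    return [ci * scale for ci in c]
```

Applied to $\alpha = 0, \dots, 5$, this reproduces exactly the list (68), e.g. $\alpha = 5$ gives the coefficients of $32x^5 - 160x^3 + 120x$.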
In this case MIHEM not only provided a general solution for (54) but provided the adequate forms (65) and (66) from which easily let to identify the important Hermite polynomials (68) Next, we compare PSM and MIHEM methods with the end to study the following linear differential equation with non-polynomial coefficients.Example 4. Obtain the general solution for the following linear second order differential equation(69)y″+e−xy=0. Given that the pointx0=0 is an ordinary point of (69), we will employ the Power Series Method with the end to obtain a solution. In accordance with PSM we will assume a solution of the form(70)y=∑n=0∞anxn,after differentiating (70) we get(71)y′′x=∑n=2∞annn−1xn−2. On the other hand is well known the Taylor series ofe−x(72)e−x=1−x+12x2−16x3+124x4−1120x5+1720x6−…. We will get the first six terms of (70). The substitution of results (70)–(72) into the left hand side of (69) yields in(73)∑n=2∞annn−1xn−2+a5+a32−a26+a124−a0120−a4x5+a4+a22−a16+a024−a3x4+a3+a12−a06−a2x3+a2+a02−a1x2+a1−a0x+a0=0. After developing the sum in (73) and grouping in powers of x we get(74)a0+2a2x0+6a3+a1−a0x1+12a4+a2+a02−a1x2+20a5+a3+a12−a06−a2x3+30a6+a4+a22−a16+a024−a3x4+42a7+a5+a32−a26+a124−a0120−a4+…=0. After setting each power ofx equal to zero, we obtain.(75)a2=−a02,a3=16a0−a1,a4=a112,a5=120−a02−a13,…. Therefore, after substituting (75) into (70) we obtain the general solution(76)yx=a01−x22+x36−x540+…+a1x−x36+x412−x560+…. Although we simplified several steps, the above process is long and cumbersome for most applications. ### 4.4. 
MIHEM Method In order to employ the proposed method we express (69) as an integral equation:(77)yx=−∫0x∫0te−sysdsdt.In accordance with MIHEM we express (77) as follows(78)∑n=0∞pnνnx=−p∫0x∫0te−ps∑n=0∞pnνnsdsdt.In (78) we introduced the homotopy parameter into the non polynomial factor with the purpose to distribute the exponential function in the different iterations of the proposed method to ease the application of MIHEM.After substituting(79)e−px=1−px+12p2x2−16p3x3+124p4x4−1120p5x5+1720p6x6−…,into (78) and after equating the coefficients of identical powers of p, we get the following relations:(80)ν0x=A+Bx.(initial function).(81)ν1x=−∫0x∫0tν0sdsdt,thus, by substituting (80) into (81) we get(82)ν1x=−Ax22−Bx36,(83)ν2x=−∫0x∫0t−sν0s+ν1sdsdt,substituting (80) and (82) into (83) and after performing elementary integrals we get(84)ν2x=Ax36+B+A2x412+Bx5120.On the other hand the following iteration results in(85)ν3x=−∫0x∫0tν2s−sν1s+s22ν0sdsdt,from (80), (82), (84) and (85) we get(86)ν3x=−Ax424−x5202A3+B2−x630B4+A24−Bx75040.Iterating (78) we get the following elementary integral:(87)ν4x=−∫0x∫0tν3s−sν2s+s22ν1s−s36ν0sdsdt.Taking into account that we have proposed to obtain the first six terms of the series solution (70), we note that only the last integral of (87) contributes to this part of the solution, whereby we write (87) as follows(88)ν4x=∫0x∫0ts36ν0sdsdt+….After substituting (80) into (88) we obtain(89)ν4x=Ax5120+Bx6180+….Substituting (80), (82), (84), (86), and (89) into (14) we get(90)yx=A1−x22+x36−x540+…+Bx−x36+x412−x560+….We note that we obtained the same results (76) (PSM) and (90) (MIHEM) as it should be.On the other hand, we note that the fifth iteration of PSM provides, until the fifth power of the solution while the fourth iteration of MIHEM provides the same information and more of the following powers.Example 5. 
## 4.1. Power Series Method

In accordance with PSM we assume a solution of the form (see (2)) [19]:

$$y = \sum_{n=0}^{\infty} c_n x^n. \tag{16}$$

Next, we substitute (16) into (15) to obtain

$$\sum_{n=1}^{\infty} n c_n x^{n-1} + \sum_{n=0}^{\infty} 2 c_n x^{n+1} = 0. \tag{17}$$

In order to add the sums, we rewrite (17) as

$$c_1 + \sum_{n=2}^{\infty} n c_n x^{n-1} + \sum_{n=0}^{\infty} 2 c_n x^{n+1} = 0. \tag{18}$$

Following the power series algorithm, we change the dummy index: in the first sum we substitute $k = n-1$, and in the second $k = n+1$, so that (18) becomes

$$c_1 + \sum_{k=1}^{\infty} (k+1) c_{k+1} x^{k} + \sum_{k=1}^{\infty} 2 c_{k-1} x^{k} = 0. \tag{19}$$

Adding the sums, we rewrite (19) in the compact form

$$c_1 + \sum_{k=1}^{\infty} \left[(k+1) c_{k+1} + 2 c_{k-1}\right] x^{k} = 0; \tag{20}$$

thus, equating the coefficients of the different powers of $x$ to zero, we get

$$c_1 = 0, \tag{21}$$

$$(k+1) c_{k+1} + 2 c_{k-1} = 0, \quad k = 1, 2, 3, \ldots \tag{22}$$

Equation (22) is called the recursive relation, which is employed to determine the coefficients $c_k$.
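As a quick illustration (our own sketch, not part of the paper), the recursive relation (22) can be iterated programmatically. Solving (22) for $c_{k+1}$ and taking $c_0 = 1$ (it is arbitrary) reproduces the coefficients of the series solution; the function name `psm_coefficients` is our own choice.

```python
from fractions import Fraction

# Illustrative sketch: iterate the recursive relation
# (k+1) c_{k+1} + 2 c_{k-1} = 0 with c_0 = 1 (arbitrary) and c_1 = 0.
def psm_coefficients(n_terms):
    c = [Fraction(1), Fraction(0)]                    # c_0 and c_1
    for k in range(1, n_terms - 1):
        c.append(Fraction(-2, k + 1) * c[k - 1])      # c_{k+1} = -2 c_{k-1}/(k+1)
    return c

c = psm_coefficients(9)
# Odd coefficients vanish; the even ones alternate as (-1)^m / m!,
# i.e., the coefficients of exp(-x^2).
```

Running this gives `c[2] = -1`, `c[4] = 1/2`, `c[6] = -1/6`, `c[8] = 1/24`, the values obtained by hand below.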
Given that $k + 1 \neq 0$, (22) can be written as

$$c_{k+1} = -\frac{2 c_{k-1}}{k+1}, \quad k = 1, 2, 3, \ldots \tag{23}$$

Iterating (23), we obtain the values

$$k=1:\ c_2 = -c_0; \quad k=2:\ c_3 = 0; \quad k=3:\ c_4 = -\frac{2 c_2}{4} = \frac{c_0}{2!}; \quad k=4:\ c_5 = 0; \quad k=5:\ c_6 = -\frac{2 c_4}{6} = -\frac{c_0}{3!}; \quad k=6:\ c_7 = 0; \quad k=7:\ c_8 = -\frac{2 c_6}{8} = \frac{c_0}{4!}; \ \ldots \tag{24}$$

Therefore, after substituting (24) into (16) we get

$$y(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots = c_0 - c_0 x^2 + \frac{c_0}{2!} x^4 - \frac{c_0}{3!} x^6 + \frac{c_0}{4!} x^8 - \cdots, \tag{25}$$

or

$$y(x) = c_0 \left(1 - x^2 + \frac{x^4}{2!} - \frac{x^6}{3!} + \frac{x^8}{4!} - \cdots\right), \tag{26}$$

where $c_0$ is arbitrary; for that reason we have found the general solution of (15). As a matter of fact, we recognize the series (26) as $e^{-x^2}$, so the general solution can be written as

$$y(x) = c_0\, e^{-x^2}. \tag{27}$$

We note that this procedure is long and cumbersome even for a linear differential equation of first order.

## 4.2. MIHEM Method

In order to employ the proposed method we express (15) as an integral equation:

$$y(x) = -\int_0^x 2 s\, y(s)\, ds. \tag{28}$$

In accordance with MIHEM we express (28) as

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = -2p \int_0^x s \sum_{n=0}^{\infty} p^n \nu_n(s)\, ds. \tag{29}$$

Given that the differential equation is of first order, we propose as initial function the constant

$$\nu_0(x) = A. \tag{30}$$

Iterating (29) we get

$$\nu_1(x) = -2 \int_0^x s\, \nu_0(s)\, ds = -2 \int_0^x A s\, ds; \tag{31}$$

thus,

$$\nu_1(x) = -A x^2, \qquad \nu_2(x) = -2 \int_0^x s\, \nu_1(s)\, ds = 2 \int_0^x A s^3\, ds; \tag{32}$$

therefore,

$$\nu_2(x) = \frac{A x^4}{2}, \qquad \nu_3(x) = -2 \int_0^x s\, \nu_2(s)\, ds = -\int_0^x A s^5\, ds = -\frac{A x^6}{6}, \qquad \nu_4(x) = -2 \int_0^x s\, \nu_3(s)\, ds = \int_0^x \frac{A s^7}{3}\, ds; \tag{33}$$

integrating,

$$\nu_4(x) = \frac{A x^8}{24}. \tag{34}$$

After substituting (30)–(34) into (14) we get

$$y(x) = A\left(1 - x^2 + \frac{x^4}{2!} - \frac{x^6}{3!} + \frac{x^8}{4!} - \cdots\right); \tag{35}$$

given that $A$ is arbitrary, (35) is the general solution of (15); as a matter of fact, this is the same result obtained by the Power Series Method (27). We emphasize the ease with which MIHEM obtains (35).

Example 2. Obtain the general solution of the following linear second-order differential equation:

$$y'' - (1 + x) y = 0. \tag{36}$$

This example compares MIHEM and the Power Series Method [19] in detail. Given that the point $x_0 = 0$ is an ordinary point of (36), we first employ the Power Series Method. In accordance with PSM we assume a solution of the form (16):

$$y = \sum_{n=0}^{\infty} c_n x^n. \tag{37}$$
Since the PSM procedure is extremely long, we provide only a summary. Substituting (37) into (36) yields, on the one hand,

$$c_2 = \frac{c_0}{2}, \tag{38}$$

and on the other hand the recurrence relation

$$c_{k+2} = \frac{c_k + c_{k-1}}{(k+1)(k+2)}, \quad k = 1, 2, 3, \ldots \tag{39}$$

The procedure that yields (38) and (39) is similar to, but more complicated than, the one that led to (21) and (23). We note that, unlike (23), (39) is a recurrence relation with three terms. Iterating (39) and choosing $c_1 = 0$ we obtain the coefficients

$$c_1 = 0, \quad c_2 = \frac{c_0}{2}, \quad c_3 = \frac{c_0}{6}, \quad c_4 = \frac{c_0}{24}, \quad c_5 = \frac{c_0}{30}, \ \ldots, \tag{40}$$

which, substituted into (37), give

$$y_1(x) = c_0\left(1 + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{30} + \cdots\right). \tag{41}$$

Iterating a second time and choosing $c_0 = 0$ (so that $c_2 = 0$ by (38)) we get the coefficients

$$c_0 = 0, \quad c_2 = 0, \quad c_3 = \frac{c_1}{6}, \quad c_4 = \frac{c_1}{12}, \quad c_5 = \frac{c_1}{120}, \ \ldots; \tag{42}$$

substituting these coefficients into (37) we obtain

$$y_2(x) = c_1\left(x + \frac{x^3}{6} + \frac{x^4}{12} + \frac{x^5}{120} + \cdots\right). \tag{43}$$

From (41) and (43), the general solution of (36) is expressed as

$$y(x) = c_0\left(1 + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{30} + \cdots\right) + c_1\left(x + \frac{x^3}{6} + \frac{x^4}{12} + \frac{x^5}{120} + \cdots\right). \tag{44}$$

In accordance with Theorem 1, the series (41) and (43) converge for all $x$. As a matter of fact, this process is long and laborious for most applications.

## 4.3. MIHEM Method

In order to employ the proposed method we express (36) as an integral equation:

$$y(x) = \int_0^x \int_0^t (1 + s)\, y(s)\, ds\, dt. \tag{45}$$

In accordance with MIHEM we express (45) as

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = p \int_0^x \int_0^t (1 + s) \sum_{n=0}^{\infty} p^n \nu_n(s)\, ds\, dt. \tag{46}$$

Given that the differential equation to solve is of second order, we propose as initial function

$$y_0 = A + Bx. \tag{47}$$

Iterating (46) gives

$$\nu_1(x) = \int_0^x \int_0^t (1 + s)\, \nu_0(s)\, ds\, dt; \tag{48}$$

thus, substituting (47) into (48),

$$\nu_1(x) = \int_0^x \int_0^t (1 + s)(A + Bs)\, ds\, dt. \tag{49}$$

After performing the elementary successive integrals in (49) we obtain

$$\nu_1(x) = \frac{A x^2}{2} + \frac{(A + B) x^3}{6} + \frac{B x^4}{12}. \tag{50}$$

On the other hand,

$$\nu_2(x) = \int_0^x \int_0^t (1 + s)\, \nu_1(s)\, ds\, dt. \tag{51}$$

Substituting (50) into (51) and performing basic integrals,

$$\nu_2(x) = A\left(\frac{x^4}{24} + \frac{x^5}{30} + \frac{x^6}{180} + \cdots\right) + B\left(\frac{x^5}{120} + \frac{x^6}{120} + \frac{x^7}{504} + \cdots\right). \tag{52}$$

After substituting (47), (50), and (52) into (14), we get

$$y(x) = A\left(1 + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{30} + \cdots\right) + B\left(x + \frac{x^3}{6} + \frac{x^4}{12} + \frac{x^5}{120} + \cdots\right). \tag{53}$$

We note that (44) and (53) are the same.
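For readers who want to check the iteration numerically, the double-integral steps (48) and (51) are easy to script. The following sketch is our own illustration using the sympy library (the paper provides no code); `mihem_step` is a hypothetical helper name.

```python
import sympy as sp

x, s, t, A, B = sp.symbols('x s t A B')

def mihem_step(prev):
    # nu_{k+1}(x) = int_0^x int_0^t (1+s) nu_k(s) ds dt, as in (48) and (51)
    inner = sp.integrate((1 + s) * prev.subs(x, s), (s, 0, t))
    return sp.integrate(inner, (t, 0, x))

nu0 = A + B * x                          # initial function (47)
nu1 = mihem_step(nu0)                    # reproduces (50)
nu2 = mihem_step(nu1)                    # reproduces (52)
y_partial = sp.expand(nu0 + nu1 + nu2)   # partial sum of (14); compare with (53)
```

Each iteration is a plain polynomial integration, which is exactly why the method stays systematic as the order of the truncation grows.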
Nevertheless, we emphasize the ease with which MIHEM obtains the same result, employing only two iterations and elementary integrals in a systematic way.

Example 3. This example considers the Hermite differential equation, which is important for its applications in physics [21]. We employ MIHEM not only to solve the Hermite equation but also to provide a deeper analysis of the problem. The differential equation to solve is

$$y'' - 2x y' + 2\alpha y = 0, \tag{54}$$

where $\alpha$ is a constant. In order to employ the proposed method we express (54) as an integral equation:

$$y(x) = \int_0^x \int_0^t \left[2 s\, y'(s) - 2\alpha\, y(s)\right] ds\, dt. \tag{55}$$

In accordance with MIHEM we express (55) as

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = p \int_0^x \int_0^t \left[2 s \sum_{n=0}^{\infty} p^n \nu_n'(s) - 2\alpha \sum_{n=0}^{\infty} p^n \nu_n(s)\right] ds\, dt. \tag{56}$$

Given that (54) is a second-order linear differential equation, we propose as initial function

$$y_0 = A + Bx. \tag{57}$$

Iterating (56) we get the elementary integral

$$\nu_1(x) = \int_0^x \int_0^t \left[2 s\, \nu_0'(s) - 2\alpha\, \nu_0(s)\right] ds\, dt; \tag{58}$$

thus, substituting (57) into (58) and performing the indicated integrations,

$$\nu_1(x) = \frac{2 B (1 - \alpha) x^3}{6} - \alpha A x^2. \tag{59}$$

The second iteration gives

$$\nu_2(x) = \int_0^x \int_0^t \left[2 s\, \nu_1'(s) - 2\alpha\, \nu_1(s)\right] ds\, dt; \tag{60}$$

from (59) and (60), after some elementary operations, we obtain

$$\nu_2(x) = A\, \frac{2^2 \alpha (\alpha - 2)}{4!}\, x^4 + B\, \frac{2^2 (\alpha - 1)(\alpha - 3)}{5!}\, x^5. \tag{61}$$

On the other hand,

$$\nu_3(x) = \int_0^x \int_0^t \left[2 s\, \nu_2'(s) - 2\alpha\, \nu_2(s)\right] ds\, dt; \tag{62}$$

substituting (61) into (62) yields

$$\nu_3(x) = -A\, \frac{2^3 \alpha (\alpha - 2)(\alpha - 4)}{6!}\, x^6 - B\, \frac{2^3 (\alpha - 1)(\alpha - 3)(\alpha - 5)}{7!}\, x^7. \tag{63}$$

Therefore, the general solution of (54) is obtained by substituting (57), (59), (61), and (63) into (14):

$$y(x) = A\, y_1(x) + B\, y_2(x), \tag{64}$$

where

$$y_1(x) = 1 - \frac{2\alpha}{2!} x^2 + \frac{2^2 \alpha (\alpha - 2)}{4!} x^4 - \frac{2^3 \alpha (\alpha - 2)(\alpha - 4)}{6!} x^6 + \cdots, \tag{65}$$

$$y_2(x) = x - \frac{2 (\alpha - 1)}{3!} x^3 + \frac{2^2 (\alpha - 1)(\alpha - 3)}{5!} x^5 - \frac{2^3 (\alpha - 1)(\alpha - 3)(\alpha - 5)}{7!} x^7 + \cdots. \tag{66}$$

Given that this same result is obtained by the Power Series Method [20], then in accordance with Theorem 1 both series (65) and (66) converge for all $x$. The case of non-negative integer $\alpha$ is particularly relevant for what follows: for each such value of $\alpha$, one of these series terminates in a polynomial while the other remains an infinite series.
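The termination property just described can be verified symbolically. The sketch below is our own illustration with sympy (not part of the paper); `y1_partial` is a hypothetical helper that builds partial sums of the even series (65) and checks that for even $\alpha$ they terminate in polynomials proportional to Hermite polynomials.

```python
import sympy as sp

x = sp.symbols('x')

def y1_partial(alpha, n_terms=8):
    # Partial sum of (65): 1 - (2a/2!) x^2 + 2^2 a(a-2)/4! x^4 - ...
    coeff, total = sp.Integer(1), sp.Integer(1)
    for k in range(1, n_terms):
        coeff *= -2 * (alpha - 2 * (k - 1))        # next factor -2(alpha - 2(k-1))
        total += coeff * x**(2 * k) / sp.factorial(2 * k)
    return sp.expand(total)

p2 = y1_partial(2)   # terminates: 1 - 2x^2, proportional to H_2(x) = 4x^2 - 2
p4 = y1_partial(4)   # terminates: 1 - 4x^2 + 4x^4/3, proportional to H_4(x)
```

The factor of proportionality is exactly the normalization that makes the leading term $2^n x^n$, as discussed below.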
From (65) and (66) it is clear that $y_1(x)$ is a polynomial if $\alpha$ is even, and $y_2(x)$ is a polynomial if $\alpha$ is odd. Taking successively the values $\alpha = 0, 1, 2, 3, 4, 5$, we obtain the polynomials

$$p_0(x) = 1, \quad p_1(x) = x, \quad p_2(x) = 1 - 2x^2, \quad p_3(x) = x - \frac{2}{3} x^3, \quad p_4(x) = 1 - 4x^2 + \frac{4}{3} x^4, \quad p_5(x) = x - \frac{4}{3} x^3 + \frac{4}{15} x^5. \tag{67}$$

The so-called Hermite polynomials are obtained by taking constant multiples of the polynomials (67) such that the term containing the highest power of $x$ is of the form $2^n x^n$. From (67) some Hermite polynomials (denoted $H_n(x)$) follow [21]:

$$H_0(x) = 1, \quad H_1(x) = 2x, \quad H_2(x) = 4x^2 - 2, \quad H_3(x) = 8x^3 - 12x, \quad H_4(x) = 16x^4 - 48x^2 + 12, \quad H_5(x) = 32x^5 - 160x^3 + 120x, \ \ldots \tag{68}$$

In this case MIHEM not only provided a general solution of (54) but also produced the forms (65) and (66), from which the important Hermite polynomials (68) are easily identified. Next, we compare the PSM and MIHEM methods on a linear differential equation with non-polynomial coefficients.

Example 4. Obtain the general solution of the following linear second-order differential equation:

$$y'' + e^{-x} y = 0. \tag{69}$$

Given that the point $x_0 = 0$ is an ordinary point of (69), we first employ the Power Series Method. In accordance with PSM we assume a solution of the form

$$y = \sum_{n=0}^{\infty} a_n x^n; \tag{70}$$

differentiating (70) twice gives

$$y''(x) = \sum_{n=2}^{\infty} a_n n (n - 1) x^{n-2}. \tag{71}$$

On the other hand, the Taylor series of $e^{-x}$ is well known:

$$e^{-x} = 1 - x + \frac{x^2}{2} - \frac{x^3}{6} + \frac{x^4}{24} - \frac{x^5}{120} + \frac{x^6}{720} - \cdots. \tag{72}$$

We will compute the first six terms of (70). Substituting (70)–(72) into the left-hand side of (69) yields

$$\sum_{n=2}^{\infty} a_n n(n-1) x^{n-2} + \left(a_5 + \frac{a_3}{2} - \frac{a_2}{6} + \frac{a_1}{24} - \frac{a_0}{120} - a_4\right) x^5 + \left(a_4 + \frac{a_2}{2} - \frac{a_1}{6} + \frac{a_0}{24} - a_3\right) x^4 + \left(a_3 + \frac{a_1}{2} - \frac{a_0}{6} - a_2\right) x^3 + \left(a_2 + \frac{a_0}{2} - a_1\right) x^2 + (a_1 - a_0) x + a_0 = 0. \tag{73}$$

After developing the sum in (73) and grouping in powers of $x$ we get

$$(a_0 + 2 a_2) + (6 a_3 + a_1 - a_0) x + \left(12 a_4 + a_2 + \frac{a_0}{2} - a_1\right) x^2 + \left(20 a_5 + a_3 + \frac{a_1}{2} - \frac{a_0}{6} - a_2\right) x^3 + \left(30 a_6 + a_4 + \frac{a_2}{2} - \frac{a_1}{6} + \frac{a_0}{24} - a_3\right) x^4 + \left(42 a_7 + a_5 + \frac{a_3}{2} - \frac{a_2}{6} + \frac{a_1}{24} - \frac{a_0}{120} - a_4\right) x^5 + \cdots = 0. \tag{74}$$

Setting the coefficient of each power of $x$ equal to zero, we obtain

$$a_2 = -\frac{a_0}{2}, \quad a_3 = \frac{1}{6}(a_0 - a_1), \quad a_4 = \frac{a_1}{12}, \quad a_5 = \frac{1}{20}\left(-\frac{a_0}{2} - \frac{a_1}{3}\right), \ \ldots \tag{75}$$
Therefore, after substituting (75) into (70) we obtain the general solution

$$y(x) = a_0\left(1 - \frac{x^2}{2} + \frac{x^3}{6} - \frac{x^5}{40} + \cdots\right) + a_1\left(x - \frac{x^3}{6} + \frac{x^4}{12} - \frac{x^5}{60} + \cdots\right). \tag{76}$$

Although we simplified several steps, the above process is long and cumbersome for most applications.

## 4.4. MIHEM Method

In order to employ the proposed method we express (69) as an integral equation:

$$y(x) = -\int_0^x \int_0^t e^{-s}\, y(s)\, ds\, dt. \tag{77}$$

In accordance with MIHEM we express (77) as

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = -p \int_0^x \int_0^t e^{-ps} \sum_{n=0}^{\infty} p^n \nu_n(s)\, ds\, dt. \tag{78}$$

In (78) we introduced the homotopy parameter into the non-polynomial factor in order to distribute the exponential function among the different iterations, which eases the application of MIHEM. After substituting

$$e^{-px} = 1 - px + \frac{p^2 x^2}{2} - \frac{p^3 x^3}{6} + \frac{p^4 x^4}{24} - \frac{p^5 x^5}{120} + \frac{p^6 x^6}{720} - \cdots \tag{79}$$

into (78) and equating the coefficients of identical powers of $p$, we get the following relations:

$$\nu_0(x) = A + Bx \quad \text{(initial function)}, \tag{80}$$

$$\nu_1(x) = -\int_0^x \int_0^t \nu_0(s)\, ds\, dt; \tag{81}$$

thus, substituting (80) into (81),

$$\nu_1(x) = -\frac{A x^2}{2} - \frac{B x^3}{6}. \tag{82}$$

Next,

$$\nu_2(x) = -\int_0^x \int_0^t \left[-s\, \nu_0(s) + \nu_1(s)\right] ds\, dt; \tag{83}$$

substituting (80) and (82) into (83) and performing elementary integrals,

$$\nu_2(x) = \frac{A x^3}{6} + \left(B + \frac{A}{2}\right)\frac{x^4}{12} + \frac{B x^5}{120}. \tag{84}$$

The following iteration gives

$$\nu_3(x) = -\int_0^x \int_0^t \left[\nu_2(s) - s\, \nu_1(s) + \frac{s^2}{2}\, \nu_0(s)\right] ds\, dt; \tag{85}$$

from (80), (82), (84), and (85) we get

$$\nu_3(x) = -\frac{A x^4}{24} - \frac{x^5}{20}\left(\frac{2A}{3} + \frac{B}{2}\right) - \frac{x^6}{30}\left(\frac{B}{4} + \frac{A}{24}\right) - \frac{B x^7}{5040}. \tag{86}$$

Iterating (78) once more, we get the elementary integral

$$\nu_4(x) = -\int_0^x \int_0^t \left[\nu_3(s) - s\, \nu_2(s) + \frac{s^2}{2}\, \nu_1(s) - \frac{s^3}{6}\, \nu_0(s)\right] ds\, dt. \tag{87}$$

Since we proposed to obtain the first six terms of the series solution (70), only the last integral of (87) contributes to this part of the solution, so we write (87) as

$$\nu_4(x) = \int_0^x \int_0^t \frac{s^3}{6}\, \nu_0(s)\, ds\, dt + \cdots. \tag{88}$$

Substituting (80) into (88) we obtain

$$\nu_4(x) = \frac{A x^5}{120} + \frac{B x^6}{180} + \cdots. \tag{89}$$

Substituting (80), (82), (84), (86), and (89) into (14) we get

$$y(x) = A\left(1 - \frac{x^2}{2} + \frac{x^3}{6} - \frac{x^5}{40} + \cdots\right) + B\left(x - \frac{x^3}{6} + \frac{x^4}{12} - \frac{x^5}{60} + \cdots\right). \tag{90}$$

We note that we obtained the same results, (76) (PSM) and (90) (MIHEM), as it should be. On the other hand, the fifth iteration of PSM provides the solution only up to the fifth power, while the fourth iteration
of MIHEM provides the same information plus part of the following powers.

Example 5. This example provides the general solution of the following linear second-order inhomogeneous differential equation by using MIHEM:

$$y'' - (2 + 4x^2)\, y + x (2 + 4x^2) = 0. \tag{91}$$

To employ the proposed method, we express (91) as the integral equation

$$y(x) = \int_0^x \int_0^t \left[(2 + 4s^2)\, y(s) - s (2 + 4s^2)\right] ds\, dt. \tag{92}$$

In accordance with the proposed method we express (92) as

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = p \int_0^x \int_0^t \left[(2 + 4s^2) \sum_{n=0}^{\infty} p^n \nu_n(s) - s (2 + 4s^2)\right] ds\, dt. \tag{93}$$

Next, we propose as initial function

$$y_0 = A + Bx, \tag{94}$$

so that from (93) we obtain

$$\nu_1(x) = \int_0^x \int_0^t \left[(2 + 4s^2)\, \nu_0(s) - s (2 + 4s^2)\right] ds\, dt. \tag{95}$$

Thus, after substituting (94) into the above expression we get

$$\nu_1(x) = \int_0^x \int_0^t \left[(2 + 4s^2)(A + Bs) - (2s + 4s^3)\right] ds\, dt. \tag{96}$$

After performing elementary operations we obtain

$$\nu_1(x) = A x^2 + \frac{(B - 1) x^3}{3} + \frac{A x^4}{3} + \frac{(B - 1) x^5}{5}. \tag{97}$$

In the same way,

$$\nu_2(x) = \int_0^x \int_0^t (2 + 4s^2)\, \nu_1(s)\, ds\, dt. \tag{98}$$

After substituting (97) into (98) we get

$$\nu_2(x) = \int_0^x \int_0^t (2 + 4s^2)\left[A s^2 + \frac{(B - 1) s^3}{3} + \frac{A s^4}{3} + \frac{(B - 1) s^5}{5}\right] ds\, dt. \tag{99}$$

After performing elementary operations we get

$$\nu_2(x) = \frac{A x^4}{6} + \frac{(B - 1) x^5}{30} + \left(\frac{2A}{3} + 4A\right)\frac{x^6}{30} + \left(\frac{2(B - 1)}{5} + \frac{4(B - 1)}{3}\right)\frac{x^7}{42} + \frac{A x^8}{42} + \frac{(B - 1) x^9}{90} + \cdots. \tag{100}$$

From the above results, an approximate solution of (91) is

$$y(x) = A\left(1 + x^2 + \frac{x^4}{2} + \frac{7 x^6}{45} + \frac{x^8}{42} + \cdots\right) + Bx + \frac{(B - 1) x^3}{3} + \frac{7 (B - 1) x^5}{30} + \frac{13 (B - 1) x^7}{315} + \frac{(B - 1) x^9}{90} + \cdots. \tag{101}$$

## 5. Discussion

This work proposed the MIHEM method as a useful tool for finding power series solutions of linear ordinary differential equations about ordinary points. Although our examples show that the Power Series Method and the proposed MIHEM provide the same results under the conditions mentioned above, the Appendix is dedicated to showing the equivalence of the two methods for the general case (1) of linear differential equations, assuming that the coefficient functions $p(x)$ and $q(x)$ are analytic at $x = x_0$ (this equivalence can also be inferred from the theoretical discussion just below (14)).
Although the Power Series Method is the classical method for solving linear equations about ordinary points, it is usually long and cumbersome, and it requires concentration to avoid errors when handling sums and their indices. MIHEM was introduced with the same objective, but it is a method systematically based on elementary integrals, always beginning from the same initial function for linear differential equations of the same order. In fact, from the proposed problems it is clear that, besides elementary algebra, the only other systematically employed mathematical result is the basic integral $\int x^n\, dx = x^{n+1}/(n+1)$.

Example 1 solved a linear first-order differential equation; after comparing both methods, it became clear that MIHEM is more direct and easier to use than the traditional PSM, which is often cumbersome and difficult to apply. The PSM algorithm requires, among other things, changing the dummy index, adding sums, and obtaining the so-called recursive relation used to determine the $c_k$ coefficients. Conversely, MIHEM obtains the same results with less effort, since its procedure is systematically based on solving basic integrals. Example 2 solved a second-order differential equation using both methods. For this case PSM was even more cumbersome and long than in the first case study, since its iterations were performed using a recurrence relation with three terms. In general terms, PSM requires more work as the order of the differential equation to solve increases. The third case study considered the Hermite differential equation, which is important for its applications in quantum mechanics.
This example employed MIHEM not only to solve the Hermite differential equation but also to provide a deeper analysis of the problem; its application yielded the so-called Hermite polynomials, which are important functions of mathematical physics [19, 20]. Example 4 provided a case with non-polynomial coefficients. PSM directly multiplied the series (70) and (72) in order to get (74) and a system of equations whose solution is given by (75); although we simplified several steps, the process is long and cumbersome for most applications. On the other hand, in (78) the homotopy parameter was introduced into the non-polynomial factor $e^{-x}$ in order to distribute the exponential function among the different iterations of the proposed method, easing the application of MIHEM. The rest of the procedure is similar to that of the previous examples, that is, based on basic integrals. Given that PSM and MIHEM provide the same results, the proposal of this work is to take advantage of the ease of the proposed method in the search for solutions of linear differential equations about ordinary points. Finally, Example 5 shows the application of MIHEM to an inhomogeneous problem; the obtained approximate solution shows the ease with which MIHEM handles such problems.

We emphasize the relevance of the proposed MIHEM for the series solution of linear ordinary differential equations. The method is convenient for applications not only because it is based on solving elementary integrals but also because it is wholly systematic; for instance, the method always begins with the same initial function for all equations of the same order. As a matter of fact, given a linear problem, MIHEM always expresses it in terms of the integral equation (6) and then in terms of the iterative process (10). Our examples show that this procedure is straightforward and simple.
PSM, by contrast, is usually cumbersome and in general requires obtaining the so-called recurrence relation from which the coefficients of the series solution are calculated. From (22), (39), and Example 4 we note that this step often cannot be standardized in advance: recurrence (22) directly relates two $c$ coefficients, (39) relates three of them, and Example 4 did not even admit a recurrence relation.

From the above, it is clear that MIHEM is a powerful method that indeed eases the obtaining of series solutions about ordinary points for linear ordinary differential equations; moreover, [18] considered the application of the proposed method, slightly modified, with the purpose of obtaining exact and approximate solutions of nonlinear differential equations. In a sequence, the next section will show, as future work, the possibility of employing MIHEM to solve linear problems about singular points. Thus, the proposed method is not only a powerful research tool but also a method that can be recommended for implementation in university courses from the undergraduate level onwards.

## 6. Concluding Remarks

Although MIHEM was introduced for obtaining power series solutions of linear ordinary differential equations about ordinary points, a natural continuation, as future work, would be to adapt the method to linear ordinary differential equations about singular points, at least for the case of regular singular points (see Section 3 and references [19, 20]). Although we are not yet in a position to justify the use of MIHEM for problems about singular points, we present two case studies in which the proposed method works satisfactorily even though the conditions ensuring its validity are not satisfied.

Example 6.
Obtain the general solution of the following linear first-order differential equation about the singular point $x = 0$:

$$x y' - y = 0. \tag{102}$$

Given that (102) can be rewritten as

$$y' = \frac{y}{x}, \tag{103}$$

then, in accordance with the MIHEM algorithm, we convert (103) into the integral equation

$$y = \int_0^x \frac{y(s)}{s}\, ds. \tag{104}$$

Introducing the homotopy process described in Section 4, we get

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = p \int_0^x \frac{1}{s} \sum_{n=0}^{\infty} p^n \nu_n(s)\, ds. \tag{105}$$

In accordance with the proposed method, we would propose as initial function

$$y_0(x) = A, \tag{106}$$

but this choice would yield a solution singular at $x = 0$, due to the presence of $\ln x$ in the result. To avoid this, we propose instead the initial function

$$\nu_0(x) = Ax, \tag{107}$$

so that

$$\nu_1(x) = \int_0^x \frac{\nu_0(s)}{s}\, ds. \tag{108}$$

After substituting (107) into (108) we obtain

$$\nu_1(x) = Ax; \tag{109}$$

in the same way,

$$\nu_2(x) = Ax, \tag{110}$$

$$\nu_3(x) = Ax; \tag{111}$$

thus, after $n$ iterations,

$$\nu_n(x) = Ax. \tag{112}$$

Substituting (107) and (109)–(112) into (14) we obtain

$$y(x) = Bx, \tag{113}$$

where we defined the arbitrary constant $B = nA$. By direct substitution, we verify that (113) is the general solution of the linear equation (102).

Example 7. Obtain the general solution of the following linear second-order inhomogeneous differential equation about the regular singular point $x = 0$:

$$y'' - \frac{2}{x^2}\, y - 8x = 0. \tag{114}$$

As usual, we express (114) in terms of the integral equation

$$y(x) = \int_0^x \int_0^t \left[\frac{2}{s^2}\, y(s) + 8s\right] ds\, dt. \tag{115}$$

In agreement with the proposed method we express (115) as

$$\sum_{n=0}^{\infty} p^n \nu_n(x) = p \int_0^x \int_0^t \left[\frac{2}{s^2} \sum_{n=0}^{\infty} p^n \nu_n(s) + 8s\right] ds\, dt. \tag{116}$$

In the same way, in order to avoid a solution singular at $x = 0$, we consider the initial function (instead of (12))

$$\nu_0(x) = A x^2 + B x^3, \tag{117}$$

so that from (116) we obtain

$$\nu_1(x) = \int_0^x \int_0^t \left[\frac{2}{s^2}\, \nu_0(s) + 8s\right] ds\, dt. \tag{118}$$

Thus, from (117) and (118), after performing elementary operations,

$$\nu_1(x) = A x^2 + \frac{2B + 8}{6}\, x^3. \tag{119}$$

In the same way it is straightforward to show that

$$\nu_2(x) = A x^2 + \frac{2B + 8}{36}\, x^3; \tag{120}$$

as a matter of fact, the only difference between the successive iterations and (119) is the factor acting on the cubic term, so the $n$th partial sum can be expressed from the above iterations as

$$y(x) = n A x^2 + \left(B + \frac{2B}{6} + \frac{2B}{36} + \cdots + \frac{8}{6} + \frac{8}{36} + \cdots\right) x^3. \tag{121}$$
Expressing the above result in terms of just two constants,

$$y(x) = A' x^2 + C x^3, \tag{122}$$

where $A' = nA$ and $C = B + 2B/6 + 2B/36 + \cdots + 8/6 + 8/36 + \cdots$. Since (114) is a linear inhomogeneous equation, $C$ cannot remain arbitrary [19, 20]; to see how this occurs, it is enough to substitute (122) into the differential equation, which fixes $C = 2$. Substituting this value into (122) we obtain

$$y(x) = A' x^2 + 2 x^3. \tag{123}$$

It is easy to verify that $y_h(x) = A' x^2$ solves the homogeneous part of (114) and that (123) solves (114) itself. The general solution of a homogeneous second-order linear differential equation must contain two arbitrary constants, whereas (123) shows only $y_h(x) = A' x^2$; the reason is that the choice of initial function (117) eliminated the singular solution. To see this, recall from the theory of linear differential equations that, given one known solution of the linear equation (1), a second solution can be obtained [19, 20]. Therefore, if $y_1 = x^2$, a second solution is given by $y_2 = x^2 \int dx/x^4 = -1/(3x)$, and the general solution of the homogeneous part is $y_h(x) = A' x^2 - D/(3x)$ for another arbitrary constant $D$. The obtained solution omitted the singular part ($D = 0$) because we proposed an initial function designed to avoid singular solutions, but it is clear that the full solution can always be recovered. From these examples, it is conjectured that the MIHEM method has the potential to handle singular points in general. Although MIHEM has not been sufficiently tested as a general tool for solving linear equations about singular points, note that, conversely, PSM is totally inadequate for obtaining solutions of (102) and (114) about $x = 0$, because for these equations it is a singular point. Instead of PSM one would have to employ the Frobenius method explained earlier [19, 20], which is even more cumbersome and long than PSM.
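The claims above are easy to verify symbolically. The following sketch is our own check using sympy (not part of the paper); `Ap` stands in for the constant $A'$, and the helper names are hypothetical.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
Ap, D = sp.symbols('Ap D')

def lhs_114(y):
    # Left-hand side of (114): y'' - (2/x^2) y - 8x
    return sp.simplify(sp.diff(y, x, 2) - 2 * y / x**2 - 8 * x)

def lhs_hom(y):
    # Homogeneous part only: y'' - (2/x^2) y
    return sp.simplify(sp.diff(y, x, 2) - 2 * y / x**2)

r_full = lhs_114(Ap * x**2 + 2 * x**3)       # residual of (123); vanishes
r_hom = lhs_hom(Ap * x**2 - D / (3 * x))     # full homogeneous solution; vanishes
```

Both residuals simplify to zero, confirming (123) and the recovered singular solution $-D/(3x)$.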
In conclusion, this work introduced the Modified Integral Homotopy Expansive Method (MIHEM), which showed potential for finding power series solutions of linear ordinary differential equations about ordinary points. A relevant difference from PSM is the versatility of the proposed method, shown in five case studies. The first advantage of the method is that MIHEM requires only elementary integrals. The second is that the initial function is always $A + Bx$ for linear second-order differential equations and $A$ for linear first-order differential equations, which helps to systematize the procedure. Finally, once the series (14) is obtained, we factor out $A$ and $B$ in order to get two linearly independent solutions and hence a general solution. The simplicity of the proposed method therefore does not make it less effective; on the contrary, we emphasize its convenience in practical applications. MIHEM is proposed as an effective and easy-to-use method for linear ordinary differential equations about ordinary points. As a matter of fact, we noted that the proposed MIHEM is also a potentially useful method for solving ordinary differential equations about singular points, although in general this subject should be part of future work.

---

*Source: 1016251-2022-04-28.xml*
---

# Modified Integral Homotopy Expansive Method to Find Power Series Solutions of Linear Ordinary Differential Equations about Ordinary Points

**Authors:** Uriel Filobello-Nino; Hector Vazquez-Leal; Jesus Huerta-Chua; Victor Manuel Jimenez-Fernandez; Agustin L. Herrera-May; Darwin Mayorga-Cruz

**Journal:** Discrete Dynamics in Nature and Society (2022)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1016251

**Source file:** 1016251-2022-04-28.xml
---

## Abstract

This article presents the Modified Integral Homotopy Expansive Method (MIHEM), which is utilized to find power series solutions of linear ordinary differential equations about ordinary points. This method is a modification of the integral homotopy expansive method. The proposal consists in providing a versatile, easy-to-employ, and systematic method. Thus, we will see that MIHEM requires only elementary integrations and that the initial function is always the same for linear ordinary differential equations of the same order, which helps to ease the procedure. It is therefore expected that this article contributes to changing the idea that an effective method has to be long and difficult, as is the case with the Power Series Method (PSM). The method expresses a differential equation as an integral equation, and the integrand of that equation in terms of a homotopy. We will see along this work the convenience of this procedure.

---

## Body

## 1. Introduction

The subject of nonlinear differential equations is important because many physical phenomena are described by this type of equation; however, finding solutions for such equations is usually a difficult task. Hence the importance of developing approximate methods to find their solutions, such as variational approaches [1, 2], the tanh method [3], exp-function [4, 5], Adomian's decomposition method [6, 7], parameter expansion [8], the homotopy perturbation method [9–12], the homotopy analysis method [13, 14], perturbation methods [15–17], and the integral homotopy expansive method [18], among many others. In general, these equations are still an open research topic.

In contrast, finding solutions of linear differential equations with variable coefficients is considered a closed subject, because its theory was established a long time ago.
Most of the time, the methods for finding solutions of these equations are presented in terms of power series, because it is difficult to find exact solutions. We will see that those solutions are established about both ordinary points and singular points. As a matter of fact, the case of the so-called regular singular points is important because a Frobenius series solution always exists for these equations, despite the difficulties that singular points carry [19–21]. Most of the time, the search for solutions of linear differential equations is concerned with obtaining approximate solutions rather than with an alternative way to obtain power series solutions beyond the basic PSM. Thus, [22] obtained an exact solution for the two cases $n = 0$ and $n = 1$ of a parameter of the so-called Lane-Emden singular equation (LEE), which is related to a great number of phenomena in physics, such as stellar structure. We note that for the above-mentioned cases LEE is a linear ordinary differential equation with variable coefficients. Although [22] was originally conceived to find approximations for nonlinear differential equations, it obtained solutions about the regular singular point $x = 0$. We note that for values different from $n = 0$, $1$, and $5$, the Lane-Emden equation is nonlinear. On the other hand, [23] proposed the Adomian decomposition method and the differential transform method in order to solve the Lane-Emden equation once more for the same values $n = 0$ and $n = 1$. Although these methods solved the problem again, they were long in comparison with [22]. As a matter of fact, the Adomian decomposition method and the homotopy analysis method (HAM), although effective, are usually long and cumbersome. On the other hand, [13] proposed the HAM method to find analytical solutions, with good precision, for the case of linear systems of partial differential equations.
In addition, [24] found an analytical approximate solution for the important Bessel linear differential equation of order zero by using the exponentially fitted collocation approximation method. On the other hand, [25] proposed the spectral method to solve linear second-order differential equations with constant coefficients, and [9] solved, among others, the important case of linear Euler-Lagrange equations. In a sequel, [12] also solved variational problems by using the Laplace Transform Homotopy Perturbation Method; besides nonlinear problems, it found solutions for linear problems about regular singular points. In all these cases, the goal was to obtain a solution for the differential equation at hand, not to provide an alternative method for obtaining power series solutions. In particular, this article is concerned with obtaining the general solution of a linear ODE about an ordinary point through the use of power series. Although [26] employed HPM and nonlinearities distribution HPM with the objective of finding power series solutions for linear ODEs, that article focused on initial value problems due to the nature of HPM. It is well known that the basic HPM method requires knowledge of the initial conditions of the problem to be solved. This work not only seeks to provide a general solution for linear equations about ordinary points, but also provides a proof that its series solutions are equivalent to those obtained by PSM. This point is important because, as is well known, although PSM is usually long and cumbersome, there are no alternatives that simplify the procedure. This work proposes such an alternative, the Modified Integral Homotopy Expansive Method (MIHEM). We will see that MIHEM is relatively easy to use and provides an adequate alternative, based on the solution of elementary integrals, for finding general series solutions in the case of ordinary points. The paper is organized as follows.
Section 2 gives a brief review of linear differential equations with variable coefficients. Section 3 presents the basic idea of the proposed method, MIHEM. In Section 4, we apply MIHEM to find power series solutions of linear differential equations about ordinary points. The main results obtained in this work are discussed in Section 5. Section 6 deals with future work as a continuation of this method, and with the conclusions. Finally, the Appendix presents the equivalence of PSM and MIHEM.

## 2. Linear Differential Equations

As is found in the literature, linear differential equations with constant coefficients have exact analytical solutions [19, 20]; in contrast, the methods to find solutions of linear differential equations with variable coefficients are based on infinite series expansions. To start, consider a differential equation of the form

(1) y''(x) + p(x) y'(x) + q(x) y(x) = 0.

If the functions p(x) and q(x) are analytic at x = x_0 (that is, they can be represented by a power series in x − x_0 with positive radius of convergence [19, 20]), then x = x_0 is an ordinary point of the differential equation; otherwise, the point x = x_0 is a singular point [19]. This leads us to present two methods for the solution of (1).

### 2.1. Power Series

The simplest case occurs when the solutions of (1) are expressed in the neighbourhood of an ordinary point x_0 [19, 20]. For this case, the solutions are found in the power series form

(2) y = ∑_{n=0}^∞ c_n (x − x_0)^n,

where the c_n are unknown coefficients, determined by substituting (2) into the equation to be solved. This work is concerned with the case of ordinary points.

### 2.2. Frobenius Series

Singular points can be classified into regular and irregular [19].
The Frobenius method, for the case of regular singular points, allows us to find power series solutions of the form

(3) y = ∑_{n=0}^∞ c_n (x − x_0)^{n+r},

where r is a parameter to be determined, besides the coefficients c_n. In accordance with the theory of linear differential equations, the general solution of (1) can, for the case of ordinary points, be expressed as the superposition of two linearly independent series of the form (2) and, for the simpler case of regular singular points, in the form (3). An important result that accounts for the nature of solutions near ordinary points is contained in the following fundamental theorem [20].

Theorem 1. Let x_0 be an ordinary point of the differential equation (1), and let a_0, a_1 be arbitrary constants. Then there is a unique function y(x) that is analytic at x_0, is a solution of (1) in a certain neighbourhood of this point, and satisfies the initial conditions y(x_0) = a_0, y'(x_0) = a_1. Moreover, if the power series expansions of p(x) and q(x) are valid on an interval |x − x_0| < R, R > 0, then the power series expansion of this solution is also valid on the same interval [20].

We will see later the importance of this theorem and its relation to the results obtained in this article. Next, we state the following fundamental theorem for non-homogeneous second-order linear differential equations, which will be useful in the next section [20].

Theorem 2. Let P(x), Q(x), and R(x) be continuous functions on an interval a ≤ x ≤ b. If x_0 is any point in this interval, and y_0 and y_0' are any numbers whatever, then the initial value problem y''(x) + P(x) y'(x) + Q(x) y(x) = R(x), y(x_0) = y_0, y'(x_0) = y_0', has one and only one solution y = y(x) on the interval a ≤ x ≤ b [20].

## 3. MIHEM Method

The contribution of this work is focused on the case of ordinary points, but employing a novel method, the Modified Integral Homotopy Expansive Method (MIHEM). We will see that MIHEM is easy to use and provides an adequate method for finding general series solutions when the solutions of (1) are expressed in the neighbourhood of an ordinary point x_0. Initially, the integral homotopy expansive method (IHEM) [18] was conceived above all as a method for obtaining approximate and exact solutions of ordinary differential equations. This work will show how MIHEM modifies IHEM in order to widen the application of the integral homotopy expansive method to the solution of linear differential equations. In summary, MIHEM proceeds as follows. Given a linear ordinary differential equation, MIHEM expresses it as an integral equation, which is solved for y. At this step, the integrand of the equation is expressed in terms of a homotopy, and it is assumed that y is expressed as a power series in the homotopy parameter p [10, 11]. At this point, unlike in IHEM, we set the initial conditions of the problem to zero (in case the problem provides such conditions) and propose as initial function y_0(x) = A + Bx, where A and B will turn out to be the arbitrary constants of the general solution.
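The recipe just summarized lends itself to a few lines of computer algebra. The following sympy fragment is our own illustrative reading of it, not code from the paper; the function name and the test equation y'' + y = 0 are ours. Each pass double-integrates the previous iterate, starting from the initial function A + Bx:

```python
# Illustrative sketch (ours) of the MIHEM recipe for
# y'' + a1(x) y' + a0(x) y + f(x) = 0 about x0 = 0.
import sympy as sp

x, s, t, A, B = sp.symbols("x s t A B")

def mihem(a1, a0, f, iterations=4):
    a1, a0, f = map(sp.sympify, (a1, a0, f))
    nu = A + B * x              # initial function; A, B become the arbitrary constants
    total = nu
    for _ in range(iterations):
        integrand = (-a1 * sp.diff(nu, x) - a0 * nu - f).subs(x, s)
        f = sp.Integer(0)       # the source term enters the first iterate only
        nu = sp.integrate(sp.integrate(integrand, (s, 0, t)), (t, 0, x))
        total += nu
    return sp.expand(total)

# Our own test equation y'' + y = 0: the iterates rebuild the Maclaurin
# series of A cos x + B sin x term by term.
sol = mihem(0, 1, 0)
```

Each iteration contributes one new even-power term (multiplying A) and one new odd-power term (multiplying B), mirroring the hand computations in the examples of Section 4.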
The rest of the procedure is the same as in the IHEM method [18]: equating identical powers of p on both sides of the integral equation, it is possible to determine a sequence of unknown functions which determines the MIHEM solution. To understand how MIHEM works, consider a general differential equation, which can be expressed as [10, 11]

(4) L(u) + f(x) = 0, x ∈ Ω,

with the following boundary conditions:

(5) F(u, ∂u/∂n) = 0, x ∈ Γ,

where F is a boundary operator, f(x) is a given analytic function, Γ is the boundary of the domain Ω, and L is a linear operator. From (4), we solve for u in terms of the integral equation

(6) u(x) = A + Bx + ∫_{x_0}^{x} ∫_{x_0}^{t} [−a_1(s) u'(s) − a_0(s) u(s) − f(s)] ds dt.

In (6) it is assumed that L is a second-order linear operator corresponding to the general linear differential equation of the form

(7) u''(x) + a_1(x) u'(x) + a_0(x) u(x) + f(x) = 0.

According to the method, a homotopy is introduced, assuming that the unknown function u is expressed as a power series in the homotopy parameter p:

(8) u(x) = v_0(x) + v_1(x) p + v_2(x) p^2 + ….

Another assumption of the proposed method, unlike the IHEM method [18], is that the integral version of the problem does not contain the initial conditions outside the integral, so that (6) is rewritten as

(9) u(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} [−a_1(s) u'(s) − a_0(s) u(s) − f(s)] ds dt.

From all the above, we propose the following iterative process:

(10) ∑_{n=0}^∞ p^n ν_n(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} { (1 − p) w(s) + p [−a_1(s) ∑_{n=0}^∞ p^n ν_n'(s) − a_0(s) ∑_{n=0}^∞ p^n ν_n(s) − f(s)] } ds dt,

where p is the homotopy parameter, belonging to the interval [0, 1], and w(x) is a function introduced using the flexibility of the homotopy method.
Although (10) could be useful, the final version of MIHEM is obtained by assuming that w(s) = 0, in such a way that

(11) ∑_{n=0}^∞ p^n ν_n(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} p [−a_1(s) ∑_{n=0}^∞ p^n ν_n'(s) − a_0(s) ∑_{n=0}^∞ p^n ν_n(s) − f(s)] ds dt.

In accordance with the proposed method, we propose as initial function

(12) y_0 = A + Bx,

where A and B will turn out to be the arbitrary constants of the general solution. Then, after equating identical powers of p, the values of the sequence ν_0, ν_1, ν_2, … can be found by solving, in a systematic way, the integrals arising from the different orders:

(13)
ν_0(x) = A + Bx,
ν_1(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} [−a_1(s) ν_0'(s) − a_0(s) ν_0(s) − f(s)] ds dt,
ν_2(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} [−a_1(s) ν_1'(s) − a_0(s) ν_1(s)] ds dt,
ν_3(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} [−a_1(s) ν_2'(s) − a_0(s) ν_2(s)] ds dt,
…
ν_j(x) = ∫_{x_0}^{x} ∫_{x_0}^{t} [−a_1(s) ν_{j−1}'(s) − a_0(s) ν_{j−1}(s)] ds dt,
….

In this way, to obtain an approximate or exact solution of (7), the results of (13) are substituted into (8) and, taking the limit p → 1, the following solution is obtained [10, 11]:

(14) U(x) = v_0(x) + v_1(x) + v_2(x) + v_3(x) + ….

For the case of first-order linear differential equations the procedure is very similar, but instead of (12) the initial function is y_0 = A. In order to prove that (14) is indeed a solution of (9), we have to show that the function U(x) is continuous. We will appeal to the following argument, based on Theorem 2 already mentioned, and restrict our considerations to the conditions imposed by this theorem. We assume that the coefficient functions a_1(x), a_0(x), and f(x) of (7) are continuous on an interval a ≤ x ≤ b. Then, given a point x_0 in this interval and any numbers y_0 and y_0', the ODE (7) subject to y(x_0) = y_0 and y'(x_0) = y_0' has one and only one solution y = y(x) on a ≤ x ≤ b (we note that many times the above-mentioned interval can be extended to (−∞, ∞) [20]). The proposed method MIHEM expresses (7) in terms of the integral equation (9), and the solution of the problem is expressed through the iterative process proposed in (10). We begin from the initial function (12), where A and B correspond to the initial conditions y(x_0) = A and y'(x_0) = B. Next, the first iteration of the process, corresponding to ν_1(x) (see (13)), is given by integrals of a_1(x), a_0(x)·y_0(x), and f(x).
Since y_0(x) is a polynomial function, these integrals provide, in principle, continuous functions. Moreover, given that a_1(x), a_0(x), and f(x) are assumed continuous on a ≤ x ≤ b, it is possible to express them as power series in x − x_0, assuming that these functions are analytic at x_0 with positive radius of convergence R. From the properties of such series inside their interval of convergence, the operations mentioned above, including the integrals involved, are valid for |x − x_0| < R. As a consequence, it is possible to make the necessary rearrangements to express the function ν_1(x) as a convergent series for |x − x_0| < R, and for the same reason ν_1(x) is continuous. Similar arguments apply to the other integrals expressed by ν_2(x), ν_3(x), …. Thus, for instance, ν_2(x) is obtained by integrating the products of series that emanate from a_1(x)·ν_1'(x) and a_0(x)·ν_1(x). From the above argument, these operations are valid on a ≤ x ≤ b. As a matter of fact, the differentiation ν_1'(x) is a valid operation inside the interval of convergence, and the products of convergent series mentioned above are performed by applying the distributive property and grouping like terms, in such a way that the results are series convergent for |x − x_0| < R, even after performing the integrations required by MIHEM. Consequently, ν_2(x) is represented by a convergent series on |x − x_0| < R and for the same reason is a continuous function on the same interval. It is possible to continue in this way and note that (14) (after applying the limit p → 1) is expressed as a sum of series convergent on this interval, whose terms can therefore be rearranged and added term-wise. Thus, we finally get a convergent series, valid for |x − x_0| < R, which represents a continuous function.
Given that, in accordance with Theorem 2, the posed second-order inhomogeneous problem possesses a unique solution, the MIHEM solution (14) is a continuous function that indeed represents the solution of the proposed problem under the conditions established in Theorem 2, and for the same reason it is also a solution of (9), assuming the conditions of the aforementioned theorem hold. To ease the application of MIHEM, the homotopy technique allows us to introduce the homotopy parameter p into the coefficient functions a_1(x), a_0(x), and f(x), that is, a_1(px), a_0(px), and f(px), in the iterative process given by (10). This procedure is especially useful when some of these functions are not polynomials [19]. Example 4 shows this procedure.

## 4. Application of MIHEM to Find Power Series Solutions to Linear Differential Equations for the Case of Ordinary Points

Next, we exemplify the use of the MIHEM method by solving linear differential equations for the case of ordinary points.

Example 1. Obtain the general solution of the following linear first-order differential equation:

(15) y' + 2xy = 0.

This example compares in detail the MIHEM method and the Power Series Method (PSM). Since the point x_0 = 0 is an ordinary point of (15), the Power Series Method is appropriate to obtain the solution of (15).

### 4.1.
Power Series Method

In accordance with PSM, we assume a solution of the form (see (2)) [19]

(16) y = ∑_{n=0}^∞ c_n x^n.

Next, we substitute (16) into (15) to obtain

(17) ∑_{n=1}^∞ n c_n x^{n−1} + ∑_{n=0}^∞ 2 c_n x^{n+1} = 0.

With the purpose of adding the sums, we rewrite (17) in the following manner:

(18) c_1 + ∑_{n=2}^∞ n c_n x^{n−1} + ∑_{n=0}^∞ 2 c_n x^{n+1} = 0.

Following the power series algorithm, we change the dummy index as follows: for the first sum we substitute k = n − 1, and for the second one k = n + 1, in such a way that (18) is written as

(19) c_1 + ∑_{k=1}^∞ (k + 1) c_{k+1} x^k + ∑_{k=1}^∞ 2 c_{k−1} x^k = 0.

After adding the sums, we rewrite (19) in the following compact way:

(20) c_1 + ∑_{k=1}^∞ [(k + 1) c_{k+1} + 2 c_{k−1}] x^k = 0.

Thus, equating to zero the coefficients of the different powers of x, we get

(21) c_1 = 0,
(22) (k + 1) c_{k+1} + 2 c_{k−1} = 0, for k = 1, 2, 3, ….

Equation (22) is called the recurrence relation, which is employed to determine the coefficients c_k. Given that k + 1 ≠ 0, (22) can be written as

(23) c_{k+1} = −2 c_{k−1}/(k + 1), for k = 1, 2, 3, ….

After iterating (23), we obtain the values

(24) k = 1: c_2 = −c_0; k = 2: c_3 = 0; k = 3: c_4 = −2c_2/4 = c_0/2!; k = 4: c_5 = 0; k = 5: c_6 = −2c_4/6 = −c_0/3!; k = 6: c_7 = 0; k = 7: c_8 = −2c_6/8 = c_0/4!; ….

Therefore, after substituting (24) into (16) we get

(25) y(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + c_5 x^5 + c_6 x^6 + … = c_0 − c_0 x^2 + (c_0/2!) x^4 − (c_0/3!) x^6 + (c_0/4!) x^8 − …,

or

(26) y(x) = c_0 (1 − x^2 + x^4/2! − x^6/3! + x^8/4! − …),

where c_0 is arbitrary, and for the same reason we have found the general solution of (15). As a matter of fact, we recognize the series in (26) as e^{−x^2}; therefore the general solution can be written as

(27) y(x) = c_0 e^{−x^2}.

We note that this procedure is long and cumbersome even for a first-order linear differential equation.

### 4.2.
MIHEM Method

In order to employ the proposed method, we express (15) as an integral equation:

(28) y(x) = −∫_0^x 2s y(s) ds.

In accordance with MIHEM, we express (28) as follows:

(29) ∑_{n=0}^∞ p^n ν_n(x) = −2p ∫_0^x s ∑_{n=0}^∞ p^n ν_n(s) ds.

Given that the differential equation is of first order, we propose as initial function the constant

(30) ν_0(x) = A.

Iterating (29), we get

(31) ν_1(x) = −2 ∫_0^x s ν_0(s) ds = −2 ∫_0^x A s ds;

thus

(32) ν_1(x) = −A x^2, ν_2(x) = −2 ∫_0^x s ν_1(s) ds = 2 ∫_0^x A s^3 ds;

therefore

(33) ν_2(x) = A x^4/2, ν_3(x) = −2 ∫_0^x s ν_2(s) ds = −2 ∫_0^x s (A s^4/2) ds = −A x^6/6, ν_4(x) = −2 ∫_0^x s ν_3(s) ds = 2 ∫_0^x s (A s^6/6) ds;

integrating,

(34) ν_4(x) = A x^8/24.

After substituting (31)–(34) into (14), we get

(35) y(x) = A (1 − x^2 + x^4/2! − x^6/3! + x^8/4! − …).

Given that A is arbitrary, (35) is a general solution of (15); as a matter of fact, this result is the same as the one obtained by the Power Series Method, (27). We emphasize the ease with which MIHEM yields (35).

Example 2. Obtain the general solution of the following linear second-order differential equation:

(36) y'' − (1 + x) y = 0.

This example compares in detail the MIHEM and the Power Series Method [19]. Given that the point x_0 = 0 is an ordinary point of (36), we first employ the Power Series Method to obtain a solution. In accordance with PSM, we assume a solution of the form (16):

(37) y = ∑_{n=0}^∞ c_n x^n.

Since the PSM procedure is extremely long, we provide a summary of it. By substituting (37) into (36) we get, on the one hand,

(38) c_2 = c_0/2,

and on the other hand the recurrence relation

(39) c_{k+2} = (c_k + c_{k−1}) / [(k + 1)(k + 2)], k = 1, 2, 3, ….

The procedure that yields (38) and (39) is similar to, but more complicated than, the one that yielded (21) and (23). We note that, unlike (23), (39) is a recurrence relation with three terms. By iterating (39) and choosing c_1 = 0, we obtain the coefficients

(40) c_1 = 0, c_2 = c_0/2, c_3 = c_0/6, c_4 = c_0/24, c_5 = c_0/30, …,

whose values are substituted into (37) to get

(41) y_1(x) = c_0 (1 + x^2/2 + x^3/6 + x^4/24 + x^5/30 + …).
Iterating a second time, now choosing c_0 = 0, we get the coefficients

(42) c_0 = 0, c_2 = 0, c_3 = c_1/6, c_4 = c_1/12, c_5 = c_1/120, ….

Substituting these coefficients into (37), we obtain

(43) y_2(x) = c_1 (x + x^3/6 + x^4/12 + x^5/120 + …).

From (41) and (43), the general solution of (36) is expressed as

(44) y(x) = c_0 (1 + x^2/2 + x^3/6 + x^4/24 + x^5/30 + …) + c_1 (x + x^3/6 + x^4/12 + x^5/120 + …).

In accordance with Theorem 1, the series (41) and (43) converge for all x. As a matter of fact, this process is long and hard for most applications.

### 4.3. MIHEM Method

In order to employ the proposed method, we express (36) as an integral equation:

(45) y(x) = ∫_0^x ∫_0^t (1 + s) y(s) ds dt.

In accordance with MIHEM, we express (45) as follows:

(46) ∑_{n=0}^∞ p^n ν_n(x) = p ∫_0^x ∫_0^t (1 + s) ∑_{n=0}^∞ p^n ν_n(s) ds dt.

Given that the differential equation to solve is of second order, we propose as initial function

(47) y_0 = A + Bx.

Iterating (46),

(48) ν_1(x) = ∫_0^x ∫_0^t (1 + s) ν_0(s) ds dt;

thus, by substituting (47) into (48) we get

(49) ν_1(x) = ∫_0^x ∫_0^t (1 + s)(A + Bs) ds dt.

After performing the elementary successive integrals in (49), we obtain

(50) ν_1(x) = A x^2/2 + (A + B) x^3/6 + B x^4/12.

On the other hand,

(51) ν_2(x) = ∫_0^x ∫_0^t (1 + s) ν_1(s) ds dt.

By substituting (50) into (51) and performing basic integrals, we get

(52) ν_2(x) = A (x^4/24 + x^5/30 + x^6/180 + …) + B (x^5/120 + x^6/120 + x^7/504 + …).

After substituting (47), (50), and (52) into (14), we get

(53) y(x) = A (1 + x^2/2 + x^3/6 + x^4/24 + x^5/30 + …) + B (x + x^3/6 + x^4/12 + x^5/120 + …).

We note that (44) and (53) are the same. Nevertheless, we emphasize the ease with which MIHEM obtains the same result, employing only two iterations and elementary integrals in a systematic way.

Example 3. This example considers the Hermite differential equation, which is important for its applications in physics [21]. We employ the MIHEM method not only to solve the Hermite differential equation but also to provide a deeper analysis of this problem. The differential equation to solve is

(54) y'' − 2x y' + 2α y = 0,

where α is a constant. In order to employ the proposed method, we express (54) as an integral equation:

(55) y(x) = ∫_0^x ∫_0^t [2s y'(s) − 2α y(s)] ds dt.
In accordance with MIHEM, we express (55) as follows:

(56) ∑_{n=0}^∞ p^n ν_n(x) = p ∫_0^x ∫_0^t [2s ∑_{n=0}^∞ p^n ν_n'(s) − 2α ∑_{n=0}^∞ p^n ν_n(s)] ds dt.

Given that (54) is a second-order linear differential equation, we propose as initial function

(57) y_0 = A + Bx.

Iterating (56), we get the following elementary integral:

(58) ν_1(x) = ∫_0^x ∫_0^t [2s ν_0'(s) − 2α ν_0(s)] ds dt;

thus, by substituting (57) into (58) and performing the indicated integrations, we get

(59) ν_1(x) = −(2α/2!) A x^2 − [2(α − 1)/3!] B x^3.

On the other hand, the second iteration results in

(60) ν_2(x) = ∫_0^x ∫_0^t [2s ν_1'(s) − 2α ν_1(s)] ds dt.

From (59) and (60), after performing some elementary operations, we obtain

(61) ν_2(x) = A [2^2 α(α − 2)/4!] x^4 + B [2^2 (α − 1)(α − 3)/5!] x^5.

On the other hand,

(62) ν_3(x) = ∫_0^x ∫_0^t [2s ν_2'(s) − 2α ν_2(s)] ds dt.

Substituting (61) into (62) yields

(63) ν_3(x) = −A [2^3 α(α − 2)(α − 4)/6!] x^6 − B [2^3 (α − 1)(α − 3)(α − 5)/7!] x^7.

Therefore, the general solution of (54) is obtained by substituting (57), (59), (61), and (63) into (14):

(64) y(x) = A y_1(x) + B y_2(x),

where

(65) y_1(x) = 1 − (2α/2!) x^2 + [2^2 α(α − 2)/4!] x^4 − [2^3 α(α − 2)(α − 4)/6!] x^6 + …,

(66) y_2(x) = x − [2(α − 1)/3!] x^3 + [2^2 (α − 1)(α − 3)/5!] x^5 − [2^3 (α − 1)(α − 3)(α − 5)/7!] x^7 + ….

Given that this same result is obtained by the Power Series Method [20], in accordance with Theorem 1 both series (65) and (66) converge for all x. The case of non-negative integer α is particularly relevant for the following reason: for each such α, one of the series terminates in a polynomial while the other remains an infinite series. From (65) and (66) it is clear that y_1(x) is a polynomial if α is even and y_2(x) is a polynomial if α is odd. Taking successively the values α = 0, 1, 2, 3, 4, 5, we obtain the polynomials

(67) p_0(x) = 1, p_1(x) = x, p_2(x) = 1 − 2x^2, p_3(x) = x − (2/3)x^3, p_4(x) = 1 − 4x^2 + (4/3)x^4, p_5(x) = x − (4/3)x^3 + (4/15)x^5.

The so-called Hermite polynomials are obtained by taking constant multiples of the polynomials (67) such that the terms containing the highest power of x are of the form 2^n x^n. From (67) it is possible to obtain some Hermite polynomials (denoted H_n(x)) as follows [21]:

(68) H_0(x) = 1, H_1(x) = 2x, H_2(x) = 4x^2 − 2, H_3(x) = 8x^3 − 12x, H_4(x) = 16x^4 − 48x^2 + 12, H_5(x) = 32x^5 − 160x^3 + 120x, ….
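The termination just described can be checked by machine. The following sympy fragment (our own sketch, not the authors' code) runs the Hermite iteration with α = 5 and A = 0, B = 1; iterates beyond the second vanish, leaving p_5(x), which rescales to H_5(x):

```python
# Sketch (ours) of the MIHEM iteration for the Hermite equation,
# y'' - 2 x y' + 2*alpha*y = 0, with alpha = 5 and initial function x
# (i.e., A = 0, B = 1): the odd series terminates as a polynomial.
import sympy as sp

x, s, t = sp.symbols("x s t")
alpha = 5

nu, total = x, x
for _ in range(4):
    integrand = 2 * s * sp.diff(nu, x).subs(x, s) - 2 * alpha * nu.subs(x, s)
    nu = sp.integrate(sp.integrate(integrand, (s, 0, t)), (t, 0, x))
    total += nu                 # third and later iterates are identically zero

p5 = sp.expand(total)           # x - (4/3) x**3 + (4/15) x**5
H5 = sp.expand(120 * p5)        # 120 x - 160 x**3 + 32 x**5, i.e. H_5(x)
```

The rescaling factor 120 makes the leading term 2^5 x^5, as required of a Hermite polynomial.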
In this case MIHEM not only provided a general solution of (54) but also provided the convenient forms (65) and (66), from which the important Hermite polynomials (68) are easily identified. Next, we compare the PSM and MIHEM methods on the following linear differential equation with a non-polynomial coefficient.

Example 4. Obtain the general solution of the following linear second-order differential equation:

(69) y'' + e^{−x} y = 0.

Given that the point x_0 = 0 is an ordinary point of (69), we first employ the Power Series Method to obtain a solution. In accordance with PSM, we assume a solution of the form

(70) y = ∑_{n=0}^∞ a_n x^n.

After differentiating (70) we get

(71) y''(x) = ∑_{n=2}^∞ a_n n(n − 1) x^{n−2}.

On the other hand, the Taylor series of e^{−x} is well known:

(72) e^{−x} = 1 − x + x^2/2 − x^3/6 + x^4/24 − x^5/120 + x^6/720 − ….

We will obtain the first six terms of (70). The substitution of (70)–(72) into the left-hand side of (69) yields

(73) ∑_{n=2}^∞ a_n n(n − 1) x^{n−2} + a_0 + (a_1 − a_0) x + (a_2 − a_1 + a_0/2) x^2 + (a_3 − a_2 + a_1/2 − a_0/6) x^3 + (a_4 − a_3 + a_2/2 − a_1/6 + a_0/24) x^4 + (a_5 − a_4 + a_3/2 − a_2/6 + a_1/24 − a_0/120) x^5 + … = 0.

After developing the sum in (73) and grouping in powers of x, we get

(74) (a_0 + 2a_2) x^0 + (6a_3 + a_1 − a_0) x^1 + (12a_4 + a_2 + a_0/2 − a_1) x^2 + (20a_5 + a_3 + a_1/2 − a_0/6 − a_2) x^3 + (30a_6 + a_4 + a_2/2 − a_1/6 + a_0/24 − a_3) x^4 + (42a_7 + a_5 + a_3/2 − a_2/6 + a_1/24 − a_0/120 − a_4) x^5 + … = 0.

After setting the coefficient of each power of x equal to zero, we obtain

(75) a_2 = −a_0/2, a_3 = (a_0 − a_1)/6, a_4 = a_1/12, a_5 = (1/20)(−a_0/2 − a_1/3), ….

Therefore, after substituting (75) into (70), we obtain the general solution

(76) y(x) = a_0 (1 − x^2/2 + x^3/6 − x^5/40 + …) + a_1 (x − x^3/6 + x^4/12 − x^5/60 + …).

Although we have simplified several steps, the above process is long and cumbersome for most applications.

### 4.4.
MIHEM Method

In order to employ the proposed method, we express (69) as an integral equation:

(77) y(x) = −∫_0^x ∫_0^t e^{−s} y(s) ds dt.

In accordance with MIHEM, we express (77) as follows:

(78) ∑_{n=0}^∞ p^n ν_n(x) = −p ∫_0^x ∫_0^t e^{−ps} ∑_{n=0}^∞ p^n ν_n(s) ds dt.

In (78) we introduced the homotopy parameter into the non-polynomial factor, with the purpose of distributing the exponential function over the different iterations of the proposed method, to ease the application of MIHEM. After substituting

(79) e^{−px} = 1 − px + p^2 x^2/2 − p^3 x^3/6 + p^4 x^4/24 − p^5 x^5/120 + p^6 x^6/720 − …

into (78) and equating the coefficients of identical powers of p, we get the following relations:

(80) ν_0(x) = A + Bx (the initial function),

(81) ν_1(x) = −∫_0^x ∫_0^t ν_0(s) ds dt;

thus, by substituting (80) into (81) we get

(82) ν_1(x) = −A x^2/2 − B x^3/6,

(83) ν_2(x) = −∫_0^x ∫_0^t [−s ν_0(s) + ν_1(s)] ds dt.

Substituting (80) and (82) into (83) and performing elementary integrals, we get

(84) ν_2(x) = A x^3/6 + (B + A/2) x^4/12 + B x^5/120.

The following iteration results in

(85) ν_3(x) = −∫_0^x ∫_0^t [ν_2(s) − s ν_1(s) + (s^2/2) ν_0(s)] ds dt.

From (80), (82), (84), and (85) we get

(86) ν_3(x) = −A x^4/24 − (x^5/20)(2A/3 + B/2) − (x^6/30)(B/4 + A/24) − B x^7/5040.

Iterating (78), we get the following elementary integral:

(87) ν_4(x) = −∫_0^x ∫_0^t [ν_3(s) − s ν_2(s) + (s^2/2) ν_1(s) − (s^3/6) ν_0(s)] ds dt.

Taking into account that we proposed to obtain the first six terms of the series solution (70), we note that only the last integral of (87) contributes to this part of the solution, whereby we write (87) as follows:

(88) ν_4(x) = ∫_0^x ∫_0^t (s^3/6) ν_0(s) ds dt + ….

After substituting (80) into (88), we obtain

(89) ν_4(x) = A x^5/120 + B x^6/180 + ….

Substituting (80), (82), (84), (86), and (89) into (14), we get

(90) y(x) = A (1 − x^2/2 + x^3/6 − x^5/40 + …) + B (x − x^3/6 + x^4/12 − x^5/60 + …).

We note that we obtained the same results, (76) (PSM) and (90) (MIHEM), as it should be. On the other hand, we note that five steps of the PSM recurrence provide the solution only up to the fifth power, while the fourth MIHEM iteration provides the same information plus part of the following powers.

Example 5.
This example provides the general solution of the following linear second-order inhomogeneous differential equation by using MIHEM:

(91) y'' − (2 + 4x^2) y + x(2 + 4x^2) = 0.

To employ the proposed method, we express (91) in terms of the integral equation

(92) y(x) = ∫_0^x ∫_0^t [(2 + 4s^2) y(s) − s(2 + 4s^2)] ds dt.

In accordance with the proposed method, we express (92) as follows:

(93) ∑_{n=0}^∞ p^n ν_n(x) = p ∫_0^x ∫_0^t [(2 + 4s^2) ∑_{n=0}^∞ p^n ν_n(s) − s(2 + 4s^2)] ds dt.

Next, we propose as initial function

(94) y_0 = A + Bx,

so that we obtain from (93)

(95) ν_1(x) = ∫_0^x ∫_0^t [(2 + 4s^2) ν_0(s) − s(2 + 4s^2)] ds dt.

Thus, after substituting (94) into the above expression, we get

(96) ν_1(x) = ∫_0^x ∫_0^t [(2 + 4s^2)(A + Bs) − 2s − 4s^3] ds dt.

After performing elementary operations, we obtain

(97) ν_1(x) = A x^2 + [(B − 1)/3] x^3 + (A/3) x^4 + [(B − 1)/5] x^5.

In the same way,

(98) ν_2(x) = ∫_0^x ∫_0^t (2 + 4s^2) ν_1(s) ds dt.

After substituting (97) into (98), we get

(99) ν_2(x) = ∫_0^x ∫_0^t (2 + 4s^2) [A s^2 + ((B − 1)/3) s^3 + (A/3) s^4 + ((B − 1)/5) s^5] ds dt.

After performing elementary operations, we get

(100) ν_2(x) = A x^4/6 + [(B − 1)/30] x^5 + (2A/3 + 4A) x^6/30 + [2(B − 1)/5 + 4(B − 1)/3] x^7/42 + (A/3) x^8/14 + [(B − 1)/90] x^9 + ….

From the above results, we get an approximate solution of (91) in the following way:

(101) y(x) = A (1 + x^2 + x^4/2 + (7/45) x^6 + (1/42) x^8 + …) + Bx + [(B − 1)/3] x^3 + [7(B − 1)/30] x^5 + [13(B − 1)/315] x^7 + [(B − 1)/90] x^9 + ….
Given that k+1≠0, then (22) can be written as(23)ck+1=−2ck−1k+1,for k=1,2,3,…After iterating (23), we obtain the values:(24)k=1,c2=−c0,k=2,c3=0,k=3,c4=−2c24=c02!,k=4,c5=0,k=5,c6=−2c46=−c03!,k=6,c7=0,k=7,c8=−2c68=c04!,…Therefore, after substituting (24) into (16) we get(25)yx=c0+c1x+c2x2+c3x3+c4x4+c5x5+c6x6+…,yx=c0+0−c0x2+0+c02!x4+0−c03!x6+c04!x8−….or(26)yx=c01−x2+12!x4−13!x6+14!x8−…,where c0 is arbitrary and for the same reason we have found the general solution for (15).As a matter of fact, we recognize the series (26) as e−x2, therefore the general solution can be written as(27)yx=c0e−x2.We note that this procedure is even longer and cumbersome for the case of linear differential equations of first order. ## 4.2. MIHEM Method In order to employ the proposed method we express (15) as an integral equation:(28)yx=−∫0x2sysds.In accordance with MIHEM we express (29) as follows(29)∑n=0∞pnνnx=−2p∫0xx∑n=0∞pnνnsds.Given that the differential equation is of first order, we propose as initial function the constant(30)ν0x=A.In such a way that iterating (31) we get(31)ν1x=−2∫0xsν0sds=−2∫0xsAds.thus,(32)ν1x=−Ax2,ν2x=−2∫0xsν1sds=2∫0xsAs2ds.therefore,(33)ν2x=Ax42,ν3x=−2∫0xsν2sds=2∫0xsAs42ds,ν3x=−Ax66,ν4x=−2∫0xsν3sds=2∫0xsAs66ds,integrating(34)ν4x=Ax824.After substituting equations (31)–(34) into (14) we get(35)yx=A1−x2+12!x4−13!x6+14!x8−…,given that A is arbitrary, then (35) is a general solution of (15), as a matter of fact this result is the same obtained for Power Series Method (27). We emphasize the ease of MIHEM to obtain (35).Example 2. Obtain the general solution for the following linear second order differential equation.(36)y″−1+xy=0. This example will compare in detail the MIHEM and Power Series Method [19]. Given that the pointx0=0 is an ordinary point of (36), we will employ the Power Series Method with the end to obtain a solution. In accordance with PSM we will assume a solution of the form (16)(37)y=∑n=0∞cnxn. 
Since PSM method is extremely long, we will provide a summary of the followed procedure. By substituting (37) into (36) we get on the one hand(38)c2=c02,on the other hand the following recurrence relation(39)ck+2=ck+ck−1k+1k+2,k=1,2,3,…,the procedure that yields in (38) and (39) is similar but more complicated than the one that yielded in (21) and (23). We note that unlike (23), (39) is a recurrence relation with three terms. By iterating (39) and choosing c1=0 we obtain the coefficients(40)c1=0,c2=c02,c3=c06,c4=c024,c5=c030,…,which values are substituted into (37), to get(41)y1x=c01+12x2+16x3+124x4+130x5+…. Iterating a second time by choosingc2=0 we get the coefficients(42)c0=0,c2=0,c3=c16,c4=c112,c5=c1120,…,substituting these coefficients into (37) we obtain(43)y2x=c1x+16x3+112x4+1120x5+…. From (41) and (43), the general solution of (36) is expressed as(44)yx=c01+12x2+16x3+124x4+130x5+…+c1x+16x3+112x4+1120x5+…. In accordance with Theorem1, series (41) and (43) converge for all x. As a matter of fact, this process is long and hard for most of applications. ## 4.3. MIHEM Method In order to employ the proposed method we express (36) as an integral equation:(45)yx=∫0x∫0ts+1ysdsdt.In accordance with MIHEM we express (45) as follows(46)∑n=0∞pnνnx=p∫0x∫0t1+s∑n=0∞pnνnsdsdt.Given that the differential equation to solve is of second order, we propose as initial function(47)y0=A+Bx.In such a way that after iterating (46)(48)ν1x=∫0x∫0t1+xν0sdsdt,thus, by substituting (47) into (48) we get(49)ν1x=∫0x∫0t1+sA+Bsdsdt.After perform the elementary successive integrals in (49) we obtain:(50)ν1x=Ax22+A+Bx36+Bx412.On the other hand(51)ν2x=∫0x∫0t1+xν1sdsdt.By substituting (50) into (51) and after performing basic integrals we get(52)ν2x=Ax424+x530+x6180+…+Bx5120+x6120+x7504+….After substituting (47), (50), and (52) into (14), we get(53)yx=A1+12x2+16x3+124x4+130x5+…+Bx+16x3+112x4+1120x5+….We note that (44) and (53) are the same. 
Nevertheless we emphasize the ease of MIHEM to obtain the same result but employing only two iterations and elementary integrals in a systematic way.Example 3. This example considers Hermite differential equation which is important for its applications in physics [21]. This example will employ the MIHEM method, not only for solving the Hermite differential equation but to provide a deeper analysis of this problem. The differential equation to solve is:(54)y′′−2xy′+2αy=0,where α is a constant. In order to employ the proposed method we express (54) as an integral equation:(55)yx=∫0x∫0t2sy′s−2αysdsdt. In accordance with MIHEM we express (55) as follows(56)∑n=0∞pnνnx=p∫0x∫0t2s∑n=0∞pnνn′s−2α∑n=0∞pnνnsdsdt. Given that (54) is a second order linear differential equation we propose as initial function(57)y0=A+Bx. So that, iterating (56) we get the following elementary integral:(58)ν1x=∫0x∫0t2xν0′s−2αν0sdsdt,thus, by substituting (57) into (58) and performing the indicated integrations we get(59)ν1x=2B1−αx36−αx2A. On the other hand the second iteration results in(60)ν2x=∫0x∫0t2sν1′s−2αν1sdsdt,from (59) and (60), after performing some elementary operations we obtain(61)ν2x=A22αα−24!x4+B22α−1α−35!x5. On the other hand(62)ν3x=∫0x∫0t2sν2′s−2αν2sdsdt,substituting (61) into (62) yields in(63)ν3x=−A23αα−2α−46!x6−B23α−1α−3α−57!x7. Therefore, the general solution of (54) is obtained substituting (57), (59), (61), and (63) into (14).(64)yx=Ay1x+By2x,where(65)y1x=1−2α2!x2+22αα−24!x4−23αα−2α−46!x6+…,(66)y2x=x−2α−13!x3+22α−1α−35!x5−23α−1α−3α−57!x7+…, Given that this same result it is obtained for Power Series Method [20], then in accordance with Theorem 1, both series (65) and (66) converge for all x. The case of non-negative integerα is particularly relevant for the following. For eachα value, one of these series ends and results a polynomial while the other is an infinite series. 
From (65) and (66) it is clear that y_1(x) becomes a polynomial if α is even, while y_2(x) becomes a polynomial if α is odd. Taking successively the values α = 0, 1, 2, 3, 4, 5, we obtain the polynomials

(67) p_0(x) = 1, p_1(x) = x, p_2(x) = 1 − 2x², p_3(x) = x − (2/3)x³, p_4(x) = 1 − 4x² + (4/3)x⁴, p_5(x) = x − (4/3)x³ + (4/15)x⁵.

The so-called Hermite polynomials are obtained by taking constant multiples of the polynomials (67) such that the terms containing the highest powers of x are of the form 2ⁿxⁿ. From (67) it is possible to obtain some Hermite polynomials (denoted H_n(x)) as follows [21]:

(68) H_0(x) = 1, H_1(x) = 2x, H_2(x) = 4x² − 2, H_3(x) = 8x³ − 12x, H_4(x) = 16x⁴ − 48x² + 12, H_5(x) = 32x⁵ − 160x³ + 120x.

In this case MIHEM not only provided a general solution for (54) but also produced the convenient forms (65) and (66), from which the important Hermite polynomials (68) are easily identified.

Next, we compare the PSM and MIHEM methods on a linear differential equation with non-polynomial coefficients.

Example 4. Obtain the general solution of the following linear second-order differential equation:

(69) y″ + e^(−x) y = 0.

Given that the point x_0 = 0 is an ordinary point of (69), we first employ the Power Series Method to obtain a solution. In accordance with PSM we assume a solution of the form

(70) y = Σ_{n=0}^∞ a_n xⁿ;

after differentiating (70) twice we get

(71) y″(x) = Σ_{n=2}^∞ a_n n(n−1) x^(n−2).

On the other hand, the Taylor series of e^(−x) is well known:

(72) e^(−x) = 1 − x + x²/2 − x³/6 + x⁴/24 − x⁵/120 + x⁶/720 − ….

We will obtain the first six terms of (70). Substituting (70)–(72) into the left-hand side of (69) yields

(73) Σ_{n=2}^∞ a_n n(n−1) x^(n−2) + (a_5 + a_3/2 − a_2/6 + a_1/24 − a_0/120 − a_4) x⁵ + (a_4 + a_2/2 − a_1/6 + a_0/24 − a_3) x⁴ + (a_3 + a_1/2 − a_0/6 − a_2) x³ + (a_2 + a_0/2 − a_1) x² + (a_1 − a_0) x + a_0 = 0.

After expanding the sum in (73) and grouping powers of x we get

(74) (a_0 + 2a_2) x⁰ + (6a_3 + a_1 − a_0) x¹ + (12a_4 + a_2 + a_0/2 − a_1) x² + (20a_5 + a_3 + a_1/2 − a_0/6 − a_2) x³ + (30a_6 + a_4 + a_2/2 − a_1/6 + a_0/24 − a_3) x⁴ + (42a_7 + a_5 + a_3/2 − a_2/6 + a_1/24 − a_0/120 − a_4) x⁵ + … = 0.

After setting each coefficient equal to zero, we obtain

(75) a_2 = −a_0/2, a_3 = (a_0 − a_1)/6, a_4 = a_1/12, a_5 = (1/20)(−a_0/2 − a_1/3), ….
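The relations (75) can be verified by substituting the truncated series back into (69) and checking that the low-order coefficients of y″ + e^(−x) y vanish. A short sketch with exact rationals (the helper name is ours):

```python
from fractions import Fraction as F
from math import factorial

def residual_low_order(a0, a1, n=4):
    # a2..a5 from the relations (75)
    a = [a0, a1, -a0 / 2, (a0 - a1) / 6, a1 / 12, (-a0 / 2 - a1 / 3) / 20]
    # coefficients of y'' for x^0..x^(n-1)
    ypp = [(k + 2) * (k + 1) * a[k + 2] for k in range(n)]
    # Taylor coefficients of e^(-x)
    e = [F((-1) ** k, factorial(k)) for k in range(n)]
    # Cauchy product e^(-x) * y
    prod = [sum(e[j] * a[k - j] for j in range(k + 1)) for k in range(n)]
    return [ypp[k] + prod[k] for k in range(n)]  # coefficients of y'' + e^(-x) y
```

The residual is zero through x³ for both basis choices (a_0, a_1) = (1, 0) and (0, 1), confirming (75).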
Therefore, after substituting (75) into (70) we obtain the general solution

(76) y(x) = a_0 (1 − x²/2 + x³/6 − x⁵/40 + …) + a_1 (x − x³/6 + x⁴/12 − x⁵/60 + …).

Although we simplified several steps, the above process is long and cumbersome for most applications.

## 4.4. MIHEM Method

To employ the proposed method we express (69) as an integral equation:

(77) y(x) = −∫₀ˣ ∫₀ᵗ e^(−s) y(s) ds dt.

In accordance with MIHEM we rewrite (77) as

(78) Σ_{n=0}^∞ pⁿ ν_n(x) = −p ∫₀ˣ ∫₀ᵗ e^(−ps) Σ_{n=0}^∞ pⁿ ν_n(s) ds dt.

In (78) we introduced the homotopy parameter into the non-polynomial factor in order to distribute the exponential function among the different iterations of the proposed method and so ease the application of MIHEM. After substituting

(79) e^(−px) = 1 − px + p²x²/2 − p³x³/6 + p⁴x⁴/24 − p⁵x⁵/120 + p⁶x⁶/720 − …

into (78) and equating the coefficients of identical powers of p, we get the following relations:

(80) ν_0(x) = A + Bx (initial function),

(81) ν_1(x) = −∫₀ˣ ∫₀ᵗ ν_0(s) ds dt;

thus, substituting (80) into (81) we get

(82) ν_1(x) = −A x²/2 − B x³/6,

(83) ν_2(x) = −∫₀ˣ ∫₀ᵗ (−s ν_0(s) + ν_1(s)) ds dt;

substituting (80) and (82) into (83) and performing elementary integrals we get

(84) ν_2(x) = A x³/6 + (B + A/2) x⁴/12 + B x⁵/120.

On the other hand, the following iteration gives

(85) ν_3(x) = −∫₀ˣ ∫₀ᵗ (ν_2(s) − s ν_1(s) + (s²/2) ν_0(s)) ds dt;

from (80), (82), (84), and (85) we get

(86) ν_3(x) = −A x⁴/24 − (x⁵/20)(2A/3 + B/2) − (x⁶/30)(B/4 + A/24) − B x⁷/5040.

Iterating (78) once more we get the elementary integral

(87) ν_4(x) = −∫₀ˣ ∫₀ᵗ (ν_3(s) − s ν_2(s) + (s²/2) ν_1(s) − (s³/6) ν_0(s)) ds dt.

Taking into account that we proposed to obtain the first six terms of the series solution (70), we note that only the last integral of (87) contributes to this part of the solution, whereby we write (87) as

(88) ν_4(x) = ∫₀ˣ ∫₀ᵗ (s³/6) ν_0(s) ds dt + ….

After substituting (80) into (88) we obtain

(89) ν_4(x) = A x⁵/120 + B x⁶/180 + ….

Substituting (80), (82), (84), (86), and (89) into (14) we get

(90) y(x) = A (1 − x²/2 + x³/6 − x⁵/40 + …) + B (x − x³/6 + x⁴/12 − x⁵/60 + …).

We note that (76) (PSM) and (90) (MIHEM) coincide, as they should. On the other hand, we note that the fifth iteration of PSM provides the solution only up to the fifth power, while the fourth iteration
of MIHEM provides the same information and part of the following powers.

Example 5. This example provides the general solution of the following linear second-order inhomogeneous differential equation using MIHEM:

(91) y″ − (2 + 4x²) y + x(2 + 4x²) = 0.

To employ the proposed method, we express (91) as the integral equation

(92) y(x) = ∫₀ˣ ∫₀ᵗ ((2 + 4s²) y(s) − s(2 + 4s²)) ds dt.

In accordance with the proposed method we rewrite (92) as

(93) Σ_{n=0}^∞ pⁿ ν_n(x) = p ∫₀ˣ ∫₀ᵗ ((2 + 4s²) Σ_{n=0}^∞ pⁿ ν_n(s) − s(2 + 4s²)) ds dt.

Next, we propose the initial function

(94) y_0 = A + Bx,

so that from (93) we obtain

(95) ν_1(x) = ∫₀ˣ ∫₀ᵗ ((2 + 4s²) ν_0(s) − s(2 + 4s²)) ds dt.

Thus, after substituting (94) into the above expression we get

(96) ν_1(x) = ∫₀ˣ ∫₀ᵗ ((2 + 4s²)(A + Bs) − (2s + 4s³)) ds dt.

After performing elementary operations we obtain

(97) ν_1(x) = A x² + (B−1) x³/3 + (A/3) x⁴ + (B−1) x⁵/5.

In the same way,

(98) ν_2(x) = ∫₀ˣ ∫₀ᵗ (2 + 4s²) ν_1(s) ds dt.

After substituting (97) into (98) we get

(99) ν_2(x) = ∫₀ˣ ∫₀ᵗ (2 + 4s²) (A s² + (B−1) s³/3 + (A/3) s⁴ + (B−1) s⁵/5) ds dt.

After performing elementary operations we get

(100) ν_2(x) = A x⁴/6 + (B−1) x⁵/30 + (2A/3 + 4A) x⁶/30 + (2(B−1)/5 + 4(B−1)/3) x⁷/42 + (A/3) x⁸/14 + (B−1) x⁹/90 + ….

From the above results, we get an approximate solution of (91):

(101) y(x) = A (1 + x² + x⁴/2 + 7x⁶/45 + x⁸/42 + …) + Bx + (B−1) x³/3 + 7(B−1) x⁵/30 + 13(B−1) x⁷/315 + (B−1) x⁹/90 + ….

## 5. Discussion

This work proposed the MIHEM method as a useful tool for finding power series solutions of linear ordinary differential equations about ordinary points. Although our examples show that the Power Series Method and the proposed MIHEM provide the same results under the conditions mentioned above, the appendix is dedicated to showing the equivalence of the two methods for the general case (1) of linear differential equations, assuming that the coefficient functions p(x) and q(x) are analytic at x = x_0 (this equivalence can be inferred from the theoretical discussion just below (14)).
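That the truncations produced by either method really satisfy their equations can be spot-checked by direct substitution; for instance, the low-order residual of the Example 5 series (101) in (91) vanishes for any A and B. A minimal sketch (the helper name and truncation order are ours):

```python
from fractions import Fraction as F

def residual_ex5(A, B, n=4):
    # coefficients of (101) through x^5 (truncation chosen for this check)
    c = [A, B, A, (B - 1) / F(3), A / F(2), 7 * (B - 1) / F(30)]
    # y'' coefficients for x^0..x^(n-1)
    ypp = [(k + 2) * (k + 1) * c[k + 2] for k in range(n)]
    # (2 + 4x^2) * y
    qy = [2 * c[k] + (4 * c[k - 2] if k >= 2 else 0) for k in range(n)]
    forcing = [0, 2, 0, 4]  # x(2 + 4x^2) = 2x + 4x^3
    return [ypp[k] - qy[k] + forcing[k] for k in range(n)]
```

The residual is identically zero through x³ regardless of the chosen constants, as expected of a general solution.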
Although the Power Series Method is the classical method for solving linear equations about ordinary points, we emphasize that PSM is usually long and cumbersome and requires care to avoid errors in handling the sums and their indices. MIHEM was introduced with the same objective, but it is a method systematically based on elementary integrals, always beginning from the same initial function for linear differential equations of the same order. As a matter of fact, from the proposed problems it is clear that, besides elementary algebra, the only other systematically employed mathematical result is the basic integral ∫ xⁿ dx = x^(n+1)/(n+1).

Example 1 solved a linear first-order differential equation; after comparing both methods, it became clear that MIHEM is more direct and easier to use than the traditional PSM. In fact, we noted that the PSM method is often cumbersome and difficult to use: its algorithm involves, among other things, changing the dummy variable, adding sums, and obtaining the so-called recurrence relation, which is employed to determine the c_k coefficients. Conversely, we emphasized the ease with which MIHEM obtains the same results with less effort, since its procedure is systematically based on the solution of basic integrals. Example 2 solved a second-order differential equation, again using both methods. For this case PSM turned out to be even more cumbersome and lengthy than in the first case study; in particular, its iterations required a three-term recurrence relation. In general terms, PSM requires more work as it is applied to higher-order differential equations. The third case study considered the solution of the Hermite differential equation, which is important for its applications in quantum mechanics.
This example employed the MIHEM method not only to solve the Hermite differential equation but also to provide a deeper analysis of this problem. Its application was useful for obtaining the so-called Hermite polynomials, which are important functions of mathematical physics [19, 20]. Example 4 provided a case of non-polynomial coefficients. PSM directly multiplied the series (70) and (72) to get (74) and a system of equations whose solution is given by (75). Although we simplified several steps, that process is long and cumbersome for most applications. On the other hand, in (78) the homotopy parameter was introduced into the non-polynomial factor e^(−x) in order to distribute the exponential function among the different iterations of the proposed method and so ease the application of MIHEM. The rest of the procedure is similar to that of the previous examples, that is, based on basic integrals. Given that PSM and MIHEM provide the same results, the proposal of this work is to take advantage of the ease of the proposed method in the search for solutions of linear differential equations about ordinary points. Finally, Example 5 shows the application of MIHEM to an inhomogeneous problem; the obtained approximate solution shows the ease with which MIHEM handles these problems.

We emphasize the relevance of the proposed MIHEM method for the series solution of linear ordinary differential equations. The method is convenient for applications not only because it is based on the solution of elementary integrals but also because it is wholly systematic. For instance, the method always begins with the same initial function for all equations of the same order. As a matter of fact, given a linear problem, MIHEM always expresses it in terms of the integral (6) and then in terms of the iterative process (10). Our examples show that this procedure is straightforward and simple.
The case of PSM is usually cumbersome and in general requires obtaining the so-called recurrence relation, from which the coefficients of the series solution are calculated. From (22) and (39) and Example 4, we note that the form of this relation often cannot be anticipated before it is actually derived: recurrence (22) directly relates two c coefficients, (39) relates three of them, and Example 4 did not even admit a recurrence relation. From the above, it is clear that MIHEM is a powerful method that indeed eases the obtaining of series solutions about ordinary points for ordinary linear differential equations; moreover, [18] essentially considered the application of the proposed method, slightly modified, with the purpose of obtaining exact and approximate solutions for nonlinear differential equations. In what follows, the next section shows, as future work, the possibility of employing MIHEM to solve linear problems about singular points. Thus, the proposed method is not only a powerful research tool but also a method widely recommended for implementation in university courses from the undergraduate level onwards.

## 6. Concluding Remarks

Although MIHEM was introduced for obtaining power series solutions of linear ordinary differential equations about ordinary points, a natural continuation, as future work, would be to adapt this method to solve linear ordinary differential equations about singular points, at least for the case of regular singular points (see Section 3 and references [19, 20]). Although we are not yet in a position to justify the use of MIHEM for problems about singular points, we present two case studies in which the proposed method works satisfactorily even though the conditions ensuring the validity of MIHEM are not satisfied.

Example 6.
Obtain the general solution of the linear first-order differential equation

(102) x y′ − y = 0

about the singular point x = 0. Given that (102) can be rewritten as

(103) y′ = y/x,

in accordance with the MIHEM algorithm we convert (103) into the integral equation

(104) y = ∫₀ˣ (y(s)/s) ds.

Introducing the homotopy process described in Section 4, we get

(105) Σ_{n=0}^∞ pⁿ ν_n(x) = p ∫₀ˣ (1/s) Σ_{n=0}^∞ pⁿ ν_n(s) ds.

In accordance with the proposed method, we would propose the initial function

(106) y_0(x) = A,

but this choice would yield a solution with a singularity at x = 0, due to the presence of ln x in the result. To avoid this, we propose instead the initial function

(107) ν_0(x) = Ax,

in such a way that

(108) ν_1(x) = ∫₀ˣ (ν_0(s)/s) ds.

After substituting (107) into (108) we obtain

(109) ν_1(x) = Ax;

in the same way,

(110) ν_2(x) = Ax,

(111) ν_3(x) = Ax;

thus, after n iterations we get

(112) ν_n(x) = Ax.

By substituting (107) and (109)–(112) into (14) we obtain

(113) y(x) = Bx,

where we defined the arbitrary constant B = nA. By direct substitution, we note that (113) is the general solution of the linear equation (102).

Example 7. Obtain the general solution of the following linear second-order inhomogeneous differential equation

(114) y″ − (2/x²) y − 8x = 0

about the regular singular point x = 0. As usual, we express (114) in terms of the integral equation

(115) y(x) = ∫₀ˣ ∫₀ᵗ ((2/s²) y(s) + 8s) ds dt.

In agreement with the proposed method we express (115) as

(116) Σ_{n=0}^∞ pⁿ ν_n(x) = p ∫₀ˣ ∫₀ᵗ ((2/s²) Σ_{n=0}^∞ pⁿ ν_n(s) + 8s) ds dt.

In the same way, to avoid a solution singular at x = 0, we consider the initial function (instead of (12))

(117) ν_0(x) = A x² + B x³,

so that from (116) we obtain

(118) ν_1(x) = ∫₀ˣ ∫₀ᵗ ((2/s²) ν_0(s) + 8s) ds dt.

Thus, from (117) and (118) we get, after performing elementary operations,

(119) ν_1(x) = A x² + ((2B + 8)/6) x³.

In the same way it is straightforward to show that

(120) ν_2(x) = A x² + ((2B + 8)/36) x³;

as a matter of fact, the only difference between the successive iterations and (119) is the factor acting on the cubic term, so the sum of the first n iterations can be expressed as

(121) y(x) = nA x² + (B (1 + 2/6 + 2/36 + ⋯) + 8/6 + 8/36 + ⋯) x³.
Expressing the above result in terms of just two constants,

(122) y(x) = A′ x² + C x³,

where A′ = nA and C = B + 2B/6 + 2B/36 + … + 8/6 + 8/36 + …. Since this is a linear equation, the general solution has to contain only one arbitrary constant of this kind [19, 20]; to understand how this occurs, it is enough to substitute (122) into the differential equation to obtain C = 2. After substituting this value into (122) we obtain

(123) y(x) = A′ x² + 2x³.

It is easy to verify that y_h(x) = A′x² is a solution of the homogeneous part of (114) and that (123) is a solution of (114). We note that the general solution of a homogeneous linear second-order differential equation has to contain two arbitrary constants, while (123) only shows y_h(x) = A′x²; it turns out that the manner of choosing the initial function (117) eliminated a singular solution. To see this, recall from the theory of linear differential equations that, given one known solution of the linear equation (1), it is possible to obtain a second solution [19, 20]. Therefore, if y_1 = x², then a second solution is given by y_2 = x² ∫ dx/x⁴, or y_2 = −1/(3x), and the general solution of the homogeneous part is y_h(x) = A′x² − D/(3x) for another arbitrary constant D. Thus the obtained solution omitted the singular part (D = 0) because we proposed an initial function designed to avoid singular solutions, but it is clear that the total solution can always be recovered. From these examples, it is conjectured that the MIHEM method has the potential to handle singular points in general. Although MIHEM has not been sufficiently tested as a general tool for solving linear equations about singular points, we note that, conversely, PSM is totally inadequate for obtaining solutions of (102) and (114) about x = 0, because for these equations it is a singular point. In fact, instead of PSM, the already explained Frobenius method [19, 20] should be employed, but this is even more cumbersome and lengthy than PSM.
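As the text suggests, both singular-point solutions can be verified by direct substitution; a minimal sketch (the function names are ours) checks y = Bx against (102) and y = A′x² + 2x³ against (114):

```python
from fractions import Fraction as F

def residual_ex6(B, x):
    # residual of x y' - y = 0 for y = B x
    y = B * x
    y_prime = B
    return x * y_prime - y

def residual_ex7(A_prime, x):
    # residual of y'' - (2/x^2) y - 8x = 0 for y = A' x^2 + 2 x^3 (x != 0)
    y = A_prime * x ** 2 + 2 * x ** 3
    y_pp = 2 * A_prime + 12 * x
    return y_pp - 2 * y / x ** 2 - 8 * x
```

Both residuals vanish identically for any choice of the constant and any nonzero evaluation point, confirming (113) and (123).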
In conclusion, this work introduced the Modified Integral Homotopy Expansive Method (MIHEM), which showed potential for finding power series solutions of linear ordinary differential equations about ordinary points. A relevant difference from PSM is the versatility of the proposed method, which is shown in five case studies. The first advantage of the method is that MIHEM requires only elementary integrals. The second is that the initial function is always A + Bx for linear second-order differential equations and A for linear first-order ones, which helps systematize the procedure. Finally, once the series (14) is obtained, we factor out A and B to get two linearly independent solutions and thereby a general solution. Therefore, the simplicity of the proposed method does not make it less effective; on the contrary, we emphasize its convenience in practical applications. MIHEM is proposed as an effective and easy-to-use method for linear ordinary differential equations about ordinary points. As a matter of fact, we noted that the proposed MIHEM is also potentially useful for solving ordinary differential equations about singular points, although in general this subject should be part of future work.

---

*Source: 1016251-2022-04-28.xml*
2022
# Design and Characteristic Analysis of a Novel Bearingless SRM considering Decoupling between Torque and Suspension Force

**Authors:** Yan Yang; Zeyuan Liu; Zhiquan Deng; Xin Cao

**Journal:** Mathematical Problems in Engineering (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101626

---

## Abstract

A Bearingless Switched Reluctance Motor (BSRM) exhibits complicated nonlinear coupling, which makes stable operation difficult. In this paper, a new type of BSRM with a novel rotor structure is proposed by analyzing the relationships between the motor structure and the theoretical formulae for levitation force and torque. The stator structure of the new motor is the same as that of a traditional BSRM, and each stator pole can carry either one or two windings, while the rotor pole arc is wider. To analyze the characteristics of the proposed BSRM, finite-element (FE) models are used, taking a 12/4 one-set-winding BSRM and a 12/8 two-sets-windings BSRM as examples. The analysis results indicate that the new scheme is effective for stable levitation. It can realize decoupling control of torque and radial force, thus simplifying the control strategy and improving the use ratio of the winding currents. A control system is designed for the 12/8 BSRM based on its deduced mathematical model. Compared with the traditional BSRM, the proposed scheme is easier to implement.

---

## Body

## 1. Introduction

Because of the similarity between the structure of a magnetic bearing and that of a conventional switched reluctance motor (SRM), a Bearingless Switched Reluctance Motor (BSRM) integrates the magnetic suspension winding into the motor.
It combines the merits of a magnetic bearing and a conventional SRM, such as ruggedness, low cost, fail-safe operation, no friction, no contact, high efficiency, fault tolerance, and possible operation at high temperature; therefore, the BSRM can operate at high speed [1–3]. It is advantageous for high-speed and super-high-speed starters/generators for advanced aircraft engines [4, 5]. Hence, it is expected to be suitable for commercial, industrial, and military applications.

According to the number of coil winding sets embedded in the stator, BSRMs are divided into one-set-winding and two-sets-windings types [6]. Both types produce an unbalanced radial force by changing the air gap flux density to realize rotor suspension. However, strong coupling exists between torque and levitation force in both the traditional two-sets-windings BSRM and the traditional one-set-winding BSRM. It is difficult to realize complete decoupling of the two in the mathematical model and control strategy; thus, the improvement of the BSRM's levitating and rotating performance is limited. Scholars have carried out a great deal of research on motor topology to solve the coupling problem, and some results have been obtained.

Scholars at Kyungsung University have proposed a method for two-phase BSRMs with 8/10 or 12/14 hybrid pole types, in which each stator has one set of windings. This proposal takes advantage of a hybrid structure of narrow and wide teeth to separate the radial force stator poles from the torque stator poles [7–10]. Levitation force and torque are produced separately by the suspension winding and the torque winding, thus realizing natural decoupling of torque and levitation. However, the wider stator teeth occupy a large space, and the motor operates with two-phase excitation, which limits the output power density.
Moreover, the number of rotor poles is larger than the number of stator poles, so it is difficult to improve the high-speed performance.

The NASA Glenn Research Center proposed a one-set-winding BSRM with a hybrid rotor, which consists of two parts: circular and scalloped lamination segments [11, 12]. In practical control, the coils of one set of four stator poles form a set of windings, providing levitation force to the circular laminated section of the rotor and playing the role of a magnetic bearing to levitate the rotor. The other set of four stator poles imparts torque to the scalloped portion of the rotor. A motor of this structure is easy to control, but its biggest shortcoming is that, as in a magnetic-bearing motor system, the axial length is longer, which limits the critical speed of the rotor.

The high-speed motor research center of Nanjing University of Aeronautics and Astronautics improved the suspension winding arrangement of the traditional 12/8 two-sets-windings BSRM, proposing a three-phase, two-sets-windings, 12/8 series-excited BSRM [13], which connects the traditional three-phase suspension windings in the same direction in series into a single set of windings. This makes the suspension winding inductance constant over one rotor cycle. The suspension current does not produce torque, which realizes decoupling control of torque and levitation force. Nevertheless, since the demand for copper wire in the suspension winding is high, it is wasteful and lowers the utilization rate of the windings.

In this paper, a new type of BSRM with a novel rotor structure is proposed based on the 12/8 two-sets-windings BSRM and the 12/4 one-set-winding BSRM. The scheme uses wider rotor pole arcs so that the winding inductance curve has a flat area at the inductance maximum position, thus realizing decoupling control of the levitation force and torque.
First, the structure characteristics of the proposed BSRM are summarized by analyzing the relationships between torque, suspension force, and inductance. Then, the operating principle of the new-structure BSRM is illustrated. Accordingly, the torque and suspension force performances of the proposed BSRM are analyzed in detail with finite-element (FE) calculation. A mathematical model for the suspension force is deduced and a control system is designed for the 12/8 BSRM with two sets of windings. Compared with the traditional BSRM, the proposed scheme has a simpler suspension force model and is easier to control; besides, it can realize decoupling between torque and radial force.

## 2. Characteristics and Analysis of Traditional BSRM

Figure 1(a) shows the configuration of the traditional 12/8 BSRM with only the phase-A winding [14]. The pole arcs of the stator and rotor are both equal to 15 mechanical degrees (°M). The aligned position is defined as θ = 0°M. There are two kinds of stator windings: the motor main winding and the radial force windings. The main winding N_m consists of four coils connected in series. Each radial force winding N_s consists of two coils. When the two differential windings conduct the currents as shown in Figure 1(a), the flux density in air gap 1 increases, whereas it decreases in air gap 3. Thus, an unbalanced magnetic force is produced toward the positive direction of the α-axis. Radial force toward the β-axis can also be produced in the same way. Therefore, radial force in any desired direction can be produced by composing the two radial forces in perpendicular directions. Because the number of rotor poles is 8, the inductance cycle of a winding is 45°M. Figure 1(b) shows the form of the main winding self-inductance curve.

Configuration and inductance curve of traditional 12/8 BSRM. (a) Configuration (b) Inductance curve

The stored magnetic energy W_a in phase-A can be expressed as [14, 15]

(1) W_a = (1/2) [i_ma  i_sa1  i_sa2] L_a [i_ma  i_sa1  i_sa2]ᵀ.
The inductance matrix L_a of phase-A is a 3 × 3 matrix constructed from self- and mutual inductances and can be written as

(2) L_a = [[L_ma, M(ma,sa1), M(ma,sa2)], [M(ma,sa1), L_sa1, M(sa1,sa2)], [M(ma,sa2), M(sa1,sa2), L_sa2]],

where L_ma, L_sa1, and L_sa2 are the self-inductances of N_ma, N_sa1, and N_sa2, respectively; M(ma,sa1) is the mutual inductance between N_ma and N_sa1; M(ma,sa2) is the mutual inductance between N_ma and N_sa2; and M(sa1,sa2) is the mutual inductance between N_sa1 and N_sa2.

According to the principle of electromechanical energy conversion, the torque T_a due to phase-A can be written as

(3) T_a = ∂W_a/∂θ = ∂[(1/2)(L_ma i_ma² + L_sa1 i_sa1² + L_sa2 i_sa2²)]/∂θ.

When the shaft carries no load or a light load in the radial direction, the suspension current will be very small, because only a small levitation force is needed to keep the rotor stably suspended. Therefore, the contribution of the suspension winding currents to the torque can be ignored, and the torque T_a can be expressed by the linearized model

(4) T_a = (1/2)(∂L/∂θ) i_ma².

In the same way, the suspension forces in the two directions produced by the current in phase-A can be obtained by the virtual displacement method and can be written as

(5) F_α = ∂W_a/∂α = ∂[M(ma,sa1) i_ma i_sa1 + M(ma,sa2) i_ma i_sa2]/∂α = K_f(θ) i_ma i_sa1,
F_β = ∂W_a/∂β = ∂[M(ma,sa1) i_ma i_sa1 + M(ma,sa2) i_ma i_sa2]/∂β = K_f(θ) i_ma i_sa2,

where K_f(θ) is a proportional coefficient of radial force; it is a function of the rotor position angle θ and the dimensions of the BSRM [14–17].

It is known from (4) that the torque is proportional to the partial derivative of the main winding inductance with respect to the rotor position angle. As shown in Figure 1(b), the inductance in Sections 1 and 4 is constant; therefore, the winding current does not generate torque in these two regions.
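The linearized models (4) and (5) can be illustrated with a toy inductance profile; the numbers below are illustrative assumptions, not parameters from the paper. On the flat (aligned) region ∂L/∂θ = 0, so the torque vanishes while the radial force K_f(θ) i_ma i_s remains available:

```python
import math

# Illustrative numbers only (assumed, not from the paper): a trapezoidal
# inductance profile, flat near the aligned position.
L_MIN, L_MAX = 1e-3, 5e-3   # winding inductance bounds [H], assumed
RISE = math.radians(15)     # width of the inductance rising region, assumed

def dL_dtheta(theta):
    # piecewise slope: rising on (0, RISE), flat (aligned) elsewhere
    return (L_MAX - L_MIN) / RISE if 0 < theta < RISE else 0.0

def torque(theta, i_m):
    # linearized torque model (4): T = (1/2) dL/dtheta * i_m^2
    return 0.5 * dL_dtheta(theta) * i_m ** 2

def radial_force(k_f, i_m, i_s):
    # linearized suspension-force model (5): F = K_f(theta) * i_m * i_s
    return k_f * i_m * i_s
```

With these assumed values, any rotor angle inside the flat region gives zero torque for arbitrary main-winding current, which is the decoupling property the proposed wider-rotor-teeth structure exploits.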
The current conducted in Section 2 produces positive torque, while the current conducted in Section 3 produces negative torque. In practice, torque control can be realized by flexibly adjusting the conduction width of Sections 2 and 3 according to the load torque.

We can also see from (5) that the magnitude of the suspension force is proportional to the coefficient K_f(θ) and the winding currents. To improve the use ratio of the currents, a large K_f(θ) is needed to generate a given suspension force. If the rotor and stator poles are aligned, K_f(θ) takes a large value, because the magnetic reluctance of the air gap is minimum at this angular position. As the rotor rotates away from the aligned position, K_f(θ) decreases, because the increase in magnetic reluctance causes a decrease in air gap flux. Therefore, the turn-ON angle should be shifted toward the aligned position to improve the suspension performance. That is to say, in Figure 1(b), suspension force should also be generated in intervals II and III around the aligned position. Consequently, the suspension force has a nonlinear character and is coupled with the torque. It is complicated to calculate the suspension force value because K_f(θ) and the currents are position-dependent; this makes the algorithm more complex and places higher demands on the digital controller. Meanwhile, a large negative torque will be produced when the currents conduct in region III, which leads to low usage of the winding current and limits the speed performance.

## 3. Structure Characteristics and Operating Principles of the Novel BSRM

According to the above analysis of the traditional BSRM, the proportional coefficient K_f(θ) and the inductance L_ma reach their maximum values at the aligned position simultaneously. Thus, if a change of the motor structure can generate a flat area of the winding inductance curve located at the inductance maximum position, then exciting the motor in this flat area will generate a large and linear suspension force while producing no torque.
Then the winding inductance curve has the form shown in Figure 2. The suspension force of the motor can be controlled in region V and the torque can be controlled in the inductance rising region II. For BSRMs with this new inductance feature, the suspension force will be linear and no compromise between torque and radial force is required; furthermore, no negative torque is generated. Hence this scheme may improve the efficiency of the winding currents and will be easier to control.

Figure 2. The target form of the inductance curve.

### 3.1. Structure Characteristics of Novel BSRMs

On the premise that the stator structure of the new BSRM is the same as that of the traditional BSRM, we increase the width of the rotor pole so that the rotor pole arc is greater than the stator pole arc. We name it the wider-rotor-teeth BSRM. The reason for doing so is that when the stator and rotor poles of the motor are in the aligned position, the proportional coefficient K_f(θ) and the winding inductance reach their maximums; hence, increasing the overlap area of the rotor and stator poles during rotor rotation, by widening the rotor pole, can produce the flat area of the inductance. Based on this principle, for an m-phase motor whose rotor teeth number is N_r, each phase must provide suspension force over an interval of (360/(N_r·m))°M of one rotor cycle; the m phases conduct in turn to provide steady suspension forces. Therefore, the inductance curve must have a flat area of at least (360/(N_r·m))°M around the inductance maximum to provide a stable suspension force for the rotor and to realize decoupling control of the torque and the suspension force. Thus, the rotor pole arc angle and the stator pole arc angle must satisfy

(6) β_r ≥ β_s + 360°/(m·N_r),

where β_r and β_s are the rotor pole arc angle and the stator pole arc angle, respectively.
N_r is the number of rotor poles. The structure of the wider-rotor-teeth BSRM can have either two sets of windings or one set of windings.

### 3.2. Operating Principles of Novel BSRM

Taking a 12/4 wider-rotor-teeth BSRM with one set of windings and a 12/8 wider-rotor-teeth BSRM with two sets of windings as examples, this section introduces the rotation operating principle of the novel BSRM.

#### 3.2.1. 12/4 BSRM with One Set of Windings and Wider-Rotor-Teeth

Figure 3 shows the configuration of the 12/4 wider-rotor-teeth BSRM with only the A-phase and B-phase windings. The arc angle of the stator pole is still 15°M, as in traditional BSRMs, and each stator tooth carries one concentrated winding. The four windings of each phase are controlled independently. According to (6), the arc angle of the rotor pole should be no less than 45°M, and here the minimum value of 45°M is taken. In Figure 3, phase-A is at the maximum inductance position. When the two windings of phase-A in the α-direction adopt asymmetric excitation, as shown in Figure 3, the flux density in air gap 1 increases, whereas it decreases in air gap 3. Therefore, an unbalanced magnetic force F_α acting on the rotor is produced toward the positive direction of the α-axis. The radial force F_β in the β-axis can be produced in the same way. Thus, radial force in any desired direction can be produced by composing the two radial forces in perpendicular directions. This principle applies similarly to the phase-B and phase-C windings.

Configuration and operating principle of 12/4 BSRM with one set of windings. (a) Flux lines (b) Magnetic flux density vector

Obviously, because phase-A is in the flat area of maximum inductance at the rotor position shown in Figure 3, it generates only suspension force and no torque.
Therefore, it is necessary to turn on the windings of two phases at the same time to realize stable operation: one in the flat area of the inductance to generate suspension force, the other in the inductance-rising region to generate torque. Taking counterclockwise as the positive orientation, phase-B in Figure 3 is just in the inductance-rising region, so torque for the motor's rotation can be generated by exciting the four windings of phase-B symmetrically.

#### 3.2.2. 12/8 BSRM with Two Sets of Windings and Wider-Rotor-Teeth

The number of coil windings embedded in the stator can also be two sets. A three-phase 12/8 BSRM with the proposed novel rotor is taken as an example to illustrate the suspension and operation principle of the BSRM with two sets of windings. Figure 4 shows the configuration of the novel BSRM, which has two sets of windings and wider rotor teeth. The arc angle of the stator pole is still 15°M, and each stator pole is embedded with two sets of concentrated windings. The winding arrangement is the same as that of the traditional 12/8 two-sets-windings BSRM. Figure 4 shows only the two sets of windings of phase-A and the main winding of phase-C. According to (6), the pole arc of the rotor is designed as 30°M.

Configuration and operating principle of 12/8 novel BSRM with two sets of windings. (a) Flux lines (b) Magnetic flux density vector

Similar to the 12/4 BSRM above, it is necessary to turn on two phase windings at the same time: one phase winding is used to generate radial force, while the other is used to generate torque. In Figure 4, the main winding of phase-A adopts symmetric excitation and conducts current i_ma. The current i_sa1 of the radial force winding enhances the flux density in air gap 1 and reduces it in air gap 3. Therefore, a controllable radial force to levitate the rotor is generated in the α-direction. The radial force along the β-axis can be produced in the same way.
Radial force in any desired direction can be produced by composing the two radial forces in the α and β directions. The torque for the motor's rotation is generated by symmetrically exciting the main winding of phase-C, because phase-C is just in the inductance-rising region at this moment.

## 4. Electromagnetic Analysis of Two Wider-Rotor-Teeth BSRMs

In order to verify the validity of the suspension operation principles and to provide the theoretical basis for the motor control strategy, the above two kinds of wider-rotor-teeth BSRM are analyzed by the FE method. To facilitate comparison between the two prototypes, the same rated condition is adopted here.
The rated power is 2 kW, the rated speed is 20000 r/min, and the maximum radial force is 100 N. The dimensions of the simulation motors are shown in Table 1.

Table 1 Parameters of 12/4 and 12/8 wider-rotor-teeth BSRMs.

| Parameter | Value |
| --- | --- |
| Stator diameter/mm | 95 |
| Rotor diameter/mm | 49.8 |
| Stator yoke/mm | 6.1 |
| Rotor yoke/mm | 7.65 |
| Stator pole height/mm | 16.5 |
| Rotor pole height/mm | 7 |
| Stator pole arc/°M | 15 |
| Diameter of axle/mm | 20 |
| Gap length/mm | 0.25 |
| Length of stator stack/mm | 55 |
| Rotor pole arc of 12/4 BSRM/°M | 45 |
| Number of windings of 12/4 BSRM | 13 |
| Rotor pole arc of 12/8 BSRM/°M | 30 |
| Number of main windings of 12/8 BSRM | 9 |
| Number of suspension windings of 12/8 BSRM | 13 |

The Ansys software is used to calculate the electromagnetic field. Two-dimensional (2D) FE models of the 12/4 BSRM and 12/8 BSRM are established, and their 2D FE mesh models are shown in Figure 5. We use the enhanced incremental energy method to calculate the winding inductances, since it only needs to calculate the incremental energy rather than the total system energy [18].

2D FE mesh models of 12/4 BSRM and 12/8 BSRM. (a) 12/4 BSRM (b) 12/8 BSRM

For a single-loop magnetic system, if the operating current of the loop is i_0, the static inductance L can be calculated by

(7) L = Ψ/i_0 = ΔW_C/(i_0·Δi),

where ΔW_C is the magnetic field coenergy increment, i_0 is the current passing through the magnetic loop, Ψ is the flux linkage, and Δi is the increment of i_0. The increment of magnetic field coenergy ΔW_C can be calculated as

(8) ΔW_C = ∫ B·ΔH dV,

where B is the flux density and ΔH is the magnetic field increment. Then, combining (7) and (8), the inductance can be written as

(9) L = ∫ B·ΔH dV / (i_0·Δi).

### 4.1. FE Analysis Results with Only Phase-A Excited

For the 12/8 BSRM described in Table 1, the given currents conducted in the main winding and suspension windings are i_ma = 5 A, i_sa1 = 3.6 A, and i_sa2 = 0, respectively. For the 12/4 BSRM, the given currents are i_a1 = 8.6 A and i_a2 = 1.4 A. The main winding inductances of phase-A are calculated.
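The incremental-energy inductance estimate of (7)–(9) reduces, after FE discretization, to a sum over mesh elements. This is a minimal sketch of that post-processing step; the element triples are placeholders for quantities extracted from an FE field solution, not real simulation data:

```python
# Inductance from the incremental coenergy, L = dW_C / (i0 * di), with the
# coenergy integral dW_C = integral(B * dH dV) approximated by a sum over
# mesh elements (B, dH, element volume) - illustrative values only.

def inductance_from_coenergy(elements, i0: float, di: float) -> float:
    """elements: iterable of (B, dH, dV) triples taken from an FE solution."""
    dWc = sum(B * dH * dV for B, dH, dV in elements)  # eq. (8)
    return dWc / (i0 * di)                            # eq. (9)

# Three identical placeholder elements, operating current 5 A, increment 0.1 A
L = inductance_from_coenergy([(1.0, 0.2, 0.001)] * 3, i0=5.0, di=0.1)
print(L)  # 0.0012 (H, for these placeholder values)
```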
Figure 6 shows the relationship between inductance and rotor position angle for the two kinds of BSRM. For the purposes of comparison, all actual mechanical rotor position angles were multiplied by the respective number of rotor teeth, so that each rotor cycle of the motor corresponds to 360 electrical degrees (°E). It can be seen from Figure 6 that the winding inductances of the two kinds of BSRM are both approximately constant at their maximum values in the interval [−60°E, 60°E]; this is consistent with the theoretical analysis, and the inductance curve has the same shape as in Figure 2.

Figure 6 Inductances of 12/4 and 12/8 wider-rotor-teeth BSRMs.

Figure 7 shows the relationships between suspension force or instantaneous torque and the rotor position angle for the two kinds of BSRM. Figure 7(a) shows that the maximum levitation force is also obtained in the [−60°E, 60°E] interval, while the torque is basically 0 in this interval, as shown in Figure 7(b). That is to say, a large suspension force is produced while no torque is produced in this region. Figure 7(b) also shows that torques are generated in the inductance-rising and inductance-dropping regions. The levitation force and torque are thus produced in different regions, so decoupling control of torque and levitation force can be realized. This result is in agreement with the theoretical analysis.

Suspension forces and torques of 12/4 and 12/8 wider-rotor-teeth BSRMs. (a) Suspending forces at α-direction (b) Torques

The results also show that the output torque width of the 12/4 BSRM is only 1/2 of that of the 12/8 BSRM, so the torque-angle characteristic of the 12/8 BSRM is better; the 12/4 BSRM is more suitable for light-load applications.

### 4.2. FE Analysis Results with Two Phases Excited Simultaneously

Keep the phase-A currents in the [−60°E, 60°E] region the same as in the former simulation.
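The mechanical-to-electrical angle conversion used for Figure 6 simply scales the mechanical angle by the number of rotor teeth, so that one rotor cycle always spans 360°E:

```python
# Mechanical angle (deg M) -> electrical angle (deg E): multiply by the number
# of rotor teeth N_r, as done for the comparison plots in Figure 6.

def mech_to_elec(theta_m: float, n_r: int) -> float:
    return theta_m * n_r

print(mech_to_elec(45.0, 8))   # 12/8 BSRM: one 45 deg M rotor cycle -> 360.0 deg E
print(mech_to_elec(7.5, 8))    # flat-area edge 7.5 deg M -> 60.0 deg E
```

This is why the 15°M flat area of the 12/8 machine appears as the [−60°E, 60°E] interval in Figures 6 and 7.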
Phase-A produces a large suspension force while it fails to produce torque, according to the analysis of the previous section. For the 12/4 BSRM, the currents of phase-B were conducted to produce torque, since the inductance of phase-B is just rising in this region. The given currents conducted in phase-B are i_b1 = i_b2 = 5 A. In the same way, for the 12/8 BSRM, phase-C is conducted to produce torque, and the given current is i_mc = 5 A. Figure 8 shows the FE analysis results of suspension force and torque. It can be seen that, no matter whether two phases are excited simultaneously or only phase-A is excited, the suspension force is approximately the same. The values with two phases excited are slightly larger than those with only one phase excited, but the difference is less than 4%. Thus the effects of phase-B or phase-C conduction on the levitation forces can be ignored in the two BSRMs. It can also be seen from Figure 8 that the torque values are approximately zero with only phase-A conducted, while they greatly increase when two phases are excited simultaneously. Compared with Figure 7, the generated torque values of the two motors are very close to those with only one phase conducted in its inductance-rising region; the difference is also less than 4%. Thus the torque generated by the phase that mainly produces the suspension force can be ignored. That is to say, the coupling effect between phases can be ignored.

Suspension forces and torques with two phases excited simultaneously. (a) 12/4 BSRM (b) 12/8 BSRM

## 5. System Analysis of Two Wider-Rotor-Teeth BSRMs

### 5.1. Mathematical Model of 12/8 BSRM

The radial suspension force and torque acting on the rotor must be known in order to control the BSRM. This paper derives the mathematical model through the virtual displacement method, as for the traditional BSRM [14, 19]. The current of the suspension winding conducts in the interval [−7.5°M, 7.5°M], where the stator teeth and the rotor teeth are always overlapped.
It can be seen from Figure 4 that the magnetic field lines pass almost perpendicularly from the rotor teeth to the stator teeth in this area, while the fringing flux is small and can be ignored. Therefore, the magnetic circuit can be represented by the straight lines shown in Figure 9.

Figure 9 Magnetic circuit diagram.

Accordingly, the magnetic permeance can be expressed as

(10) P = μ_0·h·r·π/(12·l_0).

Here, μ_0 is the permeability of air, h is the axial length of the stator, r is the radius of the rotor, and l_0 is the average air gap length. The levitation forces in the two directions and the torque can also be obtained by the virtual displacement method. Equation (10) shows that the permeance is decided only by the dimensions of the BSRM; therefore, the proportional coefficient K_f(θ) of the radial force in (5) becomes a constant, and the suspension force mathematical model of the novel motor becomes

(11) F_α = K_f·i_ma·i_sa1, F_β = K_f·i_ma·i_sa2,

where the proportional coefficient K_f is a constant that can be expressed as

(12) K_f = (μ_0·h·r·π/(6·l_0²))·N_m·N_s.

Also according to the principle of virtual displacement, the instantaneous torque of phase-A can be expressed as

(13) T_a = (J_t(θ)/2)·(N_m²·i_ma² + N_s²·i_sa1² + N_s²·i_sa2²).

In the inductance-rising region, the positive torque coefficient can be expressed as

(14) J_tp(θ) = μ_0·h·r·[1/l_0 − (4·l_0 − 1.28·π·r·θ)/(2·l_0 − π·r·θ)²].

It can be seen from (11) and (12) that the suspension forces in the suspension region are determined simply by the winding currents once the motor structure parameters have been fixed, independently of the rotor position angle. Thus, they can be controlled more easily than in the traditional case. In practice, if there is a positive or negative eccentric displacement x in the α-direction or β-direction, the average air gap l_0 is revised to l_0 ∓ x in (11) and (13).

### 5.2. Control Scheme

Figure 10 illustrates the principle of the proposed control scheme.
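As a numerical sketch, the constant K_f of (12) can be evaluated from the Table 1 dimensions of the 12/8 machine, and (11) can then be inverted to obtain the suspension-winding current references from desired radial forces. The numbers follow only from the formulas as reconstructed here and are illustrative, not reported results:

```python
import math

# Evaluate K_f from (12) with the 12/8 BSRM dimensions of Table 1, then invert
# the suspension force model (11) to get current references from desired forces.
MU0 = 4 * math.pi * 1e-7   # permeability of air, H/m
h = 0.055                  # stator stack length, m
r = 0.0249                 # rotor radius, m (rotor diameter 49.8 mm)
l0 = 0.25e-3               # average air gap length, m
N_m, N_s = 9, 13           # main / suspension winding turns

K_f = MU0 * h * r * math.pi / (6 * l0**2) * N_m * N_s  # eq. (12), N/A^2

def suspension_currents(F_alpha: float, F_beta: float, i_ma: float):
    """Invert (11): i_sa1 = F_alpha/(K_f*i_ma), i_sa2 = F_beta/(K_f*i_ma)."""
    return F_alpha / (K_f * i_ma), F_beta / (K_f * i_ma)

# Desired forces from Section 5.4 (50 N, 30 N) with a 5 A main winding current
i_sa1, i_sa2 = suspension_currents(50.0, 30.0, i_ma=5.0)
```

This inversion is the current computation needed in the suspension interval by the control system described below.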
The rotor pole arc angle is 15°M larger than the stator pole arc according to the above analysis of the 12/8 BSRM, which forms a 15°M flat area of maximum winding inductance within [−7.5°M, 7.5°M]. The proposed scheme fixes the width of the radial force winding currents exactly at the [−7.5°M, 7.5°M] area, shown as region I in Figure 10. The combination of the main winding and suspension winding currents in region I produces a large and continuous radial force to levitate the rotor, while the instantaneous torque within region I is approximately 0. The suspension winding does not conduct in any nonoverlapping area, whereas the main windings are also excited in the interval [−22.5°M, −7.5°M] to generate torque. Thus, region I and region II are used to control the suspension force and torque, respectively, which realizes the decoupling control of torque and suspension force. In practice, the turn-ON angle of the main windings can be adjusted according to the requirements on the actual average torque: to obtain a larger average torque, the turn-ON angle of the main winding currents can be advanced as much as possible. The waveforms of the currents can also be changed according to the corresponding control objectives, such as higher efficiency or lower vibration. In Figure 10, the turn-ON angle and the waveform of the main windings are selected as −15°M and square-wave, respectively.

Figure 10 Control scheme of new 12/8 BSRM.

### 5.3. Block Diagram of Control System

According to the above analysis, the control system can be built as in Figure 11. After the rotor's position is detected through the encoder, the real-time rotation speed of the motor is obtained through position computation. The difference between it and the given speed is converted into the active-phase current reference i_m1* through the PI controller.
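The speed loop just described can be sketched as a minimal discrete PI controller producing the main-winding current reference; the gains and sample time below are purely illustrative, not values from the paper:

```python
# Minimal discrete PI controller for the speed loop of Figure 11: the speed
# error is converted into the main-winding current reference i_m1*.
class PI:
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.acc = 0.0  # integral accumulator

    def step(self, ref: float, meas: float) -> float:
        err = ref - meas
        self.acc += err * self.dt
        return self.kp * err + self.ki * self.acc

pi = PI(kp=0.01, ki=0.1, dt=1e-4)             # illustrative gains and sample time
i_m1_ref = pi.step(ref=20000.0, meas=19950.0)  # speeds in r/min
```

The radial displacement loops follow the same pattern with PID controllers, as described next.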
The rotor radial displacements in the two perpendicular directions are measured and converted into electrical signals by the radial displacement sensors, and the PID controllers output the desired radial forces F_α* and F_β*. Based on these desired values, according to the corresponding control targets and (11), the currents i_m2*, i_s1*, and i_s2* in the suspension interval can be computed. Finally, stable suspension during the operation of the motor is achieved by tracking the current set values through the power controllers of the two windings.

Figure 11 Block diagram of control system.

### 5.4. Simulation and Analysis

The dimensions of the simulated 12/8 BSRM are the same as those in Table 1. The given radial force in the α-axis is 50 N, whereas it is 30 N in the β-axis. Figures 12(a) and 12(b) show simulation waveforms of the currents, torque, and levitation forces with the main winding turn-ON angle θ_on equal to −15°M and −22.5°M, respectively. Here, the currents are controlled as square-waves. It can be seen that changing the turn-ON angle changes the output average torque: when the angle θ_on is advanced, the output average torque increases. So the output average torque can be controlled by adjusting the turn-ON angle of the main winding. However, due to the winding inductance, the main winding current in the torque production section cannot be rapidly converted into the current required by the suspension section, and a delay exists, as shown in the area circled by broken lines in Figure 12. As a result, the suspension forces cannot track their given values well in the current conversion area. To solve this problem, it is necessary to add a compulsive excitation unit to the main winding converter; that is, by applying an inverse voltage when the current changes, the current of the main windings is rapidly converted to the value required by the suspension section.
Figure 13 shows the simulation waveforms of the currents, torque, and levitation forces. As can be seen from Figures 13(a) and 13(b), after the compulsive excitation unit is added, the suspension force was well tracked.Simulation waveforms of the currents, torque, and levitation forces without compulsive excitation unit. (a) θ on ⁡ = - 15 ° M (b) θ on ⁡ = - 22.5 ° MSimulation waveforms of the currents, torque, and levitation forces after compulsive excitation unit is added. (a) θ on ⁡ = - 15 ° M (b) θ on ⁡ = - 22.5 ° M ## 5.1. Mathematical Model of 12/8 BSRM The radial suspension force and torque acting on rotor should be acquired in order to control the BSRM. This paper derives the mathematical model through virtual displacement method as traditional BSRM [14, 19]. The current of suspension winding conducts in the interval [−7.5°M, 7.5°M] where the stator teeth and the rotor teeth are always overlapped. It can be seen from Figure 4 that the magnetic field lines are almost perpendicular from the rotor teeth to the stator teeth in this area, while the fringing flux is small and can be ignored. Therefore, the magnetic circuit diagram of the magnetic circuit can be represented as straight lines as shown in Figure 9.Figure 9 Magnetic circuit diagram.Accordingly, the magnetic permeance can be expressed as(10) P = μ 0 h r π 12 l 0 .Here,μ 0 is the permeability in the air, h is the axial length of stator, r is the radius of rotor, and l 0 is the average air gap length.The levitation force in two directions and the torque can also be obtained by the virtual displacement method. 
Equation (10) shows that the permeance is only decided by dimensions of BSRM; therefore, the proportional coefficient K f ( θ ) of radial force in (5) becomes a constant and the suspension force mathematical model for novel motor becomes (11) F α = K f i m a i s a 1 , F β = K f i m a i s a 2 , where the proportional coefficient K f is a constant and can be expressed as (12) K f = μ 0 h r π 6 l 0 2 N m N s .Also according to the principle of virtual displacement, the instantaneous torque of phase A can be expressed as(13) T a = J t θ 2 N m 2 i m a 2 + N s 2 i s a 1 2 + N s 2 i s a 2 2 .In inductance rising region, the positive torque coefficient can be expressed as(14) J t p θ = μ 0 h r 1 l 0 - 4 l 0 - 1.28 π r θ 2 l 0 - π r θ 2 .It can be seen from (11) and (12) that suspension forces in suspension region are simply determined by winding currents after motor structure parameters have been determined, which are independent of rotor position angle. Thus, they can be easier controlled than that in traditional case. In practice, if there is a positive or negative eccentric displacement x in α-direction or β-direction, the average air gapl 0 will be revised to l 0 ∓ x in (11) and (13). ## 5.2. Control Scheme Figure10 illustrates the principle of the proposed control scheme. The rotor pole arc angle is 15°M larger than the stator pole arc according to the above analysis of 12/8 BSRM, which forms a 15°M maximum flat area of windings inductance within [−7.5°M, 7.5°M]. The proposed scheme fixes the width of radial force winding currents just at [−7.5°M, 7.5°M] area, shown as region I in Figure 10. The combination of the main winding and suspension winding currents in this region I produces larger and consecutive radial force to levitate rotor. However, the instantaneous torque within region I is approximately equal to 0. 
The suspension winding does not conduct in any nonoverlapping area, whereas the main windings are also excited in the interval [−22.5°M, −7.5°M] to generate torque. Thus, region I and region II are used to control suspension force and torque, respectively, which can realize the decoupling control of torque and suspension force. In practice, the turn-ON angle of main windings can be adjusted according to the requirements of actual average torque. To obtain a larger average torque, we can advance the turn-ON angle of main winding currents as much as possible. The waveforms of currents can also be changed by the corresponding control objectives, such as higher efficiency and lower vibration. In Figure 10, the turn-ON angle and waveform of main windings are selected as −15°M and square-wave, respectively.Figure 10 Control scheme of new 12/8 BSRM. ## 5.3. Block Diagram of Control System According to the above analysis, the control system can be made according to Figure11 during practice. After the rotor’s location is detected through the encoder, the real-time rotation speed of the motor can be got through location computation. The difference between it and the given speed is formed as the active phase current i m 1 * through the PI controller. The rotor radial displacements at the two perpendicular directions are measured and converted into electrical signals through the radial displacement sensor and are output as the desired radial forces F α * and F β * after the PID controller. Based on the desired value, according to the corresponding control targets and (11), the currents i m 2 *, i s 1 *, and i s 2 * in the suspension interval can be computed. Finally, the stable suspension during the operation of the motor can be achieved by tracing the set value of currents through the power controllers on the two windings.Figure 11 Block diagram of control system. ## 5.4. Simulation and Analysis The 12/8 BSRM dimensions of the simulation motor are the same as shown in Table1. 
The given radial force in the α-axis is 50 N, and 30 N in the β-axis. Figures 12(a) and 12(b) show the simulated waveforms of the currents, torque, and levitation forces with the main-winding turn-ON angle θ_on set to −15°M and −22.5°M, respectively. Here, the currents are controlled as square waves. It can be seen that changing the turn-ON angle changes the output average torque: when θ_on is advanced, the average torque increases. Hence, the output average torque can be controlled by adjusting the turn-ON angle of the main winding. However, because of the winding inductance, the main-winding current in the torque-production section cannot be converted instantly into the current required by the suspension section, and a delay exists, as shown in the area circled by broken lines in Figure 12. As a result, the suspension forces cannot track their set values well during the current-conversion interval. To solve this problem, a compulsive excitation unit must be added to the main-winding converter; that is, by applying an inverse voltage during the current change, the main-winding current is rapidly converted to the value required by the suspension section. Figure 13 shows the resulting simulated waveforms of the currents, torque, and levitation forces. As can be seen from Figures 13(a) and 13(b), after the compulsive excitation unit is added, the suspension force is tracked well.Simulation waveforms of the currents, torque, and levitation forces without the compulsive excitation unit. (a) θ_on = −15°M (b) θ_on = −22.5°M Simulation waveforms of the currents, torque, and levitation forces after the compulsive excitation unit is added. (a) θ_on = −15°M (b) θ_on = −22.5°M ## 6. Conclusions This paper studied a new-structure BSRM that adopts a wide rotor pole. Nonlinear FEM and MATLAB system simulation were used to analyze the performance of the novel motor.
Analysis results of the two prototypes indicate that (1) the novel BSRM has a linear mathematical model of the suspension force, since it produces a flat area at the maximum-inductance position of the winding inductance curve; (2) the novel BSRM realizes decoupled control of torque and suspension force, so the algorithm is easier to implement and places lower demands on the digital controller; (3) since no negative torque is generated during control, the winding-current utilization is higher than in the traditional case; (4) to avoid the poor suspension-force performance caused by the current delay when the main-winding currents are converted from the torque-production region to the suspension-force-production region, a compulsive excitation unit must be added to the main-winding converter. --- *Source: 101626-2014-11-02.xml*
# Design and Characteristic Analysis of a Novel Bearingless SRM considering Decoupling between Torque and Suspension Force

**Authors:** Yan Yang; Zeyuan Liu; Zhiquan Deng; Xin Cao
**Journal:** Mathematical Problems in Engineering (2014)
**Category:** Engineering & Technology
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2014/101626
**Source:** 101626-2014-11-02.xml
--- ## Abstract A Bearingless Switched Reluctance Motor (BSRM) has a strongly nonlinear, coupled character; therefore, it is difficult to operate a BSRM stably. In this paper, a new type of BSRM with a novel rotor structure is proposed by analyzing the relationships between the motor structure and the theoretical formulae for levitation force and torque. The stator structure of this new motor is the same as that of the traditional BSRM, and each stator pole can carry one winding or two windings, while the rotor pole arc is wider. To analyze the characteristics of the proposed BSRM, finite-element (FE) models are used, and a 12/4 one-set-winding BSRM and a 12/8 two-sets-windings BSRM are taken as examples. The analysis results indicate that the new scheme is effective for stable levitation. It can realize decoupled control of torque and radial force, thus simplifying the control strategy and improving the use ratio of the winding currents. A control system is designed for the 12/8 BSRM based on its derived mathematical model. Compared with the traditional BSRM, the proposed scheme is easier to implement. --- ## Body ## 1. Introduction Because of the similarity between the structure of a magnetic bearing and that of a conventional switched reluctance motor (SRM), a Bearingless Switched Reluctance Motor (BSRM) integrates the magnetic suspension winding into the motor. It combines the merits of a magnetic bearing and a conventional SRM, such as ruggedness, low cost, fail-safe operation, no friction, no contact, high efficiency, fault tolerance, and possible operation at high temperature; therefore, a BSRM can operate at high speed [1–3]. It has advantages as a high-speed and super-high-speed starter/generator for an advanced aircraft engine [4, 5]. Hence, it is expected to be suitable for commercial, industrial, and military applications. According to the number of coil windings embedded in the stator, BSRMs are divided into one-set-winding and two-sets-windings types [6].
Both types produce an unbalanced radial force by changing the air-gap flux density to realize rotor suspension. However, strong coupling exists between torque and levitation force in both the traditional two-sets-windings BSRM and the traditional one-set-winding BSRM. It is difficult to realize complete decoupling of the two in the mathematical model and the control strategy; thus, improvement of the BSRM's levitating and rotating performance is limited. Much research on motor topology has been done to solve this coupling problem, and some results have been obtained. Scholars at Kyungsung University proposed a method for two-phase BSRMs with 8/10 or 12/14 hybrid pole types, in which each stator carries one set of windings. This proposal uses a hybrid structure of narrow and wide teeth to separate the radial-force stator poles from the torque stator poles [7–10]. Levitation force and torque are produced separately by the suspension winding and the torque winding, thus realizing natural decoupling of torque and levitation. However, the wider stator teeth occupy a large space, and the motor operates with two-phase excitation, which limits the power density. Moreover, the number of rotor poles is larger than the number of stator poles, so it is difficult to improve the high-speed performance. NASA Glenn Research Center proposed a one-set-winding BSRM with a hybrid rotor consisting of two parts: circular and scalloped lamination segments [11, 12]. In practical control, the coils of one set of four stator poles form a set of windings, providing levitation force for the circular laminated section of the rotor and acting as a magnetic bearing to levitate it. The other set of four stator poles imparts torque to the scalloped portion of the rotor.
The motor of this structure is easy to control, but its main shortcoming is that, as in a magnetic-bearing motor system, the axial length is longer, which limits the critical speed of the rotor. The high-speed motor research center of Nanjing University of Aeronautics and Astronautics improved the suspension-winding arrangement of the traditional 12/8 two-sets-windings BSRM, proposing a three-phase, two-sets-windings, 12/8 series-excited BSRM [13], which connects the traditional three-phase suspension windings in series in the same direction to form a single set of windings. This keeps the suspension-winding inductance unchanged over one rotor cycle. The suspension current does not produce torque, which realizes decoupled control of torque and levitation force. Nevertheless, since the copper-wire demand of the suspension winding is high, this scheme is wasteful and lowers the winding utilization. In this paper, a new type of BSRM with a novel rotor structure is proposed based on the 12/8 two-sets-windings BSRM and the 12/4 one-set-winding BSRM. The scheme uses wider rotor pole arcs so that the winding inductance curve has a flat area at the maximum-inductance position, thus realizing decoupled control of levitation force and torque. First, the structural characteristics of the proposed BSRM are summarized by analyzing the relationships between torque, suspension force, and inductance. Then, the operating principle of the new-structure BSRM is illustrated. Accordingly, the torque and suspension-force performances of the proposed BSRM are analyzed in detail by finite-element (FE) calculation. A mathematical model for the suspension force is derived, and a control system is designed for the 12/8 BSRM with two sets of windings. Compared with the traditional BSRM, the proposed scheme has a simpler suspension-force model and is easier to control; besides, it realizes decoupling between torque and radial force. ## 2.
Characteristics and Analysis of Traditional BSRM Figure 1(a) shows the configuration of the traditional 12/8 BSRM with only the phase-A winding [14]. The pole arcs of the stator and rotor are both equal to 15 mechanical degrees (°M). The aligned position is defined as θ = 0°M. There are two kinds of stator windings: the motor main winding and the radial force windings. The main winding N_m consists of four coils connected in series. Each radial force winding N_s consists of two coils. When the two differential windings conduct the currents shown in Figure 1(a), the flux density in air gap 1 increases, whereas it decreases in air gap 3. Thus, an unbalanced magnetic force is produced in the positive direction of the α-axis. A radial force along the β-axis can be produced in the same way. Therefore, a radial force in any desired direction can be produced by composing the two radial forces in perpendicular directions. Because the number of rotor poles is 8, the inductance cycle of the winding is 45°M. Figure 1(b) shows the shape of the main-winding self-inductance curve.Configuration and inductance curve of traditional 12/8 BSRM. (a) Configuration (b) Inductance curve The stored magnetic energy W_a in phase-A can be expressed as [14, 15]

$$W_a = \frac{1}{2}\begin{bmatrix} i_{ma} & i_{sa1} & i_{sa2} \end{bmatrix} L_a \begin{bmatrix} i_{ma} \\ i_{sa1} \\ i_{sa2} \end{bmatrix}. \tag{1}$$
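As a numerical sketch of equation (1), the stored energy is simply the quadratic form (1/2) iᵀ L_a i. The inductance values below are made-up placeholders for illustration; in the paper, L_a follows from the motor geometry and FE analysis.

```python
import numpy as np

# Stored magnetic energy of phase-A, equation (1): W_a = (1/2) i^T L_a i.
# The currents match the FE study (i_ma = 5 A, i_sa1 = 3.6 A, i_sa2 = 0);
# the inductance matrix entries are assumed, not taken from the paper.

i = np.array([5.0, 3.6, 0.0])            # [i_ma, i_sa1, i_sa2], A
L_a = np.array([[8e-3,  1e-3, -1e-3],     # symmetric 3x3 inductance matrix, H
                [1e-3,  4e-3,  2e-4],
                [-1e-3, 2e-4,  4e-3]])

W_a = 0.5 * i @ L_a @ i                   # stored energy, J
print(W_a)
```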
The inductance matrix L_a of phase-A is a 3 × 3 matrix of self- and mutual inductances:

$$L_a = \begin{bmatrix} L_{ma} & M_{(ma,sa1)} & M_{(ma,sa2)} \\ M_{(ma,sa1)} & L_{sa1} & M_{(sa1,sa2)} \\ M_{(ma,sa2)} & M_{(sa1,sa2)} & L_{sa2} \end{bmatrix}, \tag{2}$$

where L_ma, L_sa1, and L_sa2 are the self-inductances of N_ma, N_sa1, and N_sa2, respectively; M_(ma,sa1) is the mutual inductance between N_ma and N_sa1; M_(ma,sa2) is the mutual inductance between N_ma and N_sa2; and M_(sa1,sa2) is the mutual inductance between N_sa1 and N_sa2. According to the principle of electromechanical energy conversion, the torque T_a due to phase-A can be written as

$$T_a = \frac{\partial W_a}{\partial \theta} = \frac{\partial}{\partial \theta}\left[\frac{1}{2}\left(L_{ma} i_{ma}^2 + L_{sa1} i_{sa1}^2 + L_{sa2} i_{sa2}^2\right)\right]. \tag{3}$$

When the shaft carries no load or a light load in the radial direction, the suspension currents are very small, because only a small levitation force is needed to keep the rotor stably suspended. Therefore, the contribution of the suspension-winding currents to the torque can be ignored, and the torque T_a can be expressed by the linearized model

$$T_a = \frac{1}{2}\frac{\partial L_{ma}}{\partial \theta}\, i_{ma}^2. \tag{4}$$

In the same way, the suspension forces in the two directions produced by the phase-A currents can be obtained by the virtual displacement method:

$$F_\alpha = \frac{\partial W_a}{\partial \alpha} = \frac{\partial}{\partial \alpha}\left[M_{(ma,sa1)} i_{ma} i_{sa1} + M_{(ma,sa2)} i_{ma} i_{sa2}\right] = K_f(\theta)\, i_{ma} i_{sa1},$$

$$F_\beta = \frac{\partial W_a}{\partial \beta} = \frac{\partial}{\partial \beta}\left[M_{(ma,sa1)} i_{ma} i_{sa1} + M_{(ma,sa2)} i_{ma} i_{sa2}\right] = K_f(\theta)\, i_{ma} i_{sa2}, \tag{5}$$

where K_f(θ) is a proportional coefficient of the radial force; it is a function of the rotor position angle θ and the dimensions of the BSRM [14–17]. It is known from (4) that the torque is proportional to the partial derivative of the main-winding inductance with respect to the rotor position angle. As shown in Figure 1(b), the inductance in Sections 1 and 4 is constant; therefore, the winding current does not generate torque in these two regions.
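The linearized models (4) and (5) can be evaluated directly once the inductance slope and the force coefficient are known. In the sketch below, the values of ∂L/∂θ and K_f(θ) are illustrative assumptions, not FE results from the paper.

```python
# Linearized torque and suspension forces of phase-A, equations (4) and (5).
# dL_dtheta and K_f are assumed placeholder values; in the paper they depend
# on rotor position and motor dimensions.

i_ma, i_sa1, i_sa2 = 5.0, 3.6, 0.0   # phase-A currents, A

dL_dtheta = 0.02                     # dL_ma/dtheta, H/rad (assumed)
T_a = 0.5 * dL_dtheta * i_ma ** 2    # equation (4), N*m

K_f = 1.2                            # K_f(theta), N/A^2 (assumed)
F_alpha = K_f * i_ma * i_sa1         # equation (5), N
F_beta = K_f * i_ma * i_sa2

print(T_a, F_alpha, F_beta)          # 0.25 21.6 0.0
```

With i_sa2 = 0 only F_α is commanded, which matches the single-direction excitation used in the FE study of Section 4.1.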
The current conducted in Section 2 produces positive torque, while the current conducted in Section 3 produces negative torque. In practice, torque control can be realized by flexibly adjusting the conduction width of Sections 2 and 3 according to the load torque. We can also see from (5) that the magnitude of the suspension force is proportional to the coefficient K_f(θ) and the winding currents. To improve the use ratio of the currents, a large K_f(θ) is needed to generate a given suspension force. When the rotor and stator poles are aligned, K_f(θ) takes a large value, because the magnetic reluctance of the air gap is minimal at this position. As the rotor rotates away from the aligned position, K_f(θ) decreases, because the increasing magnetic reluctance reduces the air-gap flux. Therefore, the turn-ON angle should be shifted toward the aligned position to improve the suspension performance. That is to say, in Figure 1(b), suspension force should also be generated in intervals II and III around the aligned position. Consequently, the suspension force is nonlinear and coupled with the torque, and calculating its value is complicated because both K_f(θ) and the currents are position-dependent. This makes the algorithm more complex and places higher demands on the digital controller. Meanwhile, a large negative torque is produced when the currents conduct in region III, which lowers the usage of the winding current and limits the speed performance. ## 3. Structure Characteristics and Operating Principles of the Novel BSRM According to the above analysis of the traditional BSRM, the proportional coefficient K_f(θ) and the inductance L_ma reach their maximum values simultaneously at the aligned position. Thus, if a change of the motor structure can create a flat area in the winding inductance curve at the maximum-inductance position, then exciting the motor in this flat area will generate a large and linear suspension force without producing torque.
Then the winding inductance curve has the form shown in Figure 2. The suspension force of the motor can be controlled in region V, and the torque can be controlled in the inductance-rising region II. For a BSRM with this new inductance feature, the suspension force is linear, the scheme requires no compromise between torque and radial force, and no negative torque is generated. Hence, it can improve the efficiency of the winding currents and is easier to control.Figure 2 The desired form of the inductance curve. ### 3.1. Structure Characteristics of Novel BSRMs On the premise that the stator structure of the new BSRM is the same as that of the traditional BSRM, we increase the width of the rotor pole so that the rotor pole arc is greater than the stator pole arc. We name it the wider-rotor-teeth BSRM. The reason is that, when the stator and rotor poles of the motor are aligned, the proportional coefficient K_f(θ) and the winding inductance reach their maximums; hence, increasing the overlap area of the rotor and stator poles during rotation, by widening the rotor pole, produces the flat area of the inductance. Based on this principle, for an m-phase motor whose rotor has N_r teeth, one rotor cycle spans (360/N_r)°M, so the region in which each phase must generate suspension force is (360/(N_r·m))°M, with the m phases conducting in turn to provide a steady suspension force. Therefore, the inductance curve must have a flat area of at least (360/(N_r·m))°M at the maximum inductance to provide a stable suspension force for the rotor and to realize decoupled control of the torque and the suspension force. Thus, the rotor pole arc angle and the stator pole arc angle must satisfy

$$\beta_r \geq \beta_s + \frac{360°}{m \cdot N_r}, \tag{6}$$

where β_r and β_s are the rotor pole arc angle and the stator pole arc angle, respectively.
N_r is the number of rotor poles. The wider-rotor-teeth structure can be applied to either a two-sets-windings BSRM or a one-set-winding BSRM. ### 3.2. Operating Principles of Novel BSRM Taking a 12/4 wider-rotor-teeth BSRM with one set of windings and a 12/8 wider-rotor-teeth BSRM with two sets of windings as examples, this section introduces the operating principle of the novel BSRM. #### 3.2.1. 12/4 BSRM with One Set of Windings and Wider-Rotor-Teeth Figure 3 shows the configuration of the 12/4 wider-rotor-teeth BSRM with only the A-phase and B-phase windings. The arc angle of the stator pole is still 15°M, as in traditional BSRMs, and each stator tooth carries one concentrated winding. The four windings of each phase are controlled independently. According to (6), the arc angle of the rotor pole should be not less than 45°M; here the minimum value of 45°M is taken. In Figure 3, phase-A is at the maximum-inductance position. When the two windings of phase-A in the α-direction adopt asymmetric excitation, as shown in Figure 3, the flux density in air gap 1 increases, whereas it decreases in air gap 3. Therefore, an unbalanced magnetic force F_α acting on the rotor is produced in the positive direction of the α-axis. The radial force F_β in the β-axis can be produced in the same way. Thus, a radial force in any desired direction can be produced by composing the two radial forces in perpendicular directions. This principle applies similarly to the phase-B and phase-C windings.Configuration and operating principle of 12/4 BSRM with one set of windings. (a) Flux lines (b) Magnetic flux density vector Obviously, because phase-A is in the flat area of maximum inductance at the rotor position shown in Figure 3, it generates only suspension force and no torque.
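The pole-arc constraint (6) can be checked numerically for the two prototypes discussed here, both three-phase (m = 3) with a 15°M stator pole arc:

```python
# Minimum rotor pole arc from inequality (6): beta_r >= beta_s + 360/(m*N_r),
# all angles in mechanical degrees.

def min_rotor_arc(beta_s, m, n_r):
    """Minimum rotor pole arc (deg-M) satisfying inequality (6)."""
    return beta_s + 360.0 / (m * n_r)

print(min_rotor_arc(15.0, 3, 4))   # 12/4 BSRM: 45 deg-M, the value chosen in the paper
print(min_rotor_arc(15.0, 3, 8))   # 12/8 BSRM: 30 deg-M, the value chosen in the paper
```

Both prototypes adopt exactly the minimum arc, which keeps the rotor as light as possible while still providing the required flat inductance region.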
Therefore, it is necessary to turn on the windings of two phases at the same time to realize stable operation: one phase in the flat area of the inductance generates the suspension force, and another in the inductance-rising region generates the torque. Taking counterclockwise as the positive orientation, phase-B in Figure 3 is just in the inductance-rising region, so the torque for the motor's rotation can be generated by exciting the four windings of phase-B symmetrically. #### 3.2.2. 12/8 BSRM with Two Sets of Windings and Wider-Rotor-Teeth The number of coil windings embedded in the stator can also be two. A three-phase 12/8 BSRM with the proposed novel rotor is taken as an example to illustrate the suspension and operation principle of the BSRM with two sets of windings. Figure 4 shows the configuration of the novel BSRM with two sets of windings and wider rotor teeth. The arc angle of the stator pole is still 15°M, and each stator pole carries two concentrated windings. The winding arrangement is the same as that of the traditional 12/8 two-sets-windings BSRM; Figure 4 shows only the two sets of windings of phase-A and the main winding of phase-C. According to (6), the pole arc of the rotor is designed as 30°M.Configuration and operating principle of 12/8 novel BSRM with two sets of windings. (a) Flux lines (b) Magnetic flux density vector Similar to the 12/4 BSRM above, it is necessary to turn on two phase windings at the same time: one phase winding generates the radial force, while the other generates the torque. In Figure 4, the main winding of phase-A adopts symmetric excitation and conducts the current i_ma. The current i_sa1 of the radial force winding enhances the flux density in air gap 1 and reduces it in air gap 3. Therefore, a controllable radial force to levitate the rotor is generated in the α-direction. The radial force in the β-axis can be produced in the same way.
Radial force in any desired direction can be produced by composing the two radial forces in the α and β directions. The torque for the motor's rotation is generated by symmetrically exciting the main winding of phase-C, because phase-C is just in its inductance-rising region at this moment. ## 4. Electromagnetic Analysis of Two Wider-Rotor-Teeth BSRMs To verify the validity of the suspension operating principles and provide the basis theory for the motor control strategy, the above two kinds of wider-rotor-teeth BSRM are analyzed through the FE method. To facilitate the comparison between the two prototypes, the same rated condition is adopted here.
The rated power is 2 kW, the rated speed is 20000 r/min, and the maximum radial force is 100 N. The dimensions of the simulated motors are shown in Table 1.

Table 1 Parameters of 12/4 and 12/8 wider-rotor-teeth BSRMs.

| Parameter | Value |
| --- | --- |
| Stator diameter/mm | 95 |
| Rotor diameter/mm | 49.8 |
| Stator yoke/mm | 6.1 |
| Rotor yoke/mm | 7.65 |
| Stator pole height/mm | 16.5 |
| Rotor pole height/mm | 7 |
| Stator pole arc/°M | 15 |
| Diameter of axle/mm | 20 |
| Gap length/mm | 0.25 |
| Length of stator stack/mm | 55 |
| Rotor pole arc of 12/4 BSRM/°M | 45 |
| Number of windings of 12/4 BSRM | 13 |
| Rotor pole arc of 12/8 BSRM/°M | 30 |
| Number of main windings of 12/8 BSRM | 9 |
| Number of suspension windings of 12/8 BSRM | 13 |

The Ansys software is used to calculate the electromagnetic field. The 2-dimensional (2D) FE models of the 12/4 BSRM and the 12/8 BSRM are established, and their 2D FE mesh models are shown in Figure 5. The enhanced incremental energy method is used to calculate the winding inductances, since it only requires the incremental energy rather than the total system energy [18].2D FE mesh models of 12/4 BSRM and 12/8 BSRM. (a) 12/4 BSRM (b) 12/8 BSRM For a single-loop magnetic system with operating current i_0, the static inductance L can be calculated by

$$L = \frac{\Psi}{i_0} = \frac{\Delta W_C}{i_0 \, \Delta i}, \tag{7}$$

where ΔW_C is the increment of the magnetic field coenergy, i_0 is the current through the magnetic loop, Ψ is the flux linkage, and Δi is the increment of i_0. The increment of the magnetic field coenergy ΔW_C can be calculated as

$$\Delta W_C = \int B \, \Delta H \, dV, \tag{8}$$

where B is the flux density and ΔH is the magnetic field increment. Combining (7) and (8), the inductance can be written as

$$L = \frac{\int B \, \Delta H \, dV}{i_0 \, \Delta i}. \tag{9}$$

### 4.1. FE Analysis Results with Only Phase-A Excited For the 12/8 BSRM of Table 1, the given currents in the main winding and the suspension windings are i_ma = 5 A, i_sa1 = 3.6 A, and i_sa2 = 0, respectively. For the 12/4 BSRM, the given currents are i_a1 = 8.6 A and i_a2 = 1.4 A. The main-winding inductances of phase-A are calculated.
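Once the FE field solution supplies the coenergy increment of equation (8), the inductance follows from the scalar division in (7)/(9). The coenergy value below is an illustrative stand-in for the field integral, not an FE result from the paper.

```python
# Enhanced incremental energy method, equations (7)-(9): a small current
# perturbation delta_i around the operating point i0 produces a coenergy
# increment delta_w_c, and L = delta_w_c / (i0 * delta_i).

def static_inductance(delta_w_c, i0, delta_i):
    """Equation (7)/(9): static inductance from the coenergy increment."""
    return delta_w_c / (i0 * delta_i)

i0 = 5.0           # operating current, A (as in the 12/8 main winding)
delta_i = 0.05     # current perturbation, A (assumed)
delta_w_c = 2e-3   # coenergy increment from the FE field solution, J (assumed)

print(static_inductance(delta_w_c, i0, delta_i))   # 0.008 H for these numbers
```

The method's advantage, as noted in the text, is that only the increment of the coenergy needs to be integrated, not the total stored energy of the system.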
Figure 6 shows the relationship between inductance and rotor position angle for the two kinds of BSRM. For comparison purposes, all actual mechanical rotor position angles were multiplied by the respective number of rotor teeth, so that each rotor cycle corresponds to 360 electrical degrees (°E). It can be seen from Figure 6 that the winding inductances of both BSRMs are approximately constant at their maximum values over the interval [−60°E, 60°E]; this is consistent with the theoretical analysis, and the inductance curve has the same shape as that in Figure 2.Figure 6 Inductances of 12/4 and 12/8 wider-rotor-teeth BSRMs. Figure 7 shows the relationships between the suspension force or the instantaneous torque and the rotor position angle for the two kinds of BSRM. Figure 7(a) shows that the maximum levitation force is also obtained in the [−60°E, 60°E] interval, while the torque is essentially zero there, as shown in Figure 7(b): a large suspension force is produced while no torque is produced in this region. Figure 7(b) also shows that torque is generated in the inductance-rising and inductance-falling regions. The levitation force and the torque are produced in different regions; thus, decoupled control of torque and levitation force can be realized. This result agrees with the theoretical analysis.Suspension forces and torques of 12/4 and 12/8 wider-rotor-teeth BSRMs. (a) Suspension forces at the α-direction (b) Torques The results also show that the output torque width of the 12/4 BSRM is only half that of the 12/8 BSRM, so the torque-angle characteristic of the 12/8 BSRM is better; the 12/4 BSRM is more suitable for light-load applications. ### 4.2. FE Analysis Results with Two Phases Excited Simultaneously The phase-A currents in the [−60°E, 60°E] region are kept the same as in the previous simulation.
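The electrical-degree rescaling used for Figure 6 is just θ_E = θ_M × N_r, so the shared flat-top interval [−60°E, 60°E] corresponds to different mechanical spans for the two prototypes:

```python
# Mechanical <-> electrical degree conversion used to overlay the two motors
# in Figure 6: one rotor pitch (360/N_r deg-M) maps to 360 deg-E.

def to_electrical(theta_m, n_r):
    return theta_m * n_r

def to_mechanical(theta_e, n_r):
    return theta_e / n_r

print(to_mechanical(60.0, 8))   # 12/8 BSRM: 60 deg-E = 7.5 deg-M
print(to_mechanical(60.0, 4))   # 12/4 BSRM: 60 deg-E = 15 deg-M
```

This is why the 12/4 machine, with half the rotor teeth, has its flat region spread over twice the mechanical angle of the 12/8 machine.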
Phase A produces a large suspension force while failing to produce torque, according to the analysis in the previous section. For the 12/4 BSRM, currents were conducted in phase B to produce torque, since the inductance of phase B is rising in this region. The given phase-B currents are i_b1 = i_b2 = 5 A. In the same way, for the 12/8 BSRM, phase C is conducted to produce torque, with a given current of i_mc = 5 A. Figure 8 shows the FE analysis results for the suspension force and torque. It can be seen that whether two phases are excited simultaneously or only phase A is excited, the suspension force is approximately the same. The values with two phases excited are slightly larger than those with only one phase excited, but the difference is less than 4%. Thus the effect of phase-B or phase-C conduction on the levitation force can be ignored in both BSRMs. It can also be seen from Figure 8 that the torque values are approximately zero with only phase A conducted, while they greatly increase when two phases are excited simultaneously. Compared with Figure 7, the torque values generated by the two motors are very close to those with only one phase conducted in their inductance rising region. The difference is also less than 4%. Thus the torque generated by the phase that mainly produces the suspension force can be ignored. That is to say, the coupling effect between phases can be ignored.

Suspension forces and torques with two phases excited simultaneously. (a) 12/4 BSRM. (b) 12/8 BSRM.

## 5. System Analysis of Two Wider-Rotor-Teeth BSRMs

### 5.1. Mathematical Model of 12/8 BSRM

The radial suspension force and torque acting on the rotor must be known in order to control the BSRM. This paper derives the mathematical model through the virtual displacement method, as for the traditional BSRM [14, 19]. The suspension winding current conducts in the interval [−7.5°M, 7.5°M], where the stator teeth and the rotor teeth always overlap.
It can be seen from Figure 4 that the magnetic field lines run almost perpendicularly from the rotor teeth to the stator teeth in this area, while the fringing flux is small and can be ignored. Therefore, the magnetic circuit can be represented by straight lines, as shown in Figure 9.

Figure 9: Magnetic circuit diagram.

Accordingly, the magnetic permeance can be expressed as

(10) P = μ0·h·r·π / (12·l0).

Here, μ0 is the permeability of air, h is the axial length of the stator, r is the radius of the rotor, and l0 is the average air gap length.

The levitation forces in the two directions and the torque can also be obtained by the virtual displacement method. Equation (10) shows that the permeance is decided only by the dimensions of the BSRM; therefore, the proportionality coefficient K_f(θ) of the radial force in (5) becomes a constant, and the suspension force model of the novel motor becomes

(11) F_α = K_f·i_ma·i_sa1, F_β = K_f·i_ma·i_sa2,

where the proportionality coefficient K_f is the constant

(12) K_f = μ0·h·r·π·N_m·N_s / (6·l0²).

Also according to the principle of virtual displacement, the instantaneous torque of phase A can be expressed as

(13) T_a = (J_t(θ)/2)·(N_m²·i_ma² + N_s²·i_sa1² + N_s²·i_sa2²).

In the inductance rising region, the positive torque coefficient can be expressed as

(14) J_tp(θ) = μ0·h·r·[1/l0 − (4·l0 − 1.28·π·r·θ)/(2·l0 − π·r·θ)²].

It can be seen from (11) and (12) that the suspension forces in the suspension region are determined simply by the winding currents once the motor structure parameters are fixed; they are independent of the rotor position angle. Thus they can be controlled more easily than in the traditional case. In practice, if there is a positive or negative eccentric displacement x in the α-direction or β-direction, the average air gap l0 is revised to l0 ∓ x in (11) and (13).

### 5.2. Control Scheme

Figure 10 illustrates the principle of the proposed control scheme.
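The force model of (11) with the reconstructed coefficient of (12) can be sketched numerically. The geometry values below are taken from Table 1 (55 mm stack, 24.9 mm rotor radius, 0.25 mm gap, N_m = 9, N_s = 13), but the sketch is only illustrative: the reconstructed form of (12) is an assumption, and the resulting force values are not validated against the FE results.

```python
from math import pi

MU0 = 4 * pi * 1e-7  # permeability of air, H/m

def kf(h, r, l0, nm, ns):
    """Reconstructed proportionality coefficient of eq. (12):
    Kf = mu0 * h * r * pi * Nm * Ns / (6 * l0^2)."""
    return MU0 * h * r * pi * nm * ns / (6 * l0 ** 2)

def suspension_forces(kf_val, i_ma, i_sa1, i_sa2):
    """Eq. (11): radial forces in the alpha and beta directions."""
    return kf_val * i_ma * i_sa1, kf_val * i_ma * i_sa2

# 12/8 prototype geometry from Table 1; winding currents from Section 4.1.
k = kf(h=0.055, r=0.0249, l0=0.00025, nm=9, ns=13)
f_alpha, f_beta = suspension_forces(k, i_ma=5.0, i_sa1=3.6, i_sa2=0.0)
print(round(f_alpha, 1), f_beta)  # a force of a few tens of newtons, alpha-direction only
```

Because i_sa2 = 0, only the α-direction force is produced, which matches the one-sided excitation used in the FE analysis of Section 4.1.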
The rotor pole arc is 15°M larger than the stator pole arc according to the above analysis of the 12/8 BSRM, which forms a 15°M flat maximum region of the winding inductance within [−7.5°M, 7.5°M]. The proposed scheme fixes the width of the radial-force winding currents exactly to the [−7.5°M, 7.5°M] area, shown as region I in Figure 10. The combination of the main winding and suspension winding currents in region I produces a large and continuous radial force to levitate the rotor, while the instantaneous torque within region I is approximately 0. The suspension winding does not conduct in any nonoverlapping area, whereas the main windings are also excited in the interval [−22.5°M, −7.5°M] to generate torque. Thus, region I and region II are used to control the suspension force and the torque, respectively, which realizes the decoupled control of torque and suspension force. In practice, the turn-ON angle of the main windings can be adjusted according to the required average torque. To obtain a larger average torque, the turn-ON angle of the main winding currents can be advanced as much as possible. The current waveforms can also be shaped for other control objectives, such as higher efficiency and lower vibration. In Figure 10, the turn-ON angle and the waveform of the main windings are selected as −15°M and square-wave, respectively.

Figure 10: Control scheme of the new 12/8 BSRM.

### 5.3. Block Diagram of Control System

According to the above analysis, the control system can be constructed as in Figure 11. After the rotor's position is detected through the encoder, the real-time rotation speed of the motor is obtained from the position computation. The difference between it and the given speed is converted into the active-phase current command i_m1* by the PI controller.
The rotor radial displacements in the two perpendicular directions are measured and converted into electrical signals by the radial displacement sensors and are output as the desired radial forces F_α* and F_β* after the PID controllers. Based on these desired values, the corresponding control targets, and (11), the currents i_m2*, i_s1*, and i_s2* in the suspension interval can be computed. Finally, stable suspension during operation of the motor is achieved by tracking the current set values through the power converters of the two windings.

Figure 11: Block diagram of the control system.

### 5.4. Simulation and Analysis

The dimensions of the simulated 12/8 BSRM are the same as in Table 1. The given radial force is 50 N in the α-axis and 30 N in the β-axis. Figures 12(a) and 12(b) show simulation waveforms of the currents, torque, and levitation forces with the main winding turn-ON angle θ_on set to −15°M and −22.5°M, respectively. Here, the currents are controlled as square waves. It can be seen that changing the turn-ON angle changes the output average torque: when θ_on is advanced, the output average torque increases. So the average output torque can be controlled by adjusting the turn-ON angle of the main winding. However, due to the winding inductance, the main winding current in the torque production section cannot be rapidly converted into the current required by the suspension section, and a delay exists, as shown in the area circled by broken lines in Figure 12. As a result, the suspension forces cannot track their given values well in the current conversion region. To solve this problem, it is necessary to add a compulsive excitation unit to the main winding converter. That is, by applying a reverse voltage when the current changes, the main winding current is rapidly converted to the value required by the suspension section.
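The current-command step of Section 5.3 can be sketched as follows: given the desired forces F_α* and F_β* from the PID controllers and a chosen main-winding current in the suspension interval, (11) is inverted for the suspension winding current commands. The K_f value and currents below are hypothetical placeholders, not prototype values.

```python
def suspension_current_commands(f_alpha_ref, f_beta_ref, i_m2, kf_val):
    """Invert eq. (11) for the suspension winding current commands:
    i_s1* = F_alpha* / (Kf * i_m2), i_s2* = F_beta* / (Kf * i_m2)."""
    if i_m2 <= 0:
        raise ValueError("main winding current in the suspension interval must be positive")
    return f_alpha_ref / (kf_val * i_m2), f_beta_ref / (kf_val * i_m2)

# Hypothetical values: Kf = 2.8 N/A^2; desired forces 50 N and 30 N as in
# Section 5.4; main winding current fixed at 5 A in the suspension interval.
i_s1, i_s2 = suspension_current_commands(50.0, 30.0, 5.0, 2.8)
print(round(i_s1, 3), round(i_s2, 3))  # 3.571 2.143
```

Because (11) is linear in the suspension currents once i_m2 is fixed, this inversion is a simple division, which is one reason the wide-rotor design eases the control algorithm.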
Figure 13 shows the simulation waveforms of the currents, torque, and levitation forces after the compulsive excitation unit is added. As can be seen from Figures 13(a) and 13(b), with the compulsive excitation unit the suspension forces are tracked well.

Simulation waveforms of the currents, torque, and levitation forces without the compulsive excitation unit. (a) θ_on = −15°M. (b) θ_on = −22.5°M.

Simulation waveforms of the currents, torque, and levitation forces after the compulsive excitation unit is added. (a) θ_on = −15°M. (b) θ_on = −22.5°M.

## 6. Conclusions

This paper studied a new-structure BSRM, which adopts a wide rotor pole. Nonlinear FEM and MATLAB-based system simulation are used to analyze the performance of the novel motor.
Analysis results of the two prototypes indicate that:

(1) the novel BSRM has a linear mathematical model of the suspension force, since it produces a flat region at the maximum-inductance position of the winding inductance curve;
(2) the novel BSRM can realize decoupled control of torque and suspension force; therefore, the control algorithm is easier to implement and places lower demands on the digital controller;
(3) since no negative torque is generated during control, the utilization of the winding current is higher than in the traditional case;
(4) to avoid poor suspension force performance caused by the current delay when converting the main winding currents from the torque production region to the suspension force production region, a compulsive excitation unit should be added to the main winding converter.

---
2014
# Pedestrian Delay Model for Continuous Flow Intersections under Three Design Patterns

**Authors:** Tao Wang; Jing Zhao; Chaoyang Li
**Journal:** Mathematical Problems in Engineering (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1016261

---

## Abstract

In order to accurately evaluate the level of service for pedestrians and provide a basis for optimized design, the pedestrian delay at continuous flow intersections was analyzed. According to the characteristics of how streams of pedestrians arrive and leave, pedestrian delay models for the different directions (namely, straight and diagonal) were established for three pedestrian passing patterns at the continuous flow intersection. The accuracy of the models was verified with VISSIM; the deviation was less than 3%. The effects of three key factors, namely, the vehicle demand, the pedestrian demand, and the percentage of diagonal crossings, on pedestrian delay under the three patterns were examined by sensitivity analysis. The results show that the traditional pedestrian passing pattern mainly applies when vehicle and pedestrian demands are low. The interlaced pedestrian passing pattern is mainly applicable under high vehicle and pedestrian demands. Although the exclusive pedestrian phase pattern was least often selected as the optimal design, it can be applied when traffic demand fluctuates, because it is insensitive to the vehicle volume and the pedestrian demand pattern.

---

## Body

## 1. Introduction

The intersection is the bottleneck node of the urban road network, where traffic problems are mainly caused by the complicated conflicting traffic flows.
To better deal with the conflict between left-turn and through vehicles, a series of unconventional intersection designs have been proposed, including median U-turn intersections [1, 2], superstreet intersections [1, 3, 4], uninterrupted flow intersections [5], special width approach lanes [6], dynamic reversible lane control [7], displaced left-turn intersections [8, 9], exit-lanes for left-turn intersections (EFL) [10–12], and tandem intersections [13–15].

The continuous flow intersection (CFI) is one of these unconventional intersections; it was first proposed by Mier [16]. It eliminates the conflict between left-turn and through vehicles at the main signal by transferring the conflict point between the left-turn vehicles and the opposing through vehicles to an upstream presignal, so that the capacity of the intersection is improved. A series of theoretical studies and practical tests on the CFI have been carried out, covering geometric design, signal control, and operational evaluation. The CFI has been applied in practice in many countries [17], such as the United States, Australia, and China.

From the perspective of geometric design, Inman [18] studied the traffic sign and lane marking system of the continuous flow intersection to improve its visual recognition, based on driving simulator experiments. Hughes [17] gave a series of detailed suggestions on layout design, including the left-turn lane length, presignal intersection width, turning radius, and other structural details at continuous flow intersections. Recommended distances between the main signal and the presignal under different traffic demands were further given by Tanwanichkul [19], based on comparing operational efficiency for different spacings using VISSIM simulation.

From the perspective of signal control, Tarko [20] developed a basic signal timing strategy according to the traffic characteristics of continuous flow intersections.
On this basis, considering balanced and unbalanced traffic flows on each approach and exit lane, Esawey [21] proposed a six-phase signal control method and realized it with the Synchro software. Zhao [22, 23] established an integrated optimization model for the geometric layout and signal timing in which many key parameters, such as the intersection form, lane functions, left-turn lane length, and signal timing, were optimized in a unified framework.

From the perspective of operational evaluation, Chang et al. developed a well-calibrated CFI traffic simulator using VISSIM to evaluate the operational properties under various constraints and traffic conditions, which can assist engineers in identifying potential bottlenecks and estimating delays for CFI designs at the planning stage [24]. Moreover, in practice, the average waiting time of vehicles decreased by 50% and the traffic capacity increased by 31% after the continuous flow intersection of Utah's 3500 South Highway and Bangerter Highway opened to the public in 2007, which further verified the practical significance of continuous flow intersections [25].

The above research on continuous flow intersections focuses mainly on vehicles. However, in developing countries walking is an important travel mode [26–28], so handling pedestrian street crossings at continuous flow intersections is a necessary condition for their application there. For this purpose, Jagannathan [29] proposed a new design for pedestrians, namely, the interlaced crossing. The performance of vehicles and pedestrians was examined with VISSIM simulation, in which the signal timing was optimized for vehicular traffic performance. Coates [30] further proposed a flexible signal control program under the interlaced pedestrian crossing design to reduce vehicle delay while prioritizing pedestrian crossing.
The signal control procedure dynamically chooses the appropriate phase and green time combination to minimize delay by considering pedestrian waiting time and the existing queue length. However, these studies did not provide an in-depth discussion of pedestrian delay at continuous flow intersections.

A series of calculation models for pedestrian delay at conventional intersections have been established in the past. One commonly used pedestrian delay model is the one presented in the Highway Capacity Manual (HCM) [31]. It is a function of the ratio of the pedestrian effective green time to the signal cycle length for one-stage crossings, assuming uniform arrival rates. Based on this model, many influencing factors have been analyzed by researchers to enhance the accuracy of the calculation: the pedestrian-vehicle conflict, considering the variety of motorist yielding behavior [32]; the irregularity of the pedestrians' arrival rate [33–35]; signal noncompliance by pedestrians who enter crosswalks during clearance or red phases [36–39]; information about the remaining green time [40]; the two-stage crossing design, considering the effect of the signal timing at the first-stage crossing on the pedestrian arrivals at the second stage [41, 42]; and bidirectional pedestrian flow, considering the effect of the opposing pedestrian flow on walking speed [43, 44].

Pedestrian service level is an important factor for the application of continuous flow intersections on urban roads. At present, research on continuous flow intersections focuses mainly on vehicles, and the evaluation of pedestrian delay relies mainly on simulation, which limits application, especially in developing countries. Therefore, this paper aims to establish a calculation model of pedestrian delay at the CFI, in which three design patterns of pedestrian crossing are considered.
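The HCM-type baseline referenced above reduces, for a one-stage crossing with uniform arrivals, to a delay that depends only on the cycle length and the pedestrian effective green. A minimal sketch, with hypothetical timings:

```python
def hcm_pedestrian_delay(cycle_s, eff_green_s):
    """HCM-style uniform-arrival pedestrian delay for a one-stage
    signalized crossing: d = (C - g)^2 / (2 * C), in seconds."""
    return (cycle_s - eff_green_s) ** 2 / (2 * cycle_s)

# Hypothetical timing: 100 s cycle with 40 s of pedestrian effective green.
print(hcm_pedestrian_delay(100.0, 40.0))  # 18.0
```

This is also the q → 0 limit of the pattern-specific signal-delay expressions derived in Section 3, so the sketch doubles as a consistency check for the models that follow.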
Considering that the proposed delay models can be used to guide engineering practice in the future, all possible pedestrian crossing patterns are discussed, even though some of them have not yet been put into practice in the real world.

The rest of the paper is organized as follows. In Section 2, the three design patterns of pedestrian crossing at the CFI are described. The detailed pedestrian delay calculation model is established in Section 3. The proposed model is validated in Section 4. The sensitivity of the key parameters in the proposed model is discussed in Section 5. Conclusions and recommendations are given at the end.

## 2. Design Patterns of Pedestrian Crossing at CFI

The basic geometric design concept of the continuous flow intersection is shown in Figure 1(a). The presignal is set upstream of the intersection so that left-turn vehicles can be guided to the left of the opposing traffic flow, which eliminates the conflicts between left-turn and opposing through vehicles at the main signal. Therefore, a two-phase signal plan can be used at the main signal and the presignals. Combining the main signal and the presignals, the CFI can run under a six-phase plan, as illustrated in Figure 1(b).

Figure 1: Concept of CFI. (a) Geometric design. (b) Phase plan.

For pedestrians, there are three design patterns, namely, the conventional pattern, the exclusive phase pattern, and the interlaced crossing pattern.

Under the conventional pattern (pattern 1), pedestrians cross the CFI in the same way as at conventional intersections, as shown in Figure 2. Because the continuous flow intersection runs with two phases at the main signal, there are conflicts between pedestrians and left/right-turn vehicles when pedestrians cross the intersection during the green light for through vehicles.

Figure 2: Conventional pattern.

Under the exclusive phase pattern (pattern 2), pedestrians can cross the street only during the exclusive phase, as shown in Figure 3.
The exclusive phase for pedestrians is added on top of the two vehicle phases at the main signal. Pedestrians can cross the intersection in every direction (including diagonally) without any conflicts.

Figure 3: Exclusive phase pattern.

Under the interlaced crossing pattern (pattern 3), pedestrians cross the intersection between the through vehicles of the same direction and the left-turn vehicles of the opposite direction on the designed crosswalk, as shown in Figure 4. In this way, the conflicts between pedestrians and left-turn vehicles are eliminated, although crossing pedestrians are delayed by multiple signals.

Figure 4: Interlaced crossing pattern.

## 3. Pedestrian Delay Model Establishment

### 3.1. Model Parameters

To facilitate the model presentation, the notation used hereafter is summarized in Table 1.

Table 1: Notation for key model parameters and variables.

| Notation | Definition |
| --- | --- |
| i | Index of the design pattern; i = 1, 2, 3 for the conventional pattern, the exclusive phase pattern, and the interlaced crossing pattern, respectively |
| j | Index of pedestrian movements; j = 1, 2 for the through movement and the diagonal movement, respectively |
| C | Cycle length of the intersection, s |
| g | Green time of the crosswalk, s |
| r | Red time of the crosswalk, s |
| t_b | Interval from the start of green of the crosswalk to that of the crosswalk on the crossing street, s |
| t′_b | Interval from the start of green of the crosswalk on the crossing street to that of the crosswalk on the studied street, s |
| t_w | Walking time on the crosswalk, s |
| t′_w | Walking time on the crosswalk on the crossing street, s |
| t_p | Waiting time for pedestrians to decide whether to cross, s |
| t_0 | Passing time of a vehicle, s |
| τ | Acceptable gap for pedestrians to cross, s |
| q_1, q_2 | Arrival rates of pedestrians for the through and diagonal movements, respectively, ped/s |
| s | Saturation flow rate of pedestrians, ped/s |
| W | Width of one vehicle lane, m |
| v_p | Speed of pedestrians crossing the street, m/s |
| λ | Arrival rate of conflicting vehicles, veh/s |
| λ′ | Arrival rate of conflicting vehicles on the crossing street, veh/s |
| l | Length of a segment in the calculation diagrams |
| S | Area of the shaded region in the calculation diagrams |
| d_sij | Delay caused by signal control for pedestrian movement j under design pattern i, s |
| d_cij | Delay caused by conflicts for pedestrian movement j under design pattern i, s |
| d_i | Pedestrian delay under design pattern i, s |

### 3.2. Pedestrian Delay Model under Conventional Crossing Pattern

There are two components of pedestrian delay under this pattern: (1) delay caused by signal control: pedestrians have to wait at the roadside for the pedestrian phase, which leads to signal delay; (2) delay caused by conflicts: pedestrians must pass through the conflict zones of left- and right-turning vehicles, which causes conflict delay.

(1) Delay Caused by Signal Control. Under the conventional crossing pattern, only the through movement of pedestrians is allowed; the diagonal movement is accomplished by crossing the street twice. Therefore, the through and diagonal movements have to meet the signal control once and twice, respectively. The calculation diagrams of the two conditions are shown in Figures 5 and 6, respectively.

Figure 5: Calculation diagram for pedestrians meeting the signal control once.

Figure 6: Calculation diagram for pedestrians meeting the signal control twice.

For the through movement, as shown in Figure 5, the segment “AC” represents the arrival of pedestrians at the intersection, and its slope is the arrival rate. The segment “BC” represents the departure of pedestrians at the beginning of the green phase, and its slope is the saturation flow rate. The segment “BD” is the pedestrian dissipation time. The area of the shaded triangle “ABC” is the total pedestrian delay, which can be calculated by (1). The lengths of segments “AB” and “CD” can be calculated by (2) and (3), respectively.
Therefore, the average pedestrian delay caused by signal control can be calculated by (4).

(1) $S = \frac{l_{AB}\, l_{CD}}{2}$

(2) $l_{AB} = r$

(3) $l_{CD} = \frac{s q_1 r}{s - q_1}$

(4) $d_{s11} = \frac{S_{ABC}}{C q_1} = \frac{s r^2}{2C(s - q_1)}$

For diagonal movement, as shown in Figure 6, segment “AD” represents the arrival of pedestrians at the intersection. Segment “BC” represents the departure of pedestrians at the beginning of the green phase. Segment “EFG” represents the arrival of pedestrians at the second stage of crossing. Segment “HI” represents the departure of pedestrians at the beginning of the green phase along the crossing street. The length of the horizontal segment in triangle “ABC” denotes the delay while pedestrians wait for the green light at the first stage of crossing; the length of the horizontal segment in polygon “BCDEFG” denotes the pedestrian walking time on the crosswalk; and the length of the horizontal segment in polygon “EFGHI” denotes the delay while pedestrians wait for the green light at the second stage of crossing. Therefore, the area of the shaded part is the total pedestrian delay, which can be calculated by (5). The lengths of segments “AB”, “GH”, “EI”, and “IK” can be calculated by (6), (7), (8), and (9), respectively. The average pedestrian delay caused by signal control can then be calculated by (10).

(5) $S = \frac{(l_{AB} + l_{GH} + l_{EI})\, l_{IK}}{2}$

(6) $l_{AB} = r$

(7) $l_{GH} = t_b - t_w$

(8) $l_{EI} = t_b - t_w - g + \frac{C q}{s}$

(9) $l_{IK} = C q$

(10) $d_{s12} = \frac{r}{2} + (t_b - t_w) - \frac{g}{2} + \frac{C q}{2s}$

(2) Delay Caused by Conflicts. The delay caused by conflicts is directly related to the left-turn and right-turn traffic flows and the distribution of headways between vehicles, and can be calculated according to the equation in the literature [45].
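As a numerical sketch of the signal-control delays (1)–(10), the following Python snippet evaluates the one-stage delay $d_{s11}$ and the two-stage delay $d_{s12}$; the parameter values are illustrative assumptions, not taken from the paper.

```python
def signal_delay_one_stage(r, C, s, q1):
    """d_s11 from (1)-(4): area of triangle ABC divided by pedestrians per cycle.
    Equivalent closed form: s*r**2 / (2*C*(s - q1))."""
    if q1 >= s:
        raise ValueError("arrival rate must be below the saturation rate")
    l_cd = s * q1 * r / (s - q1)        # (3): queue dissipation length
    area = r * l_cd / 2.0               # (1)-(2): S = l_AB * l_CD / 2
    return area / (C * q1)              # (4): average delay per pedestrian

def signal_delay_two_stage(r, g, C, q, s, tb, tw):
    """d_s12 from (10): r/2 + (tb - tw) - g/2 + C*q/(2*s)."""
    return r / 2.0 + (tb - tw) - g / 2.0 + C * q / (2.0 * s)

# Illustrative values: C = 100 s cycle, r = 60 s red, g = 40 s green,
# s = 8 ped/s saturation, q = 0.2 ped/s arrivals, tb = 50 s, tw = 20 s
d11 = signal_delay_one_stage(r=60, C=100, s=8, q1=0.2)   # about 18.46 s
d12 = signal_delay_two_stage(r=60, g=40, C=100, q=0.2, s=8, tb=50, tw=20)
```

The triangle-area route in `signal_delay_one_stage` and the closed form in (4) agree exactly, which is a useful check when re-deriving the formulas.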
Therefore, the delay caused by conflicts for through movement and diagonal movement can be calculated by (11) and (12), respectively, in which the acceptable gap for pedestrians to cross, $\tau$, is given by (13) [45].

(11) $d_{c11} = \frac{1/\lambda - (\tau + 1/\lambda)\, e^{-\lambda\tau}}{e^{-\lambda\tau}}$

(12) $d_{c12} = \frac{1/\lambda - (\tau + 1/\lambda)\, e^{-\lambda\tau}}{e^{-\lambda\tau}} + \frac{1/\lambda' - (\tau + 1/\lambda')\, e^{-\lambda'\tau}}{e^{-\lambda'\tau}}$

(13) $\tau = \frac{W}{v_p} + t_p + t_0$

Combining the delay caused by signal control and the delay caused by conflicts, the pedestrian delay under the conventional crossing pattern can be calculated by (14).

(14) $d_1 = \frac{\sum_{j=1}^{2} q_j (d_{s1j} + d_{c1j})}{\sum_{j=1}^{2} q_j}$

### 3.3. Pedestrian Delay Model under Exclusive Phase Pattern

Pedestrians have an exclusive phase, so the through and diagonal pedestrians can cross the street simultaneously, and the delay is caused only by signal control, as illustrated by Figure 5. The shaded triangle “ABC” is the total pedestrian delay, and the average pedestrian delay is given by (15).

(15) $d_2 = \frac{\sum_{j=1}^{2} s q_j r^2 / \left(2C(s - q_j)\right)}{\sum_{j=1}^{2} q_j}$

### 3.4. Pedestrian Delay Model under Interlaced Crossing Pattern

Pedestrian delay in this pattern is mainly caused by signal control and by conflicts with right-turn vehicles.

(1) Delay Caused by Signal Control. The setting of the interlaced crosswalk is illustrated in Figure 7. Under the interlaced crossing pattern, the through and diagonal movements have to meet the signal control twice and three times, respectively. The calculation diagrams for the two conditions are shown in Figures 6 and 8, respectively.

Figure 7 Setting of the interlaced crosswalk.

Figure 8 Calculation diagram for pedestrians meeting the signal control three times.

For through movement, pedestrians meet the signal control twice. Taking the route from point 1 to point 4 in Figure 7 as an example, pedestrians have to wait for the green signal of the east-west direction at point 1 and for the green signal of the north-south direction at point 3. As shown in Figure 6, segment “AD” represents the arrival of pedestrians at point 1.
Segment “BC” represents the departure of pedestrians at the beginning of the green phase. Segment “EFG” represents the arrival of pedestrians at point 3. Segment “HI” represents the departure of pedestrians at the beginning of the green phase along the crossing street. Therefore, the average pedestrian delay caused by signal control can be calculated by (16).

(16) $d_{s31} = \frac{r}{2} + (t_b - t_w) - \frac{g}{2} + \frac{C q}{2s}$

For diagonal movement, pedestrians meet the signal control three times. Taking the route from point 1 to point 6 in Figure 7 as an example, pedestrians have to wait for the green signal of the east-west direction at point 1, for the green signal of the north-south direction at point 3, and for the green signal of the east-west direction at point 5. As shown in Figure 8, the area of the shaded part is the total pedestrian delay, which can be calculated by (17). The lengths of segments “AB”, “GH”, “KM”, “EI”, “JN”, and “NO” can be calculated by (18), (19), (20), (21), (22), and (23), respectively. Therefore, the average pedestrian delay caused by signal control can be calculated by (24).

(17) $S = \frac{(l_{AB} + l_{GH} + l_{KM} + l_{EI} + l_{JN})\, l_{NO}}{2}$

(18) $l_{AB} = r$

(19) $l_{GH} = t_b - t_w$

(20) $l_{KM} = t'_b - t'_w$

(21) $l_{EI} = t_b - t_w - g + \frac{C q}{s}$

(22) $l_{JN} = t'_b - t'_w$

(23) $l_{NO} = C q$

(24) $d_{s32} = \frac{r}{2} + (t_b - t_w) - \frac{g}{2} + \frac{C q}{2s} + (t'_b - t'_w)$

(2) Delay Caused by Conflicts. The calculation principle of the conflict delay under this design pattern is the same as under the conventional crossing pattern. Therefore, the delay caused by right-turn vehicles can be calculated by (25) and (26).

(25) $d_{c31} = \frac{1/\lambda - (\tau + 1/\lambda)\, e^{-\lambda\tau}}{e^{-\lambda\tau}}$

(26) $d_{c32} = \frac{1/\lambda - (\tau + 1/\lambda)\, e^{-\lambda\tau}}{e^{-\lambda\tau}} + \frac{1/\lambda' - (\tau + 1/\lambda')\, e^{-\lambda'\tau}}{e^{-\lambda'\tau}}$

Combining the delay caused by signal control and the delay caused by conflicts, the pedestrian delay under the interlaced crossing pattern can be calculated by (27).

(27) $d_3 = \frac{\sum_{j=1}^{2} q_j (d_{s3j} + d_{c3j})}{\sum_{j=1}^{2} q_j}$
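To make the structure of the conflict and total-delay formulas (11)–(14) and (25)–(27) concrete, here is a small Python sketch. The numeric inputs are illustrative assumptions; the conflict delay uses the gap-acceptance expression of (11)/(25), and each pattern delay is the flow-weighted average of its movement delays as in (14)/(27).

```python
import math

def acceptable_gap(W, vp, tp, t0):
    """(13): tau = W/v_p + t_p + t_0."""
    return W / vp + tp + t0

def conflict_delay(lam, tau):
    """(11)/(25): d_c = (1/lam - (tau + 1/lam) e^{-lam*tau}) / e^{-lam*tau},
    which simplifies algebraically to (e^{lam*tau} - lam*tau - 1) / lam."""
    e = math.exp(-lam * tau)
    return (1.0 / lam - (tau + 1.0 / lam) * e) / e

def pattern_delay(q, ds, dc):
    """(14)/(27): flow-weighted average of signal-control + conflict delays."""
    total = sum(qj * (dsj + dcj) for qj, dsj, dcj in zip(q, ds, dc))
    return total / sum(q)

# Illustrative inputs: 3.5 m lane, 1.2 m/s walking speed, 2 s decision wait,
# 1 s vehicle passing time, conflicting flow 0.1 veh/s
tau = acceptable_gap(W=3.5, vp=1.2, tp=2.0, t0=1.0)
dc = conflict_delay(lam=0.1, tau=tau)
# sanity check: the two algebraic forms of the conflict delay agree
assert abs(dc - (math.exp(0.1 * tau) - 0.1 * tau - 1) / 0.1) < 1e-9
```

The same `conflict_delay` helper serves both the conventional pattern (left- and right-turn conflicts) and the interlaced pattern (right-turn conflicts only); only the arrival rate `lam` changes.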
## 4. Model Validation

The accuracy of the models was tested by VISSIM simulation. The geometric design of the CFI to be tested is shown in Figure 9. The distance between the main signal and the presignal is 100 m. The pedestrian speed is 1.2 m/s. The pedestrian volume is 720 ped/h (an arriving rate of 0.2 ped/s). The saturation flow rate of the crosswalk is 8 ped/s. The traffic volume is shown in Table 2. The average pedestrian crossing delay is measured by calculating the weighted average delay over all pedestrian crossing routes.

Table 2 Traffic volume.

| Movement | East leg | West leg | South leg | North leg |
|---|---|---|---|---|
| Left-turn | 900 | 800 | 800 | 900 |
| Through | 1100 | 1200 | 1200 | 1100 |
| Right-turn | 400 | 500 | 500 | 400 |
| Total | 2400 | 2500 | 2500 | 2400 |

Figure 9 Geometric design of CFI.

The experiment was simulated in VISSIM, and the simulation results were then compared with the calculation results of the proposed model, as shown in Figure 10. The average errors for the conventional pattern (pattern 1), the exclusive phase pattern (pattern 2), and the interlaced crossing pattern (pattern 3) are 1.98 s, 1.24 s, and 2.56 s, respectively.
Moreover, the results of a paired t-test, shown in Table 3, indicate no significant difference between the results of the proposed model and those from the simulation (p value = 0.363 > 0.05), which suggests that the accuracy of the proposed pedestrian delay model is acceptable.

Table 3 Paired t-test results.

| Mean | Std. deviation | Std. error mean | 95% CI lower | 95% CI upper | t | df | Sig. (2-tailed) |
|---|---|---|---|---|---|---|---|
| 0.310 | 1.528 | 0.333 | -0.385 | 1.006 | 0.931 | 20 | 0.363 |

Figure 10 Comparison between the simulation results and the calculation results of the proposed model.

## 5. Sensitivity Analyses

The impact of the three pedestrian crossing patterns on pedestrian delay depends on the vehicular volume, the pedestrian volume, and the proportion of diagonal crossing. Sensitivity analyses of the three design patterns were conducted to further reveal their applicability.

The impact of vehicular volume on pedestrian delay is shown in Figure 11. The vehicular volume is varied from 300 veh/h/ln to 540 veh/h/ln. The pedestrian delays under all three patterns increase with vehicular volume; the conventional pattern is affected the most and the exclusive phase pattern the least. Under low vehicular volume (less than 380 veh/h/ln), the conventional pattern has the least pedestrian delay, because the vehicular flow offers many gaps for pedestrians, so crossing in the gaps between vehicles is the most efficient. Under medium vehicular volume (between 380 and 480 veh/h/ln), the interlaced crossing pattern has the shortest average pedestrian delay. Under high vehicular volume (more than 480 veh/h/ln), the exclusive phase pattern has the shortest average pedestrian delay, while the pedestrian delay of the conventional pattern becomes the longest.
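As a quick arithmetic check on the Table 3 statistics above, the summary values can be recomputed from the reported mean and standard deviation of the paired differences (df = 20 implies n = 21 paired observations; 2.086 is the standard two-sided 95% t critical value for 20 degrees of freedom):

```python
import math

# Reported paired-difference summary (Table 3): mean 0.310, sd 1.528, df 20
mean_diff, sd_diff, n = 0.310, 1.528, 21

se = sd_diff / math.sqrt(n)          # standard error of the mean difference
t_stat = mean_diff / se              # paired t statistic
t_crit = 2.086                       # t(0.975, df = 20) from standard tables
ci_low = mean_diff - t_crit * se
ci_high = mean_diff + t_crit * se

# Agrees with Table 3 within rounding of the reported inputs:
# se ~ 0.333, t ~ 0.93, CI ~ (-0.385, 1.006)
```

The small residual differences (e.g., t = 0.930 here versus 0.931 in the table) come from the mean and standard deviation being reported to three decimal places.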
This reversal at high volume occurs because the conflict delay of the conventional pattern increases dramatically with traffic volume, while it increases only slightly under the other two patterns.

Figure 11 Impact of vehicular volume.

The impact of pedestrian volume on pedestrian delay is illustrated in Figure 12. The pedestrian volume is varied from 500 ped/h to 1500 ped/h. Pedestrian delay shows a linear growth trend with only a slight slope, because the saturation flow rate of the crosswalk is quite large: all waiting pedestrians can depart within the first few seconds of the green phase.

Figure 12 Impact of pedestrian volume.

The impact of the proportion of diagonal crossing on pedestrian delay is shown in Figure 13. The proportion of diagonal crossing is varied from 0.2 to 0.8. It has the greatest effect on the conventional pattern. The average pedestrian delay of the conventional pattern is the shortest when the proportion of diagonal crossing is less than 0.6; the interlaced crossing pattern is the shortest when the proportion is between 0.6 and 0.75; and the exclusive phase pattern is the shortest when the proportion exceeds 0.75.

Figure 13 Impact of the proportion of diagonal crossing.

## 6. Conclusions

For the three design patterns of pedestrian crossing at the continuous flow intersection, delay models for the different pedestrian movements (through and diagonal crossing) were established in this paper according to the characteristics of pedestrians arriving at and leaving the intersection. The models were validated by VISSIM simulation. The impacts of vehicular volume, pedestrian volume, and the proportion of diagonal crossing on pedestrian delay under the three patterns were examined by sensitivity analyses.
This provides guidance for decision-makers to better select pedestrian crossing patterns at signalized intersections.

(1) The accuracy of the proposed models for the three design patterns is acceptable. The deviation is less than 3%, and there is no significant difference between the results of the proposed models and those from the simulation.

(2) Among the three design patterns, the conventional crossing pattern is mainly applicable to cases of low vehicular and pedestrian volume, while the interlaced crossing pattern is mainly applicable to cases of heavy traffic and high pedestrian demand.

(3) Although the exclusive phase pattern is seldom the optimal one (with the shortest delay) in the tested examples, it is the least sensitive to changes in traffic flow rate and movement proportions, so it is more suitable for cases with highly volatile traffic demand.

Note that the proposed delay models are based on the assumptions that traffic operates in an orderly manner and that the walking speed and saturation flow rate of pedestrians do not fluctuate; these assumptions, together with the effect of traffic fluctuation, should be considered in real-life applications. Moreover, for visually impaired pedestrians, it is very important to provide the requisite guidance for crossing, such as tactile paving, which can be a direction of future work to improve the applicability of CFIs.

---

*Source: 1016261-2019-01-23.xml*
1016261-2019-01-23_1016261-2019-01-23.md
33,975
Pedestrian Delay Model for Continuous Flow Intersections under Three Design Patterns
Tao Wang; Jing Zhao; Chaoyang Li
Mathematical Problems in Engineering (2019)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2019/1016261
1016261-2019-01-23.xml
--- ## Abstract In order to accurately evaluate the level of service of pedestrians and provide the basis for the optimized design, the pedestrian delay of the continuous flow intersection was analyzed. According to the characteristics of streams of pedestrians’ arriving and leaving, the pedestrian delay models of different directions (namely, straight and diagonal) were established for three pedestrian passing patterns of the continuous flow intersection. The accuracy of the models was verified by VISSIM. The deviation was less than 3%. The effects of three key factors, namely, the vehicle demands, pedestrian demands and percentage of diagonal crossing, on the delay of the pedestrian under three modes were discussed by sensitivity analysis. The results show that the traditional pedestrian passing pattern mainly applies on the conditions that vehicle and pedestrian demands are low. The pattern of interspersed pedestrian passing is mainly applicable to the conditions of high vehicle and pedestrian demands. Although the pattern of exclusive pedestrian passing phase was least selected as the optimal design, it can apply to traffic demand fluctuating condition for its insensitive to volume and pedestrian demand pattern. --- ## Body ## 1. Introduction The intersection is the bottleneck node of the urban road network, in which the traffic problems are mainly caused by the complicated traffic flow. To better deal with the conflict between left-turn and through vehicles, a series of unconventional intersection designs were proposed, including median U-turn intersections [1, 2], superstreet intersections [1, 3, 4], uninterrupted flow intersections [5], special width approach lanes [6], dynamic reversible lane control [7], displaced left-turn intersections [8, 9], exit-lanes for left-turn intersections (EFL) [10–12], and tandem intersections [13–15].The continuous flow intersection (CFI) is one of these unconventional intersections, which was first proposed by Mier [16]. 
It can eliminate the conflict between left-turn and through vehicles at the main signal by transferring the conflict point between the left-turn vehicles and opposite through vehicles to the upstream presignal, so that the capacity of intersections will be improved. A series of theoretical research and practical tests about CFI have been carried out, including geometric design, signal control, and operational evaluation. The CFI has been taken into application in real life in many countries [17], such as United States, Australia, and China.From the perspective of the geometric design, Inman [18] studied the traffic sign and lane marking system of the continuous flow intersection to improve its visual recognition based on driving simulator experiments. Hughes [17] gave a series of detailed suggestions on layout design, including left-turn lane length, presignal intersection width, turning radius, and other detail structure at continuous flow intersections. The distances between the main signal and presignal under different traffic demands were further recommended by Tanwanichkul [19] based on the operational efficiency comparison under different distances between the main signal and the presignal using the VISSIM simulation.From the perspective of signal control, Tarko [20] developed the basic strategy of signal timing according to the traffic characteristics of continuous flow intersections. On this basis, considering the conditions of the balance and unbalance of traffic flow in each approach and exit lane, Esawey [21] proposed a signal control method composed of six phases and realized it by Synchro software. Zhao [22, 23] established an integrated optimization model for the geometric layout and signal timing in which many key parameters, such as intersection form, lane function, left-turn lane length, and signal timing, were optimized in a unified framework.From the perspective of operational evaluation, Chang et al. 
developed a well-calibrated CFI traffic simulators using VISSIM to evaluate the operational properties under various constraints and traffic conditions, which can assist engineers in identifying potential bottlenecks and estimating delays for CFI designs at the planning stage [24]. Moreover, in practice, the average waiting time of vehicles decreased by 50% and the traffic capacity increased by 31% after the continuous flow intersections of Utah's 3500 South Highway and Bangerter Highway opened to the public in 2007, which further verified the practical significance of continuous flow intersections [25].The above research on continuous flow intersections mainly focuses on vehicles. However, in developing countries, walking is an important travel mode [26–28], so it is a necessary condition for its application in developing countries to deal with pedestrian crossing street at continuous flow intersections. For this, Jagannathan [29] proposed a new design for pedestrians, namely, interlaced crossing. The performance of vehicles and pedestrians was discussed based on Vissim simulation, in which the signal timing was optimized for vehicular traffic performance. Coates [30] further proposed a flexible signal control program under the interlaced pedestrian crossing design to reduce vehicle delay while prioritizing pedestrian crossing. The signal control procedure dynamically chooses the appropriate phase and green time combination to minimize delay by considering pedestrian wait time and existing queue length. However, these studies did not provide in-depth discussions about pedestrian delay at continuous flow intersections.A series of calculation models for pedestrian delay at conventional intersections have been established in the past. One commonly used pedestrian delay model is the one presented in the highway capacity manual (HCM) [31]. 
It is a function of the ratio of the length of the pedestrian effective green time and the length of the signal cycle for one-stage crossings based on uniform arrival rates. Based on this model, many influencing factors were deeply analyzed by researchers to enhance the accuracy of the calculation model. These factors include the pedestrian-vehicle conflict considering the variety of the motorist yielding behavior [32], the pedestrians’ arrival rate considering the irregularity of the pedestrians’ arrival rate [33–35], the signal noncompliance considering pedestrians who entered crosswalks during clearance phases or red phases [36–39], the information of remaining green time [40], the two-stage crossing design considering the effect of the signal timing at the first-stage crossing on the pedestrian arrivals at the second-stage crossing [41, 42], and the bidirectional pedestrian flow considering the opposing pedestrian flow on the walking speed [43, 44].Pedestrian service level is an important factor for the application of continuous flow intersections on urban roads. At present, the research on continuous flow intersections mainly focuses on vehicles and the evaluation of pedestrian delay is mainly based on simulation, which limits the application, especially in developing countries. Therefore, this paper aims to establish a calculation model of pedestrian delay at CFI, in which three design patterns of pedestrian crossing are considered. Considering the proposed delay models can be used to provide the guidance for engineering practice in the future, all possible pedestrian crossing patterns are discussed, even though some of them have not been taken into practice in the real world.The rest of the paper is organized as follows. In Section2, the three design patterns of pedestrian crossing at CFI are described. The detailed pedestrian delay calculation model is established in Section 3. The proposed model is validated in Section 4. 
The sensitivity of the key parameters in the proposed model is discussed in Section 5. Conclusions and recommendations are given at the end. ## 2. Design Patterns of Pedestrian Crossing at CFI The basic geometric design concept of continuous flow intersection is shown in Figure1(a). The presignal is set at the upstream of the intersection so that left-turn vehicles can be guided to the left of opposite traffic flow, which can eliminate the conflicts between left-turn and opposite through vehicles at the main signals. Therefore, the two-phase signal plan can be used in the main-signal and presignals. Combining the main-signal and presignals, the CFI can run under a six-phase plan, as illustrated in Figure 1(b).Figure 1 Concept of CFI. (a) Geometric design. (b) Phase plan. (a) (b)For the pedestrians, there are three design patterns, namely, the conventional pattern, the exclusive phase pattern, and the interlaced crossing pattern.Under the conventional pattern (pattern 1), the pedestrians cross the CFI as the same way as the conventional intersections, as shown in Figure2. There are conflicts between pedestrians and left/right turn vehicles when pedestrians are crossing the intersection at the green light through vehicles, because the continuous flow intersection runs in two phases at the main signal.Figure 2 Conventional pattern.Under the exclusive phase pattern (pattern 2), the pedestrians can only cross the street during the exclusive phase, as shown in Figure3. The exclusive phase for pedestrians is added on the basis of two phases for vehicles at the main signal. Pedestrians can cross the intersection in each direction (including diagonally) without any conflicts.Figure 3 Exclusive phase pattern.Under the interlaced crossing pattern (pattern 3), the pedestrians can cross the intersection between through vehicles in the same direction and left-turn vehicles in the opposite direction on the designed crosswalk, as shown in Figure4. 
In this way, the conflicts between pedestrians and left-turn vehicles can be eliminated, although pedestrians crossing the street will be delayed by multiple signals.

Figure 4 Interlaced crossing pattern.

## 3. Pedestrian Delay Model Establishment

### 3.1. Model Parameters

To facilitate the model presentation, the notations used hereafter are summarized in Table 1.

Table 1 Notations of key model parameters and variables.

| Notation | Definition |
| --- | --- |
| i | Index of the design pattern; i = 1, 2, 3 for the conventional, exclusive phase, and interlaced crossing patterns, respectively |
| j | Index of pedestrian movements; j = 1, 2 for through and diagonal movement, respectively |
| C | Cycle length of the intersection, s |
| g | Green time of the crosswalk, s |
| r | Red time of the crosswalk, s |
| t_b | Interval from the start of green of the crosswalk to that of the crosswalk on the crossing street, s |
| t′_b | Interval from the start of green of the crosswalk on the crossing street to that of the crosswalk on the studied street, s |
| t_w | Walking time on the crosswalk, s |
| t′_w | Walking time on the crosswalk on the crossing street, s |
| t_p | Waiting time for a pedestrian to decide whether to cross, s |
| t_0 | Passing time of a vehicle, s |
| τ | Acceptable gap for pedestrians to cross, s |
| q_1, q_2 | Pedestrian arrival rates for through and diagonal movement, respectively, ped/s |
| s | Saturation rate of pedestrians, ped/s |
| W | Width of one vehicle lane, m |
| v_p | Speed of pedestrians crossing the street, m/s |
| λ | Arrival rate of conflicting vehicles, veh/s |
| λ′ | Arrival rate of conflicting vehicles on the crossing street, veh/s |
| l | Length of a segment in the calculation diagrams |
| S | Area of the shadow in the calculation diagrams |
| d_sij | Delay caused by signal control for pedestrian movement j under design pattern i, s |
| d_cij | Delay caused by conflicts for pedestrian movement j under design pattern i, s |
| d_i | Delay of pedestrians under design pattern i, s |

### 3.2. Pedestrian Delay Model under Conventional Crossing Pattern

There are two aspects of pedestrian delay in this pattern: (1) delay caused by signal control: pedestrians have to wait at the roadside for the pedestrian phase, which leads to signal delay; (2) delay caused by conflicts: pedestrians must pass through the conflict zones of left/right-turn vehicles, which causes conflict delay.

(1) Delay Caused by Signal Control. Under the conventional crossing pattern, only the through movement of pedestrians is allowed. The diagonal movement can be accomplished by crossing the street twice. Therefore, the through and diagonal movements have to meet the signal control once and twice, respectively. The calculation diagrams of the two conditions are shown in Figures 5 and 6, respectively.

Figure 5 Calculation diagram for the pedestrians meeting the signal control once.

Figure 6 Calculation diagram for the pedestrians meeting the signal control twice.

For the through movement, as shown in Figure 5, the segment "AC" represents the arrival of pedestrians at the intersection, and its slope is the arrival rate. The segment "BC" represents the departure of pedestrians at the beginning of the green phase, and its slope is the saturation flow rate. The segment "BD" is the pedestrian dissipation time. The area of the shaded triangle "ABC" is the total pedestrian delay, which can be calculated by (1). The lengths of segments "AB" and "CD" can be calculated by (2) and (3), respectively. Therefore, the average pedestrian delay of signal control can be calculated by (4).

(1) S = (l_AB · l_CD) / 2
(2) l_AB = r
(3) l_CD = s q_1 r / (s − q_1)
(4) d_s11 = S_ABC / (C q_1) = s r² / (2C(s − q_1))

For the diagonal movement, as shown in Figure 6, the segment "AD" represents the arrival of pedestrians at the intersection. The segment "BC" represents the departure of pedestrians at the beginning of the green phase. The segment "EFG" represents the arrival of pedestrians at the second stage of crossing. The segment "HI" represents the departure of pedestrians at the beginning of the green phase along the crossing street. The length of the horizontal segment in triangle "ABC" denotes the delay while pedestrians wait for the green light at the first stage of crossing; the length of the horizontal segment in polygon "BCDEFG" denotes the pedestrian walking time on the crosswalk; the length of the horizontal segment in polygon "EFGHI" denotes the delay while pedestrians wait for the green light at the second stage of crossing. Therefore, the area of the shaded part is the total pedestrian delay, which can be calculated by (5). The lengths of segments "AB", "GH", "EI", and "IK" can be calculated by (6), (7), (8), and (9), respectively. Therefore, the average pedestrian delay of signal control can be calculated by (10).

(5) S = (l_AB + l_GH + l_EI) · l_IK / 2
(6) l_AB = r
(7) l_GH = t_b − t_w
(8) l_EI = t_b − t_w − g + Cq/s
(9) l_IK = Cq
(10) d_s12 = r/2 + (t_b − t_w − g)/2 + Cq/(2s)

(2) Delay Caused by Conflicts. The delay caused by conflicts is directly related to the left-turn and right-turn traffic flows and the headway distribution between vehicles, and it can be calculated according to the equations in the literature [45]. Therefore, the conflict delays for the through and diagonal movements can be calculated by (11) and (12), respectively, in which the acceptable gap for pedestrians to cross, τ, can be calculated by (13) [45].

(11) d_c11 = [1/λ − (τ + 1/λ) e^(−λτ)] / e^(−λτ)
(12) d_c12 = [1/λ − (τ + 1/λ) e^(−λτ)] / e^(−λτ) + [1/λ′ − (τ + 1/λ′) e^(−λ′τ)] / e^(−λ′τ)
(13) τ = W/v_p + t_p + t_0

Combining the delay caused by signal control and the delay caused by conflicts, the pedestrian delay under the conventional crossing pattern can be calculated by (14).

(14) d_1 = Σ_{j=1}^{2} q_j (d_s1j + d_c1j) / Σ_{j=1}^{2} q_j

### 3.3. Pedestrian Delay Model under Exclusive Phase Pattern

Pedestrians have an exclusive phase, so the through and diagonal pedestrians can cross the street simultaneously, and their delay is caused only by signal control, which can be illustrated by Figure 5.
The shaded triangle "ABC" is the total pedestrian delay, and the equation for the average pedestrian delay is shown in (15).

(15) d_2 = Σ_{j=1}^{2} [s q_j r² / (2C(s − q_j))] / Σ_{j=1}^{2} q_j

### 3.4. Pedestrian Delay Model under Interlaced Crossing Pattern

Pedestrian delay in this pattern is mainly caused by signal control and by conflicts with right-turn vehicles.

(1) Delay Caused by Signal Control. The setting of the interlaced crosswalk is illustrated in Figure 7. Under the interlaced crossing pattern, the through and diagonal movements have to meet the signal control twice and thrice, respectively. The calculation diagrams of the two conditions are shown in Figures 6 and 8, respectively.

Figure 7 Setting of the interlaced crosswalk.

Figure 8 Calculation diagram for the pedestrians meeting the signal control thrice.

For the through movement, pedestrians meet the signal control twice. For ease of discussion, using the route from point 1 to point 4 in Figure 7 as an example, pedestrians have to wait for the green signal of the east-west direction at point 1 and for the green signal of the south-north direction at point 3. As shown in Figure 6, the segment "AD" represents the arrival of pedestrians at point 1. The segment "BC" represents the departure of pedestrians at the beginning of the green phase. The segment "EFG" represents the arrival of pedestrians at point 3. The segment "HI" represents the departure of pedestrians at the beginning of the green phase along the crossing street. Therefore, the average pedestrian delay of signal control can be calculated by (16).

(16) d_s31 = r/2 + (t_b − t_w − g)/2 + Cq/(2s)

For the diagonal movement, pedestrians meet the signal control thrice. For ease of discussion, using the route from point 1 to point 6 in Figure 7 as an example, pedestrians have to wait for the green signal of the east-west direction at point 1, for the green signal of the south-north direction at point 3, and for the green signal of the east-west direction at point 5. As shown in Figure 8, the area of the shaded part is the total pedestrian delay, which can be calculated by (17). The lengths of segments "AB", "GH", "KM", "EI", "JN", and "NO" can be calculated by (18), (19), (20), (21), (22), and (23), respectively. Therefore, the average pedestrian delay of signal control can be calculated by (24).

(17) S = (l_AB + l_GH + l_KM + l_EI + l_JN) · l_NO / 2
(18) l_AB = r
(19) l_GH = t_b − t_w
(20) l_KM = t′_b − t′_w
(21) l_EI = t_b − t_w − g + Cq/s
(22) l_JN = t′_b − t′_w
(23) l_NO = Cq
(24) d_s32 = r/2 + (t_b − t_w − g)/2 + Cq/(2s) + (t′_b − t′_w)

(2) Delay Caused by Conflicts. The calculation principle of the delay caused by conflicts under this design pattern is the same as that of the conventional crossing pattern. Therefore, the delay caused by right-turn vehicles can be calculated by (25) and (26).

(25) d_c31 = [1/λ − (τ + 1/λ) e^(−λτ)] / e^(−λτ)
(26) d_c32 = [1/λ − (τ + 1/λ) e^(−λτ)] / e^(−λτ) + [1/λ′ − (τ + 1/λ′) e^(−λ′τ)] / e^(−λ′τ)

Combining the delay caused by signal control and the delay caused by conflicts, the pedestrian delay under the interlaced crossing pattern can be calculated by (27).

(27) d_3 = Σ_{j=1}^{2} q_j (d_s3j + d_c3j) / Σ_{j=1}^{2} q_j
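The delay equations of Section 3 can be collected into a short computational sketch. The functions below mirror eqs. (4), (10)/(16), (11)/(25), (13), and (14)/(27); all numeric inputs are hypothetical illustration values chosen by us, not taken from the paper.

```python
import math

def signal_delay_one_stage(C, r, s, q):
    """Average signal-control delay for one crossing stage, eq. (4):
    d = s * r^2 / (2 * C * (s - q))."""
    return s * r ** 2 / (2 * C * (s - q))

def signal_delay_two_stage(C, r, g, t_b, t_w, s, q):
    """Average signal-control delay over two stages, eq. (10)/(16):
    d = r/2 + (t_b - t_w - g)/2 + C*q/(2*s)."""
    return r / 2 + (t_b - t_w - g) / 2 + C * q / (2 * s)

def acceptable_gap(W, v_p, t_p, t_0):
    """Eq. (13): tau = W / v_p + t_p + t_0."""
    return W / v_p + t_p + t_0

def conflict_delay(lam, tau):
    """Gap-acceptance conflict delay, eq. (11)/(25):
    d = (1/lam - (tau + 1/lam) * e^(-lam*tau)) / e^(-lam*tau)."""
    e = math.exp(-lam * tau)
    return (1 / lam - (tau + 1 / lam) * e) / e

def pattern_delay(q, d_s, d_c):
    """Flow-weighted average over movements, eq. (14)/(27)."""
    return sum(qj * (dsj + dcj) for qj, dsj, dcj in zip(q, d_s, d_c)) / sum(q)

# Hypothetical illustration values (not from the paper):
C, r, g = 120.0, 70.0, 50.0   # cycle, red, and green time of the crosswalk, s
s_sat = 8.0                   # pedestrian saturation rate, ped/s
q1, q2 = 0.2, 0.05            # through / diagonal pedestrian arrival rates, ped/s
t_b, t_w = 40.0, 12.0         # signal offset and crosswalk walking time, s
lam = 0.1                     # conflicting-vehicle arrival rate, veh/s

tau = acceptable_gap(W=3.5, v_p=1.2, t_p=2.0, t_0=2.0)
ds11 = signal_delay_one_stage(C, r, s_sat, q1)
ds12 = signal_delay_two_stage(C, r, g, t_b, t_w, s_sat, q2)
dc11 = conflict_delay(lam, tau)
dc12 = 2 * dc11               # eq. (12) with lam' = lam for simplicity
d1 = pattern_delay([q1, q2], [ds11, ds12], [dc11, dc12])

# Self-check: eq. (11) simplifies algebraically to Adams' gap-acceptance
# delay, (e^(lam*tau) - lam*tau - 1) / lam.
adams = (math.exp(lam * tau) - lam * tau - 1) / lam
assert abs(dc11 - adams) < 1e-9
```

The final assertion is a useful sanity check on the reconstructed eq. (11): dividing through by e^(−λτ) reduces it to the classical Adams delay formula for waiting on gaps in a Poisson vehicle stream.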
## 4. Model Validation

The accuracy of the models was tested by VISSIM simulation. The geometric design of the CFI to be tested is shown in Figure 9. The distance between the main signal and the presignal is 100 m. The pedestrian speed is 1.2 m/s. The pedestrian volume is 720 ped/h (an arrival rate of 0.2 ped/s). The saturation flow rate of the crosswalk is 8 ped/s. The traffic volume is shown in Table 2. The average pedestrian crossing delay is measured as the weighted average delay over all pedestrian crossing routes.

Table 2 Traffic volume.

| Movement | East leg | West leg | South leg | North leg |
| --- | --- | --- | --- | --- |
| Left-turn | 900 | 800 | 800 | 900 |
| Through | 1100 | 1200 | 1200 | 1100 |
| Right-turn | 400 | 500 | 500 | 400 |
| Total | 2400 | 2500 | 2500 | 2400 |

Figure 9 Geometric design of CFI.

The experiment was simulated in VISSIM, and the simulation results were then compared with the calculation results of the proposed model, as shown in Figure 10. The average errors for the conventional pattern (pattern 1), the exclusive phase pattern (pattern 2), and the interlaced crossing pattern (pattern 3) are 1.98 s, 1.24 s, and 2.56 s, respectively.
Moreover, the results of the paired t-test, as shown in Table 3, further show no significant difference between the results of the proposed model and those from the simulation (p value = 0.363 > 0.05), which indicates that the accuracy of the proposed pedestrian delay model is acceptable.

Table 3 Paired t-test results.

| Mean | Std. deviation | Std. error mean | 95% CI lower | 95% CI upper | t | df | Sig. (2-tailed) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.310 | 1.528 | 0.333 | -0.385 | 1.006 | 0.931 | 20 | 0.363 |

Figure 10 Comparison between the proposed model and the simulation results.

## 5. Sensitivity Analyses

The impact of the three pedestrian crossing patterns on pedestrian delay is related to the vehicular volume, the pedestrian volume, and the proportion of diagonal crossing. Sensitivity analyses of the three design patterns were conducted to further reveal their applicability.

The impact of vehicular volume on pedestrian delay is shown in Figure 11, with the vehicular volume varied from 300 veh/h/ln to 540 veh/h/ln. The pedestrian delays under all three patterns increase with the vehicular volume; the conventional pattern is affected the most and the exclusive phase pattern the least. Under low vehicular volume (less than 380 veh/h/ln), the conventional pattern has the least pedestrian delay, because the vehicular flow offers many gaps when the volume is not high, so crossing the street in the gaps between vehicles is most efficient. Under medium vehicular volume (between 380 and 480 veh/h/ln), the interlaced crossing pattern has the shortest average pedestrian delay. Under high vehicular volume (more than 480 veh/h/ln), the exclusive phase pattern has the shortest average pedestrian delay, while the conventional pattern has the longest.
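The paired t-test in Table 3 can be sanity-checked from its own summary statistics with stdlib arithmetic; the only outside input below is the standard two-tailed critical value t(0.975, df = 20) ≈ 2.086.

```python
import math

# Summary statistics reported in Table 3 (paired model-simulation differences).
mean_diff, sd_diff, df = 0.310, 1.528, 20
n = df + 1                    # 21 paired observations

se = sd_diff / math.sqrt(n)   # standard error of the mean difference
t_stat = mean_diff / se       # paired t statistic
t_crit = 2.086                # two-tailed critical value t(0.975, df=20)
ci_low = mean_diff - t_crit * se
ci_high = mean_diff + t_crit * se
```

Up to rounding of the reported mean and standard deviation, this reproduces the tabulated standard error (≈0.333), t statistic (≈0.93 versus the reported 0.931), and 95% confidence interval (≈(−0.385, 1.006)), so the table is internally consistent.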
This is because the conflict delay of the conventional pattern increases dramatically with traffic volume, while it increases only slightly under the other two patterns.

Figure 11 Impact of vehicular volume.

The impact of pedestrian volume on pedestrian delay is illustrated in Figure 12, with the pedestrian volume varied from 500 ped/h to 1500 ped/h. Pedestrian delay shows a linear growth trend with only a slight slope, because the saturation flow rate of the crosswalk is quite large: all the waiting pedestrians can depart within the first few seconds of the green phase.

Figure 12 Impact of pedestrian volume.

The impact of the proportion of diagonal crossing on pedestrian delay is shown in Figure 13, with the proportion varied from 0.2 to 0.8. The conventional pattern is affected the most. The average pedestrian delay of the conventional pattern is the shortest when the proportion of diagonal crossing is less than 0.6, the interlaced crossing pattern is the shortest when the proportion is between 0.6 and 0.75, and the exclusive phase pattern is the shortest when the proportion exceeds 0.75.

Figure 13 Impact of the proportion of diagonal crossing.

## 6. Conclusions

For the three design patterns of pedestrian crossing at the continuous flow intersection, delay models for the different pedestrian movements (through and diagonal crossing) were established in this paper according to the characteristics of pedestrians arriving at and leaving the intersection. The models were validated by VISSIM simulation. The impacts of vehicular volume, pedestrian volume, and the proportion of diagonal crossing on pedestrian delay under the three patterns were examined by sensitivity analyses.
This provides guidance for decision-makers to better select pedestrian crossing patterns at signalized intersections.

(1) The accuracy of the proposed models for the three design patterns is acceptable: the deviation is less than 3%, and there is no significant difference between the results of the proposed models and those from the simulation.

(2) Among the three design patterns, the conventional crossing pattern is mainly applicable to cases of low vehicular and pedestrian volume, while the interlaced pedestrian crossing pattern is mainly applicable to cases of heavy traffic and high pedestrian demand.

(3) Although the exclusive phase pattern is seldom the optimal one (with the shortest delay) in the tested examples, it is the least sensitive to changes in traffic flow rate and direction ratio, so it is more suitable for cases with high volatility of traffic demand.

Please note that the proposed delay models are established on the assumptions that traffic operates in an orderly manner and that the walking speed and saturation flow rate of pedestrians do not fluctuate; these assumptions, together with the effect of traffic fluctuation, should be considered in real-life applications. Moreover, for people with vision impairment, it is very important to provide the requisite guidance for crossing, such as tactile paving, which can be a direction of future work to improve the applicability of CFIs.

---

*Source: 1016261-2019-01-23.xml*
2019
# A Medical Decision Support System to Assess Risk Factors for Gastric Cancer Based on Fuzzy Cognitive Map

**Authors:** Seyed Abbas Mahmoodi; Kamal Mirzaie; Maryam Sadat Mahmoodi; Seyed Mostafa Mahmoudi

**Journal:** Computational and Mathematical Methods in Medicine (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1016284

---

## Abstract

Gastric cancer (GC), one of the most common cancers around the world, is a multifactorial disease, and there are many risk factors for it. Assessing the risk of GC is essential for choosing an appropriate healthcare strategy. Very few studies have been conducted on the development of risk assessment systems for GC. This study aims to provide a medical decision support system based on soft computing using fuzzy cognitive maps (FCMs), which will help healthcare professionals decide on an appropriate individual healthcare strategy based on the risk level of the disease. FCMs are considered one of the strongest artificial intelligence techniques for complex system modeling. In this system, an FCM based on the Nonlinear Hebbian Learning (NHL) algorithm is used. The data used in this study were collected from the medical records of 560 patients referred to Imam Reza Hospital in Tabriz City. Twenty-seven features relevant to gastric cancer were selected using the opinions of three experts. The prediction accuracy of the proposed method is 95.83%. The results show that the proposed method is more accurate than other decision-making algorithms, such as decision trees, Naïve Bayes, and ANN. From the perspective of healthcare professionals, the proposed medical decision support system is simple, comprehensive, and more effective than previous models for assessing the risk of GC, and it can help them predict the risk factors for GC in the clinical setting.

---

## Body

## 1. Introduction

Gastric cancer (GC), one of the major cancers around the world with about one million new patients each year, is known to be the third leading cause of cancer deaths [1, 2]. It represents an important public health issue worldwide, especially in Central Asian countries, where the incidence of this disease is very high [2]. GC is a multifactorial disease, and its formation is related to various risk factors [3]. Various scientific methods, such as photofluorography and esophagogastroduodenoscopy, are used to diagnose GC in the early stages and can help reduce the mortality rate of GC with a practical approach [3]. Given that these methods are invasive and expensive, it is necessary to provide a simple, inexpensive, and effective tool for identifying people at risk for GC, who can then undergo more accurate examinations. Moreover, appropriate prevention efforts can be made to reduce the incidence of this disease.

The initial definitions of the decision support system (DSS) consider it a system to support managerial decision-makers in semistructured and unstructured decisions [4]. Accordingly, a DSS helps decision-makers and increases their ability rather than replacing their judgment [4]. Today, DSSs are used in a variety of areas, such as management, industry, agriculture, information systems, medicine, and hundreds of other fields. The medical decision support system (MDSS) is a computer system designed to help physicians or other healthcare professionals make clinical decisions.
Some applications of the medical decision support system are outlined below [5]:

(i) preventive care services, for example, screenings for blood pressure and cancer
(ii) patient symptom checkers
(iii) care plans
(iv) guides to reducing long hospital stays
(v) intelligent health monitoring systems

MDSSs have numerous advantages, the most important of which is to minimize medical error and to provide a relatively stable structure for diagnosing and treating disease, thereby resolving the varied and conflicting opinions of specialists [5]. Therefore, it is vital to design and implement such models.

FCMs are soft computing methods that attempt to imitate human decision-making and reasoning [6]. In fact, an FCM is an instrument for modeling multifaceted systems, attained by integrating neural networks and fuzzy logic [7, 8], and for describing a complex system's behavior using concepts. This technique creates a conceptual model in which each concept represents a characteristic or a state of the system that dynamically interacts with the other concepts [9]. An FCM is a graphical representation of a system's structure [10]. From the artificial intelligence perspective, FCMs are dynamic learning networks; thus, more data for modeling the problem can help the system adapt itself and reach a solution. This conceptual model is not restricted to exact measurements and quantities; hence, it is very appropriate for concepts without accurate structures.

FCMs were presented by Kosko as fuzzy directed graphs with signs and feedback loops to illustrate the computational complexity and dependence of a model symbolically and explicitly [11]. In other words, an FCM creates a set of nodes that affect each other via causal relations. The details and mathematical formulation of this technique are described in Supplementary Materials (available here).
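The paper leaves the FCM formulation to its Supplementary Materials. For orientation, below is a minimal sketch of a commonly used FCM update rule (a generic formulation from the FCM literature, not necessarily the exact variant these authors use): each concept activation A_i is repeatedly recomputed as a sigmoid of its own value plus the weighted influence of the other concepts. The three-concept map and its weights are purely illustrative.

```python
import math

def sigmoid(x, lam=1.0):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(A, W):
    """One FCM update: A_i <- f(A_i + sum_{j != i} W[j][i] * A_j).

    W[j][i] is the causal weight of concept j on concept i, in [-1, 1].
    """
    n = len(A)
    return [sigmoid(A[i] + sum(W[j][i] * A[j] for j in range(n) if j != i))
            for i in range(n)]

def fcm_run(A, W, iters=100, tol=1e-6):
    """Iterate the map until the activation vector stabilizes (or iters runs out)."""
    for _ in range(iters):
        A_next = fcm_step(A, W)
        if max(abs(a - b) for a, b in zip(A_next, A)) < tol:
            return A_next
        A = A_next
    return A

# Toy 3-concept map with hypothetical causal weights (illustration only):
W = [[0.0, 0.6, 0.0],
     [0.0, 0.0, 0.7],
     [-0.3, 0.0, 0.0]]
steady = fcm_run([0.5, 0.2, 0.1], W)
```

In a diagnostic FCM like the one described here, the input concepts would be initialized from a patient's risk factors and the stabilized activation of an output concept read off as the risk level.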
Using the benefits of fuzzy systems (if-then rules) and neural networks (teaching and learning), the FCM quickly proved its effectiveness in various areas, with successful applications in politics, economics, engineering, medicine, etc. [12].

In recent years, MDSSs using FCMs have been developed as one of the main applications of this tool. The FCM has emerged as a tool for representing and studying the behavior of systems, and it can deal with complex systems using an argumentative process. This study aims to provide an MDSS for assessing the risk of GC using FCM.

In the following, some successful instances of FCM applications in decision support systems are provided. Papageorgiou et al. [13] utilized FCMs for predicting infectious diseases and infection severity. A novel FCM-based technique was presented by Amirkhani et al. [14] to screen and isolate UDH from other internal brain lesions; to this end, they examined 86 patients in Shahid Beheshti Hospital in Isfahan City. The pathologist extracted the ten key properties needed to screen these lesions, to be used as the key concepts of the FCM. The accuracy of the suggested technique was 95.35%. Based on the results, not only did the suggested FCM have a high accuracy level, but it also provided an acceptable false-negative rate (FNR). A decision support system was proposed by Baena de Moraes Lopes et al. [7] to diagnose changes in urinary elimination, based on the nursing terminology of the North American Nursing Diagnosis Association International (NANDA-I). An FCM model was applied to 195 cases of urinary incontinence classified according to NANDA-I.
High specificity and sensitivity values of 0.92 and 0.95, respectively, were found for the FCM model; however, it provided a low specificity value in determining the diagnosis of urge urinary incontinence (0.43), along with a low sensitivity value for overall urinary incontinence (0.42).

Recently, the use of FCMs with Hebbian-based learning capabilities has increased. In [15], a decision-making framework was proposed that can accurately assess the progression of depression symptoms in elderly people and warn healthcare providers by providing useful information for regulating the patient's treatment. In [16], a risk management system for familial breast cancer was presented using the NHL-based FCM technique; the data needed for that study were extracted from 40 patients, 18 key features were selected, and the results showed an accuracy of 95%. In [17], the first specialized diagnostic system for obesity was proposed based on psychological and social characteristics; a mathematical model based on FCM was presented, with which the effects of different weight-loss treatment methods can be studied.

No certain cause of GC exists. The cause-effect associations between the integrated impacts of the multiple risk factors and the probability of developing GC have not been systematically investigated and understood so far. Even the opinions of radiologists and oncologists are largely subjective in this regard. In such instances, an FCM can serve as a human-friendly and transparent clinical support instrument to determine the cause-effect associations between the factors, and the subjectivity can be remarkably reduced by grading the degrees of their effects on the risk level. The present work is mainly focused on developing a clinical decision-making instrument, in terms of an FCM, to evaluate GC risk.

## 2. Methods

### 2.1. FCM Model for GC Risk Factors

Addressing GC is a complex process that requires understanding the various parameters, risk factors, and symptoms to make the right decision and assessment. This study assesses the risk of GC by providing a medical decision-making system whose design is based on a proposed FCM model, presented below. Designing and developing a suitable FCM requires human knowledge to describe the decision support system; in this study, GC specialists contributed to the development of the FCM model. The development of the FCM model is divided into three main steps, briefly summarized as follows:

(1) identify the concepts;
(2) determine the relationships between the concepts and the initial weights;
(3) weighting (learning).

First, the experts individually identify the factors that contribute to GC, and the concepts common among the specialists are selected as the model nodes. The second step is to identify the relationships between concepts. To this end, the experts define the interactions between concepts with respect to fuzzy variables: they determine whether a relationship exists and, if so, its direction, and they express the magnitude of the effect as very low, low, medium, high, or very high. Finally, the linguistic variables expressed by the experts are integrated: using the SUM technique, these values are aggregated, and the total linguistic weight is defuzzified by the centroid method and converted to a numerical value. The corresponding weight matrix is then constructed. The third step is choosing a learning algorithm to tune the initial weights; as in neural networks, the purpose of the learning algorithm is to adjust the weights and thereby improve the FCM model.

These steps were applied step by step to develop an FCM model for GC, using the opinions of three specialists.
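The weight-learning step is only named in the text. As a sketch, one commonly cited form of the Nonlinear Hebbian Learning rule updates each expert-declared (nonzero) weight from the current concept activations; the rule below, the rates eta and gamma, and the toy weights are illustrative assumptions, not necessarily the paper's exact algorithm.

```python
def nhl_update(W, A, eta=0.01, gamma=0.98):
    """One Nonlinear Hebbian Learning sweep over the expert-declared weights.

    Applies a commonly cited form of the NHL rule,
        w_ji <- gamma * w_ji + eta * A_j * (A_i - w_ji * A_j),
    only to weights the experts marked as nonzero; zero entries (no causal
    link) stay zero. This is a sketch under stated assumptions, not the
    authors' exact variant.
    """
    n = len(A)
    return [[gamma * W[j][i] + eta * A[j] * (A[i] - W[j][i] * A[j])
             if W[j][i] != 0.0 else 0.0
             for i in range(n)]
            for j in range(n)]

# Hypothetical toy example: a single expert-defined link from concept 0 to 1.
W_demo = nhl_update([[0.0, 0.5], [0.0, 0.0]], [0.8, 0.4])
```

In a full training loop, such an update would alternate with the FCM state update until the output concepts match the expert-labeled cases within tolerance.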
In the first phase of the research presented in this article, information on GC risk factors was collected from medical sources, pathologists, and informal sources [18–48]. The collected knowledge was transformed into a well-structured questionnaire and presented to three experts. The questionnaire covers risk factors associated with GC, and based on the three experts' responses, 27 common features were identified as the major risk factors for end-stage GC. Risk factors for gastric cancer may be categorized into four groups (personal features, systemic conditions, stomach condition, and diet food), each of which includes several risk factors. The final features are presented in Figure 1, and their explanations are given in Table 1.

Figure 1 Classification of GC risk factors.

Table 1 Risk factors of GC.

| Risk factor | Description |
|---|---|
| C1: sex | Studies show that men worldwide are diagnosed with GC almost twice as often as women [18]. |
| C2: blood group | Research shows a significant relationship between blood type and GC: blood groups A and O have the highest and lowest incidence of GC, respectively [19]. |
| C3: BMI | High BMI increases the risk of GC [20]. In 2016, the IARC formed a team of specialists who reported that GC is one of the diseases caused by excessive fat gain and high BMI [21]. |
| C4: age | The risk of GC increases with age [18, 22, 23]. |
| C5: motility | People with regular physical activity have a lower risk of GC than inactive people. According to the US Physical Activity Guidelines Advisory Committee (2018), moderate evidence shows that physical activity reduces the risk of various cancers, including GC [21]. |
| C6: alcohol consumption | Regular alcohol consumption increases the risk of GC [24, 25]. |
| C7: exposure to chemicals | Some jobs with exposure to chemicals, such as cement and chromium, increase the risk of GC [26]. |
| C8: smoking | Smoking increases the risk of GC [27, 28]. |
| C9: salt consumption | High salt intake increases the risk of GC [23, 29, 30]. |
| C10: consumption of vegetables | A daily consumption of 200-200 grams of vegetables may reduce the risk of GC [31]. |
| C11: consumption of smoked food | Smoked food is a major source of polycyclic aromatic hydrocarbons (PAHs); research has shown that these pollutants are involved in many cancers, including GC [32, 33]. |
| C12: milk consumption | Increased dairy consumption, such as milk, is associated with a lower risk of GC [34]. |
| C13: fast food consumption | Fast food consumption is one of the factors affecting the incidence of GC [35]. |
| C14: consumption of fried foods | Studies show that people who eat a lot of fried foods are at increased risk of GC [27, 28]. |
| C15: fruit consumption | A daily consumption of 120-150 grams of fruit may reduce the risk of GC [31]. |
| C16: food storage container | Today's food containers are often made with chemicals, such as plastics containing bisphenol A, which can be a source of various cancers and hormonal disorders [36]. |
| C17: baking dish | Metal cookware, such as aluminum, can contribute to disease because these metals emit small amounts of lead when exposed to heat [37]. |
| C18: history of allergy | Recent studies indicate that a history of allergic diseases is associated with a lower risk of GC [38]. |
| C19: family history of cancer | A family history of cancer at certain specific sites may be associated with a risk of GC [39]. |
| C20: family history of GC | This risk factor is strongly associated with different types of GC [40, 41]. |
| C21: history of cardiovascular disease | People with cardiovascular disease are at a lower risk of GC because of certain drugs they use [42]. |
| C22: general health status | People with a good general health status are less likely to be at risk of GC [43]. |
| C23: history of gastric reflux | Gastric reflux increases the risk of GC by 3-10% [44]. |
| C24: history of stomach surgery | Gastric surgeries, such as for gastric ulcers, may increase the risk of cancer [45]. |
| C25: history of stomach infection | Helicobacter pylori is the most important risk factor for GC [46–48]. |
| C26: mucosa status | Gastric ulcers are considered a risk factor for GC [35]. |
| C27: history of gastric inflammation | A history of gastric inflammation is one of the most important factors in the incidence of GC [35]. |

In the second phase, the sign of the relationship between each pair of concepts is determined first, and then the numerical value of the relationship is calculated. Five membership functions were used for this purpose. Consider the following example. 1st specialist: C4 has a great impact on C27. 2nd specialist: C4 has a moderate impact on C27. 3rd specialist: C4 has a great impact on C27. Using the SUM method, the above three linguistic weights (high, very high, and very high) are aggregated.
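As a minimal sketch of this step, the three linguistic weights can be SUM-aggregated and defuzzified by the centroid method in a few lines of Python. The triangular membership functions below are hypothetical stand-ins, since the paper does not list the exact five membership functions it used:

```python
# Hypothetical triangular membership functions for the five linguistic
# weight terms on [0, 1]; the sign of the causal edge is handled separately.
TERMS = {
    "very_low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very_high": (0.75, 1.0, 1.0),
}

def tri(x, a, b, c):
    """Membership value of x in the triangle (a, b, c)."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

def defuzzify_sum(terms, steps=1001):
    """SUM-aggregate the experts' terms and return the centroid."""
    num = den = 0.0
    for i in range(steps):
        x = i / (steps - 1)
        mu = sum(tri(x, *TERMS[t]) for t in terms)  # SUM aggregation
        num += x * mu
        den += mu
    return num / den

# The example above: one "high" and two "very high" opinions.
w = defuzzify_sum(["high", "very_high", "very_high"])
print(round(w, 3))
```

With these assumed membership functions, the crisp value comes out near 0.83; such a value would become the initial entry of the weight matrix for the C4-to-C27 edge.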
Figure 2 represents the centroid defuzzification method implemented to calculate the numerical value of the weight in the range [−1, 1].

Figure 2 Aggregation and defuzzification of linguistic weights.

Using this method, the weights of all relationships between the concepts of the FCM for GC were calculated; the developed FCM is shown in Figure 4. In the third step, we used a learning algorithm to train the model, which includes updating the relationship weights, and finally a fuzzy cognitive map for GC risk factors was extracted. For this purpose, data collected via a questionnaire from 560 patients referred to Imam Reza Hospital in Tabriz (after the preprocessing steps) were used. Table 2 shows the features, values, and frequencies of the patients.

Figure 3 User interface of the proposed MDSS.

Table 2 Data sets.

| Feature | Range | Number | Percent |
|---|---|---|---|
| Sex | Male | 256 | 45.7% |
| | Female | 304 | 54.3% |
| Age | <40 | 20 | 3.47% |
| | 41–60 | 210 | 37.5% |
| | ≥61 | 330 | 59.03% |
| Blood group | A | 123 | 21.96% |
| | B | 78 | 13.92% |
| | AB | 80 | 14.28% |
| | O | 279 | 49.82% |
| BMI | >30 | 69 | 12.32% |
| | 25–29.5 | 76 | 13.57% |
| | 18.5–24.9 | 120 | 21.42% |
| | <18.5 | 293 | 52.32% |
| Motility | Light | 156 | 27.85% |
| | Medium | 236 | 42.14% |
| | High | 168 | 30% |
| Alcohol consumption | Yes | 85 | 15.17% |
| | No | 475 | 84.82% |
| Exposed to chemicals | Yes | 54 | 9.64% |
| | No | 506 | 90.35% |
| Smoking | Yes | 198 | 35.35% |
| | No | 362 | 64.64% |
| Salt consumption | None | 10 | 1.78% |
| | Low | 175 | 31.25% |
| | High | 375 | 66.96% |
| Consumption of vegetables | Daily | 26 | 4.64% |
| | 1-3 times a week | 214 | 38.21% |
| | 1-3 times a month | 320 | 57.14% |
| Consumption of smoked food | None | 5 | 0.89% |
| | Daily | 0 | 0% |
| | 1-3 times a week | 149 | 26.60% |
| | 1-3 times a month | 406 | 72.5% |
| Milk consumption | Yes | 214 | 38.21% |
| | No | 346 | 61.78% |
| Fast food consumption | None | 4 | 0.71% |
| | 1-3 times a week | 315 | 56.25% |
| | 1-3 times a month | 241 | 43.03% |
| Consumption of fried foods | None | 0 | 0% |
| | 1-3 times a week | 191 | 34.10% |
| | 1-3 times a month | 369 | 65.89% |
| Fruit consumption | None | 6 | 1.07% |
| | 1-3 times a week | 185 | 33.03% |
| | 1-3 times a month | 369 | 65.89% |
| Food storage container | Aluminum | 216 | 38.57% |
| | Plastic | 301 | 53.75% |
| | Copper | 32 | 5.71% |
| | Style | 9 | 1.60% |
| | Chinese | 2 | 0.35% |
| Baking dish | Aluminum | 10 | 1.78% |
| | Teflon | 390 | 69.64% |
| | Copper | 21 | 3.75% |
| History of allergy | Yes | 89 | 15.89% |
| | No | 471 | 84.10% |
| Family history of cancer | Yes | 211 | 37.67% |
| | No | 349 | 62.32% |
| Family history of GC | Yes | 123 | 21.96% |
| | No | 437 | 78.03% |
| History of cardiovascular disease | Yes | 185 | 33.03% |
| | No | 375 | 66.96% |
| General status | Good | 79 | 14.10% |
| | So-so | 190 | 33.92% |
| | Poor | 291 | 51.96% |
| History of gastric reflux | Yes | 234 | 41.78% |
| | No | 326 | 58.21% |
| History of stomach surgery | Yes | 48 | 8.57% |
| | No | 512 | 91.42% |
| History of stomach infection | Yes | 176 | 31.42% |
| | No | 384 | 68.57% |
| Mucosa status | Normal | 94 | 16.78% |
| | Swollen | 126 | 22.5% |
| | Red | 157 | 28.03% |
| | Sore | 183 | 32.67% |
| History of gastric inflammation | Yes | 163 | 29.10% |
| | No | 397 | 70.89% |
| Risk score | High | 300 | 53.57% |
| | Moderate | 186 | 33.21% |
| | Low | 74 | 8.39% |

Figure 4 shows the proposed FCM model for risk factors of GC. This FCM has 28 concepts and 38 weighted edges. Of the 28 concept nodes, 27 are the final physician-selected features that influence the disease, denoted C1 to C27; the central node is the GC concept, which receives and collects interactions from all other nodes. A positive edge weight indicates a positive effect on the incidence of GC, and a negative weight indicates a deterrent role. Yellow, violet, blue, and green mark the category of each concept: C1 to C8 (yellow) are personal features, C9 to C17 (violet) belong to the diet food category, C18 to C22 (blue) to the systemic condition category, and C23 to C27 (green) to the stomach condition category.

Figure 4 FCM model for GC risk factors.

### 2.2. Learning FCM Using NHL Algorithm

GC specialists were well positioned to create the FCM in our method. Nonlinear Hebbian Learning (NHL) is utilized to learn the weights because of the lack of access to a relatively large data set, its causal weight optimization, and its more accurate results [49]. Hebbian-based algorithms have been used for FCM training to determine the best weight matrix in terms of expert knowledge [50].
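Before such training can start, each patient record from Table 2 has to be mapped to initial concept activation values in [0, 1]. The paper does not spell out its normalization, so the following is only a plausible sketch; the category orderings, scale names, and the example record are assumptions:

```python
# Hypothetical encoding of patient features into FCM concept activations.
ORDINAL = {
    "salt": ["none", "low", "high"],
    "motility": ["light", "medium", "high"],
    "frequency": ["none", "1-3/month", "1-3/week", "daily"],
}

def encode_binary(value):
    # Yes/no risk factors such as smoking become 1.0 / 0.0.
    return 1.0 if value == "yes" else 0.0

def encode_ordinal(scale, value):
    # Ordered categories are spread evenly over [0, 1].
    levels = ORDINAL[scale]
    return levels.index(value) / (len(levels) - 1)

# Fragment of one fictitious patient record (concept numbers follow Table 1).
patient = {"smoking": "yes", "salt": "high", "vegetables": "1-3/month"}
activations = {
    "C8": encode_binary(patient["smoking"]),
    "C9": encode_ordinal("salt", patient["salt"]),
    "C10": encode_ordinal("frequency", patient["vegetables"]),
}
print(activations)
```

Whatever the exact scheme, the resulting activation vector is what the FCM inference and learning equations below operate on.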
Such algorithms set the FCM weights through the existing data and a learning formula based on iteration and the Hebbian rule [50]. The NHL algorithm is based on the assumption that all of the concepts of the FCM model are stimulated at each time step and their values change. The weight $\omega_{ji}$ connecting concepts $c_j$ and $c_i$ is corrected in iteration $k$, and the value $A_i^{(k+1)}$ is determined in iteration $k+1$. The impact of the concepts with values $A_j$ and the corrected weights $\omega_{ji}^{(k)}$ in iteration $k$ is given by

(1) $A_i^{(k+1)} = f\!\left(A_i^{(k)} + \sum_{j=1,\, j \neq i}^{n} \omega_{ji}^{(k)} A_j^{(k)}\right)$

Each concept in the FCM model may be an input or an output concept. A number of concepts are defined as output concepts (OCs); these are the states of the system whose values we want to estimate, representing the final state of the system. The classification of concepts as input or output concepts is made by the expert group according to the subject under consideration. The mathematical relations used in the NHL algorithm for learning the FCM are shown in equations (2) and (3):

(2) $\Delta\omega_{ji} = \eta\, A_i^{(k-1)} \left( A_j^{(k-1)} - \omega_{ji}^{(k-1)} A_i^{(k-1)} \right)$

where $\eta$ is a very small positive scaling factor called the learning rate, whose value is obtained through trial and error.

(3) $\omega_{ji}^{(k)} = \gamma \cdot \omega_{ji}^{(k-1)} + \eta\, A_i^{(k-1)} \left( A_j^{(k-1)} - \operatorname{sgn}(\omega_{ji})\, \omega_{ji}^{(k-1)} A_i^{(k-1)} \right)$

Equation (3) is the main equation of the NHL algorithm, where $\gamma$ is the weight decay parameter. The concept values and the weights $\omega_{ji}^{(k)}$ are calculated by equations (1) and (3), respectively. In fact, in each iteration the NHL algorithm updates only the nonzero elements of the initial matrix suggested by the experts. The following criteria determine when the NHL algorithm terminates [50].

(a) The termination function $F_1$ is given as

(4) $F_1 = \left| OC_i - T_i \right|$

where $T_i$ is the mean value of $OC_i$. This kind of metric function is suitable for the NHL algorithm used in FCMs. In each step, $F_1$ calculates the Euclidean distance between $OC_i$ and $T_i$.
Assuming that $OC_i \in [T_i^{\min}, T_i^{\max}]$, $T_i$ is calculated by

(5) $T_i = \dfrac{T_i^{\min} + T_i^{\max}}{2}$

Given that the FCM model has $m$ output concepts, $F_1$ is calculated as the sum of squared differences between the $m$ OCs and the $m$ targets $T$:

(6) $F_1 = \sum_{j=1}^{m} \left( OC_j - T_j \right)^2$

The algorithm stops once $F_1$ is minimized.

(b) The second termination condition is that the difference between two consecutive values of an OC must be less than $e$; that is, the value in iteration $k+1$ must satisfy

(7) $F_2 = \left| OC_i^{(k+1)} - OC_i^{(k)} \right| < e = 0.002$

In this algorithm, the values of the parameters $\eta$ and $\gamma$ are determined through trial and error; after several tests, the values of $\eta$ and $\gamma$ that give the best-performing algorithm are chosen. Finally, when the termination conditions are met, the final weight matrix ($\omega_{NHL}$) is obtained.

For the convenience of end-users, a graphical interface for the proposed system was designed using the MATLAB GUI. The user interface of the GC risk prediction software is shown in Figure 3: the user enters the requested information into the system, and the system displays the risk assessment result after processing it with the proposed NHL-FCM model.

To compare classification accuracy, the same data set was classified with other machine learning models: a backpropagation neural network, a support vector machine, a decision tree, and a Bayesian classifier were tested in the Weka toolkit V3.7. For this purpose, the Excel file containing the collected data was converted to .arff format so that it could be read by Weka, and the required data preprocessing steps were performed. In this software, cross-validation, which divides the labeled data set into several subsets, is one of the most common methods of evaluating classifier performance. 10-fold cross-validation was used for all the studied algorithms: it divides the data set into 10 parts and performs the test 10 times.
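Putting the inference rule (1), the weight update (3), and the stopping test (7) together, the NHL training loop can be sketched as below. This is a minimal illustration on a toy three-concept map, not the authors' MATLAB implementation: the weight matrix and starting activations are invented, while η, γ, and e use values reported in this paper.

```python
import math

def sigmoid(x):
    # Threshold function f in equation (1).
    return 1.0 / (1.0 + math.exp(-x))

def nhl_train(w, a, eta=0.045, gamma=0.98, e=0.002, max_iter=100):
    """Nonlinear Hebbian Learning: repeatedly apply the weight update of
    equation (3) to the expert-suggested (nonzero) weights and the concept
    update of equation (1), until criterion (7) is met."""
    n = len(a)
    for _ in range(max_iter):
        # Equation (3): update only the nonzero entries of the weight matrix.
        for j in range(n):
            for i in range(n):
                if i != j and w[j][i] != 0.0:
                    sgn = 1.0 if w[j][i] > 0 else -1.0
                    w[j][i] = (gamma * w[j][i]
                               + eta * a[i] * (a[j] - sgn * w[j][i] * a[i]))
        # Equation (1): synchronous update of all concept values.
        new_a = [sigmoid(a[i] + sum(w[j][i] * a[j] for j in range(n) if j != i))
                 for i in range(n)]
        # Equation (7): stop when consecutive concept values barely change.
        if max(abs(new_a[i] - a[i]) for i in range(n)) < e:
            return w, new_a
        a = new_a
    return w, a

# Toy map: two risk-factor concepts feeding one output concept (w[j][i] is
# the influence of concept j on concept i; the values here are invented).
w0 = [[0.0, 0.0, 0.7],
      [0.0, 0.0, -0.4],
      [0.0, 0.0, 0.0]]
a0 = [0.8, 0.3, 0.5]
w_final, a_final = nhl_train(w0, a0)
print(a_final)
```

With e = 0.002 the toy map settles well before `max_iter`; the trained weight matrix would then be frozen and used to score new patient records.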
In each step, one part is used for testing and the other 9 parts for training; in this way, each sample is used once for testing and 9 times for training, so the entire data set is covered for both training and testing. The backpropagation neural network, with 27 input neurons, 10 hidden neurons, and 3 output nodes, was used as the multilayer perceptron. For classification of the assessed risk into the three classes high, medium, and low, the support vector machine, the C4.5 decision tree, and the Naïve Bayes classifier were also used. Given that the studied data are not linearly separable, the kernel technique is needed to implement the SVM algorithm; it is one of the most common techniques for solving problems that are not linearly separable. In this method, a suitable kernel function is selected and executed.
In fact, the purpose of kernel functions is to linearize nonlinear problems. Of the several kernel functions available in Weka, the RBF (Radial Basis Function) kernel was used to run the SVM algorithm. Selecting and running the C4.5 algorithm displays the classification results; the tree created by this algorithm, which is large, can also be viewed graphically. The three categories of high, medium, and low risk were selected as target variables and the other characteristics as predictive variables. The leaves of the tree are the target variables, and the model built by the tree can be read as a set of rules. Naïve Bayes was another classification algorithm implemented in Weka on the studied data; it uses a probabilistic framework to solve classification problems.

## 3. Results

To analyze the performance of the proposed method, we divided the data into two parts: the proposed model was trained on 70% of the patient records (392 records) using the NHL algorithm and tested on the remaining 30% (168 records). Of the 168 randomly selected test records, 56 were in the high category, 64 in the medium category, and 48 in the low category. Root mean square error (RMSE), mean absolute error (MAE), and the performance measures accuracy, recall, and precision are key evaluation measures in the medical field [17] and are widely used in the literature. Accuracy, recall, and precision were determined from the confusion matrix. A confusion matrix is a table that makes it possible to visualize the behavior of an algorithm; Table 3 represents the general scheme of a confusion matrix with two classes C1 and C2.

Table 3 Confusion matrix.
| | Actual C1 | Actual C2 |
|---|---|---|
| Predicted C1 | True positive (TP) | False positive (FP) |
| Predicted C2 | False negative (FN) | True negative (TN) |

The matrix contains two rows and two columns specifying the numbers of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). TP is the number of class C1 samples classified correctly; FP is the number of class C2 samples classified incorrectly as C1; TN is the number of class C2 samples classified correctly; FN is the number of class C1 samples classified incorrectly as C2.

(i) Accuracy: the ratio of correctly classified samples to the total number of tested samples, determined by

(8) $\text{Accuracy} = \dfrac{TN + TP}{TN + TP + FN + FP}$

(ii) Recall: the fraction of class C1 instances that are actually predicted correctly, calculated by

(9) $\text{Recall} = \dfrac{TP}{TP + FN}$

(iii) Precision: the classifier's ability not to label a C2 sample as C1, calculated by

(10) $\text{Precision} = \dfrac{TP}{TP + FP}$

The MAE performance index is calculated by

(11) $\text{MAE} = \dfrac{1}{N} \sum_{L=1}^{N} \sum_{J=1}^{C} \left| OC_{JL}^{\text{Real}} - OC_{JL}^{\text{Predicted}} \right|$

In equation (11), $N$ is the number of training data ($N = 560$), $C$ is the number of output concepts ($C = 3$), and $OC_{JL}^{\text{Real}} - OC_{JL}^{\text{Predicted}}$ denotes the difference between the $J$th decision output concept (OC) for the $L$th set of input concepts and its corresponding real (target) value. The RMSE evaluation index is defined as

(12) $\text{RMSE} = \sqrt{\dfrac{1}{NC} \sum_{L=1}^{N} \sum_{J=1}^{C} \left( OC_{JL}^{\text{Real}} - OC_{JL}^{\text{Predicted}} \right)^2}$

where $N$ is the number of training sets and $C$ is the number of system outputs.

Table 4 shows the accuracy results obtained with the proposed method and other standard classifiers. The proposed method works better than the other classifiers because of the efficiency of the NHL algorithm in correcting the FCM weights with very small amounts of data; as a result, optimal decisions are made for the output concepts.

Table 4 Performance metrics.
| Classifier | Predicted class | High | Medium | Low | Class recall (%) | Class precision (%) | Overall accuracy (%) | RMSE | MAE |
|---|---|---|---|---|---|---|---|---|---|
| Decision tree | High | 30 | 10 | 1 | 53.57 | 73.17 | 76.78 | 0.5120 | 0.721 |
| | Medium | 16 | 52 | 0 | 81.25 | 76.47 | | | |
| | Low | 10 | 2 | 47 | 97.91 | 79.66 | | | |
| Naïve Bayes | High | 40 | 8 | 5 | 71.42 | 75.47 | 80.35 | 0.334 | 0.645 |
| | Medium | 8 | 56 | 4 | 87.5 | 77.77 | | | |
| | Low | 8 | 0 | 39 | 81.25 | 82.97 | | | |
| SVM | High | 46 | 2 | 4 | 82.14 | 88.46 | 86.9 | 0.193 | 0.342 |
| | Medium | 0 | 60 | 4 | 93.75 | 93.75 | | | |
| | Low | 10 | 2 | 40 | 83.3 | 76.92 | | | |
| MLP-ANN | High | 49 | 2 | 7 | 87.5 | 84.48 | 90.47 | 0.248 | 0.097 |
| | Medium | 4 | 58 | 4 | 90.62 | 87.87 | | | |
| | Low | 3 | 4 | 45 | 93.75 | 86.53 | | | |
| Proposed model | High | 55 | 1 | 1 | 98.21 | 96.49 | 95.83 | 0.173 | 0.0471 |
| | Medium | 1 | 60 | 1 | 93.75 | 96.77 | | | |
| | Low | 0 | 3 | 46 | 95.83 | 93.87 | | | |

(Each classifier's three rows form its confusion matrix: rows are predicted classes, and the High/Medium/Low columns give the counts per actual class.)

The results show that the highest overall accuracy belongs to the proposed method (95.83%), about 5% higher than that of the MLP-ANN algorithm. The highest precision and recall also belong to the proposed algorithm: 96.77% (medium) and 98.21% (high), respectively. The training error of the proposed NHL-based method is likewise lower than that of the other algorithms used in this study.

As stated, η and γ are the two learning parameters of the NHL algorithm. In this algorithm, the upper and lower limits of these parameters are determined by trial and error in order to optimize the final solution. After several simulations with η and γ, it was observed that large values of η cause significant changes in the weights and their signs, while small values of γ likewise produce large weight changes and prevent the concept weights from settling into the desired range. For this reason, the values are limited to 0 < η < 0.1 and 0.9 < γ < 1. In each experiment, a constant value is used for these parameters.

After several investigations, it was found that the best classification performance corresponds to η = 0.045 and γ = 0.98. The classification results obtained for different values of the learning parameters are presented in Table 5.

Table 5: Classification results for different values of η and γ.
| η | γ | Confusion matrix (predicted High / Medium / Low rows; actual High, Medium, Low columns) | Classification accuracy (%) |
|---|---|---|---|
| 0.01 | 0.97 | 50, 4, 7 / 4, 59, 1 / 2, 1, 40 | 88.69 |
| 0.03 | 0.95 | 45, 6, 1 / 5, 58, 0 / 6, 0, 47 | 89.28 |
| 0.045 | 0.98 | 55, 1, 1 / 1, 60, 1 / 0, 3, 46 | 95.83 |
| 0.05 | 0.96 | 54, 6, 0 / 1, 56, 0 / 1, 2, 48 | 94.04 |
| 0.055 | 0.96 | 53, 2, 5 / 2, 58, 0 / 1, 4, 43 | 91.6 |

## 4. Discussion

In this study, we designed a risk prediction model and a GC risk assessment tool using data from a population of patients referred to the gastroenterology unit of Imam Reza Hospital in Tabriz. The proposed model attempts to go beyond the analyses of clinical experts: it increases their ability to make sound decisions in a clinical setting for patients with different levels of GC risk factors and helps them choose optimal preventive measures.

The 95.8% overall classification accuracy obtained with the Hebbian-based FCM on 560 patients indicates a high level of agreement between the proposed system and medical decisions; the proposed decision support tool can therefore be trusted by clinical professionals and assist them in the process of GC risk assessment. In particular, our risk assessment tool is simple and inexpensive to use in the clinical environment, whereas many other methods for predicting GC risk are invasive. It is therefore an effective instrument for estimating the population at future risk of cancer. The results show that this new model can predict the probability of developing GC, with respect to the characteristics specified in this study, more accurately than previous studies.

In recent years, several studies have been carried out on the development and validation of risk assessment tools for various cancers [51, 52]. Recent studies have shown that the combination of the H. pylori antibody and serum pepsinogen can be a good predictor of GC [53, 54]. To our knowledge, only two other risk evaluation instruments exist for GC besides ours.
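For reference, the per-class figures reported in Tables 4 and 5 can be recomputed directly from the confusion matrices using equations (8)-(10). A minimal Python sketch, using the proposed model's test matrix (the rows-as-predicted, columns-as-actual layout is inferred from the published recall and precision values):

```python
# Recompute overall accuracy, per-class recall, and per-class precision
# (Eqs. (8)-(10)) from a multiclass confusion matrix.
# Rows = predicted class, columns = actual class.

def metrics(cm):
    classes = range(len(cm))
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in classes) / total
    # recall_i = TP_i / number of actual class-i samples (column sum)
    recall = [cm[i][i] / sum(cm[r][i] for r in classes) for i in classes]
    # precision_i = TP_i / number of predicted class-i samples (row sum)
    precision = [cm[i][i] / sum(cm[i]) for i in classes]
    return accuracy, recall, precision

cm = [[55, 1, 1],    # predicted High:   actual High / Medium / Low
      [1, 60, 1],    # predicted Medium
      [0, 3, 46]]    # predicted Low

acc, rec, prec = metrics(cm)
print(f"accuracy = {acc:.4f}")               # 161/168 ≈ 0.9583
print(f"recall(High) = {rec[0]:.4f}")        # 55/56  ≈ 0.9821
print(f"precision(Medium) = {prec[1]:.4f}")  # 60/62  ≈ 0.9677
```

The printed values reproduce the 95.83% accuracy, 98.21% high-class recall, and 96.77% medium-class precision reported for the proposed model.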
Based on the Japan Public Health Center-based Prospective Study, a tool was designed to estimate the cumulative probability of GC incidence using sex, age, smoking status, the combination of H. pylori antibody and serum pepsinogen, consumption of salty food, and family history of GC as risk factors [55]; the model showed good calibration and discrimination. In [2], a risk evaluation instrument for GC in the general population of Japan was proposed, with gender, age, the combination of Helicobacter pylori antibody and pepsinogen status, smoking status, and hemoglobin A1C level as the risk factors.

The risk factors chosen in these two studies were limited to a few specific characteristics and had little overlap with the factors in our study. Factors such as consumption of fruits and vegetables, alcohol consumption, history of cardiovascular disease, blood type, milk consumption, history of allergy, gastric reflux, storage containers, food intake, and family history of cancer were absent from both studies despite their importance in previous research. Factors such as salt intake and a history of GC, which are known causes of GC, were also missing from [2]. Another remarkable point of our study is that, given the nature of the proposed model, it captures the effects of factors that are related to each other, or even mutually reinforcing, which the two previous studies do not.

A further advantage of the proposed method over the other algorithms is that those methods cannot provide any explicit causal relationships; the system works as a black box. This also makes such algorithms less suited to medical decision support systems.
Finally, the new system has the following benefits:

(i) It examines factors that were not taken into account in previous models for assessing GC risk.
(ii) Because of the use of these new factors, the model can be more effective in predicting GC risk.
(iii) The proposed model is delivered as software with a simple, convenient, and user-friendly interface.
(iv) The use of this software by physicians and other researchers can support individual healthcare decisions.
(v) It helps healthcare professionals decide on individual risk management mechanisms.

The system presented in this study has the following limitations: (1) the small sample of patients used to train and test the GC predictor, (2) the heavy dependence of the model on the knowledge of domain specialists, (3) its dependence on initial conditions and relationships, and (4) the absence of external validation of the forecasting system. Although the system yields good results owing to the use of an appropriate database and of important, relevant GC factors, the generalizability of our results cannot be established without testing the system on another data set. It is therefore necessary to use a larger statistical population to test the proposed model.

## 5. Conclusions

Assessing the level of GC risk is very important and helps in making screening decisions. Among the limited number of GC risk assessment tools proposed so far, none comprehensively covers the risk factors identified in scientific studies on GC. The proposed model, based on soft computing, covers all the factors influencing the incidence of GC. Its classification accuracy is higher than that of other machine learning classifiers, such as the decision tree and SVM. This is due to the useful features of FCMs: domain knowledge is used to determine the initial FCM structure and weights, and the NHL algorithm is then used to train the FCM model and adjust these weights.
The FCM-based model is comprehensive, transparent, and more effective than previous models for assessing the risk of GC. As a result, this risk assessment tool can help identify people at high risk of GC and support both healthcare providers and patients in the decision-making process. Our future work is to use more features and variants, and other learning algorithms, to determine the weights of the edges in the FCM. --- *Source: 1016284-2020-10-05.xml*
# A Medical Decision Support System to Assess Risk Factors for Gastric Cancer Based on Fuzzy Cognitive Map

**Authors:** Seyed Abbas Mahmoodi; Kamal Mirzaie; Maryam Sadat Mahmoodi; Seyed Mostafa Mahmoudi

**Journal:** Computational and Mathematical Methods in Medicine (2020)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2020/1016284
---

## Abstract

Gastric cancer (GC), one of the most common cancers around the world, is a multifactorial disease with many risk factors. Assessing the risk of GC is essential for choosing an appropriate healthcare strategy. Very few studies have addressed the development of risk assessment systems for GC. This study aims to provide a medical decision support system based on soft computing, using fuzzy cognitive maps (FCMs), that helps healthcare professionals decide on an appropriate individual healthcare strategy based on the risk level of the disease. FCMs are considered one of the strongest artificial intelligence techniques for complex system modeling. In this system, an FCM trained with the Nonlinear Hebbian Learning (NHL) algorithm is used. The data used in this study were collected from the medical records of 560 patients referred to Imam Reza Hospital in Tabriz City, and 27 features relevant to gastric cancer were selected using the opinions of three experts. The prediction accuracy of the proposed method is 95.83%. The results show that the proposed method is more accurate than other decision-making algorithms, such as decision trees, Naïve Bayes, and ANN. From the perspective of healthcare professionals, the proposed medical decision support system is simple, comprehensive, and more effective than previous models for assessing the risk of GC and can help them predict GC risk factors in the clinical setting.

---

## Body

## 1. Introduction

Gastric cancer (GC), one of the major cancers around the world with about one million new patients each year, is known to be the third leading cause of cancer deaths [1, 2]. It represents an important public health issue worldwide, especially in Central Asian countries, where the incidence of this disease is very high [2]. GC is a multifactorial disease, and its formation is related to various risk factors [3].
Various scientific methods, such as photofluorography and esophagogastroduodenoscopy, are used to diagnose GC in its early stages and can help reduce GC mortality in practice [3]. Given that these methods are invasive and expensive, it is necessary to provide a simple, inexpensive, and effective tool for identifying people at risk of GC, who can then undergo more accurate examinations. Moreover, appropriate prevention efforts can be made to reduce the incidence of this disease.

Early definitions of the decision support system (DSS) describe it as a system supporting management decision-makers in semistructured and unstructured situations and decisions [4]. Accordingly, a DSS helps decision-makers and increases their ability rather than replacing their judgment [4]. Today, DSSs are used in a wide variety of areas, such as management, industry, agriculture, information systems, medicine, and hundreds of other fields. A medical decision support system (MDSS) is a computer system designed to help physicians or other healthcare professionals make clinical decisions. Some applications of MDSSs are outlined below [5]:

(i) Preventive care services, for example, screenings for blood pressure and cancer
(ii) Patient symptom checkers
(iii) Care planning
(iv) Guidance for reducing long hospital stays
(v) Intelligent health monitoring systems

An MDSS has numerous advantages, the most important of which is to minimize medical failures and provide a relatively stable structure for diagnosing and treating disease, thereby reconciling the varied and conflicting opinions of specialists [5]. It is therefore vital to design and implement such models.

FCMs are soft computing methods that attempt to act like humans in decision-making and reasoning [6].
In fact, an FCM is an instrument for modeling multifaceted systems, attained by integrating neural networks and fuzzy logic [7, 8], that describes a complex system's behavior using concepts. The technique creates a conceptual model in which each concept represents a characteristic or state of the system and dynamically interacts with the other concepts [9]. An FCM is a graphical representation of a system's structure [10]. From the artificial intelligence perspective, FCMs are dynamic learning networks: the more data available for modeling the problem, the better the system can adapt itself and reach a solution. The conceptual model is not restricted to exact measurements and quantities, and it is therefore very appropriate for concepts without precise structure.

FCMs were introduced by Kosko as fuzzy directed graphs with signed edges and feedback loops that illustrate the computational dependencies of a model symbolically and explicitly [11]. In other words, an FCM is a set of nodes affecting each other via causal relations. The details and mathematical formulation of this technique are described in the Supplementary Materials (available here). Combining the benefits of fuzzy systems (if-then rules) and neural networks (training and learning), FCMs quickly proved effective in various areas, with successful applications in politics, economics, engineering, medicine, and more [12].

In recent years, the MDSS has been developed as one of the main applications of FCMs. The FCM has emerged as a tool for representing and studying the behavior of systems and can deal with complex systems through an argumentative reasoning process. This study aims to provide an MDSS for assessing the risk of GC using an FCM.

In the following, some successful instances of FCM applications in decision support systems are provided. Papageorgiou et al. [13] utilized FCMs for predicting infectious diseases and infection severity.
A novel FCM-based technique was presented by Amirkhani et al. [14] to screen and isolate UDH from other internal brain lesions; they examined 86 patients in Shahid Beheshti Hospital in Isfahan City. A pathologist extracted the ten key properties needed to screen these lesions, which served as the key concepts of the FCM. The accuracy of the suggested technique was 95.35%, and the results indicated that the suggested FCM not only achieved a high accuracy level but also an acceptable false-negative rate (FNR). A decision support system was proposed by Baena de Moraes Lopes et al. [7] to diagnose changes in urinary elimination, based on the nursing terminology of the North American Nursing Diagnosis Association International (NANDA-I). An FCM model following the NANDA-I classifications was applied to 195 cases of urinary incontinence. The FCM model achieved high specificity and sensitivity of 0.92 and 0.95, respectively; however, it showed low specificity in diagnosing urge urinary incontinence (0.43) and low sensitivity for overall urinary incontinence (0.42).

Recently, the use of FCMs with Hebbian-based learning capabilities has increased. In [15], a decision-making framework was proposed that can accurately assess the progression of depression symptoms in elderly people and warn healthcare providers by providing useful information for adjusting the patient's treatment. In [16], a risk management system for familial breast cancer was presented using the NHL-based FCM technique; data were extracted from 40 patients, 18 key features were selected, and the results showed an accuracy of 95%. In [17], the first specialized diagnostic system for obesity based on psychological and social characteristics was proposed, built on a mathematical model using FCMs.
With that proposed model, the effects of different weight-loss treatment methods can be studied.

There is no single known cause of GC, and the cause-effect associations between the combined impacts of multiple risk factors and the probability of developing GC have not yet been systematically investigated and understood. Even the opinions of radiologists and oncologists are highly subjective in this regard. In such situations, an FCM can serve as a human-friendly, transparent clinical support instrument for determining the cause-effect associations between the factors, and the subjectivity can be substantially reduced by grading the degrees of their effects on the risk level. The present work focuses on developing a clinical decision-making instrument based on an FCM to evaluate GC risk.

## 2. Methods

### 2.1. FCM Model for GC Risk Factors

Addressing GC is a complex process that requires understanding the various parameters, risk factors, and symptoms in order to make the right decisions and assessments. This study assesses the risk of GC by providing a medical decision-making system whose design is based on the proposed FCM model presented below. Designing and developing a suitable FCM requires human knowledge to describe the decision support system; in this study, GC specialists contributed to the development of the FCM model. The development is divided into three main steps, briefly summarized as follows:

(1) Identify the concepts
(2) Determine the relationships between concepts and the initial weights
(3) Train the weights

First, the experts individually identify the factors that contribute to GC, and the concepts common among the specialists are selected as model nodes. The second step is to identify the relationships between concepts. To this end, the experts define the interactions between concepts using fuzzy linguistic variables, specifying for each pair whether a relationship exists and, if so, its direction.
The strengths of these effects are expressed as very low, low, medium, high, and very high. Finally, the linguistic variables expressed by the experts are integrated: using the SUM technique, the values are aggregated, and the total linguistic weight is converted to a numerical value by the centroid defuzzification method. The corresponding weight matrix is then constructed. Choosing a learning algorithm to train the initial weights is the third step of the method; as in neural networks, the purpose of the learning algorithm is to adjust the initial weights and thereby improve the FCM model.

These steps were applied one by one to develop an FCM model for GC, using the opinions of three specialists. In the first phase of the research presented in this article, information on GC risk factors was collected from medical sources, pathologists, and informal sources [18–48]. The collected knowledge was transformed into a well-structured questionnaire of risk factors associated with GC and presented to three experts. According to the three experts, 27 common features were identified as the major risk factors for GC. These risk factors may be categorized into four groups (personal features, systemic conditions, stomach condition, and diet), each of which includes several risk factors. The final features are presented in Figure 1, and their explanations are given in Table 1.

Figure 1: Classification of GC risk factors.

Table 1: Risk factors of GC.

| Risk factor | Description |
|---|---|
| C1: sex | Studies show that men around the world are diagnosed with GC almost twice as often as women [18]. |
| C2: blood group | Scientific research shows a significant relationship between blood type and GC; blood groups A and O have the highest and lowest incidence of GC, respectively [19]. |
| C3: BMI | High BMI increases GC risk [20]. In 2016, the IACR formed a team of specialists who reported GC as one of the diseases caused by excessive fat gain and high BMI [21]. |
| C4: age | The risk of GC increases with age [18, 22, 23]. |
| C5: motility | People with regular physical activity have a lower risk of GC than inactive people. According to the US Physical Activity Guidelines Advisory Committee (2018), moderate evidence shows that physical activity reduces the risk of various cancers, including GC [21]. |
| C6: alcohol consumption | Regular alcohol consumption increases the risk of GC [24, 25]. |
| C7: exposure to chemicals | Some jobs with exposure to chemicals, such as cement and chromium, increase the risk of GC [26]. |
| C8: smoking | Smoking increases the risk of GC [27, 28]. |
| C9: salt consumption | High salt intake increases the risk of GC [23, 29, 30]. |
| C10: consumption of vegetables | The daily consumption of 200-200 grams of vegetables per day may reduce the risk of GC [31]. |
| C11: consumption of smoked food | Smoked food is a major source of polycyclic aromatic hydrocarbons (PAHs). Scientific research has shown that this biopollutant is one of the factors involved in many cancers, including GC [32, 33]. |
| C12: milk consumption | Increased dairy consumption, such as milk, is associated with a lower risk of GC [34]. |
| C13: fast food consumption | Fast food consumption is one of the factors affecting the incidence of GC [35]. |
| C14: consumption of fried foods | Scientific studies show that people who eat a lot of fried foods are at increased risk of GC [27, 28]. |
| C15: fruit consumption | A daily consumption of 120-150 grams of fruit may reduce the risk of GC [31]. |
| C16: food storage container | Today's food containers are often made of chemicals, such as plastics containing bisphenol A, and can thus be a source of various types of cancer and hormonal disorders [36]. |
| C17: baking dish | The use of metal containers, such as aluminum, for cooking can contribute to disease because these metals emit a small amount of lead when exposed to heat [37]. |
| C18: history of allergy | Recent studies indicate that a history of allergic diseases is associated with a lower risk of GC [38]. |
| C19: family history of cancer | A family history of cancer at certain specific sites may be associated with a risk of GC [39]. |
| C20: family history of GC | This risk factor is strongly associated with different types of GC [40, 41]. |
| C21: history of cardiovascular disease | People with cardiovascular disease are at a lower risk of GC because of some of the drugs they use [42]. |
| C22: general status | People with a good general health status are less likely to be at risk of GC [43]. |
| C23: history of gastric reflux | Gastric reflux causes a 3-10% increase in the risk of GC [44]. |
| C24: history of stomach surgery | Gastric surgeries, such as for gastric ulcers, may increase the risk of cancer [45]. |
| C25: history of stomach infection | Helicobacter pylori is the most important risk factor for GC [46–48]. |
| C26: mucosa status | Gastric ulcers are considered a risk factor for GC [35]. |
| C27: history of gastric inflammation | A history of gastric inflammation is one of the most important factors in the incidence of GC [35]. |

In the second phase, the sign of the relationship between each pair of concepts is determined first, and then the numerical value of the relationship is calculated. Five membership functions were used for this purpose. Consider the following example.

1st specialist: C4 has a great impact on C27.
2nd specialist: C4 has a moderate impact on C27.
3rd specialist: C4 has a great impact on C27.

Using the SUM method, these three linguistic weights are aggregated.
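The aggregation-and-defuzzification step above can be sketched numerically. The triangular membership functions below, and the mapping of the expert phrases "great" and "moderate" to the terms "high" and "medium", are illustrative assumptions; the paper does not publish the exact term definitions:

```python
import numpy as np

# Sketch of SUM aggregation + centroid defuzzification of expert opinions.
# Term definitions (triangle corners on the weight axis) are assumptions.
TERMS = {
    "very low":  (-0.25, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.25),
}

def tri(x, a, b, c):
    """Triangular membership degree of x for triangle (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def aggregate_centroid(opinions):
    x = np.linspace(0.0, 1.0, 1001)                  # discretized weight axis
    agg = sum(tri(x, *TERMS[t]) for t in opinions)   # SUM aggregation
    return float(np.sum(x * agg) / np.sum(agg))      # centroid defuzzification

# Example: two "great" (-> "high") and one "moderate" (-> "medium") opinion.
w = aggregate_centroid(["high", "medium", "high"])
print(round(w, 3))  # ≈ 0.667
```

The centroid lands between the peaks of "high" (0.75) and "medium" (0.5), weighted toward "high" because two of the three experts chose it.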
Figure 2 shows the centroid defuzzification method implemented to calculate the numerical value of the weight in the range [−1, 1].

Figure 2: Aggregation and defuzzification of linguistic weights.

Using this method, the weights of all relationships between the concepts of the FCM for GC were calculated. The developed FCM is shown in Figure 3. In the third step, a learning algorithm was used to train the model, updating the relationship weights, and finally a fuzzy cognitive map for GC risk factors was extracted. For this purpose, data collected by questionnaire from 560 patients referred to Imam Reza Hospital in Tabriz (after the preprocessing steps) were used. Table 2 shows the features, their values, and the patient frequencies.

Figure 3: User interface of the proposed MDSS.

Table 2: Data sets.

| Feature | Range | Number | Percent |
|---|---|---|---|
| Sex | Male | 256 | 45.7% |
| | Female | 304 | 54.3% |
| Age | <40 | 20 | 3.47% |
| | 41–60 | 210 | 37.5% |
| | ≥61 | 330 | 59.03% |
| Blood group | A | 123 | 21.96% |
| | B | 78 | 13.92% |
| | AB | 80 | 14.28% |
| | O | 279 | 49.82% |
| BMI | BMI>30 | 69 | 12.32% |
| | 25<BMI<29.5 | 76 | 13.57% |
| | 18.5<BMI<24.9 | 120 | 21.42% |
| | BMI<18.5 | 293 | 52.32% |
| Motility | Light | 156 | 27.85% |
| | Medium | 236 | 42.14% |
| | High | 168 | 30% |
| Alcohol consumption | Yes | 85 | 15.17% |
| | No | 475 | 84.82% |
| Exposed to chemicals | Yes | 54 | 9.64% |
| | No | 506 | 90.35% |
| Smoking | Yes | 198 | 35.35% |
| | No | 362 | 64.64% |
| Salt consumption | None | 10 | 1.78% |
| | Low | 175 | 31.25% |
| | High | 375 | 66.96% |
| Consumption of vegetables | Daily | 26 | 4.64% |
| | 1-3 times a week | 214 | 38.21% |
| | 1-3 times a month | 320 | 57.14% |
| Consumption of smoked food | None | 5 | 0.89% |
| | Daily | 0 | 0% |
| | 1-3 times a week | 149 | 26.60% |
| | 1-3 times a month | 406 | 72.5% |
| Milk consumption | Yes | 214 | 38.21% |
| | No | 346 | 61.78% |
| Fast food consumption | None | 4 | 0.71% |
| | 1-3 times a week | 315 | 56.25% |
| | 1-3 times a month | 241 | 43.03% |
| Consumption of fried foods | None | 0 | 0% |
| | 1-3 times a week | 191 | 34.10% |
| | 1-3 times a month | 369 | 65.89% |
| Fruit consumption | None | 6 | 1.07% |
| | 1-3 times a week | 185 | 33.03% |
| | 1-3 times a month | 369 | 65.89% |
| Food storage container | Aluminum | 216 | 38.57% |
| | Plastic | 301 | 53.75% |
| | Copper | 32 | 5.71% |
| | Style | 9 | 1.60% |
| | Chinese | 2 | 0.35% |
| Baking dish | Aluminum | 10 | 1.78% |
| | Teflon | 390 | 69.64% |
| | Copper | 21 | 3.75% |
| History of allergy | Yes | 89 | 15.89% |
| | No | 471 | 84.10% |
| Family history of cancer | Yes | 211 | 37.67% |
| | No | 349 | 62.32% |
| Family history of GC | Yes | 123 | 21.96% |
| | No | 437 | 78.03% |
| History of cardiovascular disease | Yes | 185 | 33.03% |
| | No | 375 | 66.96% |
| General status | Good | 79 | 14.10% |
| | So-so | 190 | 33.92% |
| | Poor | 291 | 51.96% |
| History of gastric reflux | Yes | 234 | 41.78% |
| | No | 326 | 58.21% |
| History of stomach surgery | Yes | 48 | 8.57% |
| | No | 512 | 91.42% |
| History of stomach infection | Yes | 176 | 31.42% |
| | No | 384 | 68.57% |
| Mucosa status | Normal | 94 | 16.78% |
| | Swollen | 126 | 22.5% |
| | Red | 157 | 28.03% |
| | Sore | 183 | 32.67% |
| History of gastric inflammation | Yes | 163 | 29.10% |
| | No | 397 | 70.89% |
| Risk score | High | 300 | 53.57% |
| | Moderate | 186 | 33.21% |
| | Low | 74 | 8.39% |

Figure 4 shows the proposed FCM model for the risk factors of GC. This FCM has 28 concepts and 38 weighted edges. Of the 28 concept nodes, 27 are the physician-selected features that contribute to the disease, denoted C1 to C27. The central node is the GC concept, which receives and collects the interactions from all other nodes. A positive edge weight indicates a positive effect on the incidence of GC, and a negative weight indicates a deterrent role. The yellow, purple, blue, and green colors specify the category of each feature: yellow for the personal features C1 to C8, purple for the diet features C9 to C17, blue for the systemic condition features C18 to C22, and green for the stomach condition features C23 to C27.

Figure 4: FCM model for GC risk factors.

### 2.2. Learning FCM Using NHL Algorithm

GC specialists were well positioned to create the FCM in our method. Nonlinear Hebbian Learning (NHL) is utilized to learn the weights because of the lack of access to a relatively large data set, its causal weight optimization, and its more accurate results [49]. Hebbian-based algorithms were used for FCM training to determine the best weight matrix in terms of expert knowledge [50].
Such algorithms set the FCM weights from the available data using an iterative learning formula based on the Hebbian rule [50]. The NHL algorithm assumes that all concepts of the FCM model are stimulated at each time step and that their values change: the weight $\omega_{ji}$ connecting concepts $c_j$ and $c_i$ is corrected in iteration $k$, and the value $A_i^{(k+1)}$ is determined in iteration $k+1$. The impact of the concepts with values $A_j$ and corrected weights $\omega_{ji}^{(k)}$ in iteration $k$ is given by

$$A_i^{(k+1)} = f\left(A_i^{(k)} + \sum_{j=1,\, j \neq i}^{n} \omega_{ji}^{(k)} A_j^{(k)}\right). \tag{1}$$

Each concept in the FCM model may be an input or an output concept. A number of concepts are defined as output concepts (OCs): these represent the state of the system whose value we want to estimate, i.e., the final state of the system. The classification of concepts into inputs and outputs is made by the expert group according to the problem at hand. The weight update used by the NHL algorithm is

$$\Delta\omega_{ji} = \eta\, A_i^{(k-1)} \left( A_j^{(k-1)} - \omega_{ji}^{(k-1)} A_i^{(k-1)} \right), \tag{2}$$

where $\eta$ is a very small positive scalar called the learning rate, whose value is obtained by trial and error. The main equation of the NHL algorithm is

$$\omega_{ji}^{(k)} = \gamma\, \omega_{ji}^{(k-1)} + \eta\, A_i^{(k-1)} \left( A_j^{(k-1)} - \operatorname{sgn}(\omega_{ji})\, \omega_{ji}^{(k-1)} A_i^{(k-1)} \right), \tag{3}$$

where $\gamma$ is the weight decay parameter. The concept values and the weights $\omega_{ji}^{(k)}$ are calculated by equations (1) and (3), respectively. In effect, the NHL algorithm updates, in each iteration, only the nonzero elements of the initial matrix suggested by the experts. The following criteria determine when the NHL algorithm terminates [50].

(a) The first terminating function is

$$F_1 = \left| OC_i - T_i \right|, \tag{4}$$

where $T_i$ is the mean of the target interval of $OC_i$. This kind of metric function is suitable for the NHL algorithm used in FCMs: in each step, $F_1$ measures the distance between $OC_i$ and $T_i$.
Assuming that $OC_i \in [T_i^{\min}, T_i^{\max}]$, $T_i$ is calculated by

$$T_i = \frac{T_i^{\min} + T_i^{\max}}{2}. \tag{5}$$

Given that the FCM model has $m$ output concepts, $F_1$ is taken as the sum of squared differences over the $m$ OCs and their targets:

$$F_1 = \sum_{j=1}^{m} \left( OC_j - T_j \right)^2. \tag{6}$$

Training ends when $F_1$ is minimized. (b) The second termination condition is that the difference between two consecutive values of each OC must be less than $e$; that is, at iteration $k+1$,

$$F_2 = \left| OC_i^{(k+1)} - OC_i^{(k)} \right| < e = 0.002. \tag{7}$$

In this algorithm, the values of the parameters $\eta$ and $\gamma$ are determined by trial and error; after several tests, the values of $\eta$ and $\gamma$ giving the best-performing algorithm are chosen. Finally, when the termination conditions are met, the final weight matrix ($\omega_{\text{NHL}}$) is obtained.

For the convenience of end users, a graphical interface for the proposed system was designed using the MATLAB GUI. The user interface of the GC risk prediction software is shown in Figure 3. The user enters the requested information into the system, and the system displays the risk assessment result after applying the proposed NHL-FCM model to that information.

To compare classification accuracy, the same data set was used with other machine learning models. A backpropagation neural network, a support vector machine, a decision tree, and a Bayesian classifier were tested in the Weka toolkit V3.7. For this purpose, the Excel file containing the collected data was converted to .arff format so that it could be read by Weka, and the required data preprocessing steps were performed. Cross-validation, one of the most common methods of evaluating classifier performance on a labeled data set, was used for all the studied algorithms: 10-fold cross-validation divides the data set into 10 parts and performs the test 10 times.
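Stepping back to the training procedure, the NHL loop defined by the activation rule (1), the weight update (3), and the stopping test (7) can be sketched in Python. The tiny 3-concept map, its initial activations and weights, and the sigmoid threshold function (a common choice for FCMs, not specified in the text) are illustrative assumptions:

```python
import numpy as np

def f(x):
    """Sigmoid threshold function, commonly used in FCMs (assumption)."""
    return 1.0 / (1.0 + np.exp(-x))

def nhl_train(A, W, out_idx, eta=0.045, gamma=0.98, e=0.002, max_iter=500):
    """One NHL training loop: Eq. (3) weight update, Eq. (1) activation
    update, Eq. (7) stopping test on the output concept out_idx."""
    n = len(A)
    prev_oc = A[out_idx]
    for _ in range(max_iter):
        A_old = A.copy()
        # Eq. (3): update only the expert-suggested (nonzero) weights
        for j in range(n):
            for i in range(n):
                if i != j and W[j, i] != 0.0:
                    W[j, i] = (gamma * W[j, i]
                               + eta * A_old[i]
                               * (A_old[j] - np.sign(W[j, i]) * W[j, i] * A_old[i]))
        # Eq. (1): synchronous activation update
        for i in range(n):
            A[i] = f(A_old[i] + sum(W[j, i] * A_old[j]
                                    for j in range(n) if j != i))
        # Eq. (7): stop when the output concept settles
        if abs(A[out_idx] - prev_oc) < e:
            break
        prev_oc = A[out_idx]
    return A, W

# Toy 3-concept map (assumption): concepts 0 and 1 feed the output concept 2.
W = np.array([[0.0, 0.0, 0.6],
              [0.0, 0.0, -0.3],
              [0.0, 0.0, 0.0]])
A = np.array([0.7, 0.4, 0.5])
A, W = nhl_train(A, W, out_idx=2)
print(A[2])  # converged activation of the output concept, in (0, 1)
```

The best-performing parameter values reported above (η = 0.045, γ = 0.98) are used as the defaults; in practice, training would stop on the F1 criterion (6) as well, which is omitted here for brevity.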
In each step, one part is considered as a test and the other 9 parts are considered for training. In this way, each data is used once for testing and 9 times for training. As a result, the entire data set is covered for training and testing.The backpropagation neural network with 27 input neurons, 10 neurons, and 3 output nodes was used as the multilayer perceptron. Also, for classification of the assess risk into three classes, high, medium, and low, the support vector machine, decision tree C4.5, and Naïve Bayesian classifier were used.. Given that the data studied are not linearly separable, we need to use the core technology to implement the SVM algorithm. The core technology is one of the most common techniques for solving problems that are not linearly separable. In this method, a suitable core function is selected and executed. In fact, the purpose of kernel functions is to linearize nonlinear problems. There are several kernel functions in Weka. The RBF (Radial Basis Function) was used to run the SVM algorithm. By selecting and running the C4.5 algorithm, you can see the results of the classification. Also, the tree created by this algorithm can be seen graphically, which is a large tree. The three categories of high risk, medium risk, and low risk were selected as target variables and other characteristics as predictive variables. The leaves of the tree are the target variables and can be seen as a number of rules according to the model made by the tree. Naïve Bayesian was another classification algorithm that was implemented using Weka on the studied data, and its results were examined. This algorithm uses a possible framework to solve classification problems. ## 2.1. FCM Model for GC Risk Factors Addressing GC is a complex process that needs to understand the various parameters, risk factors, and symptoms to make the right decision and assessment. This study assesses the risk of GC by providing a medical decision-making system. 
The design of this decision-making system is based on a proposed FCM model, which is presented below. Designing and developing a suitable FCM requires human knowledge to describe a decision support system; in this study, GC specialists contributed to the development of the FCM model. The development of the FCM model is divided into three main steps, briefly summarized as follows:

(1) Identify the concepts
(2) Determine the relationships between concepts and the initial weights
(3) Weight learning

First, the experts individually identify the factors that contribute to GC, and the concepts common among the specialists are selected as model nodes. The second step is to identify the relationships between concepts. To this end, the experts define the interactions between concepts in terms of fuzzy variables, determining for each pair whether a relationship exists and, if so, its direction. The strengths of these effects are expressed as very low, low, medium, high, and very high. Finally, the linguistic variables expressed by the experts are integrated: using the SUM technique, these values are aggregated, and the total linguistic weight is converted to a numerical value by the centroid defuzzification method. The corresponding weight matrix is then constructed. Choosing a learning algorithm to tune the initial weights is the third step of this method; the purpose of the learning algorithm is to adjust the initial weights, much as in neural networks, to improve the FCM model.

These steps were applied one by one to develop an FCM model for GC, using the opinions of three specialists. In the first phase of the research presented in this article, information on GC risk factors was collected from medical sources, pathologists, and informal sources [18–48]. The collected knowledge was transformed into a well-structured questionnaire, covering the risk factors associated with GC, and presented to three experts.
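The SUM aggregation and centroid defuzzification step described above can be sketched in a few lines of pure Python. This is only an illustration: the triangular membership functions below are hypothetical stand-ins (the actual membership functions are those of Figure 2), and the three expert opinions are the "high, very high, very high" example discussed later in the text.

```python
# Toy SUM aggregation + centroid defuzzification of three linguistic weights.
# The triangular membership functions are assumed for illustration only;
# the paper's actual fuzzy sets are those shown in its Figure 2.

def tri(x, a, b, c):
    # Triangular membership function: zero outside [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for "high" and "very high" influence on [-1, 1].
HIGH = lambda x: tri(x, 0.5, 0.75, 1.0)
VERY_HIGH = lambda x: tri(x, 0.75, 1.0, 1.25)  # peak clipped at the domain edge

def centroid(members, lo=-1.0, hi=1.0, steps=2000):
    """Aggregate the experts' sets pointwise (SUM, clipped at 1) and return
    the centroid of the aggregated membership function."""
    num = den = 0.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        mu = min(1.0, sum(m(x) for m in members))  # SUM aggregation
        num += x * mu
        den += mu
    return num / den if den else 0.0

# Three expert opinions: high, very high, very high.
w = centroid([HIGH, VERY_HIGH, VERY_HIGH])
print(round(w, 2))
```

The resulting crisp weight lands in the upper part of the [−1, 1] range, which is the kind of strong positive edge weight the experts' opinions suggest.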
According to the three experts, 27 common features were identified as the major risk factors for end-stage GC. Risk factors for gastric cancer may be categorized into four groups (personal features, systemic conditions, stomach condition, and diet food), each of which includes several risk factors. The final features are presented in Figure 1, and their explanations are given in Table 1.

Figure 1 Classification of GC risk factors.

Table 1 Risk factors of GC.

| Risk factor | Description |
|---|---|
| C1: sex | Studies show that men around the world are diagnosed with GC almost twice as often as women [18]. |
| C2: blood group | Scientific research shows a significant relationship between blood type and GC; blood groups A and O have the highest and lowest incidence of GC, respectively [19]. |
| C3: BMI | High BMI increases the risk of GC [20]. In 2016, the IACR formed a team of specialists, who reported that GC is one of the diseases caused by excessive fat gain and high BMI [21]. |
| C4: age | The risk of GC increases with age [18, 22, 23]. |
| C5: motility | People with any regular physical activity have a lower risk of GC than inactive people. According to the US Physical Activity Guidelines Advisory Committee (2018), moderate evidence shows that physical activity reduces the risk of various cancers, including GC [21]. |
| C6: alcohol consumption | Regular alcohol consumption increases the risk of GC [24, 25]. |
| C7: exposed to chemicals | Some jobs with exposure to chemicals, such as cement and chromium, increase the risk of GC [26]. |
| C8: smoking | Smoking increases the risk of GC [27, 28]. |
| C9: salt consumption | High salt intake increases the risk of GC [23, 29, 30]. |
| C10: consumption of vegetable | A daily consumption of 200-200 grams of vegetables may reduce the risk of GC [31]. |
| C11: consumption of smoked food | Smoked food is a major source of polycyclic aromatic hydrocarbons (PAHs); scientific research has shown that this biopollutant is one of the factors involved in many cancers, including GC [32, 33]. |
| C12: milk consumption | Increased dairy consumption, such as milk, is associated with a lower risk of GC [34]. |
| C13: fast food consumption | Fast food consumption is one of the factors affecting the incidence of GC [35]. |
| C14: consumption of fried foods | Scientific studies show that people who eat a lot of fried foods are at increased risk of GC [27, 28]. |
| C15: fruit consumption | A daily consumption of 120-150 grams of fruit may reduce the risk of GC [31]. |
| C16: food storage container | Today's food containers are often made of chemicals, such as plastics containing bisphenol A, and can thus be a source of various types of cancer and hormonal disorders [36]. |
| C17: baking dish | The use of metal containers, such as aluminum, for cooking can contribute to disease because these metals emit a small amount of lead when exposed to heat [37]. |
| C18: history of allergy | Recent studies indicate that a history of allergic diseases is associated with a lower risk of GC [38]. |
| C19: family history of cancer | A family history of cancer at certain specific sites may be associated with a risk of GC [39]. |
| C20: family of GC | This risk factor is strongly associated with different types of GC [40, 41]. |
| C21: history of cardiovascular disease | People with cardiovascular disease are at a lower risk of GC because of the use of some drugs [42]. |
| C22: general status | People with a good general health status are less likely to be at risk of GC [43]. |
| C23: history of gastric reflux | Gastric reflux causes a 3-10% increase in the risk of GC [44]. |
| C24: history of stomach surgery | Gastric surgeries, such as those for gastric ulcers, may increase the risk of cancer [45]. |
| C25: history of stomach infection | Helicobacter pylori is the most important risk factor for GC [46–48]. |
| C26: mucosa status | Gastric ulcers are considered a risk factor for GC [35]. |
| C27: history of gastric inflammation | A history of gastric inflammation is one of the most important factors in the incidence of GC [35]. |

In the second phase, the sign of the relationship between each pair of concepts is determined first, and the numerical value of the relationship is then calculated; five membership functions were used for this purpose. Consider the following example:

1st specialist: C4 has a great impact on C27.
2nd specialist: C4 has a moderate impact on C27.
3rd specialist: C4 has a great impact on C27.

Using the SUM method, the above three linguistic weights (high, very high, and very high) are aggregated. Figure 2 represents the centroid defuzzification method that is implemented to calculate the numerical value of the weight in the range [−1, 1].

Figure 2 Aggregation and defuzzification of linguistic weights.

Using this method, the weights of all relationships between the concepts of the FCM for GC were calculated; the developed FCM is shown in Figure 4. In the third step, a learning algorithm was used to train the model, updating the relationship weights, and a final fuzzy cognitive map for GC risk factors was extracted. For this purpose, data collected through a questionnaire from 560 patients referred to Imam Reza Hospital in Tabriz (after the preprocessing steps) were used. Table 2 shows the features, values, and frequencies of the patients.

Figure 3 User interface of the proposed MDSS.

Table 2 Data sets.
| Feature | Range | Number | Percent |
|---|---|---|---|
| Sex | Male | 256 | 45.7% |
| | Female | 304 | 54.3% |
| Age | <40 | 20 | 3.47% |
| | 41–60 | 210 | 37.5% |
| | ≥61 | 330 | 59.03% |
| Blood group | A | 123 | 21.96% |
| | B | 78 | 13.92% |
| | AB | 80 | 14.28% |
| | O | 279 | 49.82% |
| BMI | BMI > 30 | 69 | 12.32% |
| | 25 < BMI < 29.5 | 76 | 13.57% |
| | 18.5 < BMI < 24.9 | 120 | 21.42% |
| | BMI < 18.5 | 293 | 52.32% |
| Motility | Light | 156 | 27.85% |
| | Medium | 236 | 42.14% |
| | High | 168 | 30% |
| Alcohol consumption | Yes | 85 | 15.17% |
| | No | 475 | 84.82% |
| Exposed to chemicals | Yes | 54 | 9.64% |
| | No | 506 | 90.35% |
| Smoking | Yes | 198 | 35.35% |
| | No | 362 | 64.64% |
| Salt consumption | None | 10 | 1.78% |
| | Low | 175 | 31.25% |
| | High | 375 | 66.96% |
| Consumption of vegetable | Daily | 26 | 4.64% |
| | 1-3 times a week | 214 | 38.21% |
| | 1-3 times a month | 320 | 57.14% |
| Consumption of smoked food | None | 5 | 0.89% |
| | Daily | 0 | 0% |
| | 1-3 times a week | 149 | 26.60% |
| | 1-3 times a month | 406 | 72.5% |
| Milk consumption | Yes | 214 | 38.21% |
| | No | 346 | 61.78% |
| Fast food consumption | None | 4 | 0.71% |
| | 1-3 times a week | 315 | 56.25% |
| | 1-3 times a month | 241 | 43.03% |
| Consumption of fried foods | None | 0 | 0% |
| | 1-3 times a week | 191 | 34.10% |
| | 1-3 times a month | 369 | 65.89% |
| Fruit consumption | None | 6 | 1.07% |
| | 1-3 times a week | 185 | 33.03% |
| | 1-3 times a month | 369 | 65.89% |
| Food storage container | Aluminum | 216 | 38.57% |
| | Plastic | 301 | 53.75% |
| | Copper | 32 | 5.71% |
| | Style | 9 | 1.60% |
| | Chinese | 2 | 0.35% |
| Baking dish | Aluminum | 10 | 1.78% |
| | Teflon | 390 | 69.64% |
| | Copper | 21 | 3.75% |
| History of allergy | Yes | 89 | 15.89% |
| | No | 471 | 84.10% |
| Family history of cancer | Yes | 211 | 37.67% |
| | No | 349 | 62.32% |
| Family of GC | Yes | 123 | 21.96% |
| | No | 437 | 78.03% |
| History of cardiovascular disease | Yes | 185 | 33.03% |
| | No | 375 | 66.96% |
| General status | Good | 79 | 14.10% |
| | So-so | 190 | 33.92% |
| | Poor | 291 | 51.96% |
| History of gastric reflux | Yes | 234 | 41.78% |
| | No | 326 | 58.21% |
| History of stomach surgery | Yes | 48 | 8.57% |
| | No | 512 | 91.42% |
| History of stomach infection | Yes | 176 | 31.42% |
| | No | 384 | 68.57% |
| Mucosa status | Normal | 94 | 16.78% |
| | Swollen | 126 | 22.5% |
| | Red | 157 | 28.03% |
| | Sore | 183 | 32.67% |
| History of gastric inflammation | Yes | 163 | 29.10% |
| | No | 397 | 70.89% |
| Risk score | High | 300 | 53.57% |
| | Moderate | 186 | 33.21% |
| | Low | 74 | 8.39% |

Figure 4 shows the proposed FCM model for the risk factors of GC. This FCM has 28 concepts and 38 edges with their weights. Of the 28 concept nodes, 27 are the final physician-selected features that contribute to the disease, denoted C1 to C27. The central node is the concept of GC, which receives and collects the interactions from all other nodes.
A positive edge weight indicates a positive effect on the incidence of GC, and a negative weight indicates a deterrent role in the incidence of the disease. Yellow, purple, blue, and green indicate the category of each feature or concept: features C1 to C8, in yellow, belong to the personal features category; purple is used for features C9 to C17 of the diet food category; blue for features C18 to C22 of the systemic conditions category; and green for features C23 to C27 of the stomach condition category.

Figure 4 FCM model for GC risk factors.

## 2.2. Learning FCM Using NHL Algorithm

GC specialists were well positioned to create the FCM in our method. Nonlinear Hebbian Learning (NHL) is utilized to learn the weights because of the lack of access to a relatively large data set, its causal weight optimization, and its more accurate results [49]. Hebbian-based algorithms have been used for FCM training to determine the best weight matrix in terms of expert knowledge [50]; they set the FCM weights from the available data through an iterative learning formula based on the Hebbian rule [50]. The NHL algorithm is based on the assumption that all the concepts of the FCM model are stimulated at each time step and that their values change. At iteration k, the weight $\omega_{ji}$ connecting concepts $c_j$ and $c_i$ is corrected, and the value $A_i^{(k+1)}$ of concept $c_i$ at iteration k+1 is determined from the concept values $A_j^{(k)}$ and the corrected weights $\omega_{ji}^{(k)}$:

(1) $A_i^{(k+1)} = f\left(A_i^{(k)} + \sum_{j=1,\, j\neq i}^{n} \omega_{ji}^{(k)} \cdot A_j^{(k)}\right)$.

Each concept in the FCM model may be an input or an output concept. A number of concepts are defined as output concepts (OCs); these are the states of the system whose values we want to estimate, representing the final state of the system.
The classification of concepts as input and output concepts is made by the experts of the group according to the subject under consideration. The mathematical relations used in the NHL algorithm for learning the FCM are given in equations (1) and (2):

(2) $\Delta\omega_{ji} = \eta\, A_i^{(k-1)}\left(A_j^{(k-1)} - \omega_{ji}^{(k-1)} A_i^{(k-1)}\right)$,

where $\eta$ is a very small positive scalar, called the learning rate, whose value is obtained through trial and error.

(3) $\omega_{ji}^{(k)} = \gamma\,\omega_{ji}^{(k-1)} + \eta\, A_i^{(k-1)}\left(A_j^{(k-1)} - \mathrm{sgn}(\omega_{ji})\,\omega_{ji}^{(k-1)} A_i^{(k-1)}\right)$.

Equation (3) is the main equation of the NHL algorithm, and $\gamma$ is the weight decay parameter. The concept values and the weights $\omega_{ji}^{(k)}$ are calculated by equations (1) and (3), respectively. In fact, at each iteration the NHL algorithm updates the nonzero elements of the initial matrix suggested by the experts. The following criteria determine when the NHL algorithm terminates [50].

(a) The termination function $F_1$ is given as

(4) $F_1 = \left| OC_i - T_i \right|$,

where $T_i$ is the mean value of $OC_i$. This kind of metric function is suitable for the NHL algorithm used in FCMs; in each step, $F_1$ measures the Euclidean distance between $OC_i$ and $T_i$. Assuming that $OC_i = \left[T_i^{\min}, T_i^{\max}\right]$, $T_i$ is calculated by

(5) $T_i = \frac{T_i^{\min} + T_i^{\max}}{2}$.

Given that the FCM model has m output concepts, $F_1$ is computed as the sum of squares over the m OCs and their m targets:

(6) $F_1 = \sum_{j=1}^{m} \left( OC_j - T_j \right)^2$.

The algorithm stops once $F_1$ is minimized.

(b) The second termination condition is that the difference between two consecutive values of an OC be less than e; that is, the change at the (k+1)th iteration must satisfy

(7) $F_2 = \left| OC_i^{(k+1)} - OC_i^{(k)} \right| < e = 0.002$.

In this algorithm, the values of the parameters $\eta$ and $\gamma$ are determined through trial and error; after several tests, the values of $\eta$ and $\gamma$ giving the best-performing algorithm are selected. Finally, when the termination conditions are met, the final weight matrix ($\omega^{NHL}$) is obtained.

For the convenience of end-users, a graphical interface for the proposed system was designed using the GUI tools in MATLAB.
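The inference rule in equation (1), the weight update in equation (3), and the termination criterion $F_2$ in equation (7) can be sketched in a few lines of pure Python. This is a minimal illustration on a hypothetical three-concept map, not the 28-node GC model; the squashing function f is not specified in the paper and is assumed here to be a sigmoid, while η = 0.045 and γ = 0.98 are the best-performing values reported later (the authors' own implementation is in MATLAB):

```python
import math

def f(x):
    # Squashing function keeping concept values in (0, 1); the paper does not
    # specify f, so a sigmoid is assumed here.
    return 1.0 / (1.0 + math.exp(-x))

def nhl_step(A, W, eta=0.045, gamma=0.98):
    """One NHL iteration over a small FCM.
    A: list of concept activations; W[j][i]: weight of edge j -> i (0 if absent)."""
    n = len(A)
    # Equation (3): update only the nonzero, expert-suggested weights.
    W_new = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            w = W[j][i]
            if w != 0.0:
                sgn = 1.0 if w > 0 else -1.0
                W_new[j][i] = gamma * w + eta * A[i] * (A[j] - sgn * w * A[i])
    # Equation (1): propagate the activations through the updated weights.
    A_new = [f(A[i] + sum(W_new[j][i] * A[j] for j in range(n) if j != i))
             for i in range(n)]
    return A_new, W_new

# Hypothetical 3-concept map; concept 2 plays the role of the output concept (OC).
A = [0.6, 0.4, 0.5]
W = [[0.0, 0.0, 0.7],
     [0.0, 0.0, -0.3],
     [0.0, 0.0, 0.0]]
prev = A[2]
for _ in range(100):
    A, W = nhl_step(A, W)
    if abs(A[2] - prev) < 0.002:  # termination criterion F2, equation (7)
        break
    prev = A[2]
print(round(A[2], 3))
```

In the real model, the loop would additionally track $F_1$ against the expert-supplied target ranges of the output concepts.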
The user interface of the GC risk prediction software is shown in Figure 3. The user enters the requested information into the system; after receiving this information, the system applies the proposed NHL-FCM model and displays the risk assessment result.

To compare classification accuracy, the same data set was classified with other machine learning models. A backpropagation neural network, a support vector machine, a decision tree, and a Bayesian classifier were tested in the Weka toolkit V3.7. For this purpose, the Excel file containing the collected data was converted to .arff format so that it could be read by Weka, and the required data preprocessing steps were performed. In this software, cross-validation, which divides the labeled data set into several subsets, is one of the most common methods of evaluating classifier performance. 10-fold cross-validation was used for all the studied algorithms: it divides the data set into 10 parts and performs the test 10 times. In each step, one part serves as the test set and the other 9 parts as the training set, so each part is used once for testing and 9 times for training; as a result, the entire data set is covered for both training and testing.

The backpropagation neural network, used as the multilayer perceptron, had 27 input neurons, 10 hidden neurons, and 3 output nodes. For classification of the assessed risk into three classes (high, medium, and low), the support vector machine, the C4.5 decision tree, and the Naïve Bayes classifier were used. Given that the studied data are not linearly separable, the kernel technique, one of the most common techniques for solving linearly nonseparable problems, is needed to implement the SVM algorithm. In this method, a suitable kernel function is selected and applied.
In fact, the purpose of kernel functions is to linearize nonlinear problems. Several kernel functions are available in Weka; the RBF (Radial Basis Function) kernel was used to run the SVM algorithm. By selecting and running the C4.5 algorithm, the classification results can be viewed; the tree created by this algorithm can also be displayed graphically and is a large tree. The three categories of high risk, medium risk, and low risk were selected as target variables and the other characteristics as predictive variables. The leaves of the tree correspond to the target variables, and the model built by the tree can be read as a set of rules. Naïve Bayes was another classification algorithm implemented in Weka on the studied data, and its results were examined; this algorithm uses a probabilistic framework to solve classification problems.

## 3. Results

To analyze the performance of the proposed method, we divided the data into two parts. The proposed model was trained using 70% of the patient records (392 records) based on the NHL algorithm and tested using the remaining 30% (168 records). Of the 168 patient records selected randomly for testing, 56 were in the high category, 64 in the medium category, and 48 in the low category.

Root mean square error (RMSE), the performance measures accuracy, recall, and precision, and mean absolute error (MAE) are key behavior measures in the medical field [17] and are widely utilized in the literature. To determine accuracy, recall, and precision, the confusion matrix was utilized. A confusion matrix is a table that makes it possible to visualize the behavior of an algorithm. Table 3 represents the general scheme of a confusion matrix (with two classes, C1 and C2).

Table 3 Confusion matrix.
| | Actual class C1 | Actual class C2 |
|---|---|---|
| Predicted C1 | True positive (TP) | False positive (FP) |
| Predicted C2 | False negative (FN) | True negative (TN) |

The matrix contains two columns and two rows specifying the numbers of true negatives (TN), false negatives (FN), false positives (FP), and true positives (TP). TP is the number of samples of class C1 classified correctly. FP is the number of samples of class C2 classified incorrectly as C1. TN is the number of samples of class C2 classified correctly. FN is the number of samples of class C1 classified incorrectly as class C2.

(i) Accuracy: the ratio of correctly classified samples to the total number of tested samples, determined by

(8) $\mathrm{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP}$.

(ii) Recall: the fraction of instances of class C1 that are actually predicted correctly, calculated by

(9) $\mathrm{Recall} = \frac{TP}{TP + FN}$.

(iii) Precision: the classifier's ability not to label a C2 sample as C1, calculated by

(10) $\mathrm{Precision} = \frac{TP}{TP + FP}$.

The MAE performance index is calculated by

(11) $\mathrm{MAE} = \frac{1}{N}\sum_{L=1}^{N}\sum_{J=1}^{C}\left|OC_{JL}^{\mathrm{Real}} - OC_{JL}^{\mathrm{Predicted}}\right|$.

In equation (11), N represents the number of training data points (N = 560), C the number of output concepts (C = 3), and $\left|OC_{JL}^{\mathrm{Real}} - OC_{JL}^{\mathrm{Predicted}}\right|$ the difference between the Jth decision output concept (OC) and its corresponding real (target) value when the Lth set of input concepts is applied to the input of the tool.

The RMSE evaluation index is defined as

(12) $\mathrm{RMSE} = \sqrt{\frac{1}{NC}\sum_{L=1}^{N}\sum_{J=1}^{C}\left(OC_{JL}^{\mathrm{Real}} - OC_{JL}^{\mathrm{Predicted}}\right)^{2}}$,

where N is the number of training samples and C is the number of system outputs.

Table 4 shows the accuracy results obtained with the proposed method and other standard classifiers. The proposed method works better than the other classifiers because of the efficiency of NHL in correcting the FCM weights from very small data sets; as a result, optimal decisions are made for the output concepts.

Table 4 Performance metrics.
| Classifier | Predicted class | High | Medium | Low | Class recall (%) | Class precision (%) | Overall accuracy (%) | RMSE | MAE |
|---|---|---|---|---|---|---|---|---|---|
| Decision tree | High | 30 | 10 | 1 | 53.57 | 73.17 | 76.78 | 0.5120 | 0.721 |
| | Medium | 16 | 52 | 0 | 81.25 | 76.47 | | | |
| | Low | 10 | 2 | 47 | 97.91 | 79.66 | | | |
| Naïve Bayes | High | 40 | 8 | 5 | 71.42 | 75.47 | 80.35 | 0.334 | 0.645 |
| | Medium | 8 | 56 | 4 | 87.5 | 77.77 | | | |
| | Low | 8 | 0 | 39 | 81.25 | 82.97 | | | |
| SVM | High | 46 | 2 | 4 | 82.14 | 88.46 | 86.9 | 0.193 | 0.342 |
| | Medium | 0 | 60 | 4 | 93.75 | 93.75 | | | |
| | Low | 10 | 2 | 40 | 83.3 | 76.92 | | | |
| MLP-ANN | High | 49 | 2 | 7 | 87.5 | 84.48 | 90.47 | 0.248 | 0.097 |
| | Medium | 4 | 58 | 4 | 90.62 | 87.87 | | | |
| | Low | 3 | 4 | 45 | 93.75 | 86.53 | | | |
| Proposed model | High | 55 | 1 | 1 | 98.21 | 96.49 | 95.83 | 0.173 | 0.0471 |
| | Medium | 1 | 60 | 1 | 93.75 | 96.77 | | | |
| | Low | 0 | 3 | 46 | 95.83 | 93.87 | | | |

The results show that the highest overall accuracy belongs to the proposed method (95.83%), which is about 5% higher than the accuracy of the MLP-ANN algorithm. The highest precision and recall also belong to the proposed algorithm: 96.77% (medium) and 98.21% (high), respectively. The training error of the proposed NHL-based method is likewise lower than that of the other algorithms used in this study.

As stated, $\gamma$ and $\eta$ are the two learning parameters of the NHL algorithm. The upper and lower limits of these parameters were determined by trial and error in order to optimize the final solution. After several simulations with different values, it was observed that large values of $\eta$ cause significant changes in the weights and weight signs, while small values of $\gamma$ likewise create significant weight changes, preventing the concept values from entering the desired range. For this reason, the values were limited to $0 < \eta < 0.1$ and $0.9 < \gamma < 1$, and a constant value was used for these parameters in each run. After several investigations, the best classification performance was found for $\eta = 0.045$ and $\gamma = 0.98$. The classification results obtained for different values of the learning parameters are presented in Table 5.

Table 5 Classification results based on different values of η and γ.
| η | γ | Confusion matrix (rows: predicted high / medium / low; columns: actual high, medium, low) | Classification accuracy (%) |
|---|---|---|---|
| 0.01 | 0.97 | 50, 4, 7 / 4, 59, 1 / 2, 1, 40 | 88.69 |
| 0.03 | 0.95 | 45, 6, 1 / 5, 58, 0 / 6, 0, 47 | 89.28 |
| 0.045 | 0.98 | 55, 1, 1 / 1, 60, 1 / 0, 3, 46 | 95.83 |
| 0.05 | 0.96 | 54, 6, 0 / 1, 56, 0 / 1, 2, 48 | 94.04 |
| 0.055 | 0.96 | 53, 2, 5 / 2, 58, 0 / 1, 4, 43 | 91.6 |

## 4. Discussion

In this study, we designed a risk prediction model and a GC risk assessment tool using data from a study of patients referred to the gastroenterology unit of Imam Reza Hospital in Tabriz. The proposed model attempts to go beyond the analyses of individual clinical experts, increasing their ability to make logical decisions in a clinical setting for patients with different levels of GC risk factors and helping them choose optimal preventive measures for patients.

The 95.8% overall classification accuracy obtained with the Hebbian-based FCM on 560 patients indicates a high level of agreement between the proposed system and medical decisions; the proposed decision support tool can therefore be trusted by clinical professionals and can assist them in the GC risk assessment process. Specifically, our risk assessment tool is simple and inexpensive to use in the clinical environment, whereas many other methods for predicting the risk of GC are invasive. It is therefore an effective instrument for estimating the population at future risk of cancer. The results show that this new model can predict the probability of developing GC from the characteristics specified in this study with better accuracy than previous studies.

In recent years, several studies have been carried out on the development and validation of risk assessment tools for various cancers [51, 52]. Recent studies have shown that the combination of H. pylori antibody and serum pepsinogen can be a good predictor of GC [53, 54]. We believe that only two other evaluation instruments exist for GC besides ours.
Based on the Japan Public Health Center-based Prospective Study, a tool was designed to estimate the cumulative probability of GC incidence, with sex, age, smoking status, the combination of H. pylori antibody and serum pepsinogen, consumption of salty food, and family history of GC as the risk factors [55]. The model showed good performance in terms of calibration and discrimination. In [2], a risk evaluation instrument for GC was proposed for the general population of Japan; in this work, gender, age, the combination of Helicobacter pylori antibody and pepsinogen status, smoking status, and hemoglobin A1C level were the risk factors for GC.

The risk factors chosen in these two studies were limited to a few specific characteristics and had little similarity to the factors in our study. Risk factors such as consumption of fruits and vegetables, alcohol consumption, history of cardiovascular disease, blood type, milk consumption, history of allergy, gastric reflux, food storage containers, food intake, and family history of cancer did not appear in either study despite their importance in previous research. Factors such as salt intake and a history of GC are known causes of GC that were not included in [2]. Another remarkable point of our study is that, given the nature of the proposed model, this method captures the effects of factors that are sometimes related to each other, or even mutually reinforcing, which is not the case in the two previous studies. A further advantage of the proposed method over other algorithms is that the other methods cannot provide any explicit causal relationships; such a system works as a black box, which also makes those algorithms less suited to medical decision support systems.
Finally, the new system has the following benefits:

(i) It examines factors that were not taken into account in previous models for assessing the risk of GC
(ii) Because of the use of these new factors, the model can be more effective in predicting the risk of GC
(iii) The proposed model is delivered as software with a simple, convenient, and user-friendly interface
(iv) The use of this software by physicians and other researchers can support individual healthcare decisions
(v) It helps healthcare professionals decide on individual risk management mechanisms

The system presented in this study has the following limitations: (1) a small sample of patients was used to learn and predict GC; (2) the model depends heavily on the knowledge of domain specialists; (3) it is sensitive to the initial conditions and relationships; and (4) the prediction system lacks external validation. Although this system yields good results owing to the use of an appropriate database and the important, relevant GC factors, the generalizability of our results cannot be proved without testing the system on another data set. As a result, it is necessary to use a larger statistical population to test the proposed model.

## 5. Conclusions

Assessing the level of risk for GC is very important and helps make decisions about screening. Among the limited number of GC risk assessment tools proposed so far, none comprehensively covers the risk factors identified in scientific studies on GC. The proposed model, based on soft computing, covers all the factors influencing the incidence of GC. The classification accuracy of the proposed method is higher than that of other machine learning classification methods, such as the decision tree and SVM. This is due to the useful features of FCM: domain knowledge is used to determine the initial structure of the FCM and the initial weights, and the NHL algorithm is then used to train the FCM model and adjust these weights.
The FCM-based model is comprehensive, transparent, and more effective than previous models for assessing the risk of GC. As a result, this risk assessment tool can help diagnose people with a high risk of GC and help both healthcare providers and patients with the decision-making process. Our future work is to use more features and variations and other learning algorithms to determine the weight of the edges in the FCM. --- *Source: 1016284-2020-10-05.xml*
2020
# Identification of Pharmacologically Tractable Protein Complexes in Cancer Using the R-Based Network Clustering and Visualization Program MCODER **Authors:** Sungjin Kwon; Hyosil Kim; Hyun Seok Kim **Journal:** BioMed Research International (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1016305 --- ## Abstract Current multiomics assay platforms facilitate systematic identification of functional entities that are mappable in a biological network, and computational methods that are better able to detect densely connected clusters of signals within a biological network are considered increasingly important. One of the most famous algorithms for detecting network subclusters is Molecular Complex Detection (MCODE). MCODE, however, is limited in simultaneous analyses of multiple, large-scale data sets, since it runs on the Cytoscape platform, which requires extensive computational resources and has limited coding flexibility. In the present study, we implemented the MCODE algorithm in R programming language and developed a related package, which we called MCODER. We found the MCODER package to be particularly useful in analyzing multiple omics data sets simultaneously within the R framework. Thus, we applied MCODER to detect pharmacologically tractable protein-protein interactions selectively elevated in molecular subtypes of ovarian and colorectal tumors. In doing so, we found that a single molecular subtype representing epithelial-mesenchymal transition in both cancer types exhibited enhanced production of the collagen-integrin protein complex. These results suggest that tumors of this molecular subtype could be susceptible to pharmacological inhibition of integrin signaling. --- ## Body ## 1. Introduction Biological functions often arise from multisubunit protein complexes, rather than a single, isolated protein [1, 2]. 
Many high throughput assay platforms in genomics, transcriptomics, and proteomics have become standard methods for investigating gene/protein interactions that give rise to biological functions [3]. However, because of biological and technical errors, these methods are hindered by a limited signal-to-noise ratio, rendering them vulnerable to high rates of false positives and false negatives, particularly when discovered hits represent a single gene or protein. In this regard, codiscovery of hits for multiple subunits of a protein complex in an experimental condition helps mutually support the significance of such findings [4]. Detection of higher order clusters in a large network, however, is computationally challenging [5]. A number of algorithms have been developed over the past decade to tackle this problem, including the Markov Cluster Algorithm (MCL) [6], Molecular Complex Detection (MCODE) [7], DPClus [8], Affinity Propagation Clustering (APC) [9], Clustering based on Maximal Clique (CMC) [10], ClusterMaker [11], and Clustering with Overlapping Neighborhood Expansion (ClusterONE) [12]. Many of these algorithms have been implemented in various Cytoscape applications (CytoCluster, ClusterViz [13], and ClusterMaker [11]), as well as in Java-based applications (C-DEVA [14]). Of these, as of February 2017, MCODE was the most downloaded Cytoscape application within the clustering category. MCODE discovers interconnected network clusters based on the k-core score: the k-core of a particular graph X is the maximal subgraph of X in which every node is connected to at least k other nodes (i.e., has a minimum degree of k). Although Cytoscape is a Java-based, open source bioinformatics software platform with a user-friendly graphical user interface [15], it requires extensive computational resources due to the memory constraints of Java virtual machines (Cytoscape version 3.2.1: 2 GB+ recommended).
Thus, its capacity to process input networks and graphical outputs is limited. For computationally intensive tasks, R may be a better-suited platform. R is the most popular open source statistical programming language and data analysis platform used in the analysis of broad, high throughput, multiomics data. While the platform is suitable for iterative analysis of large-scale data sets in batch mode, R-based network clustering software is rare. Herein, we describe our implementation of the MCODE algorithm in the R programming language and a related package, hereinafter referred to as MCODER. The MCODER package can be easily integrated into custom R projects and provides powerful and enhanced graphical output options compared to its Cytoscape counterpart.

The Cancer Genome Atlas projects have classified tumors into subtypes that share distinct molecular and genetic features. To do so, researchers have leveraged multiomics data sets, including global and phosphoproteomic quantification, as well as DNA- and RNA-level measurements. Nevertheless, drawing associations between these subtypes and clinically important features, such as prognosis and therapeutic options, remains an important challenge. In this study, we focus on these challenges in high-grade serous ovarian carcinoma (HGS-OvCa) and colorectal cancer (CRC). Currently, standard treatment for ovarian cancer involves primary cytoreductive surgery, followed by platinum-based chemotherapy. Only two targeted therapies are clinically available for ovarian cancer, namely, poly (ADP-ribose) polymerase inhibitors and angiogenesis inhibitors in recurrent ovarian cancer [16], although they have been shown to offer little survival benefit. The four molecular subtypes of HGS-OvCa are differentiated, immunoreactive, proliferative, and mesenchymal, according to gene content analysis within each subtype, following transcriptome-based subtype classification [17, 18].
Of these, the mesenchymal subtype displays the worst prognosis [19, 20]. Meanwhile, CRC has four consensus molecular subtypes (CMSs): CMS1, CMS2, CMS3, and CMS4. The CMS subtypes of CRC are associated with various clinical features, such as sex, tumor site, stage at diagnosis, histopathological grade, and prognosis, as well as molecular features of microsatellite status, CpG island methylator phenotype (CIMP), somatic copy number alteration (SCNA), and enrichment of particular driver mutations. The CMS1 subtype exhibits high MSI, high CIMP, strong immune activation, and frequentBRAF mutation and involves an intermediate prognosis, showing worse survival after relapse. The CMS2 subtype displays a high degree of chromosomal instability (high SCNA), frequentAPC mutation, and good prognosis. Tumors of the CMS3 subtype display mixed MSI, high SCNA, frequentKRAS mutation, metabolic deregulation, and good prognosis. Finally, the CMS4 subtype is characterized by distinct epithelial-mesenchymal transition (EMT) signature, high SCNA, and the poorest prognosis. Notably, in both cancers, mesenchymal subtype confers the worst prognosis. To gain insights into molecular subtype-selective opportunities for targeted therapies in ovarian and colorectal cancer, HGS-OvCa [21] and CRC [22] data sets were analyzed using MCODER. Both data sets contained mass-spec-based quantitative proteomic assay results for the well-defined molecular subtypes of these cancers. In particular, we aimed to identify pharmacologically tractable protein complexes selectively elevated within the distinct molecular subtypes of both cancers. ## 2. Implementation MCODER identifies the maximal subset of vertices interconnected by the minimal number of degrees (k) from an input network of nodes (genes or proteins) and edges (pairwise interactions). 
Although the MCODER package does not account for the direction of the edges when calculating k-core scores and when detecting subnetworks, it can indicate directions using arrows and display multiple edges between a pair of nodes, which is not supported by the original MCODE. Moreover, various graphical parameters provided by “igraph” (http://igraph.org/redirect.html) can be manipulated in MCODER, facilitating customization of the shape, size, and color of the network output. The MCODER R package requires preinstallation of two other packages, “sna” (Social Network Analysis) (https://cran.r-project.org/web/packages/sna/index.html) for calculating k-cores and “igraph” for plotting figures.The overall workflow of the present study to identify pharmacologically tractable protein complexes is presented in Figure1. Before running MCODER, we downloaded the STRING database (Homo sapiens, v10.0) from http://string-db.org: STRING is an archive of direct (physical) and indirect (functional) protein-protein interactions [23]. We filtered low confident interactions by applying an interaction-score cutoff (score < 0.4) to obtain 13,159 genes with 738,312 interactions. In parallel, we downloaded and preprocessed proteome data sets by selecting samples that have preassigned molecular subtypes and matched normal controls to obtain input data sets: HGS-OvCa (n = 3,329 proteins, 140 samples) and CRC (n = 3,718 proteins, 70 samples) [21, 22]. HGS-OvCa consisted of four molecular subtypes: differentiated (n = 35 samples), immunoreactive (n = 37 samples), proliferative (n = 34 samples), and mesenchymal (n = 34 samples). CRC consisted of four molecular subtypes: CMS1 (n = 14 samples), CMS2 (n = 28 samples), CMS3 (n = 9 samples), and CMS4 (n = 14 samples). To identify differentially expressed proteins (DEPs) selectively elevated in a particular molecular subtype, a one-sided t-test was conducted iteratively within a tumor (e.g., CMS1 versus CMS2, CMS3, CMS4). 
After preparing differentially expressed protein sets, we converted them into adjacency matrices for each set, with connection information between nodes according to the STRING database, followed by calculation of k-core values, vertex density, and vertex score. Self-loop and duplicated connections between nodes were not considered for the calculation. Clusters were detected with the following parameters: minimal k-core value = 2, haircut = TRUE, fluff = FALSE, self-loop = FALSE, node score cutoff = 0.2, depth = 20, and degree cutoff = 2. Subsequently, vertices in the clusters were annotated according to the DGI database [24], allowing for detection of druggable DEPs.Figure 1 Workflow for detecting densely connected network clusters using MCODER. See Implementation for further details. ## 3. Results First, we examined the performance of MCODER (Figure1) in comparison to the MCODE Cytoscape application, testing input networks of different sizes (Table 1). All tests were performed using MacBook Pro (Mac OS X, Late 2013, 2.4-GHz Intel Core i5, 8 GB RAM). Input data sets were prepared by random sampling of the given number of interactions from the STRING database. We found that both software packages returned identical protein complexes as an output. Meanwhile, however, MCODER in the R environment offered enhanced performance in regard to speed and memory usage in all test settings (Table 1). The MCODER installation package is available online at https://sourceforge.net/projects/mcoder.Table 1 Comparison of computational time and memory usage between MCODER and the MCODE Cytoscape application. Network size Performance MCODER Cytoscape MCODE 5K edges, 2,902 vertexes 6 s. 1 m. 14 s. 100K edges, 3,786 vertexes 11 s. 3 m. 44 s. 200K edges, 4,625 vertexes 19 s. 18 m. 47 s. 
Memory usage 0.45 GB 5 GBNext, for the individual molecular subtypes, we identified selectively elevated proteins under ap value threshold of 0.01: 300 proteins for differentiated, 284 proteins for immunoreactive, 547 proteins for proliferative, and 493 proteins for mesenchymal HGS-OvCa and 236 proteins for CMS1, 284 proteins for CMS2, 134 proteins for CMS3, and 137 proteins for CMS4 subtypes of CRC (see Supplementary Data 1 in the Supplementary Material available online at https://doi.org/10.1155/2017/1016305). For each of the DEP sets, MCODER identified highly interconnected subnetworks of protein-protein interactions. For HGS-OvCa, we detected pharmacologically targetable clusters in three of the four subtypes (Supplementary Data 2). In the immunoreactive subtype, two clusters showed connections with pharmacological agents. The first cluster contained interferon-stimulated gene 15(ISG15), which is a biomarker for predicting sensitivity to irinotecan, an anticancer drug and topoisomerase I inhibitor (Figure 2(a)). Previous studies have demonstrated thatISG15 encodes an ubiquitin-like protein conjugated to specific E3 ubiquitin ligases and seems to inhibit the signaling consequences of ubiquitin/26S proteasome pathways [25]. Currently, treatments with irinotecan, in combination with bevacizumab or cisplatin, are in clinical trials for recurrent ovarian cancer [26]. Our findings suggest that selecting patients with immunoreactive features might increase response rates to irinotecan in future trials. The second cluster comprised a chemokine signaling related protein complex, including STAT3, which can be inhibited by RTA402, acitretin, and atiprimod (Figure 2(b)). Previous studies have indicated that STAT3 inhibitors, in combination with cisplatin, enhance cisplatin sensitivity in cisplatin-resistant ovarian cancer [27, 28]. Thus, a combination of irinotecan and STAT3 inhibitors might be plausible in treating ovarian cancers of immunoreactive subtype. 
In the proliferative subtype, two clusters displayed connections with pharmacological compounds. The CDK2-proteasome-XPO1 cluster was enriched with pharmacological options, including the proteasome inhibitor bortezomib, which is available clinically, and CDK2 and XPO1 inhibitors, which are under active clinical trials for various tumor types (Figure 2(c)) [29, 30]: XPO1 inhibitors have been used to target platinum-resistant ovarian tumors [31] and have been described as potentially inhibiting abnormal NF-kB signaling [32]. The second DEP cluster was the tubulin complex, in which TUBA4A can be targeted by vincristine to blunt mitotic chromosomal separation (Figure 2(d)). Similar to paclitaxel, a microtubule stabilizer and an antiproliferative agent [33], vincristine may be a potential agent for the treatment of ovarian cancer, particularly that of proliferative subtype. In the mesenchymal subtype, focal adhesion, endocytosis, vascular smooth muscle contraction, the PI3K-AKT signaling pathway, and so forth were identified. Of these, the integrin-collagen complex is a pharmacologically tractable target; various integrin signaling inhibitors include ITGA5 inhibitors (JSM 6427, PF-04605412, Volociximab, and Vitaxin), ITGB1 inhibitors (Volociximab, JSM 6427, R411, Vitaxin, and PF-04605412), and an ITGAV inhibitor (L-000845704) (Figure 2(e)). Integrin signaling is involved in the migration, invasion, proliferation, and survival of cancer cells [34]. Recently published studies have demonstrated that integrins participate in maintaining cancer stem cell populations and contribute to cancer progression and drug resistance [35]. 
Although integrin inhibitors as monotherapy agents have failed to demonstrate benefits in metastatic ovarian tumors, possibly due to compensation by other integrins [36], simultaneous targeting of integrin-FAK and c-Myc signaling has been found to synergistically disrupt tumor cell proliferation and survival in HGS-OvCa [37], supporting the notion of combinatorial targeting of integrin as a valid approach for treating ovarian cancer, particularly that of mesenchymal subtype.Figure 2 Pharmacologically targetable network clusters overexpressed in molecular subtypes of HGS-OvCa: (a, b) immunoreactive, (c, d) proliferative, and (e) mesenchymal subtype.For CRC, MCODER identified pharmacologically targetable protein complexes in three of the four CMSs (Supplementary Data 2). In CMS1 subtype (MSI immune), proteasome complex (similar to the HGS-OvCa proliferative subtype) and ROCK1 signaling subnetworks were found to be overexpressed (Figures3(a)-3(b)). Bortezomib treatment has been shown to induce G2-M arrest by activation of an ataxia-telangiectasia mutated protein-cell cycle checkpoint kinase 1 pathway in colon cancer cells [38]. Combination of platelet-derived growth factor and the ROCK inhibitor Y27632 has been found to decrease the invasive potential of SW620 colon cancer cells [39]. In the CMS2 subtype (canonical), tubulin complex was found to be elevated, similar to the HGS-OvCa proliferative subtype (Figure 3(c)). This observation suggests that vincristine could have therapeutic effects on CRCs of CMS2 subtype. Alternatively, or in combination with microtubule inhibitors, Src inhibitors may also be a plausible approach for CMS2 tumors (Figure 3(d)). The CMS4 subtype of CRCs exhibits EMT activation and confers the poorest prognosis. Other study groups have formerly referred to this subtype as colon cancer subtype 3 [40] or stem-like subtype [41]. 
In CMS4 tumors, we found the total MAPK3 (ERK1) protein complex to be elevated, which is targetable with ERK inhibitor II (Figure 3(e)). Surprisingly, in accordance with the HGS-OvCa mesenchymal subtype, CMS4 was also characterized by elevation of the extracellular matrix collagen-integrin complex (Figure 3(f)): collagen in the extracellular matrix has indeed been found to drive EMT in CRC [42]. Thus, the collagen-integrin protein complex may work as a molecular linchpin that, when removed, could diminish the malignant potential of EMT tumors. Accordingly, we suggest that therapeutic antibodies that interrupt the signaling of integrin proteins could potentially be utilized as therapeutic options, in combination with other chemo- or targeted therapies, for this refractory subtype of colon cancer.Figure 3 Pharmacologically targetable network clusters overexpressed in molecular subtypes of CRC: (a, b) CMS1, (c, d) CMS2, and (e, f) CMS4.Finally, we sought to determine whether our findings are reproducible with other network clustering algorithms, including ClusterONE [12] and MCL [6]. Although the sizes of the detected clusters varied, all of the subclusters detected by MCODER were identified by these algorithms as well, indicating that our findings are robust across different clustering algorithms. ## 4. Discussion In this study, we implemented the network clustering algorithm MCODE into the R software environment (which we called MCODER) and demonstrated that the MCODER package saves computational resources and time, making it particularly suited for analyzing multiple omics data sets. Using MCODER, we identified potential candidates for anticancer therapy in molecular subtypes of ovarian and colorectal cancer by detecting protein complexes that were selectively overexpressed therein and that could be targeted with known pharmacological agents. 
For HGS-OvCa, we found that irinotecan and STAT3 inhibitors may be candidates for the immunoreactive subtype, along with bortezomib, CDK2, XPO1 inhibitors, and vincristine for the proliferative subtype and integrin signaling inhibitors for the mesenchymal subtype. For CRC, we found bortezomib and ROCK inhibitors to be potential candidates for the CMS1 subtype, along with vincristine and Src inhibitors for the CMS2 subtype and ERK inhibitor II and integrin signaling inhibitors for the CMS4 subtype. Importantly, our analyses revealed that the collagen-integrin protein complex, which is pharmacologically tractable, is commonly overexpressed in EMT subtypes of both ovarian and colorectal cancers. Further studies are needed to determine whether pharmacological inhibition of collagen-integrin signaling blunts tumor growth in an in vivo model of EMT cancer. --- *Source: 1016305-2017-06-13.xml*
# Identification of Pharmacologically Tractable Protein Complexes in Cancer Using the R-Based Network Clustering and Visualization Program MCODER

**Authors:** Sungjin Kwon; Hyosil Kim; Hyun Seok Kim

**Journal:** BioMed Research International (2017)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2017/1016305
---

## Abstract

Current multiomics assay platforms facilitate systematic identification of functional entities that are mappable in a biological network, and computational methods that can detect densely connected clusters of signals within such a network are increasingly important. One of the best-known algorithms for detecting network subclusters is Molecular Complex Detection (MCODE). MCODE, however, is limited in simultaneous analyses of multiple, large-scale data sets, since it runs on the Cytoscape platform, which requires extensive computational resources and has limited coding flexibility. In the present study, we implemented the MCODE algorithm in the R programming language and developed a related package, which we called MCODER. We found the MCODER package to be particularly useful for analyzing multiple omics data sets simultaneously within the R framework. We therefore applied MCODER to detect pharmacologically tractable protein-protein interactions selectively elevated in molecular subtypes of ovarian and colorectal tumors. In doing so, we found that, in both cancer types, the molecular subtype representing epithelial-mesenchymal transition exhibited enhanced production of the collagen-integrin protein complex. These results suggest that tumors of this molecular subtype could be susceptible to pharmacological inhibition of integrin signaling.

---

## Body

## 1. Introduction

Biological functions often arise from multisubunit protein complexes, rather than a single, isolated protein [1, 2]. Many high throughput assay platforms in genomics, transcriptomics, and proteomics have become standard methods for investigating gene/protein interactions that give rise to biological functions [3].
However, because of biological and technical errors, these methods are hindered by a limited signal-to-noise ratio, rendering them vulnerable to high rates of false positives and false negatives, particularly when discovered hits represent a single gene, protein, and so forth. In this regard, codiscovery of hits for multiple subunits of a protein complex in an experimental condition helps mutually support the significance of such findings [4]. Detection of higher order clusters in a large network, however, is computationally challenging [5]. A number of algorithms have been developed over the past decade to tackle this problem, including the Markov Cluster Algorithm (MCL) [6], Molecular Complex Detection (MCODE) [7], DPClus [8], Affinity Propagation Clustering (APC) [9], Clustering based on Maximal Clique (CMC) [10], ClusterMaker [11], and Clustering with Overlapping Neighborhood Expansion (ClusterONE) [12]. Many of these algorithms have been implemented in various Cytoscape applications (CytoCluster, ClusterViz [13], and ClusterMaker [11]), as well as in Java-based applications (C-DEVA [14]). Of these, as of February 2017, MCODE was the most downloaded Cytoscape application within the clustering category. MCODE discovers interconnected network clusters based on the k-core score: the k-core of a graph X is the maximal subgraph of X in which every node is connected to at least k other nodes, that is, has a minimum degree of k. Although Cytoscape is a Java-based, open source bioinformatics software platform with a user-friendly graphical user interface [15], it requires extensive computational resources due to the memory constraints of Java virtual machines (Cytoscape version 3.2.1: 2 GB+ recommended). Thus, its capacity to process input networks and graphical outputs is limited. For a computationally intensive task, R may be a better-suited platform.
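The k-core criterion above can be made concrete with a short sketch. MCODER itself computes k-cores in R via the "sna" package; purely as a language-agnostic illustration of the pruning idea (the function and toy graph below are hypothetical, not part of MCODE or MCODER), one can repeatedly delete vertices of degree less than k until none remain:

```python
def k_core(edges, k):
    """Return the vertex set of the k-core of an undirected graph.

    Repeatedly removes vertices with degree < k; whatever survives
    is the (unique) maximal subgraph of minimum degree k.
    """
    adj = {}
    for u, v in edges:
        if u != v:  # ignore self-loops
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    while True:
        low = [n for n, nbrs in adj.items() if len(nbrs) < k]
        if not low:
            return set(adj)
        for node in low:
            for nbr in adj.pop(node):
                if nbr in adj:
                    adj[nbr].discard(node)

# A triangle (A, B, C) with a pendant vertex D: the 2-core is the triangle.
print(sorted(k_core([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")], 2)))
# ['A', 'B', 'C']
```

For k = 3 the same toy graph has an empty 3-core, since no vertex can retain degree 3 once D is pruned.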
R is the most popular open source statistical programming language and data analysis platform for analyzing broad, high throughput, multiomics data. While the platform is well suited to iterative analysis of large-scale data sets in batch mode, R-based network clustering software is rare. Herein, we describe our implementation of the MCODE algorithm in the R programming language and a related package, hereinafter referred to as MCODER. The MCODER package can be easily integrated into custom R projects and provides powerful, enhanced graphical output options compared to its Cytoscape counterpart.

The Cancer Genome Atlas projects have classified tumors into subtypes that share distinct molecular and genetic features. To do so, researchers have leveraged multiomics data sets, including global and phosphoproteomic quantification, as well as DNA- and RNA-level measurements. Nevertheless, drawing associations between these subtypes and clinically important features, such as prognosis and therapeutic options, remains an important challenge. In this study, we focused on these challenges in high-grade serous ovarian carcinoma (HGS-OvCa) and colorectal cancer (CRC). Currently, standard treatment for ovarian cancer involves primary cytoreductive surgery, followed by platinum-based chemotherapy. Only two targeted therapies are clinically available for ovarian cancer: poly (ADP-ribose) polymerase inhibitors and, for recurrent ovarian cancer, angiogenesis inhibitors [16]; both have been shown to offer little survival benefit. Transcriptome-based classification, followed by analysis of the gene content within each subtype, has defined four molecular subtypes of HGS-OvCa: differentiated, immunoreactive, proliferative, and mesenchymal [17, 18]. Of these, the mesenchymal subtype displays the worst prognosis [19, 20]. Meanwhile, CRC has four consensus molecular subtypes (CMSs): CMS1, CMS2, CMS3, and CMS4.
The CMS subtypes of CRC are associated with various clinical features, such as sex, tumor site, stage at diagnosis, histopathological grade, and prognosis, as well as molecular features, including microsatellite instability (MSI) status, CpG island methylator phenotype (CIMP), somatic copy number alteration (SCNA), and enrichment of particular driver mutations. The CMS1 subtype exhibits high MSI, high CIMP, strong immune activation, and frequent BRAF mutation and involves an intermediate prognosis, with worse survival after relapse. The CMS2 subtype displays a high degree of chromosomal instability (high SCNA), frequent APC mutation, and good prognosis. Tumors of the CMS3 subtype display mixed MSI, high SCNA, frequent KRAS mutation, metabolic deregulation, and good prognosis. Finally, the CMS4 subtype is characterized by a distinct epithelial-mesenchymal transition (EMT) signature, high SCNA, and the poorest prognosis. Notably, in both cancers, the mesenchymal subtype confers the worst prognosis. To gain insights into molecular subtype-selective opportunities for targeted therapies in ovarian and colorectal cancer, HGS-OvCa [21] and CRC [22] data sets were analyzed using MCODER. Both data sets contained mass-spec-based quantitative proteomic assay results for the well-defined molecular subtypes of these cancers. In particular, we aimed to identify pharmacologically tractable protein complexes selectively elevated within the distinct molecular subtypes of both cancers.

## 2. Implementation

From an input network of nodes (genes or proteins) and edges (pairwise interactions), MCODER identifies the maximal subsets of vertices in which every vertex is connected to at least a minimal number (k) of others. Although the MCODER package does not account for edge direction when calculating k-core scores and detecting subnetworks, it can indicate directions using arrows and display multiple edges between a pair of nodes, neither of which is supported by the original MCODE.
Moreover, the various graphical parameters provided by "igraph" (http://igraph.org/redirect.html) can be manipulated in MCODER, facilitating customization of the shape, size, and color of the network output. The MCODER R package requires preinstallation of two other packages: "sna" (Social Network Analysis) (https://cran.r-project.org/web/packages/sna/index.html) for calculating k-cores and "igraph" for plotting figures.

The overall workflow of the present study to identify pharmacologically tractable protein complexes is presented in Figure 1. Before running MCODER, we downloaded the STRING database (Homo sapiens, v10.0) from http://string-db.org: STRING is an archive of direct (physical) and indirect (functional) protein-protein interactions [23]. We filtered out low-confidence interactions (interaction score < 0.4), retaining 13,159 genes with 738,312 interactions. In parallel, we downloaded and preprocessed proteome data sets by selecting samples with preassigned molecular subtypes and matched normal controls to obtain the input data sets: HGS-OvCa (n = 3,329 proteins, 140 samples) and CRC (n = 3,718 proteins, 70 samples) [21, 22]. HGS-OvCa comprised four molecular subtypes: differentiated (n = 35 samples), immunoreactive (n = 37 samples), proliferative (n = 34 samples), and mesenchymal (n = 34 samples). CRC comprised four molecular subtypes: CMS1 (n = 14 samples), CMS2 (n = 28 samples), CMS3 (n = 9 samples), and CMS4 (n = 14 samples). To identify differentially expressed proteins (DEPs) selectively elevated in a particular molecular subtype, a one-sided t-test was conducted iteratively within each tumor type, comparing one subtype against the remaining subtypes (e.g., CMS1 versus CMS2, CMS3, and CMS4). After preparing the differentially expressed protein sets, we converted each set into an adjacency matrix, with connection information between nodes taken from the STRING database, followed by calculation of k-core values, vertex density, and vertex scores.
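The subtype-versus-rest screen described above can be sketched as follows. This is a hedged illustration, not the authors' actual R pipeline: the protein names and abundances are invented, and, to keep the sketch self-contained, the paper's p < 0.01 threshold is replaced by a fixed cutoff on Welch's t statistic rather than a proper p value.

```python
from statistics import mean, variance

def welch_t(xs, ys):
    """Welch's t statistic; positive when xs is elevated over ys."""
    return (mean(xs) - mean(ys)) / (
        (variance(xs) / len(xs) + variance(ys) / len(ys)) ** 0.5
    )

def elevated_proteins(expr, subtype_samples, other_samples, t_cut=2.6):
    """One-sided 'subtype versus rest' screen for elevated proteins.

    `expr` maps protein -> {sample: abundance}. A protein is kept when
    its Welch t statistic (subtype minus rest) exceeds `t_cut`, a
    stand-in here for the paper's p < 0.01 threshold.
    """
    hits = []
    for protein, values in expr.items():
        xs = [values[s] for s in subtype_samples if s in values]
        ys = [values[s] for s in other_samples if s in values]
        if len(xs) > 1 and len(ys) > 1 and welch_t(xs, ys) > t_cut:
            hits.append(protein)
    return hits

# Toy data: STAT3 is clearly elevated in the subtype; ACTB is flat.
expr = {
    "STAT3": {"s1": 9.0, "s2": 9.5, "s3": 8.8, "o1": 5.0, "o2": 5.2, "o3": 4.9},
    "ACTB":  {"s1": 7.0, "s2": 7.1, "s3": 6.9, "o1": 7.0, "o2": 7.2, "o3": 6.8},
}
print(elevated_proteins(expr, ["s1", "s2", "s3"], ["o1", "o2", "o3"]))
# ['STAT3']
```

Running the same screen once per subtype, against the union of the remaining subtypes, reproduces the iterative comparison scheme the text describes.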
Self-loops and duplicated connections between nodes were not considered in the calculation. Clusters were detected with the following parameters: minimal k-core value = 2, haircut = TRUE, fluff = FALSE, self-loop = FALSE, node score cutoff = 0.2, depth = 20, and degree cutoff = 2. Subsequently, vertices in the clusters were annotated according to the DGI database [24], allowing for detection of druggable DEPs.

Figure 1: Workflow for detecting densely connected network clusters using MCODER. See Implementation for further details.

## 3. Results

First, we examined the performance of MCODER (Figure 1) in comparison to the MCODE Cytoscape application, testing input networks of different sizes (Table 1). All tests were performed using a MacBook Pro (Mac OS X, Late 2013, 2.4-GHz Intel Core i5, 8 GB RAM). Input data sets were prepared by random sampling of the given number of interactions from the STRING database. We found that both software packages returned identical protein complexes as output. However, MCODER in the R environment offered better performance, in regard to both speed and memory usage, in all test settings (Table 1). The MCODER installation package is available online at https://sourceforge.net/projects/mcoder.

Table 1: Comparison of computational time and memory usage between MCODER and the MCODE Cytoscape application.

| Network size | MCODER | Cytoscape MCODE |
| --- | --- | --- |
| 5K edges, 2,902 vertexes | 6 s | 1 m 14 s |
| 100K edges, 3,786 vertexes | 11 s | 3 m 44 s |
| 200K edges, 4,625 vertexes | 19 s | 18 m 47 s |
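The preprocessing rule just stated, restricting edges to the DEP set while discarding self-loops and duplicated connections, amounts to the following sketch (illustrative only: the protein names and edge list are toy placeholders, not actual STRING data):

```python
def dep_adjacency(interactions, deps):
    """Undirected adjacency map over a DEP set.

    `interactions` is a list of (protein_a, protein_b) pairs. Edges
    touching proteins outside `deps`, self-loops, and duplicated
    connections are all dropped, mirroring the preprocessing step
    described in the text.
    """
    dep_set = set(deps)
    adj = {p: set() for p in deps}  # preserves the DEP list's order
    for a, b in interactions:
        if a != b and a in dep_set and b in dep_set:
            adj[a].add(b)  # a set silently deduplicates repeats
            adj[b].add(a)
    return adj

interactions = [
    ("TP53", "MDM2"), ("TP53", "MDM2"),  # duplicated connection
    ("TP53", "TP53"),                    # self-loop
    ("MDM2", "CDK2"), ("CDK2", "XPO1"),  # XPO1 is not a DEP here
]
adj = dep_adjacency(interactions, ["TP53", "MDM2", "CDK2"])
print({p: sorted(nbrs) for p, nbrs in adj.items()})
# {'TP53': ['MDM2'], 'MDM2': ['CDK2', 'TP53'], 'CDK2': ['MDM2']}
```

From such a map, per-vertex degrees, and hence the k-core values and density-based vertex scores mentioned above, follow directly.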
Memory usage was 0.45 GB for MCODER versus 5 GB for Cytoscape MCODE.

Next, for the individual molecular subtypes, we identified selectively elevated proteins under a p value threshold of 0.01: 300 proteins for the differentiated, 284 for the immunoreactive, 547 for the proliferative, and 493 for the mesenchymal subtype of HGS-OvCa, and 236 proteins for the CMS1, 284 for the CMS2, 134 for the CMS3, and 137 for the CMS4 subtype of CRC (see Supplementary Data 1 in the Supplementary Material available online at https://doi.org/10.1155/2017/1016305). For each of the DEP sets, MCODER identified highly interconnected subnetworks of protein-protein interactions. For HGS-OvCa, we detected pharmacologically targetable clusters in three of the four subtypes (Supplementary Data 2). In the immunoreactive subtype, two clusters showed connections with pharmacological agents. The first cluster contained interferon-stimulated gene 15 (ISG15), a biomarker for predicting sensitivity to irinotecan, an anticancer drug and topoisomerase I inhibitor (Figure 2(a)). Previous studies have demonstrated that ISG15 encodes a ubiquitin-like protein conjugated via specific E3 ubiquitin ligases and seems to inhibit the signaling consequences of ubiquitin/26S proteasome pathways [25]. Currently, treatments with irinotecan, in combination with bevacizumab or cisplatin, are in clinical trials for recurrent ovarian cancer [26]. Our findings suggest that selecting patients with immunoreactive features might increase response rates to irinotecan in future trials. The second cluster comprised a chemokine signaling-related protein complex, including STAT3, which can be inhibited by RTA402, acitretin, and atiprimod (Figure 2(b)). Previous studies have indicated that STAT3 inhibitors, in combination with cisplatin, enhance cisplatin sensitivity in cisplatin-resistant ovarian cancer [27, 28]. Thus, a combination of irinotecan and STAT3 inhibitors might be plausible for treating ovarian cancers of the immunoreactive subtype.
In the proliferative subtype, two clusters displayed connections with pharmacological compounds. The CDK2-proteasome-XPO1 cluster was enriched with pharmacological options, including the proteasome inhibitor bortezomib, which is available clinically, and CDK2 and XPO1 inhibitors, which are under active clinical trials for various tumor types (Figure 2(c)) [29, 30]: XPO1 inhibitors have been used to target platinum-resistant ovarian tumors [31] and have been described as potentially inhibiting abnormal NF-κB signaling [32]. The second DEP cluster was the tubulin complex, in which TUBA4A can be targeted by vincristine to blunt mitotic chromosomal separation (Figure 2(d)). Similar to paclitaxel, a microtubule stabilizer and antiproliferative agent [33], vincristine may be a potential agent for the treatment of ovarian cancer, particularly of the proliferative subtype. In the mesenchymal subtype, clusters related to focal adhesion, endocytosis, vascular smooth muscle contraction, and the PI3K-AKT signaling pathway, among others, were identified. Of these, the integrin-collagen complex is a pharmacologically tractable target; integrin signaling inhibitors include ITGA5 inhibitors (JSM 6427, PF-04605412, Volociximab, and Vitaxin), ITGB1 inhibitors (Volociximab, JSM 6427, R411, Vitaxin, and PF-04605412), and an ITGAV inhibitor (L-000845704) (Figure 2(e)). Integrin signaling is involved in the migration, invasion, proliferation, and survival of cancer cells [34]. Recently published studies have demonstrated that integrins participate in maintaining cancer stem cell populations and contribute to cancer progression and drug resistance [35].
Although integrin inhibitors as monotherapy have failed to demonstrate benefits in metastatic ovarian tumors, possibly due to compensation by other integrins [36], simultaneous targeting of integrin-FAK and c-Myc signaling has been found to synergistically disrupt tumor cell proliferation and survival in HGS-OvCa [37], supporting combinatorial targeting of integrin as a valid approach for treating ovarian cancer, particularly of the mesenchymal subtype.

Figure 2: Pharmacologically targetable network clusters overexpressed in molecular subtypes of HGS-OvCa: (a, b) immunoreactive, (c, d) proliferative, and (e) mesenchymal subtype.

For CRC, MCODER identified pharmacologically targetable protein complexes in three of the four CMSs (Supplementary Data 2). In the CMS1 subtype (MSI immune), the proteasome complex (similar to the HGS-OvCa proliferative subtype) and ROCK1 signaling subnetworks were found to be overexpressed (Figures 3(a)-3(b)). Bortezomib treatment has been shown to induce G2-M arrest in colon cancer cells by activating an ataxia-telangiectasia mutated protein-cell cycle checkpoint kinase 1 pathway [38]. Combining platelet-derived growth factor with the ROCK inhibitor Y27632 has been found to decrease the invasive potential of SW620 colon cancer cells [39]. In the CMS2 subtype (canonical), the tubulin complex was found to be elevated, similar to the HGS-OvCa proliferative subtype (Figure 3(c)). This observation suggests that vincristine could have therapeutic effects on CRCs of the CMS2 subtype. Alternatively, or in combination with microtubule inhibitors, Src inhibitors may also be a plausible approach for CMS2 tumors (Figure 3(d)). The CMS4 subtype of CRC exhibits EMT activation and confers the poorest prognosis. Other study groups have formerly referred to this subtype as colon cancer subtype 3 [40] or the stem-like subtype [41].
In CMS4 tumors, we found the total MAPK3 (ERK1) protein complex to be elevated, which is targetable with ERK inhibitor II (Figure 3(e)). Strikingly, in accordance with the HGS-OvCa mesenchymal subtype, CMS4 was also characterized by elevation of the extracellular matrix collagen-integrin complex (Figure 3(f)): collagen in the extracellular matrix has indeed been found to drive EMT in CRC [42]. Thus, the collagen-integrin protein complex may work as a molecular linchpin that, when removed, could diminish the malignant potential of EMT tumors. Accordingly, we suggest that therapeutic antibodies that interrupt integrin signaling could potentially be utilized as therapeutic options, in combination with other chemo- or targeted therapies, for this refractory subtype of colon cancer.

Figure 3: Pharmacologically targetable network clusters overexpressed in molecular subtypes of CRC: (a, b) CMS1, (c, d) CMS2, and (e, f) CMS4.

Finally, we sought to determine whether our findings are reproducible with other network clustering algorithms, including ClusterONE [12] and MCL [6]. Although the sizes of the detected clusters varied, all of the subclusters detected by MCODER were also identified by these algorithms, indicating that our findings are robust across different clustering algorithms.

## 4. Discussion

In this study, we implemented the network clustering algorithm MCODE in the R software environment (naming the implementation MCODER) and demonstrated that the MCODER package saves computational resources and time, making it particularly suited to analyzing multiple omics data sets. Using MCODER, we identified potential candidates for anticancer therapy in molecular subtypes of ovarian and colorectal cancer by detecting protein complexes that were selectively overexpressed therein and that could be targeted with known pharmacological agents.
For HGS-OvCa, we found that irinotecan and STAT3 inhibitors may be candidates for the immunoreactive subtype, along with bortezomib, CDK2, XPO1 inhibitors, and vincristine for the proliferative subtype and integrin signaling inhibitors for the mesenchymal subtype. For CRC, we found bortezomib and ROCK inhibitors to be potential candidates for the CMS1 subtype, along with vincristine and Src inhibitors for the CMS2 subtype and ERK inhibitor II and integrin signaling inhibitors for the CMS4 subtype. Importantly, our analyses revealed that the collagen-integrin protein complex, which is pharmacologically tractable, is commonly overexpressed in EMT subtypes of both ovarian and colorectal cancers. Further studies are needed to determine whether pharmacological inhibition of collagen-integrin signaling blunts tumor growth in an in vivo model of EMT cancer. --- *Source: 1016305-2017-06-13.xml*
2017
# Antimicrobial Resistance Pattern and Their Beta-Lactamase Encoding Genes among Pseudomonas aeruginosa Strains Isolated from Cancer Patients

**Authors:** Mai M. Zafer; Mohamed H. Al-Agamy; Hadir A. El-Mahallawy; Magdy A. Amin; Mohammed Seif El-Din Ashour

**Journal:** BioMed Research International (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101635

---

## Abstract

This study was designed to investigate the prevalence of metallo-β-lactamases (MBL) and extended-spectrum β-lactamases (ESBL) in P. aeruginosa isolates collected from two different hospitals in Cairo, Egypt. Antibiotic susceptibility testing and phenotypic screening for ESBLs and MBLs were performed on 122 P. aeruginosa isolates collected in the period from January 2011 to March 2012. MICs were determined. ESBL and MBL genes were sought by PCR. The resistance rate to imipenem was 39.34%. The resistance rates for P. aeruginosa to cefuroxime, cefoperazone, ceftazidime, aztreonam, and piperacillin/tazobactam were 87.7%, 80.3%, 60.6%, 45.1%, and 25.4%, respectively. Of the 122 P. aeruginosa isolates, 27% and 7.4% were MBL and ESBL producers, respectively. bla VIM-2-, bla OXA-10-, bla VEB-1-, bla NDM-, and bla IMP-1-like genes were found in 58.3%, 41.7%, 10.4%, 4.2%, and 2.1% of imipenem-resistant isolates, respectively. GIM-, SPM-, SIM-, and OXA-2-like genes were not detected in this study. The OXA-10-like gene was concomitant with VIM-2 and/or VEB. Twelve isolates harbored both OXA-10 and VIM-2; two isolates carried both OXA-10 and VEB. Only one strain contained OXA-10, VIM-2, and VEB. In conclusion, bla VIM-2- and bla OXA-10-like genes were the most prevalent genes in P. aeruginosa in Egypt. To our knowledge, this is the first report of bla VIM-2, bla IMP-1, bla NDM, and bla OXA-10 in P. aeruginosa in Egypt.

---

## Body

## 1.
Introduction

Pseudomonas aeruginosa is widely known as an opportunistic organism, frequently involved in infections of immunosuppressed patients, and it also causes outbreaks of hospital-acquired infections [1] associated with a high mortality rate [2]. The latter is attributable, in part, to the organism's intrinsically high resistance to many antimicrobials and to the development of increased, particularly multidrug, resistance in healthcare settings [3], both of which complicate antipseudomonal chemotherapy. Carbapenems have been the drugs of choice for the treatment of infections caused by penicillin- or cephalosporin-resistant Gram-negative bacilli [4]. However, carbapenem resistance has been observed frequently in the nonfermenting bacilli Pseudomonas aeruginosa and Acinetobacter spp. Resistance to carbapenems is due to decreased outer membrane permeability, increased efflux, alteration of penicillin-binding proteins, and carbapenem-hydrolyzing enzymes (carbapenemases). In the last decade, several class A, B, and D β-lactamases have been detected in P. aeruginosa [5]. The carbapenemases found are mostly metallo-β-lactamases (MBL), including the IMP, VIM, SPM, SIM, GIM, AIM, DIM, and NDM enzymes, but serine carbapenemases have also been recorded, including KPC and GES variants [6].

The OXA-ESBLs are mutants of OXA-2 and OXA-10 and belong to class D, whereas the other ESBLs belong to class A. The VEB and PER types were found to be the most common (or least rare) ESBLs in P. aeruginosa in several countries, in contrast to the dominance of the CTX-M, SHV, and TEM ESBLs in Enterobacteriaceae [7]. Detection of MBL- and ESBL-producing Gram-negative bacilli, especially P. aeruginosa, is crucial for the optimal treatment of patients, particularly critically ill and hospitalized patients, and for controlling the spread of resistance. The aim of the present study was phenotypic and genotypic screening for MBL- and ESBL-producing strains among P.
aeruginosa isolated from clinical specimens of cancer patients recovered from two hospitals in Cairo, Egypt.

## 2. Materials and Methods

### 2.1. Bacterial Strains

One hundred twenty-two nonduplicate, nonconsecutive P. aeruginosa isolates were obtained from clinical specimens submitted for bacteriological testing from hospitalized inpatients admitted to Kasr El Aini Hospital and the National Cancer Institute, Cairo University, Egypt, in the period from January 2011 to March 2012. Kasr El Aini School of Medicine and the National Cancer Institute are tertiary hospitals belonging to Cairo University, Egypt. The study was approved by the Ethics Committee of Cairo University, and informed consent was obtained from all patients receiving treatment and participating in the study. With regard to specimen site, P. aeruginosa was isolated from wound swabs (n = 44), blood (n = 29), urine (n = 22), sputum (n = 11), cerebrospinal fluid (CSF) (n = 2), genital sites (n = 2), catheter tips (n = 2), central venous catheters (n = 4), ear swabs (n = 2), pleural tissue specimens (n = 2), a corneal graft (n = 1), and a breast abscess (n = 1).

### 2.2. Bacterial Identification

Identification of P. aeruginosa was done on the basis of Gram staining, colony morphology on MacConkey agar, motility, pigment production, oxidase reaction, growth at 42°C, and the biochemical tests included in the API 20NE identification kit (bioMérieux, Marcy l'Étoile, France). The Vitek 2 system (Vitek 2 software, version R02.03; Advanced Expert System [AES] software, version R02.00N; bioMérieux, Marcy l'Étoile, France) was used with the ID-GNB card for identification of Gram-negative bacilli. The identified strains were stored in glycerol broth cultures at −70°C.

### 2.3.
Antimicrobial Susceptibility Testing

Susceptibility of the isolates to the following antibacterial agents was tested by the Kirby-Bauer disc diffusion method [8] using disks (Oxoid Ltd., Basingstoke, Hants, England) on Mueller-Hinton agar and interpreted as recommended by Clinical and Laboratory Standards Institute (CLSI) guidelines [9]: amikacin (AK, 30 μg), aztreonam (ATM, 30 μg), cefepime (FEP, 30 μg), cefoperazone (CFP, 30 μg), cefotaxime (CTX, 30 μg), ceftazidime (CAZ, 30 μg), ceftriaxone (CRO, 30 μg), cefuroxime (CXM, 30 μg), ciprofloxacin (CIP, 5 μg), imipenem (IPM, 10 μg), meropenem (MEM, 10 μg), piperacillin/tazobactam (TPZ, 100/10 μg), polymyxin B (PB, 300 units), and tobramycin (TOB, 10 μg).

### 2.4. MIC Determination for MBL-Producing P. aeruginosa

The MICs of 9 antibiotics (cefepime, piperacillin/tazobactam, ceftazidime/clavulanic acid, ceftazidime, ciprofloxacin, amikacin, gentamicin, imipenem, and cefotaxime) were determined for the 33 P. aeruginosa isolates that phenotypically produced MBL, using Etest (AB Biodisk, Solna, Sweden) as described by the manufacturer. Results were interpreted using CLSI criteria for susceptibility testing [9]. P. aeruginosa ATCC 27853 was used as the reference strain.

### 2.5. Phenotypic Detection of ESBL

A combined double disc synergy test was performed with discs containing ceftazidime (30 μg) alone and in the presence of clavulanate (10 μg). In order to inhibit cephalosporinase overproduction, double disc synergy tests were also carried out with the addition of 400 μg of boronic acid [10]. An increase in the ceftazidime inhibition zone of ≥5 mm in the presence of clavulanic acid, compared with ceftazidime alone, was taken to indicate an ESBL producer.

### 2.6. Phenotypic Detection of MBL

Four microliters of 0.5 M EDTA (Sigma Chemicals, St. Louis, MO) were added to imipenem and ceftazidime disks to obtain a desired concentration of 750 μg per disk.
The EDTA-impregnated antibiotic disks were dried immediately in an incubator and stored at −20°C in air-tight vials without desiccant until used. An overnight broth culture of the test strain, adjusted to the 0.5 McFarland standard, was inoculated on a plate of Mueller-Hinton agar. One 10 μg imipenem disk and one 30 μg ceftazidime disk were placed on the agar plate. One each of the EDTA-impregnated imipenem and ceftazidime disks were also placed on the same plate. The plate was incubated at 37°C for 16 to 18 h. An increase in zone size of ≥7 mm around the imipenem-EDTA or ceftazidime-EDTA disk, compared to the imipenem or ceftazidime disk without EDTA, was recorded as indicating an MBL-producing strain [11].

### 2.7. Preparation of DNA Template for PCR

DNA templates were prepared according to a previously described method [12]. A 300 μL aliquot of an overnight culture of the test isolate in tryptone soy broth (Difco, Detroit, MI, USA) was centrifuged. The bacterial pellet was resuspended to the initial volume in HPLC-grade water. The DNA template was prepared by boiling the bacterial suspension for 10 min and was used directly in the PCR assay.

### 2.8. Detection of ESBL and MBL Genes

Genes for ESBLs (OXA-10-like, OXA-2-like, and VEB) and MBLs (VIM-1, VIM-2, IMP-1, IMP-2, SIM, GIM, SPM, and NDM) were sought by PCR for all isolates using the primers listed in Table 1, according to previous protocols [7, 13–16]. Negative and positive controls were included in all PCR experiments. Five μL of the reaction mix containing PCR product was analysed by electrophoresis in 0.8% (w/v) agarose (Fermentas, Lithuania).

Table 1: Primers used for detection of MBL, OXA-10, and VEB genes.
| Primer | Sequence | Reference | Expected PCR product |
|---|---|---|---|
| bla IMP-1 | TGAGCAAGTTATCTGTATTCTTAGTTGCTTGGTTTTGATG | [14] | 740 bp |
| bla IMP-2 | GGCAGTCGCCCTAAAACAAATAGTTACTTGGCTGTGATGG | [14] | 737 bp |
| bla VIM-1 | TTATGGAGCAGCAACGATGTCAAAAGTCCCGCTCCAACGA | [14] | 920 bp |
| bla VIM-2 | AAAGTTATGCCGCACTCACCTGCAACTTCATGTTATGCCG | [14] | 865 bp |
| bla NDM | CACCTCATGTTTGAATTCGCCCTCTGTCACATCGAAATCGC | [16] | 984 bp |
| bla OXA-10 | TATCGCGTGTCTTTCGAGTATTAGCCACCAATGATGCCC | [7] | 760 bp |
| bla VEB-1 | CGACTTCCATTTCCCGATGCGGACTCTGCAACAAATACGC | [7] | 642 bp |
| bla OXA-2 | GCCAAAGGCACGATAGTTGTGCGTCCGAGTTGACTGCCGG | [13] | 700 bp |
| bla GIM | TCGACACACCTTGGTCTGAAAACTTCCAACTTTGCCATGC | [15] | 477 bp |
| bla SPM | AAAATCTGGGTACGCAAACGACATTATCCGCTGGAACAGG | [15] | 271 bp |
| bla SIM | TACAAGGGATTCGGCATCGTAATGGCCTGTCCCATGTG | [15] | 570 bp |

## 3. Results

The antimicrobial susceptibility testing was done by the disc diffusion method on 122 clinical isolates of P. aeruginosa that were collected from Kasr El Aini Hospital and the National Cancer Institute, Cairo University, Egypt, in the period from January 2011 to March 2012. The resistance rates are shown in Table 2. Forty-eight (39.34%) of the 122 P. aeruginosa isolates were resistant to imipenem. Eight (6.5%) of the 122 isolates showed intermediate resistance to imipenem. Fifty-six (46%) of the 122 isolates were resistant to meropenem. Only two isolates (1.64%) showed intermediate resistance to meropenem. The resistance rates for β-lactam antibiotics including cefuroxime, cefoperazone, ceftazidime, aztreonam, and piperacillin/tazobactam were 87.7%, 80.3%, 60.6%, 45.1%, and 25.4%, respectively. The resistance rates for non-β-lactam antibiotics including gentamicin, ciprofloxacin, and amikacin were 50%, 43.4%, and 32.8%, respectively. Only 3 (2.5%) of the 122 P.
aeruginosa isolates were resistant to polymyxin B. The antimicrobial resistance rates were higher for imipenem-resistant than for imipenem-susceptible P. aeruginosa isolates (Table 2). Non-β-lactams showed higher activity than β-lactams against imipenem-resistant P. aeruginosa.

Table 2: Resistance rates for clinical P. aeruginosa isolates, as number (%) of resistant isolates.

| Antibiotic | Total isolates (n = 122) | Imipenem susceptible (n = 66) | Imipenem intermediate (n = 8) | Imipenem resistant (n = 48) |
|---|---|---|---|---|
| **β-lactams** | | | | |
| Cefuroxime | 107 (87.7%) | 51 (77.2%) | 8 (100%) | 48 (100%) |
| Cefoperazone | 98 (80.3%) | 43 (65.2%) | 8 (100%) | 47 (97.9%) |
| Ceftazidime | 74 (60.6%) | 23 (34.8%) | 8 (100%) | 43 (89.5%) |
| Meropenem | 56 (45.9%) | 3 (4.5%) | 7 (87.5%) | 46 (95.8%) |
| Aztreonam | 55 (45.1%) | 28 (42.4%) | 3 (37.5%) | 24 (50%) |
| Imipenem | 48 (39.3%) | 66 (100%) | 8 (100%) | 48 (100%) |
| Piperacillin/tazobactam | 31 (25.4%) | 4 (6.1%) | 7 (87.5%) | 20 (41.6%) |
| **Non-β-lactams** | | | | |
| Gentamicin | 61 (50%) | 13 (19.7%) | 5 (62.5%) | 43 (89.5%) |
| Ciprofloxacin | 53 (43.4%) | 17 (25.8%) | 5 (62.5%) | 31 (64.5%) |
| Amikacin | 40 (32.8%) | 9 (13.6%) | 7 (87.5%) | 24 (50%) |
| Polymyxin B | 3 (2.4%) | 0 (0.0%) | 1 (12.5%) | 2 (4.2%) |

Use of the combined disk method (imipenem and ceftazidime disks, with and without EDTA) for phenotypic detection of MBL production identified 33 of the 122 (27%) P. aeruginosa isolates as MBL producers.

The combined double disc synergy test was applied to detect ESBL in the 122 P. aeruginosa isolates, using ceftazidime alone or with clavulanic acid. Of these, only 9 (7.4%) isolates were positive for ESBL production. All ESBL-producing P. aeruginosa were resistant to ceftazidime. Five of the 33 MBL-producing isolates produced ESBL and MBL simultaneously.

The MICs for the MBL-producing P. aeruginosa isolates appear in Table 3, which shows that the strains were highly resistant to imipenem, cefotaxime, gentamicin, and ciprofloxacin, with MICs above the breakpoints recommended by CLSI.
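The interpretive cutoffs used in this study (Sections 2.4–2.6) reduce to simple numeric rules. The following is a hedged Python sketch, not the authors' code: the breakpoints are those listed in the Table 3 header (µg/mL, piperacillin/tazobactam read against the piperacillin component), and the function and drug names are ours for illustration.

```python
# Resistance breakpoints (µg/mL), following the Table 3 header of this study.
R_BREAKPOINT = {
    "cefepime": 32, "piperacillin/tazobactam": 128, "ceftazidime": 32,
    "ciprofloxacin": 4, "amikacin": 64, "gentamicin": 4,
    "imipenem": 16, "cefotaxime": 32,
}

def is_resistant(drug, mic):
    """An MIC at or above the resistance breakpoint is read as resistant."""
    return mic >= R_BREAKPOINT[drug]

def esbl_by_zones(zone_caz, zone_caz_clav):
    """Combined disc test (Section 2.5): a >=5 mm larger ceftazidime zone
    with clavulanate suggests an ESBL producer."""
    return zone_caz_clav - zone_caz >= 5

def mbl_by_zones(zone_drug, zone_drug_edta):
    """EDTA disk test (Section 2.6): a >=7 mm larger zone with EDTA
    suggests an MBL producer."""
    return zone_drug_edta - zone_drug >= 7

def esbl_by_mic(mic_caz, mic_caz_clav):
    """A fall of more than 3 doubling dilutions (>8-fold) in the
    ceftazidime MIC with clavulanic acid also points to ESBL."""
    return mic_caz / mic_caz_clav > 8

print(is_resistant("imipenem", 32))  # True; e.g. every isolate in Table 3
```

For example, an isolate with a ceftazidime MIC of 256 that falls to 16 with clavulanate (a 16-fold drop) would be flagged by `esbl_by_mic`, matching the ≥3-dilution criterion described in the text.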
The effect of clavulanic acid on susceptibility was evident: some isolates showed a decrease in MIC of more than 3 doubling dilutions, indicating the presence of ESBL. On the other hand, according to the breakpoints recommended by CLSI, more than half of the isolates were sensitive to piperacillin/tazobactam, ceftazidime, cefepime, and ceftazidime/clavulanic acid.

Table 3: MICs of antibiotics for MBL-producing P. aeruginosa isolates (μg/mL). PM: cefepime; PTc: piperacillin/tazobactam; TZ: ceftazidime; TZL: ceftazidime/clavulanic acid; CI: ciprofloxacin; AK: amikacin; GM: gentamicin; IP: imipenem; CT: cefotaxime. The first row gives the resistance breakpoints.

| Isolate | PM | PTc | TZ | TZL | CI | AK | GM | IP | CT |
|---|---|---|---|---|---|---|---|---|---|
| Breakpoint | ≥32 | ≥128/4 | ≥128/2 | ≥32 | ≥4 | ≥64 | ≥4 | ≥16 | ≥32 |
| 1 | ≥256 | 16 | ≥256 | ≥256 | 2 | 12 | 3 | ≥32 | ≥256 |
| 2 | 12 | 64 | 8 | 3 | ≥32 | 16 | ≥256 | ≥32 | ≥256 |
| 3 | 8 | 64 | 64 | 24 | ≥32 | ≥256 | 12 | ≥32 | ≥256 |
| 4 | 4 | 3 | 6 | 2 | ≥32 | ≥256 | ≥256 | ≥32 | 32 |
| 5 | 4 | 3 | 6 | 1.5 | ≥32 | ≥256 | ≥256 | ≥32 | 32 |
| 6 | 64 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | ≥256 | ≥32 | ≥256 |
| 7 | 24 | ≥256 | ≥256 | 192 | ≥32 | ≥256 | 12 | ≥32 | ≥256 |
| 8 | 6 | 96 | 96 | 24 | 0.094 | 4 | 2 | ≥32 | ≥256 |
| 9 | ≥256 | 3 | ≥256 | ≥256 | ≥32 | 32 | 12 | ≥32 | ≥256 |
| 10 | ≥256 | 2 | 4 | 1.5 | ≥32 | 2 | 2 | ≥32 | 16 |
| 11 | ≥256 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | 32 | ≥32 | ≥256 |
| 12 | 6 | 24 | 8 | 16 | ≥32 | ≥256 | 1.5 | ≥32 | ≥256 |
| 13 | ≥256 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | ≥256 | ≥32 | ≥256 |
| 14 | ≥256 | 24 | ≥256 | ≥256 | 1 | 6 | 2 | ≥32 | ≥256 |
| 15 | 16 | 24 | 16 | 4 | ≥32 | 12 | ≥256 | ≥32 | ≥256 |
| 16 | ≥256 | ≥256 | 48 | 8 | ≥32 | 96 | ≥256 | ≥32 | ≥256 |
| 17 | 6 | 12 | 12 | 2 | ≥32 | 6 | ≥256 | ≥32 | ≥256 |
| 18 | ≥256 | 4 | ≥256 | ≥256 | ≥32 | 96 | 32 | ≥32 | ≥256 |
| 19 | 64 | ≥256 | ≥256 | 96 | 0.094 | 96 | 32 | ≥32 | ≥256 |
| 20 | 8 | 32 | 12 | 2 | ≥32 | 16 | ≥256 | ≥32 | ≥256 |
| 21 | 16 | 16 | 24 | 4 | ≥32 | 48 | ≥256 | ≥32 | ≥256 |
| 22 | 256 | ≥256 | ≥256 | ≥256 | ≥32 | 128 | 32 | ≥32 | ≥256 |
| 23 | ≥256 | ≥256 | ≥56 | 256 | ≥32 | ≥256 | 32 | ≥32 | ≥256 |
| 24 | 8 | 12 | ≥256 | 4 | 0.064 | 8 | ≥256 | ≥32 | ≥256 |
| 25 | ≥256 | ≥256 | ≥256 | 128 | ≥32 | 48 | 32 | ≥32 | ≥256 |
| 26 | ≥256 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | ≥256 | ≥32 | ≥256 |
| 27 | 16 | ≥256 | ≥256 | 32 | 0.064 | 8 | 32 | ≥32 | ≥256 |
| 28 | 64 | ≥256 | ≥256 | ≥256 | 0.064 | 32 | 32 | ≥32 | ≥256 |
| 29 | 2 | 4 | 12 | 2 | 0.125 | 4 | 2 | ≥32 | ≥256 |
| 30 | 2 | 4 | 8 | 1.5 | 0.125 | 4 | 2 | ≥32 | ≥256 |
| 31 | 8 | 128 | 6 | 24 | 0.094 | 6 | 2 | ≥32 | ≥256 |
| 32 | 12 | 32 | 2 | 8 | 0.064 | 6 | 2 | ≥32 | ≥256 |
| 33 | 6 | ≥256 | 8 | 3 | ≥32 | 128 | 2 | ≥32 | ≥256 |

PCR experiments revealed amplification of an 865 bp fragment
corresponding to the bla VIM-2-like gene in 28 of 48 (58.3%) imipenem-resistant isolates, a 760 bp fragment corresponding to the bla OXA-10-like gene in 20 (41.7%) of imipenem-resistant isolates, and a 642 bp fragment corresponding to the bla VEB gene in 5 (10.4%); two isolates (4.2%) showed a 984 bp fragment corresponding to the bla NDM gene, and only one isolate (2.1%) showed a 740 bp fragment corresponding to bla IMP-1. In this work the MBL alleles bla VIM-1, bla IMP-2, bla GIM, bla SIM, and bla SPM were not detected; the OXA-2-like gene was also not detected.

The OXA-10-like gene was concomitant with VIM-2 and/or VEB. Twelve isolates harbored both OXA-10 and VIM-2, whereas two isolates carried both OXA-10 and VEB. Only one MBL-producing strain contained OXA-10, VIM-2, and VEB (Table 4).

Table 4: Relation between the genes detected and the phenotypic screening results for MBL- and ESBL-producing P. aeruginosa.

| | VIM-2 (n = 28) | OXA-10 (n = 20) | VEB (n = 5) | NDM (n = 2) | IMP-1 (n = 1) |
|---|---|---|---|---|---|
| MBL (n = 33) Pos. | 21 | 14 | 4 | 0 | 1 |
| MBL Neg. | 7 | 6 | 1 | 2 | 0 |
| ESBL (n = 9) Pos. | 4 | 4 | 1 | 2 | 0 |
| ESBL Neg. | 24 | 16 | 4 | 0 | 1 |
| VIM-2 Pos. | 28 | 12 | 2 | 0 | 1 |
| VIM-2 Neg. | 0 | 8 | 3 | 2 | 0 |
| OXA-10 Pos. | 12 | 20 | 2 | 1 | 0 |
| OXA-10 Neg. | 16 | 0 | 3 | 1 | 1 |
| VEB Pos. | 2 | 2 | 5 | 0 | 0 |
| VEB Neg. | 26 | 18 | 0 | 2 | 1 |
| NDM Pos. | 0 | 1 | 0 | 2 | 0 |
| NDM Neg. | 28 | 19 | 5 | 0 | 1 |
| IMP-1 Pos. | 1 | 0 | 0 | 0 | 1 |
| IMP-1 Neg. | 27 | 20 | 5 | 2 | 0 |

## 4. Discussion

Carbapenems are among the best choices for the treatment of infections caused by multidrug-resistant Gram-negative rods. In recent years, Egypt has been considered among the countries that report high rates of antimicrobial resistance [17]. In the present study, there were high levels of resistance to all commercially available antimicrobial agents among P. aeruginosa isolated from Kasr El Aini Hospital and the National Cancer Institute, Cairo University, Egypt, with 39.3% of isolates resistant to imipenem; this rate of carbapenem resistance reflects a threat limiting the treatment options in our hospitals.
This can be explained in part by the increased consumption of antimicrobial agents in the last decade, which exerts selective antibiotic pressure on P. aeruginosa, under which the bacteria acquire resistance mechanisms. A similarly high rate of resistance has been reported in many developing countries worldwide [18]. In Egypt, Ashour and El-Sharif [19] concluded that Acinetobacter and Pseudomonas species exhibited the highest resistance levels to imipenem (37.03%) among Gram-negative organisms. Mahmoud et al. [17] also showed that 33.3% of P. aeruginosa strains were resistant to imipenem.

In the Middle East, the occurrence of imipenem-resistant P. aeruginosa is recognized as alarming. In Saudi Arabia, the resistance rate of P. aeruginosa to imipenem increased to 38.57% in 2011 [20]. Among 33 European countries participating in the European Antimicrobial Resistance Surveillance System in 2007, six countries reported carbapenem resistance rates of >25% among P. aeruginosa isolates; the highest rate was reported from Greece (51%) [21].

The clinically important MBL families are located on horizontally transferable gene cassettes and can spread among Gram-negative bacteria. Although we have not studied this horizontal transfer in the current study, it has been well demonstrated by several previous reports from other groups. Different families of these enzymes have been reported from several geographical regions so far. The most commonly reported families are IMP (active on imipenem, first isolated in Japan), VIM (Verona integron-encoded metallo-β-lactamase, first isolated in Italy), GIM (German imipenemase), SPM (Sao Paulo metallo-β-lactamase, first isolated in Brazil), and SIM (Seoul imipenemase, first isolated in Korea). IMP- and VIM-producing Pseudomonas strains have been reported worldwide, in different geographical areas [22]. In the current study 27% of 122 total P.
aeruginosa isolates were positive for MBL production based on phenotypic screening. This was lower than the 32.3% prevalence of MBL producers reported in another Egyptian study [23], but agreed with an Indian study in which 28.57% of P. aeruginosa were found to produce MBL [24]. In the present study, VIM-2 was the most frequently detected of the MBL genes investigated, found in 58.3% of imipenem-resistant P. aeruginosa.

This finding is supported by previous studies demonstrating that VIM-2 is the dominant MBL implicated in imipenem-resistant P. aeruginosa and confers the greatest clinical threat [25]. Worldwide, VIM-2 is the dominant MBL gene associated with nosocomial outbreaks due to MBL-producing P. aeruginosa [26]. In our study, of the 33 (27%) phenotypic MBL producers, 26 (78.8%) were positive for the genes detected by PCR, while 15 (31.3%) of the 48 imipenem-resistant isolates were positive for the genes investigated by PCR yet were phenotypically MBL negative. This indicates that there are other mechanisms of carbapenem resistance, such as class A carbapenemases including KPC and GES variants, and that MBLs were not the sole mechanism of carbapenem resistance in the present study. Imipenem-resistant strains with no phenotypic or genotypic sign of MBL production may possess other enzymes mediating carbapenem resistance, such as AmpC β-lactamase, and/or other mechanisms such as reduced membrane permeability and efflux.

Class A ESBLs are typically identified in P. aeruginosa isolates showing resistance to extended-spectrum cephalosporins (ESCs) [27]. Classical ESBLs have evolved from restricted-spectrum class A TEM and SHV β-lactamases, although a variety of non-TEM and non-SHV class A ESBLs have been described, such as CTX-M, PER, VEB, GES, and BEL [5]; class D ESBLs derived from narrow-spectrum OXA β-lactamases are also well known [28].
The VEB, OXA, and PER types are the most common ESBL structural genes reported in P. aeruginosa [7].

In the present study, ESBL production was detected in only 9 (7.4%) of the 122 P. aeruginosa isolates. This was much lower than in a 2009 Egyptian study by Gharib et al., in which 24.5% were ESBL producers [29]. In the present study a high prevalence of bla OXA-10 was detected in imipenem-resistant P. aeruginosa isolates: twenty of 48 (41.7%) imipenem-resistant isolates were OXA-10 positive, followed by VEB-1, which was detected in 5 (10.4%). In a recent study in Iran, the most prevalent ESBL genes included OXA-10 (70%) and PER-1 (50%), followed by VEB-1 (31.3%) [30]. That study agreed with ours on the prevalence of OXA-10 in ESBL-producing P. aeruginosa. However, VEB-type ESBLs were the predominant ESBLs reported in P. aeruginosa in a number of studies where ESBLs were commonly seen [7, 31]. Phenotypic methods for the detection of ESBL are not reliable in P. aeruginosa strains, and PCR is advisable, since only 9 isolates were ESBL producers on phenotypic screening while 20 isolates were OXA-10 positive and 5 were VEB positive by PCR.

KPC has rarely been detected in P. aeruginosa; however, the number of reports of KPC-producing P. aeruginosa is increasing [32]. In this study, we did not test for KPC. KPC and other rare carbapenemases may be present in the ESBL-producing strains, because most of them had reduced susceptibility to imipenem (MIC 2–8 mg/L).

In the current study, 97.5% of the total P. aeruginosa isolates were sensitive to polymyxin B. This supports the evidence that polymyxin B has increasingly become the last viable therapeutic option for multidrug-resistant (MDR) Pseudomonas infections. This result agrees with a 2012 study by Tawfik et al., which found that all isolates were sensitive to polymyxin [33].

In conclusion, the rates of MBL-producing P. aeruginosa and ESBL-producing P.
aeruginosa isolates from Kasr El Aini Hospital and the National Cancer Institute, Cairo University, in Egypt were notable, and, unfortunately, only a limited number of antimicrobial drugs remain active against them. Therefore, MBL and ESBL screening should be implemented in routine laboratory practice. VIM-2 is the most prevalent MBL in P. aeruginosa in Egypt, and OXA-10 is the most prevalent ESBL. MBL is much more prevalent than ESBL as a mechanism of resistance in P. aeruginosa. Molecular techniques are more reliable than phenotypic screening for detecting ESBL production in P. aeruginosa strains. Further studies are needed to specify the most important resistance genes among P. aeruginosa in Egypt. --- *Source: 101635-2014-02-23.xml*
101635-2014-02-23_101635-2014-02-23.md
29,931
Antimicrobial Resistance Pattern and Their Beta-Lactamase Encoding Genes amongPseudomonas aeruginosa Strains Isolated from Cancer Patients
Mai M. Zafer; Mohamed H. Al-Agamy; Hadir A. El-Mahallawy; Magdy A. Amin; Mohammed Seif El-Din Ashour
BioMed Research International (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/101635
101635-2014-02-23.xml
--- ## Abstract This study was designed to investigate the prevalence of metallo-β-lactamases (MBL) and extended-spectrum β-lactamases (ESBL) in P. aeruginosa isolates collected from two different hospitals in Cairo, Egypt. Antibiotic susceptibility testing and phenotypic screening for ESBLs and MBLs were performed on 122 P. aeruginosa isolates collected in the period from January 2011 to March 2012. MICs were determined. ESBLs and MBLs genes were sought by PCR. The resistant rate to imipenem was 39.34%. The resistance rates for P. aeruginosa to cefuroxime, cefoperazone, ceftazidime, aztreonam, and piperacillin/tazobactam were 87.7%, 80.3%, 60.6%, 45.1%, and 25.4%, respectively. Out of 122 P. aeruginosa, 27% and 7.4% were MBL and ESBL, respectively. The prevalence of bla VIM-2, bla OXA-10-, bla VEB-1, bla NDM-, and bla IMP-1-like genes were found in 58.3%, 41.7%, 10.4%, 4.2%, and 2.1%, respectively. GIM-, SPM-, SIM-, and OXA-2-like genes were not detected in this study. OXA-10-like gene was concomitant with VIM-2 and/or VEB. Twelve isolates harbored both OXA-10 and VIM-2; two isolates carried both OXA-10 and VEB. Only one strain contained OXA-10, VIM-2, and VEB. In conclusion, bla VIM-2- and bla OXA-10-like genes were the most prevalent genes in P. aeruginosa in Egypt. To our knowledge, this is the first report of bla VIM-2, bla IMP-1, bla NDM, and bla OXA-10 in P. aeruginosa in Egypt. --- ## Body ## 1. Introduction Pseudomonas aeruginosa is widely known as an opportunistic organism, frequently involved in infections of immune-suppressed patients, and also causes outbreaks of hospital-acquired infections [1] that cause infections with a high mortality rate [2]. This latter is, in part, attributable to the organism’s intrinsically high resistance to many antimicrobials and the development of increased, particularly multidrug, resistance in healthcare settings [3], both of which complicate antipseudomonal chemotherapy. 
The carbapenems have been the drugs of choice for the treatment of infections caused by penicillin- or cephalosporin-resistant Gram-negative bacilli [4]. However, carbapenem resistance has been observed frequently in the nonfermenting bacilli Pseudomonas aeruginosa and Acinetobacter spp. Resistance to carbapenems is due to decreased outer membrane permeability, increased efflux systems, alteration of penicillin-binding proteins, and carbapenem-hydrolyzing enzymes (carbapenemases). In the last decade, several class A, B, and D β-lactamases have been detected in P. aeruginosa [5]. The carbapenemases found are mostly metallo-β-lactamases (MBL), including IMP, VIM, SPM, SIM, GIM, AIM, DIM, or NDM enzymes, but serine carbapenemases have also been recorded, including KPC and GES variants [6]. The OXA-ESBLs are mutants of OXA-2 and OXA-10, belonging to class D, whereas the other ESBLs belong to class A. VEB and PER types were found to be the most common (or least rare) ESBLs in P. aeruginosa in several countries, in contrast to the dominance of the CTX-M, SHV, and TEM ESBLs in Enterobacteriaceae [7]. Detection of MBL- and ESBL-producing Gram-negative bacilli, especially P. aeruginosa, is crucial for the optimal treatment of patients, particularly critically ill and hospitalized patients, and for controlling the spread of resistance. The aim of the present study was phenotypic and genotypic screening for MBL- and ESBL-producing strains among P. aeruginosa isolated from clinical specimens of cancer patients recovered from two hospitals in Cairo, Egypt. ## 2. Materials and Methods ### 2.1. Bacterial Strains One hundred twenty-two nonduplicate, nonconsecutive P. aeruginosa isolates were obtained from clinical specimens submitted for bacteriological testing from hospitalized in-patients admitted to Kasr El Aini Hospital and National Cancer Institute, Cairo University, Egypt, in the period from January 2011 to March 2012. 
Kasr El Aini School of Medicine and National Cancer Institute are tertiary hospitals belonging to Cairo University, Egypt. The study was approved by the Ethics Committee of Cairo University, and informed consent was obtained from all patients receiving treatment and participating in the study. With regard to specimen site, P. aeruginosa was isolated from wound swabs (n = 44), blood (n = 29), urine (n = 22), sputum (n = 11), cerebrospinal fluid (CSF) (n = 2), genital sites (n = 2), catheter tips (n = 2), central venous catheters (n = 4), ear swabs (n = 2), pleural tissue specimens (n = 2), a corneal graft (n = 1), and a breast abscess (n = 1). ### 2.2. Bacterial Identification Identification of P. aeruginosa was done on the basis of Gram staining, colony morphology on MacConkey's agar, motility, pigment production, oxidase reaction, growth at 42°C, and the biochemical tests included in the API 20NE identification kit (bioMérieux, Marcy l'Étoile, France). The Vitek 2 system (Vitek 2 software, version R02.03; Advanced Expert System [AES] software, version R02.00N; bioMérieux, Marcy l'Étoile, France) was used with the ID-GNB card for identification of Gram-negative bacilli. The identified strains were stored in glycerol broth cultures at −70°C. ### 2.3. Antimicrobial Susceptibility Testing Susceptibility of the isolates to the following antibacterial agents was tested by the Kirby-Bauer disc diffusion method [8] using disks (Oxoid Ltd., Basingstoke, Hants, England) on Mueller-Hinton agar and interpreted as recommended by Clinical and Laboratory Standards Institute (CLSI) guidelines [9]: amikacin (AK, 30 μg), aztreonam (ATM, 30 μg), cefepime (FEP, 30 μg), cefoperazone (CFP, 30 μg), cefotaxime (CTX, 30 μg), ceftazidime (CAZ, 30 μg), ceftriaxone (CRO, 30 μg), cefuroxime (CXM, 30 μg), ciprofloxacin (CIP, 5 μg), imipenem (IPM, 10 μg), meropenem (MEM, 10 μg), piperacillin/tazobactam (TPZ, 100/10 μg), polymyxin B (PB, 300 units), and tobramycin (TOB, 10 μg). ### 2.4. 
MIC Determination for MBL-Producing P. aeruginosa The MICs of 9 antibiotics (cefepime, piperacillin/tazobactam, ceftazidime/clavulanic acid, ceftazidime, ciprofloxacin, amikacin, gentamicin, imipenem, and cefotaxime) were determined for 33 P. aeruginosa isolates that phenotypically produced MBL using Etest (AB Biodisk, Solna, Sweden) as described by the manufacturer. Results were interpreted using CLSI criteria for susceptibility testing [9]. P. aeruginosa ATCC 27853 was used as the reference strain. ### 2.5. Phenotypic Detection of ESBL A combined double disc synergy test was performed with discs containing ceftazidime (30 μg) alone and in the presence of clavulanate (10 μg). In order to inhibit cephalosporinase overproduction, double disc synergy tests were also carried out with the addition of 400 μg of boronic acid [10]. An increase in the ceftazidime inhibition zone of ≥5 mm in the presence of clavulanic acid, compared with ceftazidime tested alone, was considered indicative of an ESBL producer. ### 2.6. Phenotypic Detection of MBL Four μL of 0.5 M EDTA (Sigma Chemicals, St. Louis, MO) was poured on imipenem and ceftazidime disks to obtain the desired concentration of 750 μg per disk. The EDTA-impregnated antibiotic disks were dried immediately in an incubator and stored at −20°C in air-tight vials without desiccant until used. A 0.5 McFarland equivalent overnight broth culture of the test strain was inoculated on a plate of Mueller-Hinton agar. One 10 μg imipenem disk and one 30 μg ceftazidime disk were placed on the agar plate. One each of the EDTA-impregnated imipenem and ceftazidime disks were also placed on the same agar plate. The plate was incubated at 37°C for 16 to 18 h. An increase in zone size of ≥7 mm around the imipenem-EDTA or ceftazidime-EDTA disk, compared to the imipenem or ceftazidime disk without EDTA, was recorded as indicating an MBL-producing strain [11]. ### 2.7. Preparation of DNA Template for PCR DNA templates were prepared according to a previously described method [12]. 
A 300 μL aliquot of an overnight culture of the test isolate in tryptone soy broth (Difco, Detroit, MI, USA) was centrifuged. The bacterial pellet was resuspended to the initial volume with HPLC-grade water. The DNA template was prepared by boiling the bacterial suspension for 10 min and was used directly in the PCR assay. ### 2.8. Detection of ESBL and MBL Genes Genes for ESBLs (OXA-10-like, OXA-2-like, and VEB) and MBLs (VIM-1, VIM-2, IMP-1, IMP-2, SIM, GIM, SPM, and NDM) were sought by PCR for all isolates using the primers listed in Table 1 according to previous protocols [7, 13–16]. Negative and positive controls were included in all PCR experiments. Five μL of reaction mix containing PCR product was analysed by electrophoresis in 0.8% (w/v) agarose (Fermentas, Lithuania).

Table 1. Primers used for detection of MBL, OXA-10, and VEB.

| Primers | Sequence | Reference | Expected PCR product |
|---|---|---|---|
| bla IMP-1 | TGAGCAAGTTATCTGTATTCTTAGTTGCTTGGTTTTGATG | [14] | 740 bp |
| bla IMP-2 | GGCAGTCGCCCTAAAACAAATAGTTACTTGGCTGTGATGG | [14] | 737 bp |
| bla VIM-1 | TTATGGAGCAGCAACGATGTCAAAAGTCCCGCTCCAACGA | [14] | 920 bp |
| bla VIM-2 | AAAGTTATGCCGCACTCACCTGCAACTTCATGTTATGCCG | [14] | 865 bp |
| bla NDM | CACCTCATGTTTGAATTCGCCCTCTGTCACATCGAAATCGC | [16] | 984 bp |
| bla OXA-10 | TATCGCGTGTCTTTCGAGTATTAGCCACCAATGATGCCC | [7] | 760 bp |
| bla VEB-1 | CGACTTCCATTTCCCGATGCGGACTCTGCAACAAATACGC | [7] | 642 bp |
| bla OXA-2 | GCCAAAGGCACGATAGTTGTGCGTCCGAGTTGACTGCCGG | [13] | 700 bp |
| bla GIM | TCGACACACCTTGGTCTGAAAACTTCCAACTTTGCCATGC | [15] | 477 bp |
| bla SPM | AAAATCTGGGTACGCAAACGACATTATCCGCTGGAACAGG | [15] | 271 bp |
| bla SIM | TACAAGGGATTCGGCATCGTAATGGCCTGTCCCATGTG | [15] | 570 bp |
## 3. Results The antimicrobial susceptibility testing was done by the disc diffusion method on the 122 clinical isolates of P. aeruginosa collected from Kasr El Aini Hospital and National Cancer Institute, Cairo University, Egypt, in the period from January 2011 to March 2012. The resistance rates to the antibiotics are shown in Table 2. Forty-eight (39.34%) out of 122 P. 
aeruginosa isolates were resistant to imipenem. Eight (6.5%) of the 122 isolates showed intermediate resistance to imipenem. Fifty-six (46%) of the 122 isolates were resistant to meropenem, and only two (1.64%) showed intermediate resistance to meropenem. The resistance rates for β-lactam antibiotics, including cefuroxime, cefoperazone, ceftazidime, aztreonam, and piperacillin/tazobactam, were 87.7%, 80.3%, 60.6%, 45.1%, and 25.4%, respectively. The resistance rates for non-β-lactam antibiotics, including gentamicin, ciprofloxacin, and amikacin, were 50%, 43.4%, and 32.8%, respectively. Only 3 (2.5%) of the 122 P. aeruginosa isolates were resistant to polymyxin B. The antimicrobial resistance rates were higher for imipenem-resistant than for imipenem-susceptible P. aeruginosa isolates (Table 2). Non-β-lactams showed higher activity against imipenem-resistant P. aeruginosa than β-lactams.

Table 2. Resistance rates for clinical P. aeruginosa isolates (number (%) of resistant isolates).

| Antibiotic | Total isolates (n = 122) | Imipenem susceptible (n = 66) | Imipenem intermediate (n = 8) | Imipenem resistant (n = 48) |
|---|---|---|---|---|
| **β-lactams** | | | | |
| Cefuroxime | 107 (87.7%) | 51 (77.2%) | 8 (100%) | 48 (100%) |
| Cefoperazone | 98 (80.3%) | 43 (65.2%) | 8 (100%) | 47 (97.9%) |
| Ceftazidime | 74 (60.6%) | 23 (34.8%) | 8 (100%) | 43 (89.5%) |
| Meropenem | 56 (45.9%) | 3 (4.5%) | 7 (87.5%) | 46 (95.8%) |
| Aztreonam | 55 (45.1%) | 28 (42.4%) | 3 (37.5%) | 24 (50%) |
| Imipenem | 48 (39.3%) | 66 (100%) | 8 (100%) | 48 (100%) |
| Piperacillin/tazobactam | 31 (25.4%) | 4 (6.1%) | 7 (87.5%) | 20 (41.6%) |
| **Non-β-lactams** | | | | |
| Gentamicin | 61 (50%) | 13 (19.7%) | 5 (62.5%) | 43 (89.5%) |
| Ciprofloxacin | 53 (43.4%) | 17 (25.8%) | 5 (62.5%) | 31 (64.5%) |
| Amikacin | 40 (32.8%) | 9 (13.6%) | 7 (87.5%) | 24 (50%) |
| Polymyxin B | 3 (2.4%) | 0 (0.0%) | 1 (12.5%) | 2 (4.2%) |

Use of the combined disk method (imipenem and ceftazidime versus imipenem-EDTA and ceftazidime-EDTA) for the phenotypic detection of MBL production allowed the detection of 33 of 122 (27%) P. 
aeruginosa isolates. The combined double disc synergy test was applied to detect ESBL in the 122 P. aeruginosa isolates using ceftazidime alone or with clavulanic acid. Of these, only 9 (7.4%) isolates were positive for ESBL production. All ESBL-producing P. aeruginosa were resistant to ceftazidime. Five of the 33 MBL-producing isolates were found to produce ESBL and MBL simultaneously. The MICs for the MBL-producing P. aeruginosa isolates appear in Table 3, which shows that the strains were highly resistant to imipenem, cefotaxime, gentamicin, and ciprofloxacin, their MICs lying above the breakpoints recommended by CLSI. An effect of clavulanic acid on susceptibility was observed: some isolates showed a decrease in MIC of more than three doubling dilutions, indicating the presence of ESBL. On the other hand, according to the breakpoints recommended by CLSI, more than half of the isolates were sensitive to piperacillin/tazobactam, ceftazidime, cefepime, and ceftazidime/clavulanic acid.

Table 3. MICs of antibiotics for MBL-producing P. aeruginosa isolates. 
| Isolate no. | PM | PTc | TZ | TZL | CI | AK | GM | IP | CT |
|---|---|---|---|---|---|---|---|---|---|
| Breakpoint | ≥32 | ≥128/4 | ≥128/2 | ≥32 | ≥4 | ≥64 | ≥4 | ≥16 | ≥32 |
| 1 | ≥256 | 16 | ≥256 | ≥256 | 2 | 12 | 3 | ≥32 | ≥256 |
| 2 | 12 | 64 | 8 | 3 | ≥32 | 16 | ≥256 | ≥32 | ≥256 |
| 3 | 8 | 64 | 64 | 24 | ≥32 | ≥256 | 12 | ≥32 | ≥256 |
| 4 | 4 | 3 | 6 | 2 | ≥32 | ≥256 | ≥256 | ≥32 | 32 |
| 5 | 4 | 3 | 6 | 1.5 | ≥32 | ≥256 | ≥256 | ≥32 | 32 |
| 6 | 64 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | ≥256 | ≥32 | ≥256 |
| 7 | 24 | ≥256 | ≥256 | 192 | ≥32 | ≥256 | 12 | ≥32 | ≥256 |
| 8 | 6 | 96 | 96 | 24 | 0.094 | 4 | 2 | ≥32 | ≥256 |
| 9 | ≥256 | 3 | ≥256 | ≥256 | ≥32 | 32 | 12 | ≥32 | ≥256 |
| 10 | ≥256 | 2 | 4 | 1.5 | ≥32 | 2 | 2 | ≥32 | 16 |
| 11 | ≥256 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | 32 | ≥32 | ≥256 |
| 12 | 6 | 24 | 8 | 16 | ≥32 | ≥256 | 1.5 | ≥32 | ≥256 |
| 13 | ≥256 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | ≥256 | ≥32 | ≥256 |
| 14 | ≥256 | 24 | ≥256 | ≥256 | 1 | 6 | 2 | ≥32 | ≥256 |
| 15 | 16 | 24 | 16 | 4 | ≥32 | 12 | ≥256 | ≥32 | ≥256 |
| 16 | ≥256 | ≥256 | 48 | 8 | ≥32 | 96 | ≥256 | ≥32 | ≥256 |
| 17 | 6 | 12 | 12 | 2 | ≥32 | 6 | ≥256 | ≥32 | ≥256 |
| 18 | ≥256 | 4 | ≥256 | ≥256 | ≥32 | 96 | 32 | ≥32 | ≥256 |
| 19 | 64 | ≥256 | ≥256 | 96 | 0.094 | 96 | 32 | ≥32 | ≥256 |
| 20 | 8 | 32 | 12 | 2 | ≥32 | 16 | ≥256 | ≥32 | ≥256 |
| 21 | 16 | 16 | 24 | 4 | ≥32 | 48 | ≥256 | ≥32 | ≥256 |
| 22 | 256 | ≥256 | ≥256 | ≥256 | ≥32 | 128 | 32 | ≥32 | ≥256 |
| 23 | ≥256 | ≥256 | ≥56 | 256 | ≥32 | ≥256 | 32 | ≥32 | ≥256 |
| 24 | 8 | 12 | ≥256 | 4 | 0.064 | 8 | ≥256 | ≥32 | ≥256 |
| 25 | ≥256 | ≥256 | ≥256 | 128 | ≥32 | 48 | 32 | ≥32 | ≥256 |
| 26 | ≥256 | ≥256 | ≥256 | ≥256 | ≥32 | ≥256 | ≥256 | ≥32 | ≥256 |
| 27 | 16 | ≥256 | ≥256 | 32 | 0.064 | 8 | 32 | ≥32 | ≥256 |
| 28 | 64 | ≥256 | ≥256 | ≥256 | 0.064 | 32 | 32 | ≥32 | ≥256 |
| 29 | 2 | 4 | 12 | 2 | 0.125 | 4 | 2 | ≥32 | ≥256 |
| 30 | 2 | 4 | 8 | 1.5 | 0.125 | 4 | 2 | ≥32 | ≥256 |
| 31 | 8 | 128 | 6 | 24 | 0.094 | 6 | 2 | ≥32 | ≥256 |
| 32 | 12 | 32 | 2 | 8 | 0.064 | 6 | 2 | ≥32 | ≥256 |
| 33 | 6 | ≥256 | 8 | 3 | ≥32 | 128 | 2 | ≥32 | ≥256 |

MICs in μg/mL. PM: cefepime; PTc: piperacillin/tazobactam; TZ: ceftazidime; TZL: ceftazidime/clavulanic acid; CI: ciprofloxacin; AK: amikacin; GM: gentamicin; IP: imipenem; CT: cefotaxime.

PCR experiments revealed amplification of an 865 bp fragment corresponding to the bla VIM-2-like gene in 28 of 48 (58.3%) imipenem-resistant isolates, a 760 bp fragment corresponding to the bla OXA-10-like gene in 20 (41.7%) of the imipenem-resistant isolates, and a 642 bp fragment corresponding to the bla VEB gene in 5 (10.4%) of the isolates; two isolates (4.2%) showed a 984 bp fragment corresponding to the bla NDM gene, and only one isolate (2.1%) showed a 740 bp fragment corresponding to bla IMP-1. 
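The CLSI comparison described above reduces to a simple rule: an isolate counts as resistant to a drug when its Etest MIC meets or exceeds the value in the breakpoint row of Table 3. A minimal sketch in Python (the function and dictionary names are ours, not the paper's; for combination drugs only the first component of the breakpoint is kept, an assumption made for illustration):

```python
# Breakpoints (ug/mL) from the breakpoint row of Table 3; for combination
# drugs (e.g., PTc >=128/4) only the first component is kept here.
BREAKPOINTS = {"PM": 32, "PTc": 128, "TZ": 128, "TZL": 32,
               "CI": 4, "AK": 64, "GM": 4, "IP": 16, "CT": 32}

def parse_mic(value):
    """Convert an Etest reading such as '≥256' or '1.5' to a number."""
    return float(str(value).replace("≥", "").replace(">=", ""))

def is_resistant(drug, mic):
    """An isolate is resistant when its MIC meets or exceeds the breakpoint."""
    return parse_mic(mic) >= BREAKPOINTS[drug]

# Isolate 1 of Table 3: imipenem MIC ≥32 (resistant), ciprofloxacin MIC 2 (not).
print(is_resistant("IP", "≥32"))  # True
print(is_resistant("CI", "2"))    # False
```

Applying this rule down the IP column reproduces the observation that every isolate in Table 3 is imipenem-resistant (all IP MICs are ≥32, above the ≥16 breakpoint).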
In this work the MBL alleles bla VIM-1, bla IMP-2, bla GIM, bla SIM, and bla SPM were not detected; the OXA-2-like gene was likewise not detected. The OXA-10-like gene was concomitant with VIM-2 and/or VEB: twelve isolates harbored both OXA-10 and VIM-2, two isolates carried both OXA-10 and VEB, and only one MBL-producing strain contained OXA-10, VIM-2, and VEB (Table 4).

Table 4. Differential relation between the genes investigated and the phenotypic methods for MBL- and ESBL-producing P. aeruginosa.

| Phenotype/gene | Result | VIM-2 (n = 28) | OXA-10 (n = 20) | VEB (n = 5) | NDM (n = 2) | IMP-1 (n = 1) |
|---|---|---|---|---|---|---|
| MBL (n = 33) | Pos. | 21 | 14 | 4 | 0 | 1 |
| | Neg. | 7 | 6 | 1 | 2 | 0 |
| ESBL (n = 9) | Pos. | 4 | 4 | 1 | 2 | 0 |
| | Neg. | 24 | 16 | 4 | 0 | 1 |
| VIM-2 | Pos. | 28 | 12 | 2 | 0 | 1 |
| | Neg. | 0 | 8 | 3 | 2 | 0 |
| OXA-10 | Pos. | 12 | 20 | 2 | 1 | 0 |
| | Neg. | 16 | 0 | 3 | 1 | 1 |
| VEB | Pos. | 2 | 2 | 5 | 0 | 0 |
| | Neg. | 26 | 18 | 0 | 2 | 1 |
| NDM | Pos. | 0 | 1 | 0 | 2 | 0 |
| | Neg. | 28 | 19 | 5 | 0 | 1 |
| IMP-1 | Pos. | 1 | 0 | 0 | 0 | 1 |
| | Neg. | 27 | 20 | 5 | 2 | 0 |

## 4. Discussion Carbapenems are among the best choices for the treatment of infections caused by multidrug-resistant Gram-negative rods. In recent years, Egypt has been considered among the countries that report high rates of antimicrobial resistance [17]. In the present study, there were high levels of resistance to all commercially available antimicrobial agents among P. aeruginosa isolated from Kasr El Aini Hospital and National Cancer Institute, Cairo University, Egypt; the rate of imipenem-resistant isolates was 39.3%, a level of carbapenem resistance that threatens to limit the treatment options in our hospitals. This can be explained in part by the increased consumption of antimicrobial agents over the last decade, which exerts selective antibiotic pressure on P. aeruginosa and leads the bacteria to modify their resistance mechanisms. A similarly high rate of resistance has been reported in many developing countries worldwide [18]. 
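The percentages quoted throughout the Results and Discussion are plain proportions of the 122 isolates (or of the 48 imipenem-resistant ones). A quick arithmetic check of a few of the figures above (an illustrative snippet, not part of the paper's methods):

```python
def rate(count, total):
    """Prevalence or resistance rate as a percentage, one decimal place."""
    return round(100.0 * count / total, 1)

print(rate(48, 122))  # 39.3: imipenem resistance among all 122 isolates
print(rate(33, 122))  # 27.0: phenotypic MBL producers
print(rate(28, 48))   # 58.3: bla VIM-2 among the 48 imipenem-resistant isolates
print(rate(20, 48))   # 41.7: bla OXA-10 among the 48 imipenem-resistant isolates
```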
In Egypt, Ashour and El-Sharif [19] concluded that Acinetobacter and Pseudomonas species exhibited the highest resistance levels to imipenem (37.03%) among Gram-negative organisms. Mahmoud et al. [17] likewise showed that 33.3% of P. aeruginosa strains were resistant to imipenem. In the Middle East the occurrence of imipenem-resistant P. aeruginosa is recognized as alarming. In Saudi Arabia, the resistance rate of P. aeruginosa to imipenem increased to 38.57% in 2011 [20]. Among the 33 European countries participating in the European Antimicrobial Resistance Surveillance System in 2007, six countries reported carbapenem resistance rates of >25% among P. aeruginosa isolates; the highest rate was reported from Greece (51%) [21]. The clinically important MBL families are located in horizontally transferable gene cassettes and can spread among Gram-negative bacteria. Although we have not studied this horizontal transfer in the current study, it has been well demonstrated by several previous reports from other groups. Different families of these enzymes have been reported from several geographical regions so far. The most commonly reported families are IMP (for active on imipenem, first isolated in Japan), VIM (for Verona integron-encoded metallo-β-lactamase, first isolated in Italy), GIM (for German imipenemase), SPM (for São Paulo metallo-β-lactamase, first isolated in Brazil), and SIM (for Seoul imipenemase, first isolated in Korea). IMP- and VIM-producing Pseudomonas strains have been reported worldwide, in different geographical areas [22]. In the current study 27% of the 122 P. aeruginosa isolates were positive for MBL production based on the results of phenotypic screening. This was lower than the 32.3% prevalence of MBL producers reported in another Egyptian study [23]. However, our finding agrees with an Indian study in which 28.57% of P. aeruginosa were found to produce MBL [24]. 
In the present study, VIM-2 was the most frequently detected of the MBL genes investigated, found in 58.3% of imipenem-resistant P. aeruginosa. This finding is supported by the results of previous studies demonstrating VIM-2 as the dominant MBL implicated in imipenem-resistant P. aeruginosa and the one conferring the greatest clinical threat [25]. Worldwide, VIM-2 is the dominant MBL gene associated with nosocomial outbreaks due to MBL-producing P. aeruginosa [26]. In our study, of the 33 (27%) phenotypic MBL producers, 26 (78.8%) were positive for the genes detected by PCR, while 15 (31.3%) of the 48 imipenem-resistant isolates were positive for the genes investigated by PCR yet were phenotypically negative for MBL production. This indicates that other mechanisms of carbapenem resistance, such as the class A carbapenemases including KPC and GES variants, were present, and that MBLs were not the sole mechanism of carbapenem resistance in the present study. Imipenem-resistant strains with no phenotypic or genotypic sign of MBL production may possess other enzymes mediating carbapenem resistance, such as AmpC β-lactamase, and/or other mechanisms such as reduced membrane permeability and efflux. Class A ESBLs are typically identified in P. aeruginosa isolates showing resistance to extended-spectrum cephalosporins (ESCs) [27]. Classical ESBLs have evolved from restricted-spectrum class A TEM and SHV β-lactamases, although a variety of non-TEM and non-SHV class A ESBLs have been described, such as CTX-M, PER, VEB, GES, and BEL [5]; class D ESBLs derived from narrow-spectrum OXA β-lactamases are also well known [28]. The structural genes of the VEB, OXA, and PER types are the most common ESBLs reported in P. aeruginosa [7]. In the present study, ESBL production was detected in only 9 (7.4%) of the 122 P. aeruginosa isolates. This is much lower than the 24.5% of ESBL producers found in a study done by Gharib et al. in 2009 in Egypt [29]. 
In the present study a high prevalence of bla OXA-10 was detected in imipenem-resistant P. aeruginosa isolates; twenty of the 48 (41.7%) imipenem-resistant isolates were OXA-10 positive, followed by VEB-1, which was detected in 5 (10.4%). In a recent study in Iran, the most prevalent ESBL genes were OXA-10 (70%) and PER-1 (50%), followed by VEB-1 (31.3%) [30]. That study agrees with ours on the prevalence of OXA-10 in ESBL-producing P. aeruginosa. However, VEB-type ESBLs were the predominant ESBLs reported in P. aeruginosa in a number of studies where ESBLs were commonly seen [7, 31]. Phenotypic methods for the detection of ESBL are not reliable in P. aeruginosa strains, and PCR is advisable, since only 9 isolates were ESBL producers on phenotypic screening while 20 isolates were OXA-10 positive and 5 were VEB positive by PCR. KPC has rarely been detected in P. aeruginosa; however, the number of reports of KPC-producing P. aeruginosa is increasing [32]. In this study, we did not test for KPC. KPC and other rare carbapenemases may be found in ESBL-producing strains, because most of them had reduced susceptibility to imipenem (MIC 2–8 mg/L). In the current study 97.5% of the total P. aeruginosa isolates were sensitive to polymyxin B. This supports the evidence that polymyxin B has increasingly become the last viable therapeutic option for multidrug-resistant (MDR) Pseudomonas infections. This result agrees with a study done by Tawfik et al. in 2012, in which all isolates were sensitive to polymyxin [33]. In conclusion, the rates of MBL- and ESBL-producing P. aeruginosa isolates from Kasr El Aini Hospital and National Cancer Institute, Cairo University, Egypt, were notable and, unfortunately, only a limited number of antimicrobial drugs remain active against them. Therefore, MBL and ESBL screening should be implemented in routine laboratory practice. VIM-2 is the most prevalent MBL gene in P. aeruginosa in Egypt. 
OXA-10 is the most prevalent ESBL gene in P. aeruginosa in Egypt. MBL is much more prevalent than ESBL as a mechanism of resistance in P. aeruginosa. Molecular techniques are more reliable than phenotypic screening for detecting ESBL production in P. aeruginosa strains. Further studies are needed to specify the most important resistance genes among P. aeruginosa in Egypt. --- *Source: 101635-2014-02-23.xml*
2014
# Chattering-Free Sliding Mode Control for Networked Control System with Time Delay and Packet Dropout Based on a New Compensation Strategy **Authors:** Yu Zhang; Shousheng Xie; Ledi Zhang; Litong Ren; Bin Zhou; Hao Wang **Journal:** Complexity (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1016381 --- ## Abstract This paper addresses the sliding mode control problem for a class of networked control systems with long time delay and consecutive packet dropout. A new modeling method is proposed, through which time delay and packet dropout are modeled in a unified model described by one Markov chain. To avoid the chattering problem of classic reaching law, a new chattering-free reaching law is proposed. Then with a focus on the problem that controller-actuator channel network condition cannot be foreseen by the controller, a new compensation strategy is proposed, which can effectively compensate for the effect of time delay and packet dropout in controller-actuator channel. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed approach. --- ## Body ## 1. Introduction In recent years, a lot of interest and concern have been focused on networked control systems (NCSs) where communication channels interconnect sensors with controllers and/or controllers with actuators [1]. Compared with conventional point-to-point interconnected feedback control systems, NCSs have advantages in many aspects: higher system operability, more efficient resource utilization and sharing, lower cost, and reduced weight, as well as simplicity for system installation and maintenance [2–4]. 
Despite these distinctive features of NCSs, the insertion of a communication network into the feedback control loops also brings about new challenges such as bandwidth-limited channels, network-induced delay, packet dropout, and packet disordering [5]. Time delay and packet dropout are two of the most common and crucial problems of NCSs: they can directly degrade the performance of the closed-loop system or even lead to instability [6]. More specifically, long time delay refers to a time delay larger than the sampling period, which, compared with short or "one-step" time delay, can cause more serious problems such as packet disordering. Therefore, for discrete-time systems, it is particularly valuable to carry out research on long time delay systems. Numerous effective methods have been reported in the literature to tackle the time delay and packet dropout problems arising in NCSs (see, for example, [7–10]). The solution consists of two main parts: one is to establish an appropriate model that describes time delay and packet dropout, and the other is to design a controller that guarantees system stability despite their presence. This paper is organized around the same structure. Firstly, since time delay and packet dropout usually exist simultaneously in a real network, it is important to establish a model that handles them in a common framework. Various modeling methods have been proposed, including the time delay system model [11], the switched system model [12–14], the asynchronous dynamical system model [15, 16], and the stochastic system model [6, 8, 9, 17–25]. Among them, the stochastic system model has become popular in recent years, and one of the most used approaches is to adopt a Markov discrete-time linear system to describe random time delays or packet dropouts [9, 20–22, 24, 25]. 
It is found that the transition from one time delay/packet dropout state to another usually occurs with a certain probability, and thus a Markov chain with a transition probability matrix can be used to describe such behavior. Therefore, long time delay and consecutive packet dropout can be modeled as a Markov chain. In [24], both sensor-to-controller and controller-to-actuator delays are considered and described by Markov chains, and the resulting closed-loop systems are written as jump linear systems with two modes. Similarly, in [25], the characteristics of network communication delays and packet dropouts in both sensor-to-controller and controller-to-actuator channels are modeled, and their history behavior is described by three Markov chains. However, most of the existing Markov chain based modeling methods share a common problem: the time delay and packet dropout are usually considered on the controller side, which means they do not reflect the condition of the packets actually received by the actuator. This can lead to the use of outdated signals and to packet disordering. To avoid this problem, a buffer or zero-order-holder (ZOH) was introduced in [26], but unfortunately the underlying model in [26] remained unchanged. Similarly, a ZOH is also used in [27] and a corresponding modeling method is proposed, but [27] considers only the problem of packet dropout. Motivated by the above discussions, this paper focuses on proposing a new model that considers both long time delay and consecutive packet dropout in a unified framework, and the control signals actually received on the actuator side are taken into account to avoid problems like packet disordering and the use of obsolete data. Once the modeling method is determined, the next key problem is to design an appropriate controller that can guarantee system stability and performance of the NCSs despite the existence of time delay and packet dropout. 
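The Markov-chain idea sketched above can be made concrete: each network state (a delay of 0, 1, or 2 sampling periods, or a dropout) jumps to the next state according to a row of a transition probability matrix. The matrix values and state encoding below are illustrative assumptions, not taken from any of the cited papers:

```python
import random

# States: 0 = no delay, 1 = one-step delay, 2 = two-step delay, 3 = dropout.
# Row i of P holds the probabilities of moving from state i to states 0..3.
P = [[0.70, 0.20, 0.05, 0.05],
     [0.50, 0.30, 0.10, 0.10],
     [0.40, 0.30, 0.20, 0.10],
     [0.60, 0.20, 0.10, 0.10]]

def next_state(current, rng=random.random):
    """Sample the successor of `current` from row P[current]."""
    u, acc = rng(), 0.0
    for state, prob in enumerate(P[current]):
        acc += prob
        if u < acc:
            return state
    return len(P[current]) - 1  # guard against floating-point round-off

# Simulate a short trajectory of network conditions.
random.seed(0)
trajectory = [0]
for _ in range(10):
    trajectory.append(next_state(trajectory[-1]))
print(trajectory)
```

Long time delays and consecutive packet dropouts then correspond to runs of nonzero states in the sampled trajectory.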
Among various methods, the sliding mode control (SMC) method stands out for its unique feature that the system states can be driven onto a specially designed sliding surface and then become totally insensitive to uncertainties [28–30]. However, the well-known chattering problem limits its wide application, especially in discrete systems. With a focus on this problem, this paper proposes a chattering-free sliding mode reaching law for a class of multiple-input NCSs; to the best of the authors’ knowledge, the use of such a chattering-free reaching law in multiple-input systems has rarely been reported. Moreover, another critical problem that is usually ignored is that the network condition of the controller-actuator channel cannot be foreseen by the controller when calculating the control signal, which makes the calculated control signal unsuitable when time delay or packet dropout occurs. Some researchers have noticed this problem, but they tried to solve it only by using the time delay and packet dropout information of the previous sampling period as compensation [31]. Therefore, another novelty of this paper is a new compensation strategy: all possible time delay and packet dropout states in the controller-actuator channel are considered, a multiple-model based controller is designed which outputs a sequence of control signals, and a compensator on the actuator side decides which signal of the sequence to use according to the time stamp information. 
In this way, the time delay and packet dropout can be properly compensated for even when the network condition of the controller-actuator channel cannot be foreseen. The main contributions of this paper can be summarized as follows: (i) a new modeling method is proposed for NCSs with long time delay and consecutive packet dropout; (ii) a chattering-free sliding mode reaching law is proposed and, for the first time, used for multiple-input NCSs; (iii) a multiple-model based sliding mode controller is designed together with a new compensation strategy which can compensate for time delay and packet dropout without advance knowledge of the controller-actuator channel network condition. The remainder of this paper is organized as follows. Section 2 introduces the newly proposed NCS modeling method. The design of the chattering-free sliding mode controller as well as the new compensation strategy is presented in Section 3. Numerical simulation results are shown in Section 4. Section 5 presents some conclusions. ## 2. A New Modeling Method for NCSs with Time Delay and Packet Dropout ### 2.1. Modeling of Time Delay and Packet Dropout Consider the discrete NCS with time delay and packet dropout described by the following state space model:(1)xk+1=Axk+Buk+dk,where x(k)∈Rn is the system state; u(k)∈Rm is the control input; d(k)∈Rn is an external disturbance; and A and B are matrices with appropriate dimensions. To simplify the analysis, the following assumptions are made for system (1). Assumption 1. System (1) is controllable and all system state variables are observable. Assumption 2. Time delay and packet dropout exist in the controller-actuator channel only, and each data packet is sent with a time stamp. Assumption 3. The long time delay τ is bounded with a known upper bound, denoted τ-; consecutive packet dropout is also bounded and the largest number of consecutive packet dropouts, denoted ρ-, is known. Assumption 4. 
External disturbance d(k) is unknown but bounded and satisfies the sliding mode matching condition; i.e.,(2)dk=Bωk.We now consider long time delay and consecutive packet dropout for system (1). We first establish a time delay model based on a Markov chain; then we treat packet dropout as a special type of time delay and extend the obtained transition probability matrix accordingly. This modeling method takes into account the connection between time delay and packet dropout on the actuator side, which makes the established model better reflect the real conditions in practical use. Inspired by [27], a ZOH is adopted for the NCS; therefore, due to the existence of the ZOH, only the newest received control signal will be used. Here we assume that the time delay τk takes values in the set Ωτ=0,1,2,…,τ-. Zero is included in the value space of τk because, in a normal NCS, some control signals do arrive on time, so it is reasonable to allow zero time delay. First, consider an example in which τk=0. The controller output signal u-(k) arrives at time k, so the actuator input is u(k)=u-(k). Then, due to the existence of the ZOH, there are only two possible conditions for u(k+1). If u-(k+1) also arrives on time, we have u(k+1)=u-(k+1) and τk+1=0. If u-(k+1) does not arrive on time, then u-(k) will still be used as the actuator input and we have u(k+1)=u-(k) and τk+1=1. 
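As a small illustration of the ZOH rule just described, the sketch below computes the equivalent delay seen by the actuator from a hypothetical arrival pattern (the arrival times are made up for illustration): a fresh arrival resets the delay, while a step with no fresher packet ages the held signal by one.

```python
def equivalent_delay(arrivals, horizon):
    """arrivals[j] = time at which the packet sent at time j reaches the
    ZOH (None = lost); returns tau[k] = k minus the send time of the
    newest packet held by the ZOH, i.e. the equivalent delay."""
    tau = []
    newest = None  # send time of the newest packet received so far
    for k in range(horizon):
        for j in range(k + 1):
            if arrivals[j] is not None and arrivals[j] <= k:
                if newest is None or j > newest:
                    newest = j
        tau.append(k - newest)
    return tau

# packet 0 on time, packet 1 delayed one step, packet 2 lost, packet 3 on time
assert equivalent_delay([0, 2, None, 3], 4) == [0, 1, 1, 0]
# consecutive dropouts make the held signal age by one each step
assert equivalent_delay([0, None, None, 3], 4) == [0, 1, 2, 0]
```

Counting transition frequencies of such τ sequences over long experimental runs is one way to estimate the probabilities πij appearing in the transition matrices below.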
In the same way, if we assume the time delay of time instant k is τk=i(i<τ-), there could be only i+2 possible conditions for the next time instant; i.e., if the control signal of k+1 arrives on time, then τk+1=0; if the control signal of k+1 does not arrive but the signal sent out at time k arrives, then we have τk+1=1; if control signal of k+1 or k does not arrive but that of time k-1 arrives, then we have τk+1=2; the same way can explain the conditions all the way to τk+1=i; however, if there is no newly arriving signal at k+1, then, due to the existence of ZOH, the control signal of time k is still used, so the value of time delay becomes τk+1=i+1. In particular, when τk=τ-, which is the upper bound of time delay, there could only be τ-+1 conditions. The existence of ZOH guarantees the use of newest signal, which means, for example, if control signal u-(k+1) arrives at time instance k+1, then all signals sent before time instant k+1 will never be used afterwards. The time delay state transition can be described as(3)πij=Pr⁡τk+1=j∣τk=i∑j=0τ-πij=1,where i∈Ωτ,j∈0,1,…,i+1∩Ωτ, and define Πτ-=πij as system time delay transition possibility matrix, which is given as follows:(4)Πτ-=π00π0100⋯0π10π11π120⋯0⋮⋮⋮⋱⋱⋮πτ--20πτ--21πτ--22⋯πτ--2τ--10πτ--10πτ--11πτ--12⋯πτ--1τ--1πτ--1τ-πτ-0πτ-1πτ-2⋯πτ-τ--1πτ-τ-.Then let us take consecutive packet dropout into consideration. As mentioned above, if there is only bounded time delay, then when τk=τ-, there could be only τ-+1 possible conditions. However, if packet dropout is also considered, the time delay condition of time k+1 is different but can also be included and described in a similar way.Firstly, an example withτmax=2 and ρmax=2 is shown in Figure 1 as follows to illustrate the situation when consecutive packet dropout is included.Figure 1 Signal transmission timing diagram withτmax=2, ρmax=2.It is shown thatu-k is transmitted to the actuator at time instant k without time delay, thus giving τ(k)=0. 
Then at next time instant k+1, as illustrated above, τ(k+1) could only be τ(k+1)=0or1. If τ(k+1)=1, then at time instant k+2, we have τ(k+2)=0, or 1, or 2. If τ(k+2)=2, since time delay is bounded, then, at time instant k+3, the late arriving signal could only be u-k+1 or u-k+2, and if there is still no newly arrived signal, it is certain that packet u-k+1 and the signals sent before k+1, if still not received, are already lost. On this occasion, ZOH outputs control signal u-k as actuator input and the equivalent time delay is τ(k+3)=3. If τ(k+3)=3, since it is sure that u-k+1 is already lost, then, at time instant k+4, the late arriving signal could only be u-k+2 or u-k+3 and if there is still no newly arrived signal, u-k will still be used by the actuator; thus the equivalent time delay is τ(k+4)=4. If τ(k+4)=4, since consecutive packet dropout is also bounded, τ(k+5) could only take values in the set 0,1,2. Extend the above discussion to τmax=τ-,ρmax=ρ- and define the value space of equivalent time delay as Ωτ=0,1,2,...,(τ-+ρ-); then the following equivalent time delay transition is given:(5)πij=Pr⁡τk+1=j∣τk=i∑j=0τ-+ρ-πij=1,where i∈Ωτ,j∈0,1,…,i+1∩Ωτ∪i+1, and define Πτ-,ρ-=πij as system equivalent time delay transition possibility matrix, which is given as follows:(6)Πτ-,ρ-=π00π010…0π10π11π120⋯0⋮⋮⋱⋱⋱⋮⋮πτ-0πτ-1⋯πτ-τ-πτ-τ-+10⋯0πτ-+10πτ-+11⋯πτ-+1τ-0πτ-+1τ-+20⋯0πτ-+20πτ-+21⋯πτ-+2τ-00πτ-+2τ-+30⋯0⋮⋮⋮⋮⋮⋮⋱⋱⋱⋮πτ-+ρ--20πτ-+ρ--21⋯πτ-+ρ--2τ-00⋯0πτ-+ρ--2τ-+ρ--10πτ-+ρ--10πτ-+ρ--11⋯πτ-+ρ--1τ-00⋯00πτ-+ρ--1τ-+ρ-πτ-+ρ-0πτ-+ρ-1⋯πτ-+ρ-τ-00⋯000.As a result, the NCSs with long time delay and consecutive packet dropout can be expressed as(7)xk+1=Axk+Buk-τk+dk,where τ(k) is equivalent time delay.Remark 5. The proposed new modeling method stands out from other traditional stochastic system modeling methods in the following three aspects: (i) It innovatively models long time delay and consecutive packet dropout in a unified model described by only one Markov chain. 
(ii) The time scale adopted in our Markov chain is linear with the physical time; i.e., the state transition in our Markov chain always happens over one physical time instant. (iii) Compared with traditional models, the transition probability matrix of our model is not a full matrix, so less work is required to obtain it. ### 2.2. Elimination of the Time Delay Term First, according to Assumption 4, system (7) can be rewritten as(8)xk+1=Axk+Buk-τk+ωk.The following linear transformation can be adopted to obtain a new expression of system (8) without an explicit time delay term. When τ(k)≥1, by defining(9)x-k=Aτxk+∑i=0τ-1Aτ-i-1Buk-τ+i+ωk,system (8) can be transformed into(10)x-k+1=Ax-k+Buk+ωk.According to Assumption 1, system (10) is also controllable.Proof. The term Bu(k-τ(k))+ω(k) can be decomposed as follows:(11)Buk-τ+ωk=Buk-τ+ωk-A-1Buk-τ+1+ωk+A-1Buk-τ+1+ωk-⋯-A-τ+1Buk-1+ωk+A-τ+1Buk-1+ωk-A-τBuk+ωk+A-τBuk+ωk=A-τBuk+ωk+∑i=0τ-1A-iBuk-τ+i+ωk-∑i=0τ-1A-i-1Buk-τ+1+i+ωk.Substituting (11) into (8), we have(12)xk+1+∑i=0τ-1A-i-1Buk-τ+1+i+ωk=Axk+∑i=0τ-1A-i-1Buk-τ+i+ωk+A-τBuk+ωk.Premultiplying both sides of (12) by Aτ, we have (13)Aτxk+1+∑i=0τ-1Aτ-i-1Buk-τ+1+i+ωk=AAτxk+∑i=0τ-1Aτ-i-1Buk-τ+i+ωk+Buk+ωk.Finally, letting x-(k)=Aτx(k)+∑i=0τ-1Aτ-i-1Bu(k-τ+i)+ω(k), system (10) can be obtained. In particular, when τ(k)=0, it is defined that x-(k)=x(k). 
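As a numerical sanity check of the transformation in Section 2.2, the sketch below (with arbitrary example matrices, a fixed delay, and the disturbance set to zero for simplicity) verifies that the transformed state defined by (9) obeys the delay-free recursion (10):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 3, 1, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
u = [rng.standard_normal((m, 1)) for _ in range(20)]

def xbar(x, k):
    # x̄(k) = A^τ x(k) + Σ_{i=0}^{τ-1} A^(τ-i-1) B u(k-τ+i), cf. (9)
    s = np.linalg.matrix_power(A, tau) @ x
    for i in range(tau):
        s = s + np.linalg.matrix_power(A, tau - i - 1) @ B @ u[k - tau + i]
    return s

x, k = np.zeros((n, 1)), tau
xb = xbar(x, k)
for _ in range(5):
    x = A @ x + B @ u[k - tau]                     # delayed dynamics (8)
    xb_next = xbar(x, k + 1)
    assert np.allclose(xb_next, A @ xb + B @ u[k])  # delay-free form (10)
    xb, k = xb_next, k + 1
```

The check passes exactly (up to floating point) because the index shift in the sum of (9) absorbs the delayed input term, as the proof of (11)–(13) shows.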
## 3. Design of Chattering-Free Sliding Mode Controller ### 3.1. Design of Linear Sliding Surface Since the disturbance satisfies the matching condition, the system can be transformed into a sliding mode regular form when designing the sliding surface, so that the sliding mode dynamics are totally free from the disturbance. To obtain the sliding mode regular form of (10), we can define a nonsingular matrix T∈Rn×n such that(14)TB=0n-m×mBm,where Bm∈Rm×m is nonsingular. 
Choose T=T2T1T, where T1∈Rn×m, T2∈Rn×(n-m) are two subblocks of a unitary matrix resulting from the singular value decomposition of B, i.e.,(15)B=T1T2Σm×m0n-m×mJT,where Σ is a diagonal positive-definite matrix and J is a unitary matrix.Then, through the linear transformation z=Tx-, system (10) can be decomposed into two subsystems:(16a)z1k+1=A^11z1k+A^12z2k,(16b)z2k+1=A^21z1k+A^22z2k+Bmuk+ωk,where z1(k)∈Rn-m, z2(k)∈Rm; A^=TAT-1.It has been proved that (16a) is the sliding mode dynamics of system (10).Choose the following classic linear sliding surface:(17)sk=Czk=C1Izk=C1z1k+z2k.In the sliding mode, we have(18)sk=C1z1k+z2k=0⇒z2k=-C1z1k.Substituting (18) into (16a), we have (19)z1k+1=A^11-A^12C1z1k,which is a classic sliding surface design problem, and the sliding surface parameter C can be obtained through pole placement. ### 3.2. Design of Chattering-Free SMC To suppress chattering, a new chattering-free sliding mode reaching law is proposed, inspired by [32]. However, the reaching law proposed in this paper is for multiple-input systems, which means the sliding surface s(k) is not a scalar but a vector, so the reaching law must be newly defined. The vector form reaching law is given below:(20)sk+1=sk-qTsk-εTsigα⁡sk,where 0<qT<1, 0<εT<1, 0<α<1, and(21)sigα⁡sk≜diag⁡s1kα,s2kα,…,smkα·sgn⁡sk.Therefore, the sliding mode controller can be designed by comparing (16a), (16b), and (20):(22)uk=-Bm-1C1A^11z1k+C1A^12z2k+A^21z1k+A^22z2k-1-qTsk+εTsigα⁡sk+Bmωk.It can be seen that the unknown disturbance term is included in the expression of u(k), so u(k) cannot be used directly as a control signal. 
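The vector reaching law (20)-(21) is straightforward to state in code. In the sketch below, qT, epsT, and alpha are hypothetical parameter values (qT and epsT standing for the products q·T and ε·T); each component of s is driven toward a small neighborhood of the surface without the sharp switching of a pure sign function:

```python
import numpy as np

def sig_alpha(s, alpha):
    # sig^α(s) = diag(|s_1|^α, ..., |s_m|^α) · sgn(s), cf. (21)
    return np.abs(s) ** alpha * np.sign(s)

def reach_step(s, qT, epsT, alpha):
    # s(k+1) = s(k) - qT·s(k) - epsT·sig^α(s(k)), cf. (20)
    return s - qT * s - epsT * sig_alpha(s, alpha)

s = np.array([2.0, -1.5])
for _ in range(60):
    s = reach_step(s, qT=0.3, epsT=0.1, alpha=0.5)
assert np.all(np.abs(s) < 0.05)  # both components end near the surface
```

Because the fractional power softens the correction as |s_i| shrinks, the iterates settle into a small band around zero rather than switching at full amplitude, which is the chattering-suppression effect discussed in Remark 10.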
To tackle this problem, we introduce the following disturbance estimation strategy:(23)ω^k=z2k-A^21z1k-1-A^22z2k-1-Bmuk-1=ωk-1.Substituting (23) into (22) yields the final SMC law:(24)uk=-Bm-1C1A^11z1k+C1A^12z2k+A^21z1k+A^22z2k-1-qTsk+εTsigα⁡sk+Bmω^k.With the SMC law (24), the new sliding mode reaching law considering the disturbance is(25)sk+1=1-qTsk-εTsigα⁡sk+Bmωk-ω^k=1-qTsk-εTsigα⁡sk+Bmωk-ωk-1.Since s(k+1) is a vector of m sliding mode functions, we write (25) in the following componentwise form and prove that each sliding mode state satisfies the reaching condition. The componentwise form of (25) is(26)sik+1=1-qTsik-εTsikαsgn⁡sik+wiωik-ωik-1i=1,2,…,m,where wi is the ith row of Bm, i.e.,(27)Bm=w1⋮wm.Assumption 6. The change rate of the disturbance δi(k)=ωi(k)-ωi(k-1) is limited, and there exists a constant δ∗ such that maxwiδi(k)≤δ∗,(i=1,2,…,m).Theorem 7. With the constant matrix C and the linear sliding surface given by (17), the trajectory of each sliding mode state si(k), i=1,2,…,m, can be driven by controller (24) into the sliding surface neighborhood region Ω within at most mi∗ steps, where(28)Ω=sik∣sik≤η=ξα·max⁡δ∗εT1/α,εT1-qT1/1-αξα=1+αα/1-α-α1/1-αmi∗=mi+1withmi=si20-η2μ2μ=qTη+1-qT.Proof. The proof of the theorem consists of two problems: (i) we need to prove that each sliding mode state will enter the region Ω within mi∗ steps. (ii) we will prove that if sik∈Ω, then sik+1∈Ω. Proof of Problem (i). Define a Lyapunov function Vik=si2k; then it is obtained that(29)ΔVik=si2k+1-si2k=-qTsik+εTsikαsgn⁡sik-wiδik·2sik-qTsik-εTsikαsgn⁡sik+wiδik.Then the following two conditions are discussed. ① If sik>η, there is(30)ΔVik=-qTsik+εTsiαk-wiδik·2sik-qTsik-εTsiαk+wiδik.Since sik>η, it is clear that sik>ξα·δ∗/εT1/α. Thus, we have(31)qTsik+εTsiαk-wiδik≥qTsik+ξααδ∗-wiδik≥qT·η+ξαα-1δ∗.Define μ=qT·η+ξαα-1δ∗; then, since 0<qT<1 and 1<ξαα<2, μ is certainly a positive constant. Similarly, since sik>η, it is clear that sik>η=ξα·εT/1-qT1/1-α. 
Thus, we have(32)1-qTsik>ξα1-α·εT.Therefore,(33)1-qTsik>ξ1-αα·εTsiαk>εTsiαk.By further calculation, we have(34)sik>qTsik+εTsiαk.Then it can be obtained that(35)2sik-qTsik-εTsiαk+wiδik≥qTsik+εTsiαk+wiδik≥qTsik+εTsiαk-wiδik≥μ.Consequently, there is(36)ΔVik=-qTsik+εTsiαk-wiδik·2sik-qTsik-εTsiαk+wiδik≤-μ2<0.② If sik<-η, by a similar proof procedure, it can be shown that the relation ΔVik≤-μ2 still holds. Therefore, when sik∉Ω, ΔVik=si2k+1-si2k≤-μ2<0. Moreover, it can be obtained from (36) that(37)ΔVik=si2k+1-si2k≤-μ2⇔si2k+1≤si2k-μ2≤si2k-1-2μ2≤⋯≤si20-k+1μ2.It means that if(38)si20-k+1μ2=η2,then sik+1∈Ω. However, the solution of (38) may not be an integer, so we denote mi=si20-η2/μ2 as the real number solution of (38). Then we can say that after at most mi∗=mi+1 steps, the ith sliding mode state will enter the sliding surface neighborhood region Ω. Proposition 8. Denote φ=max⁡δ∗/εT1/α,εT/1-qT1/1-α; then δ∗≤εTφα≤1-qTφ. Lemma 9. For functions of the form(39)ησ=1+σσ/1-σ-σ1/1-σ,where 0<σ<1, there holds 1<ησ<2, and for any parameter z∈0,1, the following relation holds [33]:(40)zησ-zσησσ+ησ-1≥0.Proof of Problem (ii). When sik∈Ω, it means -η≤sik≤η. We can define a new parameter -1≤θ≤1 such that sik=θ·η=θ·ξαφ. The following two cases are discussed. ① When 0≤θ≤1, according to (26), we have(41)sik+1=1-qTθ·ξαφ-εTθαξααφα+wiδi.First, we will prove that si(k+1)≤η. According to Proposition 8 and Assumption 6, there is(42)sik+1≤1-qTθ·ξαφ+δ∗1-θαξαα.If θξα≥1, we have(43)sik+1≤1-qTθ·ξαφ≤ξαφ=η.If 0≤θξα<1, we have(44)sik+1≤1-qTφ1+θ·ξα-θαξαα≤1-qTφ≤ξαφ=η.Second, we will prove that si(k+1)≥-η. According to Proposition 8, we have(45)sik+1≥1-qTθ·ξαφ-εTθαξααφα-δ∗≥1-qTφθξα-θαξαα-1.If θξα≥1, since 0<α<1, we have(46)sik+1≥-1-qTφ≥-ξαφ=-η.If 0≤θξα<1, we have(47)sik+1≥1-qTφθξα-θαξαα-1≥-ξα1-qTφ≥-ξαφ=-η.② When -1≤θ≤0, according to (26), we have(48)sik+1=-1-qTθ·ξαφ+εTθξααφα+wiδi.Firstly, we will prove that si(k+1)≤η. 
According to Proposition 8, we have(49)sik+1=-1-qTθ·ξαφ+εTθξααφα+wiδi≤-1-qTθ·ξαφ+1-qTφθξαα+δ∗≤1-qTφ-θ·ξα+θξαα+1.If θξα≥1, since 0<α<1, we have(50)sik+1≤1-qTφ≤ξαφ=η.If 0≤θξα<1, according to Lemma 9, we have(51)sik+1≤1-qTφξα≤φξα=η.Second, we will prove that si(k+1)≥-η. According to Proposition 8, we have(52)sik+1=-1-qTθ·ξαφ+εTθξααφα+wiδi≥-1-qTθ·ξαφ+θξααδ∗-wiδi=-1-qTθ·ξαφ+δ∗θξαα-1.If θξα≥1, we have(53)sik+1≥-1-qTθ·ξαφ≥-ξαφ=-η.If 0≤θξα<1, we have(54)sik+1≥-1-qTφθ·ξα-θξαα+1≥-1-qTφ≥ξαφ=-η.To this end, we can conclude that when -η≤sik≤η, we have -η≤sik+1≤η, which means sik+1∈Ω.Remark 10. For discrete-time systems, the accurate arrival of sliding mode state on the sliding surface is hard to realize, so to totally eliminate chattering is actually not possible in the discrete-time systems. This paper usessigα⁡sk to replace the sign function to avoid the sharp switching of control input to suppress chattering. The closer the sliding mode state to the sliding surface, the lower the chattering that will be obtained. It needs to be pointed out that the saying “chattering-free” here is just a conventional way to describe such chattering suppression methods [32, 34, 35]. ### 3.3. Design of Multiple-Model Based Compensator Traditionally, if time delay information cannot be foreseen by the controller, there can only be two control input compensation strategies, i.e., zero-input or hold-input. For zero-input strategy, it means if the controller outputu-(k) does not arrive on time, then we let control input u(k)=0. However, when time delay happens frequently, this compensation strategy will result in very poor dynamic performance. And for hold-input strategy, it means control signal will remain unchanged unless new control signal arrives, but this may cause serious overshoot when long time delay exists. 
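The two baseline strategies can be written down in a few lines (a hypothetical helper, for contrast with the compensator proposed next):

```python
def baseline_input(strategy, last_u, fresh_u):
    """Actuator input when the fresh control packet may be missing.
    fresh_u is None if no new packet arrived on time."""
    if fresh_u is not None:
        return fresh_u
    if strategy == "zero-input":   # apply zero when the packet is late
        return 0.0
    if strategy == "hold-input":   # keep applying the last control
        return last_u

assert baseline_input("zero-input", 1.5, None) == 0.0
assert baseline_input("hold-input", 1.5, None) == 1.5
assert baseline_input("hold-input", 1.5, 0.7) == 0.7
```

Neither rule uses any information about how old the held signal is, which is exactly the gap the multiple-model compensator addresses.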
With a focus on the above problems, a new compensation strategy is proposed, which also comes from the idea of hold-input strategy but, compared with traditional hold-input strategy, it uses different system models to calculate control input signal for different equivalent time delay conditions. And to overcome the problem that controller cannot foresee the controller-actuator equivalent time delay, all possible models relating to different equivalent time delay τ(k) are established in the controller. Then, at each controlling period, instead of outputting one control signal, the controller will calculate and output a sequence of different control signals U-(k)=[u-(k)τ=0,u-(k)τ=1,…,u-(k)τ=τ-+ρ-] based on different models and the controller output U-(k) will be put in one data packet with a time stamp. Also at each controlling period, the ZOH will output a signal sequence packet U(k) based on the time stamp information, which means U(k)=U-(k) for τ(k)=0, or U(k)=U-(k-1) for τ(k)=1, etc. Finally, the compensator will decide which control signal of U(k) to use according to the time stamp information; i.e., if U(k)=U-(k), it will use the first signal of U(k), and if U(k)=U-(k-1), it will use the second signal of U(k), etc.In this way, even though there is no newly received control signal by the ZOH, the actuator can still get an updated control signal from the compensator and, unlike the traditional hold-input strategy, the updated control input signal has already taken into account the state changes of the former sampling period.Based on the above discussion, the algorithm of the compensator can be described as(55)uk=Ukgτk=U-k-τkgτk,where the compensation signal choosing function g(x) is defined as(56)gx=g0,g1,…,gi,…,gτ-+ρ-T,gi=1,i=x0,i≠x.To show the relationship between actuator input and controller output more clearly, the following illustrations are 
given:(57)uk=U-k·10…0T=u-kτ=0,τ=0U-k-1·010…0T=u-k-1τ=1,τ=1⋮⋮U-k-τk·0…0︸τk10…0T=u-k-τkτ=τk,τ=τk⋮⋮U-k-τ--ρ-·0…01T=u-k-τ--ρ-τ=τ-+ρ-,τ=τ-+ρ-.In conclusion, the steps of designing this chattering-free sliding mode controller are given below.Step 1. Obtain the system’s time delay and packet dropout characteristics through experiments and then get the transition probability matrix (6) based on the proposed modeling method.Step 2. Obtain the system sliding mode regular form (16a) and (16b) through the linear transformation z=Tx-, where the definition of the nonsingular matrix T∈Rn×n is given by (14) and (15).Step 3. Design the classic linear sliding surface (17), whose parameter C=[C1I] can be determined by the pole placement method.Step 4. Design the chattering-free reaching law (20) and choose appropriate reaching law parameters q, ε, and α that satisfy 0<qT<1, 0<εT<1, 0<α<1.Step 5. Design the sliding mode controller (22) according to reaching law (20).Step 6. Design the multiple-model based compensator (57) based on the upper bounds of long time delay and consecutive packet dropout. 
Choose T=T2T1T, where T1∈Rn×m, T2∈Rn×(n-m) are two subblocks of a unitary matrix resulting from the singular value decomposition of B, i.e.,(15)B=T1T2Σm×m0n-m×mJT,where Σ is a diagonal positive-definite matrix and J is a unitary matrix.Then through linear transformationz=Tx-, system (10) can be decomposed into two subsystems:(16a)z1k+1=A^11z1k+A^12z2k,(16b)z2k+1=A^21z1k+A^22z2k+Bmuk+ωk,where z1(k)∈Rn-m, z2(k)∈Rm; A^=TAT-1.It has been proved that (16a) is the sliding mode dynamics of system (10).Choose the following classic linear sliding surface:(17)sk=Czk=C1Izk=C1z1k+z2k.In the sliding mode, we have(18)sk=C1z1k+z2k=0⇒z2k=-C1z1k.Substituting (18) into (16a), we have (19)z1k+1=A^11-A^12C1z1k,which is a classic sliding surface design problem, and the sliding surface parameter C can be obtained through pole placement. ## 3.2. Design of Chattering-Free SMC To suppress chattering, a new chattering-free sliding mode reaching law is proposed, inspired by [32]. However, the reaching law proposed in this paper is for multiple-input systems, which means sliding surface s(k) is not a scalar but a vector, and the reaching law should be newly defined. The vector form reaching law is given bellow:(20)sk+1=sk-qTsk-εTsigα⁡sk,where 0<qT<1, 0<εT<1, 0<α<1 and,(21)sigα⁡sk≜diag⁡s1kα,s2kα,…,smkα·sgn⁡sk.Therefore, the sliding mode controller can be designed by comparing (16a), (16b), and (20):(22)uk=-Bm-1C1A^11z1k+C1A^12z2k+A^21z1k+A^22z2k-1-qTsk+εTsigα⁡sk+Bmωk.It can be seen that the unknown disturbance term is included in the expression of u(k), so u(k) cannot be used directly as a control signal. 
To tackle this problem we introduce the following disturbance estimation strategy:(23)ω^k=z2k-A^21z1k-1-A^22z2k-1-Bmuk-1=ωk-1.Substituting (23) into (22) yields the final SMC law:(24)uk=-Bm-1C1A^11z1k+C1A^12z2k+A^21z1k+A^22z2k-1-qTsk+εTsigα⁡sk+Bmω^k.With the SMC law (24), the new sliding mode reaching law considering disturbance is(25)sk+1=1-qTsk-εTsigα⁡sk+Bmωk-ω^k=1-qTsk-εTsigα⁡sk+Bmωk-ωk-1.Since s(k+1) is a vector with m sliding mode functions, then we write (25) into the following componentwise form and we will prove that each sliding mode state satisfies the reaching condition. The componentwise form of (25) is(26)sik+1=1-qTsik-εTsikαsgn⁡sik+wiωik-ωik-1i=1,2,…,m,where wi is the ith row of Bm, i.e.,(27)Bm=w1⋮wm.Assumption 6. The change rate of disturbanceδi(k)=ωi(k)-ωi(k-1) is limited, and there exists a constant δ∗ such that maxwiδi(k)≤δ∗,(i=1,2,…,m).Theorem 7. With the constant matrixC and the linear sliding surface given by (17), the trajectory of system sliding mode state si(k)i=1,2,…,m can be driven by controller (25) into the sliding surface neighborhood region Ω within at most mi∗ steps, where(28)Ω=sik∣sik≤η=ξα·max⁡δ∗εT1/α,εT1-qT1/1-αξα=1+αα/1-α-α1/1-αmi∗=mi+1withmi=si20-η2μ2μ=qTη+1-qT.Proof. The proof of the theory consists of two problems: (i) we need to prove that system sliding mode state will enter the regionΩ within mi∗ steps. (ii) we will prove that ifsik∈Ω, then sik+1∈Ω. Proof of Problem (i). Define a Lyapunov function Vik=si2k; then it is obtained that(29)ΔVik=si2k+1-si2k=-qTsik+εTsikαsgn⁡sik-wiδik·2sik-qTsik-εTsikαsgn⁡sik+wiδik.Then the following two conditions are discussed. ① Ifsik>η, there is(30)ΔVik=-qTsik+εTsiαk-wiδik·2sik-qTsik-εTsiαk+wiδik.Since sik>η, there is no doubt that sik>ξα·δ∗/εT1/α. Thus, we have(31)qTsik+εTsiαk-wiδik≥qTsik+ξααδ∗-wiδik≥qT·η+ξαα-1δ∗.Define μ=qT·η+ξαα-1δ∗, and then since 0<qT<1, 1<ξαα<2, it is sure that μ is a positive constant. Similarly, sincesik>η, there is no doubt that sik>η=ξα·εT/1-qT1/1-α. 
Thus, we have

$$(1-qT)\,s_i^{1-\alpha}(k) > \xi_\alpha^{1-\alpha}\,\varepsilon T. \tag{32}$$

Therefore,

$$(1-qT)s_i(k) > \xi_\alpha^{1-\alpha}\,\varepsilon T s_i^{\alpha}(k) > \varepsilon T s_i^{\alpha}(k). \tag{33}$$

By further calculation, we have

$$s_i(k) > qTs_i(k) + \varepsilon T s_i^{\alpha}(k). \tag{34}$$

Then it can be obtained that

$$2s_i(k) - qTs_i(k) - \varepsilon T s_i^{\alpha}(k) + w_i\delta(k) \ge qTs_i(k) + \varepsilon T s_i^{\alpha}(k) + w_i\delta(k) \ge qTs_i(k) + \varepsilon T s_i^{\alpha}(k) - |w_i\delta(k)| \ge \mu. \tag{35}$$

Consequently, there is

$$\Delta V_i(k) = -\bigl[qTs_i(k) + \varepsilon T s_i^{\alpha}(k) - w_i\delta(k)\bigr]\cdot\bigl[2s_i(k) - qTs_i(k) - \varepsilon T s_i^{\alpha}(k) + w_i\delta(k)\bigr] \le -\mu^2 < 0. \tag{36}$$

② If $s_i(k) < -\eta$, by a similar proof procedure it can be shown that the relation $\Delta V_i(k) \le -\mu^2$ still holds. Therefore, when $s_i(k) \notin \Omega$, $\Delta V_i(k) = s_i^2(k+1) - s_i^2(k) \le -\mu^2 < 0$. Moreover, it can be obtained from (36) that

$$\Delta V_i(k) \le -\mu^2 \;\Leftrightarrow\; s_i^2(k+1) \le s_i^2(k) - \mu^2 \le s_i^2(k-1) - 2\mu^2 \le \cdots \le s_i^2(0) - (k+1)\mu^2. \tag{37}$$

It means that if

$$s_i^2(0) - (k+1)\mu^2 = \eta^2, \tag{38}$$

then $s_i(k+1) \in \Omega$. However, the solution of (38) may not be an integer, so we denote by $m_i = \bigl(s_i^2(0) - \eta^2\bigr)/\mu^2$ the real-number solution of (38). Then we can say that after at most $m_i^* = m_i + 1$ steps, the $i$th sliding mode state will enter the sliding surface neighborhood region $\Omega$.

Proposition 8. Denote $\varphi = \max\bigl\{(\delta^*/\varepsilon T)^{1/\alpha}, (\varepsilon T/(1-qT))^{1/(1-\alpha)}\bigr\}$; then $\delta^* \le \varepsilon T\varphi^{\alpha} \le (1-qT)\varphi$.

Lemma 9. For functions of the form

$$\eta_\sigma = 1 + \sigma^{\sigma/(1-\sigma)} - \sigma^{1/(1-\sigma)}, \tag{39}$$

where $0 < \sigma < 1$, there holds $1 < \eta_\sigma < 2$, and for any parameter $x \in [0,1]$ the following relation holds [33]:

$$x\eta_\sigma - x^{\sigma}\eta_\sigma^{\sigma} + \eta_\sigma - 1 \ge 0. \tag{40}$$

Proof of Problem (ii). When $s_i(k) \in \Omega$, it means $-\eta \le s_i(k) \le \eta$. We can define a new parameter $-1 \le \theta \le 1$ such that $s_i(k) = \theta\eta = \theta\xi_\alpha\varphi$. The following two cases are discussed.

① When $0 \le \theta \le 1$, according to (26) we have

$$s_i(k+1) = (1-qT)\theta\xi_\alpha\varphi - \varepsilon T\theta^{\alpha}\xi_\alpha^{\alpha}\varphi^{\alpha} + w_i\delta(k). \tag{41}$$

First, we will prove that $s_i(k+1) \le \eta$. According to Proposition 8 and Assumption 6, there is

$$s_i(k+1) \le (1-qT)\theta\xi_\alpha\varphi + \delta^*\bigl(1 - \theta^{\alpha}\xi_\alpha^{\alpha}\bigr). \tag{42}$$

If $\theta\xi_\alpha \ge 1$, we have

$$s_i(k+1) \le (1-qT)\theta\xi_\alpha\varphi \le \xi_\alpha\varphi = \eta. \tag{43}$$

If $0 \le \theta\xi_\alpha < 1$, we have

$$s_i(k+1) \le (1-qT)\varphi\bigl(1 + \theta\xi_\alpha - \theta^{\alpha}\xi_\alpha^{\alpha}\bigr) \le (1-qT)\varphi \le \xi_\alpha\varphi = \eta. \tag{44}$$

Second, we will prove that $s_i(k+1) \ge -\eta$. According to Proposition 8, we have

$$s_i(k+1) \ge (1-qT)\theta\xi_\alpha\varphi - \varepsilon T\theta^{\alpha}\xi_\alpha^{\alpha}\varphi^{\alpha} - \delta^* \ge (1-qT)\varphi\bigl(\theta\xi_\alpha - \theta^{\alpha}\xi_\alpha^{\alpha} - 1\bigr). \tag{45}$$

If $\theta\xi_\alpha \ge 1$, since $0 < \alpha < 1$, we have

$$s_i(k+1) \ge -(1-qT)\varphi \ge -\xi_\alpha\varphi = -\eta. \tag{46}$$

If $0 \le \theta\xi_\alpha < 1$, by Lemma 9 we have

$$s_i(k+1) \ge (1-qT)\varphi\bigl(\theta\xi_\alpha - \theta^{\alpha}\xi_\alpha^{\alpha} - 1\bigr) \ge -\xi_\alpha(1-qT)\varphi \ge -\xi_\alpha\varphi = -\eta. \tag{47}$$

② When $-1 \le \theta \le 0$, according to (26) we have

$$s_i(k+1) = -(1-qT)|\theta|\xi_\alpha\varphi + \varepsilon T|\theta|^{\alpha}\xi_\alpha^{\alpha}\varphi^{\alpha} + w_i\delta(k). \tag{48}$$

First, we will prove that $s_i(k+1) \le \eta$.
According to Proposition 8, we have

$$s_i(k+1) \le -(1-qT)|\theta|\xi_\alpha\varphi + (1-qT)\varphi|\theta|^{\alpha}\xi_\alpha^{\alpha} + \delta^* \le (1-qT)\varphi\bigl(-|\theta|\xi_\alpha + |\theta|^{\alpha}\xi_\alpha^{\alpha} + 1\bigr). \tag{49}$$

If $|\theta|\xi_\alpha \ge 1$, since $0 < \alpha < 1$, we have

$$s_i(k+1) \le (1-qT)\varphi \le \xi_\alpha\varphi = \eta. \tag{50}$$

If $0 \le |\theta|\xi_\alpha < 1$, according to Lemma 9, we have

$$s_i(k+1) \le (1-qT)\varphi\xi_\alpha \le \varphi\xi_\alpha = \eta. \tag{51}$$

Second, we will prove that $s_i(k+1) \ge -\eta$. According to Proposition 8, we have

$$s_i(k+1) = -(1-qT)|\theta|\xi_\alpha\varphi + \varepsilon T|\theta|^{\alpha}\xi_\alpha^{\alpha}\varphi^{\alpha} + w_i\delta(k) \ge -(1-qT)|\theta|\xi_\alpha\varphi + |\theta|^{\alpha}\xi_\alpha^{\alpha}\delta^* - |w_i\delta(k)| \ge -(1-qT)|\theta|\xi_\alpha\varphi + \delta^*\bigl(|\theta|^{\alpha}\xi_\alpha^{\alpha} - 1\bigr). \tag{52}$$

If $|\theta|\xi_\alpha \ge 1$, we have

$$s_i(k+1) \ge -(1-qT)|\theta|\xi_\alpha\varphi \ge -\xi_\alpha\varphi = -\eta. \tag{53}$$

If $0 \le |\theta|\xi_\alpha < 1$, we have

$$s_i(k+1) \ge -(1-qT)\varphi\bigl(|\theta|\xi_\alpha - |\theta|^{\alpha}\xi_\alpha^{\alpha} + 1\bigr) \ge -(1-qT)\varphi \ge -\xi_\alpha\varphi = -\eta. \tag{54}$$

To this end, we can conclude that when $-\eta \le s_i(k) \le \eta$, we have $-\eta \le s_i(k+1) \le \eta$, which means $s_i(k+1) \in \Omega$.

Remark 10. For discrete-time systems, the accurate arrival of the sliding mode state on the sliding surface is hard to realize, so totally eliminating chattering is actually not possible in discrete-time systems. This paper uses $\mathrm{sig}^{\alpha}(s(k))$ in place of the sign function to avoid the sharp switching of the control input and thus suppress chattering. The closer the sliding mode state is to the sliding surface, the lower the chattering that will be obtained. It needs to be pointed out that the term "chattering-free" here is just a conventional way to describe such chattering suppression methods [32, 34, 35].

## 3.3. Design of Multiple-Model Based Compensator

Traditionally, if time delay information cannot be foreseen by the controller, there can only be two control input compensation strategies, i.e., zero-input or hold-input. The zero-input strategy means that if the controller output $\bar{u}(k)$ does not arrive on time, the control input is set to $u(k) = 0$. However, when time delay happens frequently, this compensation strategy results in very poor dynamic performance. The hold-input strategy means that the control signal remains unchanged until a new control signal arrives, but this may cause serious overshoot when long time delay exists.
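The scalar behavior claimed by Theorem 7 can be spot-checked numerically. The sketch below uses my reconstruction of the constants in (28) together with Lemma 9; the parameter values ($qT$, $\varepsilon T$, $\alpha$, $\delta^*$) are illustrative, not taken from the paper:

```python
import numpy as np

# Sanity check of the reaching law (26) and of the region Omega in (28).
qT, epsT, alpha, delta_star = 0.2, 0.01, 0.5, 0.005   # qT = q*T, epsT = eps*T

xi = 1 + alpha ** (alpha / (1 - alpha)) - alpha ** (1 / (1 - alpha))
assert 1.0 < xi < 2.0                       # bound claimed in Lemma 9
phi = max((delta_star / epsT) ** (1 / alpha),
          (epsT / (1 - qT)) ** (1 / (1 - alpha)))
eta = xi * phi                              # radius of Omega

# Lemma 9's inequality (40) for sigma = alpha, checked on a grid:
x = np.linspace(0.0, 1.0, 2001)
assert np.min(x * xi - x ** alpha * xi ** alpha + xi - 1) >= -1e-9

def step(s, d):
    """One componentwise reaching-law step (26) with disturbance term d."""
    return (1 - qT) * s - epsT * abs(s) ** alpha * np.sign(s) + d

# (i) From outside Omega the state is driven into the band.
s = 2.0
for _ in range(100):
    s = step(s, delta_star * np.sin(s))     # any disturbance with |d| <= delta*
assert abs(s) <= eta

# (ii) Invariance: once inside Omega, the state stays in Omega.
# step() is monotone in d, so checking d = +/-delta_star suffices.
for s0 in np.linspace(-eta, eta, 2001):
    for d in (-delta_star, delta_star):
        assert abs(step(s0, d)) <= eta + 1e-12
```

With $\alpha = 0.5$ the constant evaluates to $\xi_\alpha = 1.25$; the inequality (40) is tight (its minimum over $x$ is exactly zero), which is why $\eta_\sigma$ appears in the definition of $\eta$.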
With a focus on the above problems, a new compensation strategy is proposed. It also stems from the idea of the hold-input strategy but, unlike the traditional hold-input strategy, it uses different system models to calculate the control input signal for different equivalent time delay conditions. To overcome the problem that the controller cannot foresee the controller-actuator equivalent time delay, all possible models relating to the different equivalent time delays $\tau(k)$ are established in the controller. Then, at each control period, instead of outputting one control signal, the controller calculates and outputs a sequence of control signals $\bar{U}(k) = \bigl[\bar{u}(k)|_{\tau=0}, \bar{u}(k)|_{\tau=1}, \ldots, \bar{u}(k)|_{\tau=\bar{\tau}+\bar{\rho}}\bigr]$ based on the different models, and the controller output $\bar{U}(k)$ is put into one data packet with a time stamp. Also at each control period, the ZOH outputs a signal sequence packet $U(k)$ based on the time stamp information, which means $U(k) = \bar{U}(k)$ for $\tau(k) = 0$, $U(k) = \bar{U}(k-1)$ for $\tau(k) = 1$, and so on. Finally, the compensator decides which control signal of $U(k)$ to use according to the time stamp information; i.e., if $U(k) = \bar{U}(k)$, it uses the first signal of $U(k)$; if $U(k) = \bar{U}(k-1)$, it uses the second signal of $U(k)$; etc.

In this way, even when no new control signal has been received by the ZOH, the actuator can still get an updated control signal from the compensator and, unlike the traditional hold-input strategy, the updated control input signal has already taken into account the state changes of the former sampling period.

Based on the above discussion, the algorithm of the compensator can be described as

$$u(k) = U(k)\,g(\tau(k)) = \bar{U}(k-\tau(k))\,g(\tau(k)), \tag{55}$$

where the compensation signal choosing function $g(x)$ is defined as

$$g(x) = \bigl[g_0, g_1, \ldots, g_i, \ldots, g_{\bar{\tau}+\bar{\rho}}\bigr]^T, \qquad g_i = \begin{cases} 1, & i = x, \\ 0, & i \neq x. \end{cases} \tag{56}$$

To show the relationship between actuator input and controller output more clearly, the following illustrations are
given:

$$u(k) = \begin{cases} \bar{U}(k)\,[1\;0\;\cdots\;0]^T = \bar{u}(k)|_{\tau=0}, & \tau = 0,\\ \bar{U}(k-1)\,[0\;1\;0\;\cdots\;0]^T = \bar{u}(k-1)|_{\tau=1}, & \tau = 1,\\ \qquad\vdots & \quad\vdots\\ \bar{U}(k-\tau(k))\,[\underbrace{0\;\cdots\;0}_{\tau(k)}\;1\;0\;\cdots\;0]^T = \bar{u}(k-\tau(k))|_{\tau=\tau(k)}, & \tau = \tau(k),\\ \qquad\vdots & \quad\vdots\\ \bar{U}(k-\bar{\tau}-\bar{\rho})\,[0\;\cdots\;0\;1]^T = \bar{u}(k-\bar{\tau}-\bar{\rho})|_{\tau=\bar{\tau}+\bar{\rho}}, & \tau = \bar{\tau}+\bar{\rho}. \end{cases} \tag{57}$$

In conclusion, the steps of designing this chattering-free sliding mode controller are given below.

Step 1. Obtain the system's time delay and packet dropout characteristics through experiments, and then obtain the transition probability matrix (6) based on the proposed modeling method.

Step 2. Obtain the system sliding mode regular form (16a) and (16b) through the linear transformation $z = T\bar{x}$, where the nonsingular matrix $T \in \mathbb{R}^{n\times n}$ is defined by (14) and (15).

Step 3. Design the classic linear sliding surface (17), whose parameter $C = [C_1\ I]$ can be determined by the pole placement method.

Step 4. Design the chattering-free reaching law (20) and choose appropriate reaching law parameters $q$, $\varepsilon$, and $\alpha$ satisfying $0 < qT < 1$, $0 < \varepsilon T < 1$, $0 < \alpha < 1$.

Step 5. Design the sliding mode control law (24), i.e., (22) combined with the disturbance estimate (23), according to reaching law (20).

Step 6. Design the multiple-model based compensator (57) based on the upper bounds of long time delay and consecutive packet dropout.

## 4. Simulation Example

In this section, a simulation example is given to illustrate the effectiveness of the proposed method. Consider the following NCS in the form of (1), which comes from a type of aeroengine control system [36]. The system parameters are initialized as follows:

$$A = \begin{bmatrix} -0.9 & 0.1 & -0.2\\ -0.7 & 0.9 & -0.5\\ 0.4 & -0.8 & 0.6 \end{bmatrix}, \quad B = \begin{bmatrix} 0.2 & 0.1\\ 0.1 & 0.4\\ 0.3 & -0.1 \end{bmatrix}, \quad x = \begin{bmatrix} n_L & n_H & p_3 \end{bmatrix}^T, \quad u = \begin{bmatrix} m_f & A_8 \end{bmatrix}^T, \tag{58}$$

where $n_L$ is the low pressure rotor speed, $n_H$ is the high pressure rotor speed, $p_3$ is the compressor exit total pressure, $m_f$ is the fuel flow, and $A_8$ is the critical section area of the nozzle. The system sampling period is set as $T = 20\,\mathrm{ms}$. The initial system state is $x_0 = \begin{bmatrix} -0.9 & 0.5 & -0.7 \end{bmatrix}^T$.
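The packet-selection mechanism (55)–(57) can be sketched as follows. This is an illustrative reconstruction, with hypothetical function names and string stand-ins for the actual control values:

```python
TAU_BAR, RHO_BAR = 2, 2
H = TAU_BAR + RHO_BAR + 1               # length of the control sequence

def controller_sequence(k):
    # One hypothetical control value per assumed equivalent delay tau;
    # strings stand in for the actual control signals u(k)|tau.
    return [f"u({k})|tau={tau}" for tau in range(H)]

def g(x):
    """Compensation signal choosing function (56): a one-hot selector."""
    return [1 if i == x else 0 for i in range(H)]

def compensator(packets, k):
    """packets maps time stamp -> control sequence. The ZOH keeps the
    newest packet; the compensator picks the entry whose assumed delay
    equals the packet's age, i.e. u(k) = U(k) g(tau(k)) as in (55)."""
    newest = max(packets)               # newest time stamp received
    tau = k - newest                    # equivalent time delay tau(k)
    return packets[newest][g(tau).index(1)]

# At k = 5 the newest available packet was stamped k = 3 (age 2), so the
# compensator applies the tau = 2 entry of that packet's sequence.
print(compensator({3: controller_sequence(3)}, 5))   # -> u(3)|tau=2
```

The point of the dictionary-of-sequences layout is exactly the one made above: even with no newly received packet, the actuator still obtains a control value that was computed for the current delay condition.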
The disturbance is chosen as

$$d(k) = B\omega(k) = BN\zeta(k), \tag{59}$$

where $\zeta(k) = 1.2\sin(0.5\pi k)$ and $N = \begin{bmatrix} 0.5 & 0.3 \end{bmatrix}^T$.

The sliding function parameter $C = [C_1\ I]$ can be designed by the pole placement method; placing the pole of the closed-loop system (16a) at $-0.5$ gives $C_1 = \begin{bmatrix} 2.8539 & -0.0238 \end{bmatrix}^T$, and thus

$$C = \begin{bmatrix} 2.8539 & 1 & 0\\ -0.0238 & 0 & 1 \end{bmatrix}.$$

Then choose the reaching law parameters as $q = 10$, $\varepsilon = 0.5$, and $\alpha = 0.5$ (so that $qT = 0.2$ and $\varepsilon T = 0.01$).

For long time delay and packet dropout, it is defined that $\bar{\tau} = 2$ and $\bar{\rho} = 2$, and the transition probability matrix is given below:

$$\Pi_{\bar{\tau},\bar{\rho}} = \begin{bmatrix} 0.7 & 0.3 & 0 & 0 & 0\\ 0.6 & 0.2 & 0.2 & 0 & 0\\ 0.4 & 0.2 & 0.2 & 0.2 & 0\\ 0.3 & 0.2 & 0.2 & 0 & 0.3\\ 0.4 & 0.4 & 0.2 & 0 & 0 \end{bmatrix}. \tag{60}$$

Based on the matrix $\Pi_{\bar{\tau},\bar{\rho}}$, a Markov-chain-typed equivalent time delay distribution is shown in Figure 2, which simulates the actual time delay and packet dropout condition of a networked control system during 100 sampling periods.

Figure 2: Distribution of equivalent time delay.

It can be seen from Figure 2 that $\tau(k) = 3$ occurs four times and $\tau(k) = 4$ does not occur. Accordingly, we can see that, due to the use of the ZOH and the application of the proposed new modeling method, consecutive packet dropout does not really affect the system in this case even though it does exist. However, the effect of consecutive packet dropout may become noticeable if the upper bound of consecutive packet dropout is larger than two, but there is still no doubt that the proposed method greatly suppresses its effect.

Simulation results obtained by using the proposed chattering-free SMC are shown in Figures 3–5. To show the superiority of the proposed controller in chattering suppression, another controller is applied, namely the classic reaching law based SMC proposed by Gao [37]. The same compensation strategy is also used in this controller. The results obtained are shown in Figures 6–8. The comparison of Figure 3 with Figure 6 reveals that the system states converge in less time with lower chattering in the case of the proposed controller as compared to the classic reaching law based SMC.
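As a quick check of (60), the equivalent-time-delay chain can be sampled directly; the code below assumes only the matrix as printed and a fixed random seed:

```python
import numpy as np

# Simulating the equivalent-time-delay Markov chain with the transition
# matrix Pi of (60) (tau_bar = rho_bar = 2, states 0..4).
Pi = np.array([[0.7, 0.3, 0.0, 0.0, 0.0],
               [0.6, 0.2, 0.2, 0.0, 0.0],
               [0.4, 0.2, 0.2, 0.2, 0.0],
               [0.3, 0.2, 0.2, 0.0, 0.3],
               [0.4, 0.4, 0.2, 0.0, 0.0]])
assert np.allclose(Pi.sum(axis=1), 1.0)        # each row is a distribution

rng = np.random.default_rng(0)
tau, path = 0, []
for _ in range(100):                            # 100 sampling periods
    tau = rng.choice(5, p=Pi[tau])
    path.append(int(tau))

# From any state, delay 3 is reachable only from delay 2 (column 3 of Pi
# has a single nonzero entry), mirroring the structural zeros of (6).
assert all(path[k + 1] != 3 or path[k] == 2 for k in range(len(path) - 1))
```

A realization of this kind is what Figure 2 plots; exact occurrence counts depend on the seed, so they are not asserted here.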
The comparison of Figure 4 with Figure 7, and of Figure 5 with Figure 8, gives the same conclusion for the control input and the sliding function.

Figure 3: System state trajectory with the proposed controller.
Figure 4: Control input trajectory with the proposed controller.
Figure 5: Sliding function trajectory with the proposed controller.
Figure 6: System state trajectory with the classic reaching law based SMC.
Figure 7: Control input trajectory with the classic reaching law based SMC.
Figure 8: Sliding function trajectory with the classic reaching law based SMC.

Moreover, to verify the effectiveness of the proposed multiple-model based compensator, another controller has been constructed which has the same chattering-free reaching law as the proposed controller but no compensator; instead, the traditional hold-input compensation strategy is used. First, the condition $\bar{\tau} = 2$, $\bar{\rho} = 2$ is considered, and the response of system state $x_1$ with the two controllers is shown in Figure 9, where controller 1 is the proposed controller and controller 2 is the proposed controller without the compensator. It can be seen that there is no essential difference between the response curves of $x_1$ with the two controllers, because the upper bounds of time delay and packet dropout are small and the hold-input strategy is sufficient for compensation. Then Figures 10 and 11 depict the response curves of $x_1$ under the conditions $\bar{\tau} = 3$, $\bar{\rho} = 3$ and $\bar{\tau} = 4$, $\bar{\rho} = 4$, respectively. It is obvious that the performance of controller 2 (the proposed controller without the compensator) deteriorates sharply as the upper bounds of time delay and packet dropout grow, while the proposed controller suffers little from the changes.

Figure 9: Trajectory of state $x_1$ with two controllers under the condition $\bar{\tau} = 2$, $\bar{\rho} = 2$.
Figure 10: Trajectory of state $x_1$ with two controllers under the condition $\bar{\tau} = 3$, $\bar{\rho} = 3$.
Figure 11: Trajectory of state $x_1$ with two controllers under the condition $\bar{\tau} = 4$, $\bar{\rho} = 4$.

## 5. Conclusion
This paper proposes a chattering-free sliding mode controller with a multiple-model based delay compensator for NCSs with long time delay and consecutive packet dropout. By proposing a new modeling method, long time delay and consecutive packet dropout can be modeled in a unified model described by one Markov chain, which not only simplifies the system model but also makes the designed controller more suitable for practical use. To the best of the authors' knowledge, this modeling method for time delay and packet dropout has not been reported in the existing literature, and it is the use of this model that makes the proposed compensator function well. A chattering-free sliding mode reaching law is then proposed, and it is also the first time that such a chattering-free reaching law has been used for multiple-input discrete-time systems. To overcome the problem that the controller-actuator channel network condition cannot be foreseen by the controller, a new compensation strategy is proposed and a multiple-model based compensator is constructed. Finally, a simulation example is given; it shows that the proposed method can effectively overcome the effects of time delay, packet dropout, and disturbance, and can make the system states converge to the origin quickly without noticeable chattering. Therefore, the proposed method is a suitable choice for uncertain NCSs with long time delay and consecutive packet dropout. In our future research, we will work on testing the proposed method in a practical system so that the proposed theory can be made more complete.

---

*Source: 1016381-2019-07-11.xml*
# Chattering-Free Sliding Mode Control for Networked Control System with Time Delay and Packet Dropout Based on a New Compensation Strategy

**Authors:** Yu Zhang; Shousheng Xie; Ledi Zhang; Litong Ren; Bin Zhou; Hao Wang

**Journal:** Complexity (2019)

**Category:** Mathematical Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2019/1016381
---

## Abstract

This paper addresses the sliding mode control problem for a class of networked control systems with long time delay and consecutive packet dropout. A new modeling method is proposed, through which time delay and packet dropout are modeled in a unified model described by one Markov chain. To avoid the chattering problem of the classic reaching law, a new chattering-free reaching law is proposed. Then, with a focus on the problem that the controller-actuator channel network condition cannot be foreseen by the controller, a new compensation strategy is proposed, which can effectively compensate for the effect of time delay and packet dropout in the controller-actuator channel. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed approach.

---

## Body

## 1. Introduction

In recent years, much interest has been focused on networked control systems (NCSs), in which communication channels interconnect sensors with controllers and/or controllers with actuators [1]. Compared with conventional point-to-point interconnected feedback control systems, NCSs have advantages in many aspects: higher system operability, more efficient resource utilization and sharing, lower cost and reduced weight, as well as simplicity of system installation and maintenance [2–4]. Despite these distinctive features, the insertion of a communication network into the feedback control loops also brings new challenges such as bandwidth-limited channels, network-induced delay, packet dropout, and packet disordering [5].

Time delay and packet dropout are two of the most common and crucial problems of NCSs; they can directly degrade the performance of the closed-loop system or even lead to instability [6]. More specifically, long time delay refers to a time delay larger than the sampling period which, compared with short or "one step" time delay, can cause more serious problems such as packet disordering.
Therefore, for discrete-time systems, it is more valuable to carry out research on long time delay systems. Numerous effective methods aiming to tackle the time delay and packet dropout problems arising in NCSs have been reported in the literature (see, for example, [7–10]). It can be concluded that the solution consists of two main parts: one is to establish an appropriate model to describe time delay and packet dropout, and the other is to design a controller that can guarantee system stability despite the existence of time delay and packet dropout. This paper is also organized according to this structure.

Firstly, since time delay and packet dropout usually exist simultaneously in a real network, it is important to establish a model that handles them in a common framework. Various modeling methods have been proposed, including the time delay system model [11], the switched system model [12–14], the asynchronous dynamical system model [15, 16], and the stochastic system model [6, 8, 9, 17–25]. Among them, the stochastic system model has become popular in recent years, and one of the most used approaches is to adopt a Markov discrete-time linear system to describe random time delays or packet dropouts [9, 20–22, 24, 25]. It is found that the transition from one time delay/packet dropout state to another usually occurs with a certain probability, and thus a Markov chain can be used to describe such a relation with a transition probability matrix. Therefore, long time delay and consecutive packet dropout can be modeled as a Markov chain. In [24], both sensor-to-controller and controller-to-actuator delays are considered and described by Markov chains, and the resulting closed-loop systems are written as jump linear systems with two modes. Similarly, in [25], the characteristics of network communication delays and packet dropouts in both sensor-to-controller and controller-to-actuator channels are modeled, and their historical behavior is described by three Markov chains.
However, it is discovered that most of the existing Markov chain based modeling methods share a common problem: the time delay and packet dropout are usually considered on the controller side, which means they do not reflect the condition of the packets actually received by the actuator. This can lead to the use of outdated signals and to packet disordering. To avoid this problem, a buffer or zero-order holder (ZOH) is introduced in [26], but unfortunately the model used in [26] remains unchanged. Similarly, a ZOH is also used in [27] and a corresponding modeling method is proposed, but [27] considers only the problem of packet dropout. Motivated by the above discussions, this paper focuses on proposing a new model that considers both long time delay and consecutive packet dropout in a unified framework, and the control signals actually received on the actuator side are taken into account to avoid problems such as packet disordering and the use of obsolete data.

Once the modeling method is determined, the next key problem is to design an appropriate controller that can guarantee stability and performance of the NCS despite the existence of time delay and packet dropout. Among the various methods, the sliding mode control (SMC) method stands out for its unique feature that the system states can be driven onto a specially designed sliding surface and then become totally insensitive to matched uncertainties [28–30]. However, the well-known chattering problem limits its wide application, especially in discrete-time systems.
With a focus on this problem, this paper proposes a chattering-free sliding mode reaching law for a type of multiple-input NCS; it needs to be pointed out that, to the best of the authors' knowledge, the use of such a chattering-free reaching law in multiple-input systems has rarely been reported.

Moreover, another critical problem, usually ignored, is that the network condition of the controller-actuator channel cannot be foreseen by the controller when calculating the control signal, which makes the calculated control signal unsuitable when time delay or packet dropout occurs. Some researchers have noticed this problem, but they tried to solve it only by using the time delay and packet dropout information of the former sampling period as compensation [31]. Therefore, another novelty of this paper is a new compensation strategy: all possible time delay and packet dropout states in the controller-actuator channel are considered, and a multiple-model based controller is designed which outputs a sequence of control signals; a compensator on the actuator side then decides which signal of the control signal sequence to use according to the time stamp information. In this way, the time delay and packet dropout can be properly compensated for even when the network condition of the controller-actuator channel cannot be foreseen.

The main contributions of this paper can be summarized as follows: (i) a new modeling method is proposed for NCSs with long time delay and consecutive packet dropout; (ii) a chattering-free sliding mode reaching law is proposed and used for the first time for multiple-input NCSs; (iii) a multiple-model based sliding mode controller is designed together with a new compensation strategy which can compensate for time delay and packet dropout without advance knowledge of the controller-actuator channel network condition.

The remainder of this paper is organized as follows.
Section 2 introduces the newly proposed NCS modeling method. The design of the chattering-free sliding mode controller as well as the new compensation strategy is presented in Section 3. Numerical simulation results are shown in Section 4. Section 5 presents some conclusions.

## 2. A New Modeling Method for NCSs with Time Delay and Packet Dropout

### 2.1. Modeling of Time Delay and Packet Dropout

Consider the discrete NCS with time delay and packet dropout, which is described by the following state space model:

$$x(k+1) = Ax(k) + Bu(k) + d(k), \tag{1}$$

where $x(k) \in \mathbb{R}^n$ is the system state, $u(k) \in \mathbb{R}^m$ is the control input, $d(k) \in \mathbb{R}^n$ is an external disturbance, and $A$ and $B$ are matrices with appropriate dimensions.

To make the analysis easier, the following assumptions are made for system (1).

Assumption 1. System (1) is controllable and all system state variables are observable.

Assumption 2. Time delay and packet dropout exist in the controller-actuator channel only, and each data packet is sent with a time stamp.

Assumption 3. The long time delay $\tau$ is bounded and its upper bound, denoted $\bar{\tau}$, is known; consecutive packet dropout is also bounded and the largest number of consecutive packet dropouts, denoted $\bar{\rho}$, is known.

Assumption 4. The external disturbance $d(k)$ is unknown but bounded and satisfies the sliding mode matching condition, i.e.,

$$d(k) = B\omega(k). \tag{2}$$

Now we are going to consider long time delay and consecutive packet dropout for system (1). We will first establish a time delay model based on a Markov chain; then we will treat packet dropout as a special type of time delay, and the obtained transition probability matrix will be extended. Such a modeling method considers the connection between time delay and packet dropout on the actuator side, which makes the established model better reflect the real condition in practical use.

Inspired by [27], a ZOH is adopted for the NCS. Therefore, due to the existence of the ZOH, only the newest received control signal will be used.
Here we assume that the time delay $\tau_k$ takes values in the set $\Omega_\tau = \{0, 1, 2, \ldots, \bar{\tau}\}$. Time delay can be zero in our model because, for a normal NCS, it is impossible that no signal ever arrives on time; therefore, it is more reasonable to add zero to the value space of the time delay. First, let us consider an example in which $\tau_k = 0$. It means the controller output signal $\bar{u}(k)$ arrives at time $k$, so the actuator input is $u(k) = \bar{u}(k)$. Then, due to the existence of the ZOH, there are only two possible conditions for $u(k+1)$. If $\bar{u}(k+1)$ also arrives on time, we have $u(k+1) = \bar{u}(k+1)$ and $\tau_{k+1} = 0$. If $\bar{u}(k+1)$ does not arrive on time, then $\bar{u}(k)$ will still be used as the actuator input, and we have $u(k+1) = \bar{u}(k)$ and $\tau_{k+1} = 1$. In the same way, if we assume the time delay at time instant $k$ is $\tau_k = i$ ($i < \bar{\tau}$), there can only be $i+2$ possible conditions for the next time instant: if the control signal of $k+1$ arrives on time, then $\tau_{k+1} = 0$; if the control signal of $k+1$ does not arrive but the signal sent out at time $k$ arrives, then $\tau_{k+1} = 1$; if the control signal of $k+1$ or $k$ does not arrive but that of time $k-1$ arrives, then $\tau_{k+1} = 2$; the same reasoning explains the conditions all the way to $\tau_{k+1} = i$; however, if no new signal arrives at $k+1$, then, due to the existence of the ZOH, the control signal of time $k$ is still used, so the value of the time delay becomes $\tau_{k+1} = i+1$. In particular, when $\tau_k = \bar{\tau}$, which is the upper bound of the time delay, there can only be $\bar{\tau}+1$ conditions. The existence of the ZOH guarantees the use of the newest signal, which means, for example, that if the control signal $\bar{u}(k+1)$ arrives at time instant $k+1$, then all signals sent before time instant $k+1$ will never be used afterwards.
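The "newest signal wins" rule of the ZOH described above can be sketched as follows (the helper name and the arrival pattern are mine, chosen for illustration; the example assumes at least the first packet arrives on time):

```python
def equivalent_delays(arrivals):
    """arrivals[j] = time instant at which packet u(j) reaches the ZOH,
    or None if that packet is lost. Returns tau(k) for each k, where the
    ZOH always holds the newest (largest-index) packet received so far."""
    taus = []
    for k in range(len(arrivals)):
        received = [j for j, a in enumerate(arrivals)
                    if a is not None and a <= k]
        newest = max(received)          # ZOH discards everything older
        taus.append(k - newest)         # equivalent time delay tau(k)
    return taus

# Example mirroring the discussion: u(0) arrives on time, u(1) is lost,
# u(2) arrives two steps late, u(3)/u(4) are lost, u(5) arrives on time.
print(equivalent_delays([0, None, 4, None, None, 5]))  # -> [0, 1, 2, 3, 2, 0]
```

Note how the lost packets show up only as a growing equivalent delay (the state $\tau = 3$ above), which is exactly how the unified model of this section absorbs dropouts into the delay chain.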
The time delay state transition can be described as

$$\pi_{ij} = \Pr\bigl(\tau_{k+1} = j \mid \tau_k = i\bigr), \qquad \sum_{j=0}^{\bar{\tau}} \pi_{ij} = 1, \tag{3}$$

where $i \in \Omega_\tau$ and $j \in \{0, 1, \ldots, i+1\} \cap \Omega_\tau$, and define $\Pi_{\bar{\tau}} = [\pi_{ij}]$ as the system time delay transition probability matrix, which is given as follows:

$$\Pi_{\bar{\tau}} = \begin{bmatrix} \pi_{00} & \pi_{01} & 0 & 0 & \cdots & 0\\ \pi_{10} & \pi_{11} & \pi_{12} & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\ \pi_{\bar{\tau}-1,0} & \pi_{\bar{\tau}-1,1} & \pi_{\bar{\tau}-1,2} & \cdots & \pi_{\bar{\tau}-1,\bar{\tau}-1} & \pi_{\bar{\tau}-1,\bar{\tau}}\\ \pi_{\bar{\tau},0} & \pi_{\bar{\tau},1} & \pi_{\bar{\tau},2} & \cdots & \pi_{\bar{\tau},\bar{\tau}-1} & \pi_{\bar{\tau},\bar{\tau}} \end{bmatrix}. \tag{4}$$

Then let us take consecutive packet dropout into consideration. As mentioned above, if there is only bounded time delay, then when $\tau_k = \bar{\tau}$ there can be only $\bar{\tau}+1$ possible conditions. However, if packet dropout is also considered, the time delay condition at time $k+1$ is different but can also be included and described in a similar way.

Firstly, an example with $\tau_{\max} = 2$ and $\rho_{\max} = 2$ is shown in Figure 1 to illustrate the situation when consecutive packet dropout is included.

Figure 1: Signal transmission timing diagram with $\tau_{\max} = 2$, $\rho_{\max} = 2$.

It is shown that $\bar{u}(k)$ is transmitted to the actuator at time instant $k$ without time delay, thus giving $\tau(k) = 0$. Then at the next time instant $k+1$, as illustrated above, $\tau(k+1)$ can only be $0$ or $1$. If $\tau(k+1) = 1$, then at time instant $k+2$ we have $\tau(k+2) = 0$, $1$, or $2$. If $\tau(k+2) = 2$, then, since the time delay is bounded, at time instant $k+3$ the late-arriving signal can only be $\bar{u}(k+1)$ or $\bar{u}(k+2)$; if there is still no newly arrived signal, it is certain that packet $\bar{u}(k+1)$, and any signals sent before $k+1$ that are still not received, are already lost. On this occasion, the ZOH outputs control signal $\bar{u}(k)$ as the actuator input, and the equivalent time delay is $\tau(k+3) = 3$.
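The sparsity pattern of (4), and of its extension (6) below, follows directly from the allowed transitions; a short sketch (the helper name is mine, not the paper's):

```python
import numpy as np

# Allowed-transition mask implied by (3) and (5): from equivalent delay i,
# the next delay j can be 0..i+1 when i <= tau_bar, or 0..tau_bar plus the
# single value i+1 when i > tau_bar (packet-dropout states), capped at
# tau_bar + rho_bar.
def transition_mask(tau_bar, rho_bar):
    N = tau_bar + rho_bar + 1
    mask = np.zeros((N, N), dtype=bool)
    for i in range(N):
        if i <= tau_bar:
            mask[i, : min(i + 2, N)] = True    # j in {0, ..., i+1}
        else:
            mask[i, : tau_bar + 1] = True      # a newer packet finally arrives
            if i + 1 < N:
                mask[i, i + 1] = True          # one more consecutive dropout
    return mask

m = transition_mask(2, 2)
print(m.astype(int))
```

For $\bar{\tau} = \bar{\rho} = 2$ this reproduces the nonzero pattern of the example matrix (60) used later in Section 4 (in particular, row 3 skips column 3: a chain cannot stay at equivalent delay 3).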
If $\tau(k+3) = 3$, then, since it is certain that $\bar{u}(k+1)$ is already lost, at time instant $k+4$ the late-arriving signal can only be $\bar{u}(k+2)$ or $\bar{u}(k+3)$; if there is still no newly arrived signal, $\bar{u}(k)$ will still be used by the actuator, and thus the equivalent time delay is $\tau(k+4) = 4$. If $\tau(k+4) = 4$, then, since consecutive packet dropout is also bounded, $\tau(k+5)$ can only take values in the set $\{0, 1, 2\}$.

Extend the above discussion to $\tau_{\max} = \bar{\tau}$, $\rho_{\max} = \bar{\rho}$ and define the value space of the equivalent time delay as $\Omega_\tau = \{0, 1, 2, \ldots, \bar{\tau}+\bar{\rho}\}$; then the following equivalent time delay transition is given:

$$\pi_{ij} = \Pr\bigl(\tau_{k+1} = j \mid \tau_k = i\bigr), \qquad \sum_{j=0}^{\bar{\tau}+\bar{\rho}} \pi_{ij} = 1, \tag{5}$$

where $i \in \Omega_\tau$ and $j \in \bigl(\{0, 1, \ldots, \min(i+1,\bar{\tau})\} \cup \{i+1\}\bigr) \cap \Omega_\tau$, and define $\Pi_{\bar{\tau},\bar{\rho}} = [\pi_{ij}]$ as the system equivalent time delay transition probability matrix, which is given as follows:

$$\Pi_{\bar{\tau},\bar{\rho}} = \begin{bmatrix} \pi_{00} & \pi_{01} & 0 & \cdots & & & 0\\ \pi_{10} & \pi_{11} & \pi_{12} & 0 & \cdots & & 0\\ \vdots & & \ddots & \ddots & & & \vdots\\ \pi_{\bar{\tau},0} & \pi_{\bar{\tau},1} & \cdots & \pi_{\bar{\tau},\bar{\tau}} & \pi_{\bar{\tau},\bar{\tau}+1} & \cdots & 0\\ \pi_{\bar{\tau}+1,0} & \pi_{\bar{\tau}+1,1} & \cdots & \pi_{\bar{\tau}+1,\bar{\tau}} & 0 & \pi_{\bar{\tau}+1,\bar{\tau}+2} & 0\\ \vdots & & & \vdots & & \ddots & \vdots\\ \pi_{\bar{\tau}+\bar{\rho}-1,0} & \pi_{\bar{\tau}+\bar{\rho}-1,1} & \cdots & \pi_{\bar{\tau}+\bar{\rho}-1,\bar{\tau}} & 0 & \cdots & \pi_{\bar{\tau}+\bar{\rho}-1,\bar{\tau}+\bar{\rho}}\\ \pi_{\bar{\tau}+\bar{\rho},0} & \pi_{\bar{\tau}+\bar{\rho},1} & \cdots & \pi_{\bar{\tau}+\bar{\rho},\bar{\tau}} & 0 & \cdots & 0 \end{bmatrix}. \tag{6}$$

As a result, the NCS with long time delay and consecutive packet dropout can be expressed as

$$x(k+1) = Ax(k) + Bu(k-\tau(k)) + d(k), \tag{7}$$

where $\tau(k)$ is the equivalent time delay.

Remark 5. The proposed new modeling method stands out from traditional stochastic system modeling methods in the following three aspects: (i) it innovatively models long time delay and consecutive packet dropout in a unified model described by only one Markov chain; (ii) the time scale adopted in our Markov chain is linear with the physical time, i.e., the state transition in our Markov chain always happens over one physical time instant; (iii) compared with traditional models, the transition probability matrix of our model is not a full matrix, and thus it requires less work to obtain the transition probability matrix.

### 2.2. Elimination of Time Delay Term in the Form

First, according to Assumption 4, system (7) can be rewritten as

$$x(k+1) = Ax(k) + B\bigl[u(k-\tau(k)) + \omega(k)\bigr]. \tag{8}$$

The following linear transformation can be adopted to obtain a new expression of system (8) which has no time delay term in its form. When $\tau(k) \ge 1$, by defining

$$\bar{x}(k) = A^{\tau}x(k) + \sum_{i=0}^{\tau-1} A^{\tau-i-1}B\bigl[u(k-\tau+i) + \omega(k)\bigr], \tag{9}$$

system (8) can be transformed into

$$\bar{x}(k+1) = A\bar{x}(k) + B\bigl[u(k) + \omega(k)\bigr]. \tag{10}$$

According to Assumption 1, system (10) is also controllable.

Proof.
The term $B[u(k-\tau(k)) + \omega(k)]$ can be decomposed as follows:

$$B\bigl[u(k-\tau) + \omega(k)\bigr] = A^{-\tau}B\bigl[u(k) + \omega(k)\bigr] + \sum_{i=0}^{\tau-1} A^{-i}B\bigl[u(k-\tau+i) + \omega(k)\bigr] - \sum_{i=0}^{\tau-1} A^{-i-1}B\bigl[u(k-\tau+1+i) + \omega(k)\bigr], \tag{11}$$

where the two sums telescope so that only the term $B[u(k-\tau) + \omega(k)]$ survives ($A$ is assumed nonsingular so that $A^{-1}$ exists). Substituting (11) into (8), we have

$$x(k+1) + \sum_{i=0}^{\tau-1} A^{-i-1}B\bigl[u(k-\tau+1+i) + \omega(k)\bigr] = A\Bigl[x(k) + \sum_{i=0}^{\tau-1} A^{-i-1}B\bigl[u(k-\tau+i) + \omega(k)\bigr]\Bigr] + A^{-\tau}B\bigl[u(k) + \omega(k)\bigr]. \tag{12}$$

Premultiplying both sides of (12) by $A^{\tau}$, we have

$$A^{\tau}x(k+1) + \sum_{i=0}^{\tau-1} A^{\tau-i-1}B\bigl[u(k-\tau+1+i) + \omega(k)\bigr] = A\Bigl[A^{\tau}x(k) + \sum_{i=0}^{\tau-1} A^{\tau-i-1}B\bigl[u(k-\tau+i) + \omega(k)\bigr]\Bigr] + B\bigl[u(k) + \omega(k)\bigr]. \tag{13}$$

Finally, letting $\bar{x}(k) = A^{\tau}x(k) + \sum_{i=0}^{\tau-1} A^{\tau-i-1}B[u(k-\tau+i) + \omega(k)]$, system (10) can be obtained. In particular, when $\tau(k) = 0$, it is defined that $\bar{x}(k) = x(k)$.
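The disturbance-free case of the transformation (9)–(10) can be verified numerically; the following sketch simulates a random delayed system and checks the identity directly (dimensions, horizon, and the fixed delay are arbitrary choices):

```python
import numpy as np

# Check: with xbar(k) = A^tau x(k) + sum_i A^(tau-i-1) B u(k-tau+i) and
# omega = 0, the delayed system x(k+1) = A x(k) + B u(k-tau) satisfies
# the delay-free form xbar(k+1) = A xbar(k) + B u(k).
rng = np.random.default_rng(1)
n, m, tau, K = 3, 2, 2, 20
A = rng.standard_normal((n, n)) * 0.4
B = rng.standard_normal((n, m))
u = rng.standard_normal((K + tau, m))           # u[j] stores u(j - tau)

x = np.zeros((K + 1, n))
for k in range(K):                              # x(k+1) = A x(k) + B u(k-tau)
    x[k + 1] = A @ x[k] + B @ u[k]

def xbar(k):
    s = np.linalg.matrix_power(A, tau) @ x[k]
    for i in range(tau):
        s = s + np.linalg.matrix_power(A, tau - i - 1) @ B @ u[k + i]
    return s

for k in range(K - 1):
    lhs = xbar(k + 1)
    rhs = A @ xbar(k) + B @ u[k + tau]          # u[k + tau] stores u(k)
    assert np.allclose(lhs, rhs, atol=1e-8)
```

Note that the check never inverts $A$: nonsingularity of $A$ is only needed for the telescoping derivation, while the resulting identity holds as stated.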
Such modeling method considers the connection between time delay and packet dropout on the actuator side, which makes the model established better reflect the real condition in practical use.Inspired by [27], a ZOH is adopted for the NCS. Therefore, due to the existence of ZOH, only the newest received control signal will be used. Here we assume that time delay τk takes values in the set Ωτ=0,1,2,...τ-. It is noticed from the value space of τk that time delay can be zero in our model because, for a normal NCS, it is not possible that there is no on-time arriving signal. Therefore, it is more reasonable to add zero to the value space of time delay. First, let us think about an example, in which τk=0. It means controller output signal u-(k) arrives at time k, so the actuator input u(k)=u-(k). Then, due to the existence of ZOH, there are only two possible conditions for u(k+1). If u-(k+1) also arrives on time, we have u(k+1)=u-(k+1) and τk+1=0. If  u-(k+1) does not arrive on time, then u-(k) will still be used as an actuator input and we have u(k+1)=u-(k) and τk+1=1. In the same way, if we assume the time delay of time instant k is τk=i(i<τ-), there could be only i+2 possible conditions for the next time instant; i.e., if the control signal of k+1 arrives on time, then τk+1=0; if the control signal of k+1 does not arrive but the signal sent out at time k arrives, then we have τk+1=1; if control signal of k+1 or k does not arrive but that of time k-1 arrives, then we have τk+1=2; the same way can explain the conditions all the way to τk+1=i; however, if there is no newly arriving signal at k+1, then, due to the existence of ZOH, the control signal of time k is still used, so the value of time delay becomes τk+1=i+1. In particular, when τk=τ-, which is the upper bound of time delay, there could only be τ-+1 conditions. 
The existence of ZOH guarantees the use of newest signal, which means, for example, if control signal u-(k+1) arrives at time instance k+1, then all signals sent before time instant k+1 will never be used afterwards. The time delay state transition can be described as(3)πij=Pr⁡τk+1=j∣τk=i∑j=0τ-πij=1,where i∈Ωτ,j∈0,1,…,i+1∩Ωτ, and define Πτ-=πij as system time delay transition possibility matrix, which is given as follows:(4)Πτ-=π00π0100⋯0π10π11π120⋯0⋮⋮⋮⋱⋱⋮πτ--20πτ--21πτ--22⋯πτ--2τ--10πτ--10πτ--11πτ--12⋯πτ--1τ--1πτ--1τ-πτ-0πτ-1πτ-2⋯πτ-τ--1πτ-τ-.Then let us take consecutive packet dropout into consideration. As mentioned above, if there is only bounded time delay, then when τk=τ-, there could be only τ-+1 possible conditions. However, if packet dropout is also considered, the time delay condition of time k+1 is different but can also be included and described in a similar way.Firstly, an example withτmax=2 and ρmax=2 is shown in Figure 1 as follows to illustrate the situation when consecutive packet dropout is included.Figure 1 Signal transmission timing diagram withτmax=2, ρmax=2.It is shown thatu-k is transmitted to the actuator at time instant k without time delay, thus giving τ(k)=0. Then at next time instant k+1, as illustrated above, τ(k+1) could only be τ(k+1)=0or1. If τ(k+1)=1, then at time instant k+2, we have τ(k+2)=0, or 1, or 2. If τ(k+2)=2, since time delay is bounded, then, at time instant k+3, the late arriving signal could only be u-k+1 or u-k+2, and if there is still no newly arrived signal, it is certain that packet u-k+1 and the signals sent before k+1, if still not received, are already lost. On this occasion, ZOH outputs control signal u-k as actuator input and the equivalent time delay is τ(k+3)=3. 
If τ(k+3) = 3, since u-(k+1) is certainly already lost, the late arriving signal at time instant k+4 can only be u-(k+2) or u-(k+3); if there is still no newly arrived signal, u-(k) is still used by the actuator, and the equivalent time delay is τ(k+4) = 4. If τ(k+4) = 4, since consecutive packet dropout is also bounded, τ(k+5) can only take values in the set {0, 1, 2}. Extending the above discussion to τmax = τ-, ρmax = ρ- and defining the value space of the equivalent time delay as Ωτ = {0, 1, 2, …, τ- + ρ-}, the following equivalent time delay transition is obtained:

(5) πij = Pr{τk+1 = j | τk = i}, with ∑_{j=0}^{τ-+ρ-} πij = 1,

where i ∈ Ωτ and j ∈ ({0, 1, …, i+1} ∩ {0, 1, …, τ-}) ∪ ({i+1} ∩ Ωτ). Define Πτ-,ρ- = [πij] as the system equivalent time delay transition probability matrix:

(6) Πτ-,ρ- = [π00 π01 0 ⋯ 0; π10 π11 π12 0 ⋯ 0; ⋮ ⋱ ⋱ ⋮; πτ-0 πτ-1 ⋯ πτ-τ- πτ-(τ-+1) 0 ⋯ 0; π(τ-+1)0 π(τ-+1)1 ⋯ π(τ-+1)τ- 0 π(τ-+1)(τ-+2) 0 ⋯ 0; ⋮ ⋱ ⋮; π(τ-+ρ--1)0 π(τ-+ρ--1)1 ⋯ π(τ-+ρ--1)τ- 0 ⋯ 0 π(τ-+ρ--1)(τ-+ρ-); π(τ-+ρ-)0 π(τ-+ρ-)1 ⋯ π(τ-+ρ-)τ- 0 ⋯ 0].

As a result, the NCSs with long time delay and consecutive packet dropout can be expressed as

(7) x(k+1) = Ax(k) + Bu(k − τ(k)) + d(k),

where τ(k) is the equivalent time delay.

Remark 5. The proposed modeling method stands out from traditional stochastic system modeling methods in three aspects: (i) it innovatively models long time delay and consecutive packet dropout in a unified model described by only one Markov chain; (ii) the time scale adopted in the Markov chain is linear in physical time, i.e., a state transition always happens over one physical time instant; (iii) compared with traditional models, the transition probability matrix is not a full matrix, so less work is required to obtain it.

## 2.2.
Elimination of Time Delay Term in the Form

First, according to Assumption 4, system (7) can be rewritten as

(8) x(k+1) = Ax(k) + Bu(k − τ(k)) + ω(k).

The following linear transformation can be adopted to obtain a new expression of system (8) that does not contain a time-delay term in its form. When τ(k) ≥ 1, by defining

(9) x-(k) = A^τ x(k) + ∑_{i=0}^{τ−1} A^{τ−i−1}[Bu(k−τ+i) + ω(k)],

system (8) can be transformed into

(10) x-(k+1) = Ax-(k) + Bu(k) + ω(k).

According to Assumption 1, system (10) is also controllable.

Proof. The term Bu(k−τ(k)) + ω(k) can be decomposed as follows:

(11) Bu(k−τ) + ω(k) = [Bu(k−τ) + ω(k)] − A^{−1}[Bu(k−τ+1) + ω(k)] + A^{−1}[Bu(k−τ+1) + ω(k)] − ⋯ − A^{−τ+1}[Bu(k−1) + ω(k)] + A^{−τ+1}[Bu(k−1) + ω(k)] − A^{−τ}[Bu(k) + ω(k)] + A^{−τ}[Bu(k) + ω(k)] = A^{−τ}[Bu(k) + ω(k)] + ∑_{i=0}^{τ−1} A^{−i}[Bu(k−τ+i) + ω(k)] − ∑_{i=0}^{τ−1} A^{−i−1}[Bu(k−τ+1+i) + ω(k)].

Substituting (11) into (8), we have

(12) x(k+1) + ∑_{i=0}^{τ−1} A^{−i−1}[Bu(k−τ+1+i) + ω(k)] = Ax(k) + ∑_{i=0}^{τ−1} A^{−i}[Bu(k−τ+i) + ω(k)] + A^{−τ}[Bu(k) + ω(k)].

Premultiplying both sides of (12) by A^τ, we have

(13) A^τ x(k+1) + ∑_{i=0}^{τ−1} A^{τ−i−1}[Bu(k−τ+1+i) + ω(k)] = A{A^τ x(k) + ∑_{i=0}^{τ−1} A^{τ−i−1}[Bu(k−τ+i) + ω(k)]} + Bu(k) + ω(k).

Finally, letting x-(k) = A^τ x(k) + ∑_{i=0}^{τ−1} A^{τ−i−1}[Bu(k−τ+i) + ω(k)], system (10) can be obtained. In particular, when τ(k) = 0, it is defined that x-(k) = x(k).

## 3. Design of Chattering-Free Sliding Mode Controller

### 3.1. Design of Linear Sliding Surface

Since the disturbance satisfies the matching condition, when designing the sliding surface, the system can be transformed into a sliding mode regular form so that the sliding mode dynamics are totally free from the disturbance. To obtain the sliding mode regular form of (10), we can define a nonsingular matrix T ∈ R^{n×n} such that

(14) TB = [0_{(n−m)×m}; Bm],

where Bm ∈ R^{m×m} is nonsingular.
Choose T = [T2 T1]^T, where T1 ∈ R^{n×m} and T2 ∈ R^{n×(n−m)} are two subblocks of a unitary matrix resulting from the singular value decomposition of B, i.e.,

(15) B = [T1 T2][Σ_{m×m}; 0_{(n−m)×m}]J^T,

where Σ is a diagonal positive-definite matrix and J is a unitary matrix. Then, through the linear transformation z = Tx-, system (10) can be decomposed into two subsystems:

(16a) z1(k+1) = Â11 z1(k) + Â12 z2(k),
(16b) z2(k+1) = Â21 z1(k) + Â22 z2(k) + Bm u(k) + ω(k),

where z1(k) ∈ R^{n−m}, z2(k) ∈ R^m, and Â = TAT^{−1}. It has been proved that (16a) is the sliding mode dynamics of system (10). Choose the following classic linear sliding surface:

(17) s(k) = Cz(k) = [C1 I]z(k) = C1 z1(k) + z2(k).

In the sliding mode, we have

(18) s(k) = C1 z1(k) + z2(k) = 0 ⇒ z2(k) = −C1 z1(k).

Substituting (18) into (16a), we have

(19) z1(k+1) = (Â11 − Â12 C1) z1(k),

which is a classic sliding surface design problem, and the sliding surface parameter C can be obtained through pole placement.

### 3.2. Design of Chattering-Free SMC

To suppress chattering, a new chattering-free sliding mode reaching law is proposed, inspired by [32]. However, the reaching law proposed in this paper is for multiple-input systems, which means the sliding surface s(k) is not a scalar but a vector, so the reaching law must be newly defined. The vector form reaching law is given below:

(20) s(k+1) = s(k) − qT s(k) − εT sig^α(s(k)),

where 0 < qT < 1, 0 < εT < 1, 0 < α < 1, and

(21) sig^α(s(k)) ≜ diag(|s1(k)|^α, |s2(k)|^α, …, |sm(k)|^α)·sgn(s(k)).

Therefore, the sliding mode controller can be designed by comparing (16a), (16b), and (20):

(22) u(k) = −Bm^{−1}[C1Â11 z1(k) + C1Â12 z2(k) + Â21 z1(k) + Â22 z2(k) − (1−qT)s(k) + εT sig^α(s(k)) + Bm ω(k)].

It can be seen that the unknown disturbance term is included in the expression of u(k), so u(k) cannot be used directly as a control signal.
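A minimal sketch of the vector reaching law (20)-(21), assuming NumPy; the parameter values mirror the paper's later simulation settings (q = 10, ε = 0.5, α = 0.5, sampling period T = 20 ms), and the initial sliding-mode state is hypothetical:

```python
import numpy as np

# Illustrative parameters satisfying 0 < qT < 1, 0 < eps*T < 1, 0 < alpha < 1.
q, eps, T, alpha = 10.0, 0.5, 0.02, 0.5   # here T is the sampling period

def sig_alpha(s, a):
    """sig^alpha(s) = diag(|s_1|^a, ..., |s_m|^a) * sgn(s), componentwise (21)."""
    return np.abs(s) ** a * np.sign(s)

def reach_step(s):
    """One step of s(k+1) = (1 - qT) s(k) - eps*T * sig^alpha(s(k))  (20)."""
    return (1.0 - q * T) * s - eps * T * sig_alpha(s, alpha)

s = np.array([1.5, -0.8])   # hypothetical initial sliding-mode state
for _ in range(200):
    s = reach_step(s)
# The power term shrinks near s = 0, so the state settles into a small
# band around the surface instead of switching hard across it.
print(np.abs(s))
```

The fractional power is what distinguishes this law from a classic sign-function reaching law: near the surface, the correction magnitude decays with |s| instead of staying at a fixed εT.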
To tackle this problem, we introduce the following disturbance estimation strategy:

(23) ω̂(k) = z2(k) − Â21 z1(k−1) − Â22 z2(k−1) − Bm u(k−1) = ω(k−1).

Substituting (23) into (22) yields the final SMC law:

(24) u(k) = −Bm^{−1}[C1Â11 z1(k) + C1Â12 z2(k) + Â21 z1(k) + Â22 z2(k) − (1−qT)s(k) + εT sig^α(s(k)) + Bm ω̂(k)].

With the SMC law (24), the new sliding mode reaching law considering the disturbance is

(25) s(k+1) = (1−qT)s(k) − εT sig^α(s(k)) + Bm[ω(k) − ω̂(k)] = (1−qT)s(k) − εT sig^α(s(k)) + Bm[ω(k) − ω(k−1)].

Since s(k+1) is a vector of m sliding mode functions, we write (25) in the following componentwise form and prove that each sliding mode state satisfies the reaching condition. The componentwise form of (25) is

(26) si(k+1) = (1−qT)si(k) − εT|si(k)|^α sgn(si(k)) + wi(ωi(k) − ωi(k−1)), i = 1, 2, …, m,

where wi is the ith row of Bm, i.e.,

(27) Bm = [w1; ⋮; wm].

Assumption 6. The change rate of the disturbance, δi(k) = ωi(k) − ωi(k−1), is limited, and there exists a constant δ∗ such that max|wi δi(k)| ≤ δ∗, i = 1, 2, …, m.

Theorem 7. With the constant matrix C and the linear sliding surface given by (17), the trajectory of each sliding mode state si(k), i = 1, 2, …, m, can be driven by the closed-loop reaching dynamics (25) into the sliding surface neighborhood region Ω within at most mi∗ steps, where

(28) Ω = {si(k) : |si(k)| ≤ η}, η = ξα·max{(δ∗/εT)^{1/α}, (εT/(1−qT))^{1/(1−α)}}, ξα = (1+α)^{α/(1−α)} − α^{1/(1−α)}, mi∗ = mi + 1 with mi = (si²(0) − η²)/μ², μ = qT·η + (ξα^α − 1)δ∗.

Proof. The proof of the theorem consists of two parts: (i) proving that each sliding mode state enters the region Ω within mi∗ steps; (ii) proving that if si(k) ∈ Ω, then si(k+1) ∈ Ω.

Proof of Part (i). Define a Lyapunov function Vi(k) = si²(k); then

(29) ΔVi(k) = si²(k+1) − si²(k) = −[qT si(k) + εT|si(k)|^α sgn(si(k)) − wiδi(k)]·[2si(k) − qT si(k) − εT|si(k)|^α sgn(si(k)) + wiδi(k)].

Two cases are then discussed.

① If si(k) > η, then

(30) ΔVi(k) = −[qT si(k) + εT si^α(k) − wiδi(k)]·[2si(k) − qT si(k) − εT si^α(k) + wiδi(k)].

Since si(k) > η, certainly si(k) > ξα·(δ∗/εT)^{1/α}. Thus, we have

(31) qT si(k) + εT si^α(k) − wiδi(k) ≥ qT si(k) + ξα^α δ∗ − wiδi(k) ≥ qT·η + (ξα^α − 1)δ∗.

Define μ = qT·η + (ξα^α − 1)δ∗; since 0 < qT < 1 and 1 < ξα^α < 2, μ is certainly a positive constant. Similarly, since si(k) > η, certainly si(k) > η ≥ ξα·(εT/(1−qT))^{1/(1−α)}.
Thus, we have

(32) (1−qT) si^{1−α}(k) > ξα^{1−α}·εT.

Therefore,

(33) (1−qT) si(k) > ξα^{1−α}·εT si^α(k) > εT si^α(k).

By further calculation, we have

(34) si(k) > qT si(k) + εT si^α(k).

Then it can be obtained that

(35) 2si(k) − qT si(k) − εT si^α(k) + wiδi(k) ≥ qT si(k) + εT si^α(k) + wiδi(k) ≥ qT si(k) + εT si^α(k) − |wiδi(k)| ≥ μ.

Consequently, there is

(36) ΔVi(k) = −[qT si(k) + εT si^α(k) − wiδi(k)]·[2si(k) − qT si(k) − εT si^α(k) + wiδi(k)] ≤ −μ² < 0.

② If si(k) < −η, by a similar proof procedure, the relation ΔVi(k) ≤ −μ² still holds. Therefore, when si(k) ∉ Ω, ΔVi(k) = si²(k+1) − si²(k) ≤ −μ² < 0. Moreover, it can be obtained from (36) that

(37) ΔVi(k) = si²(k+1) − si²(k) ≤ −μ² ⇔ si²(k+1) ≤ si²(k) − μ² ≤ si²(k−1) − 2μ² ≤ ⋯ ≤ si²(0) − (k+1)μ².

It means that if

(38) si²(0) − (k+1)μ² = η²,

then si(k+1) ∈ Ω. However, the solution of (38) may not be an integer, so we denote mi = (si²(0) − η²)/μ² as the real-number solution of (38). Then we can say that after at most mi∗ = mi + 1 steps, the ith sliding mode state will enter the sliding surface neighborhood region Ω.

Proposition 8. Denote φ = max{(δ∗/εT)^{1/α}, (εT/(1−qT))^{1/(1−α)}}; then δ∗ ≤ εT φ^α ≤ (1−qT)φ.

Lemma 9. For functions of the form

(39) η(σ) = (1+σ)^{σ/(1−σ)} − σ^{1/(1−σ)},

where 0 < σ < 1, there holds 1 < η(σ) < 2, and for any parameter z ∈ [0, 1], the following relation holds [33]:

(40) z·η(σ) − z^σ·η(σ)^σ + η(σ) − 1 ≥ 0.

Proof of Part (ii). When si(k) ∈ Ω, we have −η ≤ si(k) ≤ η. We can define a parameter −1 ≤ θ ≤ 1 such that si(k) = θ·η = θ·ξα φ. The following two cases are discussed.

① When 0 ≤ θ ≤ 1, according to (26), we have

(41) si(k+1) = (1−qT)θ·ξα φ − εT θ^α ξα^α φ^α + wiδi.

First, we prove that si(k+1) ≤ η. According to Proposition 8 and Assumption 6, there is

(42) si(k+1) ≤ (1−qT)θ·ξα φ + δ∗(1 − θ^α ξα^α).

If θξα ≥ 1, we have

(43) si(k+1) ≤ (1−qT)θ·ξα φ ≤ ξα φ = η.

If 0 ≤ θξα < 1, we have

(44) si(k+1) ≤ (1−qT)φ(1 + θ·ξα − θ^α ξα^α) ≤ (1−qT)φ ≤ ξα φ = η.

Second, we prove that si(k+1) ≥ −η. According to Proposition 8, we have

(45) si(k+1) ≥ (1−qT)θ·ξα φ − εT θ^α ξα^α φ^α − δ∗ ≥ (1−qT)φ(θξα − θ^α ξα^α − 1).

If θξα ≥ 1, since 0 < α < 1, we have

(46) si(k+1) ≥ −(1−qT)φ ≥ −ξα φ = −η.

If 0 ≤ θξα < 1, we have

(47) si(k+1) ≥ (1−qT)φ(θξα − θ^α ξα^α − 1) ≥ −ξα(1−qT)φ ≥ −ξα φ = −η.

② When −1 ≤ θ ≤ 0, according to (26), we have

(48) si(k+1) = −(1−qT)|θ|·ξα φ + εT |θ|^α ξα^α φ^α + wiδi.

First, we prove that si(k+1) ≤ η.
According to Proposition 8, we have

(49) si(k+1) = −(1−qT)|θ|·ξα φ + εT |θ|^α ξα^α φ^α + wiδi ≤ −(1−qT)|θ|·ξα φ + (1−qT)φ |θ|^α ξα^α + δ∗ ≤ (1−qT)φ(−|θ|·ξα + |θ|^α ξα^α + 1).

If |θ|ξα ≥ 1, since 0 < α < 1, we have

(50) si(k+1) ≤ (1−qT)φ ≤ ξα φ = η.

If 0 ≤ |θ|ξα < 1, according to Lemma 9, we have

(51) si(k+1) ≤ (1−qT)φ ξα ≤ φ ξα = η.

Second, we prove that si(k+1) ≥ −η. According to Proposition 8, we have

(52) si(k+1) = −(1−qT)|θ|·ξα φ + εT |θ|^α ξα^α φ^α + wiδi ≥ −(1−qT)|θ|·ξα φ + |θ|^α ξα^α δ∗ − |wiδi| = −(1−qT)|θ|·ξα φ + δ∗(|θ|^α ξα^α − 1).

If |θ|ξα ≥ 1, we have

(53) si(k+1) ≥ −(1−qT)|θ|·ξα φ ≥ −ξα φ = −η.

If 0 ≤ |θ|ξα < 1, we have

(54) si(k+1) ≥ −(1−qT)φ(|θ|·ξα − |θ|^α ξα^α + 1) ≥ −(1−qT)φ ≥ −ξα φ = −η.

To this end, we conclude that when −η ≤ si(k) ≤ η, we have −η ≤ si(k+1) ≤ η, which means si(k+1) ∈ Ω.

Remark 10. For discrete-time systems, exact arrival of the sliding mode state on the sliding surface is hard to realize, so chattering cannot be totally eliminated in discrete-time systems. This paper uses sig^α(s(k)) in place of the sign function to avoid sharp switching of the control input and thereby suppress chattering. The closer the sliding mode state is to the sliding surface, the lower the chattering obtained. It should be pointed out that the term "chattering-free" here is just a conventional way to describe such chattering suppression methods [32, 34, 35].

### 3.3. Design of Multiple-Model Based Compensator

Traditionally, if time delay information cannot be foreseen by the controller, there can only be two control input compensation strategies, i.e., zero-input or hold-input. In the zero-input strategy, if the controller output u-(k) does not arrive on time, the control input is set to u(k) = 0; however, when time delay happens frequently, this strategy results in very poor dynamic performance. In the hold-input strategy, the control signal remains unchanged until a new control signal arrives, but this may cause serious overshoot when long time delay exists.
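The trade-off between the two traditional strategies can be illustrated on a toy scalar plant. This is a hedged sketch, not from the paper: the plant coefficients `a`, `b`, the gain `K`, and the loss probability `P_LOSS` are all hypothetical.

```python
import random

# Toy scalar plant x(k+1) = a*x(k) + b*u(k) with a stabilizing gain K;
# every number here is hypothetical and only illustrates the trade-off.
a, b, K = 1.05, 1.0, 0.45   # unstable open loop (|a| > 1), stable closed loop
P_LOSS = 0.5                # assumed chance that this period's update is late

def run(strategy, steps=60, seed=1):
    rng = random.Random(seed)
    x, u_held = 1.0, 0.0
    for _ in range(steps):
        u_new = -K * x                      # controller output for this period
        if rng.random() < P_LOSS:           # update did not arrive on time:
            u = 0.0 if strategy == "zero" else u_held  # zero-input vs hold-input
        else:
            u = u_held = u_new
        x = a * x + b * u
    return abs(x)

# Zero-input lets the unstable plant drift whenever updates are missed;
# hold-input keeps applying a stale input that can overcorrect.
print(run("zero"), run("hold"))
```

Neither strategy uses any knowledge of how stale the held signal is, which is exactly the gap the multiple-model compensator below is designed to close.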
With a focus on the above problems, a new compensation strategy is proposed. It also comes from the idea of the hold-input strategy but, compared with the traditional hold-input strategy, it uses different system models to calculate the control input signal for different equivalent time delay conditions. To overcome the problem that the controller cannot foresee the controller-actuator equivalent time delay, all possible models relating to the different equivalent time delays τ(k) are established in the controller. Then, at each control period, instead of outputting one control signal, the controller calculates and outputs a sequence of control signals U-(k) = [u-(k)τ=0, u-(k)τ=1, …, u-(k)τ=τ-+ρ-] based on the different models, and the controller output U-(k) is put in one data packet with a time stamp. Also at each control period, the ZOH outputs a signal sequence packet U(k) based on the time stamp information, which means U(k) = U-(k) for τ(k) = 0, U(k) = U-(k−1) for τ(k) = 1, and so on. Finally, the compensator decides which control signal of U(k) to use according to the time stamp information; i.e., if U(k) = U-(k), it uses the first signal of U(k); if U(k) = U-(k−1), it uses the second signal of U(k); and so on.

In this way, even when no new control signal has been received by the ZOH, the actuator can still get an updated control signal from the compensator and, unlike with the traditional hold-input strategy, the updated control input signal has already taken into account the state changes of the former sampling period. Based on the above discussion, the algorithm of the compensator can be described as

(55) u(k) = U(k)g(τ(k)) = U-(k − τ(k))g(τ(k)),

where the compensation signal choosing function g(x) is defined as

(56) g(x) = [g0, g1, …, gi, …, g(τ-+ρ-)]^T, with gi = 1 if i = x and gi = 0 if i ≠ x.

To show the relationship between actuator input and controller output more clearly, the following illustrations are
given:

(57) u(k) = U-(k)·[1 0 … 0]^T = u-(k)τ=0, for τ = 0; u(k) = U-(k−1)·[0 1 0 … 0]^T = u-(k−1)τ=1, for τ = 1; ⋮; u(k) = U-(k−τ(k))·[0 … 0 1 0 … 0]^T (with the 1 in position τ(k)) = u-(k−τ(k))τ=τ(k), for τ = τ(k); ⋮; u(k) = U-(k−τ-−ρ-)·[0 … 0 1]^T = u-(k−τ-−ρ-)τ=τ-+ρ-, for τ = τ- + ρ-.

In conclusion, the steps of designing this chattering-free sliding mode controller are given below.

Step 1. Obtain the system's time delay and packet dropout characteristics through experiments and then obtain the transition probability matrix (6) based on the proposed modeling method.

Step 2. Obtain the system sliding mode regular form (16a) and (16b) through the linear transformation z = Tx-, where the nonsingular matrix T ∈ R^{n×n} is defined by (14) and (15).

Step 3. Design the classic linear sliding surface (17), whose parameter C = [C1 I] can be determined by the pole placement method.

Step 4. Design the chattering-free reaching law (20) and choose appropriate reaching law parameters q, ε, and α that satisfy 0 < qT < 1, 0 < εT < 1, 0 < α < 1.

Step 5. Design the sliding mode controller (22) according to reaching law (20).

Step 6. Design the multiple-model based compensator (57) based on the upper bounds of long time delay and consecutive packet dropout.

## 4. Simulation Example

In this section, a simulation example is given to illustrate the effectiveness of the proposed method. Consider the following NCS in the form of (1), which comes from a type of aeroengine control system [36]. The system parameters are initialized as follows:

(58) A = [−0.9 0.1 −0.2; −0.7 0.9 −0.5; 0.4 −0.8 0.6], B = [0.2 0.1; 0.1 0.4; 0.3 −0.1], x = [nL nH p3]^T, u = [mf A8]^T,

where nL is the low pressure rotor speed, nH is the high pressure rotor speed, p3 is the compressor exit total pressure, mf is the fuel flow, and A8 is the critical section area of the nozzle. The system sampling period is set as T = 20 ms. The initial system state is x0 = [−0.9 0.5 −0.7]^T.
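As a quick sanity check of the numbers in (58) (assuming NumPy; this is not code from the paper), the pair (A, B) can be tested for the controllability that Assumption 1 relies on:

```python
import numpy as np

# System matrices from the aeroengine example (58).
A = np.array([[-0.9,  0.1, -0.2],
              [-0.7,  0.9, -0.5],
              [ 0.4, -0.8,  0.6]])
B = np.array([[0.2,  0.1],
              [0.1,  0.4],
              [0.3, -0.1]])

# Controllability matrix [B, AB, A^2 B]; full row rank means (A, B) is controllable.
ctrb = np.hstack([B, A @ B, A @ A @ B])
print(np.linalg.matrix_rank(ctrb))  # 3 -> controllable
```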
The disturbance is chosen as

(59) d(k) = Bω(k) = BNζ(k),

where ζ(k) = 1.2 sin(0.5π·k) and N = [0.5 0.3]^T.

The sliding function parameter C = [C1 I] can be designed by the pole placement method; by choosing the pole location of the closed-loop system (16a) as −0.5, it is obtained that C1 = [2.8539 −0.0238]^T, and thus C = [2.8539 1 0; −0.0238 0 1]. The reaching law parameters are chosen as q = 10, ε = 0.5, and α = 0.5. For long time delay and packet dropout, it is defined that τ- = 2 and ρ- = 2, and the probability transition matrix is given below:

(60) Πτ-,ρ- = [0.7 0.3 0 0 0; 0.6 0.2 0.2 0 0; 0.4 0.2 0.2 0.2 0; 0.3 0.2 0.2 0 0.3; 0.4 0.4 0.2 0 0].

Based on the matrix Πτ-,ρ-, a Markov-chain-generated equivalent time delay sequence is shown in Figure 2, which simulates the actual time delay and packet dropout condition of a networked control system during 100 sampling periods.

Figure 2: Distribution of equivalent time delay.

It can be seen from Figure 2 that τ(k) = 3 occurs four times and τ(k) = 4 does not occur. Accordingly, due to the use of the ZOH and the application of the proposed modeling method, consecutive packet dropout does not really affect the system in this case even though it does exist. The effect of consecutive packet dropout may become noticeable if its upper bound is larger than two, but the proposed method still greatly suppresses its effect.

Simulation results obtained using the proposed chattering-free SMC are shown in Figures 3–5. To show the superiority of the proposed controller in chattering suppression, another controller is applied, namely, the classic reaching law based SMC proposed by Gao [37]; the same compensation strategy is also used in this controller. The results obtained are shown in Figures 6–8. The comparison of Figure 3 with Figure 6 reveals that the system states converge in less time with lower chattering for the proposed controller than for the classic reaching law based SMC.
The comparison of Figure 4 with Figure 7 and of Figure 5 with Figure 8 leads to the same conclusion for the control input and the sliding function.

Figure 3: System state trajectory with the proposed controller. Figure 4: Control input trajectory with the proposed controller. Figure 5: Sliding function trajectory with the proposed controller. Figure 6: System state trajectory with the classic reaching law based SMC. Figure 7: Control input trajectory with the classic reaching law based SMC. Figure 8: Sliding function trajectory with the classic reaching law based SMC.

Moreover, to verify the effectiveness of the proposed multiple-model based compensator, another controller is constructed that has the same chattering-free reaching law as the proposed controller but no compensator; instead, the traditional hold-input compensation strategy is used. First, the condition τ- = 2, ρ- = 2 is considered, and the response of system state x1 with the two controllers is shown in Figure 9, where controller 1 is the proposed controller and controller 2 is the proposed controller without the compensator. There is no essential difference between the response curves of x1 with the two controllers, because the upper bounds of time delay and packet dropout are small and the hold-input strategy is sufficient for compensation. Figures 10 and 11 then depict the response curves of x1 under the conditions τ- = 3, ρ- = 3 and τ- = 4, ρ- = 4, respectively. It is obvious that the performance of controller 2 (the proposed controller without the compensator) deteriorates sharply as the upper bounds of time delay and packet dropout grow, while the proposed controller suffers little from the changes.

Figure 9: Trajectory of state x1 with two controllers under the condition τ- = 2, ρ- = 2. Figure 10: Trajectory of state x1 with two controllers under the condition τ- = 3, ρ- = 3. Figure 11: Trajectory of state x1 with two controllers under the condition τ- = 4, ρ- = 4.

## 5.
Conclusion

This paper proposes a chattering-free sliding mode controller with a multiple-model based delay compensator for NCSs with long time delay and consecutive packet dropout. By proposing a new modeling method, long time delay and consecutive packet dropout are modeled in a unified model described by one Markov chain, which not only simplifies the system model but also makes the designed controller more suitable for practical use. To the best of the authors' knowledge, this modeling method for time delay and packet dropout has not been reported in the existing literature, and it is the use of this model that makes the proposed compensator function well. A chattering-free sliding mode reaching law is then proposed, and it is also the first time that such a chattering-free reaching law has been used for multiple-input discrete-time systems. To overcome the problem that the controller-actuator channel network condition cannot be foreseen by the controller, a new compensation strategy is proposed and a multiple-model based compensator is constructed. Finally, a simulation example shows that the proposed method can effectively overcome the effects of time delay, packet dropout, and disturbance and can make the system states converge to the origin quickly without noticeable chattering. Therefore, the proposed method is a suitable choice for uncertain NCSs with long time delay and consecutive packet dropout. In future research, we will test the proposed method on a practical system so that the proposed theory can be made more complete.

---

*Source: 1016381-2019-07-11.xml*
2019
# Identifying Political Risk Management Strategies in International Construction Projects **Authors:** Tengyuan Chang; Bon-Gang Hwang; Xiaopeng Deng; Xianbo Zhao **Journal:** Advances in Civil Engineering (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1016384 --- ## Abstract International construction projects are plagued with political risk, and international construction enterprises (ICEs) must manage this risk to survive. However, little attention has been devoted to political risk management strategies in international construction projects. To fill this research gap, a total of 27 possible strategies were identified through a comprehensive literature review and validated by a pilot survey with 10 international experts. Appraisals of these 27 strategies by relevant professionals were collected using questionnaires, 155 of which were returned. Exploratory factor analysis was conducted to explore the interrelationships among these 27 strategies. The results show that all of the 27 strategies are important for political risk management in international construction projects. Moreover, these 27 strategies were clustered into six components, namely, (1) making correct decisions, (2) conducting favorable negotiations, (3) completing full preparations, (4) shaping a good environment, (5) reducing unnecessary mistakes, and (6) obtaining a reasonable response. The 6 components can be regarded as 6 typical management techniques that contribute to political risk management in the preproject phase, project implementation phase, and postevent phases. The findings may help practitioners gain an in-depth understanding of political risk management strategies in international construction projects and provide a useful reference for ICEs to manage political risks when venturing outside their home countries. --- ## Body ## 1. 
Introduction

With the rapid development of economic globalization, the global construction market has thrived in the past decade [1]. Moreover, the large market for construction in Asia, Africa, and Latin America will create widespread prosperity and opportunities for international construction enterprises (ICEs). By taking advantage of these opportunities, increasing numbers of international contractors will expand into the international construction market [2].

However, opportunities are always accompanied by risks, and ICEs will be exposed to new risks when venturing outside their home countries [3, 4]. ICEs have witnessed a dramatic increase in political risks around the world, such as the credit crises in Greece, Venezuela, and Congo; the wars in southern Sudan, Syria, Afghanistan, and Libya; the terrorist attacks in Europe, the Middle East, Central Asia, and South Asia; and the coups in Niger, Thailand, and Honduras [2, 4, 5]. These risks had a very large negative impact on the global market and resulted in great losses for ICEs.

Given the increasingly complex business environment, political risks should not be ignored by ICEs when they approach global markets [2, 6, 7]. Political risk in international construction projects refers to uncertainty related to political events (e.g., political violence, regime changes, coups, revolutions, breaches of contract, terrorist attacks, and wars) and to arbitrary or discriminatory actions (e.g., expropriation, unfair compensation, foreign exchange restrictions, unlawful interference, capital restrictions, corruption, and labor restrictions) by host governments or political groups that may have negative impacts on ICEs [6].
Compared with the nonsystematic risks (e.g., technical risk, quality risk, procurement risk, and financial risk) of construction projects, political risk is more complex, unpredictable, and devastating and is usually outside the scope of normal project activities [2].

Much of the extant literature has focused on political risks in international general business [8, 9] but has paid less attention to political risks in international construction projects. In most cases, political risk management is practiced only as a part of risk management at the project level in construction projects. However, project-level political risks can also affect enterprises' objectives (e.g., financial, reputation, stability, survival, development, and strategic decisions) [10]. Implementation of political risk management only at the project level has some drawbacks: (1) lack of a comprehensive understanding of political risks; (2) overemphasis on short-term project goals and less consideration of corporate strategic objectives; (3) constraints because of limited resources or inappropriate resource allocation among projects; and (4) lack of accumulation and sharing of risk management experience. Therefore, risk management only at the project level no longer seems to be sufficient to help ICEs to address political risks in the global market [11].

Hence, political risk management in international construction projects should be conducted jointly at both the project and firm levels by considering the various types of risk and linking risk management strategies to the enterprise's objectives. Successful political risk management should be based on sufficient resources and information as important components of the decision-making process are continually improved and enhanced [12]. At the firm level, political risks can be treated as part of the entire risk portfolio of an enterprise and can be addressed across multiple business areas [13, 14].
Implementation of political risk management at the firm level can lead to better coordination and consolidation of the resources and goals of the enterprise, which is more conducive to the long-term stability and development of ICEs [15].

This study focuses on the political risk inherent in international construction projects and aims at identifying the strategies available for ICEs to manage political risk. The specific objectives of this study are to (1) identify possible risk management strategies that can address political risks in international construction projects, (2) evaluate the importance of the strategies, and (3) explore the interrelationships among those strategies and their practical applications in international construction projects.

Because less attention has been devoted to political risk management in international construction projects, this paper can enrich the understanding of risk management in the field of international construction. Furthermore, this study may help practitioners clearly understand political risk management strategies in international construction projects and provide guidance for ICEs regarding how to address political risk when venturing outside their home countries.

## 2. Literature Review

Political risk management has been a popular topic in the field of international business (e.g., foreign direct investment, trade in goods, and international joint ventures).
Several strategies have been proposed to address political risks, such as investing only in safe environments, increasing the required return, adapting to particular business environment conditions, sharing risks with other firms [8], improving relative bargaining power [9], transferring risks to an insurance company [16], reducing vulnerabilities [17], spreading risk by investing in several countries, enhancing core competitiveness [18], and implementing localization and involvement strategies [1, 19].

Previous studies regarding risk management in construction projects have covered a wide variety of areas, such as overall risk management [20], safety risk management [21], financial risk management [22], quality risk management [23], risk assessment [24], advanced technology-driven risk management [25], and risk management in public-private partnerships [26, 27]. However, less attention has been devoted to political risk management in international construction projects. In some studies, political risk was mentioned only as a subset of external risks [28, 29].

Several studies have been conducted to identify political risk events [2, 30] and political risk factors [5, 6, 31] in international construction projects. Although these studies can help international contractors gain a better understanding of political risks in international construction projects, they provide less guidance regarding how to manage political risk. It is obvious that knowledge and experiences associated with political risk management should be extended to the international construction business by considering overall political risk management strategies throughout the life of projects and the interrelation between the project and firm levels.

## 3. Methods

### 3.1.
Strategy Identification and Survey

Based on an overview of the literature on political risk and risk management, a total of 27 possible risk management strategies were identified and coded as S01 to S27 (Table 1).

Table 1: Political risk management strategies.

| Strategy | [8] | [3] | [32] | [5] | [52] | [33] | [2] | [8] | [34] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S01: making a higher tender offer | X | — | — | — | — | — | — | X | — |
| S02: conducting market research | X | X | — | X | X | — | X | X | — |
| S03: buying risk insurance | X | — | — | X | X | — | X | — | X |
| S04: adopting optimal contracts | — | — | X | — | — | X | — | — | X |
| S05: implementing a localization strategy | X | X | — | X | — | — | X | — | X |
| S06: avoiding misconduct | X | X | — | X | — | — | X | — | X |
| S07: adopting closed management of the construction site | X | — | — | X | X | — | X | X | — |
| S08: supporting environmental protection | X | — | — | — | — | — | — | — | — |
| S09: abiding by the traditional local culture | X | X | — | X | — | — | — | — | — |
| S10: making contingency plans | X | — | — | — | X | — | X | X | X |
| S11: obtaining the corresponding guarantee | — | — | — | X | — | X | X | — | X |
| S12: implementing an emergency plan | X | — | — | — | X | — | X | X | X |
| S13: forming joint ventures with local contractors | X | — | X | X | — | — | — | — | X |
| S14: conducting a postresponse assessment | X | — | — | — | X | — | X | — | — |
| S15: sending staff to training programs | X | — | — | — | — | — | — | — | X |
| S16: settling disputes through renegotiation | — | — | X | X | — | — | — | — | X |
| S17: choosing suitable projects | — | — | — | X | — | X | — | — | X |
| S18: building proper relations with host governments | — | X | X | X | — | — | X | — | — |
| S19: maintaining good relations with powerful groups | — | X | X | X | — | X | X | — | X |
| S20: creating links with local business | X | X | X | X | — | X | X | — | X |
| S21: changing the operation strategies | — | — | X | — | — | X | X | X | — |
| S22: controlling core and critical technology | — | X | — | — | — | X | — | X | X |
| S23: choosing a suitable entry mode | X | — | — | X | — | X | X | X | X |
| S24: employing capable local partners | X | X | X | X | — | X | — | — | X |
| S25: building up reputation | X | — | — | X | — | — | X | — | — |
| S26: allocating extra funds | — | — | X | — | X | — | X | — | — |
| S27: maintaining good relations with the public | — | X | X | — | — | X | X | — | X |

A pilot survey was performed with 10 experts to verify the comprehensiveness of the preliminary strategies.
These experts included (1) four professors (one each from Australia, Hong Kong, Singapore, and South Africa) engaged in research on business management and risk management, (2) two professors (one each from the United States and China) engaged in research on project management, and (3) four senior managers (one each from China Communications Construction Group Limited, Power Construction Corporation of China, China State Construction Engineering Corporation, and China Railway Group Limited). All 10 experts had more than 20 years of work experience in their fields. Following their suggestions, no strategies were added or deleted; instead, descriptions of some strategies were added to ensure accurate understanding and to avoid ambiguity. For example, "S01, making a higher tender offer" refers to the premium for the retained political risks of an ICE.

The pilot survey was used to develop a structured questionnaire that comprised three sections: (1) a brief introduction of political risk and a description of some strategies; (2) questions to profile the company, work experience, title, and location of the respondents; and (3) questions to evaluate the importance of the 27 strategies using a five-point Likert scale where 5 = very high, 4 = high, 3 = medium, 2 = low, and 1 = very low.

Then, a list of selected experts was developed; it included (1) 300 international academics who focus on related studies, whose personal information was collected from their publications, and (2) 500 practitioners with extensive experience in international project management, drawn from 50 Chinese construction enterprises selected from the 2016 top 250 international contractors according to *Engineering News-Record* (ENR).
The contact information of the 500 practitioners was collected from the Chinese construction management research sector, alumni associations, and the websites of their enterprises.

From March to May 2017, the questionnaire was disseminated to these experts, and a total of 158 responses were returned, of which three were incomplete or inappropriately filled out. The 155 valid responses represent a response rate of 19%. As indicated in Table 2, among the 155 respondents, 56 were from academia and 99 were from industry. All the respondents had over 5 years' work experience, and 52% had over 10 years' work experience in industry or academia. Of the 56 academics, 28 were from China (including Hong Kong and Macao), and the other 28 were from overseas. Among the 99 practitioners, 38, 26, 9, 5, 8, and 5 were from the divisions of Chinese construction enterprises in Asia (not including China), Africa, Europe, North America, South America, and Australia, respectively. Moreover, all practitioners had experienced political risk in the overseas construction market.

Table 2: Profile of the respondents.

| Characteristic | Categorization | Academia (N=56) N | % | Practitioner (N=99) N | % | Overall (N=155) N | % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Work experience | Over 20 years | 8 | 14 | 10 | 10 | 18 | 12 |
| | 16–20 years | 15 | 27 | 12 | 12 | 27 | 17 |
| | 11–15 years | 17 | 30 | 28 | 28 | 45 | 29 |
| | 5–10 years | 16 | 29 | 49 | 49 | 65 | 42 |
| Title | Professor | 22 | 39 | — | — | 22 | 14 |
| | Associate professor | 19 | 34 | — | — | 19 | 12 |
| | Assistant professor/lecturer | 15 | 27 | — | — | 15 | 10 |
| | Senior manager | — | — | 29 | 29 | 29 | 19 |
| | Department manager | — | — | 28 | 28 | 28 | 18 |
| | Project manager | — | — | 42 | 42 | 42 | 27 |
| Location | China | 28 | 50 | 8 | 8 | 36 | 23 |
| | Asia (excl. China) | 14 | 25 | 38 | 38 | 52 | 34 |
| | Africa | 2 | 4 | 26 | 26 | 28 | 18 |
| | Europe | 5 | 9 | 9 | 9 | 14 | 9 |
| | North America | 4 | 7 | 5 | 5 | 9 | 6 |
| | South America | 0 | 0 | 8 | 8 | 8 | 5 |
| | Australia | 3 | 5 | 5 | 5 | 8 | 5 |

### 3.2. Exploratory Factor Analysis

The exploratory factor analysis has proven to be very useful for identifying the potential relationships between several sets of data and has frequently been employed in studies related to construction management [11, 31, 35].
The exploratory factor analysis is often used to create theories in a new research area, such as components, correlations, and relative weightings of a list of variables [36].

A sample with 5-point Likert scale data used in exploratory factor analysis should meet two conditions: (1) the size of the valid sample must be greater than 100 or five times the number of items [37] and (2) the data for the sample must satisfy the recommended alpha reliability test, Bartlett's test of sphericity, and the Kaiser–Meyer–Olkin (KMO) test of sampling adequacy [31].

In this study, the number of valid questionnaires is 155, Cronbach's alpha coefficient is 0.932 (>0.700, F statistic = 17.382, significance level = 0.000), the KMO index is 0.878 (≥0.500), and Bartlett's test of sphericity (χ2 = 1497.243, df = 205, significance level = 0.000) is significant (p<0.050), indicating that these data are suitable for the exploratory factor analysis [38–40]. The factor analysis of the 27 political risk management strategies was performed using the principal component analysis and varimax rotation methods implemented in SPSS 22.0 software. The number of components was determined by successively using latent root criteria (eigenvalues > 1.000) [41]. As suggested by Malhotra, the cumulative variance of the produced components should be greater than 60.000%. To increase the correlation between strategies and components, the qualified strategies in each component should have a factor loading ≥0.500 [42]. The internal consistency of the components should satisfy two conditions: Cronbach's alpha of each component ≥0.700 [38] and the item-to-total correlation of each retained measure ≥0.400 [43].

## 4. Results

### 4.1. Results of the Questionnaire Survey

Table 3 presents the evaluation results of the 27 political risk management strategies. The average values of the 27 strategies range from 3.27 (S21, changing the operation strategies) to 4.40 (S17, choosing suitable projects).
All of them were significantly greater than 3 at the p=0.05 level (two tailed) in the one-sample t-test, indicating that the 27 strategies had significant importance in managing political risk in international construction projects. The five most important strategies were (1) choosing suitable projects (S17, average value 4.40), (2) building proper relations with host governments (S18, average value 4.31), (3) conducting market research (S02, average value 4.29), (4) avoiding misconduct (S06, average value 4.26), and (5) choosing a suitable entry mode (S23, average value 4.22). The p values of the 27 strategies were greater than 0.05 in the independent-sample t-test. Therefore, there were no significant differences in the average values of the strategies between scholars and practitioners.

Table 3: Ranking of the political risk management strategies.

| Strategy | Academia mean | Academia rank | Industry mean | Industry rank | p value | Overall mean | Overall rank | Overall p value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S01: making a higher tender offer | 3.99 | 11 | 4.00 | 11 | 0.906 | 4.00 | 11 | <0.001a |
| S02: conducting market research | 4.45 | 3 | 4.20 | 5 | 0.282 | 4.29 | 3 | <0.001a |
| S03: buying risk insurance | 4.03 | 10 | 4.06 | 9 | 0.697 | 4.05 | 10 | <0.001a |
| S04: adopting optimal contracts | 4.12 | 6 | 4.23 | 3 | 0.261 | 4.19 | 6 | <0.001a |
| S05: implementing a localization strategy | 4.42 | 4 | 4.06 | 10 | 0.197 | 4.19 | 7 | <0.001a |
| S06: avoiding misconduct | 4.45 | 2 | 4.15 | 6 | 0.089 | 4.26 | 4 | <0.001a |
| S07: adopting closed management of the construction site | 3.41 | 22 | 3.85 | 12 | 0.537 | 3.69 | 18 | <0.001a |
| S08: supporting environmental protection | 3.75 | 14 | 3.82 | 15 | 0.831 | 3.80 | 14 | <0.001a |
| S09: abiding by the traditional local culture | 3.90 | 13 | 3.81 | 16 | 0.863 | 3.84 | 13 | <0.001a |
| S10: making contingency plans | 4.10 | 9 | 4.09 | 8 | 0.606 | 4.09 | 9 | <0.001a |
| S11: obtaining the corresponding guarantee | 3.98 | 12 | 4.22 | 4 | 0.401 | 4.14 | 8 | <0.001a |
| S12: implementing an emergency plan | 3.18 | 26 | 3.55 | 23 | 0.244 | 3.42 | 24 | <0.001a |
| S13: forming joint ventures with local contractors | 3.58 | 16 | 3.77 | 18 | 0.182 | 3.70 | 17 | <0.001a |
| S14: conducting a postresponse assessment | 3.16 | 27 | 3.41 | 26 | 0.537 | 3.32 | 26 | <0.001a |
| S15: sending staff to training programs | 3.43 | 20 | 3.62 | 21 | 0.628 | 3.55 | 21 | <0.001a |
| S16: settling disputes through renegotiation | 3.28 | 25 | 3.43 | 25 | 0.617 | 3.38 | 25 | <0.001a |
| S17: choosing suitable projects | 4.54 | 1 | 4.32 | 2 | 0.439 | 4.40 | 1 | <0.001a |
| S18: building proper relations with host governments | 4.12 | 7 | 4.42 | 1 | 0.223 | 4.31 | 2 | <0.001a |
| S19: maintaining good relations with powerful groups | 3.52 | 17 | 3.85 | 13 | 0.537 | 3.73 | 15 | <0.001a |
| S20: creating links with local business | 3.43 | 21 | 3.70 | 19 | 0.377 | 3.60 | 20 | <0.001a |
| S21: changing the operation strategies | 3.36 | 24 | 3.22 | 27 | 0.236 | 3.27 | 27 | <0.001a |
| S22: controlling core and critical technology | 4.10 | 8 | 3.80 | 17 | 0.301 | 3.91 | 12 | <0.001a |
| S23: choosing a suitable entry mode | 4.37 | 5 | 4.14 | 7 | 0.439 | 4.22 | 5 | <0.001a |
| S24: employing capable local partners | 3.66 | 15 | 3.66 | 20 | 0.725 | 3.66 | 19 | <0.001a |
| S25: building up reputation | 3.47 | 19 | 3.47 | 24 | 0.912 | 3.47 | 23 | <0.001a |
| S26: allocating extra funds | 3.39 | 23 | 3.59 | 22 | 0.275 | 3.52 | 22 | <0.001a |
| S27: maintaining good relations with the public | 3.49 | 18 | 3.83 | 14 | 0.137 | 3.71 | 16 | <0.001a |

Note. aOne-sample t-test result is significant (test value = 3) at the p=0.05 significance level (two tailed).

### 4.2. Results of Exploratory Factor Analysis

As illustrated in Table 4, a total of six components with eigenvalues greater than 1.000 were extracted. The cumulative variance of the six components was 68.208%, thus exceeding 60.000%. The 27 strategies were divided into the six components according to their loadings of more than 0.500 on each component. Although the loading of strategy "S13, forming joint ventures with local contractors" on the first component was 0.521, it was removed from subsequent analyses due to the low values of its communality (0.399 < 0.500) and item-to-total correlation (0.301 < 0.400). After the adjustment, Cronbach's alpha coefficient of the first component and the communalities of the remaining six strategies in the first component increased.
Cronbach's alpha coefficients of the six components ranged from 0.743 to 0.857, and the item-to-total correlations of the remaining 26 strategies ranged from 0.478 to 0.653; thus, the model is reliable.

Table 4: Results of the exploratory factor analysis.

| Strategy | Communality | Item-to-total correlation | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S23 | 0.640 | 0.590 | 0.676 | — | — | — | — | — |
| S05 | 0.530 | 0.490 | 0.653 | — | — | — | — | — |
| S17 | 0.575 | 0.572 | 0.617 | — | — | — | — | — |
| S22 | 0.632 | 0.585 | 0.598 | — | — | — | — | — |
| S18 | 0.565 | 0.482 | 0.541 | — | — | — | — | — |
| S02 | 0.652 | 0.511 | 0.509 | — | — | — | — | — |
| S06 | 0.730 | 0.579 | — | 0.683 | — | — | — | — |
| S07 | 0.648 | 0.497 | — | 0.667 | — | — | — | — |
| S24 | 0.667 | 0.612 | — | 0.617 | — | — | — | — |
| S08 | 0.543 | 0.515 | — | 0.595 | — | — | — | — |
| S09 | 0.621 | 0.611 | — | 0.509 | — | — | — | — |
| S15 | 0.742 | 0.592 | — | — | 0.739 | — | — | — |
| S26 | 0.679 | 0.516 | — | — | 0.670 | — | — | — |
| S03 | 0.512 | 0.527 | — | — | 0.525 | — | — | — |
| S10 | 0.479 | 0.542 | — | — | 0.507 | — | — | — |
| S27 | 0.532 | 0.621 | — | — | — | 0.682 | — | — |
| S25 | 0.697 | 0.629 | — | — | — | 0.672 | — | — |
| S20 | 0.632 | 0.637 | — | — | — | 0.586 | — | — |
| S19 | 0.581 | 0.589 | — | — | — | 0.547 | — | — |
| S04 | 0.561 | 0.478 | — | — | — | — | 0.663 | — |
| S01 | 0.548 | 0.551 | — | — | — | — | 0.567 | — |
| S11 | 0.629 | 0.557 | — | — | — | — | 0.550 | — |
| S16 | 0.611 | 0.538 | — | — | — | — | — | 0.631 |
| S12 | 0.710 | 0.539 | — | — | — | — | — | 0.618 |
| S21 | 0.579 | 0.571 | — | — | — | — | — | 0.522 |
| S14 | 0.576 | 0.653 | — | — | — | — | — | 0.502 |
| Cronbach's alpha | — | — | 0.857 | 0.832 | 0.811 | 0.767 | 0.743 | 0.758 |
| Eigenvalues | — | — | 6.483 | 2.867 | 2.310 | 1.451 | 1.162 | 1.100 |
| Variance (%) | — | — | 15.233 | 13.587 | 11.155 | 10.817 | 9.265 | 8.150 |
| Cumulative variance (%) | — | — | 15.233 | 28.820 | 39.975 | 50.793 | 60.058 | 68.208 |

Note. Only loadings of 0.500 or above are shown. Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization. Rotation converged in 10 iterations.

### 4.3. Results of the Validity Test

The Pearson correlation analysis (2 tailed) was applied to check the validity of the results of the exploratory factor analysis. The strategies clustered into a component should be significantly correlated [44]. The results revealed that, for each component, all the strategies were correlated with the others, and thus the strategies can explain political risk management in that dimension.
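As an illustration of this pairwise check, the sketch below (a minimal example, not the authors' code; the strategy labels and Likert ratings are hypothetical) tests whether every pair of items grouped into one component is significantly correlated, using SciPy's two-tailed Pearson test:

```python
# Pairwise Pearson validity check for one component (synthetic data).
from itertools import combinations

from scipy.stats import pearsonr

# Hypothetical 5-point Likert ratings (one list per strategy, rows = respondents)
# for the six strategies of component one (S23, S05, S17, S22, S18, S02).
ratings = {
    "S23": [5, 4, 4, 5, 3, 4, 5, 4],
    "S05": [4, 4, 3, 5, 3, 4, 5, 4],
    "S17": [5, 5, 4, 5, 4, 4, 5, 5],
    "S22": [4, 3, 4, 5, 3, 3, 5, 4],
    "S18": [5, 4, 4, 5, 3, 4, 4, 4],
    "S02": [4, 4, 4, 5, 3, 4, 5, 4],
}

def component_is_valid(items, alpha=0.05):
    """True if every pair of items is significantly correlated (two tailed)."""
    for a, b in combinations(items, 2):
        r, p = pearsonr(items[a], items[b])  # Pearson r and two-tailed p value
        if p >= alpha:
            return False
    return True

print(component_is_valid(ratings))
```

A component that fails this check would contain at least one uncorrelated pair, suggesting its items do not measure a common dimension.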
Due to space limitations, only the correlations between strategies in the first component are presented in Table 5.Table 5 Pearson correlations for component one. S23 S05 S17 S22 S18 S02 S23 1.000 0.602b 0.477b 0.467b 0.564b 0.581b S05 — 1.000 0.388a 0.491b 0.413b 0.432b S17 — — 1.000 0.306b 0.625b 0.589b S22 — — — 1.000 0.427a 0.371b S18 — — — — 1.000 0.689b S02 — — — — — 1.000 aCorrelation is significant at the p=0.05 level (2 tailed). bCorrelation is significant at the p=0.01 level (2 tailed). ## 4.1. Results of the Questionnaire Survey Table3 presents the evaluation results of the 27 political management strategies. The average values of the 27 strategies range from 3.27 (S21, changing the strategies) to 4.40 (S17, choosing suitable projects). All of them were significantly greater than 3 at the p=0.05 level (two tailed) in the one-sample t-test, indicating that the 27 strategies had significant importance in managing political risk in international construction projects. The five most important strategies were (1) choosing suitable projects (S17, average value 4.40), (2) building proper relations with host governments (S18, average value 4.30), (3) conducting market research (S02, average value 4.29), (4) avoiding misconduct (S06, average value 4.26), and (5) choosing a suitable entry mode (S23, average value 4.22). The p values of the 27 strategies were greater than 0.05 in the independent-sample t-test. Therefore, there were no significant differences in the average values of the strategies between scholars and practitioners.Table 3 Ranking of the political risk management strategies. 
Strategy Academia Industry p value Overall Mean Rank Mean Rank Mean Rank p value S01: making a higher tender offer 3.99 11 4.00 11 0.906 4.00 11 <0.001a S02: conducting market research 4.45 3 4.20 5 0.282 4.29 3 <0.001a S03: buying risk insurance 4.03 10 4.06 9 0.697 4.05 10 <0.001a S04: adopting optimal contracts 4.12 6 4.23 3 0.261 4.19 6 <0.001a S05: implementing a localization strategy 4.42 4 4.06 10 0.197 4.19 7 <0.001a S06: avoiding misconduct 4.45 2 4.15 6 0.089 4.26 4 <0.001a S07: adopting closed management of the construction site 3.41 22 3.85 12 0.537 3.69 18 <0.001a S08: supporting environmental protection 3.75 14 3.82 15 0.831 3.80 14 <0.001a S09: abiding by the traditional local culture 3.90 13 3.81 16 0.863 3.84 13 <0.001a S10: making contingency plans 4.10 9 4.09 8 0.606 4.09 9 <0.001a S11: obtaining the corresponding guarantee 3.98 12 4.22 4 0.401 4.14 8 <0.001a S12: implementing an emergency plan 3.18 26 3.55 23 0.244 3.42 24 <0.001a S13: forming joint ventures with local contractors 3.58 16 3.77 18 0.182 3.70 17 <0.001a S14: conducting a postresponse assessment 3.16 27 3.41 26 0.537 3.32 26 <0.001a S15: sending staff to training programs 3.43 20 3.62 21 0.628 3.55 21 <0.001a S16: settling disputes through renegotiation 3.28 25 3.43 25 0.617 3.38 25 <0.001a S17: choosing suitable projects 4.54 1 4.32 2 0.439 4.40 1 <0.001a S18: building proper relations with host governments 4.12 7 4.42 1 0.223 4.31 2 <0.001a S19: maintaining good relations with powerful groups 3.52 17 3.85 13 0.537 3.73 15 <0.001a S20: creating links with local business 3.43 21 3.70 19 0.377 3.60 20 <0.001a S21: changing the operation strategies 3.36 24 3.22 27 0.236 3.27 27 <0.001a S22: controlling core and critical technology 4.10 8 3.80 17 0.301 3.91 12 <0.001a S23: choosing a suitable entry mode 4.37 5 4.14 7 0.439 4.22 5 <0.001a S24: employing capable local partners 3.66 15 3.66 20 0.725 3.66 19 <0.001a S25: building up reputation 3.47 19 3.47 24 0.912 3.47 23 <0.001a S26: 
allocating extra funds 3.39 23 3.59 22 0.275 3.52 22 <0.001a S27: maintaining good relations with the public 3.49 18 3.83 14 0.137 3.71 16 <0.001a Note. aOne-sample t-test result is significant (test value = 3) at the p=0.05 significance level (two tailed). ## 4.2. Results of Exploratory Factor Analysis As illustrated in Table4, a total of six components with eigenvalues greater than 1.000 were explored. The cumulative variance of the six components was 61.208%, thus exceeding 60.000%. The 27 strategies were divided into the six components according to their loading on each component of more than 0.500. Although the loading of strategy “S13 forming joint venture with local contractors” in the first component was 0.521, it was still removed from subsequent analyses due to the low value of its communality (0.399 < 0.500) and item-to-total correlations (0.301 < 0.400). After the adjustment, Cronbach’s alpha coefficient of the first component and the communalities of the remaining six strategies in the first component increased. Cronbach’s alpha coefficients of the six components ranged from 0.743 to 0.857, and the item-to-total correlations of the remaining 26 strategies ranged from 0.478 to 0.653; thus, the model is reliable.Table 4 Results of the exploratory factor analysis. 
| Strategy | Communality | Item-to-total correlation | Component 1 | Component 2 | Component 3 | Component 4 | Component 5 | Component 6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S23 | 0.640 | 0.590 | 0.676 | — | — | — | — | — |
| S05 | 0.530 | 0.490 | 0.653 | — | — | — | — | — |
| S17 | 0.575 | 0.572 | 0.617 | — | — | — | — | — |
| S22 | 0.632 | 0.585 | 0.598 | — | — | — | — | — |
| S18 | 0.565 | 0.482 | 0.541 | — | — | — | — | — |
| S02 | 0.652 | 0.511 | 0.509 | — | — | — | — | — |
| S06 | 0.730 | 0.579 | — | 0.683 | — | — | — | — |
| S07 | 0.648 | 0.497 | — | 0.667 | — | — | — | — |
| S24 | 0.667 | 0.612 | — | 0.617 | — | — | — | — |
| S08 | 0.543 | 0.515 | — | 0.595 | — | — | — | — |
| S09 | 0.621 | 0.611 | — | 0.509 | — | — | — | — |
| S15 | 0.742 | 0.592 | — | — | 0.739 | — | — | — |
| S26 | 0.679 | 0.516 | — | — | 0.670 | — | — | — |
| S03 | 0.512 | 0.527 | — | — | 0.525 | — | — | — |
| S10 | 0.479 | 0.542 | — | — | 0.507 | — | — | — |
| S27 | 0.532 | 0.621 | — | — | — | 0.682 | — | — |
| S25 | 0.697 | 0.629 | — | — | — | 0.672 | — | — |
| S20 | 0.632 | 0.637 | — | — | — | 0.586 | — | — |
| S19 | 0.581 | 0.589 | — | — | — | 0.547 | — | — |
| S04 | 0.561 | 0.478 | — | — | — | — | 0.663 | — |
| S01 | 0.548 | 0.551 | — | — | — | — | 0.567 | — |
| S11 | 0.629 | 0.557 | — | — | — | — | 0.550 | — |
| S16 | 0.611 | 0.538 | — | — | — | — | — | 0.631 |
| S12 | 0.710 | 0.539 | — | — | — | — | — | 0.618 |
| S21 | 0.579 | 0.571 | — | — | — | — | — | 0.522 |
| S14 | 0.576 | 0.653 | — | — | — | — | — | 0.502 |
| Cronbach's alpha | — | — | 0.857 | 0.832 | 0.811 | 0.767 | 0.743 | 0.758 |
| Eigenvalue | — | — | 6.483 | 2.867 | 2.310 | 1.451 | 1.162 | 1.100 |
| Variance (%) | — | — | 15.233 | 13.587 | 11.155 | 10.817 | 9.265 | 8.150 |
| Cumulative variance (%) | — | — | 15.233 | 28.820 | 39.975 | 50.793 | 60.058 | 68.208 |

Note. Only loadings of 0.500 or above are shown. Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization. Rotation converged in 10 iterations.

## 4.3. Results of the Validity Test

The Pearson correlation analysis (two tailed) was applied to check the validity of the results of the exploratory factor analysis. The strategies clustered into a component should be significantly correlated [44]. The results revealed that for each component all the strategies were correlated with the others, and thus, the strategies can explain political risk management in that dimension. Due to space limitations, only the correlations between strategies in the first component are presented in Table 5.

Table 5 Pearson correlations for component one.
|  | S23 | S05 | S17 | S22 | S18 | S02 |
| --- | --- | --- | --- | --- | --- | --- |
| S23 | 1.000 | 0.602^b | 0.477^b | 0.467^b | 0.564^b | 0.581^b |
| S05 | — | 1.000 | 0.388^a | 0.491^b | 0.413^b | 0.432^b |
| S17 | — | — | 1.000 | 0.306^b | 0.625^b | 0.589^b |
| S22 | — | — | — | 1.000 | 0.427^a | 0.371^b |
| S18 | — | — | — | — | 1.000 | 0.689^b |
| S02 | — | — | — | — | — | 1.000 |

Note. ^a Correlation is significant at the p = 0.05 level (two tailed). ^b Correlation is significant at the p = 0.01 level (two tailed).

## 5. Discussion

### 5.1. Connotation of the Components

The connotation of each component is determined by the common features of the remaining measures it contains. On the basis of project management, risk management, and strategic management theories, the six components were named as follows: (1) making correct decisions (C1), (2) reducing unnecessary mistakes (C2), (3) completing full preparations (C3), (4) shaping a good environment (C4), (5) conducting favorable negotiations (C5), and (6) obtaining a reasonable response (C6).

As shown in Figure 1, the six components may be divided into two dimensions of political risk management. Making correct decisions (C1), reducing unnecessary mistakes (C2), and obtaining a reasonable response (C6) are the components related to the reduction of risk exposure. When an ICE has a lower risk exposure, it has a lower risk level. In contrast, completing full preparations (C3), shaping a good environment (C4), and conducting favorable negotiations (C5) are the components associated with the promotion of risk response capability. A higher risk response capability indicates that an ICE has higher viability in an uncertain environment and is less likely to suffer damage arising from political risk.
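The statistics used in the analyses above (the one-sample t-test against the neutral test value 3 in Table 3, Cronbach's alpha and corrected item-to-total correlations in Table 4, and Pearson's r in Table 5) can be sketched in plain Python. The data below are hypothetical toy ratings for illustration only, not the survey responses:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def one_sample_t(sample, mu0=3.0):
    """t statistic for H0: mean == mu0 (test value 3 = neutral rating)."""
    n = len(sample)
    return (statistics.fmean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))

def cronbach_alpha(items):
    """items: one list of respondent scores per survey item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

def item_total_corr(items, idx):
    """Corrected item-to-total correlation: item vs. sum of the other items."""
    rest = [sum(scores) for scores in zip(*(c for j, c in enumerate(items) if j != idx))]
    return pearson_r(items[idx], rest)

# Toy example: five respondents rating three hypothetical items on a 1-5 scale.
items = [[4, 5, 3, 4, 5], [4, 4, 3, 5, 5], [3, 5, 4, 4, 5]]
alpha = cronbach_alpha(items)    # reliability of the 3-item scale
t_stat = one_sample_t(items[0])  # does item 1's mean differ from the neutral 3?
```

Judging significance at a given p level additionally requires the t distribution (in practice via a statistics package such as scipy.stats); the sketch stops at the raw test statistics.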
The components in the exposure reduction and capability promotion dimensions accounted for 38.975% and 29.233% of the total variance, respectively, thus indicating the leading role of reducing risk exposure and the supplementary role of improving risk response capability in political risk management in international projects.

Figure 1 Characteristics of the components.

In addition, the six components may be divided into groups with proactive, moderate, and passive characteristics. First, the components with a proactive characteristic (C1 and C4, which accounted for 26.040% of the total variance) are those utilized when making decisions or adapting to the local environment. Second, the components with a moderate characteristic (C2 and C3, which accounted for 24.742% of the total variance) are those applicable to a specific market or environment without specific risks. Third, the components with a passive characteristic (C6 and C5, which accounted for 19.420% of the total variance) are those related to specific risks. Compared to the passive strategies, the proactive and moderate strategies occupy more important positions in political risk management. Moreover, ICEs are more likely to be resilient to political risks in the global market if they perform well with regard to the proactive strategies.

### 5.2. Application of the Components

As shown in Figure 2, the six components can be regarded as typical management techniques that contribute to political risk management in three different phases: the preproject phase, the project implementation phase, and the postevent phase. In the preproject phase, the premanagement techniques (C1, C5, and C3, which accounted for 39.975% of the total variance) can provide guidance for ICEs to avoid or transfer unacceptable risks and to price retained risks into a higher tender offer.
In the project implementation phase, the interim management techniques (C4 and C2, which accounted for 24.404% of the total variance) can help ICEs adapt to the particular environmental conditions of the host country and reduce the probability and potential impacts of risks. In the postevent phase, the postmanagement techniques (C6, 10.155% of the total variance) can help ICEs relieve the actual impacts of political risk and accumulate experience in political risk management.

Figure 2 Application of the components.

#### 5.2.1. Making Correct Decisions (C1)

This component explained the largest percentage of the total variance (15.233%) and contains six strategies: (1) conducting market research (S02), (2) choosing a suitable entry mode (S23), (3) choosing suitable projects (S17), (4) building proper relations with host governments (S18), (5) implementing a localization strategy (S05), and (6) controlling core and critical technology (S22). The average values of these six strategies (4.29, 4.22, 4.40, 4.31, 4.19, and 3.91, respectively) were relatively high, ranking 3rd, 5th, 1st, 2nd, 7th, and 12th among the 27 strategies. All of them are strongly associated with decision-making activities, which can be regarded as the most important part of political risk management.

Market research is a basic task for ICEs before they enter a country or contract a new project [3, 8]. Information about the target market can be obtained from the websites or reports of international organizations (e.g., The World Bank, International Monetary Fund, and World Trade Organization), nongovernmental organizations (e.g., industry associations, commercial banks, and insurance companies), and government agencies (e.g., ministries of construction, ministries of commerce, and foreign ministries) in the host and home countries. Based on a clear understanding of market conditions, ICEs can identify potential political events and their probabilities by using risk assessment.
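As one way to make the last step concrete, a risk assessment of this kind is often organized as a probability-impact register. The sketch below is illustrative only: the events and scores are invented, and the paper does not prescribe a particular scoring model.

```python
# Hypothetical probability-impact register for political risk assessment.
# Events, probabilities, and impact ratings are invented for illustration.
events = [
    # (event, probability 0-1, impact on a 1-5 scale)
    ("currency transfer restriction", 0.30, 4),
    ("contract renegotiation demand", 0.20, 3),
    ("civil unrest near the site", 0.10, 5),
]

def risk_score(probability, impact):
    """Simple expected-impact score used to rank risks."""
    return probability * impact

# Rank the register from highest to lowest expected impact.
ranked = sorted(events, key=lambda e: risk_score(e[1], e[2]), reverse=True)
```

Ranking by expected impact gives a first-cut priority order for deciding which risks to avoid, transfer, retain, or monitor.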
The results of market research and risk assessment can be used as evidence for decision making [45, 46]. In a high-risk country, ICEs should choose a flexible entry mode (e.g., sole-venture projects and joint-venture projects with short durations) to reduce their exposure to environmental fluctuations [6]. However, in a low-risk country, ICEs can choose a permanent entry mode (e.g., a sole-venture company, a joint-venture company, and branch offices) [6, 47] to seek further development and higher profits. In choosing projects, ICEs should select those that are suited to their capacities, fit their interests, and utilize their own expertise. In addition, if ICEs choose a project greatly desired by the host governments and the local population, a good operating environment will be established, and less political risk will exist [3]. The relationship between ICEs and host governments is a very important factor that has great potential influence on political risk management [4, 48]. In a politically stable country, ICEs that have a good relationship with host governments can obtain more support and benefits, such as convenient approval procedures, less government intervention, smooth information and communication channels, and sufficient government guarantees. In contrast, in politically unstable countries, a low-involvement strategy with host governments is a better choice and can help ICEs avoid becoming involved in political struggles. Localization is a common strategy in international business and can help ICEs integrate into the host society [6]. ICEs with a high level of localization will be free from discrimination and opposition [7, 49]. ICEs are paying increasing attention to the control of core and critical technologies because technological competition between ICEs is increasingly fierce.
Moreover, ICEs with core and critical technologies will have an important position and a stronger voice in negotiations with host governments and experience less government interference in the project implementation phase [6, 50].

#### 5.2.2. Conducting Favorable Negotiations (C5)

Three strategies are assigned to this component and account for 9.265% of the total variance: (1) adopting optimal contracts (S04), (2) making a higher tender offer (S01), and (3) obtaining the corresponding guarantee (S11). All of these strategies are strongly related to bidding and contract activities during the early stages of a project.

In the business negotiation stage, the first important issue for ICEs is to select internationally accepted standard contracts and exclude contractual clauses and conditions that are not practiced locally [32, 34]. The contract clauses, such as payment terms, liability for breach of contract, dispute resolution clauses, intellectual property clauses, force majeure clauses, and confidentiality provisions, should be drafted properly since they are the foundation for the execution of the transaction and the settlement of disputes. When a political risk event occurs, the risk premium can potentially compensate for an ICE's losses. Therefore, if ICEs must confront and retain some risks, an increase in the required return can be provided by making a higher tender offer to protect themselves against those risks [33]. Two common methods are used to increase the tender offer: appropriately increasing the price of materials and using adjustment coefficients. Of course, raising the tender offer cannot make up for all the potential losses, and if the price is too high, the probability of winning the bid will decrease [51]. International guarantee regulations are an efficacious remedy for defects in local remedies, international arbitration, and diplomatic protection [52].
ICEs should try their best to come to an agreement with the host government to obtain the proper guarantees to help them restrict improper behaviors by the host government [46, 33], obtain compensation in a timely manner, and avoid increased losses.

#### 5.2.3. Completing Full Preparations (C3)

This component explained 11.155% of the total variance and contained four strategies: (1) making contingency plans (S10), (2) sending staff to training programs (S15), (3) allocating extra funds (S26), and (4) purchasing risk insurance (S03), ranking 9th, 21st, 22nd, and 10th, respectively, among the 27 strategies. The four strategies are related to the preparations for risk response and should be performed before the commencement of construction works.

It is sensible for ICEs to have a written contingency plan for potential political risk to protect their own interests and safety [46, 53]. The content of a contingency plan should include (1) risk prediction and analysis, (2) a dispute settlement mechanism, (3) action roles and responsibilities, (4) equipment and tools, and (5) steps and strategies. The cases in Yemen and Libya demonstrate that a good evacuation plan can effectively protect the safety of international contractors, even during a war. Additionally, appropriate training programs (e.g., antigraft, safety, and self-protection programs) should be provided for employees in accordance with the company's code of conduct and safety policies and procedures. Political risk management should be supported by allocating extra funds [32], which can enhance the flexibility of ICEs in an uncertain environment. An increasing number of multinational enterprises are willing to buy political risk insurance, which is considered an important measure to manage political risks. Political risk insurance can reduce the uninsured losses caused by various types of political risk, such as war, internal conflict, transfer restrictions, repudiation of debt, and expropriation [16].
In some cases, political risk insurance is also used as a bargaining chip for companies to secure long-term loans and settle disputes with governments [54]. Political risk insurance can be purchased from three types of providers: (1) public providers such as the African Trade Insurance Agency and the Asian Development Bank, (2) private providers such as the insurance centers in London and the United States, and (3) reinsurers such as Hannover Re (Germany) and the China Export and Credit Insurance Corporation.

#### 5.2.4. Shaping a Good Environment (C4)

This component was responsible for 10.817% of the total variance and included four strategies: (1) maintaining good relations with powerful groups (S19), (2) maintaining good relations with the public (S27), (3) linking with local businesses (S20), and (4) building a reputation (S25). These four strategies can help enterprises create a good operating environment in a foreign land.

ICEs should maintain good relations with powerful groups (e.g., the media, labor unions, business coalitions, industry associations, consumer associations, and environmental protection groups) in host countries [3]. Not only are powerful groups important influencers in policy making, but they also play significant roles in the economic and social environment [46]. Good relations with powerful local groups are very helpful for ICEs in terms of obtaining the necessary resources and reducing interference. For example, ICEs can obtain useful market and policy information through partnerships with industry associations and business coalitions [55], but they may suffer from extra checks from labor unions because of disputes with local workers [56]. It is well known that opposition to international construction projects is often initiated by the local public [32]. Therefore, maintaining good relations with the public is beneficial for ICEs in terms of avoiding unnecessary trouble.
Linking with local businesses, such as choosing well-connected local business partners or strengthening cooperation with local enterprises, can help ICEs reduce their image as foreigners [46] and therefore reduce their probability of becoming involved in micropolitical processes [8, 49]. Corporate reputation refers to the extent to which an enterprise garners public trust and praise and the extent to which an enterprise influences the public [57]. Corporate reputation represents the sum of a multinational enterprise's ability to obtain social recognition, resources, opportunities, and support and to achieve value creation in the host country. A good reputation can allow ICEs to respond quickly to a crisis and enhance their ability to resist risk. Building a corporate reputation is a long-term process, and hence, ICEs must make unremitting efforts (e.g., taking into account the interests of the local public, participating in local public welfare activities, and cultivating a good image via marketing efforts) to create a good reputation [58, 59].

#### 5.2.5. Reducing Unnecessary Mistakes (C2)

This component accounted for 13.587% of the total variance and consisted of five strategies: (1) avoiding misconduct (S06), (2) employing capable local partners (S24), (3) supporting environmental protection (S08), (4) abiding by the local culture (S09), and (5) adopting closed management of the construction site (S07). The strategies in this component are strongly related to the policy of reducing ICEs' unnecessary mistakes in their operations.

Many cases have shown that political risk is closely linked to misconduct (e.g., bribery, legal violations, wages in arrears, dishonest acts, environmental pollution, and cultural conflicts) by ICEs during the project implementation phase [34, 56].
For example, in a very racist country, ICEs' discrimination against certain local people may lead to racial tension, thus causing government interference; in a corruption-ridden country, unhealthy relationships between enterprises and the host government may cause protests or opposition from the public. Thus, ICEs should act strictly according to a code of conduct to eliminate political risk caused by their own mistakes. Cultural conflicts often occur in international marketing practice. Respecting and abiding by the local culture will help ICEs to mitigate the risks arising from cultural conflicts [3, 8]. Environmental protection and sustainable development are currently major trends. Many people as well as governments have increasingly begun to pay attention to environmental protection. Thus, taking part in the protection and construction of the ecological environment will help ICEs maintain good relations with the local population. The market skills and knowledge of experienced and qualified local partners (e.g., lawyers, subcontractors, suppliers, and agencies) are effective supplements for ICEs, especially for ICEs that lack practical market experience in the host country. Employing resourceful local partners can help ICEs not only reduce costs and improve work efficiency but also gain legitimacy under institutional pressure [1, 60]. Closed management of construction sites with security systems (e.g., security guards, monitoring devices, and alarm mechanisms) is an effective means for ICEs to prevent crime, terrorist attacks, and external conflicts, thus keeping sites safe in an unstable environment [5].

#### 5.2.6. Obtaining a Reasonable Response (C6)

Strategies clustered in this component are generally associated with risk response when a risk occurs, accounting for 10.155% of the total variance.
This component contains four strategies: (1) implementing an emergency plan (S12), (2) settling disputes through renegotiation (S16), (3) changing the operation strategies (S21), and (4) conducting a postresponse assessment (S14).

Once political risk events arise, ICEs should immediately implement a risk emergency plan to reduce damage and better protect their security [34]. For example, at the onset of wars, ICEs should promptly contact the embassy, suspend construction work, and evacuate their employees. Organizational capability and flexible adaptability are important weapons that ICEs can use to address difficulties in the emergency plan implementation process. In special cases, ICEs can also seek the support of the general public, local governments, their home countries, international organizations, and the media to cope with intractable threats. After the threat disappears, reassessing the residual risks is an effective means through which ICEs can adjust project plans in terms of resources, schedules, and costs and judge whether there is a need to make a claim, renegotiate, or change the operation strategies [46, 52]. In the course of claims or renegotiations, any disputes should be settled through reasonable channels, such as demanding compensation based on the contract or guarantee treaty, making use of international conventions, or resorting to arbitration or conciliation [54]. It should be noted that successful claims and renegotiations by ICEs are based on adequate evidence of their losses. Therefore, they must protect related documents even in deteriorating situations [5]. Lessons learned from practical project cases are more valuable than those learned from books and can be consolidated through a postresponse assessment. These lessons and knowledge can help ICEs to improve their capacity for political risk management and therefore to effectively address similar political risks in the future.
This strategy contains four strategies: (1) implementing an emergency plan (S12), (2) settling disputes through renegotiation (S16), (3) changing the operation strategies (S21), and (4) conducting a postresponse assessment (S14).Once political risk events arise, ICEs should immediately implement a risk emergency plan to reduce damage and better protect their security [34]. For example, at the onset of wars, ICEs should promptly contact the embassy, suspend construction work, and evacuate their employees. Organizational capability and flexible adaptability are important weapons that ICEs can use to address difficulties in the emergency plan implementation process. In special cases, ICEs can also seek the support of the general public, local governments, their home countries, international organizations, and the media to cope with intractable threats. After the threat disappears, reassessing the residual risks is an effective means through which ICEs can adjust project plans in terms of resources, schedules, and costs and judge whether there is a need to make a claim, renegotiate, or change the operations strategies [46, 52]. In the course of claims or renegotiations, any disputes should be settled through reasonable channels, such as demanding compensation based on the contract or guarantee treaty, making use of international conventions, or resorting to arbitration or conciliation [54]. It should be noted that successful claims and renegotiations by ICEs are based on adequate evidence of their losses. Therefore, they must protect related documents even in deteriorating situations [5]. Lessons learned from practical project cases are more valuable than those learned from books and can be consolidated through a postresponse assessment. These lessons and knowledge can help ICEs to improve their capacity for political risk management and therefore to effectively address similar political risks in the future. ## 5.2.1. 
Making Correct Decisions (C1) This component explained the largest percentage of the total variance (15.233%) and contains six strategies: (1) conducting market research (S02), (2) choosing a suitable entry mode (S23), (3) choosing suitable projects (S17), (4) building proper relations with host governments (S18), (5) implementing a localization strategy (S05), and (6) controlling core and critical technology (S22). The average values of these six strategies (4.29, 4.21, 4.40, 4.31, 4.19, and 3.91, resp.) were relatively high, ranking 3rd, 5th, 1st, 2nd, 7th, and 12th, respectively, among the 27 strategies. All of them are strongly associated with decision-making activities, which can be observed as the most important part of political risk management.Market research is a basic task for ICEs before they enter a country or contract a new project [3, 8]. Information about the target market can be obtained from the websites or reports of international organizations (e.g., The World Bank, International Monetary Fund, and World Trade Organization), nongovernmental organizations (e.g., industry associations, commercial banks, and insurance companies), and government agencies (e.g., ministries of construction, ministries of commerce, and foreign ministries) in the host and home countries. Based on a clear understanding of market conditions, ICEs can identify potential political events and their probabilities by using risk assessment. The results of market research and risk assessment can be used as evidence for decision making [45, 46]. In a high-risk country, ICEs should choose a flexible entry mode (e.g., sole-venture projects and joint-venture projects with short durations) to reduce their exposure to environmental fluctuations [6]. However, in a low-risk country, ICEs can choose a permanent entry mode (e.g., a sole-venture company, a joint-venture company, and branch offices) [6, 47] to seek further development and higher profits. 
In choosing projects, ICEs should select those that are suited to their capacities, fit their interests, and utilize their own expertise. In addition, if ICEs choose a project greatly desired by the host government and the local population, a good operating environment will be established, and less political risk will exist [3]. The relationship between ICEs and host governments is a very important factor that has great potential influence on political risk management [4, 48]. In a politically stable country, ICEs that have a good relationship with host governments can obtain more support and benefits, such as convenient approval procedures, less government intervention, smooth information and communication channels, and sufficient government guarantees. In contrast, in politically unstable countries, a low-involvement strategy with host governments is a better choice and can help ICEs avoid becoming involved in political struggles. Localization is a common strategy in international business and can help ICEs integrate into the host society [6]. ICEs with a high level of localization will be free from discrimination and opposition [7, 49]. ICEs are paying increasing attention to the control of core and critical technologies because technology-based competition between ICEs is increasingly fierce. Moreover, ICEs with core and critical technologies will have an important position and a stronger voice in negotiations with host governments and experience less government interference in the project implementation phase [6, 50].

### 5.2.2. Conducting Favorable Negotiations (C5)

Three strategies are assigned to this component and account for 9.265% of the total variance: (1) adopting optimal contracts (S04), (2) making a higher tender offer (S01), and (3) obtaining the corresponding guarantee (S11).
All of these strategies are strongly related to bidding and contract activities during the early stages of a project.

In the business negotiation stage, the first important issue for ICEs is to select internationally accepted standard contracts and exclude contractual clauses and conditions that are not practiced locally [32, 34]. The contract clauses, such as payment terms, liability for breach of the terms, dispute resolution clauses, intellectual property clauses, force majeure clauses, and confidentiality provisions, should be drafted properly, since they are the foundation for the execution of the transaction and the settlement of disputes. When a political risk event occurs, the risk premium can potentially compensate for an ICE's losses. Therefore, if ICEs must confront and retain some risks, an increase in the required return can be obtained by making a higher tender offer to protect themselves against those risks [33]. Two common methods are used to increase the tender offer: appropriately increasing the price of materials and using adjustment coefficients. Of course, raising the tender offer cannot make up for all the potential losses, and if the price is too high, the probability of winning the bid will decrease [51]. International guarantee regulations are an efficacious remedy for defects in local remedies, international arbitration, and diplomatic protection [52]. ICEs should try their best to come to an agreement with the host government to obtain the proper guarantees, which can help them restrict improper behavior by the host government [33, 46], obtain compensation in a timely manner, and avoid increased losses.

### 5.2.3. Completing Full Preparations (C3)

This component explained 11.155% of the total variance and contained four strategies: (1) making contingency plans (S10), (2) sending staff to training programs (S15), (3) allocating extra funds (S26), and (4) purchasing risk insurance (S03), ranking 9th, 21st, 22nd, and 10th, respectively, among the 27 strategies. The four strategies are related to preparations for risk response and should be performed before the commencement of construction works.

It is sensible for ICEs to have a written contingency plan for potential political risk to protect their own interests and safety [46, 53]. The content of a contingency plan should include (1) risk prediction and analysis, (2) a dispute settlement mechanism, (3) action roles and responsibilities, (4) equipment and tools, and (5) steps and strategies. The cases in Yemen and Libya demonstrate that a good evacuation plan can effectively protect the safety of international contractors, even during a war. Additionally, appropriate training programs (e.g., antigraft, safety, and self-protection programs) should be provided for employees in accordance with the company's code of conduct and safety policies and procedures. Political risk management should be supported by allocating extra funds [32], which can enhance the flexibility of ICEs in an uncertain environment. An increasing number of multinational enterprises are willing to buy political risk insurance, which is considered an important measure to manage political risks. Political risk insurance can reduce the uninsured losses caused by various types of political risk, such as war, internal conflict, transfer restrictions, repudiation of debt, and expropriation [16]. In some cases, political risk insurance is also used as a bargaining chip for companies to secure long-term loans and settle disputes with governments [54].
Political risk insurance can be purchased from three types of providers: (1) public providers, such as the African Trade Insurance Agency and the Asian Development Bank; (2) private providers, such as the insurance centers in London and the United States; and (3) reinsurers, such as Hannover Re (Germany) and the China Export and Credit Insurance Corporation.

### 5.2.4. Shaping a Good Environment (C4)

This component was responsible for 10.817% of the total variance and included four strategies: (1) maintaining good relations with powerful groups (S19), (2) maintaining good relations with the public (S27), (3) linking with local businesses (S20), and (4) building a reputation (S25). These four strategies can help enterprises create a good operating environment in a foreign land.

ICEs should maintain good relations with powerful groups (e.g., the media, labor unions, business coalitions, industry associations, consumer associations, and environmental protection groups) in host countries [3]. Not only are powerful groups important influencers in policy making, but they also play significant roles in the economic and social environment [46]. Good relations with powerful local groups are very helpful for ICEs in terms of obtaining necessary resources and reducing interference. For example, ICEs can obtain useful market and policy information through partnerships with industry associations and business coalitions [55], but they may face extra scrutiny from labor unions because of disputes with local workers [56]. It is well known that opposition to international construction projects is often initiated by the local public [32]. Therefore, maintaining good relations with the public helps ICEs avoid unnecessary trouble.
Linking with local businesses, such as choosing well-connected local business partners or strengthening cooperation with local enterprises, can help ICEs soften their image as foreigners [46] and therefore reduce their probability of becoming involved in micropolitical processes [8, 49]. Corporate reputation refers to the extent to which an enterprise garners public trust and praise and the extent to which it influences the public [57]. It represents the sum of a multinational enterprise's ability to obtain social recognition, resources, opportunities, and support and to achieve value creation in the host country. A good reputation allows ICEs to respond quickly to a crisis and enhances their ability to resist risk. Building a corporate reputation is a long-term process, and hence, ICEs must make sustained efforts (e.g., taking into account the interests of the local public, participating in local public welfare activities, and cultivating a good image via marketing efforts) to create a good reputation [58, 59].

### 5.2.5. Reducing Unnecessary Mistakes (C2)

This component accounted for 13.587% of the total variance and consisted of five strategies: (1) avoiding misconduct (S06), (2) employing capable local partners (S24), (3) supporting environmental protection (S08), (4) abiding by the local culture (S09), and (5) adopting closed management of the construction site (S07). The strategies in this component are strongly related to reducing ICEs' unnecessary mistakes in their operations.

Many cases have shown that political risk is closely linked to misconduct (e.g., bribery, legal violations, wages in arrears, dishonest acts, environmental pollution, and cultural conflicts) by ICEs during the project implementation phase [34, 56].
For example, in a country with strong racial divisions, discrimination by ICEs against certain local people may lead to racial tension and thus to government interference; in a corruption-ridden country, unhealthy relationships between enterprises and the host government may provoke protests or opposition from the public. Thus, ICEs should act strictly according to a code of conduct to eliminate political risk caused by their own mistakes. Cultural conflicts often occur in international marketing practice. Respecting and abiding by the local culture will help ICEs mitigate the risks arising from cultural conflicts [3, 8]. Environmental protection and sustainable development are currently major trends, and both the public and governments pay increasing attention to environmental protection. Thus, taking part in the protection and improvement of the ecological environment will help ICEs maintain good relations with the local population. The market skills and knowledge of experienced and qualified local partners (e.g., lawyers, subcontractors, suppliers, and agencies) are effective supplements for ICEs, especially those that lack practical market experience in the host country. Employing resourceful local partners can help ICEs not only reduce costs and improve work efficiency but also gain legitimacy under institutional pressure [1, 60]. Closed management of construction sites with security systems (e.g., security guards, monitoring devices, and alarm mechanisms) is an effective means for ICEs to prevent crime, terrorist attacks, and external conflicts, thus keeping sites safe in an unstable environment [5].

### 5.2.6. Obtaining a Reasonable Response (C6)

Strategies clustered in this component are generally associated with risk response when a risk occurs, accounting for 10.155% of the total variance.
This component contains four strategies: (1) implementing an emergency plan (S12), (2) settling disputes through renegotiation (S16), (3) changing the operation strategies (S21), and (4) conducting a postresponse assessment (S14).

Once political risk events arise, ICEs should immediately implement a risk emergency plan to reduce damage and better protect their security [34]. For example, at the onset of war, ICEs should promptly contact their embassy, suspend construction work, and evacuate their employees. Organizational capability and flexible adaptability are important assets that ICEs can use to address difficulties in the emergency plan implementation process. In special cases, ICEs can also seek the support of the general public, local governments, their home countries, international organizations, and the media to cope with intractable threats. After the threat disappears, reassessing the residual risks is an effective means through which ICEs can adjust project plans in terms of resources, schedules, and costs and judge whether there is a need to make a claim, renegotiate, or change the operation strategies [46, 52]. In the course of claims or renegotiations, any disputes should be settled through reasonable channels, such as demanding compensation based on the contract or guarantee treaty, making use of international conventions, or resorting to arbitration or conciliation [54]. It should be noted that successful claims and renegotiations by ICEs depend on adequate evidence of their losses. Therefore, they must protect related documents even in deteriorating situations [5]. Lessons learned from practical project cases are more valuable than those learned from books and can be consolidated through a postresponse assessment. These lessons and knowledge can help ICEs improve their capacity for political risk management and therefore effectively address similar political risks in the future.

## 6. Conclusions

Political risk is a major problem encountered by ICEs in international construction projects. It is thus necessary to identify the strategies that can help ICEs address political risk. On the basis of a comprehensive literature review, 27 possible political risk management strategies were identified. The results of the questionnaire survey indicated that all the strategies were important for political risk management in international construction projects. Five strategies, namely, (1) choosing suitable projects (S17), (2) building proper relations with host governments (S18), (3) conducting market research (S02), (4) avoiding misconduct (S06), and (5) choosing a suitable entry mode (S23), were the most important according to their average values.

Through the exploratory factor analysis, the 27 strategies were clustered into six components: (1) making correct decisions (C1), (2) reducing unnecessary mistakes (C2), (3) completing full preparations (C3), (4) shaping a good environment (C4), (5) conducting favorable negotiations (C5), and (6) obtaining a reasonable response (C6). The components of the exposure decline dimension (C1, C2, and C6) contribute more to political risk management than the components of the capacity promotion dimension (C4, C3, and C5). In addition, components with a proactive characteristic (C1 and C4), components with a moderate characteristic (C2 and C3), and components with a passive characteristic (C5 and C6) can be ranked from most to least important for political risk management according to their cumulative variance.

Furthermore, the six components independently contribute to political risk management in three different phases. In the preproject phase, premanagement techniques (C1, C3, and C5) can help ICEs avoid or transfer unacceptable risks and improve quotes for the retained risk.
In the project implementation phase, interim management techniques (C2 and C4) are conducive to reducing risk and promoting ICEs' adaptation to the overseas construction market. In the postevent phase, postmanagement techniques (C6) are useful for ICEs to eliminate their actual risk and to accumulate experience with political risk management. The high cumulative variance of the premanagement strategies indicates that the main tasks of political risk management should be performed in the early stage of a project.

Compared to the respondents in the academic group, who were from different countries, all the respondents in the practitioner group were from Chinese construction enterprises, which is a limitation of this study. Nevertheless, the results of the independent-sample t-test revealed no significant differences in the responses between academics and practitioners. In addition, conditions in the global market are typically the same for ICEs from different countries. The relevant experience of the respondents is a reference for all practitioners, regardless of their nationalities. However, the characteristics of different enterprises and the actual conditions in different countries should be carefully considered when implementing these strategies. Further work could focus on evaluating these strategies with samples from different enterprises or different countries to increase the practical validity of the results. Despite its limitations, this study is a useful reference for academics and practitioners in terms of gaining an in-depth understanding of political risk management in international construction projects and provides guidance for ICEs to manage political risk when venturing outside their home countries.

---

*Source: 1016384-2018-06-25.xml*
# Identifying Political Risk Management Strategies in International Construction Projects

**Authors:** Tengyuan Chang; Bon-Gang Hwang; Xiaopeng Deng; Xianbo Zhao

**Journal:** Advances in Civil Engineering (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1016384
---

## Abstract

International construction projects are plagued with political risk, and international construction enterprises (ICEs) must manage this risk to survive. However, little attention has been devoted to political risk management strategies in international construction projects. To fill this research gap, a total of 27 possible strategies were identified through a comprehensive literature review and validated by a pilot survey with 10 international experts. Appraisals of these 27 strategies by relevant professionals were collected using questionnaires, 155 of which were returned. Exploratory factor analysis was conducted to explore the interrelationships among these 27 strategies. The results show that all of the 27 strategies are important for political risk management in international construction projects. Moreover, these 27 strategies were clustered into six components, namely, (1) making correct decisions, (2) conducting favorable negotiations, (3) completing full preparations, (4) shaping a good environment, (5) reducing unnecessary mistakes, and (6) obtaining a reasonable response. The six components can be regarded as six typical management techniques that contribute to political risk management in the preproject, project implementation, and postevent phases. The findings may help practitioners gain an in-depth understanding of political risk management strategies in international construction projects and provide a useful reference for ICEs to manage political risks when venturing outside their home countries.

---

## Body

## 1. Introduction

With the rapid development of economic globalization, the global construction market has thrived in the past decade [1]. Moreover, the large construction markets in Asia, Africa, and Latin America will create widespread prosperity and opportunities for international construction enterprises (ICEs).
By taking advantage of these opportunities, increasing numbers of international contractors will expand into the international construction market [2].

However, opportunities are always accompanied by risks, and ICEs will be exposed to new risks when venturing outside their home countries [3, 4]. ICEs have witnessed a dramatic increase in political risks around the world, such as the credit crises in Greece, Venezuela, and Congo; the wars in southern Sudan, Syria, Afghanistan, and Libya; the terrorist attacks in Europe, the Middle East, Central Asia, and South Asia; and the coups in Niger, Thailand, and Honduras [2, 4, 5]. These risks had a very large negative impact on the global market and resulted in great losses for ICEs.

Given the increasingly complex business environment, political risks should not be ignored by ICEs when they approach global markets [2, 6, 7]. Political risk in international construction projects refers to uncertainty related to political events (e.g., political violence, regime changes, coups, revolutions, breaches of contract, terrorist attacks, and wars) and to arbitrary or discriminatory actions (e.g., expropriation, unfair compensation, foreign exchange restrictions, unlawful interference, capital restrictions, corruption, and labor restrictions) by host governments or political groups that may have negative impacts on ICEs [6]. Compared with the nonsystematic risks (e.g., technical risk, quality risk, procurement risk, and financial risk) of construction projects, political risk is more complex, unpredictable, and devastating and is usually outside the scope of normal project activities [2].

Much of the extant literature has focused on political risks in international general business [8, 9] but has paid less attention to political risks in international construction projects. In most cases, political risk management is practiced only as a part of risk management at the project level in construction projects.
However, project-level political risks can also affect enterprises' objectives (e.g., financial, reputation, stability, survival, development, and strategic decisions) [10]. Implementation of political risk management only at the project level has some drawbacks: (1) lack of a comprehensive understanding of political risks; (2) overemphasis on short-term project goals and less consideration of corporate strategic objectives; (3) constraints because of limited resources or inappropriate resource allocation among projects; and (4) lack of accumulation and sharing of risk management experience. Therefore, risk management only at the project level no longer seems to be sufficient to help ICEs address political risks in the global market [11].

Hence, political risk management in international construction projects should be conducted jointly at both the project and firm levels by considering the various types of risk and linking risk management strategies to the enterprise's objectives. Successful political risk management should be based on sufficient resources and information as important components of the decision-making process are continually improved and enhanced [12]. At the firm level, political risks can be treated as part of the entire risk portfolio of an enterprise and can be addressed across multiple business areas [13, 14]. Implementation of political risk management at the firm level can lead to better coordination and consolidation of the resources and goals of the enterprise, which is more conducive to the long-term stability and development of ICEs [15].

This study focuses on the political risk inherent in international construction projects and aims at identifying the strategies available for ICEs to manage political risk.
The specific objectives of this study are to (1) identify possible risk management strategies that can address political risks in international construction projects, (2) evaluate the importance of the strategies, and (3) explore the interrelationships among those strategies and their practical applications in international construction projects.

Because less attention has been devoted to political risk management in international construction projects, this paper can enrich the understanding of risk management in the field of international construction. Furthermore, this study may help practitioners clearly understand political risk management strategies in international construction projects and provide guidance for ICEs regarding how to address political risk when venturing outside their home countries.

## 2. Literature Review

Political risk management has been a popular topic in the field of international business (e.g., foreign direct investment, trade in goods, and international joint ventures). Several strategies have been proposed to address political risks, such as investing only in safe environments, increasing the required return, adapting to particular business environment conditions, sharing risks with other firms [8], improving relative bargaining power [9], transferring risks to an insurance company [16], reducing vulnerabilities [17], spreading risk by investing in several countries, enhancing core competitiveness [18], and implementing localization and involvement strategies [1, 19].

Previous studies regarding risk management in construction projects have covered a wide variety of areas, such as overall risk management [20], safety risk management [21], financial risk management [22], quality risk management [23], risk assessment [24], advanced technology-driven risk management [25], and risk management in public-private partnerships [26, 27].
However, less attention has been devoted to political risk management in international construction projects. In some studies, political risk was mentioned only as a subset of external risks [28, 29].

Several studies have been conducted to identify political risk events [2, 30] and political risk factors [5, 6, 31] in international construction projects. Although these studies can help international contractors gain a better understanding of political risks in international construction projects, they provide less guidance regarding how to manage political risk. It is obvious that knowledge and experience associated with political risk management should be extended to the international construction business by considering overall political risk management strategies throughout the life of projects and the interrelation between the project and firm levels.

## 3. Methods

### 3.1. Strategy Identification and Survey

Based on an overview of the literature on political risk and risk management, a total of 27 possible risk management strategies were identified and coded as S01 to S27 (Table 1).

Table 1: Political risk management strategies.
| Strategy | [8] | [3] | [32] | [5] | [52] | [33] | [2] | [8] | [34] |
|---|---|---|---|---|---|---|---|---|---|
| S01: making a higher tender offer | X | — | — | — | — | — | — | X | — |
| S02: conducting market research | X | X | — | X | X | — | X | X | — |
| S03: buying risk insurance | X | — | — | X | X | — | X | — | X |
| S04: adopting optimal contracts | — | — | X | — | — | X | — | — | X |
| S05: implementing a localization strategy | X | X | — | X | — | — | X | — | X |
| S06: avoiding misconduct | X | X | — | X | — | — | X | — | X |
| S07: adopting closed management of the construction site | X | — | — | X | X | — | X | X | — |
| S08: supporting environmental protection | X | — | — | — | — | — | — | — | — |
| S09: abiding by the traditional local culture | X | X | — | X | — | — | — | — | — |
| S10: making contingency plans | X | — | — | — | X | — | X | X | X |
| S11: obtaining the corresponding guarantee | — | — | — | X | — | X | X | — | X |
| S12: implementing an emergency plan | X | — | — | — | X | — | X | X | X |
| S13: forming joint ventures with local contractors | X | — | X | X | — | — | — | — | X |
| S14: conducting a postresponse assessment | X | — | — | — | X | — | X | — | — |
| S15: sending staff to training programs | X | — | — | — | — | — | — | — | X |
| S16: settling disputes through renegotiation | — | — | X | X | — | — | — | — | X |
| S17: choosing suitable projects | — | — | — | X | — | X | — | — | X |
| S18: building proper relations with host governments | — | X | X | X | — | — | X | — | — |
| S19: maintaining good relations with powerful groups | — | X | X | X | — | X | X | — | X |
| S20: creating links with local business | X | X | X | X | — | X | X | — | X |
| S21: changing the operation strategies | — | — | X | — | — | X | X | X | — |
| S22: controlling core and critical technology | — | X | — | — | — | X | — | X | X |
| S23: choosing a suitable entry mode | X | — | — | X | — | X | X | X | X |
| S24: employing capable local partners | X | X | X | X | — | X | — | — | X |
| S25: building up reputation | X | — | — | X | — | — | X | — | — |
| S26: allocating extra funds | — | — | X | — | X | — | X | — | — |
| S27: maintaining good relations with the public | — | X | X | — | — | X | X | — | X |

A pilot survey was performed with 10 experts to verify the comprehensiveness of the preliminary strategies.
These experts included (1) four professors (one each from Australia, Hong Kong, Singapore, and South Africa) engaged in research on business management and risk management, (2) two professors (one each from the United States and China) engaged in research on project management, and (3) four senior managers (one each from China Communications Construction Group Limited, Power Construction Corporation of China, China State Construction Engineering Corporation, and China Railway Group Limited). All 10 experts had more than 20 years of work experience in their fields. At their suggestion, no strategies were added or deleted; instead, descriptions of some strategies were added to ensure accuracy in understanding and to avoid ambiguity. For example, "S01, making a higher tender offer" refers to the premium for the retained political risks of an ICE.

The pilot survey was used to develop a structured questionnaire that comprised three sections: (1) a brief introduction of political risk and a description of some strategies; (2) questions to profile the company, work experience, title, and location of the respondents; and (3) questions to evaluate the importance of the 27 strategies using a five-point Likert scale where 5 = very high, 4 = high, 3 = medium, 2 = low, and 1 = very low.

A list of selected experts was then developed; it included (1) 300 international academics who focus on related studies, whose personal information was collected from their publications, and (2) 500 practitioners with extensive experience in international project management, drawn from 50 Chinese construction enterprises selected from the 2016 top 250 international contractors according to Engineering News-Record (ENR).
The contact information of the 500 practitioners was collected from the Chinese construction management research sector, alumni associations, and the websites of their enterprises. From March to May 2017, the questionnaire was disseminated to these experts, and a total of 158 responses were returned, of which three were incomplete or inappropriately filled out. The 155 valid responses represent a response rate of 19%. As indicated in Table 2, among the 155 respondents, 56 were from academia and 99 were from industry. All the respondents had over 5 years' work experience, and 52% had over 10 years' work experience in industry or academia. Of the 56 academics, 28 were from China (including Hong Kong and Macao) and the other 28 were from overseas. Among the 99 practitioners, 38, 26, 9, 5, 8, and 5 were from the divisions of Chinese construction enterprises in Asia (not including China), Africa, Europe, North America, South America, and Australia, respectively. Moreover, all practitioners had experienced political risk in the overseas construction market.

Table 2. Profile of the respondents (Academia N=56; Practitioner N=99; Overall N=155).

| Characteristic | Categorization | Academia N | Academia % | Practitioner N | Practitioner % | Overall N | Overall % |
|---|---|---|---|---|---|---|---|
| Work experience | Over 20 years | 8 | 14 | 10 | 10 | 18 | 12 |
| | 16–20 years | 15 | 27 | 12 | 12 | 27 | 17 |
| | 11–15 years | 17 | 30 | 28 | 28 | 45 | 29 |
| | 5–10 years | 16 | 29 | 49 | 49 | 65 | 42 |
| Title | Professor | 22 | 39 | — | — | 22 | 14 |
| | Associate professor | 19 | 34 | — | — | 19 | 12 |
| | Assistant professor/lecturer | 15 | 27 | — | — | 15 | 10 |
| | Senior manager | — | — | 29 | 29 | 29 | 19 |
| | Department manager | — | — | 28 | 28 | 28 | 18 |
| | Project manager | — | — | 42 | 42 | 42 | 27 |
| Location | China | 28 | 50 | 8 | 8 | 36 | 23 |
| | Asia (excl. China) | 14 | 25 | 38 | 38 | 52 | 34 |
| | Africa | 2 | 4 | 26 | 26 | 28 | 18 |
| | Europe | 5 | 9 | 9 | 9 | 14 | 9 |
| | North America | 4 | 7 | 5 | 5 | 9 | 6 |
| | South America | 0 | 0 | 8 | 8 | 8 | 5 |
| | Australia | 3 | 5 | 5 | 5 | 8 | 5 |

### 3.2. Exploratory Factor Analysis

Exploratory factor analysis has proven very useful for identifying potential relationships between several sets of data and has frequently been employed in studies related to construction management [11, 31, 35].
Exploratory factor analysis is often used to develop theory in a new research area by revealing the components, correlations, and relative weightings of a list of variables [36]. A sample of 5-point Likert scale data used in exploratory factor analysis should meet two conditions: (1) the size of the valid sample must be greater than 100 or five times the number of items [37], and (2) the data must satisfy the recommended alpha reliability test, Bartlett's test of sphericity, and the Kaiser–Meyer–Olkin (KMO) test of sampling adequacy [31]. In this study, the number of valid questionnaires is 155, Cronbach's alpha coefficient is 0.932 (>0.700; F statistic = 17.382, significance level = 0.000), the KMO index is 0.878 (≥0.500), and Bartlett's test of sphericity (χ2 = 1497.243, df = 205, significance level = 0.000) is significant (p<0.050), indicating that the data are suitable for exploratory factor analysis [38–40]. The factor analysis of the 27 political risk management strategies was performed using principal component analysis and varimax rotation in SPSS 22.0. The number of components was determined using the latent root criterion (eigenvalues > 1.000) [41]. As suggested by Malhotra, the cumulative variance of the produced components should be greater than 60.000%. To strengthen the correlation between strategies and components, the qualified strategies in each component should have a factor loading ≥0.500 [42]. The internal consistency of the components should satisfy two conditions: Cronbach's alpha of each component ≥0.700 [38] and the item-to-total correlation of each retained measure ≥0.400 [43].
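The two scalar adequacy statistics used in Section 3.2 (Cronbach's alpha ≥ 0.700 and KMO ≥ 0.500) can be computed directly from a respondent-by-item score matrix. Below is a minimal NumPy sketch applied to invented Likert-style responses; the paper's own figures were produced with SPSS 22.0, so this illustrates only the formulas, not a reproduction of the analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) Likert score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def kmo(scores):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    r = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    inv = np.linalg.inv(r)
    # Anti-image (partial) correlations from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(partial, 0.0)
    r_off = r - np.eye(r.shape[0])              # off-diagonal correlations
    ssq_r = (r_off ** 2).sum()
    return ssq_r / (ssq_r + (partial ** 2).sum())

# Invented 5-point Likert responses: 8 respondents x 4 items.
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [3, 2, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 3, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 3, 3],
])

print(f"alpha = {cronbach_alpha(responses):.3f}")
print(f"KMO   = {kmo(responses):.3f}")
```

For the full pipeline (Bartlett's test of sphericity, principal component extraction, and varimax rotation), a dedicated package such as factor_analyzer, or SPSS itself, is typically used; the sketch above covers only the two scalar checks.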
## 4. Results

### 4.1. Results of the Questionnaire Survey

Table 3 presents the evaluation results of the 27 political risk management strategies. The average values of the 27 strategies range from 3.27 (S21, changing the operation strategies) to 4.40 (S17, choosing suitable projects).
All of them were significantly greater than 3 at the p=0.05 level (two tailed) in the one-sample t-test, indicating that the 27 strategies are of significant importance in managing political risk in international construction projects. The five most important strategies were (1) choosing suitable projects (S17, average value 4.40), (2) building proper relations with host governments (S18, average value 4.31), (3) conducting market research (S02, average value 4.29), (4) avoiding misconduct (S06, average value 4.26), and (5) choosing a suitable entry mode (S23, average value 4.22). The p values of the 27 strategies were greater than 0.05 in the independent-sample t-test; therefore, there were no significant differences in the average values of the strategies between scholars and practitioners.

Table 3. Ranking of the political risk management strategies.

| Strategy | Academia mean | Academia rank | Industry mean | Industry rank | p value | Overall mean | Overall rank | Overall p value |
|---|---|---|---|---|---|---|---|---|
| S01: making a higher tender offer | 3.99 | 11 | 4.00 | 11 | 0.906 | 4.00 | 11 | <0.001^a |
| S02: conducting market research | 4.45 | 3 | 4.20 | 5 | 0.282 | 4.29 | 3 | <0.001^a |
| S03: buying risk insurance | 4.03 | 10 | 4.06 | 9 | 0.697 | 4.05 | 10 | <0.001^a |
| S04: adopting optimal contracts | 4.12 | 6 | 4.23 | 3 | 0.261 | 4.19 | 6 | <0.001^a |
| S05: implementing a localization strategy | 4.42 | 4 | 4.06 | 10 | 0.197 | 4.19 | 7 | <0.001^a |
| S06: avoiding misconduct | 4.45 | 2 | 4.15 | 6 | 0.089 | 4.26 | 4 | <0.001^a |
| S07: adopting closed management of the construction site | 3.41 | 22 | 3.85 | 12 | 0.537 | 3.69 | 18 | <0.001^a |
| S08: supporting environmental protection | 3.75 | 14 | 3.82 | 15 | 0.831 | 3.80 | 14 | <0.001^a |
| S09: abiding by the traditional local culture | 3.90 | 13 | 3.81 | 16 | 0.863 | 3.84 | 13 | <0.001^a |
| S10: making contingency plans | 4.10 | 9 | 4.09 | 8 | 0.606 | 4.09 | 9 | <0.001^a |
| S11: obtaining the corresponding guarantee | 3.98 | 12 | 4.22 | 4 | 0.401 | 4.14 | 8 | <0.001^a |
| S12: implementing an emergency plan | 3.18 | 26 | 3.55 | 23 | 0.244 | 3.42 | 24 | <0.001^a |
| S13: forming joint ventures with local contractors | 3.58 | 16 | 3.77 | 18 | 0.182 | 3.70 | 17 | <0.001^a |
| S14: conducting a postresponse assessment | 3.16 | 27 | 3.41 | 26 | 0.537 | 3.32 | 26 | <0.001^a |
| S15: sending staff to training programs | 3.43 | 20 | 3.62 | 21 | 0.628 | 3.55 | 21 | <0.001^a |
| S16: settling disputes through renegotiation | 3.28 | 25 | 3.43 | 25 | 0.617 | 3.38 | 25 | <0.001^a |
| S17: choosing suitable projects | 4.54 | 1 | 4.32 | 2 | 0.439 | 4.40 | 1 | <0.001^a |
| S18: building proper relations with host governments | 4.12 | 7 | 4.42 | 1 | 0.223 | 4.31 | 2 | <0.001^a |
| S19: maintaining good relations with powerful groups | 3.52 | 17 | 3.85 | 13 | 0.537 | 3.73 | 15 | <0.001^a |
| S20: creating links with local business | 3.43 | 21 | 3.70 | 19 | 0.377 | 3.60 | 20 | <0.001^a |
| S21: changing the operation strategies | 3.36 | 24 | 3.22 | 27 | 0.236 | 3.27 | 27 | <0.001^a |
| S22: controlling core and critical technology | 4.10 | 8 | 3.80 | 17 | 0.301 | 3.91 | 12 | <0.001^a |
| S23: choosing a suitable entry mode | 4.37 | 5 | 4.14 | 7 | 0.439 | 4.22 | 5 | <0.001^a |
| S24: employing capable local partners | 3.66 | 15 | 3.66 | 20 | 0.725 | 3.66 | 19 | <0.001^a |
| S25: building up reputation | 3.47 | 19 | 3.47 | 24 | 0.912 | 3.47 | 23 | <0.001^a |
| S26: allocating extra funds | 3.39 | 23 | 3.59 | 22 | 0.275 | 3.52 | 22 | <0.001^a |
| S27: maintaining good relations with the public | 3.49 | 18 | 3.83 | 14 | 0.137 | 3.71 | 16 | <0.001^a |

Note. ^a One-sample t-test result is significant (test value = 3) at the p=0.05 significance level (two tailed).

### 4.2. Results of Exploratory Factor Analysis

As illustrated in Table 4, a total of six components with eigenvalues greater than 1.000 were extracted. The cumulative variance of the six components was 68.208%, thus exceeding 60.000%. The 27 strategies were divided into the six components according to their loadings on each component of more than 0.500. Although the loading of strategy "S13, forming joint ventures with local contractors" on the first component was 0.521, it was removed from subsequent analyses because of the low values of its communality (0.399 < 0.500) and item-to-total correlation (0.301 < 0.400). After this adjustment, Cronbach's alpha coefficient of the first component and the communalities of the remaining six strategies in the first component increased.
Cronbach's alpha coefficients of the six components ranged from 0.743 to 0.857, and the item-to-total correlations of the remaining 26 strategies ranged from 0.478 to 0.653; thus, the model is reliable.

Table 4. Results of the exploratory factor analysis.

| Strategy | Communality | Item-to-total correlation | C1 | C2 | C3 | C4 | C5 | C6 |
|---|---|---|---|---|---|---|---|---|
| S23 | 0.640 | 0.590 | 0.676 | — | — | — | — | — |
| S05 | 0.530 | 0.490 | 0.653 | — | — | — | — | — |
| S17 | 0.575 | 0.572 | 0.617 | — | — | — | — | — |
| S22 | 0.632 | 0.585 | 0.598 | — | — | — | — | — |
| S18 | 0.565 | 0.482 | 0.541 | — | — | — | — | — |
| S02 | 0.652 | 0.511 | 0.509 | — | — | — | — | — |
| S06 | 0.730 | 0.579 | — | 0.683 | — | — | — | — |
| S07 | 0.648 | 0.497 | — | 0.667 | — | — | — | — |
| S24 | 0.667 | 0.612 | — | 0.617 | — | — | — | — |
| S08 | 0.543 | 0.515 | — | 0.595 | — | — | — | — |
| S09 | 0.621 | 0.611 | — | 0.509 | — | — | — | — |
| S15 | 0.742 | 0.592 | — | — | 0.739 | — | — | — |
| S26 | 0.679 | 0.516 | — | — | 0.670 | — | — | — |
| S03 | 0.512 | 0.527 | — | — | 0.525 | — | — | — |
| S10 | 0.479 | 0.542 | — | — | 0.507 | — | — | — |
| S27 | 0.532 | 0.621 | — | — | — | 0.682 | — | — |
| S25 | 0.697 | 0.629 | — | — | — | 0.672 | — | — |
| S20 | 0.632 | 0.637 | — | — | — | 0.586 | — | — |
| S19 | 0.581 | 0.589 | — | — | — | 0.547 | — | — |
| S04 | 0.561 | 0.478 | — | — | — | — | 0.663 | — |
| S01 | 0.548 | 0.551 | — | — | — | — | 0.567 | — |
| S11 | 0.629 | 0.557 | — | — | — | — | 0.550 | — |
| S16 | 0.611 | 0.538 | — | — | — | — | — | 0.631 |
| S12 | 0.710 | 0.539 | — | — | — | — | — | 0.618 |
| S21 | 0.579 | 0.571 | — | — | — | — | — | 0.522 |
| S14 | 0.576 | 0.653 | — | — | — | — | — | 0.502 |
| Cronbach's alpha | — | — | 0.857 | 0.832 | 0.811 | 0.767 | 0.743 | 0.758 |
| Eigenvalue | — | — | 6.483 | 2.867 | 2.310 | 1.451 | 1.162 | 1.100 |
| Variance (%) | — | — | 15.233 | 13.587 | 11.155 | 10.817 | 9.265 | 8.150 |
| Cumulative variance (%) | — | — | 15.233 | 28.820 | 39.975 | 50.793 | 60.058 | 68.208 |

Note. Only loadings of 0.500 or above are shown. Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization. Rotation converged in 10 iterations.

### 4.3. Results of the Validity Test

Pearson correlation analysis (two tailed) was applied to check the validity of the results of the exploratory factor analysis. The strategies clustered into a component should be significantly correlated [44]. The results revealed that, for each component, all the strategies were correlated with the others; thus, the strategies can explain political risk management in that dimension.
Due to space limitations, only the correlations between strategies in the first component are presented in Table 5.

Table 5. Pearson correlations for component one.

|  | S23 | S05 | S17 | S22 | S18 | S02 |
|---|---|---|---|---|---|---|
| S23 | 1.000 | 0.602^b | 0.477^b | 0.467^b | 0.564^b | 0.581^b |
| S05 | — | 1.000 | 0.388^a | 0.491^b | 0.413^b | 0.432^b |
| S17 | — | — | 1.000 | 0.306^b | 0.625^b | 0.589^b |
| S22 | — | — | — | 1.000 | 0.427^a | 0.371^b |
| S18 | — | — | — | — | 1.000 | 0.689^b |
| S02 | — | — | — | — | — | 1.000 |

^a Correlation is significant at the p=0.05 level (two tailed). ^b Correlation is significant at the p=0.01 level (two tailed).
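The validity check reported in Section 4.3 is an ordinary Pearson correlation matrix over the strategies grouped in a component. As an illustration only (the respondent-level data are not published, so the scores below are invented), the matrix for the six component-one strategies can be computed with NumPy as follows:

```python
import numpy as np

# Invented 5-point Likert scores: rows = respondents, columns = the six
# component-one strategies (S23, S05, S17, S22, S18, S02).
labels = ["S23", "S05", "S17", "S22", "S18", "S02"]
scores = np.array([
    [5, 4, 5, 4, 5, 5],
    [4, 4, 4, 3, 4, 4],
    [3, 2, 4, 3, 4, 3],
    [5, 5, 5, 4, 5, 5],
    [2, 3, 3, 2, 3, 3],
    [4, 3, 4, 4, 4, 4],
])

r = np.corrcoef(scores, rowvar=False)  # 6x6 Pearson correlation matrix

# Report the upper triangle, mirroring the layout of Table 5.
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]}-{labels[j]}: r = {r[i, j]:+.3f}")
```

The paper additionally reports a two-tailed significance level for each coefficient; with respondent-level data those would normally come from a routine such as scipy.stats.pearsonr, which returns both r and the p value.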
## 5. Discussion

### 5.1. Connotation of the Components

The connotation of each component is determined by the common characteristics of the measures it contains. On the basis of project management, risk management, and strategic management theories, the six components were named as follows: (1) making correct decisions (C1), (2) reducing unnecessary mistakes (C2), (3) completing full preparations (C3), (4) shaping a good environment (C4), (5) conducting favorable negotiations (C5), and (6) obtaining a reasonable response (C6).

As shown in Figure 1, the six components may be divided into two dimensions of political risk management. Making correct decisions (C1), reducing unnecessary mistakes (C2), and obtaining a reasonable response (C6) are the components related to the reduction of risk exposure. When an ICE has a lower risk exposure, it has a lower risk level. In contrast, completing full preparations (C3), shaping a good environment (C4), and conducting favorable negotiations (C5) are the components associated with the promotion of risk response capability. A higher risk response capability indicates that an ICE has higher viability in an uncertain environment and is less likely to suffer damage arising from political risk.
The components in the exposure reduction and capability promotion dimensions accounted for 38.975% and 29.233% of the total variance, respectively, indicating the leading role of reducing risk exposure and the supplementary role of improving risk response capability in political risk management in international projects.

Figure 1: Characteristics of the components.

In addition, the six components may be divided into groups with proactive, moderate, and passive characteristics. First, the components with a proactive characteristic (C1 and C4, which accounted for 26.040% of the total variance) are those utilized when making decisions or adapting to the local environment. Second, the components with a moderate characteristic (C2 and C3, which accounted for 24.742% of the total variance) are those applicable to a specific market or environment without specific risks. Third, the components with a passive characteristic (C6 and C5, which accounted for 19.420% of the total variance) are those related to specific risks. Compared with the passive strategies, the proactive and moderate strategies occupy more important positions in political risk management. In addition, ICEs are more likely to be resilient to political risks in the global market if they perform well with regard to the proactive strategies.

### 5.2. Application of the Components

As shown in Figure 2, the six components can be regarded as typical management techniques that contribute to political risk management in three different phases: the preproject phase, the project implementation phase, and the postevent phase. In the preproject phase, the premanagement techniques (C1, C5, and C3, which accounted for 39.975% of the total variance) can provide guidance for ICEs to avoid or transfer unacceptable risks and imply a higher offer for retained risks.
In the project implementation phase, the interim management techniques (C4 and C2, which accounted for 24.404% of the total variance) can help ICEs adapt to the particular environmental conditions of the host country and reduce the probability and potential impacts of risks. In the postevent phase, the postmanagement techniques (C6, 10.155% of the total variance) can help ICEs relieve the actual impacts of political risk and accumulate experience in political risk management.

Figure 2: Application of the components.

#### 5.2.1. Making Correct Decisions (C1)

This component explained the largest percentage of the total variance (15.233%) and contains six strategies: (1) conducting market research (S02), (2) choosing a suitable entry mode (S23), (3) choosing suitable projects (S17), (4) building proper relations with host governments (S18), (5) implementing a localization strategy (S05), and (6) controlling core and critical technology (S22). The average values of these six strategies (4.29, 4.22, 4.40, 4.31, 4.19, and 3.91, resp.) were relatively high, ranking 3rd, 5th, 1st, 2nd, 7th, and 12th, respectively, among the 27 strategies. All of them are strongly associated with decision-making activities, which can be regarded as the most important part of political risk management.

Market research is a basic task for ICEs before they enter a country or contract a new project [3, 8]. Information about the target market can be obtained from the websites or reports of international organizations (e.g., the World Bank, the International Monetary Fund, and the World Trade Organization), nongovernmental organizations (e.g., industry associations, commercial banks, and insurance companies), and government agencies (e.g., ministries of construction, ministries of commerce, and foreign ministries) in the host and home countries. Based on a clear understanding of market conditions, ICEs can identify potential political events and their probabilities by using risk assessment.
The results of market research and risk assessment can be used as evidence for decision making [45, 46]. In a high-risk country, ICEs should choose a flexible entry mode (e.g., sole-venture projects and joint-venture projects with short durations) to reduce their exposure to environmental fluctuations [6]. However, in a low-risk country, ICEs can choose a permanent entry mode (e.g., a sole-venture company, a joint-venture company, and branch offices) [6, 47] to seek further development and higher profits. In choosing projects, ICEs should select those that are suited to their capacities, fit their interests, and utilize their own expertise. In addition, if ICEs choose a project greatly desired by the host governments and the local population, a good operating environment will be established, and less political risk will exist [3]. The relationship between ICEs and host governments is a very important factor that has great potential influence on political risk management [4, 48]. In a politically stable country, ICEs that have a good relationship with host governments can obtain more support and benefits, such as convenient approval procedures, less government intervention, smooth information and communication channels, and sufficient government guarantees. In contrast, in politically unstable countries, a low-involvement strategy with host governments is a better choice and can help ICEs avoid becoming involved in political struggles. Localization is a common strategy in international business and can help ICEs integrate into the host society [6]. ICEs with a high level of localization will be free from discrimination and opposition [7, 49]. ICEs are paying increasing attention to the control of core and critical technologies because the impact of technology on competition between ICEs is increasingly fierce. 
Moreover, ICEs with core and critical technologies will have an important position and a stronger voice in negotiations with host governments and experience less government interference in the project implementation phase [6, 50]. #### 5.2.2. Conducting Favorable Negotiations (C5) Three strategies are assigned to this component and account for 9.265% of the total variance: (1) adopting optimal contracts (S04), (2) making a higher tender offer (S01), and (3) obtaining the corresponding guarantee (S11). All of these strategies are strongly related to bidding and contract activities during the early stages of a project.In the business negotiation stage, the first important issue for ICEs is to select internationally accepted standard contracts and exclude contractual clauses and conditions that are not practiced locally [32, 34]. The contract clauses, such as payment terms, liability for a breach of the terms of the dispute resolution clause, intellectual property clauses, force majeure clauses, and confidentiality provisions should be drafted properly since they are the foundation for the execution of the transaction and the settlement of disputes. When a political risk event occurs, the risk premium can potentially compensate for an ICE’s losses. Therefore, if ICEs must confront and retain some risks, an increase in the required return can be provided by making a higher tender offer to protect themselves against those risks [33]. Two common methods are used to increase the tender offer: appropriately increasing the price of materials and using adjustment coefficients. Of course, raising the tender offer cannot make up for all the potential losses, and if the price is too high, the probability of winning the bid will decrease [51]. International guarantee regulations are an efficacious remedy for defects in local remedies, international arbitration, and diplomatic protection [52]. 
ICEs should try their best to come to an agreement with the host government to obtain the proper guarantees to help them restrict improper behaviors by the host government [46, 33], obtain compensation in a timely manner, and avoid increased losses. #### 5.2.3. Completing Full Preparations (C3) This component explained 11.155% of the total variance and contained four strategies: (1) making contingency plans (S10), (2) sending staff to training programs (S15), (3) allocating extra funds (S26), and (4) purchasing risk insurance (S03), ranking 9th, 21st, 22nd, and 10th, respectively, among the 27 strategies. The four strategies are related to the preparations for risk response and should be performed before the commencement of construction works.It is sensible for ICEs to have a written contingency plan for potential political risk to protect their own interests and safety [46, 53]. The content of a contingency plan should include (1) risk prediction and analysis, (2) a dispute settlement mechanism, (3) action roles and responsibilities, (4) equipment and tools, and (5) steps and strategies. The cases in Yemen and Libya demonstrate that a good evacuation plan can effectively protect the safety of international contractors, even during a war. Additionally, appropriate training programs (e.g., antigraft, safety, and self-protection programs) should be provided for employees in accordance with the company’s code of conduct and safety policies and procedures. Political risk management should be supported by allocating extra funds [32], which can enhance the flexibility of ICEs in an uncertain environment. An increasing number of multinational enterprises are willing to buy political risk insurance, which is considered an important measure to manage political risks. Political risk insurance can reduce the uninsured losses caused by various types of political risk, such as war, internal conflict, transfer restrictions, repudiation of debt, and expropriation [16]. 
In some cases, political risk is also used as a bargaining chip for companies to secure long-term loans and settle disputes with governments [54]. Political risk insurance can be purchased from three types of providers: (1) public providers such as the African Trade Insurance Agency and the Asian Development Bank, (2) private providers such as the insurance centers in London and the United States, and (3) reinsurers such as Hannover Re (Germany) and the China Export and Credit Insurance Corporation. #### 5.2.4. Shaping a Good Environment (C4) This component was responsible for 10.817% of the total variance and included four strategies: (1) maintaining good relations with powerful groups (S19), (2) maintaining good relations with the public (S27), (3) linking with local businesses (S20), and (4) building a reputation (S25). These four strategies can help enterprises create a good operating environment in a foreign land.ICEs should maintain good relations with powerful groups (e.g., the media, labor unions, business coalitions, industry associations, consumer associations, and environmental protection groups) in host countries [3]. Not only are powerful groups important influencers in policy making, but they also play significant roles in the economic and social environment [46]. Good relations with powerful local groups are very helpful for ICEs in terms of obtaining the necessary resources and reducing interference. For example, ICEs can obtain useful market and policy information through partnerships with industry associations and business coalitions [55], but they may suffer from extra checks from labor unions because of disputes with local workers [56]. It is well known that opposition to international construction projects is often initiated by the local public [32]. Therefore, maintaining good relations with the public is beneficial for ICEs in terms of avoiding unnecessary trouble. 
Linking with local businesses, such as choosing well-connected local business partners or strengthening cooperation with local enterprises, can help ICEs reduce their image as foreigners [46] and therefore reduce their probability of becoming involved in micropolitical processes [8, 49]. Corporate reputation refers to the extent to which an enterprise garners public trust and praise and the extent to which an enterprise influences the public [57]. Corporate reputation represents the sum of a multinational enterprise’s ability to obtain social recognition, resources, opportunities, and support and to achieve value creation in the host country. A good reputation can allow ICEs to respond quickly to a crisis and enhance their ability to resist risk. Building a corporate reputation is a long-term process, and hence, ICEs must make unremitting efforts (e.g., taking into account the interests of the local public, participating in local public welfare activities, and cultivating a good image via marketing efforts) to create a good reputation [58, 59]. #### 5.2.5. Reducing Unnecessary Mistakes (C2) This component accounted for 13.587% of the total variance and consisted of five strategies: (1) avoiding misconduct (S06), (2) employing capable local partners (S24), (3) supporting environmental protection (S08), (4) abiding by the local culture (S09), and (5) adopting closed management of the construction site (S07). The strategies in this component are strongly related to the policy of reducing ICEs’ unnecessary mistakes in their operations.Many cases have shown that political risk is closely linked to misconduct (e.g., bribery, legal violations, wages in arrears, dishonest acts, environmental pollution, and cultural conflicts) by ICEs during the project implementation phase [34, 56]. 
For example, in a very racist country, ICEs’ discrimination against certain local people may lead to racial tension, thus causing government interference; in a corruption-ridden country, unhealthy relationships between enterprises and the host government may cause protests or opposition from the public. Thus, ICEs should act strictly according to a code of conduct to eliminate political risk caused by their own mistakes. Cultural conflicts often occur in international marketing practice. Respecting and abiding by the local culture will help ICEs to mitigate the risks arising from cultural conflicts [3, 8]. Environmental protection and sustainable development are currently major trends. Many people as well as governments have increasingly begun to pay attention to environmental protection. Thus, taking part in the protection and construction of the ecological environment will help ICEs maintain good relations with the local population. The market skills and knowledge of experienced and qualified local partners (e.g., lawyers, subcontractors, suppliers, and agencies) are effective supplements for ICEs, especially for ICEs that lack practical market experience in the host country. Employing resourceful local partners can help ICEs not only reduce costs and improve work efficiency but also gain legitimacy under institutional pressure [1, 60]. Closed management of construction sites with security systems (e.g., security guards, monitoring devices, and alarm mechanisms) is an effective means for ICEs to prevent crime, terrorist attacks, and external conflicts, thus keeping sites safe in an unstable environment [5]. #### 5.2.6. Obtaining a Reasonable Response (C6) Strategies clustered in this component are generally associated with risk response when a risk occurs, accounting for 10.155% of the total variance. 
This strategy contains four strategies: (1) implementing an emergency plan (S12), (2) settling disputes through renegotiation (S16), (3) changing the operation strategies (S21), and (4) conducting a postresponse assessment (S14).Once political risk events arise, ICEs should immediately implement a risk emergency plan to reduce damage and better protect their security [34]. For example, at the onset of wars, ICEs should promptly contact the embassy, suspend construction work, and evacuate their employees. Organizational capability and flexible adaptability are important weapons that ICEs can use to address difficulties in the emergency plan implementation process. In special cases, ICEs can also seek the support of the general public, local governments, their home countries, international organizations, and the media to cope with intractable threats. After the threat disappears, reassessing the residual risks is an effective means through which ICEs can adjust project plans in terms of resources, schedules, and costs and judge whether there is a need to make a claim, renegotiate, or change the operations strategies [46, 52]. In the course of claims or renegotiations, any disputes should be settled through reasonable channels, such as demanding compensation based on the contract or guarantee treaty, making use of international conventions, or resorting to arbitration or conciliation [54]. It should be noted that successful claims and renegotiations by ICEs are based on adequate evidence of their losses. Therefore, they must protect related documents even in deteriorating situations [5]. Lessons learned from practical project cases are more valuable than those learned from books and can be consolidated through a postresponse assessment. These lessons and knowledge can help ICEs to improve their capacity for political risk management and therefore to effectively address similar political risks in the future. ## 5.1. 
## 5.1. Connotation of the Components

The connotation of each component is determined by the commonalities of the remaining measures it contains. On the basis of project management, risk management, and strategic management theories, the six components were renamed as follows: (1) making correct decisions (C1), (2) reducing unnecessary mistakes (C2), (3) completing full preparations (C3), (4) shaping a good environment (C4), (5) conducting favorable negotiations (C5), and (6) obtaining a reasonable response (C6).

As shown in Figure 1, the six components may be divided into two dimensions of political risk management. Making correct decisions (C1), reducing unnecessary mistakes (C2), and obtaining a reasonable response (C6) are the components related to the reduction of risk exposure. When an ICE has a lower risk exposure, it has a lower risk level. In contrast, completing full preparations (C3), shaping a good environment (C4), and conducting favorable negotiations (C5) are the components associated with the promotion of risk response capability. A higher risk response capability indicates that an ICE has higher viability in an uncertain environment and is less likely to suffer damage arising from political risk. The components in the exposure reduction and capability promotion dimensions accounted for 38.975% and 29.233% of the total variance, respectively, thus indicating the leading role of reducing risk exposure and the supplementary role of improving risk response capability in political risk management in international projects.

Figure 1: Characteristics of the components.

In addition, the six components may be divided into groups with proactive, moderate, and passive characteristics. First, the components with a proactive characteristic (C1 and C4, which accounted for 26.040% of the total variance) are those utilized when making decisions or adapting to the local environment.
Second, the components with a moderate characteristic (C2 and C3, which accounted for 24.742% of the total variance) are those applicable to a specific market or environment without specific risks. Third, the components with a passive characteristic (C6 and C5, which accounted for 19.420% of the total variance) are those related to specific risks. Compared to the passive strategies, the proactive and moderate strategies occupy more important positions in political risk management. In addition, ICEs are more likely to be resilient to political risks in the global market if they perform well with regard to the proactive strategies.

## 5.2. Application of the Components

As shown in Figure 2, the six components can be regarded as typical management techniques that contribute to political risk management in three different phases: the preproject phase, the project implementation phase, and the postevent phase. In the preproject phase, the premanagement techniques (C1, C5, and C3, which accounted for 39.975% of the total variance) can provide guidance for ICEs to avoid or transfer unacceptable risks and imply a higher offer for retained risks. In the project implementation phase, the interim management techniques (C4 and C2, which accounted for 24.404% of the total variance) can help ICEs adapt to the particular environmental conditions of the host country and reduce the probability and potential impacts of risks. In the postevent phase, the postmanagement techniques (C6, which accounted for 10.155% of the total variance) can help ICEs relieve the actual impacts of political risk and accumulate experience in political risk management.

Figure 2: Application of the components.
### 5.2.1. Making Correct Decisions (C1)

This component explained the largest percentage of the total variance (15.233%) and contains six strategies: (1) conducting market research (S02), (2) choosing a suitable entry mode (S23), (3) choosing suitable projects (S17), (4) building proper relations with host governments (S18), (5) implementing a localization strategy (S05), and (6) controlling core and critical technology (S22). The average values of these six strategies (4.29, 4.21, 4.40, 4.31, 4.19, and 3.91, resp.) were relatively high, ranking 3rd, 5th, 1st, 2nd, 7th, and 12th, respectively, among the 27 strategies. All of them are strongly associated with decision-making activities, which can be regarded as the most important part of political risk management.

Market research is a basic task for ICEs before they enter a country or contract a new project [3, 8]. Information about the target market can be obtained from the websites or reports of international organizations (e.g., the World Bank, the International Monetary Fund, and the World Trade Organization), nongovernmental organizations (e.g., industry associations, commercial banks, and insurance companies), and government agencies (e.g., ministries of construction, ministries of commerce, and foreign ministries) in the host and home countries. Based on a clear understanding of market conditions, ICEs can identify potential political events and their probabilities by using risk assessment. The results of market research and risk assessment can be used as evidence for decision making [45, 46]. In a high-risk country, ICEs should choose a flexible entry mode (e.g., sole-venture projects and joint-venture projects with short durations) to reduce their exposure to environmental fluctuations [6]. In contrast, in a low-risk country, ICEs can choose a permanent entry mode (e.g., a sole-venture company, a joint-venture company, and branch offices) [6, 47] to seek further development and higher profits.
In choosing projects, ICEs should select those that are suited to their capacities, fit their interests, and utilize their own expertise. In addition, if ICEs choose a project greatly desired by the host government and the local population, a good operating environment will be established, and less political risk will exist [3]. The relationship between ICEs and host governments is a very important factor that has great potential influence on political risk management [4, 48]. In a politically stable country, ICEs that have a good relationship with host governments can obtain more support and benefits, such as convenient approval procedures, less government intervention, smooth information and communication channels, and sufficient government guarantees. In contrast, in politically unstable countries, a low-involvement strategy with host governments is a better choice and can help ICEs avoid becoming involved in political struggles. Localization is a common strategy in international business and can help ICEs integrate into the host society [6]. ICEs with a high level of localization will be free from discrimination and opposition [7, 49]. ICEs are paying increasing attention to the control of core and critical technologies because technology plays an increasingly decisive role in competition between ICEs. Moreover, ICEs with core and critical technologies will have an important position and a stronger voice in negotiations with host governments and will experience less government interference in the project implementation phase [6, 50].

### 5.2.2. Conducting Favorable Negotiations (C5)

Three strategies are assigned to this component and account for 9.265% of the total variance: (1) adopting optimal contracts (S04), (2) making a higher tender offer (S01), and (3) obtaining the corresponding guarantee (S11).
All of these strategies are strongly related to bidding and contract activities during the early stages of a project.

In the business negotiation stage, the first important issue for ICEs is to select internationally accepted standard contracts and to exclude contractual clauses and conditions that are not practiced locally [32, 34]. Contract clauses such as payment terms, liability for breach of terms, dispute resolution clauses, intellectual property clauses, force majeure clauses, and confidentiality provisions should be drafted properly, since they are the foundation for the execution of the transaction and the settlement of disputes. When a political risk event occurs, the risk premium can potentially compensate for an ICE’s losses. Therefore, if ICEs must confront and retain some risks, an increase in the required return can be obtained by making a higher tender offer to protect themselves against those risks [33]. Two common methods are used to increase the tender offer: appropriately increasing the price of materials and using adjustment coefficients. Of course, raising the tender offer cannot make up for all the potential losses, and if the price is too high, the probability of winning the bid will decrease [51]. International guarantee regulations are an efficacious remedy for defects in local remedies, international arbitration, and diplomatic protection [52]. ICEs should try their best to come to an agreement with the host government to obtain the proper guarantees to help them restrict improper behaviors by the host government [46, 33], obtain compensation in a timely manner, and avoid increased losses.
### 5.2.3. Completing Full Preparations (C3)

This component explained 11.155% of the total variance and contained four strategies: (1) making contingency plans (S10), (2) sending staff to training programs (S15), (3) allocating extra funds (S26), and (4) purchasing risk insurance (S03), ranking 9th, 21st, 22nd, and 10th, respectively, among the 27 strategies. The four strategies are related to the preparations for risk response and should be performed before the commencement of construction works.

It is sensible for ICEs to have a written contingency plan for potential political risk to protect their own interests and safety [46, 53]. The content of a contingency plan should include (1) risk prediction and analysis, (2) a dispute settlement mechanism, (3) action roles and responsibilities, (4) equipment and tools, and (5) steps and strategies. The cases in Yemen and Libya demonstrate that a good evacuation plan can effectively protect the safety of international contractors, even during a war. Additionally, appropriate training programs (e.g., antigraft, safety, and self-protection programs) should be provided for employees in accordance with the company’s code of conduct and safety policies and procedures. Political risk management should be supported by allocating extra funds [32], which can enhance the flexibility of ICEs in an uncertain environment. An increasing number of multinational enterprises are willing to buy political risk insurance, which is considered an important measure for managing political risks. Political risk insurance can reduce the uninsured losses caused by various types of political risk, such as war, internal conflict, transfer restrictions, repudiation of debt, and expropriation [16]. In some cases, political risk insurance is also used as a bargaining chip for companies to secure long-term loans and settle disputes with governments [54].
Political risk insurance can be purchased from three types of providers: (1) public providers, such as the African Trade Insurance Agency and the Asian Development Bank; (2) private providers, such as the insurance centers in London and the United States; and (3) reinsurers, such as Hannover Re (Germany) and the China Export and Credit Insurance Corporation.

### 5.2.4. Shaping a Good Environment (C4)

This component was responsible for 10.817% of the total variance and included four strategies: (1) maintaining good relations with powerful groups (S19), (2) maintaining good relations with the public (S27), (3) linking with local businesses (S20), and (4) building a reputation (S25). These four strategies can help enterprises create a good operating environment in a foreign land.

ICEs should maintain good relations with powerful groups (e.g., the media, labor unions, business coalitions, industry associations, consumer associations, and environmental protection groups) in host countries [3]. Not only are powerful groups important influencers in policy making, but they also play significant roles in the economic and social environment [46]. Good relations with powerful local groups are very helpful for ICEs in terms of obtaining the necessary resources and reducing interference. For example, ICEs can obtain useful market and policy information through partnerships with industry associations and business coalitions [55], but they may suffer from extra checks from labor unions because of disputes with local workers [56]. It is well known that opposition to international construction projects is often initiated by the local public [32]. Therefore, maintaining good relations with the public is beneficial for ICEs in terms of avoiding unnecessary trouble.
Linking with local businesses, such as choosing well-connected local business partners or strengthening cooperation with local enterprises, can help ICEs reduce their image as foreigners [46] and therefore reduce their probability of becoming involved in micropolitical processes [8, 49]. Corporate reputation refers to the extent to which an enterprise garners public trust and praise and the extent to which it influences the public [57]. Corporate reputation represents the sum of a multinational enterprise’s ability to obtain social recognition, resources, opportunities, and support and to achieve value creation in the host country. A good reputation can allow ICEs to respond quickly to a crisis and enhance their ability to resist risk. Building a corporate reputation is a long-term process, and hence, ICEs must make unremitting efforts (e.g., taking into account the interests of the local public, participating in local public welfare activities, and cultivating a good image via marketing efforts) to create a good reputation [58, 59].

### 5.2.5. Reducing Unnecessary Mistakes (C2)

This component accounted for 13.587% of the total variance and consisted of five strategies: (1) avoiding misconduct (S06), (2) employing capable local partners (S24), (3) supporting environmental protection (S08), (4) abiding by the local culture (S09), and (5) adopting closed management of the construction site (S07). The strategies in this component are strongly related to the policy of reducing ICEs’ unnecessary mistakes in their operations.

Many cases have shown that political risk is closely linked to misconduct (e.g., bribery, legal violations, wages in arrears, dishonest acts, environmental pollution, and cultural conflicts) by ICEs during the project implementation phase [34, 56].
For example, in a highly racist country, ICEs’ discrimination against certain local people may lead to racial tension and thus government interference; in a corruption-ridden country, unhealthy relationships between enterprises and the host government may provoke protests or opposition from the public. Thus, ICEs should act strictly according to a code of conduct to eliminate political risk caused by their own mistakes. Cultural conflicts often occur in international marketing practice. Respecting and abiding by the local culture will help ICEs mitigate the risks arising from cultural conflicts [3, 8]. Environmental protection and sustainable development are currently major trends, and both the public and governments pay increasing attention to environmental protection. Thus, taking part in the protection and construction of the ecological environment will help ICEs maintain good relations with the local population. The market skills and knowledge of experienced and qualified local partners (e.g., lawyers, subcontractors, suppliers, and agencies) are effective supplements for ICEs, especially those that lack practical market experience in the host country. Employing resourceful local partners can help ICEs not only reduce costs and improve work efficiency but also gain legitimacy under institutional pressure [1, 60]. Closed management of construction sites with security systems (e.g., security guards, monitoring devices, and alarm mechanisms) is an effective means for ICEs to prevent crime, terrorist attacks, and external conflicts, thus keeping sites safe in an unstable environment [5].

### 5.2.6. Obtaining a Reasonable Response (C6)

Strategies clustered in this component are generally associated with risk response when a risk occurs, accounting for 10.155% of the total variance.
This component contains four strategies: (1) implementing an emergency plan (S12), (2) settling disputes through renegotiation (S16), (3) changing the operation strategies (S21), and (4) conducting a postresponse assessment (S14).

Once political risk events arise, ICEs should immediately implement a risk emergency plan to reduce damage and better protect their security [34]. For example, at the onset of a war, ICEs should promptly contact the embassy, suspend construction work, and evacuate their employees. Organizational capability and flexible adaptability are important weapons that ICEs can use to address difficulties in the emergency plan implementation process. In special cases, ICEs can also seek the support of the general public, local governments, their home countries, international organizations, and the media to cope with intractable threats. After the threat disappears, reassessing the residual risks is an effective means through which ICEs can adjust project plans in terms of resources, schedules, and costs and judge whether there is a need to make a claim, renegotiate, or change the operation strategies [46, 52]. In the course of claims or renegotiations, any disputes should be settled through reasonable channels, such as demanding compensation based on the contract or guarantee treaty, making use of international conventions, or resorting to arbitration or conciliation [54]. It should be noted that successful claims and renegotiations by ICEs are based on adequate evidence of their losses. Therefore, they must protect related documents even in deteriorating situations [5]. Lessons learned from practical project cases are more valuable than those learned from books and can be consolidated through a postresponse assessment. These lessons and knowledge can help ICEs improve their capacity for political risk management and therefore effectively address similar political risks in the future.
Making Correct Decisions (C1) This component explained the largest percentage of the total variance (15.233%) and contains six strategies: (1) conducting market research (S02), (2) choosing a suitable entry mode (S23), (3) choosing suitable projects (S17), (4) building proper relations with host governments (S18), (5) implementing a localization strategy (S05), and (6) controlling core and critical technology (S22). The average values of these six strategies (4.29, 4.21, 4.40, 4.31, 4.19, and 3.91, resp.) were relatively high, ranking 3rd, 5th, 1st, 2nd, 7th, and 12th, respectively, among the 27 strategies. All of them are strongly associated with decision-making activities, which can be observed as the most important part of political risk management.Market research is a basic task for ICEs before they enter a country or contract a new project [3, 8]. Information about the target market can be obtained from the websites or reports of international organizations (e.g., The World Bank, International Monetary Fund, and World Trade Organization), nongovernmental organizations (e.g., industry associations, commercial banks, and insurance companies), and government agencies (e.g., ministries of construction, ministries of commerce, and foreign ministries) in the host and home countries. Based on a clear understanding of market conditions, ICEs can identify potential political events and their probabilities by using risk assessment. The results of market research and risk assessment can be used as evidence for decision making [45, 46]. In a high-risk country, ICEs should choose a flexible entry mode (e.g., sole-venture projects and joint-venture projects with short durations) to reduce their exposure to environmental fluctuations [6]. However, in a low-risk country, ICEs can choose a permanent entry mode (e.g., a sole-venture company, a joint-venture company, and branch offices) [6, 47] to seek further development and higher profits. 
In choosing projects, ICEs should select those that are suited to their capacities, fit their interests, and utilize their own expertise. In addition, if ICEs choose a project greatly desired by the host governments and the local population, a good operating environment will be established, and less political risk will exist [3]. The relationship between ICEs and host governments is a very important factor that has great potential influence on political risk management [4, 48]. In a politically stable country, ICEs that have a good relationship with host governments can obtain more support and benefits, such as convenient approval procedures, less government intervention, smooth information and communication channels, and sufficient government guarantees. In contrast, in politically unstable countries, a low-involvement strategy with host governments is a better choice and can help ICEs avoid becoming involved in political struggles. Localization is a common strategy in international business and can help ICEs integrate into the host society [6]. ICEs with a high level of localization will be free from discrimination and opposition [7, 49]. ICEs are paying increasing attention to the control of core and critical technologies because the impact of technology on competition between ICEs is increasingly fierce. Moreover, ICEs with core and critical technologies will have an important position and a stronger voice in negotiations with host governments and experience less government interference in the project implementation phase [6, 50]. ## 5.2.2. Conducting Favorable Negotiations (C5) Three strategies are assigned to this component and account for 9.265% of the total variance: (1) adopting optimal contracts (S04), (2) making a higher tender offer (S01), and (3) obtaining the corresponding guarantee (S11). 
All of these strategies are strongly related to bidding and contract activities during the early stages of a project. In the business negotiation stage, the first important issue for ICEs is to select internationally accepted standard contracts and exclude contractual clauses and conditions that are not practiced locally [32, 34]. Contract clauses, such as payment terms, liability for breach, dispute resolution clauses, intellectual property clauses, force majeure clauses, and confidentiality provisions, should be drafted properly since they are the foundation for the execution of the transaction and the settlement of disputes. When a political risk event occurs, the risk premium can potentially compensate for an ICE’s losses. Therefore, if ICEs must confront and retain some risks, an increase in the required return can be provided by making a higher tender offer to protect themselves against those risks [33]. Two common methods are used to increase the tender offer: appropriately increasing the price of materials and using adjustment coefficients. Of course, raising the tender offer cannot make up for all the potential losses, and if the price is too high, the probability of winning the bid will decrease [51]. International guarantee regulations are an efficacious remedy for defects in local remedies, international arbitration, and diplomatic protection [52]. ICEs should try their best to come to an agreement with the host government to obtain the proper guarantees to help them restrict improper behaviors by the host government [33, 46], obtain compensation in a timely manner, and avoid increased losses. ## 5.2.3.
Completing Full Preparations (C3) This component explained 11.155% of the total variance and contained four strategies: (1) making contingency plans (S10), (2) sending staff to training programs (S15), (3) allocating extra funds (S26), and (4) purchasing risk insurance (S03), ranking 9th, 21st, 22nd, and 10th, respectively, among the 27 strategies. The four strategies are related to the preparations for risk response and should be performed before the commencement of construction works. It is sensible for ICEs to have a written contingency plan for potential political risk to protect their own interests and safety [46, 53]. The content of a contingency plan should include (1) risk prediction and analysis, (2) a dispute settlement mechanism, (3) action roles and responsibilities, (4) equipment and tools, and (5) steps and strategies. The cases in Yemen and Libya demonstrate that a good evacuation plan can effectively protect the safety of international contractors, even during a war. Additionally, appropriate training programs (e.g., antigraft, safety, and self-protection programs) should be provided for employees in accordance with the company’s code of conduct and safety policies and procedures. Political risk management should be supported by allocating extra funds [32], which can enhance the flexibility of ICEs in an uncertain environment. An increasing number of multinational enterprises are willing to buy political risk insurance, which is considered an important measure to manage political risks. Political risk insurance can reduce the uninsured losses caused by various types of political risk, such as war, internal conflict, transfer restrictions, repudiation of debt, and expropriation [16]. In some cases, political risk insurance is also used as a bargaining chip for companies to secure long-term loans and settle disputes with governments [54].
Political risk insurance can be purchased from three types of providers: (1) public providers such as the African Trade Insurance Agency and the Asian Development Bank, (2) private providers such as the insurance centers in London and the United States, and (3) reinsurers such as Hannover Re (Germany) and the China Export and Credit Insurance Corporation. ## 5.2.4. Shaping a Good Environment (C4) This component was responsible for 10.817% of the total variance and included four strategies: (1) maintaining good relations with powerful groups (S19), (2) maintaining good relations with the public (S27), (3) linking with local businesses (S20), and (4) building a reputation (S25). These four strategies can help enterprises create a good operating environment in a foreign land. ICEs should maintain good relations with powerful groups (e.g., the media, labor unions, business coalitions, industry associations, consumer associations, and environmental protection groups) in host countries [3]. Not only are powerful groups important influencers in policy making, but they also play significant roles in the economic and social environment [46]. Good relations with powerful local groups are very helpful for ICEs in terms of obtaining the necessary resources and reducing interference. For example, ICEs can obtain useful market and policy information through partnerships with industry associations and business coalitions [55], but they may suffer from extra checks from labor unions because of disputes with local workers [56]. It is well known that opposition to international construction projects is often initiated by the local public [32]. Therefore, maintaining good relations with the public is beneficial for ICEs in terms of avoiding unnecessary trouble.
Linking with local businesses, such as choosing well-connected local business partners or strengthening cooperation with local enterprises, can help ICEs reduce their image as foreigners [46] and therefore reduce their probability of becoming involved in micropolitical processes [8, 49]. Corporate reputation refers to the extent to which an enterprise garners public trust and praise and the extent to which an enterprise influences the public [57]. Corporate reputation represents the sum of a multinational enterprise’s ability to obtain social recognition, resources, opportunities, and support and to achieve value creation in the host country. A good reputation can allow ICEs to respond quickly to a crisis and enhance their ability to resist risk. Building a corporate reputation is a long-term process, and hence, ICEs must make unremitting efforts (e.g., taking into account the interests of the local public, participating in local public welfare activities, and cultivating a good image via marketing efforts) to create a good reputation [58, 59]. ## 5.2.5. Reducing Unnecessary Mistakes (C2) This component accounted for 13.587% of the total variance and consisted of five strategies: (1) avoiding misconduct (S06), (2) employing capable local partners (S24), (3) supporting environmental protection (S08), (4) abiding by the local culture (S09), and (5) adopting closed management of the construction site (S07). The strategies in this component are strongly related to the policy of reducing ICEs’ unnecessary mistakes in their operations. Many cases have shown that political risk is closely linked to misconduct (e.g., bribery, legal violations, wages in arrears, dishonest acts, environmental pollution, and cultural conflicts) by ICEs during the project implementation phase [34, 56].
For example, in a very racist country, ICEs’ discrimination against certain local people may lead to racial tension, thus causing government interference; in a corruption-ridden country, unhealthy relationships between enterprises and the host government may cause protests or opposition from the public. Thus, ICEs should act strictly according to a code of conduct to eliminate political risk caused by their own mistakes. Cultural conflicts often occur in international marketing practice. Respecting and abiding by the local culture will help ICEs to mitigate the risks arising from cultural conflicts [3, 8]. Environmental protection and sustainable development are currently major trends. Many people as well as governments have increasingly begun to pay attention to environmental protection. Thus, taking part in the protection and construction of the ecological environment will help ICEs maintain good relations with the local population. The market skills and knowledge of experienced and qualified local partners (e.g., lawyers, subcontractors, suppliers, and agencies) are effective supplements for ICEs, especially for ICEs that lack practical market experience in the host country. Employing resourceful local partners can help ICEs not only reduce costs and improve work efficiency but also gain legitimacy under institutional pressure [1, 60]. Closed management of construction sites with security systems (e.g., security guards, monitoring devices, and alarm mechanisms) is an effective means for ICEs to prevent crime, terrorist attacks, and external conflicts, thus keeping sites safe in an unstable environment [5]. ## 5.2.6. Obtaining a Reasonable Response (C6) Strategies clustered in this component are generally associated with risk response when a risk occurs, accounting for 10.155% of the total variance. 
This component contains four strategies: (1) implementing an emergency plan (S12), (2) settling disputes through renegotiation (S16), (3) changing the operation strategies (S21), and (4) conducting a postresponse assessment (S14). Once political risk events arise, ICEs should immediately implement a risk emergency plan to reduce damage and better protect their security [34]. For example, at the onset of wars, ICEs should promptly contact the embassy, suspend construction work, and evacuate their employees. Organizational capability and flexible adaptability are important weapons that ICEs can use to address difficulties in the emergency plan implementation process. In special cases, ICEs can also seek the support of the general public, local governments, their home countries, international organizations, and the media to cope with intractable threats. After the threat disappears, reassessing the residual risks is an effective means through which ICEs can adjust project plans in terms of resources, schedules, and costs and judge whether there is a need to make a claim, renegotiate, or change the operations strategies [46, 52]. In the course of claims or renegotiations, any disputes should be settled through reasonable channels, such as demanding compensation based on the contract or guarantee treaty, making use of international conventions, or resorting to arbitration or conciliation [54]. It should be noted that successful claims and renegotiations by ICEs are based on adequate evidence of their losses. Therefore, they must protect related documents even in deteriorating situations [5]. Lessons learned from practical project cases are more valuable than those learned from books and can be consolidated through a postresponse assessment. These lessons and knowledge can help ICEs to improve their capacity for political risk management and therefore to effectively address similar political risks in the future. ## 6.
Conclusions Political risk is a major problem encountered by ICEs in international construction projects. It is thus necessary to identify the strategies that can help ICEs address political risk. On the basis of a comprehensive literature review, 27 possible political risk management strategies were identified. The results of the questionnaire survey indicated that all the strategies were important for political risk management in international construction projects. Five strategies, including (1) choosing suitable projects (S17), (2) building proper relations with host governments (S18), (3) conducting market research (S02), (4) avoiding misconduct (S06), and (5) choosing a suitable entry mode (S23), were the most important strategies according to their average values. Through the exploratory factor analysis, the 27 strategies were clustered into six components: (1) making correct decisions (C1), (2) reducing unnecessary mistakes (C2), (3) completing full preparations (C3), (4) shaping a good environment (C4), (5) conducting favorable negotiations (C5), and (6) obtaining a reasonable response (C6). The components (C1, C2, and C6) of the exposure decline dimension have higher contributions to political risk management than the components (C4, C3, and C5) of the capacity promotion dimension. In addition, components with a proactive characteristic (C1 and C4), components with a moderate characteristic (C2 and C3), and components with a passive characteristic (C5 and C6) can be ranked from the most to the least important for political risk management according to their cumulative variance. Furthermore, the six components independently contribute to political risk management in three different phases. In the preproject phase, premanagement techniques (C1, C3, and C5) can help ICEs avoid or transfer unacceptable risks and improve quotes for the retained risk.
In the project implementation phase, interim management techniques (C2 and C4) are conducive to reducing risk and promoting ICEs’ adaptation to the overseas construction market. In the postevent phase, postmanagement techniques (C6) are useful for ICEs to eliminate their actual risk and to accumulate experience with political risk management. The high cumulative variance of the premanagement strategies indicated that the main tasks of political risk management should be performed in the early stage of a project. Compared to the respondents in the academic group, who were from different countries, all the respondents in the practitioner group were from Chinese construction enterprises, which is a limitation of this study. Nevertheless, the results of the independent-sample t-test revealed no significant differences in the responses between academics and practitioners. In addition, conditions in the global market are typically the same for ICEs from different countries. The relevant experience of the respondents is a reference for all practitioners, regardless of their nationalities. However, the characteristics of different enterprises and the actual conditions in different countries should be carefully considered when implementing these strategies. Further work could focus on evaluating these strategies with samples from different enterprises or different countries to increase the practical validity of the results. Despite its limitations, this study is a useful reference for academics and practitioners in terms of gaining an in-depth understanding of political risk management in international construction projects and provides guidance for ICEs to manage political risk when venturing outside their home countries. --- *Source: 1016384-2018-06-25.xml*
2018
# Effects of the Rock Bridge Ligament on Fracture and Energy Evolution of Preflawed Granite Exposed to Dynamic Loads **Authors:** Kaihua Sun; Xiong Wu; Xuefeng Yi; Yu Wang **Journal:** Shock and Vibration (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1016412 --- ## Abstract This paper aims to reveal the mechanical properties, energy evolution characteristics, and dynamic rupture process of preflawed granite under impact loading with different rock bridge angles and strain rates. A series of dynamic impact experiments were conducted using the split Hopkinson pressure bar (SHPB) testing system to analyze and study the overall rock fracture process. Under the impact load, the peak stress of granite increases with increasing rock bridge angle and strain rate, although the rate of increase gradually diminishes. The peak strain also increases gradually with increasing rock bridge angle, but there is an upper limit value; the total input strain energy increases with increasing strain rate and rock bridge angle. It is shown that the higher the strain rate, the higher the unit dissipation energy, and the greater the degree of rock fragmentation. For rock under impact loads, the crack first initiates from the wing end of the prefabricated flaw, the preflaw closes gradually, and finally the crack propagates through the locking section, leading to the coalescence of the rock bridge. With increasing strain rate, the fragmentation degree of the specimen increases asymptotically, and the average fragment size of the specimen decreases. It is suggested that the stability of large rock slopes is controlled by the locked section, and understanding the fracture evolution of the rock bridge is the key to slope instability prediction. --- ## Body ## 1.
Introduction In the current mining process, blasting is an important means of extraction, and owing to the effects of excavation and blasting operations, the rock masses in the open pit slope are disturbed by dynamic loads to varying degrees; therefore, it is particularly necessary to reveal the influence of impact dynamics on rock instability [1–3]. The natural fractures in the mine rock are not continuous, and the discontinuous fractures form rock bridges (i.e., rock-locked sections) [4, 5]. The stress conditions at rock bridges are often more complex due to the greater shear stresses, and the presence of rock bridges can increase the strength of the weak surfaces of the rock mass, which is called the locking effect of rock bridges on intermittently jointed rock masses [6, 7]. The fracture of rock bridges leads to the connection of joints, and destabilization damage will occur only when all rock bodies are connected by the preexisting fractures [8]. Therefore, the fracture mechanism of rock bridges needs to be studied to ensure the stability of rock masses in open pit mines and the safety of mining. Impact loading tests differ from conventional static and cyclic loading tests: they apply high strain rates to rock, in contrast to the medium-to-low strain rates of cyclic or fatigue loading conditions [9–13]. The properties of rock bridges and the current status of slope instability have been widely studied by scholars worldwide. Yin et al. [9] used split Hopkinson pressure bar and high-speed digital image correlation (DIC) technology to investigate the full-field deformation characteristics of phyllite under the Brazilian disc test. Wong et al.
[14] studied the cracking process and strength characteristics of rock-like fracture materials with different flaw inclination angles, different inclinations of rock bridges, and different frictional characteristics of the crack contact surface by uniaxial compression tests. Gomez et al. [15] used a high-speed camera to take photoelastic pictures of the specimen in dynamic Brazilian disc (BD) tests, where the specimen was subjected to impact loading; after about 30 μs of stress wave propagation, the specimen reached stress equilibrium at both ends, and its internal stress state was consistent with that of a specimen under quasi-static conditions. Zhang and Zhao [16] combined an SEM scanner, a high-speed camera, DIC high-speed photography, and an SHPB device to reproduce the whole process of rock surface impact damage by multiple means simultaneously. Li et al. [17] proposed a test method for the dynamic compression strength of rocks under prestatic load and measured the dynamic compression strength of sandstone under preload, and the results showed that the dynamic compression strength of the specimen under preload was higher than its compression strength under pure static load or pure dynamic load. Gong et al. [18] found that the dissipation energy increased linearly with the increase of incident energy by conducting SHPB tests, and the linear energy dissipation law in dynamic compression tests was confirmed by connecting two different inclined paths by a critical incident energy. Yang et al. [19] conducted impact compression tests at different impact velocities on samples of red sandstone, gray sandstone, and granite, which are common in rock engineering, to compare their stress wave propagation characteristics, dynamic stress-strain relationships, degrees of fragmentation, and energy dissipation laws. Wang et al.
[20] investigated the mechanical properties and energy dissipation law of sandstone under impact loading and explored how these change under different temperature conditions. Currently, most studies on the mechanical properties of jointed rocks are conducted by means of static loading tests; in addition, most studies on the characteristics of rock masses under impact dynamic loading are focused on intact rock. The damage and fracture mechanisms of preflawed rock subjected to dynamic impact loads are not well understood. Therefore, in order to reveal the effects of different joint angles and strain rates on the dynamic response of rock materials, SHPB impact tests were conducted on specimens with prefabricated fissures at different angles to analyze their effects on the dynamic stress-strain behavior, energy evolution, and dynamic fracturing process. ## 2. Test Materials and Methods ### 2.1. Specimen Preparation The test material was obtained from a metal mine in western China. According to XRD analysis, the granite is mainly composed of sodium feldspar, quartz, magnesiohornblende, orthoclase, and biotite (black mica), and the specific composition is shown in Figure 1. SEM imaging was performed on different granite specimens, and the SEM morphology of the specimens is shown in Figure 2. The SEM analysis shows that microcracks, pores, and mineral interfaces were distributed within the rock. Figure 1 Mineral composition analysis using the XRD method for the granite samples. Figure 2 SEM results of rock specimens showing their mesoscopic structure. Referring to the method suggested by ISRM for impact tests, the rock cores were drilled into a cylindrical shape with a diameter (D) of 50 mm and a height (H) of 50 mm [21].
The granite specimens were prepared by polishing the ends of the specimens to ensure that the unevenness error was less than 0.05 mm and the parallelism error at both ends was less than 0.1 mm. In order to simulate the locking effect of the rock, two prefabricated fractures with a 1 mm aperture were cut into each specimen by the waterjet cutting method, spraying high-pressure water mixed with abrasive from a 0.75 mm diameter nozzle. As shown in Figure 3, each rock specimen contains an oblique and a horizontal fissure, each 8 mm long; the inclination angles β of the oblique fissures are 30°, 50°, and 70°, and the length of the locking section is set to 12 mm. Figure 3 Preparation of the preflawed granite samples used for the SHPB tests: (a) β = 30°, (b) β = 50°, and (c) β = 70°. ### 2.2. Test Apparatus The impact tests were performed on a split Hopkinson pressure bar (SHPB) test system to determine the dynamic stress-strain, stress-time, and strain-time relationships of rock specimens under impact loading and, on this basis, to study the dynamic constitutive relationships in depth. As shown in Figure 4, the SHPB device consists of a power system, an impact rod (bullet), an input rod, an output rod, an absorption rod, and a measurement and recording system, with the tested specimen sandwiched between the input and output rods; the auxiliary equipment includes a super-dynamic strain gauge, a transient waveform recorder, a computer, strain gauges, a bullet velocity tester, and other devices. Figure 4 SHPB testing system: (a) Hopkinson compression bar test setup; (b) 30,000-frame high-speed camera. The rock specimens were tested at impact pressures of 0.07, 0.09, 0.10, and 0.11 MPa, and the average strain rates of the rocks corresponding to these impact pressures were 31.4 s−1, 53.3 s−1, 78.4 s−1, and 143.8 s−1.
The test data were collected with the help of the DHDAS dynamic signal acquisition system, and a 30,000-frame high-speed camera was used to capture the rock crack extension process. The whole test process is shown in Figure 4. ### 2.3. Test Idea In the SHPB experiment, the impact rod strikes the input rod along the axial direction at a certain velocity, generating a compressive stress wave in the input rod. When this incident wave reaches the specimen, the impedance mismatch at the bar-specimen interface causes part of the wave to be reflected back into the input rod as a reflected stress wave, while the remainder propagates through the specimen into the output rod as a transmitted stress wave, which is finally “intercepted” by the absorption rod and dissipated by the damping device. The strain signals of the incident, reflected, and transmitted stress waves can be accurately recorded by resistive strain gauges attached to the input and output rods, which makes it easy to determine the dynamic loads and displacements at both ends of the sample and to derive the stress-strain relationship of the material [22, 23]. The basic principle of SHPB experiments is the theory of elastic stress waves in slender rods.
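The partial reflection and transmission at the bar-specimen interface described above can be sketched numerically. The function below is a minimal illustration from standard one-dimensional elastic wave theory (equal cross sections assumed); the steel and granite densities and wave speeds are illustrative assumptions, not values from the paper.

```python
def interface_coefficients(Z1, Z2):
    """Stress reflection/transmission coefficients for a 1-D elastic wave
    crossing an interface from acoustic impedance Z1 into Z2
    (equal cross-sectional areas assumed)."""
    R = (Z2 - Z1) / (Z2 + Z1)   # reflected stress / incident stress
    T = 2.0 * Z2 / (Z1 + Z2)    # transmitted stress / incident stress
    return R, T

# Illustrative impedances Z = rho * c (assumed material properties):
Z_bar = 7800.0 * 5100.0    # steel input rod
Z_rock = 2700.0 * 4500.0   # granite specimen

R, T = interface_coefficients(Z_bar, Z_rock)

# Energy check: the energy flux of a 1-D stress wave is proportional to
# sigma^2 / Z, so the reflected fraction R^2 plus the transmitted fraction
# T^2 * Z1 / Z2 must sum to 1.
balance = R**2 + T**2 * Z_bar / Z_rock
```

Because the rock impedance is lower than the bar impedance, R is negative: the reflected pulse changes sign, which is why the reflected wave in the input rod is tensile for a compressive incident pulse.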
According to the theory of stress wave propagation and the assumption of one-dimensional stress, the interface displacements and forces can be written as follows [24, 25]:

(1) $u_1 = C_0 \int_0^t (\varepsilon_i - \varepsilon_r)\, d\tau$, $F_1 = EA(\varepsilon_i + \varepsilon_r)$, $u_2 = C_0 \int_0^t \varepsilon_t\, d\tau$, $F_2 = EA\varepsilon_t$,

where $u_1$ and $u_2$ are the cross-sectional displacements of the incident and transmitted rods in contact with the specimen, respectively; $C_0$ is the elastic longitudinal wave velocity of the waveguide rod; $F_1$ and $F_2$ are the forces acting on the two ends of the specimen, respectively; $E$ is the elastic modulus of the waveguide rod; and $A$ is the cross-sectional area of the waveguide rod. From the above equations, the stress $\sigma_s$, strain $\varepsilon_s$, and strain rate $\dot{\varepsilon}_s$ of the specimen follow as

(2) $\sigma_s = \dfrac{F_1 + F_2}{2A_s} = \dfrac{EA}{2A_s}(\varepsilon_i + \varepsilon_r + \varepsilon_t)$, $\varepsilon_s = \dfrac{u_1 - u_2}{l_s} = \dfrac{C_0}{l_s}\int_0^t (\varepsilon_i - \varepsilon_r - \varepsilon_t)\, d\tau$, $\dot{\varepsilon}_s = \dfrac{d\varepsilon_s}{dt} = \dfrac{C_0}{l_s}(\varepsilon_i - \varepsilon_r - \varepsilon_t)$,

where $l_s$ is the length of the specimen and $A_s$ is its cross-sectional area. According to the specimen stress uniformity assumption, $F_1 = F_2$, and by one-dimensional stress wave theory,

(3) $\varepsilon_i + \varepsilon_r = \varepsilon_t$.

Using only the data of the incident wave $\varepsilon_i$ and the transmitted wave $\varepsilon_t$, we obtain

(4) $\sigma_s = \dfrac{EA}{A_s}\varepsilon_t$, $\varepsilon_s = \dfrac{2C_0}{l_s}\int_0^t (\varepsilon_i - \varepsilon_t)\, d\tau$, $\dot{\varepsilon}_s = \dfrac{2C_0}{l_s}(\varepsilon_i - \varepsilon_t)$.

The stress, strain, and strain rate histories of the specimen are thus obtained. Equations (2) and (4) are the “three-wave method” and “two-wave method” for measuring the stress-strain relationship at high strain rates in conventional SHPB experiments.
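Equations (2)–(4) can be checked with a short numerical sketch. The bar parameters and the half-sine gauge signals below are illustrative assumptions (not the paper's measured data); under the stress uniformity relation of Eq. (3), the three-wave and two-wave results must coincide exactly.

```python
import numpy as np

# Illustrative bar/specimen parameters (assumed, not from the paper):
E = 210e9               # elastic modulus of the waveguide rods, Pa
A = np.pi * 0.025**2    # rod cross-sectional area, m^2 (50 mm diameter)
As = np.pi * 0.025**2   # specimen cross-sectional area, m^2
C0 = 5100.0             # longitudinal wave speed in the rods, m/s
ls = 0.05               # specimen length, m

def cum_integral(y, t):
    """Cumulative time integral of y(t) (simple rectangle rule)."""
    return np.cumsum(y) * (t[1] - t[0])

def three_wave(t, eps_i, eps_r, eps_t):
    """Eq. (2): stress, strain, strain rate from all three gauge signals."""
    sigma = E * A / (2.0 * As) * (eps_i + eps_r + eps_t)
    rate = C0 / ls * (eps_i - eps_r - eps_t)
    strain = C0 / ls * cum_integral(eps_i - eps_r - eps_t, t)
    return sigma, strain, rate

def two_wave(t, eps_i, eps_t):
    """Eq. (4): stress, strain, strain rate from incident and transmitted waves."""
    sigma = E * A / As * eps_t
    rate = 2.0 * C0 / ls * (eps_i - eps_t)
    strain = 2.0 * C0 / ls * cum_integral(eps_i - eps_t, t)
    return sigma, strain, rate

# Synthetic half-sine pulses; Eq. (3) supplies eps_r once uniformity holds.
t = np.linspace(0.0, 200e-6, 400)
eps_i = 1e-3 * np.sin(np.pi * t / 200e-6)
eps_t = 0.6 * eps_i
eps_r = eps_t - eps_i   # stress uniformity assumption, Eq. (3)

s3, e3, r3 = three_wave(t, eps_i, eps_r, eps_t)
s2, e2, r2 = two_wave(t, eps_i, eps_t)
```

The agreement of the two methods under Eq. (3) is a quick algebraic self-check of the derivation; with real gauge data the two estimates differ slightly, and their discrepancy is often used to verify dynamic stress equilibrium.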
Because the incident rod in the SHPB experiment is long, the incident waveform is distorted to a certain extent by dispersion during propagation, and this is more serious for large-diameter SHPB devices; since the specimen material is homogeneous, the propagation distance is short, and the filtering effect is good, the transmitted wave shows little dispersion, so the incident and transmitted waves of the two-wave method are usually used. According to the law of conservation of energy, the dissipation energy $W_s(t)$ of the impact compression experimental specimen is

(5) $W_s(t) = W_i(t) - W_r(t) - W_t(t)$,

where $W_i(t)$, $W_r(t)$, and $W_t(t)$ are the incident, reflected, and transmitted wave energies, respectively:

(6) $W_i(t) = E C_0 A \int_0^t \varepsilon_i^2(\tau)\, d\tau$, $W_r(t) = E C_0 A \int_0^t \varepsilon_r^2(\tau)\, d\tau$, $W_t(t) = E C_0 A \int_0^t \varepsilon_t^2(\tau)\, d\tau$.
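The energy bookkeeping of Eqs. (5) and (6) can be sketched in a few lines. The rod parameters and pulse shapes below are illustrative assumptions; with a transmitted amplitude of 60% of the incident pulse and the reflected wave fixed by the uniformity relation, the specimen absorbs a fixed fraction of the incident energy.

```python
import numpy as np

# Illustrative rod parameters (assumed, not from the paper):
E = 210e9               # rod elastic modulus, Pa
A = np.pi * 0.025**2    # rod cross-sectional area, m^2
C0 = 5100.0             # rod longitudinal wave speed, m/s

def wave_energy(eps, t):
    """Energy carried by a strain pulse eps(t) in the rod, Eq. (6)
    (rectangle-rule time integral of eps^2)."""
    return E * C0 * A * np.sum(eps**2) * (t[1] - t[0])

def dissipated_energy(eps_i, eps_r, eps_t, t):
    """Energy absorbed by the specimen, Eq. (5): Ws = Wi - Wr - Wt."""
    return (wave_energy(eps_i, t)
            - wave_energy(eps_r, t)
            - wave_energy(eps_t, t))

# Synthetic pulses: 60% of the incident strain amplitude is transmitted.
t = np.linspace(0.0, 200e-6, 400)
eps_i = 1e-3 * np.sin(np.pi * t / 200e-6)
eps_t = 0.6 * eps_i
eps_r = eps_t - eps_i   # = -0.4 * eps_i, from the uniformity relation

Ws = dissipated_energy(eps_i, eps_r, eps_t, t)
ratio = Ws / wave_energy(eps_i, t)   # = 1 - 0.4**2 - 0.6**2 = 0.48 here
```

Since the energies scale with the squared amplitudes, the absorbed fraction is 1 − 0.4² − 0.6² = 0.48 of the incident energy in this synthetic case, independent of the pulse shape.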
In order to simulate the locking effect of the rock, double prefabricated fractures were prepared in the specimen by using waterjet cutting method, and two prefabricated fractures with 1 mm pore size were cut by spraying high-pressure water mixed with abrasive from a 0.75 mm diameter nozzle. As shown in Figure 3, the rock specimen contains oblique and horizontal fissures with a length of 8 mm, the angles of approach of the oblique fissures are 30°, 50°, and 70°, and the length of the locking section is set to 12 mm.Figure 3 Preparation of the preflawed granite samples used for the SHPB tests. (a)β = 30, (b) β = 50, and (c) β = 70. (a)(b)(c) ## 2.2. Test Apparatus The impact test was done on a separated Hopkinson pressure bar (SHPB) test system to determine the dynamic stress-strain, stress-time, and strain-time relationships of rock specimens under impact loading and to carry out an in-depth study of dynamic intrinsic relationships based on this. As shown in Figure4, the SHPB device consists of a power system, an impact rod (bullet), an input rod, an output rod, an absorption rod, and a measurement recording system, with the tested specimen sandwiched between the input and output rods and auxiliary equipment: a super dynamic strain gauge, a transient waveform recorder, a computer, strain gauges, bullet velocity tests, and other devices.Figure 4 SHPB testing system. (a) Hopkinson compression bar test setup; (b) 30,000-frame high-speed camera. (a)(b)The rock specimens were tested with different impact strain rates of 0.07, 0.09, 0.10, and 0.11 MPa, and the average strain rates of the rocks corresponding to different impact pressures were 31.4 s−1, 53.3 s−1, 78.4 s−1, and 143.8 s−1. The test data were collected with the help of DHDAS dynamic signal acquisition system, and 30,000-frame high-speed camera was used to capture the rock crack extension process. The whole test process is shown in Figure 4. ## 2.3. 
In the SHPB experiment, the striker bar impacts the input bar axially at a certain velocity and generates a compressive stress wave in the input bar. When this compressive pulse reaches the free end of the striker, it is reflected as a tensile pulse that unloads the striker, so the length of the incident pulse generated in the input bar is approximately twice the striker length. When the incident stress wave reaches the specimen, the impedance mismatch at the interface causes part of the wave to be reflected back into the input bar as a reflected stress wave, while the remainder passes through the specimen into the output bar as a transmitted stress wave, which is finally "intercepted" by the absorption bar and dissipated in the damping device. The strain signals of the incident, reflected, and transmitted stress waves can be accurately recorded by resistance strain gauges bonded to the input and output bars, which makes it straightforward to determine the dynamic loads and displacements at both ends of the specimen and to derive the stress–strain relationship of the material [22, 23].

The basic principle of the SHPB experiment is the theory of elastic stress waves in slender rods.
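The relation between striker length and incident pulse is a standard one-dimensional SHPB result: the compressive pulse reflects from the striker's free end as a tensile pulse, so the incident pulse occupies twice the striker length. A minimal sketch (the striker length and wave speed below are hypothetical illustration values, not dimensions from this study):

```python
def incident_pulse(striker_length_m, wave_speed_ms):
    """Length and duration of the incident pulse generated by a striker bar.

    Standard one-dimensional SHPB relation: the pulse length equals twice
    the striker length, and its duration is that length divided by the
    elastic wave speed in the bar.
    """
    pulse_length = 2.0 * striker_length_m      # wave travels down the striker and back
    duration = pulse_length / wave_speed_ms    # seconds
    return pulse_length, duration

# Hypothetical example: 0.4 m steel striker, elastic wave speed ~5000 m/s.
length, duration = incident_pulse(0.4, 5000.0)  # -> 0.8 m, 160 microseconds
```

This is why a longer striker produces a longer loading pulse, and hence a lower achievable strain rate for a given impact velocity.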
According to one-dimensional elastic stress wave theory, the displacements and forces at the two bar–specimen interfaces can be written as [24, 25]:

$$u_1=C_0\int_0^t(\varepsilon_i-\varepsilon_r)\,d\tau,\qquad F_1=EA(\varepsilon_i+\varepsilon_r),\qquad u_2=C_0\int_0^t\varepsilon_t\,d\tau,\qquad F_2=EA\varepsilon_t, \tag{1}$$

where u1 and u2 are the displacements of the input-bar and output-bar cross sections in contact with the specimen, respectively; C0 is the elastic longitudinal wave velocity of the waveguide bars; F1 and F2 are the forces acting on the two ends of the specimen; E is the elastic modulus of the waveguide bars; and A is their cross-sectional area.

From the above equations, the stress σs, strain εs, and strain rate ε̇s of the specimen follow as

$$\sigma_s=\frac{F_1+F_2}{2A_s}=\frac{EA}{2A_s}(\varepsilon_i+\varepsilon_r+\varepsilon_t),\qquad \varepsilon_s=\frac{u_1-u_2}{l_s}=\frac{C_0}{l_s}\int_0^t(\varepsilon_i-\varepsilon_r-\varepsilon_t)\,d\tau,\qquad \dot{\varepsilon}_s=\frac{C_0}{l_s}(\varepsilon_i-\varepsilon_r-\varepsilon_t), \tag{2}$$

where ls is the length of the specimen and As is its cross-sectional area. According to the stress uniformity assumption for the specimen, F1 = F2, and one-dimensional stress wave theory then gives

$$\varepsilon_i+\varepsilon_r=\varepsilon_t. \tag{3}$$

Using only the incident wave εi and the transmitted wave εt, we obtain

$$\sigma_s=\frac{EA}{A_s}\varepsilon_t,\qquad \varepsilon_s=\frac{2C_0}{l_s}\int_0^t(\varepsilon_i-\varepsilon_t)\,d\tau,\qquad \dot{\varepsilon}_s=\frac{2C_0}{l_s}(\varepsilon_i-\varepsilon_t), \tag{4}$$

which yield the stress, strain, and strain-rate histories of the specimen. Equations (2) and (4) are, respectively, the "three-wave method" and the "two-wave method" for measuring the stress–strain relationship at high strain rates in conventional SHPB experiments.
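As a concrete illustration, the two-wave reduction of Eq. (4), together with the wave-energy integrals used later in the energy analysis, can be sketched in pure Python with trapezoidal integration. All signal values and bar properties below are hypothetical, not data from this study:

```python
def two_wave_reduction(eps_i, eps_t, dt, E, A0, As, ls, C0):
    """Two-wave SHPB reduction: specimen stress, strain, and strain rate
    from sampled incident (eps_i) and transmitted (eps_t) strain histories."""
    stress = [E * A0 / As * et for et in eps_t]                 # sigma_s = E*A0/As * eps_t
    rate = [2.0 * C0 / ls * (ei - et) for ei, et in zip(eps_i, eps_t)]
    strain = [0.0] * len(rate)
    for k in range(1, len(rate)):                               # trapezoidal time integral of the rate
        strain[k] = strain[k - 1] + 0.5 * (rate[k - 1] + rate[k]) * dt
    return stress, strain, rate

def wave_energy(eps, dt, E0, C0, A0):
    """Energy carried by a stress wave: W = E0*C0*A0 * integral of eps^2 dt."""
    total = 0.0
    for k in range(1, len(eps)):                                # trapezoidal integral of eps^2
        total += 0.5 * (eps[k - 1] ** 2 + eps[k] ** 2) * dt
    return E0 * C0 * A0 * total

# Hypothetical constant-amplitude signals on a 50 mm specimen.
n, dt = 11, 1e-6
eps_i = [1.0e-3] * n
eps_t = [4.0e-4] * n
stress, strain, rate = two_wave_reduction(eps_i, eps_t, dt,
                                          E=200e9, A0=2.0e-3, As=2.0e-3,
                                          ls=0.05, C0=5000.0)
```

The dissipated energy of Eq. (5) then follows as `wave_energy(eps_i, ...) - wave_energy(eps_r, ...) - wave_energy(eps_t, ...)` once the reflected signal is also recorded.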
Because the incident bar in an SHPB experiment is long, the incident waveform is distorted to some extent by dispersion during propagation and reflection, and the distortion is more serious for large-diameter SHPB devices. In contrast, the specimen material is uniform, the propagation distance is short, the filtering effect is good, and the dispersion of the transmitted wave is small; therefore the incident and transmitted waves of the two-wave method are usually used. According to the law of conservation of energy, the energy Ws(t) dissipated by the specimen in the impact compression experiment is

$$W_s(t)=W_i(t)-W_r(t)-W_t(t), \tag{5}$$

where Wi(t), Wr(t), and Wt(t) are the incident, reflected, and transmitted wave energies, respectively, given by

$$W_i(t)=E_0C_0A_0\int_0^t\varepsilon_i^2(\tau)\,d\tau,\qquad W_r(t)=E_0C_0A_0\int_0^t\varepsilon_r^2(\tau)\,d\tau,\qquad W_t(t)=E_0C_0A_0\int_0^t\varepsilon_t^2(\tau)\,d\tau. \tag{6}$$

## 3. Test Results

### 3.1. Typical Dynamic Stress-Strain Curves

The test data were processed according to the one-wave method [25], and the stress–strain curves were analyzed and plotted after the specimens reached stress equilibrium. The dynamic stress–strain curves of the rock specimens with different rock bridge angles at different impact air pressures are shown in Figure 5.

Figure 5 Dynamic stress-strain curves of granite specimens with different rock bridge angles at different strain rates. (a) Rock bridge angle of 30°, (b) rock bridge angle of 50°, and (c) rock bridge angle of 70°.

The dynamic stress–strain curves in Figure 5 can be divided into five stages: the pore compaction stage, the elastic deformation stage, the stable microfracture development stage, the unstable microcrack development stage, and the post-damage stage. Rock is a naturally inhomogeneous material, and the specimen initially contains many open microcracks, which gradually close under relatively low stress.
When the stress grows to a certain value, the stress–strain curve shows an approximately elastic range; the specimen then produces more and more new microfractures and finally enters the stage of unstable microcrack development. The sharp increase of new fractures leads to their mutual coalescence, until the internal structure of the rock is completely destroyed and a macroscopic fracture surface forms. The analysis shows that the dynamic stress–strain curves of granite specimens with different rock bridge angles and strain rates are basically similar in shape.

The peak stress increases with increasing strain rate, but the increment becomes smaller and smaller. This is because the rock responds with a certain delay under impact loading, and the strength of the rock increases with strain rate toward an upper strength limit. For the granite specimens in this work, the peak strain is positively correlated with the strain rate when the strain rate ranges from 30 s−1 to 100 s−1. When the strain rate reaches 143.8 s−1, the peak strain corresponding to the peak stress becomes smaller, because the impact load is so large that the internal fractures of the rock are destroyed before they can propagate.

To investigate the relationship between peak stress, rock bridge angle, and strain rate, the peak stresses of granite specimens with different rock bridge angles at different strain rates are summarized in Table 1.

Table 1 Peak strength (MPa) of the granite specimens under different strain rates.

| Rock bridge angle | ε̇ = 31.4 s−1 | ε̇ = 53.3 s−1 | ε̇ = 78.4 s−1 | ε̇ = 143.8 s−1 |
| --- | --- | --- | --- | --- |
| 30° | 210 | 363 | 453 | 480 |
| 50° | 257 | 404 | 487 | 503 |
| 70° | 359 | 454 | 512 | 530 |

Figure 6 plots the peak stress against the rock bridge angle and the strain rate of the tested granite samples.
The peak stress increases with increasing strain rate, but the increment becomes progressively smaller, because the rock responds with a certain delay under impact loading and the strength of granite increases with strain rate toward an upper limit. For the granite specimens in this work, the peak strain is positively correlated with the strain rate; when the strain rate reaches 143.8 s−1, however, the peak strain corresponding to the peak stress becomes smaller, because the impact load is so large that the internal fractures are destroyed before they can propagate. At the same strain rate, the peak stress is smallest for a rock bridge angle of 30° and largest for a rock bridge angle of 70°; that is, the peak stress increases with the rock bridge angle. This shows that the rock bridge angle has a significant effect on the strength of the granite specimens, because the locking effect of the rock bridge on the interrupted joints improves the strength of the rock. To describe the relationship between peak stress and strain rate more precisely, the data for the rock bridge angle of 30° were taken for curve fitting, and the fitting results are shown in Figure 7.

Figure 6 Peak stresses of granite specimens. (a) Variation of peak stress with angle of rock bridge; (b) variation of peak stress with strain rate.
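The saturating-exponential fit reported for the 30° specimens, σ = a − b·c^ε̇ (Eq. (7)), can be checked directly against the Table 1 data. The sketch below evaluates the model with the reported parameters and recomputes the coefficient of determination; because the published parameters are rounded, the recomputed R² comes out slightly below the reported 0.99659:

```python
# Reported fit parameters for the 30° rock bridge angle (peak stress in MPa).
a, b, c = 487.24, 963.09, 0.96
rates = [31.4, 53.3, 78.4, 143.8]       # average strain rates, 1/s (Table 1)
peaks = [210.0, 363.0, 453.0, 480.0]    # measured peak stresses, MPa (Table 1)

# Model predictions: sigma = a - b * c**rate.
pred = [a - b * c ** r for r in rates]

# Coefficient of determination R^2 = 1 - SS_res / SS_tot.
mean = sum(peaks) / len(peaks)
ss_res = sum((y - p) ** 2 for y, p in zip(peaks, pred))
ss_tot = sum((y - mean) ** 2 for y in peaks)
r2 = 1.0 - ss_res / ss_tot

# As the strain rate grows, the predicted peak stress saturates toward a.
asymptote = a - b * c ** 1e4
```

The monotone, saturating shape of `pred` reproduces the observation that the peak stress rises with strain rate but never exceeds about 487 MPa.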
Figure 7 Fitting curve of peak strength versus strain rate for granite specimens with rock bridge angle of 30°.

Regression of the curve yields the following relationship between peak stress and strain rate:

$$\sigma=a-b\cdot c^{\dot{\varepsilon}}, \tag{7}$$

where σ is the peak stress (MPa), a = 487.24, b = 963.09, and c = 0.96, with correlation R² = 0.99659, indicating a strong correlation. The fitted curve is valid for strain rates ε̇ greater than about 15 s−1; below 15 s−1 the predicted peak stress is negative, which is not physical. It can also be concluded that the peak stress of the granite specimens with a rock bridge angle of 30° increases gradually with strain rate, but with a diminishing increment, and the peak stress will not exceed 487 MPa.

### 3.2. Energy Conversion Analysis

It is assumed that the test system is closed and exchanges no heat with the surroundings during the test. Neglecting the kinetic energy transferred near the peak by compression of the rock under impact loading, the total axial input strain energy U under uniaxial compression follows from the first law of thermodynamics as [26–28]

$$U=U_d+U_e, \tag{8}$$

where Ud is the dissipated strain energy and Ue is the releasable elastic strain energy.

The energy evolution curves of the granite samples with different rock bridge angles are plotted in Figures 8–10. The black curve in each figure shows the total energy evolution of the granite specimen under impact load, and the energy absorption curve of the total input strain energy can be divided into four stages. The first stage is the compaction stage, in which the impact load compacts the microcracks and pores; the energy evolution curve is concave upward and rises slowly. The second stage is the elastic deformation stage.
The absorbed energy increases rapidly, the elastic strain energy increases accordingly, and the energy evolution curve is approximately linear. The third stage is the yielding stage: the specimen produces new cracks, the existing cracks evolve further, and the energy evolution curve deviates from a straight line, although the specimen still retains a certain load-bearing capacity. The fourth stage is the damage stage: the energy absorbed by the specimen no longer increases, and the energy evolution curve levels off.

Figure 8 Strain energy density versus axial strain for rock with 30° rock bridge angle subjected to different loading rates.

Figure 9 Strain energy density versus axial strain for rock with 50° rock bridge angle subjected to different loading rates.

Figure 10 Strain energy density versus axial strain for rock with 70° rock bridge angle subjected to different loading rates.

The evolution curves of the dissipated strain energy Ud and the elastic strain energy Ue also show different characteristics at the different stages. During the compaction stage, both the releasable elastic strain energy and the dissipated strain energy accumulate. During the elastic deformation stage, the releasable elastic strain energy accumulates continuously while the dissipated energy hardly increases. During the yielding stage, the accumulation of releasable elastic strain energy slows and the energy dissipation accelerates. During the damage stage, the elastic strain energy is released drastically and the energy dissipation is intense.

Under impact loading, the total input strain energy of the rock specimen increases gradually, and during the elastic phase of the stress–strain curve the input strain energy is converted into releasable elastic strain energy and stored in the specimen.
The stored elastic strain energy reaches its maximum when the peak stress is reached, and it is finally released as the rock specimen fails. Throughout the loading process, the dissipated strain energy increases gradually; it arises from microdefect closure, microcrack growth, and frictional sliding along fracture surfaces inside the specimen, and this dissipation finally leads to the loss of cohesion of the rock.

Figure 11(a) shows that the total input strain energy increases with strain rate. As seen in Figure 11(b), for strain rates of about 30 s−1 to 80 s−1 the peak releasable elastic strain energy increases gradually with strain rate. When the strain rate reaches 143.8 s−1, the cracks inside the granite specimen are destroyed before they can propagate because the strain rate is too high, so less elastic strain energy is stored during failure and it is released quickly; the peak releasable elastic strain energy therefore drops sharply at 143.8 s−1. With increasing energy input, the energy dissipation also increases and the rock damage becomes more severe; the total input strain energy is also found to increase with the rock bridge angle. Figure 11(c) shows that the dissipated energy increases gradually with strain rate.

Figure 11 Plots of the relationship between strain energy density and strain rate. (a) U; (b) Ue; (c) Ud.

### 3.3. Dynamic Fracture Evolution Visualization

To capture the dynamic rupture process of the granite specimens under impact loading, a high-speed camera was used to record the crack propagation process, so that it could be analyzed for different rock bridge angles and different strain rates.
The fracture processes of the tested granite samples with different approach angles are shown in Figure 12. Both the strain rate and the approach angle affect the crack propagation path, and the crack density increases with increasing strain rate.

Figure 12 Crack propagation captured by high-speed camera for rock specimens at different strain rates: (a) rock bridge angle of 30°, (b) rock bridge angle of 50°, and (c) rock bridge angle of 70°.

The fracture process shown in Figure 12 reveals the dynamic fracture evolution of the locked rock. Under impact loading, cracks first start to grow from the tips of the two prefabricated fissures. As the tip cracks begin to grow, the two prefabricated fissures gradually compact and close, and their width decreases significantly. Once the tip cracks have grown to a certain extent, secondary fractures start to appear. With increasing load, the width of the secondary fractures gradually increases until the fractures coalesce, their width increases abruptly, and the rock specimen fails. Comparing the failure processes at different rock bridge angles and strain rates shows that the crack propagation path and network are more complex for larger rock bridge angles. With increasing strain rate, both the number and the width of cracks increase, because the ductility of the rock specimens increases with strain rate.
However, there exists a threshold strain rate beyond which the cracks inside the granite specimen are destroyed before they have time to propagate.

When the strain rate is relatively low, the failure of the granite specimen has the typical characteristics of axial splitting. The XRD analysis shows that the granite specimens are rich in brittle minerals such as quartz and feldspar, which are highly susceptible to brittle fracture under impact loading. With increasing strain rate, the failure mode of the granite specimens changes from axial splitting to crushing. Cracks first appear along the quartz–quartz or quartz–feldspar boundaries; as the external load increases, the cracks in the quartz grains propagate rapidly, while separation along cleavage planes is the main failure mode of the biotite (black mica).

The locked segments in the granite specimens are marked with white ellipses, and the cracks at the locked segments are found to have fully coalesced once the rocks reach the damage stage. For specimens at the same strain rate, the effect of the rock bridge angle on crack growth is also significant. Comparing the crack growth processes at the strain rate of 31.4 s−1 in Figures 12(a)–12(c) shows that the crack width and the final damage of the specimen increase sharply with the rock bridge angle, which again indicates that the locking section of the rock controls the crack growth.
## 4. Conclusions
In this work, granite specimens containing prefabricated fissures with different rock bridge angles were dynamically loaded using the SHPB method to investigate their stress–strain response, energy evolution, and dynamic rupture processes. Based on the above analysis, the main conclusions are as follows:

(1) The peak stress of granite increases with the rock bridge angle, because the locking effect of the rock bridge on the interrupted joints increases the strength of the rock. The peak stress also increases with the strain rate under impact loading, but the increment becomes progressively smaller.

(2) The peak total strain energy increases with the rock bridge angle and the strain rate. During the elastic deformation stage the input energy is mainly transformed into elastic strain energy, all of which is released when the specimen fails, while the dissipated strain energy increases gradually with the energy input, at an ever faster rate.

(3) The crack propagation path is influenced by both the rock bridge angle and the strain rate. The larger the rock bridge angle, the more pronounced the crack growth and the more severe the damage of the rock when the cracks propagate. With increasing strain rate, the number and width of cracks also increase. The locking section controls the failure of the rock: the rock fails only when the cracks penetrate at the locking section.

---

*Source: 1016412-2021-10-05.xml*
# Effects of the Rock Bridge Ligament on Fracture and Energy Evolution of Preflawed Granite Exposed to Dynamic Loads

**Authors:** Kaihua Sun; Xiong Wu; Xuefeng Yi; Yu Wang

**Journal:** Shock and Vibration (2021)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1016412
---

## Abstract

This paper aims to reveal the mechanical properties, energy evolution characteristics, and dynamic rupture process of preflawed granite under impact loading at different rock bridge angles and strain rates. A series of dynamic impact experiments was conducted on the split Hopkinson pressure bar (SHPB) testing system to analyze the overall rock fracture process. Under impact loading, the peak stress of granite increases with the rock bridge angle and strain rate, but the increment gradually diminishes. The peak strain also increases gradually with the rock bridge angle, but with an upper limit; the total input strain energy increases with the strain rate and rock bridge angle. The higher the strain rate, the higher the unit dissipation energy and the greater the degree of rock fragmentation. Under impact loads, the crack first initiates from the wing ends of the prefabricated flaws, the preflaws close gradually, and finally the crack propagates at the locking section, leading to coalescence of the rock bridge. With increasing strain rate, the fragmentation degree of the specimen increases asymptotically, and the average fragment size decreases. It is suggested that the stability of large locked rock slopes is controlled by the locked section, and understanding the fracture evolution of the rock bridge is the key to slope instability prediction.

---

## Body

## 1. Introduction

In the current mining process, blasting is an important means of extraction. Along with the effects of excavation and blasting operations, the rock masses in open pit slopes are disturbed by dynamic loads to varying degrees; it is therefore particularly necessary to reveal the influence of impact dynamics on rock instability [1–3].
The natural fractures in mine rock are not continuous, and the discontinuous fractures form rock bridges (i.e., rock-locked sections) [4, 5]. The stress conditions at rock bridges are often more complex because of the greater shear stresses, and the presence of rock bridges can increase the strength of the weak surfaces of the rock mass, which is called the locking effect of rock bridges on intermittently jointed rock masses [6, 7]. The fracture of rock bridges leads to the connection of joints, and destabilization damage occurs only when all rock bodies are connected by the earlier fractures [8]. Therefore, the fracture mechanism of rock bridges needs to be studied to ensure the stability of rock masses in open pit mines and the safety of mining.

Impact loading tests differ from conventional static and cyclic loading tests: they apply high strain rates to the rock, in contrast to the medium-to-low strain rates of cyclic or fatigue loading [9–13]. The properties of rock bridges and the current status of slope instability have been widely studied worldwide. Yin et al. [9] used the split Hopkinson pressure bar and high-speed digital image correlation (DIC) technology to investigate the full-field deformation characteristics of phyllite in the Brazilian disc test. Wong et al. [14] studied the cracking process and strength characteristics of rock-like materials with different flaw inclination angles, different rock bridge inclinations, and different frictional characteristics of the crack contact surface by uniaxial compression tests. Gomez et al.
[15] used a high-speed camera to take photoelastic pictures of specimens in dynamic Brazilian disc (BD) tests under impact loading; after about 30 μs of stress wave propagation, the specimen reached stress equilibrium at both ends, and its internal stress state was consistent with that of a specimen under quasi-static conditions. Zhang and Zhao [16] combined an SEM scanner, a high-speed camera, DIC high-speed photography, and an SHPB device to reproduce the whole process of rock surface impact damage by multiple means simultaneously. Li et al. [17] proposed a test method for the dynamic compressive strength of rocks under static preload and measured the dynamic compressive strength of sandstone under preload; the results showed that the dynamic compressive strength of a preloaded specimen was higher than its compressive strength under pure static or pure dynamic load. Gong et al. [18] found through SHPB tests that the dissipation energy increases linearly with the incident energy, and the linear energy dissipation law in dynamic compression tests was confirmed by connecting two differently inclined paths at a critical incident energy. Yang et al. [19] conducted impact compression tests at different impact velocities on samples of red sandstone, gray sandstone, and granite, which are common in rock engineering, and compared their stress wave propagation characteristics, dynamic stress-strain relationships, degrees of fragmentation, and energy dissipation laws. Wang et al.
[20] investigated the mechanical properties and energy dissipation law of sandstone under impact loading and explored how they change under different temperature conditions.

Currently, most studies on the mechanical properties of jointed rocks use static loading tests, and most studies of rock masses under impact loading focus on intact rock. The damage and fracture mechanisms of preflawed rock subjected to dynamic impact loads are not well understood. Therefore, in order to reveal the effects of different joint angles and strain rates on the dynamic response of rock materials, SHPB impact tests were conducted on specimens with fissures prefabricated at different angles to analyze the dynamic stress-strain behavior, energy evolution, and dynamic fracturing process.

## 2. Test Materials and Methods

### 2.1. Specimen Preparation

The test material was obtained from a metal mine in western China. According to XRD analysis, the granite is mainly composed of sodium feldspar, quartz, magnesio-hornblende, orthoclase, and biotite (black mica); the specific composition is shown in Figure 1. SEM imaging was performed on different granite specimens, and the SEM morphology of the specimens is shown in Figure 2. The SEM analysis shows that microcracks, pores, and mineral interfaces are distributed within the rock.

Figure 1: Mineral composition of the granite samples from XRD analysis.

Figure 2: SEM observations of the mesoscopic structure of the rock specimens.

Referring to the ISRM suggested method for impact tests, the rock cores were drilled into cylinders with a diameter (D) of 50 mm and a height (H) of 50 mm [21].
The ends of the granite specimens were polished so that the unevenness error was less than 0.05 mm and the parallelism error of the two ends was less than 0.1 mm. In order to simulate the locking effect of the rock, double prefabricated fissures were cut into each specimen by waterjet cutting: two fissures with a 1 mm aperture were cut by spraying high-pressure water mixed with abrasive from a 0.75 mm diameter nozzle. As shown in Figure 3, each rock specimen contains an oblique and a horizontal fissure, each 8 mm long; the approach angles of the oblique fissures are 30°, 50°, and 70°, and the length of the locking section is set to 12 mm.

Figure 3: Preparation of the preflawed granite samples used for the SHPB tests. (a) β = 30°, (b) β = 50°, and (c) β = 70°.

### 2.2. Test Apparatus

The impact tests were performed on a split Hopkinson pressure bar (SHPB) test system to determine the dynamic stress-strain, stress-time, and strain-time relationships of the rock specimens under impact loading and, on this basis, to study the dynamic constitutive relationships in depth. As shown in Figure 4, the SHPB device consists of a power system, an impact bar (striker), an input bar, an output bar, an absorption bar, and a measurement and recording system, with the tested specimen sandwiched between the input and output bars; auxiliary equipment includes a super-dynamic strain amplifier, a transient waveform recorder, a computer, strain gauges, and a striker-velocity measurement device.

Figure 4: SHPB testing system. (a) Hopkinson compression bar test setup; (b) 30,000-frame high-speed camera.

The rock specimens were tested at impact air pressures of 0.07, 0.09, 0.10, and 0.11 MPa; the average strain rates corresponding to these impact pressures were 31.4 s−1, 53.3 s−1, 78.4 s−1, and 143.8 s−1.
The test data were collected with a DHDAS dynamic signal acquisition system, and a 30,000-frame high-speed camera was used to capture the crack extension process. The whole test setup is shown in Figure 4.

### 2.3. Test Idea

In the SHPB experiment, the striker hits the input bar axially at a certain velocity and generates a compressive stress wave in the input bar. When this incident wave reaches the specimen, the impedance mismatch at the bar-specimen interface causes part of the wave to be reflected back into the input bar as a reflected stress wave, while the remainder passes through the specimen into the output bar as a transmitted stress wave, which is finally intercepted by the absorption bar and dissipated in the damping device. The strain signals of the incident, reflected, and transmitted stress waves can be accurately recorded by resistive strain gauges attached to the incident and transmitted bars, from which the dynamic loads and displacements at both ends of the specimen, and hence the stress-strain relationship of the material, can be derived [22, 23].

The basic principle of the SHPB experiment is the theory of elastic stress waves in slender rods.
According to the theory of one-dimensional stress wave propagation, the displacements and forces at the two faces of the specimen can be written as follows [24, 25]:

$$u_1 = C_0\int_0^t \varepsilon_i\,\mathrm{d}\tau - C_0\int_0^t \varepsilon_r\,\mathrm{d}\tau,\qquad F_1 = EA\left(\varepsilon_i+\varepsilon_r\right),\qquad u_2 = C_0\int_0^t \varepsilon_t\,\mathrm{d}\tau,\qquad F_2 = EA\,\varepsilon_t, \tag{1}$$

where $u_1$ and $u_2$ are the displacements of the incident-bar and transmitted-bar cross sections in contact with the specimen, respectively; $C_0$ is the elastic longitudinal wave velocity of the waveguide bars; $F_1$ and $F_2$ are the forces acting on the two ends of the specimen; $E$ is the elastic modulus of the waveguide bars; and $A$ is their cross-sectional area.

From the above relations, the strain $\varepsilon_s$, strain rate $\dot{\varepsilon}_s$, and stress $\sigma_s$ of the specimen follow as

$$\sigma_s = \frac{F_1+F_2}{2A_s} = \frac{EA}{2A_s}\left(\varepsilon_i+\varepsilon_r+\varepsilon_t\right),\qquad \varepsilon_s = \frac{u_1-u_2}{l_s} = \frac{C_0}{l_s}\int_0^t\left(\varepsilon_i-\varepsilon_r-\varepsilon_t\right)\mathrm{d}\tau,\qquad \dot{\varepsilon}_s = \frac{\mathrm{d}\varepsilon_s}{\mathrm{d}t} = \frac{C_0}{l_s}\left(\varepsilon_i-\varepsilon_r-\varepsilon_t\right), \tag{2}$$

where $l_s$ is the length of the specimen and $A_s$ is its cross-sectional area. Under the assumption of stress uniformity in the specimen, $F_1 = F_2$, and one-dimensional stress wave theory then gives

$$\varepsilon_i + \varepsilon_r = \varepsilon_t. \tag{3}$$

Using the data of the incident wave $\varepsilon_i$ and transmitted wave $\varepsilon_t$, we obtain

$$\sigma_s = \frac{EA_0}{A_s}\,\varepsilon_t,\qquad \varepsilon_s = \frac{2C_0}{l_s}\int_0^t\left(\varepsilon_i-\varepsilon_t\right)\mathrm{d}\tau,\qquad \dot{\varepsilon}_s = \frac{2C_0}{l_s}\left(\varepsilon_i-\varepsilon_t\right), \tag{4}$$

from which the stress, strain, and strain-rate histories of the specimen are obtained. Equations (2) and (4) are the "three-wave method" and "two-wave method," respectively, for measuring the stress-strain relationship at high strain rates in conventional SHPB experiments.
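As a concrete illustration of the two-wave reduction in equation (4), the following sketch computes the stress, strain, and strain-rate histories from recorded incident and transmitted strain signals. This is a minimal editor's illustration, not the authors' processing code; the bar and specimen properties below are placeholder values.

```python
import numpy as np

# Illustrative bar/specimen properties (placeholders, not the paper's values).
E = 210e9                    # elastic modulus of the waveguide bars, Pa
C0 = 5100.0                  # longitudinal wave speed in the bars, m/s
A0 = np.pi * 0.025 ** 2      # bar cross-sectional area (50 mm diameter), m^2
As = np.pi * 0.025 ** 2      # specimen cross-sectional area (50 mm diameter), m^2
ls = 0.05                    # specimen length, m

def two_wave_reduction(t, eps_i, eps_t):
    """Apply eq. (4): stress, strain, and strain rate of the specimen from the
    incident and transmitted strain pulses (two-wave method)."""
    sigma = E * A0 / As * eps_t                    # sigma_s = E*A0/As * eps_t
    rate = 2.0 * C0 / ls * (eps_i - eps_t)         # d(eps_s)/dt
    # eps_s(t) = (2*C0/ls) * cumulative integral of (eps_i - eps_t), trapezoid rule
    diff = eps_i - eps_t
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (diff[1:] + diff[:-1]) * np.diff(t))))
    strain = 2.0 * C0 / ls * cum
    return sigma, strain, rate

# Example with rectangular pulses, where the result is known in closed form.
t = np.linspace(0.0, 1e-4, 101)          # 100 us window
eps_i = np.full_like(t, 1.0e-3)          # incident strain pulse
eps_t = np.full_like(t, 0.5e-3)          # transmitted strain pulse
sigma, strain, rate = two_wave_reduction(t, eps_i, eps_t)
```

For these rectangular pulses the strain rate is constant at 2·C0/ls·(εi − εt) = 102 s−1 and the final strain is 102 s−1 × 100 μs ≈ 0.0102, which makes a convenient sanity check before feeding in real gauge signals.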
Because the incident bar in an SHPB experiment is long, the incident waveform is distorted to some extent during reflection, and the distortion is more severe for large-diameter SHPB devices. Since the specimen material is uniform, the travel distance is short, the filtering effect is good, and the dispersion of the transmitted wave is small, the incident and transmitted waves of the two-wave method are usually used.

According to the law of conservation of energy, the energy $W_s(t)$ dissipated by the specimen in the impact compression experiment is

$$W_s(t) = W_i(t) - W_r(t) - W_t(t), \tag{5}$$

where $W_i(t)$, $W_r(t)$, and $W_t(t)$ are the incident, reflected, and transmitted wave energies, respectively:

$$W_i(t)=E_0C_0A_0\int_0^t \varepsilon_i^2(\tau)\,\mathrm{d}\tau,\qquad W_r(t)=E_0C_0A_0\int_0^t \varepsilon_r^2(\tau)\,\mathrm{d}\tau,\qquad W_t(t)=E_0C_0A_0\int_0^t \varepsilon_t^2(\tau)\,\mathrm{d}\tau. \tag{6}$$
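The energy bookkeeping of equations (5) and (6) is equally easy to sketch numerically. Again this is a minimal illustration with placeholder bar properties, not the authors' code:

```python
import numpy as np

E0 = 210e9                   # bar elastic modulus, Pa (illustrative placeholder)
C0 = 5100.0                  # bar wave speed, m/s (illustrative placeholder)
A0 = np.pi * 0.025 ** 2      # bar cross-sectional area, m^2

def wave_energy(t, eps):
    """Eq. (6): W(t) = E0*C0*A0 * integral of eps^2 d(tau), trapezoid rule."""
    y = eps ** 2
    return E0 * C0 * A0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

def dissipated_energy(t, eps_i, eps_r, eps_t):
    """Eq. (5): Ws = Wi - Wr - Wt."""
    return (wave_energy(t, eps_i)
            - wave_energy(t, eps_r)
            - wave_energy(t, eps_t))

# Example: rectangular pulses for which exactly half the incident energy is dissipated,
# since eps_r^2 + eps_t^2 = 0.5 * eps_i^2 here.
t = np.linspace(0.0, 1e-4, 101)
eps_i = np.full_like(t, 1.0e-3)
eps_r = np.full_like(t, 0.5e-3)
eps_t = np.full_like(t, 0.5e-3)
Ws = dissipated_energy(t, eps_i, eps_r, eps_t)
```

With these pulses, Wr/Wi = Wt/Wi = 0.25, so Ws/Wi = 0.5 regardless of the bar constants, which cancel in the ratio.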
## 3. Test Results

### 3.1. Typical Dynamic Stress-Strain Curves

The test data were processed according to the one-wave method [25], and the stress-strain curves were analyzed and plotted after stress equilibrium was reached. The dynamic stress-strain curves of rock specimens with different rock bridge angles at different impact air pressures are shown in Figure 5.

Figure 5: Dynamic stress-strain curves of granite specimens with different rock bridge angles at different strain rates. (a) Rock bridge angle of 30°, (b) rock bridge angle of 50°, and (c) rock bridge angle of 70°.

The stress-strain curves obtained from the SHPB data (Figure 5) can be divided into five stages: pore compaction, elastic deformation, stable microfracture development, unstable microcrack development, and post-damage. Rock is a naturally nonhomogeneous material, and the specimens contain many open microcracks that gradually close under low stress.
When the stress reaches a certain value, the stress-strain curve shows a range of elastic growth; the specimen then produces more and more small new fractures and finally enters the unstable microcrack development stage. The sharp increase of new fractures leads to their mutual penetration, until the internal structure of the rock is completely destroyed and a macroscopic fracture surface forms. The analysis shows that the dynamic stress-strain curves of granite specimens with different bridge angles and strain rates are basically similar in shape.

The peak stress increases with increasing strain rate, but the increment becomes smaller and smaller. This is because the rock has a certain delayed response under impact loading, and its strength increases with strain rate toward an upper limit. For the granite specimens in this work, the peak strain is positively correlated with the strain rate between 30 s−1 and 100 s−1. When the strain rate reaches 143.8 s−1, the peak strain corresponding to the peak stress becomes smaller, because the impact load is so large that the internal fractures of the rock fail before they can propagate.

To investigate the relationship between peak stress, rock bridge angle, and strain rate, the peak stresses of granite specimens with different rock bridge angles at different strain rates are summarized in Table 1.

Table 1: Peak stress (MPa) of the granite specimens at different strain rates.

| Rock bridge angle | ε̇ = 31.4 s−1 | ε̇ = 53.3 s−1 | ε̇ = 78.4 s−1 | ε̇ = 143.8 s−1 |
| --- | --- | --- | --- | --- |
| 30° | 210 | 363 | 453 | 480 |
| 50° | 257 | 404 | 487 | 503 |
| 70° | 359 | 454 | 512 | 530 |

Figure 6 plots the peak stress against the rock bridge angle and strain rate of the tested granite samples.
The peak stress increases with increasing strain rate, but the increment keeps shrinking, because the rock responds with a certain delay under impact loading, while the strength of granite increases with strain rate toward an upper limit. For the granite specimens in this work, the peak strain is positively correlated with the strain rate; when the strain rate reaches 143.8 s−1, the peak strain corresponding to the peak stress becomes smaller, because the impact load is so large that the internal fractures fail before they can propagate. At the same strain rate, the peak stress is smallest for the bridge angle of 30° and largest for 70°, and it increases with the bridge angle. This shows that the bridge angle has a significant effect on the strength of the granite specimens, owing to the locking effect of the bridge on the interrupted jointed rock mass, which improves the strength of the rock. To describe the relationship between peak stress and strain rate more precisely, the data for the rock bridge angle of 30° were fitted, and the result is shown in Figure 7.

Figure 6: Peak stresses of granite specimens. (a) Variation of peak stress with rock bridge angle; (b) variation of peak stress with strain rate.
Figure 7: Fitted curve of peak strength versus strain rate for granite specimens with a rock bridge angle of 30°.

Regression of the curve yields the following relationship between peak stress and strain rate:

$$\sigma = a - b\cdot c^{\dot{\varepsilon}}, \tag{7}$$

where $\sigma$ is the peak stress, $a = 487.24$, $b = 963.09$, $c = 0.96$, and the correlation coefficient $R^2 = 0.99659$ indicates a strong correlation. The fit is valid for strain rates $\dot{\varepsilon}$ greater than about 15 s−1; below this value the predicted peak stress is negative, which is not physical. It also follows that the peak stress of granite specimens with a rock bridge angle of 30° increases gradually with the strain rate, but the increment shrinks, and the peak stress will not exceed 487 MPa.

### 3.2. Energy Conversion Analysis

Assume that the test system is closed and exchanges no heat with the outside world during the test. Neglecting the kinetic energy transferred by compression near the peak for rock samples under impact loading, the total uniaxial input strain energy $U$ follows from the first law of thermodynamics as [26–28]

$$U = U_d + U_e, \tag{8}$$

where $U_d$ is the dissipated strain energy and $U_e$ is the releasable elastic strain energy.

The energy evolution curves of the granite samples with different rock bridge angles are plotted in Figures 8–10. The black curve in the figures shows the total energy evolution of the granite specimen under impact loading, and the energy absorption curve of the total input strain energy can be divided into four stages. The first stage is the compaction stage, in which the impact load compacts the microcracks and pores; the energy evolution curve is concave upward and rises slowly. The second stage is the elastic deformation stage.
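To make the behavior of the fitted law (7) concrete, the sketch below evaluates it against the 30° row of Table 1. The fit constants come from the paper; the tabulation itself is an editor's illustration, not part of the original analysis.

```python
# Eq. (7) for the 30° rock bridge angle: sigma = a - b * c**rate
a, b, c = 487.24, 963.09, 0.96

def peak_stress(rate):
    """Fitted peak stress in MPa for a strain rate in 1/s."""
    return a - b * c ** rate

rates = [31.4, 53.3, 78.4, 143.8]        # tested strain rates, 1/s
measured = [210, 363, 453, 480]          # measured peak stresses, MPa (Table 1, 30° row)

for r, m in zip(rates, measured):
    print(f"rate {r:6.1f} 1/s: fitted {peak_stress(r):6.1f} MPa, measured {m} MPa")

# The fit rises monotonically toward the asymptote a = 487.24 MPa; at low strain
# rates it goes negative, hence the stated validity range of roughly 15 1/s and above.
```

Evaluating the fit this way confirms the two claims in the text: the increments between successive strain rates shrink, and the predicted peak stress never exceeds the asymptote of about 487 MPa.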
The absorbed energy increases rapidly, the elastic strain energy increases accordingly, and the energy evolution curve is approximately linear. The third stage is the yielding stage: the specimen produces new cracks, existing cracks evolve further, and the energy evolution curve deviates from a straight line, but the specimen still retains some load-bearing capacity. The fourth stage is the damage stage: the energy absorbed by the specimen no longer increases, and the energy evolution curve flattens.

Figure 8: Strain energy density versus axial strain for rock with a 30° rock bridge angle subjected to different loading rates.

Figure 9: Strain energy density versus axial strain for rock with a 50° rock bridge angle subjected to different loading rates.

Figure 10: Strain energy density versus axial strain for rock with a 70° rock bridge angle subjected to different loading rates.

The evolution curves of the dissipated strain energy $U_d$ and the elastic strain energy $U_e$ also show different characteristics at different stages. During the compaction stage, both the releasable elastic strain energy and the dissipated strain energy accumulate. In the elastic deformation stage, the releasable elastic strain energy accumulates continuously while the dissipated energy hardly increases. In the yielding stage, the growth of the releasable elastic strain energy slows and the rate of energy dissipation rises. In the damage stage, the elastic strain energy is released drastically and energy dissipation is intense.

Under impact loading, the total input strain energy of the rock specimen gradually increases, and during the elastic phase of the stress-strain curve the input strain energy is converted into releasable elastic strain energy stored in the specimen.
The stored elastic strain energy reaches its maximum at the peak of the stress-strain curve and is finally released as the rock specimen fails. Throughout the loading process, the dissipated strain energy gradually increases: friction during microdefect closure, microcrack growth, and the relative sliding of fracture surfaces inside the specimen dissipate energy and finally lead to the loss of cohesion of the rock.

Figure 11(a) shows that the total input strain energy increases with the strain rate. As seen in Figure 11(b), for strain rates of 30 s−1 to 80 s−1, the peak releasable elastic strain energy increases gradually with the strain rate. When the strain rate reaches 143.8 s−1, the cracks inside the granite specimen fail without fully propagating because the strain rate is too high, so less elastic strain energy is stored during failure and it is released quickly; the peak releasable elastic strain energy at 143.8 s−1 therefore drops sharply. With increasing energy input, the energy dissipation also increases and the rock damage becomes more severe; the total input strain energy is also found to increase with the rock bridge angle. Figure 11(c) shows that the dissipated energy gradually increases with the strain rate.

Figure 11: Strain energy density against strain rate. (a) U; (b) Ue; (c) Ud.

### 3.3. Dynamic Fracture Evolution Visualization

To capture the dynamic rupture process of the granite specimens under impact loading, a high-speed camera was used to record the crack extension process, allowing the crack extension of rock specimens to be analyzed for different rock bridge angles and strain rates.
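The partition in equation (8) can also be illustrated in code. A common simplification (an assumption made here for illustration; the paper does not spell out its computation) takes the releasable elastic strain energy as Ue ≈ σ²/2E under a linear-unloading assumption, and the dissipated part as the remainder of the input energy U = ∫σ dε:

```python
import numpy as np

E = 40e9   # assumed elastic (unloading) modulus of the granite, Pa -- hypothetical value

def energy_partition(strain, stress):
    """Split the input strain energy density U into elastic (Ue) and
    dissipated (Ud) parts, eq. (8): U = Ue + Ud.

    U  : cumulative trapezoid integral of sigma d(eps)   (total input)
    Ue : sigma^2 / (2E)                                  (linear-unloading assumption)
    Ud : U - Ue
    """
    dU = 0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)
    U = np.concatenate(([0.0], np.cumsum(dU)))
    Ue = stress ** 2 / (2.0 * E)
    Ud = U - Ue
    return U, Ue, Ud

# Sanity check: on a purely linear-elastic branch (sigma = E*eps), all input
# energy is stored elastically and the dissipated part vanishes.
strain = np.linspace(0.0, 1e-3, 200)
stress = E * strain
U, Ue, Ud = energy_partition(strain, stress)
```

On a measured curve, Ud would start growing once the stress-strain response leaves the elastic stage, mirroring the four-stage evolution described above.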
The fracture processes of the tested granite samples with different approach angles are shown in Figure 12. Both the strain rate and the approach angle affect the crack propagation path, and the crack density increases with increasing strain rate.

Figure 12: Crack propagation captured by the high-speed camera for rock specimens at different strain rates: (a) rock bridge angle of 30°, (b) rock bridge angle of 50°, and (c) rock bridge angle of 70°.

The fracture process shown in Figure 12 reveals the dynamic fracture evolution of the locked rock. Under impact loading, cracks first start to extend from the tips of the two prefabricated fissures. As the tip cracks extend, the two prefabricated fissures gradually compact and close, and their width decreases significantly. Once the tip cracks have extended to a certain degree, secondary fractures appear. As the applied load increases, the width of the secondary fractures gradually grows until the fractures penetrate, the fracture width increases abruptly, and the rock specimen fails. Comparing the damage processes at different rock bridge angles and strain rates shows that the crack propagation path and network become more complex as the rock bridge angle increases. With increasing strain rate, both the number and width of cracks grow, because the ductility of the rock specimens increases with strain rate.
However, there exists a threshold strain rate; when the strain rate exceeds it, the cracks inside the granite specimen fail before they have time to propagate. When the strain rate is relatively low, the damage of the rock shows the typical characteristics of axial splitting. The XRD analysis shows that the granite specimens are rich in brittle minerals such as quartz and feldspar, which are highly susceptible to brittle fracture under impact loading. With increasing strain rate, the damage mode of the granite specimens changes from axial splitting to crushing. Cracks first appear along the quartz-quartz or quartz-feldspar boundaries; as the external load increases, the cracks in the quartz grains extend rapidly, while separation along cleavage surfaces is the main damage mode of the biotite. The locked segments in the granite specimens are marked with white ellipses, and the cracks at the locked segments have all penetrated by the time the rocks reach the damage stage. For specimens at the same strain rate, the effect of the rock bridge angle on crack extension is also significant. Comparing the crack extension at a strain rate of 31.4 s−1 in Figures 12(a)–12(c) shows that the crack width and the final damage of the specimen increase sharply with the rock bridge angle, again indicating that the locking section controls the crack extension.

## 3.1. Typical Dynamic Stress-Strain Curves

The test data were processed according to the one-wave method [25], and the stress-strain curves were analyzed and plotted after stress equilibrium was reached.
The dynamic stress-strain curves of rock specimens with different rock bridge angles at different impact air pressures are shown in Figure 5.

Figure 5 Dynamic stress-strain curves of granite specimens with different rock bridge angles at different strain rates: (a) rock bridge angle of 30°; (b) rock bridge angle of 50°; (c) rock bridge angle of 70°.

From the dynamic stress-strain curves in Figure 5, it can be seen that the curves can be divided into five stages: the pore compaction stage, the elastic deformation stage, the stable microcrack development stage, the unstable microcrack development stage, and the post-damage stage. Rock is a naturally inhomogeneous material, and the specimens contain many open microcracks that gradually close under low stress. When the stress grows to a certain value, the stress-strain curve shows a range of elastic growth; the specimen then produces more and more new microfractures and finally enters the unstable microcrack development stage. The sharp increase in new fractures leads to their mutual penetration, until the internal structure of the rock is completely destroyed and a macroscopic fracture surface forms. The analysis shows that the dynamic stress-strain curves of granite specimens with different bridge angles and strain rates are basically similar in shape.

At different strain rates, the peak stress increases with increasing strain rate, but the increment becomes progressively smaller. This is because the rock has a certain delayed response under impact loading, and its strength increases with strain rate while tending toward an upper limit. For the granite specimens in this work, the peak strain is positively correlated with the strain rate when the strain rate ranges from 30 s−1 to 100 s−1.
When the strain rate reaches 143.8 s−1, the peak strain corresponding to the peak stress becomes smaller, because the impact load is so large that the internal fractures of the rock are damaged before they can propagate.

To investigate the relationship between peak stress, rock bridge angle, and strain rate, the peak stresses of granite specimens with different rock bridge angles at different strain rates are summarized in Table 1.

Table 1 Peak stress (MPa) of granite specimens under different strain rates.

| Rock bridge angle | ε̇ = 31.4 s−1 | ε̇ = 53.3 s−1 | ε̇ = 78.4 s−1 | ε̇ = 143.8 s−1 |
|---|---|---|---|---|
| 30° | 210 | 363 | 453 | 480 |
| 50° | 257 | 404 | 487 | 503 |
| 70° | 359 | 454 | 512 | 530 |

Figure 6 plots the relationship between the peak stress and the bridge angle and strain rate of the tested granite samples. The peak stress increases with increasing strain rate, but the increment becomes smaller, because the rock responds with a certain delay under impact loading and the strength of granite tends toward an upper limit as the strain rate increases. When the strain rate reaches 143.8 s−1, the peak strain corresponding to the peak stress decreases, because the internal fractures are damaged before they can propagate. At the same strain rate, the peak stress is smallest for the bridge angle of 30° and largest for 70°; that is, the peak stress increases with the bridge angle. This shows that the effect of the bridge angle on the strength of the granite specimens is significant, which is attributed to the locking effect of the rock bridge on the interrupted joints, improving the strength of the rock.
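As a quick sanity check on the trends described above, the Table 1 values can be tabulated and the two monotonic trends (peak stress rising with strain rate and with bridge angle) verified programmatically. This is an illustrative sketch, not part of the original analysis:

```python
# Peak stresses (MPa) transcribed from Table 1:
# keys = rock bridge angle (degrees), values = stresses at the four strain rates.
strain_rates = [31.4, 53.3, 78.4, 143.8]  # s^-1
peak_stress = {
    30: [210, 363, 453, 480],
    50: [257, 404, 487, 503],
    70: [359, 454, 512, 530],
}

def is_increasing(values):
    """True if the sequence rises strictly from one entry to the next."""
    return all(a < b for a, b in zip(values, values[1:]))

# Peak stress increases with strain rate for every bridge angle...
assert all(is_increasing(row) for row in peak_stress.values())

# ...and with bridge angle at every strain rate.
for j in range(len(strain_rates)):
    column = [peak_stress[angle][j] for angle in sorted(peak_stress)]
    assert is_increasing(column)

# The increments shrink as the strain rate grows, consistent with the
# strength tending toward an upper limit.
gains_30 = [b - a for a, b in zip(peak_stress[30], peak_stress[30][1:])]
print(gains_30)  # [153, 90, 27]
```

The shrinking increments for the 30° specimens (153, 90, then 27 MPa) are the numerical counterpart of the saturation described in the text.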
In order to describe the relationship between peak stress and strain rate more precisely, the data for the rock bridge angle of 30° were taken for curve fitting, and the fitting results are shown in Figure 7.

Figure 6 Peak stresses of granite specimens: (a) variation of peak stress with angle of rock bridge; (b) variation of peak stress with strain rate.

Figure 7 Fitting curve of peak strength versus strain rate for granite specimens with rock bridge angle of 30°.

Regression of the curve yields the following relation between peak stress and strain rate:

(7) σ = a − b · c^ε̇,

where σ is the peak stress, a = 487.24, b = 963.09, and c = 0.96; the correlation coefficient R² = 0.99659 indicates an obvious correlation. The valid range of this curve requires the strain rate ε̇ to be greater than about 15 s−1; below this value the predicted peak stress is negative, which does not accord with reality. It can also be concluded that the peak stress of the granite specimen with the rock bridge angle of 30° increases gradually with increasing strain rate, but the increment becomes smaller and smaller, and the peak stress will not exceed 487 MPa.

## 3.2. Energy Conversion Analysis

Assume that the test system is closed and exchanges no heat with the outside world during the test. Neglecting the kinetic energy transferred near the peak by compression of the rock under impact loading, the total uniaxial axial input strain energy U under pressure follows from the first law of thermodynamics as [26–28]

(8) U = Ud + Ue,

where Ud is the dissipative strain energy and Ue is the releasable elastic strain energy. The energy evolution curves of the granite samples with different rock bridge angles are plotted in Figures 8–10.
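Both relations above are straightforward to check numerically. The sketch below is illustrative only: the constants a, b, c come from eq. (7), but the stress-strain samples and the unloading modulus E used for the eq. (8) bookkeeping are invented, and the linear-elastic estimate of the stored energy is an assumption, not the authors' processing routine:

```python
# Fitted strength law, eq. (7): sigma = a - b * c**strain_rate.
A, B, C = 487.24, 963.09, 0.96

def peak_stress(strain_rate):
    """Eq. (7): predicted peak stress (MPa) at a given strain rate (1/s)."""
    return A - B * C ** strain_rate

# Below ~15 s^-1 the prediction is negative (outside the valid range).
assert peak_stress(15.0) < 0.0
# Within the tested range it grows monotonically and stays below a.
rates = [31.4, 53.3, 78.4, 143.8]
preds = [peak_stress(r) for r in rates]
assert all(p1 < p2 for p1, p2 in zip(preds, preds[1:]))
assert all(p < A for p in preds)

# Energy balance, eq. (8): U = Ud + Ue. U is taken as the area under the
# stress-strain curve; Ue is estimated linear-elastically as sigma^2 / (2E).
strain = [0.000, 0.002, 0.004, 0.006, 0.008]  # invented record (dimensionless)
stress = [0.0, 60.0, 140.0, 230.0, 300.0]     # invented record (MPa)
E = 40_000.0                                  # assumed unloading modulus, MPa

U = sum(0.5 * (s1 + s2) * (e2 - e1)           # trapezoidal rule, MJ/m^3
        for e1, e2, s1, s2 in zip(strain, strain[1:], stress, stress[1:]))
Ue = stress[-1] ** 2 / (2.0 * E)              # releasable elastic strain energy
Ud = U - Ue                                   # dissipated strain energy
assert Ud >= 0.0                              # dissipation cannot be negative
```

Note that MPa times dimensionless strain gives MJ/m³, so the energy densities carry consistent units without conversion factors.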
The black curve in the figures shows the total energy evolution of the granite specimen under impact loading; it divides the absorption of the total input strain energy during the impact process into four stages. The first is the compaction stage, in which the impact load compacts the microcracks and pores; the energy evolution curve is concave upward and rises slowly. The second is the elastic deformation stage: the absorbed energy increases rapidly, the elastic strain energy increases accordingly, and the energy evolution curve is approximately linear. The third is the yielding stage: the specimen produces new cracks and existing cracks evolve further, so the energy evolution curve deviates from a straight line, but the specimen still retains some load-bearing capacity. The fourth is the damage stage: the energy absorbed by the specimen no longer increases, and the energy evolution curve levels off.

Figure 8 Strain energy density versus axial strain for rock with 30° rock bridge angle subjected to different loading rates.

Figure 9 Strain energy density versus axial strain for rock with 50° rock bridge angle subjected to different loading rates.

Figure 10 Strain energy density versus axial strain for rock with 70° rock bridge angle subjected to different loading rates.

The evolution curves of the dissipated strain energy Ud and the elastic strain energy Ue of the rock specimen also show different characteristics at different stages. During the compaction stage, both the releasable elastic strain energy and the dissipated strain energy accumulate. During the elastic deformation stage, the releasable elastic strain energy accumulates continuously while the dissipated energy hardly increases.
At the yielding stage, the releasable elastic strain energy grows more slowly while the rate of energy dissipation increases. At the damage stage, the elastic strain energy is released drastically and energy dissipation is intense.

Under impact loading, the total input strain energy of the rock specimen increases gradually; during the elastic phase of the stress-strain curve, the input strain energy is converted into releasable elastic strain energy and stored in the specimen. The stored elastic strain energy reaches its maximum when the peak stress is reached, and it is finally released as the specimen is destroyed. Throughout loading, the dissipated strain energy increases gradually: the closure of microdefects, the expansion of microcracks, and the friction of relative slip along fracture surfaces inside the specimen dissipate energy and finally cause the rock to lose cohesion.

Figure 11(a) shows that the total input strain energy increases with increasing strain rate. As can also be seen from Figure 11(b), when the strain rate is between 30 s−1 and 80 s−1, the peak releasable elastic strain energy increases gradually with the strain rate. When the strain rate reaches 143.8 s−1, the strain rate is so high that the cracks inside the granite specimen are destroyed before they can propagate, so less elastic strain energy is stored during failure and it is released quickly; the peak releasable elastic strain energy at 143.8 s−1 therefore drops sharply. With the increase of energy input, the energy dissipation also increases and the rock damage becomes more severe; the total input strain energy is also found to increase with the rock bridge angle.
From Figure 11(c), it can be seen that the dissipated energy gradually increases with increasing strain rate.

Figure 11 Relationship between strain energy density and strain rate: (a) U; (b) Ue; (c) Ud.

## 3.3. Dynamic Fracture Evolution Visualization

In order to obtain the dynamic rupture process of granite specimens under impact loading, a high-speed camera was used to record the crack propagation process of the rock specimens, so as to analyze crack propagation under different rock bridge angles and strain rates. The fracture process of the tested granite samples with different bridge angles is shown in Figure 12. It can be seen that both the strain rate and the bridge angle affect the propagation path of the cracks; in addition, the crack density increases with increasing strain rate.

Figure 12 Crack propagation recorded by high-speed camera for rock specimens at different strain rates: (a) rock bridge angle of 30°; (b) rock bridge angle of 50°; (c) rock bridge angle of 70°.

The fracture process shown in Figure 12 reveals the dynamic fracture evolution of the locked rock. Under impact loading, cracks first begin to extend from the tips of the two prefabricated fractures. As these tip cracks begin to extend, the two prefabricated fractures gradually compact and close, and their width decreases significantly. Once the cracks at the tips of the prefabricated fractures have extended to a certain degree, secondary fractures begin to appear. With increasing applied load, the width of the secondary fractures gradually grows until the fractures coalesce, their width increases abruptly, and the rock specimen fails.
Comparing the damage processes of rock specimens under different rock bridge angles and strain rates, it is found that the crack propagation path and network are more complex for rocks with larger bridge angles. With increasing strain rate, both the number and the width of cracks increase, because the ductility of the rock specimens increases with strain rate. However, there exists a critical strain rate beyond which the specimen fails before the internal cracks have time to propagate.

When the strain rate is relatively low, the damage of the granite has the typical character of axial splitting. XRD analysis shows that the granite specimens are rich in brittle minerals such as quartz and feldspar, which are highly susceptible to brittle fracture under impact loading. With increasing strain rate, the damage mode of the granite specimens changes from axial splitting to crushing. Cracks first appear along quartz-quartz or quartz-feldspar boundaries; with increasing external load, cracks in the quartz grains propagate rapidly, while separation along cleavage surfaces is the main damage mode of the black mica.

The locked segments in the granite specimens are marked with white ellipses, and the cracks at the locked segments are all through-going once the rocks reach the damage stage. For specimens at the same strain rate, the effect of the bridge angle on crack extension is also significant. A comparison of the crack extension processes at the strain rate of 31.4 s−1 in Figures 12(a)–12(c) shows that the crack width and the final damage of the rock specimen increase sharply with the bridge angle, which again indicates that the locked section controls the crack extension.

## 4.
Conclusions

In this work, granite specimens prefabricated with different rock bridge angles were dynamically loaded using the SHPB method to investigate their stress-strain behaviour and energy evolution as well as their dynamic rupture processes. Based on the above analysis, the main conclusions are summarized as follows:

(1) The peak stress of granite increases with the rock bridge angle, because the locking effect of the rock bridge on the interrupted joints increases the strength of the rock. The peak stress also increases with strain rate under impact loading, but the increment becomes progressively smaller.

(2) The peak total strain energy increases with both the rock bridge angle and the strain rate. The input energy is mainly transformed into elastic strain energy during the elastic deformation stage and is entirely released when the specimen fails, while the dissipated strain energy increases gradually with the energy input, at an ever faster rate.

(3) The crack propagation path is influenced by both the rock bridge angle and the strain rate. The larger the bridge angle, the more pronounced the crack extension and the more severe the damage during crack propagation; with increasing strain rate, the number and width of cracks also increase. The locked section controls the damage of the rock: the rock fails only when the cracks penetrate the locked section.

---

*Source: 1016412-2021-10-05.xml*
2021
# Frequent Pattern Mining of Eye-Tracking Records Partitioned into Cognitive Chunks

**Authors:** Noriyuki Matsuda; Haruhiko Takeuchi

**Journal:** Applied Computational Intelligence and Soft Computing (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101642

---

## Abstract

Assuming that scenes would be visually scanned by chunking information, we partitioned the fixation sequences of web page viewers into chunks using isolate gaze point(s) as the delimiter. Fixations were coded in terms of the segments in a 5 × 5 mesh imposed on the screen. The identified chunks were mostly short, consisting of one or two fixations. These were analyzed with respect to the within- and between-chunk distances in the overall records and the patterns (i.e., subsequences) frequently shared among the records. Although the two types of distances were both dominated by zero- and one-block shifts, the primacy of the modal shifts was less prominent between chunks than within them. The lower primacy was compensated by longer shifts. The patterns frequently extracted at three threshold levels were mostly simple, consisting of one or two chunks. The patterns revealed interesting properties as to segment differentiation and the directionality of the attentional shifts.

---

## Body

## 1. Introduction

Eyes seldom stay completely still. They continually move even when one tries to fixate one's gaze on an object because of the tremors, drifts, and microsaccades that occur on a small scale [1]. Hence, researchers need to infer a fixation from consecutive gaze points clustered in space [2]. We may regard such a cluster of gaze points as a perceptual chunk, a familiar term in psychology after Miller [3], referring to a practically meaningful unit of information processing.

During fixation, people closely scan a limited part of the scene they are interested in.
They then quickly move their eyes to the next fixation area by a saccade, which momentarily disrupts vision. This disruption normally goes unnoticed thanks to our visual system, which produces continuous transsaccadic perception [4–6]. It means that successive fixations constitute a higher-order chunking over and above the primary chunking of gaze points. Put metaphorically, the relationship of gaze points, fixations, and fixation chunks is analogous to that of letters, words, and phrases. For the sake of brevity, a chunk of fixations will be referred to simply as a chunk.

In viewing natural scenes or displays, a chunk continues to grow until it is interrupted by one or more isolate gaze points resulting from drifting attention or from accident; these do not participate in any fixation. Whatever causes the interruption, we believe that such isolate points serve as chunk delimiters, like the pauses in speech. As a pause can be either short or long, interruptions by isolate points can vary in length. Figure 1 illustrates two levels of chunking: (a) chunking of gaze points into fixations and (b) chunking of consecutive fixations with and without interruption.

Figure 1 Two fixations in one chunk (a) and in separate chunks (b).

Granting our conjecture, one may still wonder what particular merits will accrue from the analysis of chunks in lieu of ordinary plain fixation sequences. The expected merits are twofold: separation of between- and within-chunk patterns and extraction of common patterns across records. Neither of these is attainable when dealing with multiple records by heat maps of fixations accumulated with no regard to sequential connections [7], by network analysis of the adjacent transitions accumulated within and between records [8–10], or by scan paths that would be too complicated [11] unless reduced to frequently shared subpaths. The key to understanding this point lies in the structure of fixation sequences, as explained below.

### 1.1.
Structure of Fixation Sequences

Equation (1) presents two types of fixation sequences, one plain and the other partitioned, both arranged in time order. The former serves as a basis for heat maps, scan paths, and network analysis. The latter incorporates chunks delimited by isolate gazes or any other appropriate criterion. The essential nature of the sequences remains the same whether fixations are coded in areas of interest (AOI) or in grid-like segments.

Plain and Partitioned Fixation Sequences. Consider

(1) Plain: F₁ F₂ F₃ ⋯ Fᵢ Fᵢ₊₁ Fᵢ₊₂ ⋯; Partitioned: [F₁ F₂], [F₃], …, [Fᵢ Fᵢ₊₁ Fᵢ₊₂ ⋯], …,

where Fᵢ denotes the ith fixation.

Although it was not explicitly stated, McCarthy et al. [12] in effect extracted chunks from partitioned sequences in their work on the importance of web page objects and their locations. They grouped consecutive fixations within each AOI into a chunk called a glance, obtaining plain sequences of glances coded in AOI. Their interest was to see how often areas of web pages would attract glances as the area locations and the types of tasks were varied.

By focusing on the frequency of glances as an indication of importance, they disregarded the length of the chunks, that is, the number of fixations within glances. Also disregarded was the shift of glances, that is, the between-chunk sequences. To us, both within- and between-chunk patterns seem to contain rich information worthy of investigation. This information can be extracted from partitioned sequences but not from plain ones. In addition, partitioned sequences will be of great value when some AOIs are nested into broader AOIs (see [16]), given appropriate coding.
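The partitioning in (1) can be sketched in a few lines of code. The sketch below is illustrative only: the segment codes and the event stream are invented, and the convention of marking an isolate gaze point with a sentinel value is an assumption made for the example:

```python
# Illustrative sketch: partitioning a plain fixation sequence into chunks.
# Each event is a fixation code (e.g. a mesh segment such as "C3") or the
# sentinel ISOLATE marking an isolate gaze point that delimits chunks.
ISOLATE = None

def partition(events):
    """Split a plain fixation sequence into chunks at isolate gaze points."""
    chunks, current = [], []
    for e in events:
        if e is ISOLATE:          # delimiter: close the current chunk
            if current:
                chunks.append(current)
                current = []
        else:
            current.append(e)
    if current:                   # flush the trailing chunk, if any
        chunks.append(current)
    return chunks

plain = ["C3", "C3", ISOLATE, "B2", "C2", "C3", ISOLATE, ISOLATE, "D4"]
print(partition(plain))  # [['C3', 'C3'], ['B2', 'C2', 'C3'], ['D4']]
```

Note that consecutive delimiters simply produce a longer interruption rather than an empty chunk, matching the short-versus-long pause analogy above.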
The present study is extensible to such a hierarchical structure of nested AOIs.

For the sake of simplicity, we will focus on the eye movements of web page viewers, and we will assume that the pages are divided into grid-like AOIs, that the fixations are coded in terms of the areas in which they fall, and that the chunks are delimited by isolate gaze points.

### 1.2. Shifts of Interest within and between Chunks

The distance between two successive fixations indicates how far the interest shifted, or did not shift in the case of a looped transition, which represents sustained interest in a given area. In our view, a chunk of fixations reflects continuous interest, and a new one begins after a momentary drift of the gaze. It seems natural to expect the distance distribution of the within-chunk shifts to differ, to some extent, from that of the between-chunk shifts.

The distance analysis explained above exploits information from the cumulative records across all viewers. Hence, it is possible that the results are influenced by some dominant patterns in particular records. If one is interested in sequential regularities often shared among records, frequent sequential pattern mining is useful, as explained below.

### 1.3. Frequent Sequential Pattern Mining

Among others, we will employ PrefixSpan, developed by Pei et al. [13, 14], because of its conceptual compatibility with the partitioned sequences of eye-tracking data. Their approach is briefly explained below using their example, shown in Table 1. (See the Appendix for a more formal explanation.) One can view the data as the eye-tracking records of four viewers in which fixations are alphabetically coded according to the areas of interest (AOI) they fall into: a, b, c, d, e, f, and g.

Table 1 Pattern extraction by prefix "a" at ms3.
| Record | Initial sequence | Patterns prefixed by a |
|---|---|---|
| 1 | [a] [abc] [ac] [d] [cf] | [abc] [ac] [d] [cf] |
| 2 | [ad] [c] [bc] [ae] | [_d] [c] [bc] [ae] |
| 3 | [ef] [ab] [df] [c] [b] | [_b] [df] [c] [b] |
| 4 | [e] [g] [af] [c] [b] [c] | [_f] [c] [b] [c] |
| Frequent codes | a/4, b/4, c/4, d/4, e/3, f/3 | b/4, c/4 |

Note. The underscore _ means that the prefix was present in the chunk; for example, _b implies ab.

Codes a, b, c, d, e, and f are all frequent, shared by the majority, whereas code g is infrequent, appearing only once. For further scanning, any infrequent or rare code is removed from the records, since it will never appear in a frequent pattern according to the a priori principle [15]. Let us set the level of being frequent at three for illustrative purposes. This level is called the minimum support threshold (abbreviated as ms).

For every frequent code, one scans the reduced records, devoid of infrequent codes, for patterns prefixed by the given code. Those found for prefix "a" are listed in the third column of Table 1. These are subject to further scanning with respect to the codes that are frequent at this step, that is, b and c. This process continues recursively until no code is frequent or no patterns remain in the records. Note that the prefixes grow at each step, for example, "[a] [b]" and "[a] [c]" in the above example (see the Appendix for a more formal explanation).

Table 2 lists the 14 frequent patterns extracted at ms3 from the initial records, including those found at ms4 as an embedded part. For instance, [a] [c], found at ms4, is embedded in the patterns [a] [c] and [a] [c] [c] at ms3; that is,

(2) [a] [c] ⊆ [a] [c], [a] [c] [c].

Similarly, those found at ms3 are included in the patterns at ms2 reported by Pei et al. [13, 14]. Such inclusion relations generally hold between different ms levels.

Table 2 All of the frequent patterns extracted from Table 1 at ms3.
[a], [a] [b], [a] [c], [a] [c] [b], [a] [c] [c], [b], [b] [c], [c], [c] [b], [c] [c], [d], [d] [c], [e], [f]

Note. Underscored patterns were extracted at ms4.

Ordinarily, one finds too few patterns at a high ms level and too many at a low level to make an interesting analysis. However, once one recognizes the inclusion relations, making use of multiple levels becomes a plausible way to identify strongly frequent patterns as opposed to mildly and weakly frequent ones. (See the Appendix for the relation networks among the patterns identified at ms2, ms3, and ms4.)

The present approach is expected to advance eye-tracking research along with the conventional heat maps, scan paths, and network analysis recently developed by Matsuda and Takeuchi [8–10].

## 2. Method

### 2.1. Subjects (Ss)

Twenty residents (seven males and 13 females) living near the AIST Research Institute in Japan were recruited for the experiments. They had normal or corrected vision, and their ages ranged from 19 to 48 years (average 30). Ten of the Ss were university students, five were housewives, and the rest were part-time workers. Eleven Ss were heavy Internet users, while the rest were light users, as judged from their reports about the number of hours they spent browsing online in a week.

### 2.2.
Stimuli

The front (or top) pages of ten commercial web sites were selected from various business areas: airline companies, commerce and shopping, and banking. These were classifiable into three groups according to their layout types [8–10]. Due to space limits, we chose four pages with the same layout, consisting of the top and the principal layers. The principal layers were divided into the main area in the middle and subareas on both sides. The layers and the areas differed in size among the pages.

### 2.3. Apparatus and Procedure

The stimuli were presented at 1024 × 768 pixel resolution on the TFT 17′′ display of a Tobii 1750 eye-tracking system sampling at 50 Hz. The web pages were displayed to the Ss in random order, one at a time, for 20 sec each. The Ss were asked to browse each page at their own pace. The translated instructions are: "Various web pages will be shown on the computer display in turn. Please look at each page as you usually do until the screen darkens, and then click the mouse button when you are ready to proceed." The Ss were informed that the experiment would last approximately five minutes.

### 2.4. Segment Coding

A 5 × 5 mesh was superposed on the effective part of each page, after the page was stripped of white margins containing no text or graphics. A uniform mesh was employed for ease of comparison among pages that varied in design beyond the basic layout. The distance of a shift between two segments was measured by the Euclidean distance, computed as √(m² + n²), where m and n are the numbers of blocks (i.e., segments) moved along the horizontal and vertical axes.

The rows (and columns) of the mesh were alphabetically (and numerically) labeled in descending order: A through E (and 1 through 5). The segments were coded by combining these labels, as seen in Figure 2: A1, A2, …, A5 for the first row; B1, …, B5 for the second; and so on through E1, …, E5 for the fifth row.

Figure 2 Segment coding.

### 2.5.
Fixation Sequences The raw tracking data for each subject consisted of time-stamped gaze points measured in xy-coordinates. The gaze points were grouped into a fixation point if they stayed within a radius of 30 pixels for 100 msec. Otherwise, they remained isolate. Each fixation was then translated into code sequences according to the segments in which the fixation fell. Finally, each fixation sequence was partitioned into chunks using the isolate gaze points as delimiters.

### 2.6. Preprocessing the Codes for PrefixSpan

In accord with the algorithm, the 25 segments were first recoded using letters a through y; then the codes in each chunk were alphabetically ordered with no duplication. In this process, we represented within-chunk loops by extra recoding: consecutively repeated codes within a chunk were replaced by the corresponding capital letter, for example, [caaababaa] to [cAbabA]. After eliminating duplicates, we sorted the codes within each chunk, for example, [Aabc] from the original sequence. Consequently, we maintained the sequential order among chunks, but the within-chunk sequences could have been distorted. Due to this possibility, we were unable to identify between-chunk loops. Frequent patterns were extracted at three levels of minimum support (denoted as ms12, ms14, and ms16) corresponding to 60, 70, and 80% of the subjects.
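The loop compression and within-chunk normalization just described can be sketched in a few lines of Python (an illustrative reconstruction, not the authors' code; the function names are ours):

```python
import re

def compress_loops(chunk):
    """Replace runs of 2+ identical codes by the capital letter:
    'caaababaa' -> 'cAbabA' (within-chunk loop recoding)."""
    return re.sub(r"([a-y])\1+", lambda m: m.group(1).upper(), chunk)

def normalize_chunk(chunk):
    """Compress loops, drop duplicates, and sort the codes
    (capitals sort before lowercase, giving e.g. 'Aabc'),
    producing the itemset form required by PrefixSpan."""
    return "".join(sorted(set(compress_loops(chunk))))

print(compress_loops("caaababaa"))   # cAbabA
print(normalize_chunk("caaababaa"))  # Aabc
```

Note that, as the text points out, this normalization preserves the order among chunks but deliberately discards the within-chunk order.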
## 3. Results

The four pages used as stimuli will be referred to as P1, P2, P3, and P4.

### 3.1. Examination of the Chunks

The total number of chunks did not differ greatly among pages, ranging from 539 (P2) to 592 (P1). The pages agreed well on the lengths and proportions of primary, secondary, and tertiary chunks, which contained one, two, and three fixations, respectively. Primary chunks accounted for 53.3 (P4) to 60.4% (P1) of the total chunks, and secondary chunks accounted for 21.9 (P1) to 25.1% (P4). Putting the primary and secondary chunks together, the vast majority of the chunks (≥78.4%) were very short.
The proportions of the tertiary chunks were much smaller, ranging from 6.9 (P3) to 12.2% (P4). The longer chunks accounted for 7.9 (P1) to 11.6% (P3).

The primary (most frequent) shifts within double-fixation chunks were loops (distance = 0) across pages, accounting for 48.5 (P1) to 62.2% (P2). The pages agreed also on the secondary (√1) and tertiary (√2) distances, which involved adjacent segments connected laterally (or vertically) and diagonally, respectively. The proportion of the former ranged from 30.0 (P2) to 40.8% (P1). In contrast, that of the latter was much smaller (≤8.8%). Put together, the overwhelming majority of the double-fixation chunks (≥88.4%) were homogenous (i.e., loops) or minimally heterogeneous (√1).

Loops and one-block shifts were also dominant among the chunks of length three or more. Loops accounted for 49.2 (P4) to 60.8% (P3) of the shifts, and one-block shifts accounted for 34.4 (P3) to 42.7% (P1) of them. Putting these together, the overwhelming majority (≥91.0%) of the shifts within longer chunks were extremely short in distance.

Extremely short shifts (≤√1) were likewise modal among between-chunk transitions, though in the reverse order and less prominently than within chunks. Primary one-block shifts accounted for 33.9 (P2) to 37.6% (P3) of the total between-chunk shifts, and loops accounted for 22.3 (P1) to 30.3% (P2). Their combined proportions ranged from 57.5 (P1) to 64.2% (P2).

The lower prominence of the first two modal shifts was compensated by the relatively large proportions of the longer ones. Each of the two-block shifts (√2 and √4) exceeded the 10% level on all pages, with the exception of 7.7% (√2) on P2. Compared to the paucity of three-block shifts (√5) within chunks (≤1.7%), the corresponding proportion between chunks, which ranged from 7.1 (P2) to 9.4% (P1), was noteworthy.
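The distances quoted throughout this section (√1, √2, √4, √5, √8, …) follow directly from the segment coding of Section 2.4; a minimal sketch, assuming row letters A–E and column digits 1–5:

```python
import math

ROWS = "ABCDE"  # mesh rows, top to bottom

def distance(seg1, seg2):
    """Euclidean shift distance between two mesh segments, e.g. 'A1', 'B3'.

    m and n are the numbers of blocks moved horizontally and vertically;
    the distance is sqrt(m**2 + n**2), as defined in Section 2.4.
    """
    r1, c1 = ROWS.index(seg1[0]), int(seg1[1])
    r2, c2 = ROWS.index(seg2[0]), int(seg2[1])
    m, n = abs(c2 - c1), abs(r2 - r1)
    return math.sqrt(m**2 + n**2)

print(distance("A1", "A1"))  # 0.0, a loop
print(distance("B2", "B3"))  # 1.0, a one-block lateral shift
print(distance("B2", "C3"))  # sqrt(2), a diagonal shift
print(distance("A1", "C3"))  # sqrt(8), a long-distance shift
```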
Similarly noticeable was the size of the long-distance shifts (≥√8), which ranged from 7.3 (P3) to 10.2% (P1), while such shifts were nonexistent or negligible (0.8% on P1) within chunks.

### 3.2. Examination of the Frequent Patterns

The frequent patterns extracted at the three ms levels (ms12, ms14, and ms16) are inclusive within each page in the sense that (a) subpatterns of a frequent pattern are also frequent at a given level and (b) the patterns extracted at a higher level are included in those at a lower level. For the sake of simplicity, the term “frequent” will be omitted below when obvious. Prior to mining, special coding was applied to the within-chunk loops, as explained in Section 2.

As seen in Table 3, the patterns were generally short, consisting of one or two chunks across pages at all ms levels. The longer ones (one on P2 and five on P3), all of length three, were found only at ms12. The constituent chunks were simple in composition, being a single fixation or a single loop. The loops were limited to (A1A1), (B1B1), and (D1D1), all located in the first column of the mesh. (These will be denoted as (A1..), (B1..), and (D1..).) The (D1..) loop appeared only on P2 at ms12, by itself, unaccompanied by any other chunk. (B1..) appeared alone on P1 at ms12 and on P2 at all ms levels. Also, it was paired with B3 on P2 both as a prefix (ms12) and as a postfix (ms12 and ms14). (A1..) appeared by itself on P2 (ms12), P3 (ms12, ms14, and ms16), and P4 (ms12 and ms14) and also as a prefix to other chunk(s) on P3 (ms12, ms14, and ms16) and P4 (ms12). None of these postfix segments were in the first column. The postfixes on P3 were A2 (ms12, ms14, and ms16); A3 and B3 (ms12 and ms14); and B2, B3, B4, B5, C3, and D2 (ms12), in addition to A2A2 and B3B3 (ms12). Those on P4 were B4, C3, C4, D3, and D4 (ms12).

Table 3. Number of patterns (N) by length (len) by ms.

| Page | len | N (ms12) | Loops (ms12) | N (ms14) | Loops (ms14) | N (ms16) | Loops (ms16) |
|------|-----|----------|--------------|----------|--------------|----------|--------------|
| P1 | 1 | 18 | B1/1 | 11 | | 5 | |
| P1 | 2 | 19 | | 4 | | 1 | |
| P2 | 1 | 15 | A1/1, B1/1, D1/1 | 9 | B1/1 | 5 | B1/1 |
| P2 | 2 | 14 | B1/2 | 4 | B1/1 | 1 | |
| P2 | 3 | 1 | | | | | |
| P3 | 1 | 14 | A1/1 | 12 | A1/1 | 7 | A1/1 |
| P3 | 2 | 34 | A1/8 | 11 | A1/3 | 2 | A1/1 |
| P3 | 3 | 5 | A1/2 | | | | |
| P4 | 1 | 18 | A1/1 | 15 | A1/1 | 6 | |
| P4 | 2 | 20 | A1/5 | 3 | | 1 | |

Note. The length of a pattern (len) is the number of constituent chunks. Also listed are the identified within-chunk loops, with the number of patterns in which they appeared.

In the six patterns of length three found on P2 and P3 at ms12, the constituent codes were partially or totally homogenous. Five of them contained two repeated codes, either A2 or B3, including those prefixed by (A1..) as reported above. The remaining one, found solely on P3, contained A2. In the following examination of the double-chunk patterns, loops will be treated as single codes to reduce complexity.

The double-chunk patterns are listed in Table 4 by the direction of the sequences: upward, homogenous, horizontal, and downward. Superscripts L and R denote leftward and rightward sequences. Underscored patterns were extracted at ms14 and above. Those found only at ms16 are further emphasized in italicized bold face. The total number of patterns varied from 13 (on P2 and P4) to 34 (P3).

Table 4. Double-chunk patterns by direction.

| Page | Direction | Patterns |
|------|-----------|----------|
| P1 | ↑ | B2A2, B2A3R, B2A4R |
| P1 | == | B2B2, B3B3 |
| P1 | ↔ | A1A4R, B2B3R, A2A4R, A3A4R, B2B1L, B2B5R, B3B2L, B3B4R |
| P1 | ↓ | A1D3R, B2C3R, B2C4R, B2D2, B2D3R, B3D3 |
| P2 | ↑ | B3A1L, B3A3 |
| P2 | == | B3B3, A1A1, B1B1, B2B2 |
| P2 | ↔ | B3B1L, B1B3R, B3B2L |
| P2 | ↓ | B3C2L, B3C3, B3D2L, B3D3 |
| P3 | ↑ | B2A3R, B2A2, C3A2L |
| P3 | == | A2A2, B3B3, B2B2, C3C3 |
| P3 | ↔ | A1A2R, A1A3R, A2A3R, B3B4R, A3A2L, B2B3R, B3B2L, B3B5R, C2C1L |
| P3 | ↓ | A1B3R, A2B3R, A2B4R, A2C3R, A1B2R, A1B4R, A1B5R, A1C3R, A1D2R, A2B2, A2B5R, A2D2, A2D3R, A2D5R, B2C3R, B3D2L, B3D3, C3D2L |
| P4 | ↑ | (none) |
| P4 | == | C3C3 |
| P4 | ↔ | C3C4R, A2A3R, B3B4R, B3B5R, C3C2L |
| P4 | ↓ | A2B4, A2C4R, A3B4R, B2C3R, B3C3, B3C4R, C3D3 |

Note. The sequence directions are upward (↑), homogenous (==), horizontal (↔), and downward (↓). Underscored patterns were extracted at ms14.
Those extracted at ms16 are also emphasized in italicized bold face. Leftward and rightward sequences are marked by superscripts L and R, respectively.

At ms16, the patterns were homogenous (B2B2 on P1; B3B3 on P2) or horizontal (A1A2 on P3; C3C4 on P4) sequences, with the exception of the down-rightward pattern A2B3 on P3. There was no leftward heterogeneous pattern.

The new patterns found at ms14 included an upward sequence (B2A3 on P3) and five downward sequences (B3C2 on P2; A1B3, A2B4, and A2C3 on P3; and A2B4 on P4), in addition to four homogenous sequences (B3B3 on P1; A2A2 and B3B3 on P3; C3C3 on P4) and six horizontal sequences (A1A4 and B2B3 on P1; B3B1 on P2; and A1A3, A2A3, and B3B4 on P3). Among the 12 heterogeneous patterns, only two (B3B1 and B3C2 on P2) were leftward.

The patterns extracted at ms14 and above had no segments in rows D and E and none in the fifth column. None of the seven upward and downward sequences was strictly vertical; they involved adjacent or nonadjacent columns in a ratio of 4 to 3. These sequences mostly involved adjacent rows (6 out of 7).

Some of the constituent segments of the sequences at ms14 and above appeared solely as prefixes (A1 on P1 and P3; A2 on P4) or as postfixes (B3 on P1; B1 and C2 on P2; A3, B3, B4, and C3 on P3; B4 and C4 on P4).

The new double-chunk patterns found at ms12 showed (a) segments in row D and in column 5, (b) notable positions of the new segments, (c) more heterogeneous patterns, (d) more sequences between nonadjacent rows, (e) strictly vertical sequences, and (f) bilateral sequence pairs. The segments in row D appeared only as postfixes in the downward sequences (D2 and D3 on P1 and P2; D2, D3, and D5 on P3; and D3 on P4). Similarly, the new segments found in row C were postfixes (C3 and C4 on P1; C3 on P2; C1 on P3; and C2 on P4), with a single exception (C2 on P3).
The new segments in row B were mostly postfixes: B1, B4, and B5 on P1; B5 on P3; and B4 and B5 on P4. B2 and B3 on P4 were prefixes. B2 on P2 was a special case, being a prefix to itself (B2B2). Dual roles were more notable than unary ones among the new segments in row A (A2 and A3 on P1, A1 on P2, and A3 on P4).

A total of seven new upward sequences were found: three on P1 and two on each of P2 and P3, but still none on P4. These were prefixed by B2 (on P1 and P3), B3 (P2), or C3 (P3) and postfixed by segments in row A (A1, A2, A3, or A4). Only C3A2 involved nonadjacent rows. A strictly vertical sequence was present on each of P1, P2, and P3 (B2A2, B3A3, and B2A2, respectively). The rest were rightward (B2A3 and B2A4 on P1) or leftward (B3A1 on P2; C3A2 on P3).

A total of five new homogenous sequences were found on P2 and P3: one in row A (A1A1 on P2), three in row B (B1B1 on P2 and B2B2 on P2 and P3), and one in row C (C3C3 on P3). Like those at ms14 and above, none of the constituents were in columns 4 or 5.

A total of 17 new horizontal sequences were found on P1 (two in row A and four in row B), P2 (two in B), P3 (one in A, three in B, and one in C), and P4 (one in A, two in B, and one in C). A2 and A3 appeared as prefixes or postfixes, while A4 appeared only as a postfix. The same held for B1, B2, and B3, while B4 and B5 appeared only as postfixes. C2 assumed dual positions in C2C1 on P3 and C3C2 on P4, both of which were leftward. The ratio of leftward to rightward sequences was 2:4, 1:1, 3:2, and 1:3 in the order of P1, P2, P3, and P4.

A total of 29 new downward sequences were found: six on P1, three on P2, 14 on P3, and six on P4. The prefixes concentrated in rows A and B with two exceptions (C3D2 on P3 and C3D3 on P4). In contrast, the postfixes concentrated in rows C and D, with exceptions of five patterns on P3 and one on P4.
Half or more of the downward patterns on P1, P2, and P3 involved nonadjacent rows (A-D/1 and B-D/3 on P1; B-D/2 on P2; and A-C/1, A-D/5, and B-D/2 on P3, where n in row-row/n denotes the number of cases), whereas only A2C4 out of six patterns did so on P4. The strictly vertical patterns were limited to columns 2 and 3 (B-D/2 on P1; B-C/1 and B-D/1 on P2; A-B/1, A-D/1, and B-D/1 on P3; and B-C/1 and C-D/1 on P4). The rest were rightward on P1 and P4, leftward on P2, or mixed on P3.

Among all of the patterns in Table 4, the heterogeneous sequences were mostly unilateral, in that the symmetric pairs were limited in number (B2B3-B3B2 on P1; B1B3-B3B1 on P2; A2A3-A3A2, A2B2-B2A2, A2C3-C3A2, and B2B3-B3B2 on P3; and none on P4). Four of these were horizontal sequences. The constituents were limited to a subset consisting of the first three rows and columns, that is, {A2, B1, B2, B3, C3}.

The individual constituents of the multichunk patterns were frequent by themselves as primitive patterns at a given ms level, but not vice versa. Table 5 lists the isolate primitive patterns not participating in any multichunk pattern at a given ms level. While the total number of primitive patterns monotonically decreased from ms12 to ms16, the ratio of isolate primitives to total primitives increased almost monotonically on all pages. The ratios at ms12, ms14, and ms16 were 4/17, 7/11, 4/5 (P1); 4/13, 5/8, 2/4 (P2); 0/13, 4/11, 3/6 (P3); and 5/17, 9/14, 4/6 (P4). The sole exception was between the second and third ratios on P2. There were no isolates on P3 at ms12.

Table 5. Isolate primitives by ms level.

| Page | ms12 | ms14 | ms16 |
|------|------|------|------|
| P1 | A5, C2, C5, E3 | **A2**, A3, B1, B5, C3, C4, D3 | A1, **A2**, A4, B3 |
| P2 | A2, B4, C1, D1 | **A1**, **A3**, B2, C3, D3 | **A1**, **A3** |
| P3 | (none) | B5, C1, C2, D2 | A3, B2, C3 |
| P4 | A5, **B5**, **C1**, **C5**, D2 | B1, B2, **B3**, **B5**, **C1**, C2, **C5**, D3, D4 | A2, **B3**, B4, **C5** |

Note.
Primitives in bold face were persistent at two or three ms levels.

Generally, an isolate primitive at a given ms level would become a member of sequence(s) at a lower level and would not be present at a higher level. Exceptionally, C5, located in the rightmost column, persisted on P4 as an isolate at all ms levels. Partial persistence was observed between ms14 and ms16 on P1 (A2), P2 (A1, A3), and P4 (B3), as well as between ms12 and ms14 on P4 (B5, C1). No persistence was observed on P3. The persistent ones on P1 and P2 were limited to the first three columns of the top row, {A1, A2, A3}, whereas those on P4 spread over rows B and C in columns 1, 3, and 5, that is, {B3, B5, C1, C5}.

Finally, E3 on P1 at ms12 was the sole frequent segment in the bottom row E, where segments were generally infrequent across pages at all ms levels.
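The inclusion relations among patterns extracted at different ms levels (discussed in Section 1 and at the start of Section 3.2) can be checked mechanically with the standard subsequence-of-itemsets relation used by PrefixSpan. A minimal sketch (illustrative only; the function name is ours):

```python
def is_subpattern(sub, sup):
    """Check whether sequential pattern `sub` is embedded in `sup`.

    Patterns are lists of itemsets (frozensets of codes); `sub` is
    embedded in `sup` if its itemsets map, in order, onto itemsets
    of `sup` that contain them.
    """
    i = 0
    for itemset in sup:
        if i < len(sub) and sub[i] <= itemset:  # subset test
            i += 1
    return i == len(sub)

# The example of Section 1: [a][c] is embedded in [a][c][c].
ac  = [frozenset("a"), frozenset("c")]
acc = [frozenset("a"), frozenset("c"), frozenset("c")]
acb = [frozenset("a"), frozenset("c"), frozenset("b")]

print(is_subpattern(ac, acc))   # True
print(is_subpattern(acb, acc))  # False
```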
## 4. Discussion

Eye-tracking researchers have inferred a fixation from gaze points closely clustered in space and time, treating it as a meaningful unit of information processing, that is, a chunk, a familiar concept in psychology. Chunking of lower-level chunks into a higher one is not uncommon, as seen in the relationships letter, word, phrase, sentence, paragraph, and so on. The present paper examined the patterns of second-order chunks, that is, chunks of fixations, using isolate gaze point(s) not participating in any fixation as the delimiter. The delimiter was assumed to play an auxiliary role in chunking, like a pause in speech.

Most of the identified chunks were short, consisting of one or two fixations. Also, the transitions within multifixation chunks and between chunks were mostly short in distance, either loops or one-block shifts to adjacent segments. These seem to be attributable to the minimum criterion of the delimiter we employed: at least one isolate gaze point. Hence, even an accidental dislocation of one’s gaze resulted in chunking. It would be ideal if we could separate cognitively meaningful chunking from accidental chunking. Until an effective method is established, the best we can do is to be cautious in interpreting the results.

Actually, setting an appropriate criterion is a difficult task due to the possible individual and situational variations.
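The delimiter criterion under discussion is easy to make explicit in code. The sketch below partitions a stream of fixations and isolate gaze points into chunks, with the minimum number of consecutive isolates treated as a tunable parameter (our illustration of the idea, not the authors' implementation):

```python
def partition_into_chunks(events, min_isolates=1):
    """Split an event stream into chunks of fixations.

    `events` is a sequence of ("fix", code) or ("iso", None) tuples;
    a run of at least `min_isolates` consecutive isolate gaze points
    acts as a chunk delimiter. Returns a list of chunks, each a list
    of segment codes.
    """
    chunks, current, iso_run = [], [], 0
    for kind, code in events:
        if kind == "fix":
            current.append(code)
            iso_run = 0
        else:  # isolate gaze point
            iso_run += 1
            if iso_run == min_isolates and current:
                chunks.append(current)  # close the current chunk
                current = []
    if current:
        chunks.append(current)
    return chunks

stream = [("fix", "B2"), ("fix", "B2"), ("iso", None),
          ("fix", "B3"), ("iso", None), ("iso", None), ("fix", "C3")]
print(partition_into_chunks(stream))                  # [['B2', 'B2'], ['B3'], ['C3']]
print(partition_into_chunks(stream, min_isolates=2))  # [['B2', 'B2', 'B3'], ['C3']]
```

Raising `min_isolates` merges chunks separated by accidental single-point dislocations, which is one concrete way of tightening the criterion discussed here.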
Perhaps individuated criteria will be appropriate instead of a uniform criterion. Further investigation of the distributions of gaze points participating in fixations and those that are isolated is necessary.

As reported earlier, within- and between-chunk transitions were similar in that the first two modal distances were zero (i.e., loops) and one block. However, they differed in order and in magnitude. Loops were primary among within-chunk transitions but secondary among between-chunk transitions. The opposite was true for the one-block shifts. Next, the proportions of the primary and secondary distances of the within-chunk transitions exceeded the respective proportions pertaining to the between-chunk transitions. Similarly, there were more long-distance shifts between chunks than within them.

These results seem to suggest that the attention of our subjects was most likely shifted, after a pause, to an adjacent segment one block away or within the same segment. The medium- or long-distance shifts were also separated by pauses, though their proportions were smaller than those of the short ones. Shifts without a pause, that is, within-chunk shifts, were short, chiefly occurring in the same segment or between adjacent segments one block away.

Now we turn to a discussion of the frequent patterns (i.e., subsequences) extracted by PrefixSpan. The patterns were simple in structure, mostly consisting of single or double chunks. Furthermore, the chunks themselves contained single fixations or single loops, as expected from the chunk properties discussed above. More complex structures might have resulted if we had employed less stringent criteria for the delimiter. Even so, beneath the structural simplicity, interesting properties emerged as to the segment differentiation and the directional unevenness in attentional shifts.

First, the within-chunk loops were limited to (A1..), (B1..), and (D1..), all of which were in the leftmost column. While the presence of (D1..)
was quite limited, the leading roles of (A1..) and (B1..) as prefixes in the multichunk sequences are noteworthy. These roles might be attributable to the menu items placed in those segments. Second, the multichunk sequences chiefly consisted of the segments in rows A, B, and C. In particular, the leading role of A1 on P1 and P3 was noteworthy, like the loop (A1..), though its dual role as pre- and postfix was observed on P2. In contrast, A4, B4, and C4 were consistently positioned as postfixes. The same held for the segments in row D, which appeared only at the lowest ms level. The segments in row E were totally absent from multichunk sequences.

Third, the sequences at ms14 and ms16 were more likely to be horizontal, including homogenous codes, than downward and, to a much lesser extent, than upward; the upward direction remained least likely among the additional patterns found at ms12. The order between horizontal and downward sequences varied across pages at ms12.

By chunking eye-tracking records into smaller units, we discovered interesting properties of the eye movement of web page viewers. However, further studies seem necessary to enhance the present approach, for example, by setting up nested AOIs to reflect the hierarchical structure of the web objects [16] and by adjusting the chunk delimiters to accommodate individual and task variations. Besides these refinements, we are planning an application of mined frequent patterns to simultaneous clustering [17] of subjects and the properties of their eye movement and other relevant indices.

---

*Source: 101642-2014-11-23.xml*
# Frequent Pattern Mining of Eye-Tracking Records Partitioned into Cognitive Chunks

**Authors:** Noriyuki Matsuda; Haruhiko Takeuchi

**Journal:** Applied Computational Intelligence and Soft Computing (2014)

**Category:** Engineering & Technology

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2014/101642
---

## Abstract

Assuming that scenes would be visually scanned by chunking information, we partitioned fixation sequences of web page viewers into chunks using isolate gaze point(s) as the delimiter. Fixations were coded in terms of the segments in a 5 × 5 mesh imposed on the screen. The identified chunks were mostly short, consisting of one or two fixations. These were analyzed with respect to the within- and between-chunk distances in the overall records and the patterns (i.e., subsequences) frequently shared among the records. Although the two types of distances were both dominated by zero- and one-block shifts, the primacy of the modal shifts was less prominent between chunks than within them. The lower primacy was compensated by the longer shifts. The patterns frequently extracted at three threshold levels were mostly simple, consisting of one or two chunks. The patterns revealed interesting properties as to segment differentiation and the directionality of the attentional shifts.

---

## Body

## 1. Introduction

Eyes seldom stay completely still. They continually move even when one tries to fixate one's gaze on an object because of the tremors, drifts, and microsaccades that occur on a small scale [1]. Hence, researchers need to infer a fixation from consecutive gaze points clustered in space [2]. We may regard such a cluster of gaze points as a perceptual chunk, a familiar term in psychology after Miller [3], referring to a practically meaningful unit of information processing.

During fixation, people closely scan a limited part of the scene they are interested in. They then quickly move their eyes to the next fixation area by a saccade, which momentarily disrupts vision. However, the disruption normally goes unnoticed thanks to our visual system, which produces continuous transsaccadic perception [4–6]. This means that successive fixations constitute a higher-order chunking over and above the primary chunking of gaze points.
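The two-level chunking idea can be sketched in a few lines of code. This is our illustration, not the authors' implementation: fixation codes arrive in temporal order, with `None` standing for an isolate gaze point that acts as a chunk delimiter.

```python
# Minimal sketch (illustrative, not from the paper): partition a stream of
# fixation codes into chunks, using isolate gaze points (None) as delimiters.

def partition(events):
    """events: fixation codes in order, or None for an isolate gaze point."""
    chunks, current = [], []
    for e in events:
        if e is None:                 # isolate gaze point -> close the chunk
            if current:
                chunks.append(current)
                current = []
        else:                         # fixation -> extend the current chunk
            current.append(e)
    if current:                       # flush the trailing chunk, if any
        chunks.append(current)
    return chunks

# partition(["B2", "B2", None, "A1", None, None, "C3", "C4"])
# -> [["B2", "B2"], ["A1"], ["C3", "C4"]]
```

As in the speech-pause analogy, a run of one or more isolates acts as a single delimiter; consecutive `None` entries do not create empty chunks.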
Put metaphorically, the gaze, fixation, fixation-chunking relationship is analogous to the letter, word, phrase relationship. For the sake of brevity, a chunk of fixations will be referred to as a chunk.

In viewing natural scenes or displays, a chunk continues to grow until interrupted by one or more isolate gaze points resulting from drifting attention or by accident. These do not participate in any fixation. Whatever causes the interruption, we believe that such isolate points serve as chunk delimiters, like the pauses in speech. As a pause can be either short or long, interruptions by isolate points can vary in length. Figure 1 illustrates two levels of chunking: (a) chunking of gaze points into fixations and (b) chunking of consecutive fixations with and without interruption.

Figure 1: Two fixations in one chunk (a) and in separate chunks (b).

Granting our conjecture, one may still wonder what particular merits will accrue from the analysis of chunks in lieu of ordinary plain fixation sequences. The expected merits are twofold: separation of between- and within-chunk patterns and extraction of common patterns across records. Neither of these is attainable when dealing with multiple records by heat maps of fixations accumulated with no regard to sequential connections [7], by network analysis of the adjacent transitions accumulated within and between records [8–10], or by scan paths that would be too complicated [11] unless reduced to frequently shared subpaths. The key to understanding this point lies in the structure of fixation sequences, as explained below.

### 1.1. Structure of Fixation Sequences

Equation (1) presents two types of fixation sequences, one plain and the other partitioned, both arranged in time sequence. The former serves as a basis for heat maps, scan paths, and network analysis. The latter incorporates chunks delimited by isolate gazes or any other appropriate criterion.
The essential nature of the sequences remains the same when fixations are coded in areas of interest (AOI) or grid-like segments.

Plain and Partitioned Fixation Sequences. Consider

$$\text{Plain:}\quad F_1\,F_2\,F_3\cdots F_i\,F_{i+1}\,F_{i+2}\cdots \qquad \text{Partitioned:}\quad F_1 F_2,\; F_3,\; \ldots,\; F_i F_{i+1} F_{i+2}\cdots,\; \ldots \tag{1}$$

where $F_i$ denotes the $i$th fixation.

Although it was not explicitly stated, McCarthy et al. [12] in effect extracted chunks from partitioned sequences in their work on the importance of web page objects and their locations. They grouped consecutive fixations within each AOI into a chunk called a glance to obtain plain sequences of glances coded in AOI. Their interest was to see how often areas of web pages would attract glances by varying the area locations and the types of tasks.

By focusing on the frequency of glances as an indication of importance, they disregarded the length of the chunks, that is, the number of fixations within glances. Also disregarded was the shift of glances, that is, between-chunk sequences. To us, both within- and between-chunk patterns seem to contain rich information worthy of investigation. The information can be extracted from partitioned sequences but not from plain ones. In addition, partitioned sequences will be of great value when some AOIs are nested into broader AOIs (see [16]), given the appropriate coding. The present study is extensible to such a hierarchical structure.

For the sake of simplicity, we will focus on the eye movements of web page viewers, and we will assume that the pages are divided into grid-like AOIs, that the fixations are coded in terms of the areas in which they fall, and that chunks are delimited by isolate gaze points.

### 1.2. Shifts of Interest within and between Chunks

The distance between two successive fixations indicates how far the interest shifted, or did not shift in the case of a looped transition that represents sustained interest in a given area.
In our view, a chunk of fixations reflects continuous interest, and a new one begins after a momentary drift of the gaze. It seems natural to expect the distance distribution of the within-chunk shifts to differ, to some extent, from that of the between-chunk shifts.

The distance analysis explained above exploits information from the cumulative records across all viewers. Hence, it is possible that the results are influenced by some dominant patterns in particular records. If one is interested in sequential regularities often shared among records, frequent sequential pattern mining is useful, as explained below.

### 1.3. Frequent Sequential Pattern Mining

Among others, we will employ PrefixSpan, developed by Pei et al. [13, 14], because of its conceptual compatibility with the partitioned sequences of eye-tracking data. Their approach is briefly explained below using their example, seen in Table 1. (See the Appendix for a more formal explanation.) One can view the data as the eye-tracking records of four viewers in which fixations are alphabetically coded according to the areas of interest (AOI) they fall into: a, b, c, d, e, f, and g.

Table 1: Pattern extraction by prefix “a” at ms3.

| Record | Initial sequences | Patterns prefixed by a |
|--------|-------------------|------------------------|
| 1 | [a] [abc] [ac] [d] [cf] | [abc] [ac] [d] [cf] |
| 2 | [ad] [c] [bc] [ae] | [_d] [c] [bc] [ae] |
| 3 | [ef] [ab] [df] [c] [b] | [_b] [df] [c] [b] |
| 4 | [e] [g] [af] [c] [b] [c] | [_f] [c] [b] [c] |
| Frequent codes | a/4, b/4, c/4, d/4, e/3, f/3 | b/4, c/4 |

Note. The underscore _ means that the prefix was present in the chunk; for example, _b implies ab.

Codes a, b, c, d, e, and f are all frequent, shared by the majority, whereas code g is infrequent, appearing only once. For further scanning, any infrequent or rare code is to be removed from the records, since it will never appear in frequent patterns according to the a priori principle [15].
Let us set the level of being frequent at three for illustrative purposes. This level is called the minimum support threshold (abbreviated as ms).

For every frequent code, one scans the reduced records, devoid of infrequent codes, for patterns prefixed by the given code. Those found for prefix “a” are listed in the second column of Table 1. These are subject to further scanning with respect to the frequent codes at this step, that is, b and c. This process recursively continues until no code is frequent or no patterns remain in the records. Note that prefixes grow in each step, like “[a][b]”, “[a][c]” in the above example (see the Appendix for a more formal explanation).

Table 2 lists 14 frequent patterns extracted at ms3 from the initial record, including those found at ms4 as an embedded part. For instance, [a][c], found at ms4, is embedded in patterns [a][c] and [a][c][c] at ms3; that is,

$$[a][c] \subseteq [a][c],\ [a][c][c]. \tag{2}$$

Similarly, those found at ms3 are included in the patterns at ms2 reported by Pei et al. [13, 14]. Inclusive relations generally hold between different ms levels.

Table 2: All of the frequent patterns extracted from Table 1 at ms3.

**[a]**, **[a][b]**, **[a][c]**, [a][c][b], [a][c][c], **[b]**, [b][c], **[c]**, [c][b], [c][c], **[d]**, [d][c], [e], [f]

Note. Patterns in bold face were extracted at ms4 as well.

Ordinarily, one finds too few patterns at a high ms level and too many at a low level to make an interesting analysis. However, once one recognizes the inclusive relations, making use of multiple levels becomes a plausible solution for identifying strongly frequent patterns as opposed to mildly and weakly frequent ones. (See the Appendix for the relation networks among the patterns identified at ms2, ms3, and ms4.)

The present approach is expected to advance eye-tracking research along with conventional heat maps, scan paths, and network analysis recently developed by Matsuda and Takeuchi [8–10].

## 2. Method

### 2.1. Subjects (Ss)

Twenty residents (seven males and 13 females) living near the AIST Research Institute in Japan were recruited for the experiments. They had normal or corrected vision, and their ages ranged from 19 to 48 years (average 30). Ten of the Ss were university students, five were housewives, and the rest were part-time workers. Eleven Ss were heavy Internet users, while the rest were light users, as judged from their reports about the number of hours they spent browsing online in a week.

### 2.2. Stimuli

The front (or top) pages of ten commercial web sites were selected from various business areas: airline companies, commerce and shopping, and banking. These were classifiable into three groups according to the layout types [8–10]. Due to space limits, we chose four pages with the same layout, the top and the principal layers. The principal layers were divided into the main area in the middle and subareas on both sides. The layers and the areas differed in size among pages.

### 2.3. Apparatus and Procedure

The stimuli were presented with 1024 × 768 pixel resolution on a TFT 17′′ display in a Tobii 1750 eye-tracking system at a rate of 50 Hz.
The web pages were randomly displayed to the Ss one at a time for 20 sec. The Ss were asked to browse each page at their own pace. The translated instructions are “Various web pages will be shown on the computer display in turn. Please look at each page as you usually do until the screen darkens and then click the mouse button when you are ready to proceed.” The Ss were informed that the experiment would last for approximately five minutes.

### 2.4. Segment Coding

A 5 × 5 mesh was superposed on the effective part of each page, after the page was stripped of white margins that had no text or graphics. A uniform mesh was employed for ease of comparison among pages that varied in design beyond the basic layout. The distance of a shift between two segments was measured by the Euclidean distance, computed as √(m² + n²), where m and n are the numbers of blocks (i.e., segments) moved along the horizontal and vertical axes.

The rows (and columns) of the mesh were alphabetically (and numerically) labeled in descending order: A through E (and 1 through 5). The segments were coded by combining these labels, as seen in Figure 2: A1, A2, …, A5 for the first row; B1, …, B5 for the second; and so on through E1, …, E5 for the fifth row.

Figure 2: Segment coding.

### 2.5. Fixation Sequences

The raw tracking data for each subject consisted of time-stamped gaze points measured in xy-coordinates. The gaze points were grouped into a fixation point if they stayed within a radius of 30 pixels for 100 msec. Otherwise, they remained isolate.

Each fixation was then translated into code sequences according to the segments in which the fixation fell. Finally, each fixation sequence was partitioned into chunks using the isolate gaze points as delimiters.

### 2.6. Preprocessing the Codes for PrefixSpan

In accord with the algorithm, the 25 segments were first recoded using letters a through y; then the codes in each chunk were alphabetically ordered with no duplication.
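The block distance of Section 2.4 can be sketched with a short helper. This is our illustration, not the authors' code; segment labels follow the Figure 2 coding (rows A through E, columns 1 through 5).

```python
# Euclidean shift distance between two mesh segments, per Section 2.4's
# sqrt(m^2 + n^2), where m and n are horizontal and vertical block counts.

from math import hypot

ROWS = "ABCDE"  # row labels, top to bottom

def distance(seg_from, seg_to):
    """Distance in blocks between segments such as 'A1' and 'C4'."""
    m = abs(int(seg_to[1]) - int(seg_from[1]))                # horizontal blocks
    n = abs(ROWS.index(seg_to[0]) - ROWS.index(seg_from[0]))  # vertical blocks
    return hypot(m, n)

# distance("B2", "B2") -> 0.0 (a loop); distance("B2", "B3") -> 1.0;
# distance("A1", "B2") -> 1.414..., the sqrt(2) diagonal shift
```

The distances quoted in the Results (√1, √2, √4, √5, ≥√8) are exactly the values this measure takes for one-block, diagonal, two-block, and longer shifts.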
In this process, we represented within-chunk loops by extra recoding. Consecutively repeated codes within a chunk were replaced by the corresponding capital letter, for example, [caaababaa] to [cAbabA]. After eliminating duplicates, we sorted the codes within each chunk, for example, [Aabc] from the original sequence. Consequently, we maintained the sequential order among chunks, but the within-chunk sequences could have been distorted. Due to this possibility, we were unable to identify between-chunk loops.

Frequent patterns were extracted at three levels of minimum support (denoted as ms12, ms14, and ms16) corresponding to 60, 70, and 80% of the subjects.

## 3. Results

The four pages used as stimuli will be referred to as P1, P2, P3, and P4.

### 3.1. Examination of the Chunks

The total number of chunks did not greatly differ among pages, ranging from 539 (P2) to 592 (P1). The pages agreed well on the lengths and proportions of primary, secondary, and tertiary chunks that contained one, two, and three fixations, respectively. Primary chunks accounted for 53.3 (P4) to 60.4% (P1) of the total chunks, and secondary chunks accounted for 21.9 (P1) to 25.1% (P4). Putting the primary and secondary chunks together, the vast majority of the chunks (≥78.4%) were very short. The proportions of the tertiary chunks were much smaller, ranging from 6.9 (P3) to 12.2% (P4). The longer chunks accounted for 7.9 (P1) to 11.6% (P3).

The primary shifts of transitions within double-fixation chunks were loops (distance = 0) across pages. These accounted for 48.5 (P1) to 62.2% (P2). The pages agreed also on the secondary (√1) and tertiary (√2) distances, which involved adjacent segments connected laterally (or vertically) and diagonally, respectively. The proportion of the former ranged from 30.0 (P2) to 40.8% (P1). In contrast, that of the latter was much smaller (≤8.8%).
Put together, the overwhelming majority of the double-fixation chunks (≥88.4%) were homogenous, that is, loops, or minimally heterogeneous (√1).

Loops and one-block shifts were also dominant among the chunks of length three or more. Loops accounted for 49.2 (P4) to 60.8% (P3) of the shifts, and one-block shifts accounted for 34.4 (P3) to 42.7% (P1) of them. Putting these together, the overwhelming majority (≥91.0%) of the shifts within longer chunks were extremely short in distance.

Extremely short shifts (≤√1) were likewise modal among between-chunk transitions, though in reverse order and less prominently than among within-chunk transitions. Primary one-block shifts accounted for 33.9 (P2) to 37.6% (P3) of the total between-chunk shifts, and loops accounted for 22.3 (P1) to 30.3% (P2). Their combined proportions ranged from 57.5 (P1) to 64.2% (P2).

The low prominence of the first two modal shifts was compensated by the relatively large proportions of the longer ones. Each of the two-block shifts (√2 and √4) exceeded the 10% level on all pages, with the exception of 7.7% (√2) on P2. Compared to the paucity of three-block shifts (√5) within chunks (≤1.7%), the corresponding distance between chunks, which ranged from 7.1 (P2) to 9.4% (P1), was noteworthy. Similarly noticeable was the size of the long-distance shifts (≥√8), which ranged from 7.3 (P3) to 10.2% (P1), while such shifts were nonexistent or negligible (0.8% on P1) within chunks.

### 3.2. Examination of the Frequent Patterns

The frequent patterns extracted at three different ms levels (ms12, ms14, and ms16) are inclusive within each page in the sense that (a) subpatterns of a frequent pattern are also frequent at a given level and (b) the patterns extracted at a higher level are included in those at a lower level. For the sake of simplicity, the term “frequent” will be omitted below when obvious.
Prior to mining, special coding was applied to the within-chunk loops, as explained in Section 2.

As seen in Table 3, the patterns were generally short, consisting of one or two chunks across pages at all ms levels. The longer ones (one on P2 and five on P3), all of length three, were found only at ms12. The constituent chunks were simple in composition, being a single fixation or a single loop. The loops were limited to (A1A1), (B1B1), and (D1D1), all located in the first column of the mesh. (These will be denoted as (A1..), (B1..), and (D1..).) The (D1..) loop appeared only on P2 at ms12 by itself, unaccompanied by any other chunk. (B1..) appeared alone on P1 at ms12 and on P2 at all ms levels. Also, it was paired with B3 on P2 both as a prefix (ms12) and as a postfix (ms12 and ms14). (A1..) appeared by itself on P2 (ms12), P3 (ms12, ms14, and ms16), and P4 (ms12 and ms14) and also as a prefix to other chunk(s) on P3 (ms12, ms14, and ms16) and P4 (ms12). None of the corresponding segments were in the first column. The postfixes on P3 were A2 (ms12, ms14, and ms16); A3 and B3 (ms12 and ms14); and B2, B3, B4, B5, C3, and D2 (ms12), in addition to A2A2 and B3B3 (ms12). Those on P4 were B4, C3, C4, D3, and D4 (ms12).

Table 3: Number of patterns (N) by length (len) by ms.

| Page | len | ms12 N | ms12 Loops | ms14 N | ms14 Loops | ms16 N | ms16 Loops |
|------|-----|--------|------------|--------|------------|--------|------------|
| P1 | 1 | 18 | B1/1 | 11 | | 5 | |
| P1 | 2 | 19 | | 4 | | 1 | |
| P2 | 1 | 15 | A1/1, B1/1, D1/1 | 9 | B1/1 | 5 | B1/1 |
| P2 | 2 | 14 | B1/2 | 4 | B1/1 | 1 | |
| P2 | 3 | 1 | | | | | |
| P3 | 1 | 14 | A1/1 | 12 | A1/1 | 7 | A1/1 |
| P3 | 2 | 34 | A1/8 | 11 | A1/3 | 2 | A1/1 |
| P3 | 3 | 5 | A1/2 | | | | |
| P4 | 1 | 18 | A1/1 | 15 | A1/1 | 6 | |
| P4 | 2 | 20 | A1/5 | 3 | | 1 | |

Note. The length of a pattern (len) is the number of constituent chunks. Also listed are the identified within-chunk loops, with the number of patterns in which they appeared.

In the six patterns of length three found on P2 and P3 at ms12, the constituent codes were partially or totally homogenous. Five of them contained two repeated codes, either A2 or B3, including those prefixed by (A1..) as reported above. The remaining one, found solely on P3, contained A2.
In the following examination of the double-chunk patterns, loops will be treated as single codes to reduce complexity.The double-chunk patterns are listed in Table4 by the direction of the sequences—upward, homogenous, horizontal, and downward. Superscripts L and R denote leftward and rightward sequences. Underscored patterns were extracted at ms14 and above. Those found only at ms16 are further emphasized in italicized bold face. The total number of patterns varied from 13 (on P2 and P4) to 34 (P3).Table 4 Double-chunk patterns by direction. Page Direction Pattern P1 ↑ B2A2  B2A3R  B2A4R == B2B2  B3B3 ↔ A1A4 R  B2B3 R A2A4R  A3A4R  B2B1L  B2B5R  B3B2L  B3B4R ↓ A1D3R  B2C3R  B2C4R  B2D2  B2D3R  B3D3 P2 ↑ B3A1L  B3A3 == B3B3  A1A1  B1B1  B2B2 ↔ B3B1 L  B1B3R  B3B2L ↓ B3C2 L  B3C3  B3D2L  B3D3 P3 ↑ B2A3 R  B2A2  C3A2L == A2A2 B3B3  B2B2  C3C3 ↔ A1A2 R  A1A3 R  A2A3 R  B3B4 R A3A2L  B2B3R  B3B2L  B3B5R  C2C1L ↓ A1B3 R  A2B3 R  A2B4 R  A2C3 R A1B2R  A1B4R  A1B5R  A1C3R  A1D2R A2B2  A2B5R  A2D2  A2D3R  A2D5R B2C3R  B3D2L  B3D3  C3D2L P4 ↑ (none) == C3C3 ↔ C3C4 R  A2A3R  B3B4R  B3B5R  C3C2L ↓ A2B4 A2C4R  A3B4R  B2C3R  B3C3  B3C4R  C3D3 Note. The sequence directions are upward (↑), homogenous (==), horizontal (↔), and downward (↓). Underscored patterns were extracted at ms14. Those extracted at 16 are also emphasized in italicized bold face. Leftward and rightward sequences are marked by superscripts L and R, respectively.At ms16, the patterns were homogenous (B2B2 on P1; B3B3 on P2), horizontal (A1A2 on P3; C3C4 on P4), or downward (A2B3 on P3) sequences with the exception of down-rightward pattern A2B3 on P3. 
There was no leftward heterogeneous pattern.The new patterns found at ms14 included an upward sequence (B2A3 on P3) and five downward sequences (B3C2 on P2; A1B3, A2B4, and A2C3 on P3; and A2B4 on P4) in addition to four homogenous sequences (B3B3 on P1; A2A2, B3B3 on P3; C3C3 on P4) and six horizontal sequences (A1A4 and B2B3 on P1; B3B1 on P2; and A1A3, A2A3, and B3B4 on P3). Among the 12 heterogeneous patterns, only two (B3B1 and B3C2 on P2) were leftward.The patterns extracted at ms14 and above had no segments in rows D and E and no segments in the fifth column. None of the seven upward and downward sequences were strictly vertical, involving adjacent or nonadjacent columns in the ratio of 4 to 3. These vertical patterns mostly involved adjacent rows (6 out of 7).Some of the constituent segments of the sequences at ms14 and above appeared solely as prefixes (A1 on P1 and P3; A2 on P4) or as postfixes (B3 on P1; B1 and C2 on P2; A3, B3, B4, and C3 on P3; B4 and C4 on P4).The new double-chunk patterns found at ms12 had (a) segments in row D and in column 5, (b) notable positions of the new segments, (c) increased heterogeneous patterns, (d) increased sequences between nonadjacent rows, (e) strictly vertical sequences, and (f) bilateral sequence pairs. The segments in row D appeared only as postfixes in the downward sequences (D2 and D3 on P1 and P2; D2, D3, and D5 on P3; and D3 on P4). Similarly, the new segments found in row C were postfixes (C3 and C4 on P1; C3 on P2; C1 on P3; and C2 on P4) with a single exception (C2 on P3). The new segments in row B were mostly postfixes: B1, B4, and B5 on P1, B5 on P3, and B4 and B5 on P4. B2 and B3 on P4 were prefixes. An interesting case was B2 on P2 which was special, being a prefix to itself (B2B2). 
Dual roles were more notable than unary ones among the new segments in row A (A2 and A3 on P1, A1 on P2, and A3 on P4).A total of seven new upward sequences were found, three on P1 and two on both P2 and P3, but still none on P4. These were prefixed by B2 (on P1 and P3), B3 (P2), or C3 (P3) and postfixed by the segments in row A—A1, A2, A3, or A4. Only C3A2 involved nonadjacent rows. A strictly vertical sequence was present on each of P1, P2, and P3—B2A2, B3A3, and B2A2. The rest were rightward (B2A3 and B2A4 on P1) or leftward on P2 and P3 (B3A1 on P2; C3A2 on P3).A total of five new homogenous sequences were found on P2 and P3, one in row A (A1A1 on P2), three in row B (B1B1 on P2 and B2B2 on P2 and P3), and one in row C (C3C3 on P3). Like those at ms14 and above, none of the constituents were in columns 4 or 5.A total of 17 new horizontal sequences were found on P1 (two in row A and four in row B), P2 (two in B), P3 (one in A, three in B, and one in C), and P4 (one in A, two in B, and one in C). A2 and A3 appeared as a prefix or as a postfix, while A4 appeared only as a postfix. The same held for B1, B2, and B3, while B4 and B5 appeared only as postfixes. C2 assumed dual positions in C2C1 on P3 and C3C2 on P4, both of which were leftward. The ratio of leftward to rightward sequences was 2 : 4, 1 : 1, 3 : 2, and 1 : 3 in the order of P1, P2, P3, and P4.A total of 29 new downward sequences were found, six on P1, three on P2, 14 on P3, and six on P4. The prefixes concentrated in rows A and B with two exceptions (C3D2 on P3 and C3D3 on P4). In contrast, the postfixes concentrated in rows C and D with exceptions of five patterns on P3 and one on P4. Half or more of the downward patterns on P1, P2, and P3 involved nonadjacent rows (A-D/1 and B-D/3 on P1; B-D/2 on P2; and A-C/1, A-D/5, and B-D/2 on P3, wheren denotes the number of cases), whereas only A2C4 out of six patterns did so on P4. 
The strictly vertical patterns were limited to columns 2 and 3 (B-D/2 on P1; B-C/1 and B-D/1 on P2; A-B/1, A-D/1, and B-D/1 on P3; and B-C/1 and C-D/ on P4). The rest were rightward on P1 and P4, leftward on P2, or mixed on P3.Among all of the patterns in Table4, the heterogeneous sequences were mostly unilateral in that the symmetric pairs were limited in number (B2B3-B3B2 on P1; B1B3-B3B1 on P2; A2A3-A3A2, A2B2-B2A2, A2C3-C3A2, and B2B3-B3B2 on P3; and none on P4). Four of these were horizontal sequences. The constituents were limited to a subset consisting of the first three rows and columns, that is, { A 2 , B 1 , B 2 , B 3 , and C 3 }.The individual constituents of the multichunk patterns were frequent by themselves as primitive patterns at a given ms level, but not vice versa. Table5 lists the isolate primitive patterns not participating in any multichunk pattern at a given ms level. While the number of total primitive patterns monotonically decreased from ms12 to ms16, the ratio of the isolate primitive patterns to the total primitive patterns monotonically increased on all pages almost perfectly. The ratios at ms 12 , 14 , 16 were 4 / 17 , 7 / 11 , 4 / 5, 4 / 13 , 5 / 8 , 2 / 4, 0 / 13 , 4 / 11 , 3 / 6, and 5 / 17 , 9 / 14 , 4 / 6, in the order of P1, P2, P3, and P4. The sole exception was the second and the third ratios on P2. There were no isolates on P3 at ms12.Table 5 Isolate primitives by ms level. ms12 ms14 ms16 P1 A5 C2 C5 E3 A2  A3 B1 B5 C3 C4 D3 A1A2  A4 B3 P2 A2 B4 C1 D1 A1  A3  B2 C3 D3 A1  A3 P3 (none) B5 C1 C2 D2 A3 B2 C3 P4 A5B5  C1  C5 D2 B1 B2B3  B5  C1  C2 C5 D3 D4 A2B3  B4  C5 Note. Primitives in bold face were persistent at two or three ms levels.Generally, an isolate primitive at a given ms level would become a member of sequence(s) at a lower level and would not be present at a higher level. Exceptionally, C5, located in the rightmost column, persisted on P4 as an isolate at all ms levels. 
## 3.1. Examination of the Chunks

The total number of chunks did not greatly differ among pages, ranging from 539 (P2) to 592 (P1). The pages agreed well on the lengths and proportions of primary, secondary, and tertiary chunks, which contained one, two, and three fixations, respectively. Primary chunks accounted for 53.3 (P4) to 60.4% (P1) of the total chunks, and secondary chunks for 21.9 (P1) to 25.1% (P4). Taken together, the vast majority of the chunks (≥78.4%) were very short. The proportions of the tertiary chunks were much smaller, ranging from 6.9 (P3) to 12.2% (P4). The longer chunks accounted for 7.9 (P1) to 11.6% (P3).

The primary shifts of transitions within double-fixation chunks were loops (distance = 0) across pages. These accounted for 48.5 (P1) to 62.2% (P2). The pages also agreed on the secondary (√1) and tertiary (√2) distances, which involved adjacent segments connected laterally (or vertically) and diagonally, respectively. The proportion of the former ranged from 30.0 (P2) to 40.8% (P1). In contrast, that of the latter was much smaller (≤8.8%). Put together, the overwhelming majority of the double-fixation chunks (≥88.4%) were homogenous (i.e., loops) or minimally heterogeneous (√1).

Loops and one-block shifts were also dominant among the chunks of length three or more.
Loops accounted for 49.2 (P4) to 60.8% (P3) of the shifts, and one-block shifts accounted for 34.4 (P3) to 42.7% (P1) of them. Taken together, the overwhelming majority (≥91.0%) of the shifts within longer chunks were extremely short in distance.

Similarly, extremely short shifts (≤√1) were modal among between-chunk transitions, but in reverse order and less prominently than within-chunk transitions. Primary one-block shifts accounted for 33.9 (P2) to 37.6% (P3) of the total between-chunk shifts, and loops accounted for 22.3 (P1) to 30.3% (P2). Their combined proportions ranged from 57.5 (P1) to 64.2% (P2).

The lower prominence of the first two modal shifts was compensated by the relatively large proportions of the longer ones. Each of the two-block shifts (√2 and √4) exceeded the 10% level on all pages, with the exception of 7.7% (√2) on P2. Compared to the paucity of three-block shifts (√5) within chunks (≤1.7%), the corresponding distance between chunks, which ranged from 7.1 (P2) to 9.4% (P1), was noteworthy. Similarly noticeable was the size of the long-distance shifts (≥√8), which ranged from 7.3 (P3) to 10.2% (P1), while such shifts were nonexistent or negligible (0.8% on P1) within chunks.

## 3.2. Examination of the Frequent Patterns

The frequent patterns extracted at three different ms levels (ms12, ms14, and ms16) are inclusive within each page in the sense that (a) subpatterns of a frequent pattern are also frequent at a given level and (b) the patterns extracted at a higher level are included in those at a lower level. For the sake of simplicity, the term "frequent" will be omitted below when obvious. Prior to mining, special coding was applied to the within-chunk loops, as explained in Section 2.

As seen in Table 3, the patterns were generally short, consisting of one or two chunks across pages at all ms levels. The longer ones (one on P2 and five on P3), all of length three, were found only at ms12.
The constituent chunks were simple in composition, being a single fixation or a single loop. The loops were limited to (A1A1), (B1B1), and (D1D1), all located in the first column of the mesh. (These will be denoted as (A1..), (B1..), and (D1..).) The (D1..) loop appeared only on P2 at ms12 by itself, unaccompanied by any other chunk. (B1..) appeared alone on P1 at ms12 and on P2 at all ms levels. Also, it was paired with B3 on P2 both as a prefix (ms12) and as a postfix (ms12 and ms14). (A1..) appeared by itself on P2 (ms12), P3 (ms12, ms14, and ms16), and P4 (ms12 and ms14) and also as a prefix to other chunk(s) on P3 (ms12, ms14, and ms16) and P4 (ms12). None of the corresponding segments were in the first column. The postfixes on P3 were A2 (ms12, ms14, and ms16); A3 and B3 (ms12 and ms14); and B2, B3, B4, B5, C3, and D2 (ms12), in addition to A2A2 and B3B3 (ms12). Those on P4 were B4, C3, C4, D3, and D4 (ms12).

Table 3. Number of patterns (N) by length (len) by ms.

| Page | len | ms12 N | ms12 Loops | ms14 N | ms14 Loops | ms16 N | ms16 Loops |
|------|-----|--------|------------|--------|------------|--------|------------|
| P1 | 1 | 18 | B1/1 | 11 | | 5 | |
| P1 | 2 | 19 | | 4 | | 1 | |
| P2 | 1 | 15 | A1/1, B1/1, D1/1 | 9 | B1/1 | 5 | B1/1 |
| P2 | 2 | 14 | B1/2 | 4 | B1/1 | 1 | |
| P2 | 3 | 1 | | | | | |
| P3 | 1 | 14 | A1/1 | 12 | A1/1 | 7 | A1/1 |
| P3 | 2 | 34 | A1/8 | 11 | A1/3 | 2 | A1/1 |
| P3 | 3 | 5 | A1/2 | | | | |
| P4 | 1 | 18 | A1/1 | 15 | A1/1 | 6 | |
| P4 | 2 | 20 | A1/5 | 3 | | 1 | |

Note. The length of a pattern (len) is the number of constituent chunks. Also listed are the identified within-chunk loops, with the number of patterns in which each appeared.

In the six patterns of length three found on P2 and P3 at ms12, the constituent codes were partially or totally homogenous. Five of them contained two repeated codes, either A2 or B3, including those prefixed by (A1..) as reported above. The remaining one, found solely on P3, contained A2. In the following examination of the double-chunk patterns, loops will be treated as single codes to reduce complexity.

The double-chunk patterns are listed in Table 4 by the direction of the sequences: upward, homogenous, horizontal, and downward. Superscripts L and R denote leftward and rightward sequences.
Patterns in *italics* were extracted at ms14 and above; those in ***bold italics*** were extracted at ms16 as well. The total number of patterns varied from 13 (on P2 and P4) to 34 (P3).

Table 4. Double-chunk patterns by direction.

| Page | Direction | Patterns |
|------|-----------|----------|
| P1 | ↑ | B2A2, B2A3ᴿ, B2A4ᴿ |
| P1 | == | ***B2B2***, *B3B3* |
| P1 | ↔ | *A1A4ᴿ*, *B2B3ᴿ*, A2A4ᴿ, A3A4ᴿ, B2B1ᴸ, B2B5ᴿ, B3B2ᴸ, B3B4ᴿ |
| P1 | ↓ | A1D3ᴿ, B2C3ᴿ, B2C4ᴿ, B2D2, B2D3ᴿ, B3D3 |
| P2 | ↑ | B3A1ᴸ, B3A3 |
| P2 | == | ***B3B3***, A1A1, B1B1, B2B2 |
| P2 | ↔ | *B3B1ᴸ*, B1B3ᴿ, B3B2ᴸ |
| P2 | ↓ | *B3C2ᴸ*, B3C3, B3D2ᴸ, B3D3 |
| P3 | ↑ | *B2A3ᴿ*, B2A2, C3A2ᴸ |
| P3 | == | *A2A2*, *B3B3*, B2B2, C3C3 |
| P3 | ↔ | ***A1A2ᴿ***, *A1A3ᴿ*, *A2A3ᴿ*, *B3B4ᴿ*, A3A2ᴸ, B2B3ᴿ, B3B2ᴸ, B3B5ᴿ, C2C1ᴸ |
| P3 | ↓ | ***A2B3ᴿ***, *A1B3ᴿ*, *A2B4ᴿ*, *A2C3ᴿ*, A1B2ᴿ, A1B4ᴿ, A1B5ᴿ, A1C3ᴿ, A1D2ᴿ, A2B2, A2B5ᴿ, A2D2, A2D3ᴿ, A2D5ᴿ, B2C3ᴿ, B3D2ᴸ, B3D3, C3D2ᴸ |
| P4 | ↑ | (none) |
| P4 | == | *C3C3* |
| P4 | ↔ | ***C3C4ᴿ***, A2A3ᴿ, B3B4ᴿ, B3B5ᴿ, C3C2ᴸ |
| P4 | ↓ | *A2B4*, A2C4ᴿ, A3B4ᴿ, B2C3ᴿ, B3C3, B3C4ᴿ, C3D3 |

Note. The sequence directions are upward (↑), homogenous (==), horizontal (↔), and downward (↓). Patterns in italics were extracted at ms14; those in bold italics were also extracted at ms16. Leftward and rightward sequences are marked by superscripts L and R, respectively.

At ms16, the patterns were homogenous (B2B2 on P1; B3B3 on P2) or horizontal (A1A2 on P3; C3C4 on P4) sequences, with the single exception of the down-rightward pattern A2B3 on P3. There was no leftward heterogeneous pattern.

The new patterns found at ms14 included an upward sequence (B2A3 on P3) and five downward sequences (B3C2 on P2; A1B3, A2B4, and A2C3 on P3; and A2B4 on P4), in addition to four homogenous sequences (B3B3 on P1; A2A2 and B3B3 on P3; C3C3 on P4) and six horizontal sequences (A1A4 and B2B3 on P1; B3B1 on P2; and A1A3, A2A3, and B3B4 on P3). Among the 12 heterogeneous patterns, only two (B3B1 and B3C2 on P2) were leftward.

The patterns extracted at ms14 and above had no segments in rows D and E and no segments in the fifth column.
None of the seven upward and downward sequences were strictly vertical; they involved adjacent or nonadjacent columns in the ratio of 4 to 3. These vertical patterns mostly involved adjacent rows (6 out of 7).

Some of the constituent segments of the sequences at ms14 and above appeared solely as prefixes (A1 on P1 and P3; A2 on P4) or as postfixes (B3 on P1; B1 and C2 on P2; A3, B3, B4, and C3 on P3; B4 and C4 on P4).

The new double-chunk patterns found at ms12 exhibited (a) segments in row D and in column 5, (b) notable positions of the new segments, (c) more heterogeneous patterns, (d) more sequences between nonadjacent rows, (e) strictly vertical sequences, and (f) bilateral sequence pairs. The segments in row D appeared only as postfixes in the downward sequences (D2 and D3 on P1 and P2; D2, D3, and D5 on P3; and D3 on P4). Similarly, the new segments found in row C were postfixes (C3 and C4 on P1; C3 on P2; C1 on P3; and C2 on P4), with a single exception (C2 on P3). The new segments in row B were mostly postfixes: B1, B4, and B5 on P1; B5 on P3; and B4 and B5 on P4. B2 and B3 on P4 were prefixes. An interesting case was B2 on P2, which served as a prefix to itself (B2B2). Dual roles were more notable than unary ones among the new segments in row A (A2 and A3 on P1, A1 on P2, and A3 on P4).

A total of seven new upward sequences were found, three on P1 and two on both P2 and P3, but still none on P4. These were prefixed by B2 (on P1 and P3), B3 (P2), or C3 (P3) and postfixed by segments in row A: A1, A2, A3, or A4. Only C3A2 involved nonadjacent rows. A strictly vertical sequence was present on each of P1, P2, and P3: B2A2, B3A3, and B2A2, respectively. The rest were rightward (B2A3 and B2A4 on P1) or leftward (B3A1 on P2; C3A2 on P3).

A total of five new homogenous sequences were found on P2 and P3: one in row A (A1A1 on P2), three in row B (B1B1 on P2 and B2B2 on P2 and P3), and one in row C (C3C3 on P3).
Like those at ms14 and above, none of the constituents were in columns 4 or 5.

A total of 17 new horizontal sequences were found on P1 (two in row A and four in row B), P2 (two in B), P3 (one in A, three in B, and one in C), and P4 (one in A, two in B, and one in C). A2 and A3 appeared as a prefix or as a postfix, while A4 appeared only as a postfix. The same held for B1, B2, and B3, while B4 and B5 appeared only as postfixes. C2 assumed dual positions in C2C1 on P3 and C3C2 on P4, both of which were leftward. The ratio of leftward to rightward sequences was 2:4, 1:1, 3:2, and 1:3 in the order of P1, P2, P3, and P4.

A total of 29 new downward sequences were found: six on P1, three on P2, 14 on P3, and six on P4. The prefixes concentrated in rows A and B, with two exceptions (C3D2 on P3 and C3D3 on P4). In contrast, the postfixes concentrated in rows C and D, with exceptions of five patterns on P3 and one on P4. Half or more of the downward patterns on P1, P2, and P3 involved nonadjacent rows (A-D/1 and B-D/3 on P1; B-D/2 on P2; and A-C/1, A-D/5, and B-D/2 on P3, where n in the notation row-row/n denotes the number of cases), whereas only A2C4 out of six patterns did so on P4. The strictly vertical patterns were limited to columns 2 and 3 (B-D/2 on P1; B-C/1 and B-D/1 on P2; A-B/1, A-D/1, and B-D/1 on P3; and B-C/1 and C-D/ on P4). The rest were rightward on P1 and P4, leftward on P2, or mixed on P3.

Among all of the patterns in Table 4, the heterogeneous sequences were mostly unilateral in that the symmetric pairs were limited in number (B2B3-B3B2 on P1; B1B3-B3B1 on P2; A2A3-A3A2, A2B2-B2A2, A2C3-C3A2, and B2B3-B3B2 on P3; and none on P4). Four of these were horizontal sequences. The constituents were limited to a subset consisting of the first three rows and columns, that is, {A2, B1, B2, B3, C3}.

The individual constituents of the multichunk patterns were frequent by themselves as primitive patterns at a given ms level, but not vice versa.
Table 5 lists the isolate primitive patterns, that is, those not participating in any multichunk pattern at a given ms level. While the total number of primitive patterns monotonically decreased from ms12 to ms16, the ratio of isolate primitives to total primitives increased almost perfectly monotonically on all pages. The ratios at ms12, ms14, and ms16 were 4/17, 7/11, 4/5 (P1); 4/13, 5/8, 2/4 (P2); 0/13, 4/11, 3/6 (P3); and 5/17, 9/14, 4/6 (P4). The sole exception was the second and third ratios on P2. There were no isolates on P3 at ms12.

Table 5. Isolate primitives by ms level.

| Page | ms12 | ms14 | ms16 |
|------|------|------|------|
| P1 | A5, C2, C5, E3 | **A2**, A3, B1, B5, C3, C4, D3 | A1, **A2**, A4, B3 |
| P2 | A2, B4, C1, D1 | **A1**, **A3**, B2, C3, D3 | **A1**, **A3** |
| P3 | (none) | B5, C1, C2, D2 | A3, B2, C3 |
| P4 | A5, **B5**, **C1**, **C5**, D2 | B1, B2, **B3**, **B5**, **C1**, C2, **C5**, D3, D4 | A2, **B3**, B4, **C5** |

Note. Primitives in bold face were persistent at two or three ms levels.

Generally, an isolate primitive at a given ms level would become a member of sequence(s) at a lower level and would not be present at a higher level. Exceptionally, C5, located in the rightmost column, persisted on P4 as an isolate at all ms levels. Partial persistence was observed between ms14 and ms16 on P1 (A2), P2 (A1, A3), and P4 (B3), as well as between ms12 and ms14 on P4 (B5, C1). No persistence was observed on P3. The persistent ones on P1 and P2 were limited to the first three columns of the top row, {A1, A2, A3}, whereas those on P4 spread over rows B and C in columns 1, 3, and 5, that is, {B3, B5, C1, C5}.

Finally, E3 on P1 at ms12 was the sole frequent segment in the bottom row E, where segments were generally infrequent across pages at all ms levels.

## 4. Discussion

Eye-tracking researchers have inferred a fixation from gaze points closely clustered in space and time, treating it as a meaningful unit of information processing, that is, a chunk, a familiar concept in psychology.
Chunking of lower-level chunks into a higher one is not uncommon, as seen in the relationships letter, word, phrase, sentence, paragraph, and so on. The present paper examined the patterns of second-order chunks, that is, chunks of fixations, using isolate gaze point(s) not participating in any fixation as the delimiter. The delimiter was assumed to play an auxiliary role in chunking, like a pause in speech.

Most of the identified chunks were short, consisting of one or two fixations. Also, the transitions within multifixation chunks and between chunks were mostly short in distance, either loops or one-block shifts to adjacent segments. These seem to be attributable to the minimal criterion of the delimiter we employed: at least one isolate gaze point. Hence, even an accidental dislocation of one's gaze resulted in chunking. It would be ideal if we could separate cognitively meaningful chunking from accidental chunking. Until an effective method is established, the best we can do is to be cautious in interpreting the results.

Actually, setting an appropriate criterion is a difficult task due to possible individual and situational variations. Perhaps individuated criteria will be appropriate instead of a uniform criterion. Further investigation of the distributions of gaze points participating in fixations and those that are isolated is necessary.

As reported earlier, within- and between-chunk transitions were similar in that the first two modal distances were zero (i.e., loops) and one block. However, they differed in order and in magnitude. Loops were primary among within-chunk transitions but secondary among between-chunk transitions. The opposite was true for the one-block shifts. Next, the proportions of the primary and secondary distances of the within-chunk transitions exceeded the respective proportions pertaining to the between-chunk transitions.
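The delimiter-based chunking just described, together with the block distances used in Section 3.1, can be sketched in a few lines. This is a minimal illustration under assumed encodings: the `(segment, is_isolate)` event format and the sample stream are hypothetical, not the study's data.

```python
import math

def split_into_chunks(events):
    """Split fixations into chunks, using isolate gaze points as delimiters.

    `events` is a sequence of (segment_label, is_isolate) pairs, where labels
    index a 5x5 mesh (rows A-E, columns 1-5) and is_isolate marks gaze points
    that did not participate in any fixation.
    """
    chunks, current = [], []
    for label, is_isolate in events:
        if is_isolate:
            if current:
                chunks.append(current)
            current = []
        else:
            current.append(label)
    if current:
        chunks.append(current)
    return chunks

def block_distance(a, b):
    """Euclidean distance between two mesh segments, measured in blocks."""
    r1, c1 = ord(a[0]) - ord("A"), int(a[1]) - 1
    r2, c2 = ord(b[0]) - ord("A"), int(b[1]) - 1
    return math.hypot(r1 - r2, c1 - c2)

# Hypothetical stream: two fixations on A1, an isolate point, and so on.
events = [("A1", False), ("A1", False), ("B1", True), ("A2", False),
          ("B2", False), ("C3", True), ("C3", False)]
chunks = split_into_chunks(events)
print(chunks)                      # [['A1', 'A1'], ['A2', 'B2'], ['C3']]
print(block_distance("A1", "A1"))  # 0.0 -> a loop
print(block_distance("A1", "A2"))  # 1.0 -> a one-block shift
```

A diagonal shift such as A1 to B2 then comes out as √2, matching the secondary and tertiary distances reported above.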
Similarly, there were more long-distance shifts between chunks than within them.

These results seem to suggest that the attention of our subjects was most likely shifted, after a pause, to an adjacent segment one block away or within the same segment. The medium- or long-distance shifts were also separated by pauses, though their proportions were smaller than those of the short ones. Shifts without a pause, that is, within-chunk shifts, were short, chiefly occurring in the same segment or between adjacent segments one block away.

Now we turn to a discussion of the frequent patterns (i.e., subsequences) extracted by PrefixSpan. The patterns were simple in structure, mostly consisting of single or double chunks. Furthermore, the chunks themselves contained single fixations or single loops, as expected from the chunk properties discussed above. More complex structures might have resulted if we had employed less stringent criteria for the delimiter. Even so, beneath the structural simplicity, interesting properties emerged as to the segment differentiation and the directional unevenness in attentional shifts.

First, the within-chunk loops were limited to (A1..), (B1..), and (D1..), all of which were in the leftmost column. While the presence of (D1..) was quite limited, the leading roles of (A1..) and (B1..) as prefixes in the multichunk sequences are noteworthy. These roles might be attributable to menu items placed in those segments. Second, the multichunk sequences chiefly consisted of the segments in rows A, B, and C. In particular, the leading role of A1 on P1 and P3 was noteworthy, like the loop (A1..), though its dual role as pre- and postfix was observed on P2. In contrast, A4, B4, and C4 were consistently positioned as postfixes. The same held for the segments in row D, which appeared only at the lowest ms level.
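A minimal PrefixSpan-style miner conveys what "frequent pattern at a minimum support level" means here: a subsequence of chunk codes occurring in at least `min_support` of the viewing sequences. The sketch below is a simplified version of the algorithm; the database is hypothetical and does not reproduce the study's sequences or its ms12/ms14/ms16 thresholds.

```python
from collections import Counter

def prefixspan(sequences, min_support):
    """Mine frequent subsequences (pattern, support) by prefix projection."""
    results = []

    def mine(prefix, projected):
        # Count how many projected sequences contain each candidate item.
        counts = Counter()
        for seq in projected:
            counts.update(set(seq))
        for item, support in counts.items():
            if support < min_support:
                continue
            pattern = prefix + [item]
            results.append((pattern, support))
            # Project each sequence onto the suffix after the first `item`.
            suffixes = [seq[seq.index(item) + 1:]
                        for seq in projected if item in seq]
            mine(pattern, suffixes)

    mine([], sequences)
    return results

# Hypothetical chunk-code sequences, one per page viewing.
db = [["A1", "A2", "B3"], ["A1", "B3"], ["A1", "A2"], ["B3", "A1", "A2"]]
patterns = prefixspan(db, min_support=3)
print(sorted(patterns))
# [(['A1'], 4), (['A1', 'A2'], 3), (['A2'], 3), (['B3'], 3)]
```

Raising `min_support` can only shrink the result set, which is the inclusiveness property noted in Section 3.2: patterns frequent at a higher level are a subset of those at a lower level.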
The segments in row E were totally absent from multichunk sequences.

Third, the sequences at ms14 and ms16 were more likely to be horizontal (including homogenous codes) than downward and, to a much lesser extent, than upward; the upward direction remained least likely among the additional patterns found at ms12. The order between horizontal and downward sequences varied across pages at ms12.

By chunking eye-tracking records into smaller units, we discovered interesting properties of the eye movement of web page viewers. However, further studies seem necessary to enhance the present approach, for example, by setting up nested AOIs to reflect the hierarchical structure of the web objects [16] and by adjusting the chunk delimiters to accommodate individual and task variations. Beyond these refinements, we are planning an application of mined frequent patterns to simultaneous clustering [17] of subjects and the properties of their eye movement and other relevant indices.

---

*Source: 101642-2014-11-23.xml*
2014
# Uniform in Time Description for Weak Solutions of the Hopf Equation with Nonconvex Nonlinearity

**Authors:** Antonio Olivas Martinez; Georgy A. Omel'yanov
**Journal:** International Journal of Mathematics and Mathematical Sciences (2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101647

---

## Abstract

We consider the Riemann problem for the Hopf equation with concave-convex flux functions. Applying the weak asymptotics method, we construct a uniform in time description for the Cauchy data evolution and show that the use of this method implies automatically the appearance of the Oleinik E-condition.

---

## Body

## 1. Introduction

It is well known that the uniqueness problem for weak solutions of hyperbolic quasilinear systems remains unsolved up to now in the case of arbitrary jump amplitudes. Moreover, the approach that has been used successfully for shocks with sufficiently small amplitudes [1, 2] cannot be extended to the general case. On the other hand, there is the possibility to construct the unique stable solution by passing to a parabolic regularization. However, the vanishing viscosity method cannot be used effectively for nontrivial vector problems. Indeed, in the essentially nonintegrable case we obviously do not have the exact solution. Moreover, traditional asymptotic methods do not serve for the problem of nonlinear wave interaction, since they lead to the appearance of a chain of partial differential equations, the first of which is nonlinear and, in fact, coincides with the original equation.

We are of the opinion that progress on this problem can be achieved in the framework of the weak asymptotics method; see, for example, [3–5]. In this method the approximated solutions are sought in the same form as in the Whitham method modified for nonlinear waves with localized fast variation [6, 7] (for the original Whitham method for rapidly oscillating waves, see [8]).
At the same time, the discrepancy in the weak asymptotics method is assumed to be small in the sense of the space of functionals 𝒟′_x over test functions depending only on the "space" variable x. This seemingly trivial modification allows us to reduce the problem of describing the interaction of nonlinear waves to solving systems of ordinary differential equations (instead of partial differential equations). Accordingly, the main characteristics of the solution (the trajectory of the limiting singularity motion, etc.) can be found by this method, whereas the shape of the real solution cannot.

Applications of the weak asymptotics method have made it possible, among other things, to investigate the interaction of solitons for nonintegrable versions of the KdV and sine-Gordon equations [9–11], to describe uniformly in time the confluence of shock waves for the Hopf equation with convex nonlinearities [4], and to construct uniform in time asymptotics for the Riemann problem for isothermal gas dynamics [12–14] and delta-shock solutions for the so-called pressureless gas dynamics [15, 16]. However, the applicability of the method needs to be verified for each new type of problem.

As for the uniqueness problem, we are not yet ready to consider the vector case; so we are going to simulate it and investigate the Riemann problem for the scalar conservation law with nonconvex nonlinearity:

$$\frac{\partial u}{\partial t}+\frac{\partial f(u)}{\partial x}=0,\quad t>0,\ x\in\mathbb{R}^1, \tag{1.1}$$

$$u\big|_{t=0}=\begin{cases}u^-, & x<0,\\ u^+, & x>0.\end{cases} \tag{1.2}$$

Furthermore, the structure of the uniform in time asymptotics for a regularization of the problem (1.1), (1.2) with an arbitrary f(u) can be very complicated. On the other hand, it is clear that we can define a sequence of time intervals and consider the asymptotics u_ε on each time interval as a combination of local interacting solutions. Almost without loss of generality we can suppose that the local solutions correspond to convex or concave-convex parts of the nonlinearity f(u_ε).
That is why, in view of the result [4], we restrict ourselves to the concave-convex case; that is, we will suppose that

$$u f''(u)>0\ \ (u\neq 0),\qquad f''(0)=0,\qquad f'''(0)\neq 0,\qquad \lim_{|u|\to\infty}f'(u)=\infty. \tag{1.3}$$

For definiteness we also assume that

$$u^->0>u^+. \tag{1.4}$$

Let us recall that the solution of the initial-value problem is called stable if it depends continuously on the initial data (see, e.g., [2]). Obviously, the stable solution to the problem (1.1)–(1.4) is well known (see, e.g., [17]), and it can be constructed using the method of characteristics for (1.1) with regularized initial data. In particular, the stable solution will be the shock wave with amplitude u⁻ − u⁺ if and only if the Oleinik E-condition

$$\frac{f(u)-f(u^-)}{u-u^-}\ \geq\ \frac{f(u^+)-f(u^-)}{u^+-u^-}\ \geq\ \frac{f(u^+)-f(u)}{u^+-u} \tag{1.5}$$

is satisfied for any u ∈ [u⁺, u⁻]. The same shock wave presents an example of a nonstable weak solution if the condition (1.5) is violated. Let us note that this nonadmissible shock wave looks as if it were stable if f′(u⁻) > f′(u⁺).

Technically, our result consists in obtaining uniform in time asymptotic solutions for a regularization of the problem (1.1), (1.2). However, we consider as the main result the fact that the weak asymptotics method allows one to construct the admissible limiting solution without any additional conditions. In particular, we obtain the Oleinik E-condition for the shock wave solution automatically.

The structure of the asymptotics construction is the following. First, we pass from the initial step function to a sequence of step functions such that each jump corresponds to a stable solution (in fact, to a shock wave or a centered rarefaction). Here we take into account that the weak asymptotics, like the exact weak solution, is not unique in the unstable case. At the same time, describing the collision of stable waves, we obtain the stable scenario of interaction automatically.
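As a concrete illustration, condition (1.5) can be checked numerically for a specific flux. The sketch below uses f(u) = u³, which satisfies (1.3) since u f″(u) = 6u² > 0 for u ≠ 0; the jump values are hypothetical choices for illustration, not taken from the paper.

```python
def oleinik_e_condition(f, u_minus, u_plus, n=1000):
    """Check the Oleinik E-condition (1.5) for the jump (u-, u+) on a grid.

    Both inequalities must hold for every intermediate state u, with the
    middle term being the Rankine-Hugoniot speed s of the jump.
    """
    s = (f(u_plus) - f(u_minus)) / (u_plus - u_minus)
    for k in range(1, n):
        u = u_plus + (u_minus - u_plus) * k / n  # interior point of [u+, u-]
        left = (f(u) - f(u_minus)) / (u - u_minus)
        right = (f(u_plus) - f(u)) / (u_plus - u)
        if left < s - 1e-12 or right > s + 1e-12:
            return False
    return True

f = lambda u: u ** 3  # concave-convex flux satisfying (1.3)

# A single shock from u- = 1 to u+ = -2 violates (1.5) ...
print(oleinik_e_condition(f, 1.0, -2.0))   # False
# ... while the jump from u- = 2 to u+ = -0.5 satisfies it.
print(oleinik_e_condition(f, 2.0, -0.5))   # True
```

For this cubic flux the check reduces to the sign of (u − u⁺)(u + u⁺ + u⁻), so the shock is admissible exactly when u⁻ + 2u⁺ ≥ 0, consistent with the two outputs above.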
Therefore, this passage from the Riemann problem to the problem of interaction of stable waves can be treated as a "regularization." For our model example it means the transformation of the problem (1.1), (1.2) into the following "regularization":

$$\frac{\partial u_\Delta}{\partial t}+\frac{\partial f(u_\Delta)}{\partial x}=0,\quad t>0,\ x\in\mathbb{R}^1,\qquad
u_\Delta\big|_{t=0}=\bar u+(u^--\bar u)H(x_1^0-x)+(u^+-\bar u)H(x-x_2^0), \tag{1.6}$$

where Δ = x₂⁰ − x₁⁰ > 0 is the "regularization" parameter, H(x) is the Heaviside function, and ū ∈ (u⁺, u⁻). We choose the intermediate state ū < 0 such that the left jump (at the point x = x₁⁰) corresponds for t ≪ Δ to the stable shock wave, whereas the right jump (at the point x = x₂⁰) corresponds to the centered rarefaction. Let us note that the problem (1.6) with Δ = const is of interest by itself.

Next, we pass from (1.6) to the parabolic regularization:

$$\frac{\partial u_{\Delta\varepsilon}}{\partial t}+\frac{\partial f(u_{\Delta\varepsilon})}{\partial x}=\varepsilon\frac{\partial^2 u_{\Delta\varepsilon}}{\partial x^2},\quad t>0,\ x\in\mathbb{R}^1,\qquad
u_{\Delta\varepsilon}\big|_{t=0}=\bar u+(u^--\bar u)\,\omega\Big(\frac{x_1^0-x}{\varepsilon}\Big)+(u^+-\bar u)\,\omega\Big(\frac{x-x_2^0}{\varepsilon}\Big), \tag{1.7}$$

where ω(x/ε) is a regularization of the Heaviside function with the parameter ε ≪ Δ. The contents of Sections 2 and 3 are the construction of the weak asymptotic solution to the problem (1.7).

Finally, in conclusion, we consider the limiting solution both for ε → 0 and for Δ → 0.

Completing this section, let us formalize the concept of the weak asymptotics.

Definition 1.1. Let u_{Δε} = u_{Δε}(t,x) be a function that belongs to 𝒞^∞([0,T]×ℝ¹_x) for each ε = const > 0 and to 𝒞([0,T];𝒟′(ℝ¹_x)) uniformly in ε ∈ [0, const]. One says that u_{Δε}(t,x) is a weak asymptotic mod 𝒪_{𝒟′}(ε) solution of (1.7) if the relation

$$\frac{d}{dt}\int_{-\infty}^{\infty}u_{\Delta\varepsilon}\,\psi\,dx-\int_{-\infty}^{\infty}f(u_{\Delta\varepsilon})\,\frac{\partial\psi}{\partial x}\,dx=\mathcal{O}(\varepsilon) \tag{1.8}$$

holds uniformly in t ∈ (0,T] for any test function ψ = ψ(x) ∈ 𝒟(ℝ¹_x).

Here and below the estimate 𝒪(ε^k) is understood in the 𝒞([0,T]) sense: |𝒪(ε^k)| ≤ C_T ε^k for t ∈ [0,T].

Definition 1.2. A function g(t,x,ε) is said to be of the value 𝒪_{𝒟′}(ε^k) if the relation

$$(g,\psi)=\int_{-\infty}^{\infty}g(t,x,\varepsilon)\,\psi(x)\,dx=\mathcal{O}(\varepsilon^k) \tag{1.9}$$

holds for any test function ψ = ψ(x) ∈ 𝒟(ℝ¹_x).

It is very important to note that the viscosity term in (1.7) has the value 𝒪_{𝒟′}(ε) and disappears in (1.8). The same is true for any parabolic regularization of the form ε(b(u))_{xx}.
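The behavior described by the parabolic regularization (1.7) can be observed in a crude numerical experiment with an explicit finite-difference scheme. Everything below (the flux, grid, ε, and Riemann data) is an illustrative assumption, not the paper's construction; the data u⁻ = 1, u⁺ = −0.2 satisfy the E-condition for f(u) = u³, so the viscous profile approximates a single admissible shock.

```python
def solve_viscous(f, u0, eps, dx, dt, steps):
    """FTCS scheme for u_t + f(u)_x = eps * u_xx with frozen boundary values.

    Stability needs eps*dt/dx**2 <= 1/2 and a cell Peclet number
    max|f'(u)| * dx / (2*eps) <= 1; the parameters below respect both.
    """
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            conv = (f(u[i + 1]) - f(u[i - 1])) / (2.0 * dx)
            diff = eps * (u[i + 1] - 2.0 * u[i] + u[i - 1]) / (dx * dx)
            new[i] = u[i] - dt * conv + dt * diff
        u = new
    return u

f = lambda u: u ** 3
xs = [-1.0 + 0.01 * i for i in range(201)]
u0 = [1.0 if x < 0 else -0.2 for x in xs]
u = solve_viscous(f, u0, eps=0.02, dx=0.01, dt=0.001, steps=100)

# Locate where the smoothed front crosses the mean value (u- + u+)/2;
# the shock speed is s = (f(u-) - f(u+)) / (u- - u+) = 0.84, so at
# t = 0.1 the front should sit near x = 0.084.
mid = 0.5 * (1.0 - 0.2)
x_front = next(x for x, v in zip(xs, u) if v < mid)
print(round(x_front, 2))
```

The diffusive term smears the jump over a width of order √(εt), but, as Definition 1.1 emphasizes, it contributes only 𝒪_{𝒟′}(ε) to the weak relation (1.8).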
Thus, we see that the weak asymptotic mod𝒪𝒟′(ε) solution does not depend on the dissipative terms. In what follows we will omit the subscript Δ for u.

## 2. Construction of the Asymptotic Solution for the First Interaction

### 2.1. Asymptotic Ansatz

To present the asymptotic ansatz for the problem (1.7) let us consider the possible scenario of the evolution of the initial data (1.6). Our choice of u̅ in (1.6) implies that (2.1)f(u)<f(u̅)+((f(u-)-f(u̅))/(u--u̅))(u-u̅) ∀u∈(u̅,u-), (2.2)f(u)>f(u̅)+((f(u̅)-f(u+))/(u̅-u+))(u-u̅) ∀u∈(u+,u̅). Thus, the problem (1.6) solution should be the superposition of a noninteracting shock wave and centered rarefaction during a sufficiently small time interval, namely, (2.3)u=u̅+(u--u̅)H(φ10(t)-x)+{r((x-x20)/t)-u̅}H(x-φ20(t))+{u+-r((x-x20)/t)}H(x-φ30(t)), where t≪Δ, φ10(t) is the shock wave phase: (2.4)φ10(t)=x10+s10t, s10≝(f(u-)-f(u̅))/(u--u̅), φk0=φk0(t) for k=2,3 are the characteristics: (2.5)φ20=x20+f′(u̅)t, φ30=x20+f′(u+)t, and r=r((x-x20)/t) is the centered rarefaction with the support between x=φ20 and x=φ30: (2.6)r∈𝒞∞ is such that f′(r(z))=z. Assumption (2.1) implies the intersection of the shock wave trajectory φ10 with the characteristic φ20 at some time instant t1*=𝒪(Δ). Accordingly, the interaction between the shock and the singularity of the type (x-φ20)+λ, 0<λ<1 (i.e., with the left border of the rarefaction) has to occur, which will result in the appearance of a shock wave with a variable amplitude. Furthermore, this shock wave can interact with the right border of the rarefaction wave. So, generally speaking, the asymptotic ansatz needs to contain two fast variables. However, the distance between the characteristics x=φ20(t) and x=φ30(t) at the first critical time t1* is greater than a constant for Δ=const. Thus, the shock wave trajectory can intersect the characteristic x=φ30(t) only at a second critical time instant t2* such that t2*-t1*≥const>0.
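For a concrete feel of (2.4), (2.5) and of the first critical time t1*, the speeds can be evaluated for a model configuration; the flux f(u)=u^3 and the states below are illustrative assumptions, not the paper's data. The shock front φ10=x10+s10t meets the left characteristic φ20=x20+f′(u̅)t at t1*=(x20-x10)/(s10-f′(u̅)).

```python
# Shock speed (2.4), characteristic speeds (2.5), and the first
# interaction time t1* for the illustrative (assumed) data
# f(u)=u^3, u_-=1, ubar=-0.4, u_+=-0.9, x10=0, x20=1.

def f(u):  return u ** 3
def fp(u): return 3 * u ** 2

u_minus, ubar, u_plus = 1.0, -0.4, -0.9
x10, x20 = 0.0, 1.0

s10 = (f(u_minus) - f(ubar)) / (u_minus - ubar)  # shock speed, = 0.76
c2, c3 = fp(ubar), fp(u_plus)                    # about 0.48 and 2.43

assert s10 > c2   # the shock overtakes the left edge of the fan
assert c3 > c2    # the rarefaction fan actually opens

t1_star = (x20 - x10) / (s10 - c2)               # about 3.571
print(s10, c2, c3, t1_star)
```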
Therefore, we can investigate the interaction process by stages. Let us consider the first evolution stage for the solution of the problem (1.7). We present the asymptotic ansatz as a natural regularization of (2.3): (2.7)uε=u̅+(u--u̅)ω1+(R-u̅)ω2+(u+-R)ω3, where R=R(x,t,ε)∈𝒞∞(ℝ1×ℝ+1×[0,1]) is a function such that (2.8)R(x,t,ε)={u̅ if x<φ20-cε, r((x-x20)/t) if φ20<x<φ30, u+ if x>φ30+cε with a constant c>0, (2.9)ω1=ω((-x+φ1)/ε), ω2=ω((x-φ2)/ε), ω3=ω((x-φ30)/ε), and ω(z/ε) is the Heaviside function regularization. Furthermore, the phases φk=φk(τ,t), k=1,2, are assumed to be smooth functions such that (2.10)φk(τ,t)→φk0(t) as τ→+∞, φk(τ,t)→φk1(t) as τ→-∞ exponentially fast, where τ denotes the “fast time”: (2.11)τ=ψ0(t)/ε, ψ0(t)=φ20(t)-φ10(t). To simplify the formulas we also suppose that (2.12)φ11(t)=φ21(t). We assume that ω tends to its limiting values (2.13)0=limη→-∞ω(η), 1=limη→∞ω(η) at an exponential rate. Moreover, since the limiting (as ε→0) solution does not depend on the choice of ω, let (2.14)ωη′>0, ω(η)+ω(-η)=1. The first assumption in (2.10) implies that the ansatz (2.7) describes the two noninteracting waves (2.3) for t≤t1*-cεα, α∈(0,1). The second assumption in (2.10) together with (2.12) implies that the ansatz (2.7) describes the union of the shock and the rarefaction waves for t≥t1*+cεα.

### 2.2. Preliminary Calculations

To determine the asymptotics (2.7) we should calculate weak expansions of uε and f(uε). Almost trivial calculations show that (2.15)uε=u--(u--u̅)H1+(R-u̅)H2+(u+-R)H3+𝒪𝒟′(ε), where (2.16)Hk=H(x-φk) for k=1,2, H3=H(x-φ30). Next, we have to calculate the weak expansion for the nonlinear term.

Lemma 2.1.
Under the assumptions mentioned above the following relation holds: (2.17)f(uε)=f(u-)-(u--u̅)B1H1+{(R2-u̅)B2-f(R2)+f(R)}H2+{f(u+)-f(R)}H3+𝒪𝒟′(ε), where Bi are the following convolutions: (2.18)B1=∫-∞∞ω′(η)f′(u̅+(u--u̅)ω(η)+(R1-u̅)ω(-η-σ))dη, B2=∫-∞∞ω′(η)f′(u̅+(u--u̅)ω(-η-σ)+(R2-u̅)ω(η))dη with the properties (2.19)limσ→+∞B1=(f(u-)-f(u̅))/(u--u̅), limσ→-∞B1=(f(R1+u--u̅)-f(u̅))/(u--u̅), limσ→+∞B2=f′(u̅), limσ→-∞B2=(f(u-+R2-u̅)-f(u-))/(R2-u̅), σ=σ(τ,t,ε) characterizes the distance between the trajectories φ1 and φ2, namely, (2.20)σ=(φ2-φ1)/ε, and Rk=R(φk,t,ε) for k=1,2, R3=R(φ30,t,ε).

Sketch of the Proof. For each ψ(x)∈𝒟(ℝ1) we have (2.21)(f(uε),ψ)=-∫-∞∞f(uε)(dϕ(x)/dx)dx=f(u-)∫-∞∞ψ(x)dx+∫-∞∞(∂uε/∂x)f′(uε)ϕ(x)dx, where ϕ(x)=∫x∞ψ(x′)dx′. Next, the derivative ∂uε/∂x contains terms of value 𝒪(1/ε), say ω'((φ1-x)/ε)/ε, and the term (ω2-ω3)Rx′. To calculate the first term we change the variable, say η=(φ1-x)/ε, and apply the Taylor expansion. Therefore, (2.22)-∫-∞∞(1/ε)ω′((φ1-x)/ε)f′(uε)ϕ(x)dx=∫-∞∞ω′(η)f′(uε)ϕ(x)|x=φ1-εηdη=B1ϕ(φ1)+𝒪(ε). Finally, we note that (2.23)ω2-ω3=H(x-φ2)-H(x-φ3)+𝒪𝒟′(ε), uε|x∈[φ2,φ30]=u̅+(R-u̅)ω2=R+𝒪𝒟′(ε). Thus, (2.24)∫-∞∞Rx′(ω2-ω3)f′(uε)ϕ(x)dx=∫φ2φ30Rx′f′(R)ϕ(x)dx+𝒪(ε)=ϕ(x)f(R)|x=φ2x=φ30+∫φ2φ30f(R)ψ(x)dx+𝒪(ε). This implies the formula (2.17). To calculate the limiting values (2.19) of the convolutions Bi it is enough to use the stabilization properties (2.13) of the function ω(η).

Remark 2.2. The convolutions Bi are functions of σ, τ, and t. At the same time we can treat Bi as functions of σ, τ, and ε. Indeed, let us denote by x1* the intersection point of the trajectories x=φ10(t) and x=φ20(t), that is, x1*=φ10(t1*)=φ20(t1*). Then, by virtue of (2.4) and (2.5), (2.25)φ10(t)=x1*+s10(t-t1*), φ20=x1*+f′(u̅)(t-t1*).
Consequently, (2.26)τ=(ψ0′/ε)(t-t1*), ψ0′≝f′(u̅)-s10, (2.27)Bi(σ,τ,t)|t=t1*+ετ/ψ0′≝B̃i(σ,τ,ε). Substituting the expressions (2.15) and (2.17) into the left-hand side of (1.8), we derive our main relation for obtaining the parameters of the asymptotic solution (2.7): (2.28)(u--u̅){dφ1/dt-B1}δ(x-φ1)-(R2-u̅){dφ2/dt-B2}δ(x-φ2)+{∂R/∂t+∂f(R)/∂x}(H(x-φ2)-H(x-φ30))=𝒪𝒟′(ε).

### 2.3. Analysis of the Singularity Dynamics

Let us consider the system that is obtained by setting equal to zero the coefficients of the δ functions in the relation (2.28), namely, (2.29)dφk/dt=Bk, k=1,2. Before the interaction (τ→+∞) the first assumption in (2.10) for k=1,2 implies σ→τ→+∞. Therefore, the limiting relations (2.19) verify the concordance of (2.29) with our definitions (2.4) and (2.5) of φ10 and φ20. To find the limiting behavior of φk after the interaction (τ→-∞) let us reduce the system (2.29) to a scalar equation. In view of (2.20) and (2.26), (2.30)d(φ2-φ1)/dt=ψ0′dσ/dτ. Hence, by subtracting one equation in (2.29) from the other, we obtain (2.31)ψ0′dσ/dτ=B̃2-B̃1≝F(σ,τ,ε), where we take into account Remark 2.2. Using the first assumption in (2.10) again, we complete (2.31) with the condition (2.32)limτ→+∞σ/τ=1. To study this problem let us analyze the function F(σ,τ,ε).

Lemma 2.3. The value σ=0 is the unique critical point for the problem (2.31), (2.32) and is achieved for τ→-∞.

Proof. First we calculate (2.33)F|σ=0=∫-∞∞{f′(u-+(R2-u-)ω(η))-f′(R1+(u--R1)ω(η))}×ω'(η)dη|σ=0=(f(R2)-f(u-))/(R2-u-)-(f(u-)-f(R1))/(u--R1)|σ=0=0 since σ=0 implies φ1=φ2. Next we note that the assumption (2.1) implies the inequality F|σ→+∞=ψ0′<0. Let us now consider the function F for |σ| bounded by a constant. Since φ2-φ1=σε=𝒪(ε) for such values of σ, we can conclude that Rk-u̅=𝒪(ε), k=1,2. Therefore, with accuracy 𝒪(ε), (2.34)F(σ,τ,ε)=∫-∞∞ω′(η-σ)f′(u--(u--u̅)ω(η))dη-(f(u-)-f(u̅))/(u--u̅). In fact, the integral on the right-hand side of (2.34) is the average of f′ with the kernel ω'.
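This averaged form can be checked numerically. In the sketch below the logistic ω and the data f(u)=u^3, u-=1, u̅=-0.4 are illustrative assumptions; the computation confirms the phase portrait behind Lemma 2.3: F vanishes at σ=0, is positive for σ<0 and negative for σ>0, so σ(τ) is driven to the rest point σ=0.

```python
# Numerical evaluation of F(sigma) from (2.34) with a logistic omega
# and the illustrative (assumed) data f(u)=u^3, u_-=1, ubar=-0.4.
import math

def omega(eta):
    return 1.0 / (1.0 + math.exp(-eta))

def omega_prime(eta):
    s = omega(eta)
    return s * (1.0 - s)

def fp(u):
    return 3.0 * u * u

u_minus, ubar = 1.0, -0.4
s10 = (u_minus ** 3 - ubar ** 3) / (u_minus - ubar)  # = 0.76

def F(sigma, h=0.01, L=40.0):
    """Riemann-sum approximation of the integral in (2.34) minus s10."""
    total, eta = 0.0, -L
    while eta <= L:
        total += omega_prime(eta - sigma) * fp(u_minus - (u_minus - ubar) * omega(eta)) * h
        eta += h
    return total - s10

print(F(-2.0), F(0.0), F(2.0))  # positive, ~0, negative
```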
For concave-convex functions f the derivative f′(u--(u--u̅)ω(η)) decreases monotonically from f′(u-)>0 to its minimal value f′(0)<0 when η goes from -∞ to the value η=η0, where η0 is such that u--(u--u̅)ω(η0)=0. Next, when η goes from η0 to +∞, the derivative increases monotonically from f′(0) to the limiting value f′(u̅)<0. At the same time, ω'(η-σ)>0 is a soliton-type, exponentially vanishing function concentrated around the point η=σ. This implies that the behavior of the integral as a function of σ is the same as the behavior of f′(u--(u--u̅)ω(η)) as a function of η. Therefore, the equation F(σ,·,·)=const has a unique solution for any nonnegative const<f′(u-). Thus, the equation F(σ,·,·)=0 has the unique solution σ=0; moreover, F'σ|σ=0<0. Furthermore, (2.35)∂F(σ,τ,ε)/∂τ|σ=0=R2τ′∫-∞∞ω(η)ω′(η)f′′(u̅+(u--u̅)ω(-η)+(R2-u̅)ω(η))dη-R1τ′∫-∞∞ω(-η)ω′(η)f′′(u̅+(u--u̅)ω(η)+(R1-u̅)ω(-η))dη|σ=0=0 since R1=R2 and R1τ′=R2τ′ for σ=0. By induction we obtain the equality (2.36)dmF(σ(τ),τ,ε)/dτm|σ=0=0 ∀m∈ℕ, which implies the statement of Lemma 2.3.

Consequently, φ1 and φ2 converge after the first interaction, which confirms the a priori supposition (2.12). To obtain the limiting trajectory x=φ11=φ21 of the shock wave, it is enough to pass to the limit τ→-∞ in one of the equalities (2.29). Obviously, we obtain the following equation: (2.37)dφ11/dt=(f(u-)-f(r))/(u--r)|x=φ11. Let us come back to the relation (2.28). Defining φk in accordance with (2.29), we transform (2.28) to the following form: (2.38){∂R/∂t+∂f(R)/∂x}(H(x-φ2)-H(x-φ30))=𝒪𝒟'(ε). For each test function ψ we have (2.39)({∂R/∂t+∂f(R)/∂x}(H(x-φ2)-H(x-φ30)),ψ)=∑±∫Ω±{∂R/∂t+∂f(R)/∂x}ψ(x)dx+∫φ20φ30{∂R/∂t+∂f(R)/∂x}ψ(x)dx, where (2.40)Ω-={x:φ2<x<φ20}, Ω+={x:φ20<x<φ2}. For φ20<x<φ30 the function R coincides with the centered rarefaction r; thus (2.41)∂r/∂t+∂f(r)/∂x=0, and the last integral in (2.39) is equal to zero. For x∈Ω± we note that, according to definition (2.8), either R=const or |φ2(τ,t)-φ20(t)|≤cε, c=const.
Since Rt′ and Rx′ are bounded uniformly in t>0, we conclude that the first integrals in (2.39) have the value 𝒪(ε). This completes the construction of the asymptotic solution (2.7). Obviously, for t∈(0,t1*-c1εα], c1>0, α∈(0,1), the formula (2.7) is transformed to the form (2.42)uε=u̅+(u--u̅)ω((φ10(t)-x)/ε)+(R-u̅)ω((x-φ20(t))/ε)+(u+-R)ω((x-φ30(t))/ε), which is the limit of (2.7) as τ→+∞, σ→+∞. For t∈[t1*+c2εα,t1*+c3εα], c3>c2>0, α∈(0,1), the formula (2.7) is transformed to the form (2.43)uε=u-+(R-u-)ω((x-φ11(t))/ε)+(u+-R)ω((x-φ30(t))/ε), which is the limit of (2.7) as τ→-∞, σ→0. This implies the following.

Lemma 2.4. The weak asymptotic mod𝒪𝒟′(ε) solution (2.7) describes uniformly in time the evolution of the problem (1.7) solution from the state (2.42) to the state (2.43) when t increases from 0 to t1*+cεα.

Clearly, passing to the limit as ε→0 we obtain the well-known result for the stable scenario of the collision of the shock wave and the centered rarefaction, when the shock wave enters the rarefaction domain and propagates with variable velocity and amplitude (see (2.37) and (2.41)).

## 3. The Shock Wave Propagation over the Centered Rarefaction

Let us consider the evolution of the problem (1.7) solution for t>t1*. The behavior of the solution of (2.37) is well known (see, e.g., [18]): the trajectory x=φ11 crosses all the characteristics X=f′(u)t+x20 if (3.1)(f(u-)-f(u))/(u--u)>f′(u) for u∈(ũ,u̅] and tends to the characteristic X=f′(ũ)t+x20 with ũ such that (3.2)(f(u-)-f(ũ))/(u--ũ)=f′(ũ). If u+<ũ, the resulting solution of the problem (1.7) will be a combination of the smoothed shock wave (with amplitude u--ũ and the front trajectory φ11=f′(ũ)t+x20) and the regularization of the centered rarefaction (defined near the domain bounded by the characteristics X̃=f′(ũ)t+x20 and X+=f′(u+)t+x20). Obviously, u≡u- for x<φ11(t) and u≡u+ for x≥X+(t). Therefore, we obtain the following.

Theorem 3.1. Let u+<ũ.
Then the weak asymptotic mod𝒪𝒟′(ε) solution (2.7) describes uniformly in time the evolution of the initial data (1.7) into the regularization, described above, of the combination of the shock wave and the centered rarefaction. If u+>ũ, a collision occurs between the shock wave and the weak singularity of the (x-φ30)-λ type, 0<λ<1 (in the limit as ε→0). To describe this collision let us construct again a weak asymptotic mod𝒪𝒟′(ε) solution. In a similar way to (2.7) we write (3.3)uε=u-+(R-u-)ω1+(u+-R)ω3, where R=R(x,t,ε) is defined in (2.8) and (3.4)ωk=ω((x-φk)/ε), k=1,3. We suppose that the phases φk=φk(τ1,t) are smooth functions such that (3.5)φ1(τ1,t)→φ11(t), φ3(τ1,t)→φ30(t) as τ1→+∞, (3.6)φ1(τ1,t)→φ̅(t), φ3(τ1,t)→φ31(t) as τ1→-∞, exponentially fast, where the “fast time” τ1 is defined as follows: (3.7)τ1=ψ1(t)/ε, ψ1(t)=φ30(t)-φ11(t). To simplify the formulas we also suppose that (3.8)φ̅(t)=φ31(t). The assumptions (3.5), (3.6), and (3.8) imply that the ansatz (3.3) coincides with the solution described in Section 2 as τ1→+∞ and tends to the shock wave as τ1→-∞. Repeating the analysis of Section 2 we obtain the following statement.

Lemma 3.2.
Under the assumptions mentioned above the following relations hold: (3.9)uε=u-+(R-u-)H(x-φ1)+(u+-R)H(x-φ3)+𝒪𝒟′(ε), f(uε)=f(u-)+{(R1-u-)C1-f(R1)+f(R)}H1+{(u+-R3)C3+f(R3)-f(R)}H3+𝒪𝒟′(ε), where Ci are the convolutions (3.10)C1=∫-∞∞ω′(η)f′(u-+(R1-u-)ω(η)+(u+-R1)ω(η-σ1))dη, C3=∫-∞∞ω′(η)f′(u-+(R3-u-)ω(η+σ1)+(u+-R3)ω(η))dη with the properties (3.11)limσ1→+∞C1=(f(u-)-f(R1))/(u--R1), limσ1→-∞C1=(f(u++u--R1)-f(u+))/(u--R1), limσ1→+∞C3=f′(u+), limσ1→-∞C3=(f(u-+u+-R3)-f(u-))/(u+-R3), σ1=σ1(τ1,t,ε) characterizes the distance between the trajectories φ1 and φ3, namely, (3.12)σ1=(φ3-φ1)/ε, and Rk=R(φk,t,ε) for k=1,3. Substituting the expressions (3.9) into the left-hand side of (1.8) we derive the following relation for obtaining the asymptotic parameters: (3.13)-(R1-u-){dφ1/dt-C1}δ(x-φ1)-(u+-R3){dφ3/dt-C3}δ(x-φ3)+{∂R/∂t+∂f(R)/∂x}(H1-H3)=𝒪𝒟′(ε). To calculate the trajectories φ1 and φ3 we set the coefficients of the δ-functions in the relation (3.13) equal to zero, namely, (3.14)dφk/dt=Ck, k=1,3.

Lemma 3.3. Under the assumption u+>ũ, the system (3.14) describes the confluence of the trajectories φ1 and φ3.

Proof. Before the interaction (τ1→+∞) σ1→+∞, so that we obtain again the Rankine-Hugoniot condition (2.37) for φ11. Moreover, we obtain the second formula in (2.5) for the characteristic φ30. Subtracting the above relations, we pass to the equation (3.15)d(φ3-φ1)/dt=ψ1′dσ1/dτ1=C3-C1≝F1(σ1,τ1,ε), where we write t in terms of τ1 and ε. The suppositions (3.5) complete the equation (3.15) with the condition (3.16)limτ1→+∞σ1/τ1=1. The last step of the proof, similar to the proof of Lemma 2.3, is the verification of the following statement.

Lemma 3.4. The value σ1=0 is the unique critical point for the problem (3.15), (3.16) and is achieved for τ1→-∞.

Consequently, φ1 and φ3 converge after the second interaction, which confirms the a priori supposition (3.8). Passing in (3.14) to the limit τ1→-∞ we find the Rankine-Hugoniot condition (3.17)dφ̅/dt=(f(u-)-f(u+))/(u--u+) for the limiting trajectory x=φ̅=φ31 of the shock wave with the amplitude u--u+.
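The tangent state ũ of (3.2), which separates the single-shock outcome from the shock-rarefaction one, is easy to compute by bisection. For the illustrative flux f(u)=u^3 with u-=1 (an assumption, not the paper's data) equation (3.2) reduces to 1+ũ-2ũ²=0, whose negative root is ũ=-1/2:

```python
# Computing the tangent state u~ of (3.2) by bisection and classifying
# the outcome as in Section 3. The flux f(u)=u^3 and u_-=1 are
# illustrative assumptions.

def f(u):  return u ** 3
def fp(u): return 3 * u ** 2

def tangent_state(u_minus, lo, hi, iters=200):
    """Root of g(u) = (f(u_minus)-f(u))/(u_minus-u) - f'(u) in (lo, hi)."""
    def g(u):
        return (f(u_minus) - f(u)) / (u_minus - u) - fp(u)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

u_minus = 1.0
utilde = tangent_state(u_minus, -0.99, -0.01)
print(utilde)  # close to -0.5

def outcome(u_plus):
    # single admissible shock iff u_+ > u~; otherwise a shock of
    # amplitude u_- - u~ followed by a centered rarefaction
    return "shock" if u_plus > utilde else "shock+rarefaction"

print(outcome(-0.3), outcome(-0.9))  # shock shock+rarefaction
```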
Thus, the supposition u+∈(ũ,u̅) is explicitly the stability condition for the limiting shock wave. Finally, we note that the relation (3.18)∂uε/∂t+∂f(uε)/∂x=𝒪𝒟′(ε) for φ1<x<φ3 can be proved in a similar way as in Section 2. Summarizing the above arguments, we obtain the following assertion.

Theorem 3.5. Let u+>ũ. Then the weak asymptotic mod𝒪𝒟′(ε) solutions (2.7) and (3.3) describe uniformly in time the evolution of the initial data (1.6) to the smoothed shock wave with amplitude u--u+.

## 4. Conclusion

Collecting all of the results, we obtain the following uniform in time description of the problem (1.7) solution: the front φ1 of the smoothed shock wave and the left front φ2 of the smoothed centered rarefaction merge during the time interval (t1*-cεα,t1*+cεα), 0<α<1, in accordance with (2.29). If u+<ũ, then the further evolution of the front φ11≡φ21 is described by (2.37), whereas the right front of the rarefaction wave remains the characteristic φ30. In the case u+=ũ the trajectory φ11 tends to φ30 as t→∞. If u+>ũ, then the trajectories φ11 and φ3 merge during the time interval (t2*-cεα,t2*+cεα) in accordance with (3.14), and the resulting trajectory for t≥t2*+cεα coincides with the shock wave front (3.17). The condition u+>ũ, in view of (2.37) and the assumption (1.3), is equivalent to the inequality (4.1)f(u)≤f(u+)+((f(u+)-f(u-))/(u+-u-))(u-u+) ∀u∈[u+,u-], which is explicitly the Oleinik E-condition. In the limit as ε→0 with Δ=const all the trajectories lose their smoothness while remaining continuous. However, the condition (4.1) does not depend on ε, so it remains valid for the limiting solution. To calculate the limit as Δ→0 it is enough to note that t1*=𝒪(Δ) and |φ30-φ20||t=t1*=𝒪(Δ). Therefore, the solution of the problem (1.1), (1.2) will be, in accordance with the condition (4.1), either the shock wave with amplitude u--u+ or the union of the shock wave (with amplitude u--ũ) and the centered rarefaction (with support between the characteristics f′(ũ)t and f′(u+)t).

---

*Source: 101647-2010-01-26.xml*
---

**Title:** Uniform in Time Description for Weak Solutions of the Hopf Equation with Nonconvex Nonlinearity

**Authors:** Antonio Olivas Martinez; Georgy A. Omel'yanov

**Journal:** International Journal of Mathematics and Mathematical Sciences (2009)

**Category:** Mathematical Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2009/101647
--- ## Abstract We consider the Riemann problem for the Hopf equation with concave-convex flux functions. Applying the weak asymptotics method we construct a uniform in time description for the Cauchy data evolution and show that the use of this method implies automatically the appearance of the Oleinik E-condition. --- ## Body ## 1. Introduction It is well known that the uniqueness problem for weak solutions of hyperbolic quasilinear systems remains unsolved up to now in the case of arbitrary jump amplitudes. Moreover, the approach which has been used successfully for shocks with sufficiently small amplitudes [1, 2] cannot be extended to the general case. On the other hand, there is a possibility to construct the unique stable solution passing to parabolic regularization. However, the vanishing viscosity method cannot be used effectively for nontrivial vector problems. Indeed, in the essentially nonintegrable case we, obviously, do not have the exact solution. Moreover, any traditional asymptotic method does not serve for the problem of nonlinear wave interaction since it leads to the appearance of a chain of partial differential equations, the first of them is nonlinear and, in fact, coincides with the original equation.We are of opinion that a progress in this problem can be achieved in the framework of the weak asymptotics method; see, for example, [3–5]. In this method the approximated solutions are sought in the same form as in the Whitham method modified for nonlinear waves with localized fast variation [6, 7] (for the original Whitham method for rapidly oscillating waves see [8]). At the same time, the discrepancy in the weak asymptotics method is assumed to be small in the sense of the space of functionals 𝒟x′ over test functions depending only on the “space” variable x. 
This somehow trivial modification allows us to reduce the problem of describing interaction of nonlinear waves to solving some systems of ordinary differential equations (instead of solving partial differential equations). Respectively, the main characteristics of the solution (the trajectory of the limiting singularity motion, etc.) can be found by this method, whereas the shape of the real solution cannot be found.Applications of the weak asymptotics method allowed among other to investigate the interaction of solitons for nonintegrable versions of the KdV and sine-Gordon equations [9–11], to describe uniformly in time the confluence of the shock waves for the Hopf equation with convex nonlinearities [4], as well as to construct uniform in time asymptotics for the Riemann problem for isothermal gas dynamics [12–14] and delta-shock solutions for the so-called pressureless gas dynamics [15, 16]. However, it should be necessary to verify the method application to each new type of problems.As for the uniqueness problem, we are not ready now to consider the vector case; so we are going to simulate it and to investigate the Riemann problem for the scalar conservation law with nonconvex nonlinearity:(1.1)∂u∂t+∂f(u)∂x=0,t>0,x∈ℝ1,(1.2)u|t=0={u-,x<0,u+,x>0.Furthermore, the structure of the uniform in time asymptotics for a regularization of the problem (1.1), (1.2) with an arbitrary f(u) can be very complicated. On the other hand, it is clear that we can define a sequence of time intervals and consider the asymptotics uε for each time interval as a combination of local interacting solutions. Almost without loss of generality we can suppose that the local solutions correspond to convex or concave-convex parts of the nonlinearity f(uε). That is why, in view of the result [4], we restrict ourselves to the concave-convex case; that is, we will suppose that (1.3)uf′′(u)>0(u≠0),f′′(0)=0,f′′′(0)≠0,lim|u|→∞f′(u)=∞. 
For definiteness we assume also that(1.4)u->0>u+.Let us recall that the solution of the initial-value problem is called stable if it depends continuously on the initial data (see, e.g., [2]). Obviously, the stable solution to the problem (1.1)–(1.4) is well known (see, e.g., [17]) and it can be constructed using the characteristics method for (1.1) with regularized initial data. In particular, the stable solution will be the shock wave with amplitude u--u+ if and only if the Oleinik E-condition(1.5)f(u)-f(u-)u-u-≥f(u+)-f(u-)u+-u-≥f(u+)-f(u)u+-u is satisfied for any u∈[u+,u-].The same shock wave presents an example of nonstable weak solutions if the condition (1.5) is violated. Let us note that this nonadmissible shock wave looks as if it is stable if f′(u-)>f′(u+).Technically, our result consists of obtaining uniform in time asymptotic solutions for a regularization of the problem (1.1), (1.2). However, we consider as the main result the fact that the weak asymptotics method allows to construct the admissible limiting solution without any additional conditions. In particular, we obtain automatically the Oleinik E-condition for the shock wave solution.The structure of the asymptotics construction is the following. Firstly we pass from the initial step function to a sequence of step functions such that each jump corresponds to a stable solution (in fact, to a shock wave or a centered rarefaction). Here we take into account the fact that weak asymptotics similarly to exact weak solution is not unique in the unstable case. At the same time, describing the collision of stable waves, we obtain automatically the stable scenario of interaction. 
Therefore, this passage from the Riemann problem to the problem of interaction of stable waves can be treated as a “regularization.” For our model example it means the transformation of the problem (1.1), (1.2) to the following “regularization”: (1.6)∂uΔ∂t+∂f(uΔ)∂x=0,t>0,x∈ℝ1,uΔ|t=0=u̅+(u--u̅)H(x10-x)+(u+-u̅)H(x-x20), where Δ=x20-x10>0 is the “regularization” parameter, H(x) is the Heaviside function, and u̅∈(u+,u-). We choose the intermediate state u̅<0 such that the left jump (at the point x=x10) corresponds for t≪Δ to the stable shock wave, whereas the right jump (at the point x=x20) corresponds to the centered rarefaction. Let us note that the problem (1.6) with Δ=const is of interest by itself.Next, we pass from (1.6) to the parabolic regularization:(1.7)∂uΔε∂t+∂f(uΔε)∂x=ε∂2uΔε∂x2,t>0,x∈ℝ1,uΔε|t=0=u̅+(u--u̅)ω((x10-x)ε)+(u+-u̅)ω((x-x20)ε), where ω(x/ε) is a regularization of the Heaviside function with the parameter ε≪Δ. The contents of Sections 2 and 3 are the construction of the weak asymptotic solution to the problem (1.7).Finally, in conclusion, we consider the limiting solution both forε→0 and for Δ→0.Completing this section let us formalize the concept of the weak asymptotics.Definition 1.1. LetuΔε=uΔε(t,x) be a function that belongs to 𝒞∞([0,T]×ℝx1) for each ε=const>0 and to 𝒞([0,T];𝒟′(ℝx1)) uniformly in ε∈[0,const]. One says that uΔε(t,x) is a weak asymptotic mod𝒪𝒟′(ε) solution of (1.7) if the relation (1.8)ddt∫-∞∞uΔεψdx-∫-∞∞f(uΔε)∂ψ∂xdx=𝒪(ε) holds uniformly in t∈(0,T] for any test function ψ=ψ(x)∈𝒟(ℝx1).Here and below the estimate𝒪(εk) is understood in the 𝒞([0,T]) sense: |𝒪(εk)|≤CTεk for t∈[0,T].Definition 1.2. A functiong(t,x,ε) is said to be of the value 𝒪𝒟′(εk) if the relation (1.9)(g,ψ)=∫-∞∞g(t,x,ε)ψ(x)dx=𝒪(εk) holds for any test function ψ=ψ(x)∈𝒟(ℝx1).It is very important to note that the viscosity term in (1.7) has the value 𝒪𝒟′(ε) and disappears in (1.8). The same is true for any parabolic regularization of the form ε(b(u))xx. 
Thus, we see that the weak asymptotic $\mathrm{mod}\,\mathcal{O}_{\mathcal{D}'}(\varepsilon)$ solution does not depend on the dissipative terms. In what follows we will omit the subindex $\Delta$ for $u$.

## 2. Construction of the Asymptotic Solution for the First Interaction

### 2.1. Asymptotic Ansatz

To present the asymptotic ansatz for the problem (1.7) let us consider the possible scenario of the evolution of the initial data (1.6). Our choice of $\bar u$ in (1.6) implies that
$$f(u) < f(\bar u) + \frac{f(u_-)-f(\bar u)}{u_- - \bar u}(u - \bar u) \quad \forall u \in (\bar u, u_-), \tag{2.1}$$
$$f(u) > f(\bar u) + \frac{f(\bar u)-f(u_+)}{\bar u - u_+}(u - \bar u) \quad \forall u \in (u_+, \bar u). \tag{2.2}$$
Thus, the solution of the problem (1.6) should be the superposition of a noninteracting shock wave and centered rarefaction during a sufficiently small time interval, namely,
$$u = \bar u + (u_- - \bar u)H(\varphi_1^0(t) - x) + \left\{ r\!\left(\frac{x - x_2^0}{t}\right) - \bar u \right\} H(x - \varphi_2^0(t)) + \left\{ u_+ - r\!\left(\frac{x - x_2^0}{t}\right) \right\} H(x - \varphi_3^0(t)), \tag{2.3}$$
where $t \ll \Delta$, $\varphi_1^0(t)$ is the shock wave phase:
$$\varphi_1^0(t) = x_1^0 + s_1^0 t, \qquad s_1^0 \coloneqq \frac{f(u_-) - f(\bar u)}{u_- - \bar u}, \tag{2.4}$$
$\varphi_k^0 = \varphi_k^0(t)$ for $k = 2,3$ are the characteristics:
$$\varphi_2^0 = x_2^0 + f'(\bar u)t, \qquad \varphi_3^0 = x_2^0 + f'(u_+)t, \tag{2.5}$$
and $r = r((x - x_2^0)/t)$ is the centered rarefaction with support between $x = \varphi_2^0$ and $x = \varphi_3^0$:
$$r \in \mathcal{C}^\infty \text{ is such that } f'(r(z)) = z. \tag{2.6}$$

Assumption (2.1) implies the intersection of the shock wave trajectory $\varphi_1^0$ with the characteristic $\varphi_2^0$ at some time instant $t_1^* = \mathcal{O}(\Delta)$. Accordingly, an interaction between the shock and the singularity of the type $(x - \varphi_2^0)_+^\lambda$, $0 < \lambda < 1$ (i.e., with the left border of the rarefaction), has to occur, which results in the appearance of a shock wave with a variable amplitude. Furthermore, this shock wave can interact with the right border of the rarefaction wave. So, generally speaking, the asymptotic ansatz needs to contain two fast variables. However, the distance between the characteristics $x = \varphi_2^0(t)$ and $x = \varphi_3^0(t)$ at the first critical time $t_1^*$ is greater than a constant for $\Delta = \mathrm{const}$. Thus, the shock wave trajectory can intersect the characteristic $x = \varphi_3^0(t)$ only at a second critical time instant $t_2^*$ such that $t_2^* - t_1^* \ge \mathrm{const} > 0$.
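Conditions (2.1) and (2.2) above state that the flux lies below the chord on $(\bar u, u_-)$ (stable left shock) and above the chord on $(u_+, \bar u)$ (centered rarefaction), so a candidate intermediate state $\bar u$ can be checked numerically for a concrete flux. A minimal sketch (the model flux $f(u) = u^3 - u$ and the function name are illustrative assumptions, not from the paper):

```python
import numpy as np

def admissible_intermediate_state(f, u_minus, u_plus, u_bar, n=2000, tol=1e-12):
    """Check conditions (2.1)-(2.2) for a candidate intermediate state u_bar:

    (2.1): f lies below the chord from (u_bar, f(u_bar)) to (u_minus, f(u_minus))
           on the interval (u_bar, u_minus)  -> the left jump is a stable shock;
    (2.2): f lies above the chord from (u_plus, f(u_plus)) to (u_bar, f(u_bar))
           on the interval (u_plus, u_bar)   -> the right jump is a rarefaction.
    """
    s1 = (f(u_minus) - f(u_bar)) / (u_minus - u_bar)
    u = np.linspace(u_bar, u_minus, n + 2)[1:-1]        # interior points only
    cond1 = np.all(f(u) < f(u_bar) + s1 * (u - u_bar) + tol)
    s2 = (f(u_bar) - f(u_plus)) / (u_bar - u_plus)
    v = np.linspace(u_plus, u_bar, n + 2)[1:-1]
    cond2 = np.all(f(v) > f(u_bar) + s2 * (v - u_bar) - tol)
    return bool(cond1 and cond2)
```

For $f(u) = u^3 - u$ with $u_- = 1$ and $u_+ = -1$, the state $\bar u = -0.3$ satisfies both conditions, while $\bar u = -0.9$ violates (2.1).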
Therefore, we can investigate the interaction process by stages.

Let us consider the first evolution stage for the solution of the problem (1.7). We present the asymptotic ansatz as a natural regularization of (2.3):
$$u_\varepsilon = \bar u + (u_- - \bar u)\omega_1 + (R - \bar u)\omega_2 + (u_+ - R)\omega_3, \tag{2.7}$$
where $R = R(x,t,\varepsilon) \in \mathcal{C}^\infty(\mathbb{R}^1 \times \mathbb{R}^1_+ \times [0,1])$ is a function such that
$$R(x,t,\varepsilon) = \begin{cases} \bar u & \text{if } x < \varphi_2^0 - c\varepsilon, \\[2pt] r\!\left(\dfrac{x - x_2^0}{t}\right) & \text{if } \varphi_2^0 < x < \varphi_3^0, \\[2pt] u_+ & \text{if } x > \varphi_3^0 + c\varepsilon \end{cases} \tag{2.8}$$
with a constant $c > 0$,
$$\omega_1 = \omega\!\left(\frac{-x + \varphi_1}{\varepsilon}\right), \qquad \omega_2 = \omega\!\left(\frac{x - \varphi_2}{\varepsilon}\right), \qquad \omega_3 = \omega\!\left(\frac{x - \varphi_3^0}{\varepsilon}\right), \tag{2.9}$$
and $\omega(z/\varepsilon)$ is the regularization of the Heaviside function.

Furthermore, the phases $\varphi_k = \varphi_k(\tau, t)$, $k = 1,2$, are assumed to be smooth functions such that
$$\varphi_k(\tau,t) \to \varphi_k^0(t) \text{ as } \tau \to +\infty, \qquad \varphi_k(\tau,t) \to \varphi_k^1(t) \text{ as } \tau \to -\infty \tag{2.10}$$
exponentially fast, where $\tau$ denotes the "fast time":
$$\tau = \frac{\psi_0(t)}{\varepsilon}, \qquad \psi_0(t) = \varphi_2^0(t) - \varphi_1^0(t). \tag{2.11}$$
To simplify the formulas we also suppose that
$$\varphi_1^1(t) = \varphi_2^1(t). \tag{2.12}$$
We assume that $\omega$ tends to its limiting values
$$0 = \lim_{\eta\to-\infty}\omega(\eta), \qquad 1 = \lim_{\eta\to\infty}\omega(\eta) \tag{2.13}$$
at an exponential rate. Moreover, since the limiting (as $\varepsilon \to 0$) solution does not depend on the choice of $\omega$, let
$$\omega'_\eta > 0, \qquad \omega(\eta) + \omega(-\eta) = 1. \tag{2.14}$$
The first assumption in (2.10) implies that the ansatz (2.7) describes the two noninteracting waves (2.3) for $t \le t_1^* - c\varepsilon^\alpha$, $\alpha \in (0,1)$. The second assumption in (2.10) together with (2.12) implies that the ansatz (2.7) describes the union of the shock and the rarefaction waves for $t \ge t_1^* + c\varepsilon^\alpha$.

### 2.2. Preliminary Calculations

To determine the asymptotics (2.7) we should calculate weak expansions of $u_\varepsilon$ and $f(u_\varepsilon)$. Almost trivial calculations show that
$$u_\varepsilon = u_- - (u_- - \bar u)H_1 + (R - \bar u)H_2 + (u_+ - R)H_3 + \mathcal{O}_{\mathcal{D}'}(\varepsilon), \tag{2.15}$$
where
$$H_k = H(x - \varphi_k) \text{ for } k = 1,2, \qquad H_3 = H(x - \varphi_3^0). \tag{2.16}$$
Next, we have to calculate the weak expansion for the nonlinear term.

**Lemma 2.1.**
Under the assumptions mentioned above the following relation holds:
$$f(u_\varepsilon) = f(u_-) - (u_- - \bar u)B_1 H_1 + \{(R_2 - \bar u)B_2 - f(R_2) + f(R)\}H_2 + \{f(u_+) - f(R)\}H_3 + \mathcal{O}_{\mathcal{D}'}(\varepsilon), \tag{2.17}$$
where $B_i$ are the following convolutions:
$$B_1 = \int_{-\infty}^{\infty} \omega'(\eta)\, f'\big(\bar u + (u_- - \bar u)\omega(\eta) + (R_1 - \bar u)\omega(-\eta - \sigma)\big)\,d\eta, \qquad B_2 = \int_{-\infty}^{\infty} \omega'(\eta)\, f'\big(\bar u + (u_- - \bar u)\omega(-\eta - \sigma) + (R_2 - \bar u)\omega(\eta)\big)\,d\eta \tag{2.18}$$
with the properties
$$\lim_{\sigma\to+\infty} B_1 = \frac{f(u_-) - f(\bar u)}{u_- - \bar u}, \qquad \lim_{\sigma\to-\infty} B_1 = \frac{f(R_1 + u_- - \bar u) - f(\bar u)}{u_- - \bar u}, \qquad \lim_{\sigma\to+\infty} B_2 = f'(\bar u), \qquad \lim_{\sigma\to-\infty} B_2 = \frac{f(u_- + R_2 - \bar u) - f(u_-)}{R_2 - \bar u}; \tag{2.19}$$
here $\sigma = \sigma(\tau,t,\varepsilon)$ characterizes the distance between the trajectories $\varphi_1$ and $\varphi_2$, namely,
$$\sigma = \frac{\varphi_2 - \varphi_1}{\varepsilon}, \tag{2.20}$$
and $R_k = R(\varphi_k, t, \varepsilon)$ for $k = 1,2$, $R_3 = R(\varphi_3^0, t, \varepsilon)$.

*Sketch of the proof.* For each $\psi(x) \in \mathcal{D}(\mathbb{R}^1)$ we have
$$(f(u_\varepsilon), \psi) = -\int_{-\infty}^{\infty} f(u_\varepsilon)\frac{d\phi(x)}{dx}\,dx = f(u_-)\int_{-\infty}^{\infty}\psi(x)\,dx + \int_{-\infty}^{\infty}\frac{\partial u_\varepsilon}{\partial x} f'(u_\varepsilon)\phi(x)\,dx, \tag{2.21}$$
where $\phi(x) = \int_x^{\infty}\psi(x')\,dx'$. Next, the derivative $\partial u_\varepsilon/\partial x$ contains terms of value $\mathcal{O}(1/\varepsilon)$, say $\omega'((\varphi_1 - x)/\varepsilon)/\varepsilon$, and the term $(\omega_2 - \omega_3)R'_x$. To calculate the first term we change the variable, say $\eta = (\varphi_1 - x)/\varepsilon$, and apply the Taylor expansion. Therefore,
$$-\int_{-\infty}^{\infty}\frac{1}{\varepsilon}\,\omega'\!\left(\frac{\varphi_1 - x}{\varepsilon}\right) f'(u_\varepsilon)\phi(x)\,dx = \int_{-\infty}^{\infty}\omega'(\eta)\, f'(u_\varepsilon)\phi(x)\Big|_{x = \varphi_1 - \varepsilon\eta}\,d\eta = B_1\phi(\varphi_1) + \mathcal{O}(\varepsilon). \tag{2.22}$$
Finally, we note that
$$\omega_2 - \omega_3 = H(x - \varphi_2) - H(x - \varphi_3) + \mathcal{O}_{\mathcal{D}'}(\varepsilon), \qquad u_\varepsilon\big|_{x\in[\varphi_2,\varphi_3^0]} = \bar u + (R - \bar u)\omega_2 = R + \mathcal{O}_{\mathcal{D}'}(\varepsilon). \tag{2.23}$$
Thus,
$$\int_{-\infty}^{\infty} R'_x(\omega_2 - \omega_3) f'(u_\varepsilon)\phi(x)\,dx = \int_{\varphi_2}^{\varphi_3^0} R'_x f'(R)\phi(x)\,dx + \mathcal{O}(\varepsilon) = \phi(x)f(R)\Big|_{x=\varphi_2}^{x=\varphi_3^0} + \int_{\varphi_2}^{\varphi_3^0} f(R)\psi(x)\,dx + \mathcal{O}(\varepsilon). \tag{2.24}$$
This implies the formula (2.17). To calculate the limiting values (2.19) of the convolutions $B_i$ it is enough to use the stabilization properties (2.13) of the function $\omega(\eta)$.

**Remark 2.2.** The convolutions $B_i$ are functions of $\sigma$, $\tau$, and $t$. At the same time we can treat $B_i$ as functions of $\sigma$, $\tau$, and $\varepsilon$. Indeed, let us denote by $x_1^*$ the intersection point of the trajectories $x = \varphi_1^0(t)$ and $x = \varphi_2^0(t)$, that is, $x_1^* = \varphi_1^0(t_1^*) = \varphi_2^0(t_1^*)$. Then, by virtue of (2.4) and (2.5),
$$\varphi_1^0(t) = x_1^* + s_1^0(t - t_1^*), \qquad \varphi_2^0 = x_1^* + f'(\bar u)(t - t_1^*). \tag{2.25}$$
Consequently,
$$\tau = \frac{\psi_0'}{\varepsilon}(t - t_1^*), \qquad \psi_0' \coloneqq f'(\bar u) - s_1^0, \tag{2.26}$$
$$B_i(\sigma,\tau,t)\Big|_{t = t_1^* + \varepsilon\tau/\psi_0'} \coloneqq \tilde B_i(\sigma,\tau,\varepsilon). \tag{2.27}$$
Substituting the expressions (2.15) and (2.17) into the left-hand side of (1.8), we derive our main relation for obtaining the parameters of the asymptotic solution (2.7):
$$(u_- - \bar u)\left\{\frac{d\varphi_1}{dt} - B_1\right\}\delta(x - \varphi_1) - (R_2 - \bar u)\left\{\frac{d\varphi_2}{dt} - B_2\right\}\delta(x - \varphi_2) + \left\{\frac{\partial R}{\partial t} + \frac{\partial f(R)}{\partial x}\right\}\big(H(x - \varphi_2) - H(x - \varphi_3^0)\big) = \mathcal{O}_{\mathcal{D}'}(\varepsilon). \tag{2.28}$$

### 2.3. Analysis of the Singularity Dynamics

Let us consider the system obtained by setting the coefficients of the $\delta$ functions in relation (2.28) equal to zero, namely,
$$\frac{d\varphi_k}{dt} = B_k, \qquad k = 1,2. \tag{2.29}$$
Before the interaction ($\tau \to +\infty$) the first assumption in (2.10) for $k = 1,2$ implies $\sigma \to \tau \to +\infty$. Therefore, the limiting relations (2.19) verify the concordance of (2.29) with our definitions (2.4) and (2.5) of $\varphi_1^0$ and $\varphi_2^0$.

To find the limiting behavior of $\varphi_k$ after the interaction ($\tau \to -\infty$) let us reduce the system (2.29) to a scalar equation. In view of (2.20) and (2.26),
$$\frac{d(\varphi_2 - \varphi_1)}{dt} = \psi_0'\frac{d\sigma}{d\tau}. \tag{2.30}$$
Hence, by subtracting one equation in (2.29) from the other we obtain
$$\psi_0'\frac{d\sigma}{d\tau} = \tilde B_2 - \tilde B_1 \coloneqq F(\sigma,\tau,\varepsilon), \tag{2.31}$$
where we take into account Remark 2.2. Using the first assumption in (2.10) again we complete (2.31) with the condition
$$\lim_{\tau\to+\infty}\frac{\sigma}{\tau} = 1. \tag{2.32}$$
To study this problem let us analyze the function $F(\sigma,\tau,\varepsilon)$.

**Lemma 2.3.** The value $\sigma = 0$ is the unique critical point for the problem (2.31), (2.32) and is achieved for $\tau \to -\infty$.

*Proof.* First we calculate
$$F\big|_{\sigma=0} = \int_{-\infty}^{\infty}\Big\{f'\big(u_- + (R_2 - u_-)\omega(\eta)\big) - f'\big(R_1 + (u_- - R_1)\omega(\eta)\big)\Big\}\,\omega'(\eta)\,d\eta\,\bigg|_{\sigma=0} = \frac{f(R_2) - f(u_-)}{R_2 - u_-} - \frac{f(u_-) - f(R_1)}{u_- - R_1}\bigg|_{\sigma=0} = 0, \tag{2.33}$$
since $\sigma = 0$ implies $\varphi_1 = \varphi_2$. Next we note that the assumption (2.1) implies the inequality $F|_{\sigma\to+\infty} = \psi_0' < 0$.

Let us now consider the function $F$ for $|\sigma|$ bounded by a constant. Since $\varphi_2 - \varphi_1 = \sigma\varepsilon = \mathcal{O}(\varepsilon)$ for such values of $\sigma$, we can conclude that $R_k - \bar u = \mathcal{O}(\varepsilon)$, $k = 1,2$. Therefore, with accuracy $\mathcal{O}(\varepsilon)$,
$$F(\sigma,\tau,\varepsilon) = \int_{-\infty}^{\infty}\omega'(\eta - \sigma)\, f'\big(u_- - (u_- - \bar u)\omega(\eta)\big)\,d\eta - \frac{f(u_-) - f(\bar u)}{u_- - \bar u}. \tag{2.34}$$
In fact, the integral on the right-hand side of (2.34) is the average of $f'$ with the kernel $\omega'$.
For concave-convex functions $f$ the derivative $f'(u_- - (u_- - \bar u)\omega(\eta))$ decreases monotonically from $f'(u_-) > 0$ to its minimal value $f'(0) < 0$ when $\eta$ goes from $-\infty$ to the value $\eta = \eta_0$, where $\eta_0$ is such that $u_- - (u_- - \bar u)\omega(\eta_0) = 0$. Next, when $\eta$ goes from $\eta_0$ to $+\infty$, the derivative increases monotonically from $f'(0)$ to the limiting value $f'(\bar u) < 0$. At the same time, $\omega'(\eta - \sigma) > 0$ is a soliton-type exponentially vanishing function concentrated around the point $\eta = \sigma$. This implies that the behavior of the integral as a function of $\sigma$ is the same as the behavior of $f'(u_- - (u_- - \bar u)\omega(\eta))$ as a function of $\eta$. Therefore, the equation $F(\sigma,\cdot,\cdot) = \mathrm{const}$ has a unique solution for any nonnegative $\mathrm{const} < f'(u_-)$. Thus, the equation $F(\sigma,\cdot,\cdot) = 0$ has the unique solution $\sigma = 0$; moreover, $F'_\sigma|_{\sigma=0} < 0$. Furthermore,
$$\frac{\partial F(\sigma,\tau,\varepsilon)}{\partial\tau}\bigg|_{\sigma=0} = R'_{2\tau}\int_{-\infty}^{\infty}\omega(\eta)\omega'(\eta)\, f''\big(\bar u + (u_- - \bar u)\omega(-\eta) + (R_2 - \bar u)\omega(\eta)\big)\,d\eta - R'_{1\tau}\int_{-\infty}^{\infty}\omega(-\eta)\omega'(\eta)\, f''\big(\bar u + (u_- - \bar u)\omega(\eta) + (R_1 - \bar u)\omega(-\eta)\big)\,d\eta\,\bigg|_{\sigma=0} = 0, \tag{2.35}$$
since $R_1 = R_2$ and $R'_{1\tau} = R'_{2\tau}$ for $\sigma = 0$. By induction we obtain the equality
$$\frac{d^m F(\sigma(\tau),\tau,\varepsilon)}{d\tau^m}\bigg|_{\sigma=0} = 0 \quad \forall m \in \mathbb{N}, \tag{2.36}$$
which implies the statement of Lemma 2.3.

Consequently, $\varphi_1$ and $\varphi_2$ converge after the first interaction, which confirms the a priori supposition (2.12). To obtain the limiting trajectory $x = \varphi_1^1 = \varphi_2^1$ of the shock wave, it is enough to pass to the limit $\tau \to -\infty$ in one of the equalities (2.29). Obviously, we obtain the following equation:
$$\frac{d\varphi_1^1}{dt} = \frac{f(u_-) - f(r)}{u_- - r}\bigg|_{x = \varphi_1^1}. \tag{2.37}$$
Let us come back to the relation (2.28). Defining $\varphi_k$ in accordance with (2.29), we transform (2.28) to the following form:
$$\left\{\frac{\partial R}{\partial t} + \frac{\partial f(R)}{\partial x}\right\}\big(H(x - \varphi_2) - H(x - \varphi_3^0)\big) = \mathcal{O}_{\mathcal{D}'}(\varepsilon). \tag{2.38}$$
For each test function $\psi$ we have
$$\left(\left\{\frac{\partial R}{\partial t} + \frac{\partial f(R)}{\partial x}\right\}\big(H(x - \varphi_2) - H(x - \varphi_3^0)\big), \psi\right) = \sum_{\pm}\int_{\Omega_\pm}\left\{\frac{\partial R}{\partial t} + \frac{\partial f(R)}{\partial x}\right\}\psi(x)\,dx + \int_{\varphi_2^0}^{\varphi_3^0}\left\{\frac{\partial R}{\partial t} + \frac{\partial f(R)}{\partial x}\right\}\psi(x)\,dx, \tag{2.39}$$
where
$$\Omega_- = \{x : \varphi_2 < x < \varphi_2^0\}, \qquad \Omega_+ = \{x : \varphi_2^0 < x < \varphi_2\}. \tag{2.40}$$
For $\varphi_2^0 < x < \varphi_3^0$ the function $R$ coincides with the centered rarefaction $r$; thus
$$\frac{\partial r}{\partial t} + \frac{\partial f(r)}{\partial x} = 0, \tag{2.41}$$
and the last integral in (2.39) is equal to zero. For $x \in \Omega_\pm$ we note that, according to definition (2.8), either $R = \mathrm{const}$ or $|\varphi_2(\tau,t) - \varphi_2^0(t)| \le c\varepsilon$, $c = \mathrm{const}$.
Since $R'_t$ and $R'_x$ are bounded uniformly in $t > 0$, we conclude that the first integrals in (2.39) have the value $\mathcal{O}(\varepsilon)$. This completes the construction of the asymptotic solution (2.7).

Obviously, for $t \in (0, t_1^* - c_1\varepsilon^\alpha]$, $c_1 > 0$, $\alpha \in (0,1)$, the formula (2.7) is transformed to the form
$$u_\varepsilon = \bar u + (u_- - \bar u)\,\omega\!\left(\frac{\varphi_1^0(t) - x}{\varepsilon}\right) + (R - \bar u)\,\omega\!\left(\frac{x - \varphi_2^0(t)}{\varepsilon}\right) + (u_+ - R)\,\omega\!\left(\frac{x - \varphi_3^0(t)}{\varepsilon}\right), \tag{2.42}$$
which is the limit of (2.7) as $\tau \to +\infty$, $\sigma \to +\infty$. For $t \in [t_1^* + c_2\varepsilon^\alpha, t_1^* + c_3\varepsilon^\alpha]$, $c_3 > c_2 > 0$, $\alpha \in (0,1)$, the formula (2.7) is transformed to the form
$$u_\varepsilon = u_- + (R - u_-)\,\omega\!\left(\frac{x - \varphi_1^1(t)}{\varepsilon}\right) + (u_+ - R)\,\omega\!\left(\frac{x - \varphi_3^0(t)}{\varepsilon}\right), \tag{2.43}$$
which is the limit of (2.7) as $\tau \to -\infty$, $\sigma \to 0$. This implies the following.

**Lemma 2.4.** The weak asymptotic $\mathrm{mod}\,\mathcal{O}_{\mathcal{D}'}(\varepsilon)$ solution (2.7) describes uniformly in time the evolution of the solution of the problem (1.7) from the state (2.42) to the state (2.43) when $t$ increases from $0$ to $t_1^* + c\varepsilon^\alpha$.

Clearly, passing to the limit as $\varepsilon \to 0$ we obtain the well-known result for the stable scenario of the collision of the shock wave and the centered rarefaction, when the shock wave enters the rarefaction domain and propagates with variable velocity and amplitude (see (2.37) and (2.41)).

## 3. The Shock Wave Propagation over the Centered Rarefaction

Let us consider the evolution of the solution of the problem (1.7) for $t > t_1^*$. The behavior of the solution of (2.37) is well known (see, e.g., [18]): the trajectory $x = \varphi_1^1$ crosses all the characteristics $X = f'(u)t + x_2^0$ if
$$\frac{f(u_-) - f(u)}{u_- - u} > f'(u) \quad \text{for } u \in (\tilde u, \bar u] \tag{3.1}$$
and tends to the characteristic $X = f'(\tilde u)t + x_2^0$ with $\tilde u$ such that
$$\frac{f(u_-) - f(\tilde u)}{u_- - \tilde u} = f'(\tilde u). \tag{3.2}$$
If $u_+ < \tilde u$, the resulting solution of the problem (1.7) will be a combination of the smoothed shock wave (with amplitude $u_- - \tilde u$ and front trajectory $\varphi_1^1 = f'(\tilde u)t + x_2^0$) and the regularization of the centered rarefaction (defined near the domain bounded by the characteristics $\tilde X = f'(\tilde u)t + x_2^0$ and $X_+ = f'(u_+)t + x_2^0$). Obviously, $u \equiv u_-$ for $x < \varphi_1^1(t)$ and $u \equiv u_+$ for $x \ge X_+(t)$. Therefore, we obtain the following.

**Theorem 3.1.** Let $u_+ < \tilde u$.
Then the weak asymptotic $\mathrm{mod}\,\mathcal{O}_{\mathcal{D}'}(\varepsilon)$ solution (2.7) describes uniformly in time the evolution of the initial data (1.7) into the regularization described above for the combination of the shock wave and the centered rarefaction.

If $u_+ > \tilde u$, a collision occurs between the shock wave and the weak singularity of the type $(x - \varphi_3^0)_-^\lambda$, $0 < \lambda < 1$ (in the limit as $\varepsilon \to 0$). To describe this collision let us construct again a weak asymptotic $\mathrm{mod}\,\mathcal{O}_{\mathcal{D}'}(\varepsilon)$ solution. In a similar way to (2.7) we write
$$u_\varepsilon = u_- + (R - u_-)\omega_1 + (u_+ - R)\omega_3, \tag{3.3}$$
where $R = R(x,t,\varepsilon)$ is defined in (2.8) and
$$\omega_k = \omega\!\left(\frac{x - \varphi_k}{\varepsilon}\right), \qquad k = 1,3. \tag{3.4}$$
We suppose that the phases $\varphi_k = \varphi_k(\tau_1, t)$ are smooth functions such that
$$\varphi_1(\tau_1,t) \to \varphi_1^1(t), \qquad \varphi_3(\tau_1,t) \to \varphi_3^0(t) \quad \text{as } \tau_1 \to +\infty, \tag{3.5}$$
$$\varphi_1(\tau_1,t) \to \bar\varphi(t), \qquad \varphi_3(\tau_1,t) \to \varphi_3^1(t) \quad \text{as } \tau_1 \to -\infty, \tag{3.6}$$
exponentially fast, where the "fast time" $\tau_1$ is defined as follows:
$$\tau_1 = \frac{\psi_1(t)}{\varepsilon}, \qquad \psi_1(t) = \varphi_3^0(t) - \varphi_1^1(t). \tag{3.7}$$
To simplify the formulas we also suppose that
$$\bar\varphi(t) = \varphi_3^1(t). \tag{3.8}$$
The assumptions (3.5), (3.6), and (3.8) imply that the ansatz (3.3) coincides with the solution described in Section 2 as $\tau_1 \to +\infty$ and tends to the shock wave as $\tau_1 \to -\infty$.

Repeating the analysis of Section 2 we obtain the following statement.

**Lemma 3.2.**
Under the assumptions mentioned above the following relations hold:
$$u_\varepsilon = u_- + (R - u_-)H(x - \varphi_1) + (u_+ - R)H(x - \varphi_3) + \mathcal{O}_{\mathcal{D}'}(\varepsilon),$$
$$f(u_\varepsilon) = f(u_-) + \{(R_1 - u_-)C_1 - f(R_1) + f(R)\}H_1 + \{(u_+ - R_3)C_3 + f(R_3) - f(R)\}H_3 + \mathcal{O}_{\mathcal{D}'}(\varepsilon), \tag{3.9}$$
where $C_i$ are the convolutions
$$C_1 = \int_{-\infty}^{\infty}\omega'(\eta)\, f'\big(u_- + (R_1 - u_-)\omega(\eta) + (u_+ - R_1)\omega(\eta - \sigma_1)\big)\,d\eta, \qquad C_3 = \int_{-\infty}^{\infty}\omega'(\eta)\, f'\big(u_- + (R_3 - u_-)\omega(\eta + \sigma_1) + (u_+ - R_3)\omega(\eta)\big)\,d\eta \tag{3.10}$$
with the properties
$$\lim_{\sigma_1\to+\infty} C_1 = \frac{f(u_-) - f(R_1)}{u_- - R_1}, \qquad \lim_{\sigma_1\to-\infty} C_1 = \frac{f(u_+ + u_- - R_1) - f(u_+)}{u_- - R_1}, \qquad \lim_{\sigma_1\to+\infty} C_3 = f'(u_+), \qquad \lim_{\sigma_1\to-\infty} C_3 = \frac{f(u_- + u_+ - R_3) - f(u_-)}{u_+ - R_3}; \tag{3.11}$$
here $\sigma_1 = \sigma_1(\tau_1,t,\varepsilon)$ characterizes the distance between the trajectories $\varphi_1$ and $\varphi_3$, namely,
$$\sigma_1 = \frac{\varphi_3 - \varphi_1}{\varepsilon}, \tag{3.12}$$
and $R_k = R(\varphi_k, t, \varepsilon)$ for $k = 1,3$.

Substituting the expressions (3.9) into the left-hand side of (1.8) we derive the following relation for obtaining the asymptotic parameters:
$$-(R_1 - u_-)\left\{\frac{d\varphi_1}{dt} - C_1\right\}\delta(x - \varphi_1) - (u_+ - R_3)\left\{\frac{d\varphi_3}{dt} - C_3\right\}\delta(x - \varphi_3) + \left\{\frac{\partial R}{\partial t} + \frac{\partial f(R)}{\partial x}\right\}(H_1 - H_3) = \mathcal{O}_{\mathcal{D}'}(\varepsilon). \tag{3.13}$$
To calculate the trajectories $\varphi_1$ and $\varphi_3$ we set the coefficients of the $\delta$-functions in relation (3.13) equal to zero, namely,
$$\frac{d\varphi_k}{dt} = C_k, \qquad k = 1,3. \tag{3.14}$$

**Lemma 3.3.** Under the assumption $u_+ > \tilde u$, system (3.14) describes the confluence of the trajectories $\varphi_1$ and $\varphi_3$.

*Proof.* Before the interaction ($\tau_1 \to +\infty$) we have $\sigma_1 \to +\infty$, so that we obtain again the Rankine-Hugoniot condition (2.37) for $\varphi_1^1$. Moreover, we obtain the second formula in (2.5) for the characteristic $\varphi_3^0$. Subtracting the above relations we pass to the equation
$$\frac{d(\varphi_3 - \varphi_1)}{dt} = \psi_1'\frac{d\sigma_1}{d\tau_1} = C_3 - C_1 \coloneqq F_1(\sigma_1,\tau_1,\varepsilon), \tag{3.15}$$
where we write $t$ in terms of $\tau_1$ and $\varepsilon$. Suppositions (3.5) complete equation (3.15) with the condition
$$\lim_{\tau_1\to+\infty}\frac{\sigma_1}{\tau_1} = 1. \tag{3.16}$$
The last step of the proof is a verification, similar to that of Lemma 2.3, of the following statement.

**Lemma 3.4.** The value $\sigma_1 = 0$ is the unique critical point for the problem (3.15), (3.16) and is achieved for $\tau_1 \to -\infty$.

Consequently, $\varphi_1$ and $\varphi_3$ converge after the second interaction, which confirms the a priori supposition (3.8). Passing in (3.14) to the limit $\tau_1 \to -\infty$ we find the Rankine-Hugoniot condition
$$\frac{d\bar\varphi}{dt} = \frac{f(u_-) - f(u_+)}{u_- - u_+} \tag{3.17}$$
for the limiting trajectory $x = \bar\varphi = \varphi_3^1$ of the shock wave with amplitude $u_- - u_+$.
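The threshold state $\tilde u$ defined by the tangency condition (3.2) and the Rankine-Hugoniot speed (3.17) are easy to evaluate for a concrete flux. A minimal Python sketch using bisection (the model flux $f(u) = u^3 - u$ and the bracket are illustrative assumptions; for this flux the tangency equation gives $\tilde u = -u_-/2$ in closed form):

```python
def u_tilde(f, f_prime, u_minus, lo, hi, tol=1e-12):
    """Solve the tangency condition (3.2),
        (f(u_minus) - f(u)) / (u_minus - u) = f'(u),
    for u by bisection; [lo, hi] must bracket the root."""
    def g(u):
        return (f(u_minus) - f(u)) / (u_minus - u) - f_prime(u)
    a, b, ga = lo, hi, g(lo)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ga * g(m) <= 0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

def shock_speed(f, u_minus, u_plus):
    """Rankine-Hugoniot speed (3.17) of the limiting shock wave."""
    return (f(u_minus) - f(u_plus)) / (u_minus - u_plus)

# Model flux f(u) = u^3 - u: here the tangency state is u_tilde = -u_minus/2.
f = lambda u: u**3 - u
fp = lambda u: 3.0 * u * u - 1.0
```

For $u_- = 1$ this gives $\tilde u = -0.5$, and for the full jump $u_- = 1$, $u_+ = -1$ the shock speed (3.17) is $0$.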
Thus, the supposition $u_+ \in (\tilde u, \bar u)$ is precisely the stability condition for the limiting shock wave. Finally we note that the relation
$$\frac{\partial u_\varepsilon}{\partial t} + \frac{\partial f(u_\varepsilon)}{\partial x} = \mathcal{O}_{\mathcal{D}'}(\varepsilon) \quad \text{for } \varphi_1 < x < \varphi_3 \tag{3.18}$$
can be proved in a similar way as in Section 2. Summarizing the above arguments, we obtain the following assertion.

**Theorem 3.5.** Let $u_+ > \tilde u$. Then the weak asymptotic $\mathrm{mod}\,\mathcal{O}_{\mathcal{D}'}(\varepsilon)$ solutions (2.7) and (3.3) describe uniformly in time the evolution of the initial data (1.6) to the smoothed shock wave with amplitude $u_- - u_+$.

## 4. Conclusion

Summarizing all the results, we obtain the following uniform-in-time description of the solution of the problem (1.7): the front $\varphi_1$ of the smoothed shock wave and the left front $\varphi_2$ of the smoothed centered rarefaction merge during the time interval $(t_1^* - c\varepsilon^\alpha, t_1^* + c\varepsilon^\alpha)$, $0 < \alpha < 1$, in accordance with (2.29). If $u_+ < \tilde u$, then the further evolution of the front $\varphi_1^1 \equiv \varphi_2^1$ is described by (2.37), whereas the right front of the rarefaction wave remains the characteristic $\varphi_3^0$. In the case $u_+ = \tilde u$ the trajectory $\varphi_1^1$ tends to $\varphi_3^0$ as $t \to \infty$. If $u_+ > \tilde u$, then the trajectories $\varphi_1^1$ and $\varphi_3$ merge during the time interval $(t_2^* - c\varepsilon^\alpha, t_2^* + c\varepsilon^\alpha)$ in accordance with (3.14), and the resulting trajectory for $t \ge t_2^* + c\varepsilon^\alpha$ coincides with the shock wave front (3.17).

The condition $u_+ > \tilde u$, in view of (2.37) and the assumption (1.3), is equivalent to the inequality
$$f(u) \le f(u_+) + \frac{f(u_+) - f(u_-)}{u_+ - u_-}(u - u_+) \quad \forall u \in [u_+, u_-], \tag{4.1}$$
which is precisely the Oleinik E-condition.

In the limit as $\varepsilon \to 0$ with $\Delta = \mathrm{const}$, all the trajectories lose their smoothness while remaining continuous. However, the condition (4.1) does not depend on $\varepsilon$, so it remains valid for the limiting solution.

To calculate the limit as $\Delta \to 0$ it is enough to note that $t_1^* = \mathcal{O}(\Delta)$ and $|\varphi_3^0 - \varphi_2^0|\big|_{t=t_1^*} = \mathcal{O}(\Delta)$. Therefore, the solution of the problem (1.1), (1.2) will be, in accordance with the condition (4.1), either the shock wave with amplitude $u_- - u_+$ or the union of the shock wave (with amplitude $u_- - \tilde u$) and the centered rarefaction (with support between the characteristics $f'(\tilde u)t$ and $f'(u_+)t$).

---
*Source: 101647-2010-01-26.xml*
2009
# Crack-Depth Prediction in Steel Based on Cooling Rate

**Authors:** M. Rodríguez-Martín; S. Lagüela; D. González-Aguilera; P. Rodríguez-Gonzálvez
**Journal:** Advances in Materials Science and Engineering (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1016482

---

## Abstract

One criterion for the evaluation of surface cracks in steel welds is to analyze the depth of the crack, because it is an effective indicator of its potential risk. This paper proposes a new methodology to obtain an accurate crack-depth prediction model based on the combination of infrared thermography and a 3D reconstruction procedure. In order to do this, a study of the cooling rate of the steel is implemented through active infrared thermography, allowing the study of the differential thermal behavior of the steel in the fissured zone with respect to the nonfissured zone. These cooling rate data are correlated with the real geometry of the crack, which is obtained with the 3D reconstruction of the welds through a macrophotogrammetric procedure. In this way, it is possible to analyze the correlation between cooling rate and depth through the different zones of the crack. The results of the study allow the establishment of an accurate predictive depth model which enables the study of the depth of the crack using only the cooling rate data. In this way, the remote measurement of the depth of a surface steel crack based on thermography is possible.

---

## Body

## 1. Introduction

Welding is the most important joining process for metallic materials but is also an aggressive technique, taking into account the fact that an intense heating process is applied in order to melt the material and that high thermomechanical stresses induced by the welding procedure could cause several safety problems. Defects present in the material are harmful and dangerous for the integrity of the joints.
The most commonly observed welding flaws include the following: lack of fusion, lack of penetration, gas holes, porosity, cracking, and inclusions, all of which are typified in the international quality standards [1]. Some types of defects may occur more frequently than others for a particular welding process. In order to maintain the desired level of structural integrity, welds must be inspected in accordance with the quality standards. The results of weld inspection also provide useful information to identify potential problems in the manufacturing process and to improve the welding operations [2]. In current industry and building practice, welding inspection is the responsibility of inspectors certified according to international standards [3].

Among these flaws, cracking is critical because it may cause the failure of an important structural element and, consequently, the full collapse of the structure, with drastic consequences. Welds are the origin of structural weakness in the majority of cases and should be systematically checked in order to ensure the structural integrity of the components. Therefore, a detailed geometric study of the crack is very important, since knowledge of the size of the crack (its extent along the surface and its depth) is of great value for the evaluation of materials. If the depth of the crack is not known, the material is automatically rejected because its risk cannot be predicted; if, instead, the depth of the crack is known, this information can be used to calculate the risk of keeping the material in service and to assess the possibility of its rejection [4]. Thus, the ability to measure accurately the depth of small cracks is highly useful, since small cracks represent the least dangerous cases and are therefore the most likely to be accepted without removal [5].
Moreover, the crack depth increases with the crack propagation length [6].

The methods most frequently used for the measurement of surface cracks are ultrasounds [5] and radiography [7]. More novel and sophisticated methods are scanning cameras [8], laser [9], 2D stereoimaging [10], and macrophotogrammetry [11]. Among them, the only one that allows the complete dimensioning of crack depth is the radiographic method [7], although an approximation for the prediction of crack depth has been proposed with thermography using absolute temperature values [12].

Infrared Thermographic (IRT) methods have also been used in the welding field for the control and metallurgical characterization of the welding process through the study of emissivity [13]. IRT has been used to inspect and measure the slag adhering to the weld during the welding process [14]. Another important use of thermographic methods is the inspection and evaluation of materials, mainly composite materials. However, their application to the inspection of steel welds is not as settled as the previous methods. Relevant research shows positive results, ensuring a promising future for the application of the technique in testing engineering. The most widespread modality of thermography used to evaluate materials is the one denominated as *active thermography*, which is based on the study of the thermal reaction of the material when external stimulation is applied. The signal can be applied in different ways, such as heat pulses [15], a one-frequency modulated signal, also known as the lock-in method [16], mechanical vibrations [17], ultrasounds [18], and microwaves [19].

In this paper, the authors apply a new approach to establish a protocol for the prediction of crack depth from cooling rate data. Cooling rate data are extracted for each pixel from a sequence of thermal images using an algorithm (a pixelwise algorithm for the time derivative of temperature, PATDT) based on the experimental proposal of [20].
Cooling rate data are integrated with geometrical depth data extracted following the macrophotogrammetric procedure established for welding in [11]. The objective is to establish a prediction model that allows the estimation of the depth of the crack using an IR camera and low-temperature long-pulsed heating/cooling of the metal.

This paper is organized as follows: after this introduction, Section 2 presents the equipment used and the testing methodology; Section 3 analyzes the results obtained and the information gathered from the combined thermographic and geometric knowledge of the defects detected with thermography. Last, Section 4 explains the conclusions drawn from the presented study.

## 2. Materials and Methods

The materials needed and the methods followed to implement the procedure are shown in this section.

### 2.1. Materials

A welded plaque of low-carbon steel with a thickness of 7.5 mm was used as the subject of the study, given the presence of a crack with a small notch in the face of the weld (Figure 1). The plaque was welded with Tungsten Inert Gas (TIG) welding, presenting butt-welding with edge preparation in V. The crack is oriented parallel to the longitudinal axis of the weld and is consequently denominated a *toe crack* according to the quality standard [21].

Figure 1: Low-carbon steel weld used as specimen. It presents a toe crack close to the limit of the weld (macroimage).

The thermal analysis is performed with an IR (infrared) camera. The IR camera used for this work is a NEC TH9260 with a 640 × 480 Uncooled Focal Plane Array (UFPA) detector, with a resolution of 0.06°C and a measurement range from −40°C to 500°C. The camera is geometrically calibrated prior to data acquisition using a calibration grid based on the emissivity difference between the background and the targets, presented in [22].
The calibration parameters of the IR camera, in the focus position used for data acquisition during the thermographic tests, are shown in Table 1.

Table 1: Parameters for the geometric calibration of the IR camera NEC TH9260.

| Parameter | Value |
| --- | --- |
| Focal length (mm) | 14.35 ± 0.44 |
| Format size (mm) | 5.98 × 4.50 (±0.07) |
| Pixel size (mm) | 9.34 · 10⁻³ × 9.37 · 10⁻³ |
| Principal point X_PP (mm) | 2.99 ± 0.07 |
| Principal point Y_PP (mm) | 2.28 ± 0.09 |
| Lens distortion K1 | −1.32 · 10⁻² |
| Lens distortion K2 | 1.08 · 10⁻³ |
| Lens distortion P1 | −3.43 · 10⁻⁴ |
| Lens distortion P2 | 1.44 · 10⁻³ |

Finally, the photogrammetric process has been applied following the methodology established in [11]. For the implementation, a commercial digital single-lens reflex (DSLR) camera, a Canon EOS 500D with a Sigma 50 mm macro lens, is used. A tripod is used to stabilize the camera during acquisition due to the high exposure times required to get the correct lighting exposure. In order to homogenize and optimize the illumination conditions, two halogen lamps (50 W each) are also used.

### 2.2. Methodology

The methodology applied for this research presents two different parts (Figure 2): on the one hand, thermal processing is applied to extract the cooling rate for each pixel of the thermal matrix through two different sections; on the other hand, a photogrammetric procedure is applied to obtain the real depth values of the cracks through the two mentioned sections. The objective is to compare the two types of results in order to study the possible relation between them through a linear mathematical approach.

Figure 2: Methodology followed to obtain the prediction model. Inputs are the thermal and photographic images.
Processing has two parts: the thermal procedure (first, the rectification of the image is implemented; then a PATDT algorithm is applied in order to obtain the cooling rate for each pixel of the image; subsequently, the image is sectioned through different lines in order to get discrete cooling rate functions) and the macrophotogrammetric procedure [11]. Finally, the processing of the inputs allows obtaining the cooling rates and depths through given sections of the crack; with these outputs, a mathematical approach is applied in order to obtain the prediction model.

#### 2.2.1. Thermal Data Procedure to Extract the Cooling Rates

(1) Acquisition of Thermal Images. The data acquisition protocol starts with the heating of the weld by a Joule-effect heater up to 40°C. The superficial temperature is controlled with a TESTO720 contact thermometer with a Pt-100 probe, resolution 0.1°C, and accuracy ±0.2°C. The thermometer is held on the surface of the plaque in order to ensure total contact with the plaque and avoid the interference of the ambient conditions in the measurement. When the desired temperature is reached, the heater is switched off and the IR camera is switched on. The monitoring of the cooling period is made with 50 thermograms, one every 5 seconds. Finally, the thermal images are extracted in RAW format, which implies one temperature value for each pixel, so that each image constitutes a matrix with the temperature values of the weld at the time of the acquisition.

(2) Rectification of Thermal Images. The objective of the rectification is to scale the thermal images to the real geometry, allowing the establishment of a correlation between temperature and depth values, the latter coming from the macrophotogrammetric procedure. Once the thermal images have a 2D metric, their different points can be related with the 3D metric dense point cloud obtained with macrophotogrammetry.
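The plane projective transformation that performs this rectification (Eq. (1) below) can be sketched as follows. The coefficient values here are hypothetical placeholders: in practice they are solved from the coordinates of 4 known control points, and (Δx, Δy) come from the geometric calibration of the camera.

```python
# Sketch of the plane projective transformation used for rectification.
# The coefficients a, b, c are hypothetical placeholders: in practice they
# are solved from 4 control points of known coordinates, and (dx, dy) come
# from the geometric calibration of the camera.
def rectify(xp, yp, a, b, c, dx=0.0, dy=0.0):
    """Map pixel coordinates (xp, yp) to rectified real coordinates (X, Y)."""
    w = c[0] * xp + c[1] * yp + 1.0
    X = dx + (a[0] + a[1] * xp + a[2] * yp) / w
    Y = dy + (b[0] + b[1] * xp + b[2] * yp) / w
    return X, Y

# With these illustrative coefficients the mapping is a pure scaling:
a = (0.0, 0.5, 0.0)   # X = 0.5 * xp
b = (0.0, 0.0, 0.5)   # Y = 0.5 * yp
c = (0.0, 0.0)
```

With nonzero c1, c2 the denominator varies across the image, which is what lets the same expression absorb perspective as well as rotation, scale, and translation.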
The first step consists of the extraction of the temperature matrix of each thermographic image: each position in the matrix contains the temperature value of the corresponding part of the object contained in the image pixel. Values are corrected on an emissivity basis, using as reference the temperature values measured at the beginning of the test (prior to heating) with the contact thermometer. The emissivity value is calculated using the Stefan-Boltzmann Law, and the correction is applied to the matrix to obtain the real temperature values.

Once the temperature values are corrected, the infrared image matrix is subjected to a rectification algorithm. The core algorithm of image rectification is the plane projective transformation, ruled by

(1) X = Δx + (a0 + a1·x′ + a2·y′) / (c1·x′ + c2·y′ + 1), Y = Δy + (b0 + b1·x′ + b2·y′) / (c1·x′ + c2·y′ + 1),

where X and Y are the rectified (real) coordinates of the element, x′ and y′ are the pixel coordinates in the image, and a0, a1, a2, b0, b1, b2, c1, and c2 are the coefficients of the projective matrix that enclose rotation, scale, translation, and perspective. Therefore, the knowledge of the coordinates of 4 points in the object, together with the geometric calibration parameters of the camera (Δx, Δy), is the only requirement for the determination of the projective matrix.

(3) Pixelwise Algorithm for Time Derivative of Temperature (PATDT). Once images have been rectified, the next step is to apply an algorithm based on the approach developed in [20]. In this way, the cooling rate of the heated steel is analyzed according to Newton's Cooling Law, which indicates that temperature decreases exponentially. This procedure, based on the monitoring of the cooling, allows the detection of defects without using complex processing algorithms (Figure 3), like those indicated in Section 1.

Figure 3: Steps followed by the global time-derivative algorithm.
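A minimal single-pixel sketch of this cooling-rate computation is shown below. It assumes the ambient temperature is known (e.g., measured with the contact thermometer before heating) and uses a log-linear least-squares fit of Newton's law in place of whatever fitting routine the authors actually used.

```python
import numpy as np

# Sketch of the PATDT idea for one pixel (not the authors' exact code):
# fit Newton's cooling law T(t) = T_amb + dT0 * exp(-k t) to the
# temperature-time samples, differentiate the fitted curve to get the
# cooling rate Q(t) = dT/dt, and take its integrated average over the
# acquisition span. T_amb is assumed known (measured before heating).
def mean_cooling_rate(t, T, T_amb):
    # Log-linear least-squares fit of the exponential decay.
    slope, log_dT0 = np.polyfit(t, np.log(T - T_amb), 1)
    k, dT0 = -slope, np.exp(log_dT0)
    # The integral average of Q(t) = -k*dT0*exp(-k t) over [t0, tf]
    # is simply (T_fit(tf) - T_fit(t0)) / (tf - t0).
    T_fit = lambda x: T_amb + dT0 * np.exp(-k * x)
    return (T_fit(t[-1]) - T_fit(t[0])) / (t[-1] - t[0])

# 50 thermograms, one every 5 s, as in the acquisition protocol:
t = np.arange(50) * 5.0
T = 20.0 + 20.0 * np.exp(-0.01 * t)   # synthetic pixel cooling from 40 degC
Q = mean_cooling_rate(t, T, 20.0)     # negative value: the pixel is cooling
```

Repeating this for every (i, j) position yields the cooling rate matrix; subtracting the neighborhood average Q_a then gives the relative cooling rate described in the text.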
After data acquisition and rectification, the exponential model is applied for each (i, j) position with the values of temperature for each time instant. As a result, a temperature function is obtained, and the time derivative of the temperature function is the cooling function, which is averaged for each position of the matrix, obtaining a matrix of cooling rates.

The PATDT applies Newton's Cooling Law for each pixel of the thermal image using an exponential fit model for the temperature-time data of each thermogram. Using this approach, a cooling rate function Q(t) is established for each pixel as the time derivative of the temperature function T(t). The sequence followed by the algorithm is the following (Figure 3):

(a) An exponential fit according to Newton's Law is established for the temperature values T0, T1, T2, …, Tj, associated with the time instants t0, t1, t2, …, tf, for each pixel position of the matrix, obtaining an expression T(t) for each pixel position.

(b) The derivative dT(t)/dt is calculated in order to obtain the cooling rate function Q(t) for each pixel position.

(c) An integrated average Q̄ of Q(t) between the initial and final times is calculated for each pixel position.

A new matrix with the same dimensions as the thermal matrix is created. The integrated average cooling rate Q̄(i, j) is introduced in each (i, j) position, obtaining a cooling rate matrix.

In order to work with relative cooling rate data, the average cooling rate Q_a of the immediate exterior pixels (a 10-pixel zone placed immediately next to the crack) is subtracted from the cooling rate value of each crack pixel Q_ij. In this way, the relative cooling rate Q_ij − Q_a is obtained for each pixel of the crack, yielding a relative cooling rate matrix whose dimensions (i, j) are those of the crack submatrix.

#### 2.2.2.
Macrophotogrammetric Procedure to Extract Depth Data

The generation of the photogrammetric 3D model of the crack follows the procedure established in detail by the authors in [11]. The first step is the acquisition of photographic images with a DSLR camera following a semispherical trajectory centered on the object, always keeping a constant distance between the lens and the object. Once the images are acquired, two processing steps are applied: first, the automatic determination of the spatial and angular positions of each image, regardless of the order of acquisition and without requiring initial approximations or camera calibration; and second, the automatic computation of a dense 3D point cloud (submillimetric resolution), so that each pixel of the image renders a specific point of the model of the weld (Figure 4).

Figure 4: Cracks with the sections used to generate the model (L1 and L2), for internal validation (T1 and T2), and for external validation (T′).

#### 2.2.3. Correlation between Cooling Rate and Depth Data for Each Section for the Generation of the Model and Validation

Once the cooling rate function for each section is obtained following the thermal procedure established in Section 2.2.1 and the depth function for each section is extracted following the macrophotogrammetric procedure explained in Section 2.2.2, the correlation between the points of both functions is established for the two different longitudinal sections in order to design a linear mathematical model that allows the crack-depth prediction based on the cooling rate (Figure 2). Two transversal sections are used for the internal validation of the model. Additionally, the method is applied to other cracks in other welds in order to establish an external validation of the method (Figure 4).

## 3. Results

### 3.1. Cooling Rate Results

The graphical representation in 3D of the relative cooling rate matrix segmented in the crack region is shown in Figure 5. The average cooling rate for the zone adjacent to the defect, Q_a, is 0.0149°C/s. The cooling rate through two sections (L1 and L2) is studied in Figure 6.

### 3.2.
Depth Results

The macrophotogrammetric model of the crack is obtained from the matching of images acquired from convergent positions with respect to the weld, repeating the procedure exposed in [11]. For the computation of depths, the point cloud from the macrophotogrammetric procedure is sectioned along the two longitudinal sections (L1 and L2) (Figure 4). The point cloud is cut with the normal planes of L1 and L2 in order to extract the depth for each pixel (Figure 5). The metrical reference between the depths on the dense point cloud and the cooling rate values is made using the distances from the extremes of the crack.

Figure 5: 3D representation of the crack zone based on the relative cooling rate matrix obtained following the procedure illustrated by Figure 3. In Figure 6, the two pixel rows of the crack (L1 and L2) are labeled.

Figure 6: Depth function and relative cooling rate function along the longitudinal sections (L1 (a) and L2 (b)).

A fit is used because the number of points in the cloud is higher than the number of pixels in the cooling rate image; for this reason, the authors have chosen to work with continuous functions in order to ease the correlation of the data. The fit model used to obtain a depth function that allows the estimation of the real depth is a ninth-degree polynomial, which provides an acceptable goodness of fit (Root Mean Square Error (RMSE) of 0.0098 for longitudinal section L1 and RMSE of 0.0082 for longitudinal section L2).

### 3.3. Correlation between Cooling and Depth Data

Once the relative cooling rate and depth data have been obtained through each section (Figure 6), each pair of values (depth, relative cooling rate) is extracted at 20 positions along the length and correlated in order to establish a prediction model that allows the estimation of the depth using just the cooling rate value. The correlation between depth and relative cooling rate (Q − Q_a) (Figure 7) is obtained with a goodness of fit of 0.86 (Table 2).
For this reason, the prediction model is considered acceptable, which demonstrates the existence of a coherent mathematical relationship between depth (mm) and cooling rate (°C/s).

Table 2: Parameters of the linear fit which correlates relative cooling rate and depth: D = a(Q − Q_a) + b.

| Gain (a) | Offset (b) | R² |
| --- | --- | --- |
| 15.71 | −0.002534 | 0.857 |

Figure 7: Correlation between depth and relative cooling rate for each pixel in the crack.
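The linear prediction model of Table 2 can be reproduced with ordinary least squares. The sketch below uses synthetic depth/cooling-rate pairs constructed to be consistent with the reported gain and offset; they are not the paper's data.

```python
import numpy as np

# Sketch of the depth-prediction fit: a least-squares line
# D = a*(Q - Qa) + b relating relative cooling rate to depth, with R^2 as
# goodness of fit. The 20 sample pairs are synthetic placeholders chosen
# to match the gain/offset of Table 2, not the paper's measurements.
rel_Q = np.linspace(0.0, 0.02, 20)        # relative cooling rate (degC/s)
depth = 15.71 * rel_Q - 0.002534          # depth (mm)

a, b = np.polyfit(rel_Q, depth, 1)        # gain (a) and offset (b)
pred = a * rel_Q + b
r2 = 1.0 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
```

With real, noisy pairs the same two lines of fitting code would yield the R² ≈ 0.857 reported in Table 2 rather than a perfect fit.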
## 4. Validation

### 4.1. Internal Validation of the Prediction Model

In order to test the validity of the model, an internal validation procedure is applied through the analysis of two transversal sections (T1 and T2) of the same crack (Figure 4). These sections are taken in the direction where the difference of depth between positions is the highest. The aim of this step is to compare the predicted depth values P[x] (where x is the width), calculated with the prediction model (Table 2) from the cooling rate data, with the real depth values V[x] extracted from the sectioning of the point cloud. These sections are significantly smaller than the longitudinal sections used to build the prediction model (Figure 4) but present a higher contrast of the cooling rate.
For this reason, the application of the prediction model to the points belonging to these transversal sections serves to analyze the internal coherence of the model and thus the goodness of the fit.

The prediction model is applied to each transversal section (Figure 8). Then, the predicted profile is compared with the real one in order to analyze the accuracy. The error is calculated over the section with the following discrete expression:

(2) ε = ∑_{x=x₀}^{x_f} |V[x] − P[x]|,

where V is the real value of depth and P is the predicted value (both for the same value of x). The error of the prediction model for the first transversal section, T1 (Figure 8), is 0.00169 mm (14%), and the error for the second transversal section, T2, is 0.0010 mm (13%).

Figure 8: Transversal sections used for the internal validation of the model (T1 (a) and T2 (b)).

### 4.2. External Validation of the Prediction Model

In this section, the robustness of the prediction model is tested through its application to the estimation of the depth of a crack open to the surface in a different plate made of the same material (Figure 9). In this case, the crack is a longitudinal crack open to the surface, placed on the weld (weld cap removed) and generated by the stress provoked during the welding process. This crack is significantly smaller (tenths of mm) than the crack used to build the model, which is the reason for its choice.
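The discrete error expression (2) used for the internal validation reduces to a pointwise comparison of the two depth profiles; a minimal sketch (the profile values are hypothetical, and the deviations are summed in absolute value):

```python
def section_error(real, predicted):
    """Discrete error of eq. (2): sum of absolute depth deviations between
    the real profile V[x] and the predicted profile P[x]."""
    if len(real) != len(predicted):
        raise ValueError("profiles must be sampled at the same x positions")
    return sum(abs(v - p) for v, p in zip(real, predicted))

# Hypothetical depth profiles (mm) along one transversal section
V = [0.010, 0.012, 0.015, 0.013]   # real depths from the point cloud
P = [0.011, 0.011, 0.014, 0.015]   # depths predicted from cooling rates
err = section_error(V, P)          # total deviation in mm
```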
The transversal section T′ used to assess the prediction model has been plotted.

The macrophotogrammetric procedure is repeated for this new crack in order to obtain accurately the depth distribution of the crack (denominated as real depth). Then, the plate is heated to 70°C. This temperature, higher than in the previous case, is necessary in order to obtain a correct visualization of the crack in the thermal matrix, given the small size of this crack. The cooling period is monitored for 250 s (the same period). The cooling rate is calculated from the thermal images for each time instant following step (3) of the procedure established in Section 2.2.1. Subsequently, the average cooling rate for the zone adjacent to the crack, Qa, is calculated; the result is 0.0516°C/s. Then, the prediction model calculated in Section 3.3 is applied to the values of the transversal section of the crack (called section T′), and the depth values are estimated for each pixel from the cooling rate data (Figure 9).

At this point, the real and predicted profiles are compared in order to analyze the error of the prediction model with respect to reality. Both are shown in Figure 10. The resulting error of the prediction model is 0.009 mm (18%). This result is acceptable knowing that the maximum depth in the crack is 0.05 mm.

Figure 10: Real and predicted depth through the transversal section T′ used for the external validation of the model.

## 5. Conclusions

A procedure for the prediction of the depth of cracks from infrared data has been implemented.
This approach represents a methodological innovation in the field of Nondestructive Testing for the evaluation of weld flaws.

The procedure applied allows the accurate prediction of the different depths of the crack from thermal images. Low-temperature heating (40°C) is applied to the steel, and the cooling is monitored for 250 s. Although the heat transfer tends to homogenize through the material over long cooling times, the values of the cooling rate also depend on the surface properties of each point of the material. Therefore, different surface properties in the defect and nondefect zones produce a difference in heat dissipation, which provokes a difference in the cooling rates. This difference is clearly visible for each pixel through the sections of the thermal images. These thermal data are associated with the real depths obtained with a reliable and proven macrophotogrammetric procedure [10] using a commercial DSLR camera.

The cooling rate value for each pixel is different at each time instant, since the cooling rate is higher at the beginning of the cooling process than at its end, due to the exponential decrease of the temperature. The registration and integration of the cooling rate values over the 250-second cooling interval allows the registration of all the information. For this reason, the homogenization of the heat transfer through the material is not a limitation for the methodology, since the results present enough contrast to establish a depth prediction model.

The prediction model established through the correlation between cooling rate and depth maintains statistical consistency, allowing an acceptable fit (RMSE between 0.008 and 0.009). Furthermore, the prediction model has been validated following a twofold approach: on the one hand, an internal validation has been implemented, consisting of the application of the prediction model to the crack in two transversal sections (different from those used to obtain the prediction model).
These transversal sections have been chosen because they present a high thermal contrast in a minimal space, subjecting the prediction model to a limit situation. The results demonstrate the good precision of the method in predicting depth values of cracks, with a maximum deviation of 15%. On the other hand, an external validation has been applied through the analysis of a different crack. In this case, a crack open to the surface, located in a weld with its cap removed, has been used to provide the external validation. With this test, the authors analyze not only the validity of the method for other cracks, but also the quality of the prediction model when the heating is done at a different temperature (in this case higher, 70°C). The results of the prediction model are consistent, with a maximum deviation of 18% with respect to the real depth. The error is mainly due to the interpolation method used to relate the thermal and depth values; it could be improved using more discrete values, and this is a future goal that this work opens.

The procedure has been applied to cracks generated by the stress provoked by the welding process in low-carbon steel. However, the procedure could be extended to the analysis of cracks in other materials. The cooling phenomenon is governed by Newton's Law of Cooling and depends on the physical properties of the material, such as conductivity, heat capacity, and surface features. For this reason, the extension of the technique to other materials will be researched in future works.

---

*Source: 1016482-2016-02-18.xml*
---

# Crack-Depth Prediction in Steel Based on Cooling Rate

**Authors:** M. Rodríguez-Martín; S. Lagüela; D. González-Aguilera; P. Rodríguez-Gonzálvez

**Journal:** Advances in Materials Science and Engineering (2016)

**Category:** Engineering & Technology

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2016/1016482
---

## Abstract

One criterion for the evaluation of surface cracks in steel welds is to analyze the depth of the crack, because it is an effective indicator of its potential risk. This paper proposes a new methodology to obtain an accurate crack-depth prediction model based on the combination of infrared thermography and a 3D reconstruction procedure. In order to do this, a study of the cooling rate of the steel is implemented through active infrared thermography, allowing the study of the differential thermal behavior of the steel in the fissured zone with respect to the nonfissured zone. These cooling rate data are correlated with the real geometry of the crack, which is obtained from the 3D reconstruction of the welds through a macrophotogrammetric procedure. In this way, it is possible to analyze the correlation between cooling rate and depth through the different zones of the crack. The results of the study allow the establishment of an accurate predictive depth model which enables the study of the depth of the crack using only the cooling rate data. In this way, the remote measurement of the depth of a surface steel crack based on thermography is possible.

---

## Body

## 1. Introduction

Welding is the most important joining process for metallic materials, but it is also an aggressive technique, taking into account that an intense heating process is applied in order to melt the material and that the high thermomechanical stresses induced by the welding procedure could cause safety problems. Defects present in the material are harmful and dangerous for the integrity of the joints. The most commonly observed welding flaws include the following: lack of fusion, lack of penetration, gas holes, porosity, cracking, and inclusions, which are all typified in the international quality standards [1]. Some types of defects may occur more frequently than others for a particular welding process.
In order to maintain the desired level of structural integrity, welds must be inspected in accordance with the quality standards. The results of the inspection of welds also provide useful information to identify potential problems in the manufacturing process and to improve the welding operations [2]. In current industry and building practice, welding inspection is the responsibility of inspectors certified according to international standards [3].

In these cases, the cracking process is critical because a poor-quality result may cause the failure of an important structural element and, consequently, the full collapse of the structure, with drastic consequences. Welds are the origin of structural weakness in the majority of cases and should be systematically checked in order to ensure the structural integrity of the components. Therefore, a detailed geometric study of the crack is really important, since knowledge of the size of the crack (measured along the surface and in depth) is of enormous importance for the evaluation of materials. If the depth of the crack is not known, the rejection of the material will be automatic, due to the impossibility of predicting its risk; conversely, if the depth of the crack is known, this information can be used to calculate the risk of keeping the material and to assess the possibility of its rejection [4]. Thus, the possibility of accurately measuring the depth of small cracks is highly useful, since they represent the least dangerous cases and will therefore be more likely to be accepted without removal [5]. Moreover, the crack depth increases with the crack propagation length [6].

The methods most frequently used for the measurement of surface cracks are ultrasounds [5] and radiography [7]. More novel and sophisticated methods are scanning cameras [8], lasers [9], 2D stereoimaging [10], and macrophotogrammetry [11].
Among them, the only one that allows the complete dimensioning of crack depth is the radiographic method [7], although an approximation for the prediction of the depth of the crack has been attempted with thermography, using absolute values of temperature [12].

Infrared Thermographic (IRT) methods have also been used in the welding field for the control and metallurgical characterization of the welding process through the study of emissivity [13]. IRT has been used to inspect and measure the slag adhering to the weld during the welding process [14]. Another important use of thermographic methods is the inspection and evaluation of materials, mainly composite materials. However, their application to the inspection of steel welds is not as settled as the previous methods. Relevant research shows positive results, ensuring a promising future for the application of the technique in testing engineering. The most widespread modality of thermography used to evaluate materials is the one denominated as active thermography, which is based on the study of the thermal reaction of the material when an external stimulation is applied. The signal can be applied in different ways, such as heat pulses [15], a one-frequency modulated signal, also known as the lock-in method [16], mechanical vibrations [17], ultrasounds [18], and microwaves [19].

In this paper, the authors apply a new approach to establish a protocol for the prediction of crack depth from cooling rate data. Cooling rate data are extracted for each pixel from a sequence of thermal images using an algorithm (pixelwise algorithm for time derivative of temperature) based on the experimental proposal of [20]. Cooling rate data are integrated with geometrical depth data extracted following the macrophotogrammetric procedure established for welding in [11].
The objective is to establish a prediction model that allows the estimation of the depth of the crack using an IR camera and low-temperature, long-pulsed heating/cooling of the metal.

This paper is organized as follows: after this introduction, Section 2 presents the equipment used and the testing methodology; Section 3 analyzes the results obtained and the information gathered from the combined thermographic and geometric knowledge of the defects detected with thermography; Section 4 presents the validation of the model; finally, Section 5 draws the conclusions from the presented study.

## 2. Materials and Methods

The materials needed and the methods followed to implement the procedure are described in this section.

### 2.1. Materials

A welded plate of low-carbon steel with a thickness of 7.5 mm was used as the subject of the study, given the presence of a crack with a little notch in the face of the weld (Figure 1). The plate has been welded with Tungsten Inert Gas (TIG) welding, presenting butt-welding with edge preparation in V. The crack is oriented parallel to the longitudinal axis of the weld and is consequently denominated a toe crack according to the quality standard [21].

Figure 1: Low-carbon steel weld used as specimen. It presents a toe crack close to the limit of the weld (macroimage).

The thermal analysis is performed with an IR (infrared) camera. The IR camera used for this work is a NEC TH9260 with a 640 × 480 Uncooled Focal Plane Array (UFPA) detector, with a resolution of 0.06°C and a measurement range from −40°C to 500°C. The camera is geometrically calibrated prior to data acquisition using a calibration grid based on the emissivity difference between the background and the targets, presented in [22]. The calibration parameters of the IR camera in the focus position used for data acquisition during the thermographic tests are shown in Table 1.

Table 1: Parameters for the geometric calibration of the IR camera NEC TH9260.
| Parameter | Value |
| --- | --- |
| Focal length (mm) | 14.35 ± 0.44 |
| Format size (mm) | 5.98 × 4.50 (±0.07) |
| Pixel size (mm) | 9.34·10⁻³ × 9.37·10⁻³ |
| Principal point X_PP (mm) | 2.99 ± 0.07 |
| Principal point Y_PP (mm) | 2.28 ± 0.09 |
| Lens distortion K1 | −1.32·10⁻² |
| Lens distortion K2 | 1.08·10⁻³ |
| Lens distortion P1 | −3.43·10⁻⁴ |
| Lens distortion P2 | 1.44·10⁻³ |

Finally, the photogrammetric process has been applied following the methodology established in [11]. For the implementation, a commercial digital single-lens reflex (DSLR) camera, Canon EOS 500D, with a Sigma 50 mm macro lens is used. A tripod is used to stabilize the camera during acquisition due to the high exposure times required to obtain the correct exposure. In order to homogenize and optimize the illumination conditions, two halogen lamps (50 W each) are also used.

### 2.2. Methodology

The methodology applied for this research presents two different parts (Figure 2): on the one hand, a thermal processing is applied to extract the cooling rate for each pixel of the thermal matrix through two different sections; on the other hand, a photogrammetric procedure is applied to obtain the real depth values of the cracks through the two mentioned sections. The objective is to compare the two types of results in order to study the possible relation between them through a linear mathematical approach.

Figure 2: Methodology followed to obtain the prediction model. Inputs are the thermal and photographic images. Processing has two parts: the thermal procedure (first, the rectification of the image is implemented; then, a PATDT algorithm is applied in order to obtain the cooling rate for each pixel of the image; subsequently, sectioning of the image through different lines is implemented in order to obtain discrete cooling rate functions) and the macrophotogrammetric procedure [10].
Finally, the processing of the inputs allows obtaining the cooling rates and depths through each given section of the crack; with these outputs, a mathematical approach is applied in order to obtain the prediction model.

#### 2.2.1. Thermal Data Procedure to Extract the Cooling Rates

(1) Acquisition of Thermal Images. The data acquisition protocol starts with the heating of the weld by a Joule-effect heater up to 40°C. The surface temperature is controlled with a TESTO720 contact thermometer with a Pt-100 probe, resolution 0.1°C and accuracy ±0.2°C. The thermometer is held on the surface of the plate in order to ensure total contact with the plate and avoid the interference of the ambient conditions in the measurement. When the desired temperature is reached, the heater is switched off and the IR camera is switched on. The monitoring of the cooling period is made with 50 thermograms, one every 5 seconds. Finally, the thermal images are extracted in RAW format, which implies one temperature value per pixel, so that each image constitutes a matrix with the temperature values of the weld at the time of the acquisition.

(2) Rectification of Thermal Images. The objective of the rectification is to scale the thermal images to the real geometry, allowing the establishment of a correlation between temperature and depth values, the latter coming from the macrophotogrammetric procedure. Once the thermal images have a 2D metric, their points can be related with the 3D metric dense point cloud obtained with macrophotogrammetry. The first step consists of the extraction of the temperature matrix of each thermographic image: each position in the matrix contains the temperature value of the corresponding part of the object contained in the image pixel. Values are corrected on an emissivity basis, using as reference the temperature values measured at the beginning of the test (prior to heating) with the contact thermometer.
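This reference-based emissivity correction can be sketched under a graybody simplification of the Stefan-Boltzmann Law, neglecting reflected ambient radiation; the function names and the exact radiometric model below are illustrative assumptions, not the camera's internal processing:

```python
def estimate_emissivity(t_apparent_c, t_reference_c):
    """Graybody sketch: the camera reports an apparent temperature assuming
    emissivity 1, while the contact thermometer gives the true reference
    temperature, so eps * T_ref^4 = T_app^4 (the Stefan-Boltzmann constant
    cancels in the ratio; reflected ambient radiation is neglected)."""
    t_app = t_apparent_c + 273.15   # work in kelvin
    t_ref = t_reference_c + 273.15
    return (t_app / t_ref) ** 4

def correct_temperature(t_apparent_c, eps):
    """Invert the graybody relation to recover the real temperature (°C)."""
    t_app = t_apparent_c + 273.15
    return t_app / eps ** 0.25 - 273.15

# Hypothetical values: camera reads 35 °C where the contact probe says 40 °C
eps = estimate_emissivity(35.0, 40.0)
t_real = correct_temperature(35.0, eps)   # recovers the reference by construction
```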
The emissivity value is calculated using the Stefan-Boltzmann Law, and the correction is applied to the matrix to obtain the real temperature values.

Once the temperature values are corrected, the infrared image matrix is subjected to a rectification algorithm. The core of the image rectification is the plane projective transformation, ruled by

(1) X = Δx + (a₀ + a₁x′ + a₂y′) / (c₁x′ + c₂y′ + 1), Y = Δy + (b₀ + b₁x′ + b₂y′) / (c₁x′ + c₂y′ + 1),

where X and Y are the rectified (real) coordinates of the element, x′ and y′ are the pixel coordinates in the image, and a₀, a₁, a₂, b₀, b₁, b₂, c₁, and c₂ are the coefficients of the projective matrix, which enclose rotation, scale, translation, and perspective. Therefore, the knowledge of the coordinates of 4 points in the object is the only requirement for the determination of the projective matrix, together with the geometric calibration parameters of the camera (Δx, Δy).

(3) Pixelwise Algorithm for Time Derivative of Temperature (PATDT). Once the images have been rectified, the next step is to apply an algorithm based on the approach developed in [20]. In this way, the cooling rate of the heated steel is analyzed according to Newton's Law of Cooling, which indicates that the temperature decreases exponentially. This procedure, based on the monitoring of the cooling, allows the detection of defects without using complex processing algorithms (Figure 3), like those indicated in Section 1.

Figure 3: Steps followed by the global time-derivative algorithm. After data acquisition and rectification, the exponential model is applied for each (i, j) position with the values of temperature at each time instant.
As a result, a temperature function is obtained, and the time derivative of the temperature function is the cooling function, which is averaged for each position of the matrix, obtaining a matrix of cooling rates.

The PATDT applies Newton's Law of Cooling to each pixel of the thermal image using an exponential fit model for the temperature-time data of each thermogram. Using this approach, a cooling rate function Q(t) is established for each pixel as the time derivative of the temperature function T(t). The sequence followed by the algorithm is the following (Figure 3):

(a) An exponential fit according to Newton's Law is established for the temperature values T₀, T₁, T₂, …, T_f, associated with the time instants t₀, t₁, t₂, …, t_f, for each pixel position of the matrix, obtaining an expression T(t) for each pixel position.

(b) The derivative dT(t)/dt is calculated in order to obtain the cooling rate function Q(t) for each pixel position.

(c) An integrated average Q of Q(t) between the initial and final times is calculated for each pixel position.

A new matrix with the same dimensions as the thermal matrix is created. The integrated average cooling rate Q_{i,j} is introduced in each (i, j) position of the matrix, obtaining a cooling rate matrix.

In order to work with relative cooling rate data, the average cooling rate of the immediate exterior pixels, Q_a (computed from a 10-pixel zone placed immediately next to the crack), is subtracted from each cooling rate value corresponding to the crack (Q_{i,j}). In this way, the relative cooling rate is obtained for each pixel of the crack, and a relative cooling rate matrix is obtained as Q_{i,j} − Q_a, with (i, j) spanning the dimensions of the crack submatrix.

#### 2.2.2. Macrophotogrammetric Procedure to Extract Depth Data

The generation of the photogrammetric 3D model of the crack is done following the procedure established in detail by the authors in [11].
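Steps (a)-(c) of the PATDT can be sketched for a single pixel, assuming (for illustration only) a known ambient temperature and an ideal, noise-free Newton cooling curve; the log-linear fit below stands in for whatever exponential fitting routine the authors used:

```python
import math

def cooling_rate_average(temps, times, t_env):
    """PATDT sketch for one pixel, assuming T(t) = t_env + A*exp(-k*t):
    (a) fit the exponential via a log-linear least-squares fit,
    (b) differentiate: Q(t) = dT/dt = -k*A*exp(-k*t),
    (c) return the integrated average of Q(t) over [t0, tf]."""
    ys = [math.log(T - t_env) for T in temps]   # ln(A) - k*t, linear in t
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    k = -slope
    A = math.exp(my + k * mx)
    t0, tf = times[0], times[-1]
    # the integrated average of dT/dt equals the total temperature change
    # divided by the elapsed time
    return (A * math.exp(-k * tf) - A * math.exp(-k * t0)) / (tf - t0)

# Hypothetical pixel: 50 thermograms, one every 5 s, cooling toward 20 °C ambient
times = [5 * i for i in range(50)]
temps = [20.0 + 20.0 * math.exp(-0.01 * t) for t in times]
q_avg = cooling_rate_average(temps, times, 20.0)
# q_avg is negative because the temperature decreases; the paper reports magnitudes
```

Note that step (c) reduces analytically to the total temperature change over the monitored interval divided by the elapsed time.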
The first step is the acquisition of photographic images with a DSLR camera following a semispherical trajectory centered on the object. Once images are acquired, keeping always a constant distance between the lens and the object, two processing steps are applied: first, the automatic determination of the spatial and angular positions of each image, regardless of the order of acquisition and without requiring initial approximations or camera calibration, and second, the automatic computation of a dense 3D point cloud (submillimetric resolution), so that each pixel of the image renders a specific point of the model of the weld (Figure 4).Figure 4 Cracks with sections used to generate the model (L 1 and L 2), internal validation (T 1 and T 2), and external validation (T ′). #### 2.2.3. Correlation between Cooling Rate and Depth Data for Each Section for the Generation of the Model and Validation When the cooling rate function for each section is obtained following the thermal procedure established in Section2.2.1 and the depth function for each section is extracted following the macrophotogrammetric procedure explained in Section 2.2.2, the correlation between the points of both functions is established for every two different longitudinal sections in order to design a mathematical lineal model that allows the crack-depth prediction based on the cooling rate (Figure 2). Two transversal sections are used for the external validation of the model. Additionally, the method is applied to other cracks in other welds in order to establish an external validation of the method (Figure 4). ## 2.1. Materials A welded plaque of low carbon with a thickness of 7.5 mm was used as subject of the study, given the presence of a crack with a little notch in the face of weld (Figure1). The plaque has been welded with Tungsten Inert Gas (TIG) welding, presenting butt-welding with edge preparation in V. 
The crack is oriented parallel to the longitudinal axis of the weld and is consequently denominated astoe crack according to the quality standard [21].Figure 1 Low-carbon steel weld used as specimen. This presents a toe crack close to the limit of the weld (macroimage).The thermal analysis is performed with an IR (infrared) camera. The IR camera used for this work is NEC TH9260 with 640 × 480 Uncooled Focal Plane Array Detector (UFPA), with a resolution of 0.06°C and a measurement range from −40°C to 500°C. The camera is geometrically calibrated prior to data acquisition using a calibration grid based on the emissivity difference between the background and the targets, presented in [22]. The calibration parameters of the IR camera in the focus position used for data acquisition during the thermographic tests are shown in Table 1.Table 1 Parameters for the geometric calibration of the IR camera NEC TH9260. Focal length (mm) 14.35 ± 0.44 Format size (mm) 5.98 × 4.50 (±0.07) Pixel size (mm) 9.34 ⋅ 10 - 3 × 9.37 ⋅ 10 - 3 Principal point (X PP) (mm) 2.99 ± 0.07 Principal point (Y PP) (mm) 2.28 ± 0.09 Lens distortion K 1 - 1.32 ⋅ 10 - 2 K 2 1.08 ⋅ 10 - 3 P 1 - 3.43 ⋅ 10 - 4 P 2 1.44 ⋅ 10 - 3Finally, the photogrammetric process has been applied following the methodology established in [11]. For the implementation, a commercial digital single lens reflex (DSLR) camera, Canon EOS 500D, with a Sigma 50 mm macro lens is used. A tripod is used to stabilize the camera during acquisition due to the high exposure times required to get the correct lighting exposition. In order to homogenize and optimize the illumination conditions, two halogen lamps (50 W each) are also used. ## 2.2. 
Methodology The methodology applied for this research presents two different parts (Figure2): on the one hand, a thermal processing is applied to extract the cooling rate for each pixel of the thermal matrix through two different sections; on the other hand, a photogrammetric procedure is applied to obtain the real depth values of the cracks through the two mentioned sections. The objective is to compare the two types of results in order to study the possible relation between them through a linear mathematical approach.Figure 2 Methodology followed to obtain the prediction model. Inputs are the thermal and photographic images. Processing has two parts: the thermal procedure (firstly, the rectification of the image is implemented, after a PATDT algorithm is applied in order to obtain the cooling rate for each pixel of the image; subsequently, sectioning of the image through different lines is implemented in order to get discrete cooling rate functions) and the macrophotogrammetric procedure [10]. Finally, the processing of the inputs allows obtaining the cooling rates and depths through a concrete different section of the crack; with these outputs, a mathematical approach is applied in order to obtain the prediction model. ### 2.2.1. Thermal Data Procedure to Extract the Cooling Rates ( 1) Acquisition of Thermal Images. The data acquisition protocol starts with the heating of the weld by a Joule effect heater until 40°C. The superficial temperature is controlled with a contact thermometer TESTO720 with Pt-100, resolution 0.1°C, and accuracy ±0.2°C. The thermometer is held on the surface of the plaque, in order to ensure total contact with the plaque and avoid the interference of the ambient conditions in the measurement. When the desired temperature is obtained, the heater is switched off and the IR camera is switched on. The monitoring of the cooling period is made with 50 thermograms, one every 5 seconds. 
Finally, the thermal images are extracted in RAW format, which implies one temperature value for each pixel, so that each image constitutes a matrix with the temperature values of the weld in the time of the acquisition.( 2) Rectification of Thermal Images. The objective of the rectification is to scale the thermal images to the real geometry, allowing the establishment of a correlation between temperature and depth values, the latest coming from the macrophotogrammetric procedure. If the thermal images have 2D metric, their different points can be related with the 3D metric dense point cloud obtained with macrophotogrammetry. The first step consists of the extraction of the temperature matrix of each thermographic image: each position in the matrix contains the temperature value of the corresponding part of the object contained in the image pixel. Values are corrected on an emissivity basis, using as reference the temperature values measured at the beginning of the test (prior heating) with the contact thermometer. The emissivity value is calculated using Stefan-Boltzmann’s Law, and the correction is applied to the matrix for obtaining the real temperature values.Once the temperature values are corrected, the infrared image matrix is subjected to a rectification algorithm. 
The core of the image rectification is the plane projective transformation, ruled by

(1)
X = Δx + (a_0 + a_1 x′ + a_2 y′) / (c_1 x′ + c_2 y′ + 1),
Y = Δy + (b_0 + b_1 x′ + b_2 y′) / (c_1 x′ + c_2 y′ + 1),

where X and Y are the rectified (real) coordinates of the element, x′ and y′ are the pixel coordinates in the image, and a_0, a_1, a_2, b_0, b_1, b_2, c_1, and c_2 are the coefficients of the projective matrix, which encode rotation, scale, translation, and perspective. Therefore, the coordinates of 4 points on the object are the only requirement for the determination of the projective matrix, together with the geometric calibration parameters of the camera (Δx, Δy).

(3) Pixelwise Algorithm for the Time Derivative of Temperature (PATDT). Once the images have been rectified, the next step is to apply an algorithm based on the approach developed in [21]. The cooling rate of the heated steel is analyzed according to Newton's law of cooling, which states that temperature decreases exponentially. This procedure, based on the monitoring of the cooling, allows the detection of defects without complex processing algorithms (Figure 3), like those indicated in Section 1.

Figure 3 Steps followed by the global time-derivative algorithm. After data acquisition and rectification, the exponential model is applied for each (i, j) position with the temperature values at each time instant. As a result, a temperature function is obtained; its time derivative is the cooling function, which is averaged for each position of the matrix, obtaining a matrix of cooling rates.

The PATDT applies Newton's law of cooling to each pixel of the thermal image using an exponential fit of the temperature-time data of each thermogram. With this approach, a cooling rate function Q(t) is established for each pixel as the time derivative of the temperature function T(t).
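This per-pixel fit can be sketched as follows, assuming Newton's law in the form T(t) = T_amb + A·exp(−kt) with a known ambient temperature and a log-linear least-squares fit (a simplification; the function name and synthetic data are illustrative, not the authors' code):

```python
import numpy as np

def patdt(frames, times, t_amb):
    """Pixelwise time derivative of temperature (a sketch of the PATDT idea).
    frames: (n_t, H, W) temperatures; times: (n_t,) seconds.
    Fits T(t) = t_amb + A*exp(-k*t) per pixel via a log-linear fit, then
    returns the integrated average of -dT/dt over [t0, tf] for each pixel."""
    n_t, H, W = frames.shape
    y = np.log(frames.reshape(n_t, -1) - t_amb)   # one column per pixel
    slope, logA = np.polyfit(times, y, 1)         # slope = -k, intercept = ln A
    k, A = -slope, np.exp(logA)
    t0, tf = times[0], times[-1]
    # average of -dT/dt = A*k*exp(-k*t) over [t0, tf]:
    # A*(exp(-k*t0) - exp(-k*tf)) / (tf - t0)
    q = A * (np.exp(-k * t0) - np.exp(-k * tf)) / (tf - t0)
    return q.reshape(H, W)

# synthetic 50-frame sequence, one frame every 5 s, cooling from 40 to 20 deg C
times = np.arange(0.0, 250.0, 5.0)
frames = 20.0 + 20.0 * np.exp(-0.01 * times)[:, None, None] * np.ones((1, 2, 2))
q = patdt(frames, times, t_amb=20.0)
```

`np.polyfit` fits all pixel columns in a single call, which keeps the pixelwise loop implicit.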
The sequence followed by the algorithm is the following (Figure 3):

(a) An exponential fit according to Newton's law is established for the temperature values T_0, T_1, T_2, …, T_j, associated with the time instants t_0, t_1, t_2, …, t_f, for each pixel position of the matrix, obtaining an expression T(t) for each pixel position.

(b) The derivative dT(t)/dt is calculated in order to obtain the cooling rate function Q(t) for each pixel position.

(c) An integrated average Q of Q(t) between the initial and final times is calculated for each pixel position.

A new matrix with the same dimensions as the thermal matrix is created. The integrated average cooling rate Q_{i,j} is placed in each (i, j) position, producing a cooling rate matrix. In order to work with relative cooling rate data, the average Q_a of the immediate exterior pixels (a 10-pixel zone placed immediately around the crack) is subtracted from each cooling rate value of the crack (Q_{i,j}). In this way, the relative cooling rate is obtained for each pixel of the crack, giving a relative cooling rate matrix Q_{i,j} − Q_a, where (i, j) ranges over the crack submatrix.

### 2.2.2. Macrophotogrammetric Procedure to Extract Depth Data

The generation of the photogrammetric 3D model of the crack follows the procedure established in detail by the authors in [11]. The first step is the acquisition of photographic images with a DSLR camera following a semispherical trajectory centered on the object.
Once the images are acquired, always keeping a constant distance between the lens and the object, two processing steps are applied: first, the automatic determination of the spatial and angular positions of each image, regardless of the order of acquisition and without requiring initial approximations or camera calibration, and second, the automatic computation of a dense 3D point cloud (submillimetric resolution), so that each pixel of the image renders a specific point of the model of the weld (Figure 4).

Figure 4 Cracks with sections used to generate the model (L1 and L2), internal validation (T1 and T2), and external validation (T′).

### 2.2.3. Correlation between Cooling Rate and Depth Data for Each Section for the Generation of the Model and Validation

Once the cooling rate function for each section is obtained following the thermal procedure established in Section 2.2.1 and the depth function for each section is extracted following the macrophotogrammetric procedure explained in Section 2.2.2, the correlation between the points of both functions is established for two different longitudinal sections in order to design a linear mathematical model that allows crack-depth prediction based on the cooling rate (Figure 2). Two transversal sections are used for the internal validation of the model. Additionally, the method is applied to other cracks in other welds in order to establish an external validation of the method (Figure 4).
## 3. Results

### 3.1. Cooling Rate Results

The graphical 3D representation of the relative cooling rate matrix segmented in the crack region is shown in Figure 5. The average cooling rate for the zone adjacent to the defect, Q_a, is 0.0149°C/s. The cooling rate through two sections (L1 and L2) is studied in Figure 6.

### 3.2. Depth Results
The macrophotogrammetric model of the crack is obtained from the matching of images acquired from convergent positions with respect to the weld, repeating the procedure exposed in [11]. For the computation of depths, the point cloud from the macrophotogrammetric procedure is cut along the two longitudinal sections (L1 and L2) (Figure 4). The point cloud is sectioned with planes normal to L1 and L2 in order to extract the depth for each pixel (Figure 5). The metric reference between depths on the dense point cloud and the cooling rate values is established using the distances from the extremes of the crack.

Figure 5 3D representation of the crack zone based on the relative cooling rate matrix obtained following the procedure illustrated by Figure 3. In Figure 6, the two pixel rows of the crack (L1 and L2) are labeled.

Figure 6 Depth function and relative cooling rate function along the longitudinal sections (L1 (a) and L2 (b)). (a) (b)

A fit is used because the number of points in the cloud is higher than the number of pixels in the cooling rate image. For this reason, the authors have chosen to work with continuous functions in order to ease the correlation of data. The fit model used to obtain a depth function that allows the estimation of the real depth is a 9-degree polynomial, which provides an acceptable goodness of fit (Root Mean Square Error (RMSE) of 0.0098 for longitudinal section L1 and 0.0082 for longitudinal section L2).

### 3.3. Correlation between Cooling and Depth Data

Once relative cooling rate and depth data have been obtained through each section (Figure 6), each pair of values (depth, relative cooling rate) is extracted at 20 positions along the length and correlated in order to establish a prediction model that allows the estimation of the depth using just the cooling rate value. The correlation between depth and relative cooling rate (Q − Q_a) (Figure 7) is obtained with R² ≈ 0.86 (Table 2).
For this reason, the prediction model is considered acceptable, which demonstrates the existence of a coherent mathematical relationship between depth (mm) and cooling rate (°C/s).

Table 2 Parameters of the linear fit correlating relative cooling rate and depth, D = a(Q − Q_a) + b.

| Gain (a) | Offset (b) | R² |
| --- | --- | --- |
| 15.71 | −0.002534 | 0.857 |

Figure 7 Correlation between depth and relative cooling rate for each pixel in the crack.
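A linear fit of this kind can be reproduced with ordinary least squares; the sketch below uses synthetic pairs lying exactly on the line of Table 2 (the reported gain, offset, and R² come from the paper's measurements, not from this code):

```python
import numpy as np

def fit_depth_model(rel_cooling, depth):
    """Least-squares line D = a*(Q - Qa) + b linking relative cooling rate
    to depth, plus the R^2 of the fit. Inputs are paired sample arrays."""
    a, b = np.polyfit(rel_cooling, depth, 1)
    pred = a * rel_cooling + b
    ss_res = np.sum((depth - pred) ** 2)
    ss_tot = np.sum((depth - depth.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# synthetic (relative cooling rate, depth) pairs on the Table 2 line
q_rel = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
d = 15.71 * q_rel - 0.002534
a, b, r2 = fit_depth_model(q_rel, d)
```

With real scattered data, R² drops below 1 and quantifies the quality of the linear relationship, as in Table 2.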
## 4. Validation

### 4.1. Internal Validation of the Prediction Model

In order to test the validity of the model, an internal validation procedure is applied through the analysis of two transversal sections (T1 and T2) of the same crack (Figure 4). These sections are taken in the direction where the difference in depth between positions is highest. The aim of this step is to analyze the correspondence between the predicted depth values P[x] (where x is the width coordinate), calculated with the prediction model (Table 2) from the cooling rate data, and the real depth values V[x] extracted from the sectioning of the point cloud. These sections are significantly smaller than the longitudinal sections used to build the prediction model (Figure 4) but present a higher contrast of the cooling rate.
For this reason, the application of the prediction model to the points belonging to these transversal sections serves to analyze the internal coherence of the model and thus the goodness of the fit. The prediction model is applied to each transversal section (Figure 8). Then, the prediction is compared with the real profile in order to analyze the accuracy. The error is calculated over the x values with the following discrete expression:

(2) ε = Σ_{x=x_0}^{x_f} (V[x] − P[x]),

where V is the real depth value and P is the predicted value (both for the same value of x). The error of the prediction model for the first transversal section, T1 (Figure 8), is 0.00169 mm (14%), and the error for the second transversal section, T2, is 0.0010 mm (13%).

Figure 8 Transversal sections used for the internal validation of the model (T1 (a) and T2 (b)). (a) (b)

### 4.2. External Validation of the Prediction Model

In this section, the robustness of the prediction model is tested through its application to the estimation of the depth of a surface-breaking crack in a different plate made of the same material (Figure 9). In this case, the crack is a longitudinal surface-breaking crack on the weld (weld cap removed), generated by the stress provoked during the welding process. This crack is significantly smaller (tenths of a mm) than the crack used to build the model, which is the reason for its choice. In this way, the authors seek to test the model under stringent requirements.

Figure 9 Cooling rate matrix of the new crack, with pseudocolor assigned to each cooling rate value (top left) and macrophotography of the crack (bottom left). Overlap of the cooling rate map with the macrophotography of the crack in order to correlate the different depths of the crack with the different cooling rate data (right).
The transversal section T′ used to assess the prediction model is also plotted.

The macrophotogrammetric procedure is repeated for this new crack in order to obtain accurately the depth distribution of the crack (denominated the real depth). Then, the plate is heated to 70°C. This temperature, higher than in the previous case, is necessary in order to obtain a correct visualization of the crack in the thermal matrix, given its small size. The cooling period is monitored for 250 s (the same period as before). The cooling rate is calculated from the thermal images for each time instant following the procedure established in Section 2.2.1(3). Subsequently, the average cooling rate for the zone adjacent to the crack, Q_a, is calculated; the result is 0.0516°C/s. Then, the prediction model obtained in Section 3.3 is applied along a transversal section of the crack (called section T′) and the depth values are estimated for each pixel from the cooling rate data (Figure 9).

At this point, the real and predicted profiles are compared in order to analyze the error of the prediction model with respect to reality. Both are shown in Figure 10. The resulting error of the prediction model is 0.009 mm (18%). This result is acceptable given that the maximum depth of the crack is 0.05 mm.

Figure 10 Real and predicted depth through the transversal section T′ used for the external validation of the model.
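The error figures quoted for the validation sections can be computed directly from sampled profiles; the sketch below uses absolute differences so that positive and negative deviations do not cancel (an interpretation, since (2) as printed sums signed differences), with synthetic values rather than the paper's measurements:

```python
import numpy as np

def section_error(real_depth, predicted_depth):
    """Discrepancy between the real (V) and predicted (P) depth profiles of
    a section, in the spirit of (2). Returns the mean absolute error and
    the error relative to the depth range of the section."""
    diff = np.abs(real_depth - predicted_depth)
    eps = diff.mean()
    return eps, eps / (real_depth.max() - real_depth.min())

# synthetic profiles in mm (illustrative, not the paper's data)
V = np.array([0.000, 0.004, 0.010, 0.006, 0.001])
P = np.array([0.001, 0.005, 0.009, 0.005, 0.002])
eps, rel = section_error(V, P)
```

Reporting the error relative to the depth range mirrors how the paper pairs an absolute error in mm with a percentage.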
## 5. Conclusions

A procedure for the prediction of the depth of cracks from infrared data has been implemented.
This approach represents a methodological innovation in the field of nondestructive testing for the evaluation of weld flaws. The procedure allows the accurate prediction of the different depths of the crack from thermal images. Low-temperature heating (40°C) is applied to the steel and the cooling is monitored for 250 s. Although heat transfer tends to homogenize through the material over long cooling times, the values of the cooling rate also depend on the surface properties at each point of the material. Therefore, different surface properties in the defect and nondefect zones produce a difference in heat dissipation, which provokes a difference in the cooling rates. This difference is clearly visible for each pixel through the sections of the thermal images. These thermal data are associated with the real depths obtained with a reliable, proven macrophotogrammetric procedure [10] using a commercial DSLR camera.

The cooling rate at each pixel is different at each time instant, since the cooling rate is higher at the beginning of the cooling process than at its end, due to the exponential decay of the temperature. The registration and integration of the cooling rate values over the 250-second cooling interval allows the totality of the information to be captured. For this reason, the homogenization of heat transfer through the material is not a limitation of the methodology, since the results present enough contrast to establish a depth prediction model.

The prediction model established through the correlation between cooling rate and depth maintains statistical consistency, allowing an acceptable fit (RMSE between 0.0082 and 0.0098). Furthermore, the prediction model has been validated following a twofold approach: on the one hand, an internal validation has been implemented, consisting of the application of the prediction model to the same crack along two transversal sections (different from the sections used to obtain the prediction model).
These transversal sections were chosen because they present high thermal contrast in a minimal space, subjecting the prediction model to a limit situation. The results demonstrate the good precision of the method in predicting crack depth values, with a maximum deviation of 15%. On the other hand, an external validation has been applied through the analysis of a different crack. In this case, a surface-breaking crack located in a weld with the cap removed has been used. With this test, the authors analyze not only the validity of the method for other cracks, but also the quality of the prediction model when the heating is done at a different (in this case higher, 70°C) temperature. The results of the prediction model are consistent, with a maximum deviation of 18% with respect to the real depth. The error is mainly due to the interpolation method used to relate thermal and depth values; it could be improved using more discrete values, and this is a future line of work opened by this study. The procedure has been applied to cracks generated by the stress provoked by the welding process in low-carbon steel. However, the procedure could be extended to the analysis of cracks in other materials. The cooling phenomenon is described by Newton's law and depends on physical properties of the material such as conductivity, heat capacity, and surface features. For this reason, the extension of the technique to other materials will be researched in future works.

---

*Source: 1016482-2016-02-18.xml*
2016
# A Global Attractor in Some Discrete Contest Competition Models with Delay under the Effect of Periodic Stocking

**Authors:** Ziyad AlSharawi

**Journal:** Abstract and Applied Analysis (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101649

---

## Abstract

We consider discrete models of the form x_{n+1} = x_n f(x_{n−1}) + h_n, where h_n is a nonnegative p-periodic sequence representing stocking in the population, and investigate their dynamics. Under certain conditions on the recruitment function f(x), we give a compact invariant region and use the Brouwer fixed point theorem to prove the existence of a p-periodic solution. Also, we prove the global attractivity of the p-periodic solution when p = 2. In particular, this study gives theoretical results attesting to the belief that stocking (whether constant or periodic) preserves the global attractivity of the periodic solution in contest competition models with short delay. Finally, as an illustrative example, we discuss Pielou's model with periodic stocking.

---

## Body

## 1. Introduction

In mathematical ecology, difference equations of the form x_{n+1} = x_n f(x_n), n ∈ ℕ := {0, 1, …}, are used to model single species with nonoverlapping generations [1, 2], where x_n denotes the number of sexually mature individuals at discrete time n, and f(x_n) is the density-dependent net growth rate of the population. The form of the function f(x) is chosen to reflect certain characteristics of the studied population, such as intraspecific competition. For some background reading on the models obtained by various choices of f(x), we refer the reader to [1, 3, 4] in the discrete case. Also, we refer the reader to [5] and the references therein for the continuous case. Two classical types are known as scramble and contest competition models [4].
Our attention in this work is limited to contest competition models, where f(x) is assumed to be decreasing, x f(x) is increasing, and x f(x) approaches a certain level asymptotically at high population densities. A prototype of such models is the Beverton-Holt model [6], obtained by considering f(x) = μK/(K + (μ − 1)x). Here, μ > 1 is interpreted as the growth rate per generation, and K is the carrying capacity of the environment. In populations that need substantial time to reach sexual maturity, a delay effect must be included in the function f(x), which motivates us to consider difference equations of the form

(1) x_{n+1} = x_n f(x_{n−k}),

where k is a fixed positive integer [7]. In general, it is widely known that a long time delay has a destabilizing effect on the population's steady state, while a short time delay can preserve stability [8–10]. However, when the delay is large, the dynamics of (1) are less tractable [11]. Furthermore, we are more interested here in the effect of stocking than in the effect of delay, and therefore we keep the time delay short to preserve stability in the absence of stocking. In particular, we fix the delay to k = 1.

A substantial body of research has explored the effect of constant stocking on population models without delay [12–19]. In brief and general terms, it has been found that constant stocking can be used to suppress chaos, reverse the period-doubling phenomenon, lower the risk of extinction, and stabilize the population steady state. On the other hand, and to the best of our knowledge, little (if anything) has been done to explore the effect of stocking (whether constant or periodic) on models with delay.
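The delayed model with periodic stocking described in the abstract, x_{n+1} = x_n f(x_{n−1}) + h_n, can be explored numerically; a minimal sketch with a Beverton-Holt-type recruitment function (parameter values and the 2-periodic stocking sequence are illustrative):

```python
def simulate(f, h, x0, x1, n_steps):
    """Iterate x_{n+1} = x_n * f(x_{n-1}) + h_n, with h a p-periodic
    stocking sequence given as a list cycled by modular indexing.
    Returns the orbit [x_0, x_1, ...]."""
    orbit = [x0, x1]
    for n in range(1, n_steps + 1):
        orbit.append(orbit[n] * f(orbit[n - 1]) + h[n % len(h)])
    return orbit

# Beverton-Holt-type recruitment: f(0) = mu > 1, f decreasing, x*f(x) increasing
mu, K = 2.0, 10.0
f = lambda x: mu * K / (K + (mu - 1.0) * x)
orbit = simulate(f, [0.5, 0.1], 1.0, 1.0, 200)   # 2-periodic stocking
```

For such parameter values, the orbit settles onto a 2-periodic solution, consistent with the global attractivity result stated for p = 2.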
So, our work here has a twofold objective: to study the effect of periodic stocking on contest competition models with delay, and to complement the work of the author and his collaborators in [20], where the dynamics of (1) with k=1 was studied under the effect of constant yield harvesting. Recall that we have some accumulating restrictions on the function f(x) due to the nature of associating our equation with contest competition models. So, in an abstract mathematical form, our problem can be posed as follows. Consider the difference equation (2) xn+1=xnf(xn-1)+hn, where {hn} is a nonnegative p-periodic sequence representing stocking due to refuge, immigration, feeding, and so forth, and the function f(x) obeys the following conditions: (C1) f(0)=b>1; (C2) f∈C1([0,∞)) and f(x) is decreasing on [0,∞); (C3) xf(x) is increasing and bounded. The condition in (C1) is a generic one in the absence of stocking; that is, if b≤1 and hn=0, then there is no long-term survival regardless of the initial density of the population. This paper is organized as follows. In Section 2, we give some preliminary results concerning local stability, boundedness, and global stability of (2) when the stocking sequence is 1-periodic, that is, when hn=h>0 for all n∈ℕ. In Section 3, the period of the stocking sequence is taken to be larger than one. A compact invariant region is established and a characterization of the periodic solutions is given. Also, the global asymptotic behavior of solutions is investigated when p=2. As a particular case of (2), we discuss Pielou's equation with delay one in Section 4. ## 2. Preliminary Results: The Autonomous Case In this section, we focus on the autonomous case, that is, hj=h>0 for all j=0,1,…,p-1. 
Thus, (2) becomes (3) xn+1=xnf(xn-1)+h. Some results concerning (3) can be found in the literature [21]; however, for the sake of completeness and usage in the nonautonomous case, we give the following preliminary results. ### 2.1. Local Stability and Boundedness Equation (3) has two equilibrium solutions at h=0, namely, 0 and f−1(1). For h>0, the origin slides downward to become negative, while the other equilibrium stays positive and slides upward. This fact becomes clear when we write t=tf(t)+h as 1-h/t=f(t). The left hand side is increasing in t while the right hand side is decreasing. Thus, we have only one positive equilibrium in the positive quadrant, which we denote in the sequel by x̄2,h. Since x̄2,h is positive and increasing in h, f(x̄2,h)<1 for all h>0. The linearized equation associated with (3) at a fixed point x̄ is given by (4) yn+1-f(x̄)yn-x̄f′(x̄)yn-1=0. Define p:=f(x̄) and q:=-x̄f′(x̄). For x̄=x̄2,h, we have 0<p<1 and q is nonnegative. The roots of λ2-pλ+q=0 determine the local stability of our equilibrium point. Since λj,h=(1/2)(p+(-1)j√(p2-4q)), j=1,2, the equilibrium x̄2,h starts as stable at h=0 and stays stable as long as q<1. Figure 1 clarifies the relationship between p, q and the magnitude of λj,h. We summarize these facts in the following proposition.Figure 1 This figure shows the magnitude of the characteristic roots of (4) depending on the location of the (p,q) values, where p=f(x̄) and q=-x̄f′(x̄). Also, |λ1,h|>1 and |λ2,h|>1 when p,q>1; |λ1,h|<1 and |λ2,h|<1 when p<1 and q<1.Proposition 1. Assume that conditions (C1) to (C3) are satisfied and h>0. The positive equilibrium x̄2,h of (3) is locally asymptotically stable.Proof. Since x̄2,h>x̄2,0, we have p<1. Also, since F(t)=tf(t) is increasing, we obtain F′(t)=tf′(t)+f(t)>0 and consequently -q+p>0. Thus, we have q<p<1. Now, Figure 1 makes the rest of the proof clear.It is obvious that xk≥h for all k≥1. 
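The quantities in Proposition 1 can be checked numerically for a concrete recruitment function. The sketch below is our own illustration (not from the paper), using the Pielou-type choice f(t)=b/(1+t) that appears in Section 4, with arbitrary values b=2.5 and h=1:

```python
import cmath

# Pielou-type recruitment f(t) = b/(1+t); illustrative values b = 2.5, h = 1.
b, h = 2.5, 1.0
f = lambda t: b / (1 + t)
df = lambda t: -b / (1 + t) ** 2

# Positive root of x = x*f(x) + h, i.e. of x^2 + (1-b-h)x - h = 0.
x_eq = 0.5 * (b + h - 1) + 0.5 * ((b + h - 1) ** 2 + 4 * h) ** 0.5

p = f(x_eq)           # 0 < p < 1 at the positive equilibrium
q = -x_eq * df(x_eq)  # 0 <= q < p, as in the proof of Proposition 1
# Characteristic roots of lambda^2 - p*lambda + q = 0.
disc = cmath.sqrt(p * p - 4 * q)
roots = [(p + disc) / 2, (p - disc) / 2]
print(q < p < 1, max(abs(r) for r in roots) < 1)  # both conditions hold
```

For these values the roots are complex with modulus √q < 1, so the equilibrium is approached through damped oscillations.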
On the other hand, since (5) xn+1=xn-1f(xn-1)f(xn-2)+hf(xn-1)+h ≤ xn-1f(xn-1)b+hb+h, the boundedness of y=tf(t) assures the boundedness of all solutions of (2). ### 2.2. Oscillations and Global Stability A solution of (3) is called oscillatory if it is neither eventually less than nor eventually larger than x̄2,h [11]. Also, one can consider oscillations about a curve [20]. A solution {xn} of (3) is called oscillatory about a curve H(x,y)=0 if the sequence {un=(xn-1,xn)} does not eventually stay on one side of the curve. The latter definition can be more convenient in some cases; however, in (3), both are equivalent when we consider H(x,y)=y-x, as we show in the following result.Proposition 2. A solution of (3) is oscillatory if and only if it is oscillatory about the curve y=x.Proof. Assume that {xn} oscillates about x̄2,h, but it is not oscillatory about y=x. So, {xn} is either eventually increasing or eventually decreasing, which contradicts the assumption that {xn} is oscillatory about x̄2,h. Conversely, suppose {(xn-1,xn)} oscillates about y=x, but {xn} does not oscillate about x̄2,h. First, we consider the case xn≤x̄2,h for all n≥n0. If xm>xm-1 for some m>n0, then f(xm)<f(xm-1) and consequently (6) xm+1=xmf(xm-1)+h>xmf(xm)+h>xm. So, we can construct an eventually increasing sequence, which contradicts our assumption. If xm≤xm-1 for some m>n0, then xm+1≤xmf(xm)+h. Thus, either xm+1≤xm, and the induction leads to a decreasing sequence that must converge, whose limit would have to be a positive equilibrium below x̄2,h, which is not possible, or xm+1>xm, and we go back to the first scenario. Finally, the case xn≥x̄2,h for all n≥n0 can be handled similarly.Next, we define the map (7) T(x,y)=(y,yf(x)+h). The map T portrays the solutions of (3) geometrically in the nonnegative quadrant, and therefore, it plays a prominent role in the sequel. Here, we use the nonnegative quadrant to denote the union of the positive quadrant and the axes on its boundary. 
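The map T in (7) is easy to experiment with; in the sketch below (our own illustration, again with the Pielou-type recruitment f(t)=b/(1+t) and arbitrary values b=2.5, h=1), a nonequilibrium orbit keeps switching sides of the line y=x, in line with Proposition 2:

```python
# Planar map T(x, y) = (y, y*f(x) + h) from (7), with an illustrative
# Pielou-type recruitment f(t) = b/(1+t), b = 2.5, and stocking h = 1.
b, h = 2.5, 1.0
f = lambda t: b / (1 + t)
T = lambda x, y: (y, y * f(x) + h)

pt = (0.5, 3.0)              # arbitrary nonequilibrium starting point
sides = []
for _ in range(100):
    x, y = pt
    if y != x:
        sides.append(y > x)  # record which side of y = x the point is on
    pt = T(x, y)

crossings = sum(1 for s0, s1 in zip(sides, sides[1:]) if s0 != s1)
print(crossings)  # the orbit crosses y = x many times while spiraling inward
```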
By applying the map T to the regions above and below the curve y=x, one can observe that a nonequilibrium solution of (3) must be oscillatory. Also, using the map T, one can observe that stocking increases the frequency of oscillations in the following sense: the length of semicycles in the absence of stocking is longer than the length of semicycles in the presence of stocking, where a semicycle denotes a string of consecutive terms above or below the equilibrium.Since solutions of (3) are bounded, we define (8) S:=limsup xn, I:=liminf xn. From the equation xn+2=xnf(xn)f(xn-1)+hf(xn)+h and using the fact that tf(t) is increasing, we obtain (9) S≤Sf(S)f(I)+hf(I)+h, I≥If(I)f(S)+hf(S)+h. When h>0, we have S≥I>0. So, we can multiply the first inequality by I and the second one by S to obtain (10) S(f(S)+1)≤I(f(I)+1). Since t(f(t)+1) is increasing and S≥I, we obtain I=S. This approach was used by Camouzis and Ladas in [22], and it was used by Nyerges in [21] to prove that x̄2,h is globally attractive. This fact, together with the local stability established in Proposition 1, shows the global asymptotic stability of x̄2,h, as we summarize in the following proposition.Proposition 3. The equilibrium solution x̄2,h of (3) is globally asymptotically stable.Next, it is obvious that the positive quadrant forms an invariant set for (3); however, since solutions are bounded, we are interested in a bounded invariant set that can be developed to serve us in the periodic case. Notice that by invariance here we always mean forward invariance; that is, Rh is an invariant of (3) if T(x,y)∈Rh for all (x,y)∈Rh. To establish the existence of a bounded invariant region, we need to have in mind the following simple fact.Proposition 4. There exists a finite constant ch≥h such that Gh(t)=(bt+h)f(t)≤bch for all t≥0. Furthermore, ch can be taken as ch:=(1/b)supt Gh(t).Proof. 
Use the fact that tf(t) is bounded and f(t) is decreasing with f(0)=b and limt→∞f(t)=0 to obtain the result.Next, define the curves Γj, j=0,1,2,3,4, to be the line segments that connect the points (0,0), (0,h), (ch,bch+h), (bch+h,bch+h), (bch+h,0), and (0,0), respectively. Now, define Rh to be the region bounded by the curves Γj, j=0,…,4, including the boundary; then the following result gives a bounded invariant of (3). Here, it is worth mentioning that Γ0 shrinks to a point at h=0; however, our notation and arguments about the invariant region are still valid except that the boundary of Rh becomes a quadrilateral rather than a pentagon.Theorem 5. The region Rh as defined above gives a compact invariant for (3).Proof. Consider the map T(x,y) as defined in (7). T is one-to-one on the positive quadrant. Thus, all we need is to test T on the boundary of Rh. It is a straightforward computation to find that T(Γ0)⊆Γ1. Since horizontal line segments are mapped to vertical line segments under T, we test the end points of Γ2 to find (11) T(ch,bch+h)=(bch+h,(bch+h)f(ch)+h), T(bch+h,bch+h)=(bch+h,(bch+h)f(bch+h)+h). By the choice of ch given in Proposition 4, we have (12) (bch+h)f(bch+h)+h≤(bch+h)f(ch)+h≤bch+h. Thus, T(Γ2)⊂Γ3. Next, T(Γ3)⊂Rh and T(Γ4)=(0,h) are straightforward to observe. Finally, we show that T(Γ1)⊂Rh. For 0≤t≤ch, we have (13) T(t,bt+h)=(bt+h,(bt+h)f(t)+h); however, h≤bt+h≤bch+h and (bt+h)f(t)+h≤bch+h by the choice of ch, which completes the proof. Figure 2 illustrates the region Rh and its image under the map T when (bt+h)f(t) is increasing.Figure 2 The figure on the left shows the choice of the compact region Rh when y=(bt+h)f(t) is increasing, and the one on the right shows T(Rh) with blue boundary inside Rh. ## 3. Periodic Stocking In this section, we force periodic stocking on (3) to obtain (14) xn+1=xnf(xn-1)+hn, where hn is a p-periodic sequence of stocking quotas, and p denotes the minimal period. Observe that some consecutive values of the stocking sequence can be zero; however, it is natural to assume that ∑j=0p-1 hj>0. As in the constant case, we associate (14) with a p-periodic sequence of two-dimensional maps that we use in the sequel, namely {Tj, j=0,1,…,p-1}, where Tj(x,y)=(y,yf(x)+hj). 
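The sequence of maps Tj composes into T̂=Tp-1∘⋯∘T0, one full stocking period. As a numerical sketch (our own, with the Pielou-type recruitment f(t)=b/(1+t), b=2.5, p=2, and arbitrary quotas h0=1.0, h1=0.3), iterating the composed map drives the orbit to a fixed point of T̂, that is, to a 2-periodic solution of (14):

```python
# p-periodic maps T_j(x, y) = (y, y*f(x) + h_j) associated with (14);
# illustrative choice: f(t) = b/(1+t), b = 2.5, p = 2, quotas h_0, h_1.
b = 2.5
f = lambda t: b / (1 + t)
hs = [1.0, 0.3]  # hypothetical stocking quotas h_0, h_1 (our values)

def T_hat(pt):
    """One full stocking period: T_hat = T_{p-1} o ... o T_0."""
    x, y = pt
    for hj in hs:
        x, y = y, y * f(x) + hj
    return (x, y)

pt = (0.2, 4.0)  # arbitrary starting point in the positive quadrant
for _ in range(400):
    pt = T_hat(pt)
residual = abs(T_hat(pt)[0] - pt[0]) + abs(T_hat(pt)[1] - pt[1])
print(residual)  # essentially zero: pt is a numerical fixed point of T_hat
```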
It is obvious that if we replace h by hj in Theorem 5, then Rhj forms a compact invariant region for the individual map Tj, which enables us to build suitable machinery for establishing the existence of a periodic solution. It is convenient now to adapt the notation of the previous section to the periodic case. We denote the line segments that form the boundary of Rhj by Γj,i, i=0,…,4, where Γj,i corresponds to Γi in the autonomous case and is associated with the individual map Tj. Also, the constant ch in Proposition 4 will be replaced by chj, which is associated with the individual map Tj. ### 3.1. Existence of a Periodic Solution We start by establishing a compact invariant region for (14). Define (15) hm:=max{h0,h1,…,hp-1}, cm:=max{chj: j=0,…,p-1}, where chj is as taken in Proposition 4, that is, chj=(1/b)supt Ghj(t); then use hm and cm to define the region Rhm as in the paragraph preceding Theorem 5. Now, we have the following result.Lemma 6. Consider (14) together with the associated p-periodic sequence of maps {Tj}. Each of the following holds true. (i) One has Rhi⊆Rhj whenever hi≤hj. (ii) Rhm is a compact invariant for each individual map Tj. (iii) Rhm is a compact invariant for the map T^:=Tp-1∘Tp-2∘⋯∘T0.Proof. (i) When hi≤hj, we obtain Ghi(t)≤Ghj(t) for all t≥0. Thus, chi≤chj, and the result becomes obvious from Proposition 4 and the geometric structure of the regions Rhi and Rhj. To prove (ii), let (x,y)∈Rhm; we show that Tj(x,y)∈Rhm. Since (16) Tj(x,y)=(y,yf(x)+hj)=(y,yf(x)+hm)-(0,hm-hj)=Tm(x,y)-(0,hm-hj), the first component of Tj(x,y) is the same as the first component of Tm(x,y), and the second component of Tj(x,y) is not larger than the second component of Tm(x,y). Now, the fact that Tm(x,y)∈Rhm and the geometric structure of Rhm assure that Tj(x,y)∈Rhm. 
Finally, (iii) follows from (ii).Periodic stocking (or harvesting) has the effect of forcing population cycles to evolve and become multiples of the stocking/harvesting period, as we show in the following result, which is more general than (14).Theorem 7. Consider the general difference equation xn+1=F(xn,xn-1,…,xn-k) with p-periodic stocking (or harvesting). If a periodic solution exists, then the period is a multiple of p.Proof. The proof is by contradiction; suppose that we have an r-periodic solution of the equation xn+1=F(xn,xn-1,…,xn-k)+hn for some r that is not a multiple of p. Then, the greatest common divisor between r and p (d:=gcd(r,p)) is not p. Define the maps Fi:=F+hi, i=0,1,…,p-1; then for each 0≤i≤d-1, the maps {Fkd+i, k=0,1,…,p/d-1} must agree at the point Xi:=(xi,xi-1,…,xi-k), where the components xi-k,xi+1-k,…,xi are consecutive elements of the r-periodic solution. This implies (17) hi=hd+i=h2d+i=⋯=h(p/d-1)d+i for all i=0,1,…,d-1, which contradicts the minimality of the period of the p-periodic difference equation.Theorem 7 shows that (14) has no equilibrium solutions, and therefore, our previous notion of characterizing oscillatory solutions based on the oscillations about y=x is the valid one here. Thus, solutions of (14) are oscillatory about y=x because they cannot be monotonic. Although it is natural for fluctuations in the environment to create fluctuations in the population, we find it appropriate here to connect the loosely defined term “fluctuation” with the mathematically well-defined term “oscillation.” Next, we use the Brouwer fixed-point theorem [23] (page 51) to prove the existence of a periodic solution of (14).Lemma 8 (Brouwer fixed-point theorem [23], page 51). Let M be a nonempty, convex, and compact subset of ℝn. If T:M→M is continuous, then T has a fixed point in M.Theorem 9. The p-periodic difference equation in (14) has a p-periodic solution.Proof. Consider the map T^:=Tp-1∘Tp-2∘⋯∘T0; then using Lemma 6, we obtain T^:Rhm→Rhm. 
Furthermore, Rhm is nonempty, compact, and obviously convex. So, by Lemma 8, T^ has a fixed point in Rhm. This fixed point establishes a periodic solution of (14) with minimal period that divides p; however, Theorem 7 shows that the period must be p. ### 3.2. Global Attractivity of the Periodic Solution When p=2 Consider the periodicity of (14) to be p=2 and suppose h0+h1≠0. We partition the solutions of (14) into two subsequences, the one with even indices {x2n} and the one with odd indices {x2n+1}. Thus, we have (18) x2n+1=x2nf(x2n-1)+h0, x2n+2=x2n+1f(x2n)+h1. Since the solutions are bounded, we define (19) liminf{x2n+i}=Ii, limsup{x2n+i}=Si, i=0,1. Now, the second iterate of (18) gives us (20) x2n+2=x2nf(x2n)f(x2n-1)+h0f(x2n)+h1, (21) x2n+3=x2n+1f(x2n+1)f(x2n)+h1f(x2n+1)+h0. Use the fact that f(t) is decreasing and tf(t) is increasing in (20) to obtain (22) S0≤S0f(S0)f(I1)+h0f(I0)+h1, (23) I0≥I0f(I0)f(S1)+h0f(S0)+h1. Also, (21) gives us (24) S1≤S1f(S1)f(I0)+h1f(I1)+h0, (25) I1≥I1f(I1)f(S0)+h1f(S1)+h0. Multiply inequality (22) by I0 and inequality (23) by S0 to obtain (26) S0I0f(I0)f(S1)+S0(h0f(S0)+h1)≤I0S0f(S0)f(I1)+I0(h0f(I0)+h1). Since I0(h0f(I0)+h1)≤S0(h0f(S0)+h1), we obtain (27) f(I0)f(S1)≤f(S0)f(I1). Also, multiply inequality (24) by I1 and inequality (25) by S1 to obtain (28) S1I1f(I1)f(S0)+S1(h1f(S1)+h0)≤I1S1f(S1)f(I0)+I1(h1f(I1)+h0). Since I1(h1f(I1)+h0)≤S1(h1f(S1)+h0), we obtain (29) f(I1)f(S0)≤f(I0)f(S1). Using inequalities (27) and (29), we obtain the following result.Lemma 10. Consider I0,I1,S0,S1 as defined in (19); then f(I0)f(S1)=f(I1)f(S0).Next, we give the following result.Theorem 11. For p=2, the 2-periodic solution of (14) is a global attractor.Proof. Use the result of Lemma 10 in inequality (26) to obtain (30) S0(h0f(S0)+h1)≤I0(h0f(I0)+h1). Since g(t)=t(h0f(t)+h1) is increasing and I0≤S0, we must have I0=S0. Similarly, use the result of Lemma 10 in inequality (28) to obtain (31) S1(h1f(S1)+h0)≤I1(h1f(I1)+h0), and consequently S1=I1. 
Hence, Ii=Si, i=0,1, and the proof is complete.Remark 12. Observe that the approach of this section proves not only the global attractivity of the p-periodic solution but also its existence; however, Theorem 7 is still significant here because it proves the minimality of the period. Also, the compact invariant region established in Lemma 6 is worthwhile regardless of the global attractivity of the periodic solution. Finally, proving the global attractivity for general p will be the topic of some future work. 
## 4. Pielou's Equation with Stocking As an illustrative example of our results, we consider the function f in (14) to be f(t)=b/(1+t). It is worth mentioning that in the absence of stocking, Pielou ([24], page 80) suggested taking f(xn-m)=μK/(K+(μ-1)xn-m) to account for certain fluctuating populations, which cannot be modeled by the Beverton-Holt equation. So, here we are dealing with the dimensionless Pielou's equation yn+1=byn/(1+yn-1), which takes the following form after forcing stocking: (32) yn+1=byn/(1+yn-1)+hn. When hn=0, (32) has the positive equilibrium x̄2,0=b-1, which is globally asymptotically stable. When hn=h>0, x̄2,h=(1/2)(b+h-1)+(1/2)√((b+h-1)2+4h) inherits the global asymptotic stability of x̄2,0, as shown in [21]. Now, consider {hn} to be 2-periodic. To find the 2-periodic solution assured by Theorem 9, we substitute y-1=x and y0=y in (32) to obtain (33) (y-h0)(1+y)=bx, (x-h1)(1+x)=by. Now, the solution is obvious graphically (it is the point of intersection between the two curves in the positive quadrant). However, the solution is not simple to write explicitly, and therefore, we proceed by choosing h0=b+1/(b-1) and h1=b. In this case, the 2-periodic solution {x̄,ȳ} is given by (34) x̄=b-1+√(b3/(b-1)), ȳ=b+1/(b-1)+√(b(b-1)). Next, use the function f(t)=b/(1+t) in Section 3.2 and follow the same steps to find (35) liminf{y2n}=I0=S0=limsup{y2n}, liminf{y2n+1}=I1=S1=limsup{y2n+1}. Thus, the odd iterates converge to a point, say u, while the even iterates converge to a point, say v. Substitute u and v in (18); then compare with (33) to find that {u,v} is indeed {x̄,ȳ}. 
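The closed form in (34) can be cross-checked numerically; the sketch below (our own illustration) iterates (32) with h0=b+1/(b-1), h1=b, and b=5/2, and compares the limiting pair of values with x̄ and ȳ:

```python
# Pielou's equation with 2-periodic stocking:
# y_{n+1} = b*y_n/(1 + y_{n-1}) + h_n, with h_0 = b + 1/(b-1), h_1 = b.
b = 2.5  # b fixed at 5/2, as in Figure 3
h0, h1 = b + 1 / (b - 1), b

y = [1.0, 1.0]  # arbitrary positive initial conditions
for n in range(2000):
    h = h0 if n % 2 == 0 else h1  # alternate the two stocking quotas
    y.append(b * y[-1] / (1 + y[-2]) + h)

# The stated 2-cycle from (34).
x_bar = b - 1 + (b ** 3 / (b - 1)) ** 0.5
y_bar = b + 1 / (b - 1) + (b * (b - 1)) ** 0.5
print(sorted([y[-2], y[-1]]), [x_bar, y_bar])  # the two pairs agree
```

Since the 2-cycle is a global attractor (Theorem 11), the comparison is insensitive to the initial conditions and to which quota is applied first.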
Finally, Figure 3 shows the convergence to the 2-cycle for the specific values of the parameters.Figure 3 This graph shows the stable 2-cycle for the 2-periodic equation in (32) when h0=b+1/(b-1) and h1=b, where b is fixed at 5/2.Another interesting notion that can be observed here is the resonance of the solutions of (32). The arithmetic average of the globally attracting 2-periodic solution is (36) xav:=(1/2)(x̄+ȳ)=(1/2)(2b-1+1/(b-1)+√(b3/(b-1))+√(b(b-1))). On the other hand, when we take the constant stocking h=(1/2)(h0+h1)=b+1/(2(b-1)), we obtain the globally attracting equilibrium (37) x̄=(4b2-6b+3+√(16b4-32b3+28b2-12b+1))/(4(b-1)). Figure 4 shows that xav>x̄.Figure 4 This graph shows the average of the attracting 2-cycle (blue color) in contrast with the equilibrium that results from constant stocking equal to the average of h0 and h1, where h0=b+1/(b-1) and h1=b. ## 5. Conclusion and Discussion In this paper, we investigated the dynamics of the periodic difference equation xn+1=xnf(xn-1)+hn, where f(x) is differentiable and decreasing on [0,∞), while xf(x) is increasing and bounded. This equation can be used as a discrete model to represent contest competition in species with periodic stocking. We found that periodic stocking forces the existence of a periodic solution that has the same period as the stocking period. In addition to the unbounded invariant given by the positive quadrant, we constructed a bounded invariant region, which we used to prove the existence of the periodic solution. Also, we proved that the periodic solution is globally attractive when the stocking period is 2. We conjecture that the periodic solution is globally attractive regardless of the stocking period. Although the steady state evolves into a periodic solution with the same period as the stocking period, our results show that periodic stocking preserves the global attractivity of the periodic solution. --- *Source: 101649-2013-10-27.xml*
# A Global Attractor in Some Discrete Contest Competition Models with Delay under the Effect of Periodic Stocking

**Author:** Ziyad AlSharawi
**Journal:** Abstract and Applied Analysis (2013)
**Category:** Mathematical Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101649
---

## Abstract

We consider discrete models of the form xn+1=xnf(xn−1)+hn, where hn is a nonnegative p-periodic sequence representing stocking in the population, and investigate their dynamics. Under certain conditions on the recruitment function f(x), we give a compact invariant region and use the Brouwer fixed-point theorem to prove the existence of a p-periodic solution. Also, we prove the global attractivity of the p-periodic solution when p=2. In particular, this study gives theoretical results attesting to the belief that stocking (whether constant or periodic) preserves the global attractivity of the periodic solution in contest competition models with short delay. Finally, as an illustrative example, we discuss Pielou's model with periodic stocking.

---

## Body

## 1. Introduction

In mathematical ecology, difference equations of the form xn+1=xnf(xn), n∈ℕ:={0,1,…}, are used to model single species with nonoverlapping generations [1, 2], where xn denotes the number of sexually mature individuals at discrete time n, and f(xn) is the density-dependent net growth rate of the population. The form of the function f(x) is chosen to reflect certain characteristics of the studied population, such as intraspecific competition. For some background reading about models obtained by the various choices of f(x), we refer the reader to [1, 3, 4] in the discrete case. Also, we refer the reader to [5] and the references therein for the continuous case. Two classical types are known as the scramble and contest competition models [4]. Our attention in this work is limited to the contest competition models, where f(x) is assumed to be decreasing, xf(x) is increasing, and xf(x) is asymptotic to a certain level at high population densities. A prototype of such models is the Beverton-Holt model [6], which is obtained by considering f(x)=μK/(K+(μ-1)x). Here, μ>1 is interpreted as the growth rate per generation, and K is the carrying capacity of the environment.
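As a quick numerical illustration of the Beverton-Holt prototype (a sketch, not from the paper; the values μ=2.5, K=100, and the initial density are arbitrary choices), iterating xn+1=xnf(xn) drives the population to the carrying capacity K:

```python
# Illustrative sketch: iterate the Beverton-Holt model x_{n+1} = x_n * f(x_n)
# with f(x) = mu*K / (K + (mu-1)*x).  The parameters mu = 2.5, K = 100 and
# the starting density x = 5 are arbitrary illustrative values.

def f(x, mu=2.5, K=100.0):
    """Beverton-Holt per-capita growth rate: decreasing in x, with f(0) = mu > 1."""
    return mu * K / (K + (mu - 1.0) * x)

x = 5.0                      # initial population density
for _ in range(200):
    x = x * f(x)             # one generation

print(round(x, 6))           # prints 100.0 — the orbit settles at K
```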
In populations with substantial time needed to reach sexual maturity, a certain delay effect must be included in the function f(x), which motivates us to consider difference equations of the form (1)xn+1=xnf(xn-k), where k is a fixed positive integer [7]. In general, it is widely known that a long time delay has a destabilizing effect on the population's steady state, while a short time delay can preserve stability [8–10]. However, when the delay is large, the dynamics of (1) are less tractable [11]. Furthermore, we are more interested here in the effect of stocking than in the effect of delay, and therefore, we keep the time delay short to preserve stability in the absence of stocking. In particular, we fix the delay to be k=1.

A substantial body of research has explored the effect of constant stocking on population models without delay [12–19]. In brief and general terms, it has been found that constant stocking can be used to suppress chaos, reverse the period-doubling phenomenon, lower the risk of extinction, and stabilize the population steady state. On the other hand, and to the best of our knowledge, little (if any) has been done to explore the effect of stocking (whether constant or periodic) on models with delay. So, our work here has a twofold objective: to study the effect of periodic stocking on contest competition models with delay, and to complement the work of the author and his collaborators in [20], where the dynamics of (1) with k=1 was studied under the effect of constant-yield harvesting. Recall that we have some accumulating restrictions on the function f(x) due to the nature of associating our equation with contest competition models. So, in an abstract mathematical form, our problem can be posed as follows.
Consider the difference equation (2)xn+1=xnf(xn-1)+hn, where {hn} is a nonnegative p-periodic sequence representing stocking due to refuge, immigration, feeding, and so forth, and the function f(x) obeys the following conditions:

(C1) f(0)=b>1,
(C2) f∈C1([0,∞)) and f(x) is decreasing on [0,∞),
(C3) xf(x) is increasing and bounded.

The condition in (C1) is a generic one in the absence of stocking; that is, if b≤1 and hn=0, then there is no long-term survival regardless of the initial density of the population.

This paper is organized as follows. In Section 2, we give some preliminary results concerning local stability, boundedness, and global stability of (2) when the stocking sequence is 1-periodic, that is, when hn=h>0 for all n∈ℕ. In Section 3, the period of the stocking sequence is taken to be larger than one. A compact invariant region is established and a characterization of the periodic solutions is given. Also, the global asymptotic behavior of solutions is investigated when p=2. As a particular case of (2), we discuss Pielou's equation with delay one in Section 4.

## 2. Preliminary Results: The Autonomous Case

In this section, we focus on the autonomous case, that is, hj=h>0 for all j=0,1,…,p-1. Thus, (2) becomes (3)xn+1=xnf(xn-1)+h. Some results concerning (3) can be found in the literature [21]; however, for the sake of completeness and for usage in the nonautonomous case, we give the following preliminary results.

### 2.1. Local Stability and Boundedness

Equation (3) has two equilibrium solutions at h=0, namely, 0 and f-1(1). For h>0, the origin slides downward to become negative, while the other equilibrium stays positive and slides upward. This fact becomes clear when we write t=tf(t)+h as 1-h/t=f(t): the left-hand side is increasing in t while the right-hand side is decreasing. Thus, we have only one positive equilibrium in the positive quadrant, which we denote in the sequel by x-2,h.
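The uniqueness argument above (an increasing left-hand side against a decreasing right-hand side) also gives a practical way to compute x-2,h by bisection. A minimal Python sketch, assuming the illustrative choice f(t)=b/(1+t) (the family used later in Section 4) with arbitrary values b=2.5 and h=0.7:

```python
# Sketch: locate the unique positive equilibrium of x_{n+1} = x_n f(x_{n-1}) + h,
# i.e. the unique positive root of g(t) = t*f(t) + h - t.  The recruitment
# f(t) = b/(1+t) and the values b = 2.5, h = 0.7 are illustrative assumptions.
from math import sqrt

b, h = 2.5, 0.7

def f(t):
    return b / (1.0 + t)                 # decreasing, with f(0) = b > 1

def equilibrium(lo=1e-9, hi=1e6, tol=1e-12):
    g = lambda t: t * f(t) + h - t       # g(lo) > 0 and g(hi) < 0, so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# For this particular f the root is also available in closed form (cf. Section 4):
exact = 0.5 * (b + h - 1) + 0.5 * sqrt((b + h - 1) ** 2 + 4 * h)
print(abs(equilibrium() - exact) < 1e-9)   # prints True
```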
Since x-2,h is positive and increasing in h, f(x-2,h)<1 for all h>0. The linearized equation associated with (3) at a fixed point x- is given by (4)yn+1-f(x-)yn-x-f′(x-)yn-1=0. Define p:=f(x-) and q:=-x-f′(x-). For x-=x-2,h, we have 0<p<1 and q is nonnegative. The roots of λ²-pλ+q=0 determine the local stability of our equilibrium point. Since λj,h=(1/2)(p+(-1)^j√(p²-4q)), j=1,2, x-2,h starts as stable at h=0 and stays stable as long as q<1. Figure 1 clarifies the relationship between p, q and the magnitude of λj,h. We summarize these facts in the following proposition.

Figure 1 This figure shows the magnitude of the characteristic roots of (4) depending on the location of the (p,q) values, where p=f(x-) and q=-x-f′(x-). Also, |λ1,h|>1 and |λ2,h|>1 when p,q>1; |λ1,h|<1 and |λ2,h|<1 when p<1 and q<1.

Proposition 1. Assume that conditions (C1) to (C3) are satisfied and h>0. The positive equilibrium x-2,h of (3) is locally asymptotically stable.

Proof. Since x-2,h>x-2,0, we have p<1. Also, since F(t)=tf(t) is increasing, we obtain F′(t)=tf′(t)+f(t)>0 and consequently -q+p>0. Thus, we have q<p<1. Now, Figure 1 makes the rest of the proof clear.

It is obvious that xk≥h for all k≥1. On the other hand, since (5)xn+1=xn-1f(xn-1)f(xn-2)+hf(xn-1)+h≤xn-1f(xn-1)b+hb+h, the boundedness of y=tf(t) assures the boundedness of all solutions of (2).

### 2.2. Oscillations and Global Stability

A solution of (3) is called oscillatory if it is neither eventually less than nor eventually larger than x-2,h [11]. Also, one can consider oscillations about a curve [20]. A solution {xn} of (3) is called oscillatory about a curve H(x,y)=0 if the sequence {un=(xn-1,xn)} does not eventually stay on one side of the curve. The latter definition can be more convenient in some cases; however, in (3), both are equivalent when we consider H(x,y)=y-x, as we show in the following result.

Proposition 2. A solution of (3) is oscillatory if and only if it is oscillatory about the curve y=x.

Proof.
Assume that {xn} oscillates about x-2,h, but it is not oscillatory about y=x. Then {xn} is either eventually increasing or eventually decreasing, which contradicts the assumption that {xn} is oscillatory about x-2,h. Conversely, suppose {(xn-1,xn)} oscillates about y=x, but {xn} does not oscillate about x-2,h. First, we consider the case xn≤x-2,h for all n≥n0. If xm>xm-1 for some m>n0, then f(xm)<f(xm-1) and consequently (6)xm+1=xmf(xm-1)+h>xmf(xm)+h>xm. So, we can construct an eventually increasing sequence, which contradicts our assumption. If xm≤xm-1 for some m>n0, then xm+1≤xmf(xm)+h. Thus, either xm+1≤xm, and the induction leads to a decreasing sequence that must converge, which is not possible, or xm+1>xm, and we go back to the first scenario. Finally, the case xn≥x-2,h for all n≥n0 can be handled similarly.

Next, we define the map (7)T(x,y)=(y,yf(x)+h). The map T portrays the solutions of (3) geometrically in the nonnegative quadrant, and therefore, it plays a prominent role in the sequel. Here, we use the nonnegative quadrant to denote the union of the positive quadrant with the axes on its boundary. By applying the map T on the regions above and below the curve y=x, one can observe that a nonequilibrium solution of (3) must be oscillatory. Also, using the map T, one can observe that stocking increases the frequency of oscillations in the following sense: the semicycles in the absence of stocking are longer than the semicycles in the presence of stocking, where a semicycle denotes a string of consecutive terms above or below the equilibrium.

Since solutions of (3) are bounded, we define (8)S:=limsup xn, I:=liminf xn. From the equation xn+2=xnf(xn)f(xn-1)+hf(xn)+h and using the fact that tf(t) is increasing, we obtain (9)S≤Sf(S)f(I)+hf(I)+h, I≥If(I)f(S)+hf(S)+h. When h>0, we have S≥I>0. So, we can multiply the first inequality by I and the second one by S to obtain (10)S(f(S)+1)≤I(f(I)+1). Since t(f(t)+1) is increasing, we obtain I=S.
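The limsup/liminf argument above can be checked against a direct simulation. A sketch (the recruitment f(t)=b/(1+t) and the values b=2.5, h=0.7, and the initial pairs are illustrative assumptions, not from the paper):

```python
# Sketch: orbits of x_{n+1} = x_n f(x_{n-1}) + h from different initial data
# all converge to the unique positive equilibrium (I = S in the notation above).
# f(t) = b/(1+t) and the constants b = 2.5, h = 0.7 are illustrative choices.
from math import sqrt

b, h = 2.5, 0.7
f = lambda t: b / (1.0 + t)
x_eq = 0.5 * (b + h - 1) + 0.5 * sqrt((b + h - 1) ** 2 + 4 * h)  # positive equilibrium

finals = []
for x_prev, x_cur in [(0.1, 0.2), (5.0, 1.0), (40.0, 60.0)]:
    for _ in range(500):
        x_prev, x_cur = x_cur, x_cur * f(x_prev) + h   # one step of the recurrence
    finals.append(x_cur)

print(all(abs(v - x_eq) < 1e-9 for v in finals))   # prints True
```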
This approach was used by Camouzis and Ladas in [22], and it was used by Nyerges in [21] to prove that x-2,h is globally attractive. This fact, together with the local stability established in Proposition 1, shows the global asymptotic stability of x-2,h, as we summarize in the following proposition.

Proposition 3. The equilibrium solution x-2,h of (3) is globally asymptotically stable.

Next, it is obvious that the positive quadrant forms an invariant set for (3); however, since solutions are bounded, we are interested in a bounded invariant set that can be developed to serve us in the periodic case. Notice that by invariance here we always mean forward invariance; that is, Rh is an invariant of (3) if T(x,y)∈Rh for all (x,y)∈Rh. To establish the existence of a bounded invariant region, we need to keep in mind the following simple fact.

Proposition 4. There exists a finite constant ch≥h such that Gh(t)=(bt+h)f(t)≤bch for all t≥0. Furthermore, ch can be taken as ch:=(1/b)sup_t Gh(t).

Proof. Use the fact that tf(t) is bounded and f(t) is decreasing with f(0)=b and lim_{t→∞} f(t)=0 to obtain the result.

Next, define the curves Γj, j=0,1,2,3,4, to be the line segments that connect the points (0,0), (0,h), (ch,bch+h), (bch+h,bch+h), (bch+h,0), and (0,0), respectively. Now, define Rh to be the region bounded by the curves Γj, j=0,…,4, including the boundary; then the following result gives a bounded invariant of (3). Here, it is worth mentioning that Γ0 shrinks to a point at h=0; however, our notation and arguments about the invariant region remain valid, except that the boundary of Rh becomes a quadrilateral rather than a pentagon.

Theorem 5. The region Rh as defined above gives a compact invariant for (3).

Proof. Consider the map T(x,y) as defined in (7). T is one-to-one on the positive quadrant. Thus, all we need is to test T on the boundary of Rh. It is a straightforward computation to find that T(Γ0)⊆Γ1.
Since horizontal line segments are mapped to vertical line segments under T, we test the end points of Γ2 to find (11)T(ch,bch+h)=(bch+h,(bch+h)f(ch)+h), T(bch+h,bch+h)=(bch+h,(bch+h)f(bch+h)+h). By the choice of ch given in Proposition 4, we have (12)(bch+h)f(bch+h)+h≤(bch+h)f(ch)+h≤bch+h. Thus, T(Γ2)⊂Γ3. Next, T(Γ3)⊂Rh and T(Γ4)=(0,h) are straightforward to observe. Finally, we show that T(Γ1)⊂Rh. For 0≤t≤ch, we have (13)T(t,bt+h)=(bt+h,(bt+h)f(t)+h); however, h≤bt+h≤bch+h and (bt+h)f(t)+h≤bch+h by the choice of ch, which completes the proof. Figure 2 illustrates the region Rh and its image under the map T when (bt+h)f(t) is increasing.

Figure 2 The figure on the left shows the choice of the compact region Rh when y=(bt+h)f(t) is increasing, and the one on the right shows T(Rh) with blue boundary inside Rh.

## 3. Periodic Stocking

In this section, we force periodic stocking on (3) to obtain (14)xn+1=xnf(xn-1)+hn, where hn is a p-periodic sequence of stocking quotas, and p denotes the minimal period. Observe that some consecutive values of the stocking sequence can be zero; however, it is natural to assume that ∑_{j=0}^{p-1} hj>0. As in the constant case, we associate (14) with a p-periodic sequence of two-dimensional maps that we use in the sequel, namely {Tj, j=0,1,…,p-1}, where Tj(x,y)=(y,yf(x)+hj). It is obvious that if we replace h by hj in Theorem 5, then Rhj forms a compact invariant region for the individual map Tj, which enables us to build suitable machinery for establishing the existence of a periodic solution. It is convenient now to develop the notation of the previous section so that it suits the periodic case. We denote the line segments that form the boundary of Rhj by Γj,i, i=0,…,4, where Γj,i corresponds to Γi in the autonomous case and is associated with the individual map Tj. Also, the constant ch in Proposition 4 will be replaced by chj, again associated with the individual map Tj.

### 3.1. Existence of a Periodic Solution

We start by establishing a compact invariant region for (14). Define (15)hm:=max{h0,h1,…,hp-1}, cm:=max{chj: j=0,…,p-1}, where chj is as taken in Proposition 4, that is, chj=(1/b)sup_t Ghj(t); then use hm and cm to define the region Rhm as in the paragraph preceding Theorem 5. Now, we have the following result.

Lemma 6. Consider (14) together with the associated p-periodic sequence of maps {Tj}.
Each of the following holds true. (i) One has Rhi⊆Rhj whenever hi≤hj. (ii) Rhm is a compact invariant for each individual map Tj. (iii) Rhm is a compact invariant for the map T^:=Tp-1∘Tp-2∘⋯∘T0.

Proof. (i) When hi≤hj, we obtain Ghi(t)≤Ghj(t) for all t≥0. Thus, chi≤chj, and the result becomes obvious from Proposition 4 and the geometric structure of the regions Rhi and Rhj. To prove (ii), let (x,y)∈Rhm; we show that Tj(x,y)∈Rhm. Since (16)Tj(x,y)=(y,yf(x)+hj)=(y,yf(x)+hm)-(0,hm-hj)=Tm(x,y)-(0,hm-hj), the first component of Tj(x,y) is the same as the first component of Tm(x,y), and the second component of Tj(x,y) is lower than the second component of Tm(x,y). Now, the fact that Tm(x,y)∈Rhm and the geometric structure of Rhm assure that Tj(x,y)∈Rhm. Finally, (iii) follows from (ii).

Periodic stocking (or harvesting) has the effect of forcing population cycles to evolve and become multiples of the stocking/harvesting period, as we show in the following result, which applies to equations more general than (14).

Theorem 7. Consider the general difference equation xn+1=F(xn,xn-1,…,xn-k) with p-periodic stocking (or harvesting). If a periodic solution exists, then the period is a multiple of p.

Proof. The proof is by contradiction: suppose that we have an r-periodic solution of the equation xn+1=F(xn,xn-1,…,xn-k)+hn for some r that is not a multiple of p. Then the greatest common divisor d:=gcd(r,p) is not p. Define the maps Fi:=F+hi, i=0,1,…,p-1; then for each 0≤i≤d-1, the maps {Fkd+i, k=0,1,…,p/d-1} must agree at the point Xi:=(xi,xi-1,…,xi-k), where the components xi-k,xi+1-k,…,xi are consecutive elements of the r-periodic solution. This implies (17)hi=hd+i=h2d+i=⋯=h(p/d-1)d+i for all i=0,1,…,d-1, which contradicts the minimality of the period of the p-periodic difference equation.

Theorem 7 shows that (14) has no equilibrium solutions, and therefore, our previous notion of characterizing oscillatory solutions based on oscillations about y=x is the valid one here.
Thus, solutions of (14) are oscillatory about y=x because they cannot be monotonic. Although it is natural for fluctuations in the environment to create fluctuations in the population, we find it appropriate here to connect the loosely defined term “fluctuation” with the mathematically well-defined term “oscillation.” Next, we use the Brouwer fixed-point theorem [23] (page 51) to prove the existence of a periodic solution of (14).

Lemma 8 (Brouwer fixed-point theorem [23], page 51). Let M be a nonempty, convex, and compact subset of ℝn. If T:M→M is continuous, then T has a fixed point in M.

Theorem 9. The p-periodic difference equation in (14) has a p-periodic solution.

Proof. Consider the map T^:=Tp-1∘Tp-2∘⋯∘T0; then using Lemma 6, we obtain T^:Rhm→Rhm. Furthermore, Rhm is nonempty, compact, and obviously convex. So, by Lemma 8, T^ has a fixed point in Rhm. This fixed point establishes a periodic solution of (14) with minimal period that divides p; however, Theorem 7 shows that the period must be p.

### 3.2. Global Attractivity of the Periodic Solution When p=2

Consider the periodicity of (14) to be p=2 and suppose h0+h1≠0. We partition a solution of (14) into two subsequences, the one with even indices {x2n} and the one with odd indices {x2n+1}.
Thus, we have (18)x2n+1=x2nf(x2n-1)+h0,x2n+2=x2n+1f(x2n)+h1.Since the solutions are bounded, we define(19)liminf{x2n+i}=Ii,limsup{x2n+i}=Si,i=0,1.Now, the second iterate of (18) gives us (20)x2n+2=x2nf(x2n)f(x2n-1)+h0f(x2n)+h1,(21)x2n+3=x2n+1f(x2n+1)f(x2n)+h1f(x2n+1)+h0.Use the fact thatf(t) is decreasing and tf(t) is increasing in (20) to obtain (22)S0≤S0f(S0)f(I1)+h0f(I0)+h1,(23)I0≥I0f(I0)f(S1)+h0f(S0)+h1.Also, (21) gives us (24)S1≤S1f(S1)f(I0)+h1f(I1)+h0,(25)I1≥I1f(I1)f(S0)+h1f(S1)+h0.Multiply inequality (22) by I0 and inequality (23) by S0 to obtain (26)S0I0f(I0)f(S1)+S0(h0f(S0)+h1)≤I0S0f(S0)f(I1)+I0(h0f(I0)+h1).SinceI0(h0f(I0)+h1)≤S0(h0f(S0)+h1), we obtain (27)f(I0)f(S1)≤f(S0)f(I1).Also, multiply inequality (24) by I1 and inequality (25) by S1 to obtain (28)S1I1f(I1)f(S0)+S1(h1f(S1)+h0)≤I1S1f(S1)f(I0)+I1(h1f(I1)+h0).SinceI1(h1f(I1)+h0)≤S1(h1f(S1)+h0), we obtain (29)f(I1)f(S0)≤f(I0)f(S1).Using inequalities (27) and (29), we obtain the following result.Lemma 10. ConsiderI0,I1,S0,S1 as defined in (19); then f(I0)f(S1)=f(I1)f(S0).Next, we give the following result.Theorem 11. Forp=2, the 2-periodic solution of (14) is a global attractor.Proof. Use the result of Lemma10 in inequality (26) to obtain (30)S0(h0f(S0)+h1)≤I0(h0f(I0)+h1). Sinceg(t)=t(h0f(t)+h1) is increasing and I0≤S0, we must have I0=S0. Similarly, use the result of Lemma 10 in inequality (28) to obtain (31)S1(h1f(S1)+h0)≤I1(h1f(I1)+h0), and consequently S1=I1. Hence, Ii=Si,i=0,1, and the proof is complete.Remark 12. Observe that the approach of this section proves not only the global attractivity of thep-periodic solution but also its existence; however, Theorem 7 is still significant here because it proves the minimality of the period. Also, establishing the compact invariant region in Lemma 6 deserves embracing regardless of the global attractivity of the periodic solution. Finally, proving the global attractivity for general p will be the topic of some future work. ## 3.1. 
Existence of a Periodic Solution We start by establishing a compact invariant region for (14). Define (15)hm:=max{h0,h1,…,hp-1},cm:=max{chj:j=0,…,p-1}, where chj is as taken in Proposition 4; that is, chj=(1/b)suptGhj(t); then use hm and cm to define the region Rhm as in the paragraph preceding Theorem 5. Now, we have the following result.Lemma 6. Consider (14) together with the associated p-periodic sequence of maps {Tj}. Each of the following holds true.(i) One hasRhi⊆Rhj whenever hi≤hj.(ii) Rhm is a compact invariant for each individual map Tj.(iii) Rhm is a compact invariant for the map T^:=Tp-1∘Tp-2∘⋯∘T0.Proof. (i) Whenhi≤hj, we obtain Ghi(t)≤Ghj(t) for all t≥0. Thus, chi≤chj, and the result becomes obvious from Proposition 4 and the geometric structure of the regions Rhi and Rhj. To prove (ii), let (x,y)∈Rhm, we show that Tj(x,y)∈Rhm. Since (16)Tj(x,y)=(y,yf(x)+hj)=(y,yf(x)+hm)-(0,hm-hj)=Tm(x,y)-(0,hm-hj), then the first component of Tj(x,y) is the same as the first component of Tm(x,y) and the second component of Tj(x,y) is lower than the second component of Tm(x,y). Now, the fact that Tm(x,y)∈Rhm and the geometric structure of Rhm assures that Tj(x,y)∈Rhm. Finally, (iii) follows from (ii).Periodic stocking (or harvesting) has the effect of forcing population cycles to evolve and become multiples of the stocking/harvesting period as we show in the following result, which is more general than (14).Theorem 7. Consider the general difference equationxn+1=F(xn,xn-1,…,xn-k) with p-periodic stocking (or harvesting). If a periodic solution exists, then the period is a multiple of p.Proof. The proof is a contradiction; suppose that we have anr-periodic solution of the equation xn+1=F(xn,xn-1,…,xn-k)+hn for some r that is not a multiple of p. Then, the greatest common divisor between r and p (d:=gcd(r,p)) is not p. 
Define the maps Fi:=F+hi,i=0,1,…,p-1; then for each 0≤i≤d-1, the maps {Fkd+i,k=0,1,…,p/d-1} must agree at the point Xi:=(xi,xi-1,…,xi-k), where the components xi-k,xi+1-k,…,xi are consecutive elements of the p-periodic solution. This implies (17)hi=hd+i=h2d+i=⋯=h(p/d-1)d+i for all i=0,1,…,d-1, which contradicts the minimality of the period of the p-periodic difference equation.Theorem7 shows that (14) has no equilibrium solutions, and therefore, our previous notion of characterizing oscillatory solutions based on the oscillations about y=x is the valid one here. Thus, solutions of (14) are oscillatory about y=x because they cannot be monotonic. Although it is natural for fluctuations in the environment to create fluctuations in the population, we find it appropriate here to connect the loosely-defined term “fluctuation" with the mathematically well-defined term “oscillation.” Next, we use the Brouwer fixed theorem [23] (page 51) to prove the existence of a periodic solution of (14).Lemma 8 (Brouwer fixed-point theorem [23], page 51). LetM be a nonempty, convex, and compact subset of ℝn. If T:M→M is continuous, then T has a fixed point in M.Theorem 9. Thep-periodic difference equation in (14) has a p-periodic solution.Proof. Consider the mapT^:=Tp-1∘Tp-2∘⋯∘T0, then using Lemma 6, we obtain T^:Rhm→Rhm. Furthermore, Rhm is nonempty, compact, and obviously convex. So, by Lemma 8, T^ has a fixed point in Rhm. This fixed point establishes a periodic solution of (14) with minimal period that divides p; however, Theorem 7 shows that the period must be p. ## 3.2. Global Attractivity of the Periodic Solution Whenp=2 Consider the periodicity of (14) to be p=2 and suppose h0+h1≠0. We partition the solutions of (14) into two subsequences, the one with even indices {x2n} and the one with odd indices {x2n}. 
Thus, we have (18)x2n+1=x2nf(x2n-1)+h0,x2n+2=x2n+1f(x2n)+h1.Since the solutions are bounded, we define(19)liminf{x2n+i}=Ii,limsup{x2n+i}=Si,i=0,1.Now, the second iterate of (18) gives us (20)x2n+2=x2nf(x2n)f(x2n-1)+h0f(x2n)+h1,(21)x2n+3=x2n+1f(x2n+1)f(x2n)+h1f(x2n+1)+h0.Use the fact thatf(t) is decreasing and tf(t) is increasing in (20) to obtain (22)S0≤S0f(S0)f(I1)+h0f(I0)+h1,(23)I0≥I0f(I0)f(S1)+h0f(S0)+h1.Also, (21) gives us (24)S1≤S1f(S1)f(I0)+h1f(I1)+h0,(25)I1≥I1f(I1)f(S0)+h1f(S1)+h0.Multiply inequality (22) by I0 and inequality (23) by S0 to obtain (26)S0I0f(I0)f(S1)+S0(h0f(S0)+h1)≤I0S0f(S0)f(I1)+I0(h0f(I0)+h1).SinceI0(h0f(I0)+h1)≤S0(h0f(S0)+h1), we obtain (27)f(I0)f(S1)≤f(S0)f(I1).Also, multiply inequality (24) by I1 and inequality (25) by S1 to obtain (28)S1I1f(I1)f(S0)+S1(h1f(S1)+h0)≤I1S1f(S1)f(I0)+I1(h1f(I1)+h0).SinceI1(h1f(I1)+h0)≤S1(h1f(S1)+h0), we obtain (29)f(I1)f(S0)≤f(I0)f(S1).Using inequalities (27) and (29), we obtain the following result.Lemma 10. ConsiderI0,I1,S0,S1 as defined in (19); then f(I0)f(S1)=f(I1)f(S0).Next, we give the following result.Theorem 11. Forp=2, the 2-periodic solution of (14) is a global attractor.Proof. Use the result of Lemma10 in inequality (26) to obtain (30)S0(h0f(S0)+h1)≤I0(h0f(I0)+h1). Sinceg(t)=t(h0f(t)+h1) is increasing and I0≤S0, we must have I0=S0. Similarly, use the result of Lemma 10 in inequality (28) to obtain (31)S1(h1f(S1)+h0)≤I1(h1f(I1)+h0), and consequently S1=I1. Hence, Ii=Si,i=0,1, and the proof is complete.Remark 12. Observe that the approach of this section proves not only the global attractivity of thep-periodic solution but also its existence; however, Theorem 7 is still significant here because it proves the minimality of the period. Also, establishing the compact invariant region in Lemma 6 deserves embracing regardless of the global attractivity of the periodic solution. Finally, proving the global attractivity for general p will be the topic of some future work. ## 4. 
Pielou's Equation with Stocking As an illustrative example of our results, we consider the function f in (14) to be f(t)=b/(1+t). It is worth mentioning that in the absence of stocking, Pielou ([24], page 80) suggested taking f(xn-m)=μK/(K+(μ-1)xn-m) to account for certain fluctuating populations, which cannot be modeled by the Beverton-Holt equation. So, here we are dealing with the dimensionless Pielou's equation yn+1=byn/(1+yn-1), which takes the following form after stocking is introduced: (32) yn+1=byn/(1+yn-1)+hn. When hn=0, (32) has the positive equilibrium x_{2,0}=b-1, which is globally asymptotically stable. When hn=h>0, x_{2,h}=(1/2)(b+h-1)+√((1/4)(b+h-1)^2+h) inherits the global asymptotic stability of x_{2,0}, as shown in [21]. Now, consider {hn} to be 2-periodic. To find the 2-periodic solution assured by Theorem 9, we substitute y-1=x and y0=y in (32) to obtain (33) (y-h0)(1+y)=bx, (x-h1)(1+x)=by. Now, the solution is obvious graphically (it is the point of intersection between the two curves in the positive quadrant). However, the solution is not simple to write explicitly, and therefore, we proceed by choosing h0=b+1/(b-1) and h1=b. In this case, the 2-periodic solution {x-,y-} is given by (34) x-=b-1+b^(3/2)/√(b-1), y-=b+1/(b-1)+√(b(b-1)). Next, use the function f(t)=b/(1+t) in Section 3.2 and follow the same steps to find (35) liminf{y2n}=I0=S0=limsup{y2n}, liminf{y2n+1}=I1=S1=limsup{y2n+1}. Thus, the odd iterates converge to a point, say u, while the even iterates converge to a point, say v. Substitute u and v in (18); then compare it with (33) to find that {u,v} is indeed {x-,y-}. Finally, Figure 3 shows the convergence to the 2-cycle for the specific values of the parameters. Figure 3 This graph shows the stable 2-cycle for the 2-periodic equation in (32) when h0=b+1/(b-1) and h1=b, where b is fixed at 5/2. Another interesting notion that can be observed here is the resonance of the solutions of (32). 
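Before examining the resonance quantitatively, both the attracting 2-cycle in (34) and the resonance effect can be checked numerically. The following is a minimal sketch, not part of the original paper; the helper name `iterate`, the starting values, and the step count are illustrative choices.

```python
# Numerically verify the attracting 2-cycle of Pielou's equation with
# 2-periodic stocking, y[n+1] = b*y[n]/(1 + y[n-1]) + h[n mod 2],
# for b = 5/2, h0 = b + 1/(b-1), h1 = b, and compare the cycle average
# with the equilibrium obtained under the averaged constant stocking.
import math

b = 5 / 2
h = [b + 1 / (b - 1), b]  # the 2-periodic stocking sequence (h0, h1)

# Closed-form 2-cycle values from (34)
x_bar = b - 1 + b ** 1.5 / math.sqrt(b - 1)
y_bar = b + 1 / (b - 1) + math.sqrt(b * (b - 1))

def iterate(stock, steps=5000, y_prev=1.0, y_cur=1.0):
    """Iterate y[n+1] = b*y[n]/(1 + y[n-1]) + stock(n); return the last two iterates."""
    for n in range(steps):
        y_prev, y_cur = y_cur, b * y_cur / (1 + y_prev) + stock(n)
    return y_prev, y_cur

# Periodic stocking: the orbit settles on the 2-cycle {x_bar, y_bar}.
u, v = iterate(lambda n: h[n % 2])

# Constant stocking equal to the average of h0 and h1: the orbit settles
# on the equilibrium given in closed form by (37).
_, eq = iterate(lambda n: (h[0] + h[1]) / 2)
eq_37 = (4 * b**2 - 6 * b + 3
         + math.sqrt(16 * b**4 - 32 * b**3 + 28 * b**2 - 12 * b + 1)) / (4 * (b - 1))

print(sorted((u, v)), sorted((x_bar, y_bar)))  # the two pairs agree
print((u + v) / 2, eq)  # the cycle average slightly exceeds the equilibrium
```

For b = 5/2 the gap between the cycle average and the constant-stocking equilibrium is small (on the order of 5e-3), which is exactly the resonance effect discussed below.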
The arithmetic average of the globally attracting 2-periodic solution is (36) xav:=(1/2)(x-+y-)=(1/2)(2b-1+1/(b-1)+b^(3/2)/√(b-1)+√(b(b-1))). On the other hand, when we take the constant stocking h=(1/2)(h0+h1)=b+1/(2(b-1)), we obtain the globally attracting equilibrium (37) x-=(4b^2-6b+3+√(16b^4-32b^3+28b^2-12b+1))/(4(b-1)). Figure 4 shows that xav>x-. Figure 4 This graph shows the average of the attracting 2-cycle (blue color) in contrast with the equilibrium that results from constant stocking equal to the average of h0 and h1, where h0=b+1/(b-1) and h1=b. ## 5. Conclusion and Discussion In this paper, we investigated the dynamics of the periodic difference equation xn+1=xnf(xn-1)+hn, where f(x) is differentiable and decreasing on [0,∞), while xf(x) is increasing and bounded. This equation can be used as a discrete model to represent contest competition in species with periodic stocking. We found that periodic stocking forces the existence of a periodic solution that has the same period as the stocking period. In addition to the unbounded invariant region given by the positive quadrant, we constructed a bounded invariant region, which we used to prove the existence of the periodic solution. Also, we proved that the periodic solution is globally attractive when the stocking period is 2. We conjecture that the periodic solution is globally attractive regardless of the stocking period. Although the steady state evolves into a periodic solution with the same period as the stocking, our results show that periodic stocking preserves its global attractivity. --- *Source: 101649-2013-10-27.xml*
# Correlation between Clinical and Histopathological Diagnoses in Oral Cavity Lesions: A 12-Year Retrospective Study **Authors:** Golnoush Farzinnia; Mehdi Sasannia; Shima Torabi; Fahimeh Rezazadeh; Alireza Ranjbaran; Azita Azad **Journal:** International Journal of Dentistry (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1016495 --- ## Abstract Objective. Proper diagnosis plays a key role in the treatment and prognosis of all diseases. Although histopathological diagnosis is still known as the gold standard, final diagnosis becomes difficult unless precise clinical descriptions are obtained. So, this study aimed to evaluate the concordance of the clinical and histopathological diagnoses of all oral and maxillofacial biopsy specimens over a 12-year period. Materials and Methods. Archive files and clinical findings related to 3001 patients who had been referred to the Department of Oral Pathology during a 12-year period were reviewed. The recorded information in the files included age, sex, lesion location, clinical and histopathological diagnoses, and specialty of the referring dentists. Results. Out of the 3001 cases included and reviewed in this study, 2167 cases (72.2%) were consistent between clinical and histopathologic diagnoses. Age, sex, and clinician’s specialty had no significant effect on diagnostic concordance (p values = 0.520, 0.310, and 0.281, respectively), whereas location and type of lesion did (p values = 0.040 and 0.022, respectively). In regard to location, the highest concordance of clinical and histopathologic diagnoses was observed in mouth floor lesions, and the lowest in gingival mucosa. In terms of lesion category, the highest and the lowest concordance rates belonged to white and red lesions and pigmented lesions, respectively. Conclusion. 
The results of the present study show that the consistency of clinical and histopathological diagnoses was three times greater than their inconsistency, and the accuracy of the clinicians was largely acceptable. --- ## Body ## 1. Introduction The oral cavity is a complex area located in the head and neck region and home to a diverse range of cysts, benign and malignant salivary gland tumors, and odontogenic and nonodontogenic neoplasms [1, 2]. Both the diagnosis and treatment of oral cavity lesions are known as integral parts of oral health care [3]. Moreover, it is well understood that early detection and treatment of these lesions greatly improve patients’ survival rates and quality of life [4]. Although each oral lesion has different characteristics and clinical features aiding in diagnosis, clinical diagnosis errors occur due to the similarities in clinical presentations, the lack of precise definitions for these characteristics, the incompatibility of signs and symptoms in patients, and the presence of multiple manifestations of a single lesion [5, 6]. Therefore, in order to minimize misdiagnoses and to achieve more accurate ones, it is necessary to consider the patients’ chief complaints, medical and dental history records, clinical manifestations, imaging techniques, and various laboratory tests, including biopsies with microscopic evaluations and blood tests [6]. Histopathologic examination, which is known as the gold standard in diagnostic oral pathology, is used to confirm the clinical diagnosis [7]. However, pathologists may encounter uncertainty when performing the histological examination under some circumstances, because various lesions may exhibit comparable microscopic views. Thus, the clinical examination can be considered an effective and important step for confirming pathology results and is quite useful in such situations [8]. 
Therefore, the initial clinical diagnosis made by clinicians must be accurate. Moreover, it should not miss any oral potentially malignant disorders (OPMDs) or malignant lesions, and a close collaboration between the clinician and the pathologist is required in this regard, in order to reach a definitive and correct diagnosis [2]. Various studies have previously investigated the concordance of clinical and pathologic diagnoses, reporting concordance rates of approximately 50–80% [3, 5–8]. Due to the discrepancy in the concordance rates between clinical and histopathological diagnoses reported by numerous studies performed in various places, the present study aimed to determine the rate of discrepancy between clinical and histopathological diagnoses. This research was done on the patients admitted to Shiraz dentistry school, with the hope that the obtained results would help identify weaknesses in the diagnosis of oral diseases and improve both diagnostic and treatment outcomes. ## 2. Materials and Methods This study was performed in the Faculty of Dentistry, Shiraz University of Medical Sciences, in accordance with all relevant principles of the Helsinki Declaration. All the included subjects signed informed consent forms, and ethical approval was obtained from the ethics committee of Shiraz University of Medical Sciences, Shiraz (IR.SUMS.DENTAL.REC.1398.123). In this retrospective study, the records of all oral lesions diagnosed between January 2006 and December 2018 were extracted from the archives of the Department of Oral Pathology. Clinical examinations had been performed and approved by oral medicine specialists and maxillofacial surgeons with sufficient skill in this field. 
For the purposes of this study, the census method was used to select the eligible subjects, and the exclusion criteria were as follows: records with inadequate information, biopsy samples without definite pathological reports, and lesions for which a clinical impression was not given. In the patients’ records, the following data were available: demographic data (age and gender), location of the lesion (mandible, maxilla, palate, alveolar mucosa, buccal mucosa, labial mucosa, ventral surface of tongue, dorsal surface of tongue, lateral surfaces of tongue, floor of mouth, gingiva, and lip), clinician’s specialty (oral medicine and oral surgery), and the clinical and pathological diagnoses of the lesions. All the included cases were subdivided into the following five groups based on the clinical manifestations: (1) ulcerative, vesicular, and bullous lesions; (2) red and white lesions; (3) pigmented lesions; (4) bone lesions, which were divided into either cystic or tumoral (benign/malignant) lesions; and (5) exophytic soft tissue lesions, which were divided into either reactive/inflammatory or tumoral (benign/malignant) lesions. This classification of lesions was done according to the textbook of oral diseases (Burket’s ORAL MEDICINE, 12th edition) [9]. The histopathological criteria for the final pathological diagnosis of each lesion were based on the textbook of Oral and Maxillofacial Pathology [10]. Finally, the samples with a similar diagnosis using both techniques were recorded as concordant between clinical and pathological diagnoses. The collected data from all groups were imported into Statistical Package for Social Sciences (SPSS) for Windows software, version 16.0 (SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the absolute and relative frequencies of the different lesions. The chi-square test was used to compare the categorical demographic variables among the groups. 
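As an illustration of the chi-square comparison just described, the test can be reproduced on the category-level counts that appear later in Table 3 of the Results. The sketch below is not the original analysis, which was run per record in SPSS; it computes the test statistic in pure Python and compares it to the upper 5% point of the chi-square distribution with 4 degrees of freedom, so the resulting statistic is only indicative and need not match the published p value exactly.

```python
# Chi-square test of independence: concordant vs. discordant diagnoses
# across the five lesion categories, using aggregated counts from Table 3.

# (total cases, concordant cases) per category, from Table 3
table3 = {
    "ulcerative/vesicular/bullous": (75, 42),
    "red and white": (519, 447),
    "pigmented": (34, 16),
    "exophytic soft tissue": (1326, 893),
    "bone": (1047, 769),
}

# Rows = categories, columns = (concordant, discordant)
observed = [[conc, total - conc] for total, conc in table3.values()]

grand = sum(map(sum, observed))  # 3001 cases in all
row_sums = [sum(row) for row in observed]
col_sums = [sum(row[j] for row in observed) for j in range(2)]

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# where E = (row total * column total) / grand total.
chi2 = sum(
    (observed[i][j] - row_sums[i] * col_sums[j] / grand) ** 2
    / (row_sums[i] * col_sums[j] / grand)
    for i in range(len(observed))
    for j in range(2)
)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # df = 4

CHI2_CRIT_4DF_05 = 9.488  # upper 5% point of the chi-square(4) distribution
print(f"chi2 = {chi2:.1f}, df = {df}, reject independence: {chi2 > CHI2_CRIT_4DF_05}")
```

With SciPy available, `scipy.stats.chi2_contingency(observed)` would return the same statistic together with an exact p value.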
The confidence interval was set to 95%, and p < 0.05 was considered statistically significant. ## 3. Results A total of 3001 clinical files were evaluated in the current study. In 2167 cases (72.2%), the clinical and pathological diagnoses were consistent. ### 3.1. Age and Sex Among all the biopsied cases, there were 1432 (47.7%) men and 1569 (52.3%) women. Moreover, 1058 (73.9%) of the male and 1109 (70.7%) of the female subjects had consistent clinical and pathological diagnoses. In addition, 2797 (93.2%) cases were in the second decade of life, making this the most prevalent age group. Apart from the tenth, ninth, and eighth decades (with a total of 9 cases), the sixth decade had the highest clinical-histological concordance (78%), and the fifth decade had the lowest (51.6%) (Table 1). Of note, there was no significant relationship between patients’ sex and age and concordance of clinical and histopathologic diagnoses (P values = 0.310 and 0.520, respectively).

Table 1 Concordance rate of clinical and histopathologic diagnosis based on age ranges.

| Decade (age range) | Total cases | Concordance N (%) | P value |
|---|---|---|---|
| 1 (0–9) | 4 | 2 (50%) | 0.520 |
| 2 (10–19) | 2797 | 2038 (72.6%) | |
| 3 (20–29) | 54 | 37 (68.5%) | |
| 4 (30–39) | 47 | 32 (68.1%) | |
| 5 (40–49) | 31 | 16 (51.6%) | |
| 6 (50–59) | 41 | 32 (78%) | |
| 7 (60–69) | 18 | 12 (66.7%) | |
| 8 (70–79) | 6 | 5 (83.3%) | |
| 9 (80–89) | 2 | 2 (100%) | |
| 10 (90–99) | 1 | 1 (100%) | |

### 3.2. Clinician’s Specialty Among the total subjects, 1428 (47.6%) cases were referred from the oral and maxillofacial medicine department, and 1573 (52.4%) from the oral and maxillofacial surgery department. As well, 75% of the referrals from the medicine department were consistent between clinical diagnosis and pathology, compared with 69.7% for the surgery department. No significant relationship was found between the clinician’s specialty and concordance of clinical and histopathologic diagnoses (P value = 0.281). ### 3.3. 
Location Of the 12 documented biopsy sites, the mandible was the most common, accounting for 770 (25.6%) cases, while the floor of the mouth was the least biopsied site (27 cases) yet had the highest rate of concordance (85.2%). Notably, the minimum rate of concordance was found for gingival lesions (66.1%). A significant relationship was also found between the lesion’s site and concordance of clinical and histological diagnoses (P value = 0.040) (Table 2).

Table 2 Concordance rate of clinical and histopathologic diagnosis based on location.

| Site of lesion | Total cases | Concordance N (%) | P value |
|---|---|---|---|
| Mandible | 770 | 516 (67%) | 0.040 |
| Maxilla | 495 | 346 (69.9%) | |
| Palate | 122 | 83 (68%) | |
| Alveolar mucosa | 80 | 56 (70%) | |
| Buccal mucosa | 479 | 399 (83.3%) | |
| Labial mucosa | 151 | 124 (82.1%) | |
| Ventral surface of tongue | 38 | 27 (71%) | |
| Dorsal surface of tongue | 100 | 77 (77%) | |
| Lateral surfaces of tongue | 190 | 130 (68.4%) | |
| Floor of mouth | 27 | 23 (85.2%) | |
| Gingiva | 410 | 271 (66.1%) | |
| Lip | 139 | 115 (82.7%) | |

### 3.4. Categories of Lesions As mentioned earlier, all the cases included in this study were divided into 5 categories (Table 3). Exophytic lesions, observed in 44.1% (n = 1326) of cases, were the most common category. Pigmented lesions were the least biopsied type, detected in only 1.1% (n = 34) of cases. Red and white lesions accounted for the highest rate of concordance (86.1%), and the lowest rate belonged to pigmented lesions (47.1%). As well, a significant relationship was found between the type of lesion and concordance of clinical and histological diagnoses (P value = 0.022). In this regard, the frequency and concordance rate of lesions in each category are shown in Table 4.

Table 3 Concordance rate of clinical and histopathologic diagnosis based on the type of lesions. 
| Category of lesion | Total cases | Concordance N (%) | P value |
|---|---|---|---|
| Ulcerative, vesicular, and bullous lesions | 75 | 42 (56%) | 0.022 |
| Red and white lesions | 519 | 447 (86.1%) | |
| Pigmented lesions | 34 | 16 (47.1%) | |
| Exophytic soft tissue lesions | 1326 | 893 (67.3%) | |
| Bone lesions | 1047 | 769 (73.5%) | |

Table 4 Frequency and concordance rate of clinical and histopathologic diagnosis in each category of lesions.

| Lesion | Total cases | Concordance N (%) |
|---|---|---|
| **Ulcerative, vesicular, and bullous lesions** | | |
| Pemphigus vulgaris | 46 | 34 (73.9%) |
| Pemphigoid | 15 | 3 (20%) |
| Eosinophilic ulcers of tongue | 9 | 3 (33.3%) |
| Traumatic ulcers | 3 | 0 (0%) |
| Recurrent aphthous stomatitis | 1 | 1 (100%) |
| Erythema multiforme | 1 | 1 (100%) |
| **White and red lesions** | | |
| Lichen planus | 449 | 398 (88.6%) |
| Leukoplakia | 63 | 49 (77.8%) |
| Oral erythroplakia | 4 | 0 (0%) |
| Lupus erythematosus | 2 | 0 (0%) |
| Hairy leukoplakia | 1 | 0 (0%) |
| **Pigmented lesions** | | |
| Oral/Labial melanotic macule | 14 | 11 (78.6%) |
| Inflammatory hyperpigmentation | 6 | 0 (0%) |
| Melanocytic nevus | 6 | 2 (33.3%) |
| Oral melanoacanthoma | 4 | 2 (50%) |
| Malignant melanoma | 3 | 1 (33.3%) |
| Melanosis | 1 | 0 (0%) |
| **Exophytic soft tissue lesions¹** | | |
| Reactive/Inflammatory lesions | 1100 | 742 (67.4%) |
| Fibroma | 361 | 247 (68.4%) |
| Pyogenic granuloma | 252 | 137 (54.4%) |
| Mucocele | 174 | 160 (91.9%) |
| Epulis fissuratum | 125 | 109 (87.2%) |
| Peripheral giant cell granuloma | 124 | 57 (46%) |
| Peripheral odontogenic fibroma | 38 | 18 (47.4%) |
| Epulis granulomatosa | 14 | 9 (64.3%) |
| Neurofibroma | 12 | 5 (41.7%) |
| Benign tumoral lesions | 83 | 43 (52%) |
| Oral papilloma | 50 | 33 (66%) |
| Pleomorphic adenoma | 15 | 5 (33.3%) |
| Lipoma | 5 | 0 (0%) |
| Schwannoma | 5 | 1 (20%) |
| Hemangioma | 4 | 0 (0%) |
| Traumatic neuroma | 2 | 2 (100%) |
| Lymphangioma | 1 | 1 (100%) |
| Basal cell adenoma | 1 | 1 (100%) |
| Malignant tumoral lesions | 143 | 108 (75.5%) |
| Squamous cell carcinoma | 132 | 104 (78.8%) |
| Basal cell carcinoma | 3 | 1 (33.3%) |
| Lymphoma | 3 | 0 (0%) |
| Mucoepidermoid carcinoma | 3 | 1 (33.3%) |
| Adenoid cystic carcinoma | 2 | 2 (100%) |
| **Bone lesions²** | | |
| Cystic lesions | 861 | 669 (77.7%) |
| Radicular cyst | 420 | 358 (85.2%) |
| Odontogenic keratocyst | 184 | 115 (62.5%) |
| Dentigerous cyst | 173 | 134 (77.4%) |
| Residual cyst | 54 | 36 (66.7%) |
| Nasopalatine canal cyst | 18 | 18 (100%) |
| Traumatic bone cyst | 11 | 7 (63.6%) |
| Aneurysmal bone cyst | 1 | 1 (100%) |
| Benign tumoral lesions | 133 | 71 (53.4%) |
| Central giant cell granuloma | 50 | 26 (52%) |
| Ameloblastoma | 33 | 16 (48.5%) |
| Odontoma | 15 | 13 (86.7%) |
| Osteoma | 11 | 7 (63.6%) |
| Cementoblastoma | 7 | 4 (57.1%) |
| Adenomatoid odontogenic tumor | 7 | 1 (14.3%) |
| Central odontogenic fibroma | 1 | 1 (100%) |
| Odontogenic myxoma | 7 | 3 (42.9%) |
| Ameloblastic fibroma | 1 | 0 (0%) |
| Malignant tumoral lesions | 14 | 9 (64.3%) |
| Osteosarcoma | 11 | 6 (54.5%) |
| Fibrosarcoma | 2 | 2 (100%) |
| Chondrosarcoma | 1 | 1 (100%) |
| Other* | 39 | 20 (51.2%) |

1. Exophytic lesions were subdivided into two subgroups: reactive/inflammatory and tumoral lesions (malignant and benign tumors). 2. Bone lesions were subdivided into two subgroups: cystic and tumoral lesions (malignant and benign tumors). *Bone samples that were not included in either cystic or tumoral lesions were named “other”. This category includes developmental lesions of bone (fibrous dysplasia, ossifying fibroma, and periapical cemento-osseous dysplasia).

## 4. Discussion In this study, the rate of concordance between clinical and histopathological diagnoses was examined, along with the prevalence of each biopsied lesion submitted to the Department of Oral Pathology, Shiraz dentistry school. Accordingly, these considerations are valuable for improving the existing knowledge about the perception and behavior of dentists and dental students regarding the necessity of performing the histopathological examination. In the present study, the rate of clinicopathological concordance was 72.2%, which is similar to those obtained in studies by Saravani et al. [11] and Emamverdizadeh et al. [8], who calculated the overall concordance rate as 70.1% and 72.3%, respectively. However, our concordance rate was low when compared to studies conducted by Tatli et al. [2] and Forman et al. 
[12] (93.3% and 94.4%, respectively). This can be attributed to the larger sample size and the greater diversity of lesions in our study. In a study by Soyele et al. [13], clinicopathological reports of 592 biopsied cases during the period of 2008–2017 were retrieved and analyzed. Accordingly, they recorded the concordance rate as 54.6%, which was similar to the result of Poudel et al.’s study [7] (54.6%). These discrepancies could be due to remarkable differences in the studies’ methodologies, such as the clinicians’ and the pathologists’ skills, the accuracy of biopsy, the sample size, and the conditions under which the specimens were transferred to the laboratory. Because some lesions occur more frequently in one sex or at certain ages, age and sex can be considered influential factors in making a better differential diagnosis. However, in the present study, no significant relationship was observed between concordance rate and sex or age. These findings are in line with those of Saravani et al.’s study [11]. However, in Forman et al.’s research [12], age was found to be significantly associated with accuracy between clinical and histological diagnoses. Furthermore, in the current study, the highest concordance rate after the tenth, ninth, and eighth decades (with a total of 9 cases among almost 3000) was observed in the sixth decade of life, which is almost consistent with other similar reports demonstrating that the highest concordance rates were observed in the seventh decade and older [13–17]. The reason for the greater concordance rate between clinical and pathological diagnoses in this age group may possibly be the loss of teeth and thereby the reduced number of odontogenic lesions and the irritation associated with them. Another reason might be the exclusion of lesions developing in children or young adults. 
Moreover, a slight increase might be found in some specific lesions, such as denture-related lesions and other prevalent lesions, which consequently makes a correct diagnosis easier [11, 14]. In contrast to the results of the present study, two previous studies [12, 13] observed a higher concordance index in women, while another study [2] reported slightly higher discordance rates for female patients’ lesions compared to male patients’ ones. Similar to the current study, Saravani et al. [11] also found no relationship between concordance of clinical and histopathological diagnoses and the clinician’s specialty. However, in the study by Foroughi et al. [18], the highest and lowest concordance rates between clinical and pathological diagnoses were achieved by oral medicine specialists (98%) and general dentists (71%), respectively. The current study indicated that a significant relationship exists between the lesion’s site and concordance of clinical and histological diagnoses. Gingival lesions and the floor of the mouth had the minimum and maximum rates of concordance in the current study, respectively. Correspondingly, this finding may be due to the fact that several oral diseases have the same clinical manifestations in the gingiva; for example, desquamative gingivitis can be seen in either ulcerative and vesiculobullous or white and red lesions, so it is not clinically distinguishable among these types of diseases. However, Foroughi et al. [18] and Hashemipour et al. [16] in their studies reported the highest concordance rate of clinical and histopathological diagnoses in the gingiva. Furthermore, the lowest concordance rate was observed on the floor of the mouth, as reported in Hashemipour et al. and Saravani et al.’s studies [11, 16]. 
These contradictory findings may be due to variations in sample size and in the clinicians’ knowledge and experience. The present study is unique in that, for the first time, it examined a large number of biopsy samples and classified all lesions into 5 categories (ulcerative, white and red, pigmented, exophytic, and bone lesions), covering almost all types of oral lesions, while other studies have mainly focused on only a few specific lesions or a specific group [19–22]. According to the results of the current study, a statistically significant relationship exists between the concordance rate of histopathological and clinical diagnoses and the type of lesion. This finding is in line with the results of the study by Saravani et al. [11], who found a significant relationship between the type of lesion (either neoplastic or nonneoplastic) and clinicopathological concordance. In this study, out of the 5 general categories of lesions, the highest prevalence belonged to exophytic lesions, white and red lesions had the highest concordance rate, and pigmented lesions had the lowest. Among white and red lesions, oral lichen planus was the most commonly observed lesion, and it also had the highest percentage of concordance (88.6%). Similarly, Fattahi et al. [14] found the highest percentage of concordance for lichen planus (100%), and in another study, Goyal et al. [21] found lichen planus to be the most common oral mucosal lesion, with a clinicopathological concordance rate of 91.4%. As stated earlier, several investigations of the concordance of clinical and pathological diagnoses have reported varying concordance rates. Since the correct clinical or pathological diagnosis of lesions is closely linked to both the knowledge and educational level of clinicians, it is critical to thoroughly redesign and improve students’ educational programs. 
In order to avoid diagnostic errors, physicians and dentists should also take thorough patient histories and transmit them to pathologists, besides following proper and standard procedures when taking biopsies. ## 5. Conclusion The results of the present study indicate that the clinical and pathological diagnoses of the lesions were concordant in more than 70% of cases, but unfortunately, a non-negligible inconsistency still exists for some lesions. It should be noted that the clinicopathological concordance rate will never reach 100%, because some lesions have the same clinical appearance but different histopathology, and for many of them, the definitive diagnosis is still based on the histopathological results. Therefore, to avoid misdiagnosis and improper treatment, all dental specialists should be aware of the importance of sending all excised specimens for histological investigation. --- *Source: 1016495-2022-05-14.xml*
1016495-2022-05-14_1016495-2022-05-14.md
27,855
Correlation between Clinical and Histopathological Diagnoses in Oral Cavity Lesions: A 12-Year Retrospective Study
Golnoush Farzinnia; Mehdi Sasannia; Shima Torabi; Fahimeh Rezazadeh; Alireza Ranjbaran; Azita Azad
International Journal of Dentistry (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1016495
1016495-2022-05-14.xml
--- ## Abstract Objective. Proper diagnosis plays a key role in the treatment and prognosis of all diseases. Although histopathological diagnosis is still known as the gold standard, final diagnosis becomes difficult unless precise clinical descriptions are obtained. So, this study aimed to evaluate the concordance of the clinical and histopathological diagnoses of all oral and maxillofacial biopsy specimens in a 12-year duration. Materials and Methods. Archive files and clinical findings related to 3001 patients who had been referred to the Department of Oral Pathology during a 12-year period were reviewed. The recorded information in files included age, sex, lesion’s location, clinical and histopathological diagnoses, and specialty of dentists. Results. Out of 3001 cases included and reviewed in this study, 2167 cases (72.2%) were consistent between clinical and histopathologic diagnoses. Age, sex, and clinician’s specialty were indicated to have no significant effect on diagnosis (p values = 0.520, 0.310, 0.281, respectively), but location and type of lesion affected that (p values = 0.040 and 0.022, respectively). In regard to location, the highest concordance of clinical and histopathologic diagnoses was observed in mouth floor lesions, and the lowest one was in gingival mucosa. In terms of lesion category, the highest and the lowest concordance rates belonged to white and red lesions and pigmented lesions, respectively. Conclusion. The results of the present study show that the consistency of clinical and histopathological diagnoses was three times more than their inconsistency, and the accuracy of the clinicians was largely acceptable. --- ## Body ## 1. Introduction The oral cavity is a complex area in the located in the head and neck regions and home to a diverse range of cysts, benign, and malignant salivary gland tumors, as well as odontogenic and nonodontogenic neoplasms [1, 2]. 
Both the diagnosis and treatment of oral cavity lesions are integral parts of oral health care [3]. Moreover, it is well understood that early detection and treatment of these lesions greatly improve patients’ survival rates and quality of life [4]. Although each oral lesion has distinct characteristics and clinical features that aid diagnosis, clinical diagnostic errors occur because of similarities in clinical presentation, the lack of precise definitions for these characteristics, inconsistency of signs and symptoms among patients, and the presence of multiple manifestations of a single lesion [5, 6]. Therefore, to minimize misdiagnoses and achieve more accurate ones, it is necessary to consider the patient’s chief complaint, medical and dental history, clinical manifestations, diagnostic imaging, and various tests, including laboratory tests such as biopsies with microscopic evaluation and blood tests [6]. Histopathologic examination, the gold standard in diagnostic oral pathology, is used to confirm the clinical diagnosis [7]. However, pathologists may face uncertainty when performing the histological examination, because various lesions can exhibit comparable microscopic appearances. The clinical examination can therefore be an effective and important step for confirming pathology results and is quite useful in such situations [8]. Accordingly, the initial clinical diagnosis made by clinicians must be accurate and must not miss any oral potentially malignant disorders (OPMDs) or malignant lesions; close collaboration between the clinician and the pathologist is required to reach a definitive and correct diagnosis [2].
Various studies have previously investigated the concordance of clinical and pathologic diagnoses, reporting concordance rates of approximately 50–80% [3, 5–8]. Given the discrepancies in the concordance rates reported across studies performed in different settings, the present study aimed to determine the rate of discrepancy between clinical and histopathological diagnoses. The research was conducted on patients admitted to the Shiraz dentistry school, in the hope that the results would help identify weaknesses in the diagnosis of oral diseases and improve both diagnostic and treatment outcomes. ## 2. Materials and Methods This study was performed at the Faculty of Dentistry, Shiraz University of Medical Sciences, in accordance with the principles of the Helsinki Declaration. All included subjects signed informed consent forms, and ethical approval was obtained from the ethics committee of Shiraz University of Medical Sciences, Shiraz (IR.SUMS.DENTAL.REC.1398.123). In this retrospective study, all oral lesions diagnosed between January 2006 and December 2018 were extracted from the archives of the Department of Oral Pathology. Clinical examinations had been performed and approved by oral medicine specialists and maxillofacial surgeons with sufficient skill in this field. Eligible subjects were selected by the census method, and the exclusion criteria were as follows: records with inadequate information, biopsy samples without definite pathological reports, and lesions for which a clinical impression was not given.
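As an illustration of the record-level bookkeeping described above, here is a minimal Python sketch (with hypothetical toy records, not the study’s SPSS workflow) of how the overall concordance rate and the category-by-agreement contingency table behind a chi-square comparison can be tallied:

```python
from collections import Counter

# Hypothetical records: (clinical_dx, histopathologic_dx, lesion_category).
records = [
    ("lichen planus", "lichen planus", "red/white"),
    ("leukoplakia", "lichen planus", "red/white"),
    ("pyogenic granuloma", "pyogenic granuloma", "exophytic"),
    ("melanotic macule", "melanoma", "pigmented"),
]

# Overall concordance: share of cases where the two diagnoses agree.
concordant = sum(1 for clin, path, _ in records if clin == path)
rate = concordant / len(records)

# Category x (concordant, discordant) counts: the contingency table that a
# chi-square test of independence (run in SPSS in this study) would consume.
table = Counter((cat, clin == path) for clin, path, cat in records)
print(f"concordance rate = {rate:.1%}")  # 50.0%
for cat in sorted({c for _, _, c in records}):
    print(cat, table[(cat, True)], table[(cat, False)])
```

The same tally extends directly to the per-site and per-decade breakdowns reported in the Results.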
In the patients’ records, the following data were available: demographic data (age and gender), location of the lesion (mandible, maxilla, palate, alveolar mucosa, buccal mucosa, labial mucosa, ventral surface of tongue, dorsal surface of tongue, lateral surfaces of tongue, floor of mouth, gingiva, and lip), clinician’s specialty (oral medicine or oral surgery), and the clinical and pathological diagnoses of the lesions. All included cases were subdivided into the following five groups based on their clinical manifestations:

(1) Ulcerative, vesicular, and bullous lesions
(2) Red and white lesions
(3) Pigmented lesions
(4) Bone lesions, divided into cystic or tumoral (benign/malignant) lesions
(5) Exophytic soft tissue lesions, divided into reactive/inflammatory or tumoral (benign/malignant) lesions

This classification of lesions followed the textbook of oral diseases (Burket’s Oral Medicine, 12th edition) [9]. The histopathological criteria for the final pathological diagnosis of each lesion were based on the textbook of Oral and Maxillofacial Pathology [10]. Samples with the same diagnosis by both techniques were recorded as concordant between the clinical and pathological diagnoses.

The collected data were imported into the Statistical Package for Social Sciences (SPSS) for Windows, version 16.0 (SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the absolute and relative frequencies of the different lesions, and the chi-square test was used to compare categorical variables among the groups. The confidence interval was set to 95%, and p < 0.05 was considered statistically significant.

## 3. Results

A total of 3001 clinical files were evaluated. In 2167 cases (72.2%), the clinical and pathological diagnoses were concordant.

### 3.1. Age and Sex

Among the biopsied cases, 1432 (47.7%) were men and 1569 (52.3%) were women; 1058 male (73.9%) and 1109 female (70.7%) subjects had concordant clinical and pathological diagnoses. With 2797 cases (93.2%), the second decade of life was by far the most prevalent. Setting aside the tenth, ninth, and eighth decades (9 cases in total), the sixth decade had the highest clinicopathological concordance (78%) and the fifth decade the lowest (51.6%) (Table 1). There was no significant relationship between the patients’ sex or age and the concordance of clinical and histopathologic diagnoses (p values = 0.310 and 0.520, respectively).

Table 1: Concordance rate of clinical and histopathologic diagnosis based on age ranges.

| Decade (age range) | Total cases | Concordance N (%) | P value |
|---|---|---|---|
| 1 (0–9) | 4 | 2 (50%) | 0.520 |
| 2 (10–19) | 2797 | 2038 (72.6%) | |
| 3 (20–29) | 54 | 37 (68.5%) | |
| 4 (30–39) | 47 | 32 (68.1%) | |
| 5 (40–49) | 31 | 16 (51.6%) | |
| 6 (50–59) | 41 | 32 (78%) | |
| 7 (60–69) | 18 | 12 (66.7%) | |
| 8 (70–79) | 6 | 5 (83.3%) | |
| 9 (80–89) | 2 | 2 (100%) | |
| 10 (90–99) | 1 | 1 (100%) | |

### 3.2. Clinician’s Specialty

Of the total subjects, 1428 (47.6%) cases were referred from the oral and maxillofacial medicine department and 1573 (52.4%) from the oral and maxillofacial surgery department. Clinical diagnosis and pathology were concordant in 75% of the referrals from the medicine department and in 69.7% of those from the surgery department. No significant relationship was found between the clinician’s specialty and the concordance of clinical and histopathologic diagnoses (p value = 0.281).

### 3.3. Location

Of the 12 documented biopsy sites, the mandible was the most common, accounting for 770 (25.6%) cases. The floor of the mouth was the least biopsied site (27 biopsies) but had the highest rate of concordance (85.2%), while the lowest concordance was found for gingival lesions (66.1%). A significant relationship was found between the lesion site and the concordance of clinical and histological diagnoses (p value = 0.040) (Table 2).

Table 2: Concordance rate of clinical and histopathologic diagnosis based on location.

| Site of lesion | Total cases | Concordance N (%) | P value |
|---|---|---|---|
| Mandible | 770 | 516 (67%) | 0.040 |
| Maxilla | 495 | 346 (69.9%) | |
| Palate | 122 | 83 (68%) | |
| Alveolar mucosa | 80 | 56 (70%) | |
| Buccal mucosa | 479 | 399 (83.3%) | |
| Labial mucosa | 151 | 124 (82.1%) | |
| Ventral surface of tongue | 38 | 27 (71%) | |
| Dorsal surface of tongue | 100 | 77 (77%) | |
| Lateral surfaces of tongue | 190 | 130 (68.4%) | |
| Floor of mouth | 27 | 23 (85.2%) | |
| Gingiva | 410 | 271 (66.1%) | |
| Lip | 139 | 115 (82.7%) | |

### 3.4. Categories of Lesions

As mentioned earlier, all cases were divided into five categories (Table 3). Exophytic lesions, observed in 44.1% (n = 1326) of cases, were the most common category, while pigmented lesions were biopsied least often, in only 1.1% (n = 34) of cases. Red and white lesions had the highest rate of concordance (86.1%) and pigmented lesions the lowest (47.1%). A significant relationship was found between the type of lesion and the concordance of clinical and histological diagnoses (p value = 0.022). The frequency and concordance rate of the lesions in each category are shown in Table 4.

Table 3: Concordance rate of clinical and histopathologic diagnosis based on the type of lesions.

| Category of lesion | Total cases | Concordance N (%) | P value |
|---|---|---|---|
| Ulcerative, vesicular, and bullous lesions | 75 | 42 (56%) | 0.022 |
| Red and white lesions | 519 | 447 (86.1%) | |
| Pigmented lesions | 34 | 16 (47.1%) | |
| Exophytic soft tissue lesions | 1326 | 893 (67.3%) | |
| Bone lesions | 1047 | 769 (73.5%) | |

Table 4: Frequency and concordance rate of clinical and histopathologic diagnosis in each category of lesions.

| Lesion | Total cases | Concordance N (%) |
|---|---|---|
| **Ulcerative, vesicular, and bullous lesions** | | |
| Pemphigus vulgaris | 46 | 34 (73.9%) |
| Pemphigoid | 15 | 3 (20%) |
| Eosinophilic ulcers of tongue | 9 | 3 (33.3%) |
| Traumatic ulcers | 3 | 0 (0%) |
| Recurrent aphthous stomatitis | 1 | 1 (100%) |
| Erythema multiforme | 1 | 1 (100%) |
| **White and red lesions** | | |
| Lichen planus | 449 | 398 (88.6%) |
| Leukoplakia | 63 | 49 (77.8%) |
| Oral erythroplakia | 4 | 0 (0%) |
| Lupus erythematosus | 2 | 0 (0%) |
| Hairy leukoplakia | 1 | 0 (0%) |
| **Pigmented lesions** | | |
| Oral/labial melanotic macule | 14 | 11 (78.6%) |
| Inflammatory hyperpigmentation | 6 | 0 (0%) |
| Melanocytic nevus | 6 | 2 (33.3%) |
| Oral melanoacanthoma | 4 | 2 (50%) |
| Malignant melanoma | 3 | 1 (33.3%) |
| Melanosis | 1 | 0 (0%) |
| **Exophytic soft tissue lesions¹** | | |
| Reactive/inflammatory lesions | 1100 | 742 (67.4%) |
| Fibroma | 361 | 247 (68.4%) |
| Pyogenic granuloma | 252 | 137 (54.4%) |
| Mucocele | 174 | 160 (91.9%) |
| Epulis fissuratum | 125 | 109 (87.2%) |
| Peripheral giant cell granuloma | 124 | 57 (46%) |
| Peripheral odontogenic fibroma | 38 | 18 (47.4%) |
| Epulis granulomatosa | 14 | 9 (64.3%) |
| Neurofibroma | 12 | 5 (41.7%) |
| Benign tumoral lesions | 83 | 43 (52%) |
| Oral papilloma | 50 | 33 (66%) |
| Pleomorphic adenoma | 15 | 5 (33.3%) |
| Lipoma | 5 | 0 (0%) |
| Schwannoma | 5 | 1 (20%) |
| Hemangioma | 4 | 0 (0%) |
| Traumatic neuroma | 2 | 2 (100%) |
| Lymphangioma | 1 | 1 (100%) |
| Basal cell adenoma | 1 | 1 (100%) |
| Malignant tumoral lesions | 143 | 108 (75.5%) |
| Squamous cell carcinoma | 132 | 104 (78.8%) |
| Basal cell carcinoma | 3 | 1 (33.3%) |
| Lymphoma | 3 | 0 (0%) |
| Mucoepidermoid carcinoma | 3 | 1 (33.3%) |
| Adenoid cystic carcinoma | 2 | 2 (100%) |
| **Bone lesions²** | | |
| Cystic lesions | 861 | 669 (77.7%) |
| Radicular cyst | 420 | 358 (85.2%) |
| Odontogenic keratocyst | 184 | 115 (62.5%) |
| Dentigerous cyst | 173 | 134 (77.4%) |
| Residual cyst | 54 | 36 (66.7%) |
| Nasopalatine canal cyst | 18 | 18 (100%) |
| Traumatic bone cyst | 11 | 7 (63.6%) |
| Aneurysmal bone cyst | 1 | 1 (100%) |
| Benign tumoral lesions | 133 | 71 (53.4%) |
| Central giant cell granuloma | 50 | 26 (52%) |
| Ameloblastoma | 33 | 16 (48.5%) |
| Odontoma | 15 | 13 (86.7%) |
| Osteoma | 11 | 7 (63.6%) |
| Cementoblastoma | 7 | 4 (57.1%) |
| Adenomatoid odontogenic tumor | 7 | 1 (14.3%) |
| Central odontogenic fibroma | 1 | 1 (100%) |
| Odontogenic myxoma | 7 | 3 (42.9%) |
| Ameloblastic fibroma | 1 | 0 (0%) |
| Malignant tumoral lesions | 14 | 9 (64.3%) |
| Osteosarcoma | 11 | 6 (54.5%) |
| Fibrosarcoma | 2 | 2 (100%) |
| Chondrosarcoma | 1 | 1 (100%) |
| Other∗ | 39 | 20 (51.2%) |
Exophytic lesions were subdivided into two subgroups: reactive/inflammatory and tumoral (malignant and benign) lesions. ²Bone lesions were subdivided into two subgroups: cystic and tumoral (malignant and benign) lesions. ∗Bone samples that fit neither the cystic nor the tumoral group were labeled “other”; this category includes developmental lesions of bone (fibrous dysplasia, ossifying fibroma, and periapical cemento-osseous dysplasia).

## 4. Discussion

In this study, the rate of concordance between clinical and histopathological diagnoses was examined, along with the prevalence of each biopsied lesion submitted to the Department of Oral Pathology of the Shiraz dentistry school. These considerations are valuable for improving the existing knowledge about the perception and behavior of dentists and dental students regarding the necessity of histopathological examination. The clinicopathological concordance rate obtained here, 72.2%, is similar to those reported by Saravani et al. [11] and Emamverdizadeh et al. [8], who calculated overall concordance rates of 70.1% and 72.3%, respectively. It is low, however, compared with the studies by Tatli et al. [2] and Forman et al. [12] (93.3% and 94.4%, respectively), which can be attributed to the larger sample size and greater diversity of lesions in our study. In a study by Soyele et al. [13], clinicopathological reports of 592 biopsied cases from 2008–2017 were retrieved and analyzed.
They recorded a concordance rate of 54.6%, matching the result of Poudel et al.’s study [7] (54.6%). These discrepancies could be due to marked methodological differences among the studies, such as the clinicians’ and pathologists’ skills, the accuracy of the biopsy, the sample size, and the conditions under which the specimens were transferred to the laboratory. Because some lesions occur more frequently in one sex or at certain ages, age and sex can be considered influential factors in forming a better differential diagnosis. In the present study, however, no significant relationship was observed between the concordance rate and sex or age, in line with the findings of Saravani et al.’s study [11]. In Forman et al.’s research [12], by contrast, age was significantly associated with agreement between clinical and histological diagnoses. Furthermore, in the current study, the highest concordance rate, after the tenth, ninth, and eighth decades (a total of 9 cases among roughly 3000), was observed in the sixth decade of life, broadly consistent with other reports in which the highest concordance was observed in the seventh decade and older [13–17]. The greater clinicopathological concordance in this age group may be explained by tooth loss, and thus a reduced number of odontogenic lesions and of the irritation associated with them. Another reason might be the exclusion of lesions developing in children or young adults. Moreover, some specific lesions, such as denture-related and other prevalent lesions, may increase slightly in this group, which makes a correct diagnosis easier [11, 14].
Unlike the present study, two previous studies [12, 13] observed a higher concordance index in women, while another study [2] reported slightly higher discordance rates for female patients’ lesions than for male patients’. Similar to the current study, Saravani et al. [11] found no relationship between clinicopathological concordance and the clinician’s specialty; in the study by Foroughi et al. [18], however, the highest and lowest concordance rates were achieved by oral medicine specialists (98%) and general dentists (71%), respectively. The current study indicated a significant relationship between the lesion site and the concordance of clinical and histological diagnoses, with gingival lesions and the floor of the mouth showing the minimum and maximum concordance rates, respectively. This finding may reflect the fact that several oral diseases share the same clinical manifestations in the gingiva; desquamative gingivitis, for example, can appear among either ulcerative and vesiculobullous lesions or white and red lesions, so these disease types are not clinically distinguishable there. By contrast, Foroughi et al. [18] and Hashemipour et al. [16] reported the highest clinicopathological concordance rate in the gingiva, and the lowest concordance rate was observed on the floor of the mouth in Hashemipour et al.’s and Saravani et al.’s studies [11, 16].
The contradictory findings among these studies may be due to variations in sample size and in the clinicians’ knowledge and experience. The present study is unique in that, for the first time, it examined a large number of biopsy samples and classified all lesions into five categories (ulcerative, white and red, pigmented, exophytic, and bone lesions), covering almost all types of oral lesions, whereas other studies have mainly focused on a few specific lesions or a specific group [19–22]. According to the results of the current study, a statistically significant relationship exists between the type of lesion and the concordance rate of the histopathological and clinical diagnoses. This finding is in line with the study by Saravani et al. [11], who found a significant relationship between the type of lesion (neoplastic or nonneoplastic) and clinicopathological concordance. Of the five general categories of lesions, exophytic lesions were the most prevalent, white and red lesions had the highest concordance rate, and pigmented lesions had the lowest. Among the white and red lesions, oral lichen planus was the most commonly observed lesion and also had the highest percentage of concordance (88.6%). Similarly, Fattahi et al. [14] found the highest percentage of concordance for lichen planus (100%), and Goyal et al. [21] found lichen planus to be the most common oral mucosal lesion, with a clinicopathological concordance rate of 91.4%. As stated earlier, the investigations conducted on the concordance of clinical and pathological diagnoses have reported varying concordance rates. Since a correct clinical or pathological diagnosis is closely linked to the knowledge and educational level of clinicians, it is critical to redesign and improve students’ educational programs.
To avoid diagnostic errors, physicians and dentists should also take thorough patient histories and transmit them to pathologists, besides following proper, standard procedures when taking biopsies. ## 5. Conclusion The results of the present study indicate concordance between the clinical and pathological diagnoses of the lesions in more than 70% of cases, but a non-negligible inconsistency remains for some lesions. It should be noted that the clinicopathological concordance rate will never reach 100%, because some lesions share the same clinical appearance while differing histopathologically, and for many of them the definitive diagnosis still rests on the histopathological results. Therefore, to avoid misdiagnosis and improper treatment, all dental specialists should be aware of the importance of sending all excised specimens for histological investigation. --- *Source: 1016495-2022-05-14.xml*
2022
# A Hyperbolic Tangent Adaptive PID + LQR Control Applied to a Step-Down Converter Using Poles Placement Design Implemented in FPGA **Authors:** Marcelo Dias Pedroso; Claudinor Bitencourt Nascimento; Angelo Marcelo Tusset; Maurício dos Santos Kaster **Journal:** Mathematical Problems in Engineering (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/101650 --- ## Abstract This work presents an adaptive control that integrates two linear control strategies applied to a step-down converter: Proportional Integral Derivative (PID) and Linear Quadratic Regulator (LQR) controls. Considering the converter open loop transfer function and using the poles placement technique, the designs of the two controllers are set so that the operating point of the closed loop system presents the same natural frequency. With poles placement design, the overshoot problems of the LQR controller are avoided. To achieve the best performance of each controller, a hyperbolic tangent weight function is applied. The limits of the hyperbolic tangent function are defined based on the system error range. Simulation results of the proposed control schemes, obtained using the Altera DSP Builder software in a MATLAB/SIMULINK environment, are presented. --- ## Body ## 1. Introduction The technological evolution of electronic devices has been very significant in recent years. With the increasing performance of microcontrollers and Digital Signal Processors (DSPs), the rise of Field-Programmable Gate Arrays (FPGAs), and the high speeds of current A/D converters, some concepts related to digital signal processing have been reevaluated, and new forms of mathematical processing and algorithms have been developed [1, 2]. More powerful digital devices are needed to enable the implementation of complex control systems in several applications, helping to improve their performance, stability, and robustness.
In the power electronics field, better control strategies can enhance power quality and efficiency [3] by reducing losses, an important goal for forthcoming appliances that must comply with government environmental policies for electric power generation. FPGAs are devices with flexible logic programming: they allow several product features and functions, adapt to new standards, and can be reconfigured for specific applications even after the device has been installed in the end product. An FPGA can implement any logical function that an application-specific integrated circuit (ASIC) could perform, but the ability to update functionality after shipping provides advantages in many application areas, such as automotive, broadcast, computer and storage, display, medical, military, test and measurement, wireless, and wireline. The main advantage of FPGAs is the ability to arrange their logical structures to work in parallel, as opposed to the inherently sequential execution of microcontrollers, which can drastically boost the overall performance of the device. Compared to microcontrollers and DSPs, FPGAs also offer many design advantages, including rapid prototyping, shorter time to market, in-field reprogrammability, lower Nonrecurring Engineering (NRE) costs, and a long product life cycle that mitigates obsolescence risk [4–6]. In recent years, FPGA manufacturing costs have decreased significantly, making their use feasible in common applications such as power electronics [7–11]. In these applications, FPGAs allow more complex control techniques that improve system performance, such as response time, along with reduced overshoot. FPGAs can be programmed using a Hardware Description Language (HDL), which describes how logic gates are connected to form adders, multipliers, registers, and so on.
HDL places dedicated resources for each task and allows parallel operation; however, the complexity of HDL coding can be a barrier for many electrical engineering applications [12]. As an alternative to low-level HDL programming, there are optimized frameworks with ready-to-use high-level blocks, such as Altera’s DSP Builder software, which provides graphical design blocks for running simulations in a MATLAB/SIMULINK environment. DSP Builder uses the same blocks as the basis for autogenerating optimized HDL code to be loaded into the FPGA hardware. Using the DSP Builder tool can reduce a project’s implementation time, lowering the costs associated with human design effort. In electrical power conditioning systems that use switched static converters, the integration of high-performance hardware with digitally implemented linear and nonlinear control techniques has improved system performance, resulting in increased efficiency as well as enhanced power supply quality [3]. Several types of voltage and current controllers can be implemented, in either analog or digital form. Regarding the control techniques employed in these applications, the Proportional-Integral (PI) controller is traditionally one of the most widely used for controlling aspects of the converters, such as regulating the output voltage or correcting the power factor. In these applications, the derivative action is usually not considered because it can amplify the high-frequency noise caused by switch commutation. Thanks to its integral component, the PID controller has been widely used because it eliminates the steady-state error, but its response time is long compared to other kinds of controllers when the system is subjected to external disturbances. Other linear control techniques are also widely used, for example, the LQR control.
Since LQR is an optimal control, significant overshoots commonly appear. It is desirable to improve the system response time without incurring the overshoot commonly caused by load disturbances or reference changes, which can be achieved by employing poles placement design. However, LQR control has no integral action, and its use does not guarantee null steady-state error. To improve system performance with respect to response time without causing overshoots, which could damage the load or circuit elements, and to obtain null steady-state error, adaptive and nonlinear controllers have been used. Feedback Linearization, Adaptive Control, Gain Scheduling, Sliding Mode control, and the State-Dependent Riccati Equation (SDRE) can be highlighted, each with its own characteristics regarding application or parameters such as stability, robustness, and computational cost [13–24]. Another possibility for enhancing system performance is the integration of two or more controllers, so that the system error is compensated according to the best independent performance of each controller. The combined action of the two controls is considered an adaptive control [24, 25]. A common approach is a decision function that establishes a limit value determining which control will be used; another is to apply both control actions in a weighted fashion, determined by a weight function. Along with the advances in digital control, the use of more complex theories, mainly related to the design of nonlinear and adaptive controllers, has stood out ever more, and adaptive control techniques have been highlighted in various applications. Alternative forms of adaptive control applied to power electronic circuits are presented in [18–24]. In this context, this paper presents an adaptive control that uses the best responses of two linear controllers, namely the PID and LQR controls.
To reduce the overshoot problems of the LQR controller, pole placement design is applied. As a result, a composite control law is obtained from two different controllers used at the same time, where the control actions are mathematically weighted according to the system error by means of a hyperbolic tangent function. This function represents the decision function that establishes the weights of the PID and LQR control actions in the resulting control action, referred to as Hyperbolic Tangent Adaptive Control (HTAC). The main objective of this control technique is to perform the output voltage control of a step-down converter operating in continuous conduction mode (CCM). This strategy was tested by means of simulations using the DSP Builder software in a MATLAB/SIMULINK environment, applied to the output voltage control of a classic Buck converter. Load steps were applied to assess the performance of the HTAC. The results show that the proposed technique achieves better responses than either controller alone, with a fast transient response, a small overshoot due to the pole placement design, and null steady-state error.

## 2. Control Strategies

### 2.1. Linear PID Control

Among the control structures used in the industrial segment, the classic parallel PID controller, shown in Figure 1, stands out as one of the most widely used controllers due to its well-established practical implementation and tuning. There are various consolidated techniques which use the PID transfer function and the system transfer function in order to obtain the proportional (Kplin), integral (Kilin), and derivative (Kdlin) gains of the controller [26].
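For digital implementation, a parallel PID law of this form (transfer function (1) below) is commonly discretized with a backward-Euler integral and a first-difference derivative. The following is a minimal sketch under those assumptions, not the authors' DSP Builder block design; the sample time is a placeholder value:

```python
class DiscretePID:
    """Parallel PID of the form (Kd*s^2 + Kp*s + Ki)/s, discretized."""

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Backward-Euler approximation of the integral term
        self.integral += error * self.ts
        # First-difference approximation of the derivative term
        derivative = (error - self.prev_error) / self.ts
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Gains as designed later in the paper (Section 3); the 2 us sample time is a placeholder
pid = DiscretePID(kp=0.0723, ki=367.16, kd=4.037e-6, ts=2e-6)
u0 = pid.update(error=1.0)
```

In a fixed-point FPGA realization the division by `ts` would normally be folded into the gains, but the structure above mirrors the three parallel branches of (1).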
The design criteria, like overshoot and settling time for the closed loop system, are generally satisfied when using this linear PID controller structure.

Figure 1: Hyperbolic tangent adaptive control.

The transfer function of a PID controller with fixed gains is expressed by

(1) C(s) = (Kdlin·s² + Kplin·s + Kilin)/s,

where Kdlin represents the derivative linear gain, Kplin the proportional linear gain, and Kilin the integral linear gain.

### 2.2. LQR Control

The LQR controller is based on the minimization of a quadratic criterion associated with the energy of the state variables and the control signals. Optimal control provides a systematic way to calculate the gain matrix of the state feedback control [27, 28]. As LQR is a consolidated control technique, the controller design will not be presented in this work.

The optimal control (LQR) problem for the Buck converter can be posed as follows: find a control function u(t) ensuring that the output voltage of the converter is driven from any initial state to the reference value. The objective is to minimize the functional

(2) J = ∫₀^∞ (δᵀQδ + uᵀRu) dt,

where Q and R are symmetric and positive definite matrices, δ is the vector of input errors, and u is the control signal.
In addition, R can be chosen as unity without loss of generality [28]. The control law, for R = 1, is

(3) u(t) = −K·x(t),

where

(4) K = Bᵀ·P.

For defined values of the matrix Q (where Q is a positive definite matrix), the matrix P (a symmetric matrix) can be found as the solution of the Riccati equation

(5) AᵀP + PA − PBBᵀP + Q = 0.

The limitation is that LQR addresses a regulation problem and cannot originally be applied to a tracking problem, which is what is desired in practice [27]. It is important to note that different values of the weighting matrix Q are obtained for different trajectories, which implies that the range of values of the Q matrix components influences the quality of the transient process.

The technique of pole placement is proposed in this paper to find a matrix Q (in the LQR formulation) which ensures the desired transient response characteristics while achieving optimal regulation. To design the LQR feedback vector, the technique of pole placement is used first [27, 28], since this control law works in the same way as the control law of the LQR controller.
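As a numerical sketch of both directions — the forward LQR computation of (3)–(5) and the inverse procedure detailed in (6)–(13) below (place the poles, recover P, reconstruct Q) — consider the following. The plant here is a placeholder double integrator, not the converter model, and the desired poles are arbitrary example values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import place_poles

# Placeholder plant (double integrator), not the Buck converter model
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# --- Forward LQR, equations (3)-(5) with R = 1 ---
Q_fwd = np.eye(2)                                          # chosen weighting matrix
P = solve_continuous_are(A, B, Q_fwd, np.array([[1.0]]))   # solves A'P + PA - PBB'P + Q = 0
K_fwd = B.T @ P                                            # (4): K = B'P, so u = -Kx

# --- Inverse procedure: place poles, then recover Pp and Q ---
poles = [-2.0 + 1.0j, -2.0 - 1.0j]             # desired closed-loop poles (arbitrary)
Kp = place_poles(A, B, poles).gain_matrix      # pole-placement gain, cf. (6)-(11)
Pp = np.linalg.pinv(B.T) @ Kp                  # (12): Pp = pseudoinverse(B') * Kp
Q_rec = -A.T @ Pp - Pp @ A + Pp @ B @ B.T @ Pp # (13)

# The paper's validity check: Q_rec must equal its transpose (and be positive definite)
q_is_symmetric = np.allclose(Q_rec, Q_rec.T)
```

For this plant the desired closed-loop polynomial is s² + 4s + 5, so Kp = [5, 4]; the recovered Q here actually fails the symmetry check, which illustrates why the verification that Q = Qᵀ must be carried out before accepting a pole-placement design as an LQR-optimal one.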
Thus, the state feedback K is found satisfying all conditions imposed by the LQR controller, and the operating point of the closed loop system can be assured.

Therefore, the vector K can be defined by the pole placement technique through

(6) Kp = [αn − βn, αn−1 − βn−1, ⋯, α2 − β2, α1 − β1]·T⁻¹,

where the αi are obtained from the desired characteristic polynomial

(7) (s − μ1)(s − μ2)⋯(s − μn) = sⁿ + α1·sⁿ⁻¹ + ⋯ + αn−1·s + αn,

where the μi are the desired poles, and the βi are the coefficients of the open-loop characteristic polynomial

(8) |sI − A| = sⁿ + β1·sⁿ⁻¹ + ⋯ + βn−1·s + βn.

The transformation matrix T is given by

(9) T = M·W,

where M is the controllability matrix given by (10) and W is given by (11):

(10) M = [B, AB, ⋯, Aⁿ⁻¹B],

(11) W =
[ an−1  an−2  ⋯  a1  1
  an−2  an−3  ⋯  1   0
  ⋮     ⋮     ⋱  ⋮   ⋮
  a1    1     ⋯  0   0
  1     0     ⋯  0   0 ],

where the ai are the coefficients of the open-loop characteristic polynomial (8), that is, ai = βi.

Replacing (6) in (4), the Riccati matrix Pp for the gain vector Kp can be obtained from

(12) Pp = (Bᵀ)⁺·Kp,

where (Bᵀ)⁺ is the pseudoinverse of Bᵀ. Replacing (12) in (5) yields

(13) Q = −AᵀPp − PpA + PpBBᵀPp.

Through the solution of (13), the control that drives the system to the desired orbits while minimizing the functional (2) can be defined, with the predefined transient behavior; by means of mathematical manipulations, one can find a matrix Q that satisfies the conditions of the LQR controller. The matrix Q must be positive definite. A simple algorithmic test can be carried out to verify that the obtained matrix Q is symmetric, that is, equal to Qᵀ. If so, the state feedback vector Kp corresponds to an optimal controller given by the LQR algorithm.

### 2.3. Proposed Hyperbolic Tangent Adaptive Control (HTAC)

After some analyses carried out by numerical simulations, to be presented afterwards, concerning the system response during a load disturbance and the steady-state error, it is possible to observe that both the proposed LQR and PID controllers are effective in keeping the system at the closed-loop dominant poles defined in the control design.
Also, it can be observed that the LQR control is more effective for the transient response and the PID control for the steady state.

With the objective of obtaining a control that presents the combined efficiency of both controllers, enhancing the closed-loop performance of the system by reducing both the overshoot and the settling time, a parallel combination of the LQR and PID controls is proposed. The simplest approach is to switch which controller will act on the system given a specific rule, such as a predefined error value. The disadvantage of this approach is the occurrence of an abrupt change in the control structure. In order to avoid this abrupt commutation between the controllers, a weighted combination is proposed, where the control actions are regulated by weights (wi) defined by a hyperbolic tangent function.

The control law of the HTAC controller is defined by

(14) u(t) = uC1(t)·w1 + uC2(t)·w2,

where uC1(t) and w1 represent the LQR control action and its weight, respectively, and uC2(t) and w2 represent the PID control action and its weight, respectively, with w1 + w2 = 1.

The value w1 is determined as a function of the system error:

(15) w1 = tanh|e|,

where e is the normalized error. In this way, the higher weight w1 is assigned to the controller with the faster response in situations of larger errors; that controller then predominates in the control action.

The value w2 is defined by

(16) w2 = 1 − w1.

So the weight w2 is inversely related to the error and presents higher values in situations of small errors, bringing into focus the controller that ensures a null error in steady state. Considering this objective, a controller with an integral term is a good choice. Figure 1 shows how the HTAC controller can be implemented.
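A minimal sketch of the weighting law (14)–(16) follows. The normalization by an assumed `error_max` reflects the statement that the limits of the hyperbolic tangent function are defined from the system error range; the numeric control actions below are placeholders, not the converter's designed values:

```python
import math

def htac_blend(lqr_action, pid_action, error, error_max):
    """Blend the LQR and PID control actions per (14)-(16) with tanh weighting."""
    e_norm = abs(error) / error_max   # normalized error used in (15) (assumed normalization)
    w1 = math.tanh(e_norm)            # (15): LQR weight grows with the error magnitude
    w2 = 1.0 - w1                     # (16): PID weight dominates near zero error
    return w1 * lqr_action + w2 * pid_action

# Large error: the (fast) LQR action dominates the blended control
u_large = htac_blend(lqr_action=10.0, pid_action=2.0, error=25.0, error_max=5.0)
# Zero error: the control is purely the (integrating) PID action
u_zero = htac_blend(lqr_action=10.0, pid_action=2.0, error=0.0, error_max=5.0)
```

Because tanh is smooth, the hand-off between the two controllers is gradual, avoiding the abrupt commutation of a hard-switching rule.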
## 3.
Control Strategy Applied to a Step-Down Converter

Figure 2 presents the control strategy scheme applied to the step-down converter.

Figure 2: Control strategy scheme applied to the step-down converter.

The design parameters for the converter are presented in Table 1.

Table 1: Parameters for the Buck converter.

| Parameter | Value |
| --- | --- |
| Input voltage (Vi) | 48 V |
| Capacitance (Co) | 3.33 μF |
| Inductance (Lo) | 1.1 mH |
| Resistance (Ro) | 30 Ω |
| Reference voltage (Vref) | 30 V |

The matrices of the state-space model of the Buck converter are presented in the following equation:

(17) A1 = A2 = [ 0, 1/Co ; −1/Lo, −1/(Co·Ro) ];  B1 = [ 0 ; 1/Lo ];  B2 = [ 0 ; 0 ];  C1 = C2 = [0 1];  D1 = D2 = 0.

As the duty cycle (D) is the ratio between the output reference voltage and the input voltage, the state matrices are rewritten as

(18) A1 = A2 = [ 0, 300300 ; −909.1, −10010 ];  B1 = [ 0 ; 909.1 ];  B2 = [ 0 ; 0 ];  C1 = C2 = [0 1];  D1 = D2 = 0.

Replacing the data of Table 1 into (17), the duty-cycle-to-output-voltage transfer function of the Buck converter can be defined as

(19) Gvod(s) = v̂o(s)/d̂(s) = 1.28×10¹⁰ / (s² + 10000·s + 2.667×10⁸).

Having the transfer function of the converter, the design of the controllers is set so that the operating point of the closed loop system has the natural frequency

(20) wn = 1/√(Lo·Co) = 16333 rad/s.

The open-loop damping ratio (ξ), determined by (21), is approximately 0.3 and is defined as a function of the 2% settling time (ts) and the natural frequency of the system. The value of ts can be found from the step response of the open loop system:

(21) ξ = 4/(wn·ts).

From these data, the operating point of the closed loop system can be set to a new damping ratio near 0.8, which is the maximum acceptable value for second-order systems. Thus, the set point is

(22) s1 = −13064 ± 9798j.

With the control design parameters of the converter defined, the PID and LQR can be designed so that the closed loop system operates at the same operating point.
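The design-point arithmetic of (20)–(22) can be cross-checked numerically; this sketch recomputes the dominant pole pair from the stated ωn = 16333 rad/s and the chosen damping ratio ξ = 0.8:

```python
import math

wn = 16333.0   # natural frequency from (20), rad/s
xi = 0.8       # chosen closed-loop damping ratio

# Dominant second-order pole pair: s = -xi*wn +/- j*wn*sqrt(1 - xi^2)
real_part = -xi * wn
imag_part = wn * math.sqrt(1.0 - xi * xi)
# real_part is about -13066 and imag_part about 9800, matching (22) up to rounding
```

The small discrepancy with the printed pole values in (22) is rounding in the published figures.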
Having the controllers designed for the same operating point, a comparative analysis of the performance of each control scheme can be carried out with more accuracy. Following this analysis, the hyperbolic tangent function is applied in order to use the best responses of the applied controllers according to the error generated by possible disturbances.

From the operating point of the closed loop system defined in (22), it is possible to design the PID controller. The designed controller gains are Kp = 0.0723, Ki = 367.16, and Kd = 4.037×10⁻⁶.

For the LQR controller, using the closed-loop operating point defined in (22), the state feedback gain found by the proposed algorithm is

(23) Kp = [4094  16078].

Replacing (23) into (13) results in Q = 10⁸·[ 0.0591, 0.0283 ; 0.0283, 2.6274 ], achieving the optimal control given by

(24) u(t) = −4094·vco − 16078·ico.

Taking the previously designed results, the HTAC presented in this paper was implemented using the combination of these two controllers. The limits of the hyperbolic tangent function are defined from the system error range.

## 4. Simulation Results

The hardware chosen for implementation of the proposed controllers in this project is an EP3C25Q240C8N Cyclone III FPGA, manufactured by Altera. The simulations are performed in the MATLAB/SIMULINK environment, where the DSP Builder framework, provided by Altera, is installed as a toolbox, making it possible to simulate the model of the power converter itself and to export it for compilation by the Altera Quartus II software without leaving Simulink.

To obtain more accurate results, various characteristics of a real prototype are taken into consideration. The simulation step size was defined in Simulink as 25 nanoseconds. The signal obtained from the output voltage is adjusted by a block that simulates the conditioning circuit in order to satisfy the inputs of the analog-to-digital (A/D) converter.
The external A/D converter is necessary because the FPGA hardware does not have internal A/D converters. The chosen A/D converter is the AD7655, manufactured by Analog Devices, with 16-bit resolution and a sampling rate of 500 kSPS (kilosamples per second). In the simulation, a system of blocks is used to represent the inherent delay of the digital conversion and the quantization according to the sampling rate.

The complete implementation of the HTAC controller using DSP Builder is shown in Figure 3. It can be observed that the block diagram has a 16-bit input and a single-bit output, which is connected to the pulse driver that actuates the switch.

Figure 3: Block diagram implemented in DSP Builder.

Figure 4 presents the transient response of the Buck converter in closed loop for the two control techniques employed, PID and LQR. For the converter startup process, a 250 μs ramp is used to reach the rated output voltage.

Figure 4: Converter startup process.

Figure 5 presents the response of each system alone to a load disturbance. Figure 6 presents the steady-state output voltage of the converter for both controllers. Unlike LQR, PID presents a zero steady-state error.

Figure 5: System response for a 50% load step.

Figure 6: Steady-state output voltage of the converter for both controllers.

Figure 7 shows the current through the converter inductor Lo for the PID and LQR. It can be observed that the inductor currents have the same ripple value defined in the converter design. In relation to the load step response, it can be observed that during the initial overshoot the currents of the PID and LQR controllers are similar.

Figure 7: Current iL response for a 50% load step.

Figure 8 presents the transient of the Buck converter in closed loop, enabling comparison between the responses of the controllers implemented separately: PID, LQR, and HTAC. Figure 9 presents in detail the responses to a load disturbance.
Thus, one can observe that the HTAC presents a response as fast as that of the LQR controller and a null steady-state error as with the PID controller. Figure 10 presents the response of the HTAC to a 50% load disturbance.

Figure 8: Converter startup process with HTAC control.

Figure 9: System response for a 50% load step for PID, LQR, and HTAC.

Figure 10: System response for a 50% load step with HTAC.

Figure 11 shows the converter inductor (Lo) current for the hyperbolic tangent adaptive control.

Figure 11: Current iL for the hyperbolic tangent adaptive control for a 50% load step.

The variation of the weights w1 and w2 of the HTAC controller, which multiply the control actions of the PID and LQR controllers, is presented in Figure 12, where it is possible to observe that, at the moments when disturbances occur, the weights exhibit variations that last until the moment of stabilization.

Figure 12: Variation of the weights w1 and w2 of the HTAC controller.

## 5. Conclusions

This work presented the design and simulation of an adaptive PID + LQR control technique applied to a step-down converter. The pole placement technique was used to guarantee that the two controllers work at the same operating point and that the system does not present excessive voltage and current overshoots. Knowing that the steady-state error of the converter output voltage for the PID is smaller than for the LQR control, and that the response time of the LQR controller is smaller than that of the PID, a parallel combination of the designed controllers was proposed, yielding an adaptive controller which improves the performance of the system, both in response time and in the reduction of overshoots of the controlled magnitudes.

A hyperbolic tangent weight function is used to gather the best performance of each controller according to the system error. Thus, the best responses in settling time and overshoot, together with the elimination of the steady-state error, are achieved as compared to the independent implementation of each controller.
The Altera DSP Builder framework was used in a MATLAB/SIMULINK environment for the implementation of the Hyperbolic Tangent Adaptive Control (HTAC) and to obtain real-time simulation results. --- *Source: 101650-2013-11-18.xml*
# A Hyperbolic Tangent Adaptive PID + LQR Control Applied to a Step-Down Converter Using Poles Placement Design Implemented in FPGA

**Authors:** Marcelo Dias Pedroso; Claudinor Bitencourt Nascimento; Angelo Marcelo Tusset; Maurício dos Santos Kaster
**Journal:** Mathematical Problems in Engineering (2013)
**Category:** Engineering & Technology
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101650
---

## Abstract

This work presents an adaptive control that integrates two linear control strategies applied to a step-down converter: the Proportional-Integral-Derivative (PID) and Linear Quadratic Regulator (LQR) controls. Considering the converter open-loop transfer function and using the pole placement technique, the designs of the two controllers are set so that the operating point of the closed loop system presents the same natural frequency. With the pole placement design, the overshoot problems of the LQR controller are avoided. To achieve the best performance of each controller, a hyperbolic tangent weight function is applied; the limits of the hyperbolic tangent function are defined based on the system error range. Simulation results of the proposed control schemes, obtained using the Altera DSP Builder software in a MATLAB/SIMULINK environment, are presented.

---

## Body

## 1. Introduction

The technological evolution of electronic devices has been very significant in recent years. With the increasing performance of microcontrollers and Digital Signal Processors (DSPs), as well as the rise of Field-Programmable Gate Arrays (FPGAs), associated with the high speeds of current A/D converters, some concepts related to digital signal processing have been reevaluated, and new forms of mathematical processing and algorithms have been developed [1, 2]. More powerful digital devices are needed to enable the implementation of complex control systems in several applications, helping to improve their performance, stability, and robustness. In the power electronics field, better control strategies can enhance power quality and efficiency [3] by reducing losses, which represents an important goal for forthcoming appliances that must comply with government environmental policies for electric power generation. FPGAs are devices with flexible logic programming.
They allow several product features and functions, adapt to new standards, and reconfigure hardware for specific applications, even after the device has already been installed in the end product. One can use an FPGA to implement any logical function that an application-specific integrated circuit (ASIC) could perform, but the ability to update the functionality after shipping provides advantages in many application areas such as automotive, broadcast, computer and storage, display, medical, military, test and measurement, wireless, and wireline, among many others. The main advantage of the FPGA is the ability to have its logical structures arranged to work in parallel, as opposed to the inherently sequential execution in microcontrollers. This can drastically boost the overall performance of the device. Also, when compared to microcontrollers and DSPs, FPGAs offer many design advantages, including rapid prototyping, shorter time to market, the ability to reprogram in the field, lower Nonrecurring Engineering (NRE) costs, and a long product life cycle to mitigate obsolescence risk [4–6]. In recent years, FPGA manufacturing costs have decreased significantly, making their use possible in common applications such as power electronics [7–11]. In these applications, the use of FPGAs allows more complex control techniques to be used to improve system performance, such as response time, along with reduced overshoot. FPGAs can be programmed using a Hardware Description Language (HDL), which describes the connections of logic gates to form adders, multipliers, registers, and so on. The HDL places a dedicated resource for the task and allows for parallel operation.
However, the complexity of HDL coding can be a barrier for many electrical engineering applications [12].As an alternative to low-level HDL programming, there are some optimized frameworks with ready-to-use high-level blocks such as the Altera’s DSP Builder software, which provides graphical design blocks to run simulations into a MATLAB/SIMULINK environment. The same blocks are used by DSP Builder as the basis for autogenerating an optimized HDL code to be loaded into FPGA hardware. The use of DSP builder tool can reduce the implementation time of a project resulting in lower costs associated to human-related design efforts.In electrical power conditioning systems that use switched static converters, the integration of high performance hardware with linear and nonlinear control techniques implemented digitally has provided the improvement of system performance, resulting in increased efficiency as well as enhanced quality of power supply [3]. Several types of voltage and current controllers can be implemented, either in the analog or digital way.Regarding the control techniques employed in these applications, traditionally the Proportional-Integral (PI) controller is one of the mostly used techniques to carry out control of some aspects of the converters, like regulating the output voltage or correcting the power factor. In these applications, the derivative action is usually not considered because it can amplify high frequency noise caused by switches commutation. Due to the integral component, PID has been widely used because it presents a favorable characteristic of eliminating the steady-state error, but its response time is high when compared to other kinds of controllers when the system is subjected to external disturbances. Other linear control techniques are also widely used like, for example, the LQR control. As it is an optimal control, it is common to appear as significant overshoots. 
It is desired to improve the system response time without worrying about the overshoot, commonly resulted from load disturbance or reference change, which can be achieved by employing poles placement design. However, LQR control does not have integral action, and its use does not guarantee null steady-state error.In order to improve system performance in relation to the response time without causing overshoots, which could damage the load or circuit elements, and to obtain a steady-state null error, adaptive and nonlinear controllers have being used. The Feedback Linearization, Adaptive Control, Gain Scheduling, Sliding Mode control, and State Dependent Riccati Equation (SDRE), can be highlighted each with its own characteristics regarding their application or parameters such as stability, robustness, and computational cost [13–24]. Another possibility of enhancing the system performance is the integration of two or more controllers. In this sense, the system error is compensated according to the best independent performance of each controller. The combined action of the two controls is considered as an adaptive control [24, 25]. A common approach is the use of a decision function that establishes a limit value that supports the decision of which control will be used. Another approach is the use of both control actions in a weighted fashion, determined by a weight function.Along with the advances in digital control, the use of more complex theories, mainly related to nonlinear and adaptive controllers design, has been excelling ever more. Adaptive control techniques have been highlighted in various applications. Alternative forms of adaptive control techniques applied to power electronic circuits are presented in [18–24].In this context, this paper presents an adaptive control that uses the best responses of two linear controllers, that is, the PID and LQR controls. To reduce overshoot problems of the LQR controller, poles placement design is applied. 
As a result, a composite control law is obtained for two different controllers used at the same time, where the control actions are mathematically weighted according to system error by means of a hyperbolic tangent function. This function represents the decision function that establishes the weights of PID and LQR control actions into the resulting control action, referred to as Hyperbolic Tangent Adaptive Control (HTAC). The main objective of this control technique is to perform the output voltage control of a step-down converter operating in continuous conduction mode (CCM). This strategy was tested by means of simulations using the DSP Builder software in a MATLAB/SIMULINK environment applied to the output voltage control on a classic Buck converter. Load steps were applied to assess the performance of the Hyperbolic Tangent Adaptive Control (HTAC). The results show that the proposed technique achieves better responses than the controllers alone, with a fast transient response, a small overshoot, due to the poles placement design, and null steady-state error. ## 2. Control Strategies ### 2.1. Linear PID Control Among the control structures used in the industrial segment, the classic parallel PID controller, shown in Figure1, stands out as one of the most widely used controllers due to well established practical implementation and tuning. There are various consolidated techniques which uses the PID transfer function and the system transfer function in order to obtain the proportional (Kplin), integral (Kilin), and derivative (kdlin) gains of the controller [26]. 
The design criteria, like overshoot and settling time for the closed loop system, are generally satisfactory when using this linear PID controller structure.Figure 1 Hyperbolic tangent adaptive control.The transfer function and the control law in the time domain of a PID controller with fixed gains are expressed by(1)C(s)=Kdlins2+Kplins+Kilins, where kdlin represents the derivative linear gain, kplin the proportional linear gain, and kilin the integral linear gain. ### 2.2. LQR Control The LQR controller is based on the minimization of a quadratic criterion associated with the state variables energy and the control signals. An optimal control provides a systematic way to calculate the gain matrix of the state feedback control [27, 28].As LQR is a consolidated control technique, the controller design will not be presented in this work.The problem formulation of optimal control (LQR) for the Buck converter can be ordered as follows: find a control functionu(t) ensuring that the output voltage of the converter is independent from the initial state to the reference value. The objective is to minimize the functional (2)J=∫0∞(δTQδ+uTRu)dt, where Q and R are symmetric and positive definite matrices, δ is a vector of the input errors, and u is the control signal. 
In addition, R can be chosen as unity without loss of generality [28]. From the control law presented in (3), for R = 1,

(3) u(t) = −K·x(t),

where

(4) K = −Bᵀ·P.

For the defined values of the matrix Q (where Q is a positive definite matrix), the matrix P (where P is a symmetric matrix) can be found, which is a solution of the Riccati equation:

(5) Aᵀ·P + P·A − P·B·Bᵀ·P + Q = 0.

The limitation is that the LQR addresses a regulation problem and cannot originally be applied to a tracking problem, which is desired in practice [27]. It is important to note that different values for the weight coefficient matrix Q are obtained for different trajectories, which implies that the range of values of the Q matrix components influences the quality of the transient process. The technique of pole placement is proposed in this paper to find an optimal matrix Q (LQR formulation) which ensures the desired characteristics of the transient response, thus achieving optimal regulation. To design the LQR controller feedback vector, the technique of pole placement is initially used [27, 28], since this control law works in the same way as the control law for the LQR controller.
Thus, the state feedback K is found satisfying all conditions imposed by the LQR controller, and the operating point of the closed loop system can be assured. Therefore, the vector K can be defined, by the pole placement technique, by equation

(6) Kp = [αn − βn, αn−1 − βn−1, ⋯, α2 − β2, α1 − β1]·T⁻¹,

where the αi are obtained from the desired characteristic polynomial

(7) (s − μ1)(s − μ2)⋯(s − μn) = sⁿ + α1·sⁿ⁻¹ + ⋯ + αn−1·s + αn,

where the μi are the desired poles, and the βi are the open-loop characteristic polynomial coefficients given by

(8) |sI − A| = sⁿ + β1·sⁿ⁻¹ + ⋯ + βn−1·s + βn.

The transformation matrix T is given by

(9) T = M·W,

where M is the controllability matrix given by (10) and W is given by (11). Consider

(10) M = [B, AB, ⋯, Aⁿ⁻¹B],

(11) W = [βn−1, βn−2, ⋯, β1, 1; βn−2, βn−3, ⋯, 1, 0; ⋮, ⋮, ⋱, ⋮, ⋮; β1, 1, ⋯, 0, 0; 1, 0, ⋯, 0, 0],

where the entries of W are the coefficients of (8). Replacing (6) in (4), the Riccati matrix (P) for the gain vector (Kp) can be obtained by

(12) Pp = (Bᵀ)⁺·Kp,

where (Bᵀ)⁺ is a pseudoinverse matrix. Replacing (12) in (5) yields

(13) Q = −Aᵀ·Pp − Pp·A + Pp·B·Bᵀ·Pp.

Through the solution of (13), the control that drives the system to the desired orbits while minimizing the functional (2) can be defined with the predefined transient behavior, where, by means of mathematical manipulations, one can find a matrix Q that satisfies the conditions of the LQR controller. The matrix Q must be positive definite. In this sense, a simple test can be carried out through an algorithm to verify that the obtained matrix Q is equal to Qᵀ. Thus, it can be said that the state feedback vector (Kp) corresponds to an optimal controller offered by the LQR algorithm. ### 2.3. Proposed Hyperbolic Tangent Adaptive Control (HTAC) After some analyses carried out by numerical simulations, to be presented afterwards, relating to the system response during a load disturbance and to the steady-state error, it is possible to observe that both the proposed LQR and PID controllers are effective in maintaining the system over the dominant poles in closed loop defined in the control design.
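The pole-placement procedure of (6)–(13) can be sketched numerically; the double-integrator A, B and the desired poles below are illustrative stand-ins, not the paper's converter model:

```python
import numpy as np

# Sketch of the pole-placement procedure of eqs. (6)-(13) (Bass-Gura
# form): place the closed-loop poles, then recover P and Q.
def bass_gura(A, B, desired_poles):
    n = A.shape[0]
    beta = np.poly(A)[1:]               # open-loop coefficients beta_1..beta_n, eq. (8)
    alpha = np.poly(desired_poles)[1:]  # desired coefficients alpha_1..alpha_n, eq. (7)
    M = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])  # eq. (10)
    W = np.zeros((n, n))                # triangular Toeplitz matrix of eq. (11)
    for i in range(n):
        for j in range(n - i):
            W[i, j] = 1.0 if i + j == n - 1 else beta[n - 2 - i - j]
    T = M @ W                           # eq. (9)
    # eq. (6): Kp = [alpha_n - beta_n, ..., alpha_1 - beta_1] T^{-1}
    return ((alpha - beta)[::-1] @ np.linalg.inv(T)).reshape(1, n)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative double integrator
B = np.array([[0.0], [1.0]])
K = bass_gura(A, B, [-1.0, -2.0])        # u = -Kx places the poles at -1, -2
P = np.linalg.pinv(B.T) @ K              # eq. (12): Pp = (B^T)^+ Kp
Q = -A.T @ P - P @ A + P @ B @ B.T @ P   # eq. (13)
# Per the paper, Q must still be checked for symmetry and positive
# definiteness before Kp can be accepted as an LQR-optimal gain.
```

By construction the recovered P and Q satisfy the Riccati equation (5) exactly, so the symmetry check on Q is the only remaining test.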
Also, it can be observed that the LQR control is more effective for the transient response and the PID control for the steady state. With the objective of obtaining a control that presents the combined efficiency of both controllers, enhancing the performance of the system in closed loop as well as reducing the overshoot and the settling time, a parallel combination of the LQR and PID controls is proposed. The simplest approach is to switch which controller will actuate over the system given a specific rule, such as a predefined error value. The disadvantage of this approach is the occurrence of an abrupt change in the control structure. In order to avoid this abrupt commutation between the controllers, a weighted combination is proposed where their control actions are regulated by weights (wi) defined by a hyperbolic tangent function. The control law for the HTAC controller is defined by

(14) u(t) = uC1(t)·w1 + uC2(t)·w2,

where uC1(t) and w1 represent the LQR control and its weight, respectively, and uC2(t) and w2 represent the PID control and its weight, respectively, with w1 + w2 = 1. The value w1 is determined as a function of the system error, given by

(15) w1 = tanh|e|,

where e is the normalized error. In this sense, it is possible to assign the higher weight w1 to the controller with faster responses in situations of larger errors; this will be the controller with higher predominance in the control action. The value w2 is defined by

(16) w2 = 1 − w1.

So, the weight w2 will be inversely proportional to the error and will present higher values in situations of small errors, bringing into focus the controller that ensures a null error in steady state. Considering this objective, a controller with an integrative term is a good choice. In Figure 1, one can observe how the HTAC controller can be implemented. ## 3.
Control Strategy Applied to a Step-Down Converter Figure 2 presents the control strategy scheme applied to the step-down converter.

Figure 2 Control strategy scheme applied to the step-down converter.

The design parameters for the converter are presented in Table 1.

Table 1 Parameters for the buck converter.

| Parameter | Value |
| --- | --- |
| Input voltage (Vi) | 48 V |
| Capacitance (Co) | 3.33 μF |
| Inductance (Lo) | 1.1 mH |
| Resistance (Ro) | 30 Ω |
| Reference voltage (Vref) | 30 V |

The matrices that determine the state space system of the buck converter are presented in the following equation:

(17) A1 = A2 = [0, 1/Co; −1/Lo, −1/(Co·Ro)]; B1 = [0; 1/Lo]; B2 = [0; 0]; C1 = C2 = [0, 1]; D1 = D2 = 0.

As the duty cycle (D) is the ratio between the output reference voltage and the input voltage, the state matrices are rewritten as

(18) A1 = A2 = [0, 300300; −909.1, −10010]; B1 = [0; 909.1]; B2 = [0; 0]; C1 = C2 = [0, 1]; D1 = D2 = 0.

Replacing the data in Table 1 into (17), one can define the duty cycle to output voltage transfer function of the Buck converter, as expressed in

(19) Gvod(s) = v̂o(s)/d̂(s) = 1.28e10/(s² + 10000s + 2.667e8).

Having the transfer function of the converter, the design of the controllers is set so that the operating point of the closed loop system has a natural frequency according to

(20) wn = 1/√(Lo·Co) = 16333 rad/s.

The damping ratio (ξ) of the open loop system, determined by (21), is approximately 0.3 and is defined as a function of the settling time (ts) at 2% and the natural frequency of the system. The value of ts can be found from the step response of the open loop system:

(21) ξ = 4/(wn·ts).

From these data, the operating point of the closed loop system can be set to a new damping ratio near 0.8, which is the maximum acceptable value for second order systems. Thus, the set point is

(22) s1 = −13064 ± 9798j.

With the control design parameters of the converter defined, one can design the PID and LQR controllers, so that the operation of the closed loop system occurs at the same operating point.
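The weighted combination of (14)–(16) used by the HTAC can be sketched as follows; the function and argument names are illustrative, not the DSP Builder implementation:

```python
import math

# Sketch of the HTAC weighting law of eqs. (14)-(16): the LQR and PID
# control actions are blended by weights driven by the normalized error.
def htac_blend(u_lqr, u_pid, error_normalized):
    w1 = math.tanh(abs(error_normalized))  # eq. (15): LQR weight grows with |e|
    w2 = 1.0 - w1                          # eq. (16): PID weight dominates near zero error
    return u_lqr * w1 + u_pid * w2         # eq. (14): blended control action

# Large error: the fast LQR action dominates; near-zero error: the
# integrating PID action dominates and annuls the steady-state error.
```

Because tanh is smooth, the hand-over between the two controllers is gradual, avoiding the abrupt commutation of a simple switching rule.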
So, having the controllers following the same operating sequence, a comparative analysis of the performance of each control scheme can be carried out with more accuracy. Following this analysis, the hyperbolic tangent function is applied in order to use the best responses of the applied controllers according to the error generated by possible disturbances. From the operating point of the closed loop system, defined in (22), it is possible to design the PID controller. The designed controller gains are Kp = 0.0723, Ki = 367.16, and Kd = 0.000004037. For the LQR controller using the operating point of the closed loop system defined in (22), the state feedback gain found from the proposed algorithm is

(23) Kp = [4094, 16078].

Replacing (23) into (13) results in Q = 10⁸·[0.0591, 0.0283; 0.0283, 2.6274], achieving the optimal control given in

(24) u(t) = −4094·vco − 16078·ico.

Taking the previously designed results, the HTAC was implemented and presented in this paper using the combination of these two controllers. The limits of the hyperbolic tangent function are defined from the system error range. ## 4. Simulation Results The hardware chosen for the implementation of the proposed controllers in this project is an EP3C25Q240C8N Cyclone III FPGA, manufactured by Altera. The simulations are performed in the MATLAB/SIMULINK environment, where the DSP Builder framework, provided by Altera, is installed as a toolbox, making it possible to simulate the model of the power converter itself and to export it to be compiled by the Altera Quartus II software without leaving Simulink. To obtain more accurate results, various factors of a real prototype are taken into consideration. The simulation step size was defined in Simulink as 25 nanoseconds. The signal obtained from the output voltage is adjusted by a block that simulates the conditioning circuit in order to satisfy the inputs of the analog-to-digital converter (A/D).
The external A/D converter is necessary because the FPGA hardware does not have internal A/D converters. The chosen A/D converter is the AD7655, manufactured by Analog Devices, with a 16-bit resolution and a sampling rate of 500 kSPS (kilosamples per second). In the simulation, a system of blocks is used to represent the inherent delay of the digital conversion and quantization according to the sampling rate. The complete implementation of the HTAC controller using DSP Builder is shown in Figure 3. It can be observed that the block diagram has a 16-bit input and a single output (single bit) which is connected to the pulse driver that actuates the switch.

Figure 3 Block diagram implemented in DSP Builder.

Figure 4 presents the transient response of the Buck converter in closed loop for the two control techniques employed, PID and LQR. For the converter startup process, a 250 μs ramp is used to achieve the rated output voltage.

Figure 4 Startup process of the converter.

Figure 5 presents the responses of each system alone for a load disturbance. Figure 6 presents the steady-state output voltage of the converter for all controllers. Unlike LQR, PID presents a zero steady-state error.

Figure 5 System response for a 50% load step.

Figure 6 Steady-state output voltage of the converter for both controllers.

Figure 7 shows the current across the converter inductor Lo for the PID and LQR. It can be observed that the inductor Lo currents have the same ripple value defined in the converter's design. In relation to the load step response, it can be observed that for the initial overshoot the currents of the PID and LQR controllers are similar.

Figure 7 Current iL with response for a 50% load step.

Figure 8 presents the transient for the Buck converter in closed loop, enabling comparison between the responses of the controllers implemented separately: PID, LQR, and HTAC. Figure 9 presents in detail the responses for a load disturbance.
Thus, one can observe that the HTAC presents a response as fast as that of the LQR controller and a null steady-state error as for the PID controller. Figure 10 presents the responses of the HTAC for a load disturbance of 50%.

Figure 8 Startup process of the converter with HTAC control.

Figure 9 System response for a 50% load step for PID, LQR, and HTAC.

Figure 10 System response for a 50% load step with HTAC.

Figure 11 shows the converter inductor Lo current for the hyperbolic tangent adaptive control.

Figure 11 Current iL for the hyperbolic tangent adaptive control for a 50% load step.

The variation of the weights w1 and w2 applied to the HTAC controller, multiplied by the control actions of the PID and LQR controllers, is presented in Figure 12, where it is possible to observe that in the moments when disturbances occur, the weights exhibit variations that last up to the moment of stabilization.

Figure 12 Variation of the weights w1 and w2 of the HTAC controller. ## 5. Conclusions This work presented the design and simulations of an adaptive PID + LQR control technique applied to a step-down converter. The pole placement technique was used to guarantee that the two controllers work at the same operating point and that the system does not present excessive voltage and current overshoot. Knowing that the steady-state error of the converter output voltage for the PID is smaller than for the LQR control, and that the response time of the LQR controller is smaller than that of the PID, a parallel combination of the designed controllers is proposed, yielding an adaptive controller which improves the performance of the system, both in response time and in the reduction of overshoots of the controlled magnitudes. A hyperbolic tangent weight function is used to gather the best performance of each controller according to the system error. Thus, the best responses in settling time and overshoot are achieved, with the steady-state error annulled, as compared to the independent implementation of each controller.
The Altera DSP Builder framework was used in a MATLAB/SIMULINK environment for the implementation of the Hyperbolic Tangent Adaptive Control (HTAC) and to obtain real-time simulation results. --- *Source: 101650-2013-11-18.xml*
2013
# Some Discussions about the Error Functions on SO(3) and SE(3) for the Guidance of a UAV Using the Screw Algebra Theory

**Authors:** Yi Zhu; Xin Chen; Chuntao Li
**Journal:** Advances in Mathematical Physics (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1016530

---

## Abstract

In this paper a new error function designed on the 3-dimensional special Euclidean group SE(3) is proposed for the guidance of a UAV (Unmanned Aerial Vehicle). In the beginning, a detailed 6-DOF (Degree of Freedom) aircraft model is formulated, including 12 nonlinear differential equations. Secondly, the definitions of the adjoint representations are presented to establish the relationships of the Lie groups SO(3) and SE(3) and their Lie algebras so(3) and se(3). After that, the general situation of the differential equations with matrices belonging to SO(3) and SE(3) is presented. According to these equations the features of the error function on SO(3) are discussed. Then an error function on SE(3) is devised, which creates a new way of constructing error functions. In the simulation a trajectory tracking example is given with a target trajectory being a curve of an elliptic cylinder helix. The result shows that a better tracking performance is obtained with the newly devised error function.

---

## Body

## 1. Introduction

The way of computing the tracking errors plays an important role in the guidance process of a UAV. For the problem of either a 2D tracking in a plane or a 3D tracking in the physical space, much valuable research has been done on the guidance methods of “trajectory tracking” and “path following” [1]. To solve the tracking problems, different researchers hold different opinions. The early methods somewhat originate from the target tracking of missiles, such as proportional navigation, way point, and the vector field method [2–4].
Then the body-mass point model is usually used so that the direct relationship between the position deviation and the speed (or acceleration) can be considered. Sometimes the influence of the attitude angles and the angular velocity of the body frame is also taken into consideration [5]. In contrast, the features of the inner loops of the aircraft system are often clearly figured out for larger aircraft [6, 7]. For a 2D tracking issue there are some novel navigation methods and guidance strategies emerging in the light of geometrical intuition and physical interpretation [8–10]. Also, for the curves of 3D trajectories, the time-parameter constraint can be transformed into an arc-length parameter by the theory of differential geometry [11, 12]. Actually, when a 6-DOF model of an aircraft is concerned, there are at least three basic coordinate frames included, which are the inertial frame, the aircraft-body frame, and the airspeed frame. So the coordinate transformations between these different coordinate frames are directly related to the accuracy of the tracking-error computation; that is where the error functions on SO(3) are used. For example, in some literature the guidance strategy is implemented based on a mixed structure of the attitude loops and guidance loops with controllers of the forces and moments [13, 14]. Another instance is the moving frame guidance method. This guidance method changes the ordinary error functions from the inertial frame to a moving frame by orthogonal matrices which belong to SO(3) [15]. Much research has been done on the formulation of a moving frame of a given trajectory, as recently in [16–19] and previously in [20, 21]. However, designing the error functions of a moving frame is difficult because interdisciplinary knowledge is involved, such as the Lie group theory. Some works indicate that the analyses about the Lie group can be simplified by the screw algebra theory [22].
These analyses are particularly important in the tracking process of aircraft [23, 24]. So in this paper some discussions have been made to provide clear relationships between the Lie groups SO(3) and SE(3) and their Lie algebras so(3) and se(3). Then some features of the error functions on SO(3) are proved before a newly designed error function on SE(3) is proposed. Thus a new way of constructing error functions is presented. The effects of the different error functions are tested in the simulation with a 6-DOF UAV model. ## 2. Preliminary ### 2.1. UAV Model A flight control system is a bit more complicated than ordinary control systems. The analytic expressions of the 6-DOF motion of an aircraft, that is, the 12 nonlinear differential equations, are formulated as follows:

(I) Force equations:

(1) u̇ = vr − wq − g·sinθ + Fx/mc,
v̇ = −ur + wp + g·cosθ·sinϕ + Fy/mc,
ẇ = uq − vp + g·cosθ·cosϕ + Fz/mc.

(II) Kinematic equations:

(2) ϕ̇ = p + (q·sinϕ + r·cosϕ)·tanθ,
θ̇ = q·cosϕ − r·sinϕ,
ψ̇ = (q·sinϕ + r·cosϕ)/cosθ.

(III) Moment equations:

(3) ṗ = (c1·r + c2·p)·q + c3·L̄ + c4·N,
q̇ = c5·p·r − c6·(p² − r²) + c7·M,
ṙ = (c8·p − c2·r)·q + c4·L̄ + c9·N,

where c1 = ((Iy − Iz)Iz − Ixz²)/(IxIz − Ixz²), c2 = ((Ix − Iy + Iz)Ixz)/(IxIz − Ixz²), c3 = Iz/(IxIz − Ixz²), c4 = Ixz/(IxIz − Ixz²), c5 = (Iz − Ix)/Iy, c6 = Ixz/Iy, c7 = 1/Iy, c8 = (Ix(Ix − Iy) + Ixz²)/(IxIz − Ixz²), and c9 = Ix/(IxIz − Ixz²).

(IV) Navigation equations:

(4) ẋg = u·cosθ·cosψ + v·(sinϕ·sinθ·cosψ − cosϕ·sinψ) + w·(sinϕ·sinψ + cosϕ·sinθ·cosψ),
ẏg = u·cosθ·sinψ + v·(sinϕ·sinθ·sinψ + cosϕ·cosψ) + w·(−sinϕ·cosψ + cosϕ·sinθ·sinψ),
ḣg = u·sinθ − v·sinϕ·cosθ − w·cosϕ·cosθ,

or

(5) ẋg = V·cosγ·cosφ, ẏg = V·cosγ·sinφ, ḣg = V·sinγ,

where mc is the mass of the aircraft, Fi (i = x, y, z) are the forces along the axes of the aircraft-body coordinate frame, L̄, M, and N are the moments in the body frame, u, v, and w are the speed components in the body frame, θ, ψ, and ϕ represent the pitch, yaw, and bank angles, respectively, p, q, and r are the components of the angular velocity from the body frame to the inertial frame resolved in the body frame, xg, yg, and hg represent the position of the aircraft in the inertial frame, Ix, Iy, and Iz are the rotary inertias about the axes of the body frame, V is the true airspeed, and γ, φ are the flight-path angles between the first/second axis of the wind coordinate frame and the inertial frame, respectively.

In practice, we may not necessarily choose u, v, and w as the state variables of an aircraft model concerning different requirements, and we usually choose the true airspeed V and aerodynamic angles, such as the angle of attack α and the sideslip angle β, instead. According to the rotation matrix RW2B from the wind frame to the body frame, along with the relationship Xbody = RW2B·Xwind, one has the equation

(6) [u, v, w]ᵀ = RW2B·[V, 0, 0]ᵀ = [V·cosα·cosβ, V·sinβ, V·sinα·cosβ]ᵀ.

With (1)~(4) the 12 differential equations are obtained; however, this is not adequate to establish a complete nonlinear model of a UAV. More additional parts are needed. Figure 1 shows the inner structure of the UAV dynamic model.

Figure 1 The inner structure of the UAV model.

Here δe, δa, and δr are the angular deviations of the elevator, ailerons, and rudder, δT is the opening degree of the throttle, and Mach represents the Mach number. As shown in Figure 1, the actuators are δe, δa, δr, and δT rather than forces and moments. Then, it is viable to construct a simulation model of the given aircraft. By (1)~(4) a nonlinear state-space system can be obtained with the state vector defined by X = [u, v, w, ϕ, θ, ψ, p, q, r, xg, yg, h]ᵀ and the control input vector as U = [δe, δa, δr, δT]ᵀ. ### 2.2. Adjoint Representations of Lie Algebras Before the features of the error functions on SO(3) and SE(3) are discussed, the features of SO(3) and SE(3) themselves should be made clear.
In the following part, some basic concepts of the screw algebra and Lie group theory are discussed in detail. According to the screw algebra theory of motions of the rigid body, the definitions and adjoint representations of the 3-dimensional special orthogonal group SO3, the 3-dimensional special Euclidean group SE3, and the corresponding Lie algebras so3 and se3 are presented as follows.

(I) The 3×3 matrix adjoint representation of so3 is as follows. so3 is the Lie algebra of the 3-dimensional special orthogonal group SO3, the adjoint representation of which has the form of a skew-symmetric matrix:

(7) ad s = As = s∧ = [0, −sz, sy; sz, 0, −sx; −sy, sx, 0].

Since the Lie algebra so3 is represented as a 3×3 skew-symmetric matrix, one basis of this 3-dimensional vector space in the form of matrices is

(8) ad s1 = s1∧ = [0, 0, 0; 0, 0, −1; 0, 1, 0], ad s2 = s2∧ = [0, 0, 1; 0, 0, 0; −1, 0, 0], ad s3 = s3∧ = [0, −1, 0; 1, 0, 0; 0, 0, 0],

where s1 = [1, 0, 0]ᵀ, s2 = [0, 1, 0]ᵀ, and s3 = [0, 0, 1]ᵀ. Any element belonging to so3 can be represented as a linear combination of this basis.

(II) The standard and adjoint representations of se3 are as follows. se3 is the Lie algebra of the 3-dimensional special Euclidean group SE3. SEn, also denoted E⁺(n), is defined to describe rigid body motions including translations and rotations, which is based on an identity of a rigid body motion and a curve in the Euclidean group. The standard 4×4 matrix representation of se3 is

(9) E = [s∧, s0; 0ᵀ, 0],

where s∧ ∈ so3 and s0 ∈ R³. These elements belong to a space which is a subset of R4×4.
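The adjoint (hat) map of (7) can be checked numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

# Sketch of the adjoint (hat) map of eq. (7): s^ is skew-symmetric and
# acts on a vector as the cross product, s^ x = s x x.
def hat(s):
    sx, sy, sz = s
    return np.array([[0.0, -sz,  sy],
                     [ sz, 0.0, -sx],
                     [-sy,  sx, 0.0]])

s = np.array([1.0, 2.0, 3.0])
x = np.array([0.5, -1.0, 2.0])
assert np.allclose(hat(s) @ x, np.cross(s, x))  # hat map realizes the cross product
assert np.allclose(hat(s).T, -hat(s))           # skew-symmetry
```

Linearity of `hat` is what makes the three matrices of (8) a basis: `hat(s)` equals `s[0]*hat(s1) + s[1]*hat(s2) + s[2]*hat(s3)`.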
The generators of this 6-dimensional vector space are

(10) E1 = [0, 0, 0, 0; 0, 0, −1, 0; 0, 1, 0, 0; 0, 0, 0, 0], E2 = [0, 0, 1, 0; 0, 0, 0, 0; −1, 0, 0, 0; 0, 0, 0, 0], E3 = [0, −1, 0, 0; 1, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, 0], E4 = [0, 0, 0, 1; 0, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, 0], E5 = [0, 0, 0, 0; 0, 0, 0, 1; 0, 0, 0, 0; 0, 0, 0, 0], E6 = [0, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, 1; 0, 0, 0, 0].

Also there is a 6×6 matrix adjoint representation of se3 defined as

(11) U = ad S = [s∧, 0; s0∧, s∧],

where the operator ad S ⊂ R6×6 is isomorphic to E and the generators are

(12) ad S1 = [s1∧, 0; 0, s1∧], ad S2 = [s2∧, 0; 0, s2∧], ad S3 = [s3∧, 0; 0, s3∧], ad S4 = [0, 0; s1∧, 0], ad S5 = [0, 0; s2∧, 0], ad S6 = [0, 0; s3∧, 0].

We can see that the Lie algebra so3 ≅ R³ is a subspace of se3 ≅ R⁶, where the symbol ≅ means isomorphic.

(III) The exponential mapping is as follows. The exponential mapping establishes a connection between so3 and SO3, and between se3 and SE3 as well. According to the Rodrigues equation, when the rotation axis s and the revolute joint angle θ are given, the rotation matrix R can be obtained as

(13) R = e^(θAs) = I + sinθ·As + (1 − cosθ)·As²,

where s ∈ R³, As = s× ∈ so3, and R ∈ SO3. Formula (13) presents the exponential mapping from so3 to SO3, the proof of which can be found in the literature [22]. Similarly, the exponential mapping from se3 to SE3 is defined as

(14) H = e^(θE) = exp(θ·[As, s0; 0ᵀ, 0]) = [R, d; 0ᵀ, 1] = [e^(θAs), V·s0; 0ᵀ, 1],

where E is the standard 4×4 matrix representation of se3, and

(15) V = θI + (1 − cosθ)·As + (θ − sinθ)·As².

(IV) The relationship between SO3 and SE3 is as follows. The special Euclidean group SE3 is a closed subgroup of the 3-dimensional affine group Aff3. SE3 can be represented as a semidirect product of the special orthogonal group SO3 and the translation group T3; that is,

(16) SE3 ≅ SO3 ⋉ T3.

The geometric meaning of the above semidirect product is a rotation motion acting on a translation. Furthermore, a 6×6 finite displacement screw matrix is defined as

(17) N = [R, 0; AR, R] = [I, 0; A, I]·[R, 0; 0, R] = Nt·Nr,

where the rotation matrix R ∈ SO3 and A is a skew-symmetric matrix of the translation action. Then we can see that

(18) det N = det[R, 0; AR, R] = det R · det R = 1.

So the finite displacement screw matrix belongs to the special linear group SLn, which is a subgroup of the general linear group GLn. Also N is an element of the Lie group SE3.
UAV Model A flight control system is a bit more complicated than ordinary control systems. The analytic expressions of 6-DOF motion of an aircraft, that is, the 12 nonlinear differential equations, are formulated as follows:(I) Force equations:(1)u˙=vr-wq-gsin⁡θ+Fxmc,v˙=-ur+wp+gcos⁡θsin⁡ϕ+Fymc,w˙=uq-vp+gcos⁡θcos⁡ϕ+Fzmc.(II) Kinematic equations:(2)ϕ˙=p+qsin⁡ϕ+rcos⁡ϕtan⁡θ,θ˙=qcos⁡ϕ-rsin⁡ϕ,ψ˙=qsin⁡ϕ+rcos⁡ϕcos⁡θ.(III) Moment equations:(3)p˙=c1r+c2pq+c3L-+c4N,q˙=c5pr-c6p2-r2+c7M,r˙=c8p-c2rq+c4L-+c9N,where c1=(Iy-Iz)Iz-Ixz2/IxIz-Ixz2, c2=((Ix-Iy+Iz)Ixz)/IxIz-Ixz2, c3=Iz/IxIz-Ixz2, c4=Ixz/IxIz-Ixz2, c5=(Iz-Ix)/Iy, c6=Ixz/Iy, c7=1/Iy, c8=Ix(Ix-Iy)+Ixz2/IxIz-Ixz2, and c9=Ix/IxIz-Ixz2.(IV) Navigation equations:(4)x˙g=ucos⁡θcos⁡ψ+vsin⁡ϕsin⁡θcos⁡ψ-cos⁡ϕsin⁡ψ+wsin⁡ϕsin⁡ψ+cos⁡ϕsin⁡θcos⁡ψ,y˙g=ucos⁡θsin⁡ψ+vsin⁡ϕsin⁡θsin⁡ψ+cos⁡ϕcos⁡ψ+w-sin⁡ϕcos⁡ψ+cos⁡ϕsin⁡θsin⁡ψ,h˙g=usin⁡θ-vsin⁡ϕcos⁡θ-wcos⁡ϕcos⁡θor(5)x˙g=Vcos⁡γcos⁡φ,y˙g=Vcos⁡γsin⁡φ,h˙g=Vsin⁡γ,where mc is the mass of the aircraft, Fi(i=x,y,z) represents the force of axes of aircraft-body coordinate frame, respectively, L-, M, and N are moments of body frame, u, v, and w are speed components of body frame, θ, ψ, and ϕ represent pitch angle, yaw angle, and bank angle, respectively, p, q, and r are angular velocity from body frame to inertial frame resolved in body frame, xg, yg, and hg represent the position of the aircraft in inertial frame, Ix, Iy, and Iz are rotary inertias of axes of body frame, V is true airspeed, and γ, φ are flight-path angles between the first/second axis of wind coordinate frame and inertial frame, respectively.In practice, we may not necessarily chooseu, v, and w as the state variables of an aircraft model concerning different requirements and we usually choose the true air speed V and aerodynamic angles, such as angle of attack α and sideslip angle β, instead. 
According to the rotation matrix RW2B from wind frame to body frame, along with the relationship Xbody=RW2BXwind, one has the equation(6)uvw=RW2BV00=Vcos⁡αcos⁡βVsin⁡βVsin⁡αcos⁡β.With (1)~(4) the 12 differential equations are obtained; however it is not adequate to establish a complete nonlinear model of a UAV. More additional parts are needed. Figure 1 shows the inner structure of the UAV dynamic model.Figure 1 The inner structure of the UAV model.Hereδe, δa, and δr are the angular deviations of the elevator, ailerons, and rudder, δT is the opening degree of throttle, and Mach represents Mach number. As shown in Figure 1, the actuators are δe, δa, δr, and δT rather than forces and moments. Then, it is viable to construct a simulation model of the given aircraft. By (1)~(4) a nonlinear state-space system can be obtained with state variables defined by XT=u,v,w,ϕ,θ,ψ,p,q,r,xg,yg,h and control input vector as UT=δe,δα,δr,δT. ## 2.2. Adjoint Representations of Lie Algebras Before the features of the error functions on SO(3) and SE(3) are discussed, the features of SO(3) and SE(3) themselves should be made clear. 
In the following part, some basic concepts of the screw algebra and Lie group theory are discussed in detail. According to the screw algebra theory of motions of the rigid body, the definitions and adjoint representations of the 3-dimensional special orthogonal group SO(3), the 3-dimensional special Euclidean group SE(3), and the corresponding Lie algebras so(3) and se(3) are presented as follows.

(I) The 3×3 matrix adjoint representation of so(3) is as follows. so(3) is the Lie algebra of the 3-dimensional special orthogonal group SO(3), whose adjoint representation has the form of a skew-symmetric matrix:

$$\operatorname{ad}_s = A_s = s^{\wedge} = \begin{pmatrix} 0 & -s_z & s_y \\ s_z & 0 & -s_x \\ -s_y & s_x & 0 \end{pmatrix}. \quad (7)$$

Since the Lie algebra so(3) is represented by 3×3 skew-symmetric matrices, one basis of this 3-dimensional vector space of matrices is

$$\operatorname{ad}_{s_1} = s_1^{\wedge} = \begin{pmatrix} 0&0&0\\0&0&-1\\0&1&0 \end{pmatrix},\quad \operatorname{ad}_{s_2} = s_2^{\wedge} = \begin{pmatrix} 0&0&1\\0&0&0\\-1&0&0 \end{pmatrix},\quad \operatorname{ad}_{s_3} = s_3^{\wedge} = \begin{pmatrix} 0&-1&0\\1&0&0\\0&0&0 \end{pmatrix}, \quad (8)$$

where $s_1 = (1,0,0)^T$, $s_2 = (0,1,0)^T$, and $s_3 = (0,0,1)^T$. Any element of so(3) can be represented as a linear combination of this basis.

(II) The standard and adjoint representations of se(3) are as follows. se(3) is the Lie algebra of the 3-dimensional special Euclidean group SE(3). SE(n), also denoted E⁺(n), is defined to describe rigid body motions including translations and rotations, based on the identification of a rigid body motion with a curve in the Euclidean group. The standard 4×4 matrix representation of se(3) is

$$E = \begin{pmatrix} s^{\wedge} & s_0 \\ 0^T & 0 \end{pmatrix}, \quad (9)$$

where $s^{\wedge} \in so(3)$ and $s_0 \in \mathbb{R}^3$. These elements belong to a space which is a subset of $\mathbb{R}^{4\times 4}$.
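The hat map of equation (7) and the Rodrigues exponential, equation (13) below, can be sketched and sanity-checked in a few lines. This is a minimal numpy illustration under the stated definitions, not the paper's implementation:

```python
import numpy as np

def hat(s):
    """Skew-symmetric (hat) matrix of s, as in equation (7)."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def rodrigues(s, theta):
    """Exponential map so(3) -> SO(3), the Rodrigues formula (13);
    s must be a unit rotation axis."""
    A = hat(s)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

# A quarter turn about the z-axis maps e_x to e_y.
R = rodrigues(np.array([0.0, 0.0, 1.0]), np.pi / 2)
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])

# R is a proper rotation: orthogonal with determinant +1.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```

The two assertions at the end verify precisely what membership in SO(3) requires, so this check is worth keeping wherever an exponential map is computed numerically.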
The generators of this 6-dimensional vector space are

$$E_1 = \begin{pmatrix} 0&0&0&0\\0&0&-1&0\\0&1&0&0\\0&0&0&0 \end{pmatrix},\; E_2 = \begin{pmatrix} 0&0&1&0\\0&0&0&0\\-1&0&0&0\\0&0&0&0 \end{pmatrix},\; E_3 = \begin{pmatrix} 0&-1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0 \end{pmatrix},$$
$$E_4 = \begin{pmatrix} 0&0&0&1\\0&0&0&0\\0&0&0&0\\0&0&0&0 \end{pmatrix},\; E_5 = \begin{pmatrix} 0&0&0&0\\0&0&0&1\\0&0&0&0\\0&0&0&0 \end{pmatrix},\; E_6 = \begin{pmatrix} 0&0&0&0\\0&0&0&0\\0&0&0&1\\0&0&0&0 \end{pmatrix}. \quad (10)$$

There is also a 6×6 matrix adjoint representation of se(3), defined as

$$U = \operatorname{ad}_S = \begin{pmatrix} s^{\wedge} & 0 \\ s_0^{\wedge} & s^{\wedge} \end{pmatrix}, \quad (11)$$

where the operator $\operatorname{ad}_S \in \mathbb{R}^{6\times 6}$ is isomorphic to $E$ and the generators are

$$\operatorname{ad}_{S_i} = \begin{pmatrix} s_i^{\wedge} & 0 \\ 0 & s_i^{\wedge} \end{pmatrix} \;(i=1,2,3), \qquad \operatorname{ad}_{S_{i+3}} = \begin{pmatrix} 0 & 0 \\ s_i^{\wedge} & 0 \end{pmatrix} \;(i=1,2,3). \quad (12)$$

We can see that the Lie algebra $so(3) \cong \mathbb{R}^3$ is a subspace of $se(3) \cong \mathbb{R}^6$, where the symbol $\cong$ means isomorphic.

(III) The exponential mapping is as follows. The exponential mapping establishes a connection between so(3) and SO(3), and between se(3) and SE(3) as well. According to the Rodrigues equation, when the rotation axis $s$ and the revolute joint angle $\theta$ are given, the rotation matrix $R$ can be obtained as

$$R = e^{\theta A_s} = I + \sin\theta\, A_s + (1-\cos\theta)\, A_s^2, \quad (13)$$

where $s \in \mathbb{R}^3$, $A_s = s^{\wedge} \in so(3)$, and $R \in SO(3)$. Formula (13) presents the exponential mapping from so(3) to SO(3), the proof of which can be found in literature [22]. Similarly, the exponential mapping from se(3) to SE(3) is defined as

$$H = e^{\theta E} = \exp\begin{pmatrix} \theta A_s & \theta s_0 \\ 0^T & 0 \end{pmatrix} = \begin{pmatrix} R & d \\ 0^T & 1 \end{pmatrix} = \begin{pmatrix} e^{\theta A_s} & V s_0 \\ 0^T & 1 \end{pmatrix}, \quad (14)$$

where $E$ is the standard 4×4 matrix representation of se(3), and

$$V = \theta I + (1-\cos\theta)\, A_s + (\theta - \sin\theta)\, A_s^2. \quad (15)$$

(IV) The relationship between SO(3) and SE(3) is as follows. The special Euclidean group SE(3) is a closed subgroup of the 3-dimensional affine group Aff(3). SE(3) can be represented as a semidirect product of the special orthogonal group SO(3) and the translation group T(3); that is,

$$SE(3) \cong SO(3) \ltimes T(3). \quad (16)$$

The geometric meaning of this semidirect product is a rotation motion acting on a translation. Furthermore, a 6×6 finite displacement screw matrix is defined as

$$N = \begin{pmatrix} R & 0 \\ AR & R \end{pmatrix} = \begin{pmatrix} I & 0 \\ A & I \end{pmatrix}\begin{pmatrix} R & 0 \\ 0 & R \end{pmatrix} = N_t N_r, \quad (17)$$

where the rotation matrix $R \in SO(3)$ and $A$ is a skew-symmetric matrix of the translation action. Then we can see that

$$\det N = \det\begin{pmatrix} R & 0 \\ AR & R \end{pmatrix} = \det R \cdot \det R = 1. \quad (18)$$

So the finite displacement screw matrix belongs to the special linear group SL(n), which is a subgroup of the general linear group GL(n). Also, $N$ is an element of the Lie group SE(3). ## 3. Error Functions Defined on SO(3) and SE(3) ### 3.1.
General Situation In the beginning of this section an example is introduced to show the features of equations with matrices belonging toSE3. The following equations are given:(19)R˙=RΩ∧,P˙=RV,where R∈SO3 and P∈R3. Introduce the matrices P, G defined by(20)P=RP01×31,G=Ω∧V01×31,where P∈SE3, G∈se3 are both 4×4 matrices. The following equation holds:(21)P˙=P·G.Also, there is 6×6 matrix representation of elements of SE3:(22)G~=Ω∧0V∧Ω∧.These two adjoint representations of 4×4 and 6×6 matrices, that is, G and G~, are isomorphic to each other. se3, the Lie algebra of SE3, is isomorphic to SE3 as well. It is convenient to choose different forms we need in different situations. However, it is not difficult to verify that the 6×6 matrix representations of SE3 do not satisfy (21).This is different from the previous error functions which are defined onSO3, which is the subgroup of SE3, such as(23)ΦR,Rd=12trI-RdTR.In this paper a trial has been made to define an error function straightforwardly on SE3, so that the error function will include both the information of the rotation matrix and the position vectors or the speed vectors. The new error function on SE3 is defined by(24)ΨR,Rd,P,Pd=12trP-PdTP.Actually, (24) has a close relationship with ΦR,Rd. Since(25)P-PdTP=RT-RdT03×1PT-PdT1RP01×31=RT-RdTRRT-RdTPPT-PdTRPT-PdTP,thus(26)ΨR,Rd,P,Pd=12trP-PdTP=12trI-RdTR00P2-PdTP=12trI-RdTR+12P2-PdTP.Hence one can see that with an initial position P0 and a trajectory Pd, as long as a negative feedback of position signals is guaranteed, the position error ΔP is certain to be a decreasing function when it tends to the steady state. So the error function P2-PdTP is bounded. Let supP2-PdTP=D; then the domain of attraction of Ψ is regarded as a linear manifold of Φ. That means some features about the error function Φ defined on SO3 will still be helpful. ### 3.2. 
Error Function on SO(3) To choose the tracking error vectorseR and eΩ reasonably, let(27)Ψ-=12trI-RdTR+D.By finding the derivative of (27) we have(28)Ψ-˙=Φ˙=-12trRdTR˙=-12trRdTRΩ∧.Before further discussion, the following properties of 3-order skew-symmetric matrices are presented:(29)ItrAx∧=12trx∧A-AT,(30)II12trx∧A-AT=-xTA-AT∨,(31)IIIx·y∧z=y·z∧x,property  of  the  vector  mixed  product,(32)IVx∧y=x×y=-y×x=-y∧x,(33)Vx∧y∧z=x×y×z=y·x·z-z·x·yproperty  of  the  vector  triple  product,(34)VIx∧A+ATx∧=trAI3×3-Ax∧,(35)VIIRx∧RT=Rx∧,(36)VIIIR˙-R˙dRdTR=RΩ-RTRdΩd∧.Some proofs of (29)~(36) are clearly given [23, 24]; hence only the proofs of (29), (30), and (36) are given here. First of all, Theorem 1 is presented.Theorem 1. LetA, B be n×m and m×n matrices, respectively; then the following equation holds:(37)trAB=trBA.Proof. Let(38)A=a11⋯a1ma21⋯a2m⋮⋱⋮an1⋯anm=aij,i=1,2,…,n,j=1,2,…,m,B=b11⋯b1nb21⋯b2n⋮⋱⋮bm1⋯bmn=bij,i=1,2,…,m,j=1,2,…,n,and thus A B = c i j is a square matrix of n order, where cij=∑k=1maikbkj,i,j=1,2,…,n. B A = d i j is a square matrix of m order, where dij=∑k=1nbikakj,i,j=1,2,…,m:(39)trAB=∑i=1ncii=∑i=1n∑k=1maikbki,trBA=∑i=1mdii=∑i=1m∑k=1nbikaki=∑k=1n∑i=1makibik=∑i=1n∑k=1maikbki.So trAB=trBA, proof finished.In addition, by the definition of the trace of a matrix, for any square matrixA, obviously we have(40)trA=trAT.Then the proof of (29) is presented as follows.Proof of (29). By (37), (40), and the property of skew-symmetric matrix that(41)-x∧T=x∧,we have(42)trAx∧=12trx∧A+trAx∧=12trx∧A+trx∧TAT=12trx∧A+tr-x∧AT=12trx∧A-x∧AT=12trx∧A-AT.Proof finished.Proof of (30). For a three-order square matrixA, it is easy to see that A-AT is a skew-symmetric matrix. 
Denoting(43)x=x1,x2,x3T,A-AT∨=z=z1,z2,z3T,then(44)12trx∧A-AT=12tr0-x3x2x30-x1-x2x100-z3z2z30-z1-z2z10=12-x2z2-x3z3+-x1z1-x3z3+-x1z1-x2z2=-x1z1+x2z2+x3z3=-xTA-AT∨.Proof finished.By the definition of the inner product, one has that(45)-xTA-AT∨=-A-AT∨·x.By (29), (30), and (45), the following equation holds:(46)trAx∧=-A-AT∨·x.With regard to (46), (28) can be rewritten as(47)Ψ-˙=-12trRdTRΩ∧=12RdTR-RTRd∨Ω,where RdTR-RTRd∈so3 is a skew-symmetric matrix, ∨ is the inverse mapping of the hat mapping ∧. Thus, the tracking error function of attitude can be defined as(48)eR=12RdTR-RTRd∨.Then the proof of (36) is presented as follows.Proof of (36). According to the rule of finding the derivatives of the rotation matrices with respect to time, we have that(49)R˙=RΩ∧,R˙d=RdΩd∧.According to (35)(50)R˙-R˙dRdTR=RΩ∧-RdΩd∧RdTR=RΩ∧RT-RdΩd∧RdTR=RΩ∧-RdΩd∧R=RΩ-RdΩd∧R=RΩ-RTRdΩd∧R=RΩ-RTRdΩd∧RTR=RΩ-RTRdΩd∧.Proof finished.So, we can choose(51)eΩ=Ω-RTRdΩd,as the tracking error function of angular velocity vector. Actually, eΩ is the angular velocity of the rotation matrix RdTR, which is represented in the body frame, because of the following formulation:(52)dRdTRdt=RdTReΩ.The proof of (52) is given as follows.Proof of (52). (53) d R d T R d t = d R d T d t R + R d T d R d t = - Ω d ∧ R d T R + R d T R Ω ∧ = R d T R Ω ∧ - R d T R R T R d Ω d ∧ R d T R = R d T R Ω ∧ - R T R d Ω d ∧ = R d T R Ω - R T R d Ω d ∧ = R d T R e Ω .Proof finished.In getting the above conclusion the following equations are used:(54)R˙d=RdΩd∧,dRdTdt=R˙dT=RdΩd∧T=Ωd∧TRdT=-Ωd∧RdT. ### 3.3. Error Function on SE(3) As mentioned above, the error functionΨ is defined as (24) by the 4×4 adjoint matrix representation of elements on SE(3). The benefit of this adjoint representation rests with the simplicity of defining the semidirect product. However, the form of 6×6 matrix representation is adopted here for the convenience of calculation. 
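The attitude and angular-velocity tracking errors of equations (48) and (51), together with the trace identity (46) that motivates them, can be verified numerically. This is a minimal numpy sketch under the definitions above, not the paper's controller code:

```python
import numpy as np

def hat(x):
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def vee(S):
    """Inverse of the hat map for a 3x3 skew-symmetric matrix."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def e_R(R, Rd):
    """Attitude tracking error, equation (48)."""
    return 0.5 * vee(Rd.T @ R - R.T @ Rd)

def e_Omega(R, Rd, Omega, Omega_d):
    """Angular-velocity tracking error, equation (51)."""
    return Omega - R.T @ Rd @ Omega_d

# Numeric check of identity (46): tr(A x^) = -((A - A^T)^vee) . x
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
assert np.isclose(np.trace(A @ hat(x)), -vee(A - A.T) @ x)

# Both errors vanish when attitude and rate track the reference exactly.
Omega = np.array([0.1, -0.2, 0.3])
assert np.allclose(e_R(np.eye(3), np.eye(3)), 0.0)
assert np.allclose(e_Omega(np.eye(3), np.eye(3), Omega, Omega), 0.0)
```

The random-matrix check of (46) is a cheap way to catch sign errors in the vee convention, which is the most common slip when implementing these error functions.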
For a 6×6 matrix N∈SE(3), according to the principle of Chasles motion decomposition, one has(55)N=R0ARR=NlNc=I0lAsIR0re∧RR,where R∈SO3, A∈so3, and(56)A=Ae+Al=Ae+lAs=re∧+lAs,l being the Frobenius norm of matrices. By the definition of Frobenius norm, for a matrix A∈Rm×n(57)AF≜∑i=1m∑j=1naij21/2=trATA1/2.For a three-order skew-symmetric matrix Ax=x∧=0-x3x2x30-x1-x2x10 which is obtained by a hat mapping, its Frobenius norm is(58)AxF=2x12+x22+x321/2.Sometimes when it is necessary to change the pitch parameter of a screw and the variable h is added, Nh=I0hII, then(59)N~=NhN=I0hIIR0ARR=R0A~RR,where A~=h+A. See (59) for details and it is can be seen that N~∉SE(3) because A~∉so3. Before further discussion of N~, another theorem is presented.Theorem 2 (the Laplace’s expansion theorem). Ifk rows (or k columns) of n-order determinant D are selected, where 1≤k≤n-1, the sum of products of all the k-order subdeterminants of elements of the k rows (or k columns) and the corresponding algebraic cofactors equals the value of the determinant D.Detailed discussions of Laplace’s expansion theorem can easily be found in teaching materials of matrix theory or linear algebra, so the proof is omitted here. According to Theorem2, for a block lower (upper) triangular matrix(60)A=Bm×m0∗Cn×n,or (61)A=Bm×m∗0Cn×n,in all the subdeterminants of the first m rows of det⁡A, only one of them is nonzero. 
Thus, by expansion of the first m rows, the following deduction is obtained:(62)det⁡A=det⁡Bdet⁡C.So det⁡N~=R0A~RR=det⁡Rdet⁡R=1, and we can see that N~∈SL3 (the special linear group), which is a subgroup of general linear group GL3.Similar to (23) and (24), a new error function is defined by(63)Ξ1=trNdTN-NTNd,where, with regard to (59), Nd=RH2I0A~HRH2IRH2I∈SL(3), A~H=AH+hI=ωH∧+hI, ωH is the Darboux vector of the frame H and ωH∧∈so3, N=RB2I0ABRB2IRB2I∈SE(3), and AB=ωDarboux∣Bishop∧, substituted into (63), and we get(64)Ξ1=trRH2ITRH2ITA~HT0RH2ITRB2I0ABRB2IRB2I-RB2ITRB2ITABT0RB2ITRH2I0A~HRH2IRH2I=trRH2ITRB2I+RH2ITA~HTABRB2IRH2ITA~HTRB2IRH2ITABRB2IRH2ITRB2I-RB2ITRH2I+RB2ITABTA~HRH2IRB2ITABTRH2IRB2ITA~HRH2IRB2ITRH2I=trRH2ITRB2I+RH2ITA~HTABRB2I-RB2ITRH2I+RB2ITABTA~HRH2I+RH2ITRB2I-RB2ITRH2I=tr2RH2ITRB2I-RB2ITRH2I+RH2ITA~HTABRB2I-RB2ITABTA~HRH2I.Let M1=RH2ITRB2I-RB2ITRH2I and M2=RH2ITA~HTABRB2I-RB2ITABTA~HRH2I; then we have Ξ1=tr2M1+M2. Obviously, M1,M2 are both skew-symmetric matrices belonging to so3. Since elements of so3 have closure property with additive operation, thus 2M1+M2∈so3. Another vector ωHB=ωH-ωB is defined where ωB=AB∨=ωDarboux∣Bishop and another error function is defined by(65)Ξ2=12trωHB∧2M1+M2.By (30) (66)Ξ2=-ωHBT2M1+M2∨.
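Two of the facts used in this construction — the Frobenius norm of a hat matrix, equation (58), and the unit determinant of a block-triangular screw matrix, equations (18) and (62) — can be checked numerically. This is a small illustrative numpy sketch, not part of the paper's derivation:

```python
import numpy as np

def hat(x):
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

x = np.array([1.0, -2.0, 3.0])
# Equation (58): the Frobenius norm of a hat matrix is sqrt(2)|x|.
assert np.isclose(np.linalg.norm(hat(x), 'fro'),
                  np.sqrt(2.0) * np.linalg.norm(x))

# Equations (18)/(62): a block lower-triangular screw matrix
# N = [[R, 0], [A R, R]] has det N = (det R)^2 = 1.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
N = np.block([[R, np.zeros((3, 3))], [hat(x) @ R, R]])
assert np.isclose(np.linalg.det(N), 1.0)
```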
## 4. Simulations and Analysis Figure 2 shows an overview structure of the whole flight control system.
It can be seen that there are inner loops (attitude loops and trajectory loops) since the 6-DOF model of the UAV is used. However, the design of the inner loops is independent of the design of the error functions, so the inner-loop controllers are all chosen as ordinary ones.

Figure 2: Overview structure of the devised flight control system.

In the simulations, the employed UAV model originates from an improved, trial type of China's "Sharp Sword" unmanned combat aerial vehicle. Its three-view drawing is shown in Figure 3.

Figure 3: Three-view drawing of the UAV used in the simulations.

The main data of the UAV are shown in Table 1. From Table 1 it can be seen that the UAV has a mass of 2300 kg; thus the target trajectory is also a large curve.

Table 1: Data of the UAV in simulations.

| Parameter | Value | Inertia properties |
| --- | --- | --- |
| Aerodynamic configuration | Flying-wing configuration | — |
| Mass of UAV | 2.3 × 10³ kg | Ix = 2.6242 × 10³ |
| Wing span of UAV | 8.76 m | Iy = 4.5183 × 10³ |
| Max thrust (single engine of two) | 500 kgf | Iz = 8.9088 × 10³ |
| Max Mach | 0.85 | Ixy = Izy = Ixz = 0 |
| Maximum flight altitude | 15 km | — |

The target trajectory is chosen as an elliptic cylinder helix extending along the horizontal direction. The expression of the given trajectory with respect to the time parameter is

$$r(t) = \left(a_1 t + c_1,\; a_2\cos(b_1 t) + c_2,\; a_3\sin(b_2 t) + c_3\right)^T, \quad (67)$$

where a1, a2, a3, b1, b2, c1, c2, c3 are all coefficients. Tracking begins the moment the UAV arrives in a small neighborhood of any point of the target trajectory. The initial body-frame speed vector is V0 = (200, 0, 0) and the attitude angle vector is (ϕ, θ, ψ)0 = (0, 2, 0), where the units are "m/s" and "°," respectively. According to the initial condition of the UAV, the values of the coefficients of (67) are finally chosen as

$$a_1 = 200,\; a_2 = 300,\; a_3 = -250,\; b_1 = 0.1,\; b_2 = 0.1,\; c_1 = 0,\; c_2 = -300,\; c_3 = -3000. \quad (68)$$

The curve of the target trajectory r(t) is shown in Figure 4(a).

Figure 4: (a) Curve of the target trajectory. (b) A comparison of the forward errors.
(c) Curve of the pitch angle. (d) Force of the thrust.

Since the longitudinal and lateral channels of the flight control system are similar, only the tracking errors along the x-axis are taken as an example. In Figure 4(b), the magnitude of the tracking error is largest with the ordinary error function. When the error function on SO(3) is added, the tracking error decreases significantly; when the error function on SE(3) is added, the tracking error decreases further. It should be noted that in Figure 4(b) the condition "error function on SE(3)" means "error function on SE(3) is added," so the previous error functions are still in use.

Figure 4(c) shows the tracking curve of the pitch angle θ. The pitch angle is kept in a reasonable range by amplitude limits of 20° and 30°, and in the simulation it stays within ±8°. The slight buffeting near the vertex of the pitch-angle curve is related to the inner-loop controllers. Figure 4(d) shows the curve of the thrust T of a single engine (there are two engines). Much of the literature assumes that the thrust of the aircraft is large enough; in fact, an excessively large thrust makes it difficult to reduce the forward speed rapidly. The UAV mass is about 2300 kg, and in the simulation the maximum thrust-to-weight ratio is about 0.32. The first 30 s of Figure 4(d) show a rapid convergence of the forward tracking error. The thrust is controlled directly by the opening degree of the throttle, which is constrained to the closed interval [0, 1].

## 5. Conclusions

According to the nonlinear model of a UAV, a 3D trajectory tracking method is devised. The features of the error functions on SO(3) and SE(3) are discussed, and the tracking performance of the flight control system is tested in numerical simulation. The results show satisfactory tracking performance, so the error functions designed in this paper are feasible for the UAV tracking process.
The design of the new error function on SE(3) provides a new way of constructing error functions for solving the guidance problem of the UAV.

---

*Source: 1016530-2017-01-04.xml*
**Title:** Some Discussions about the Error Functions on SO(3) and SE(3) for the Guidance of a UAV Using the Screw Algebra Theory

**Authors:** Yi Zhu; Xin Chen; Chuntao Li

**Journal:** Advances in Mathematical Physics (2017)

**Category:** Mathematical Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2017/1016530
---

## Abstract

In this paper a new error function defined on the 3-dimensional special Euclidean group SE(3) is proposed for the guidance of a UAV (Unmanned Aerial Vehicle). First, a detailed 6-DOF (Degree of Freedom) aircraft model is formulated, comprising 12 nonlinear differential equations. Second, the definitions of the adjoint representations are presented to establish the relationships between the Lie groups SO(3) and SE(3) and their Lie algebras so(3) and se(3). The general form of the differential equations with matrices belonging to SO(3) and SE(3) is then presented, and from these equations the features of the error function on SO(3) are discussed. An error function on SE(3) is then devised, which creates a new way of constructing error functions. In the simulation, a trajectory tracking example is given with the target trajectory being an elliptic cylinder helix. The result shows that better tracking performance is obtained with the newly devised error function.

---

## Body

## 1. Introduction

The way of computing the tracking errors plays an important role in the guidance process of a UAV. For either 2D tracking in a plane or 3D tracking in physical space, much valuable research has been done on the guidance methods of "trajectory tracking" and "path following" [1]. To solve the tracking problems, different researchers hold different opinions. The early methods partly originate from the target tracking of missiles, such as proportional navigation, waypoint, and vector field methods [2–4]. There, the point-mass model is usually used so that the direct relationship between the position deviation and the speed (or acceleration) can be considered. Sometimes the influence of the attitude angles and the angular velocity of the body frame is also taken into account [5]. By contrast, the features of the inner loops of the aircraft system are often explicitly modeled for larger aircraft [6, 7].
For 2D tracking, some novel navigation methods and guidance strategies have emerged in light of geometrical intuition and physical interpretation [8–10]. Also, for 3D trajectory curves, the time-parameter constraint can be transformed into an arc-length parameter by the theory of differential geometry [11, 12].

Actually, when a 6-DOF model of an aircraft is considered, at least three basic coordinate frames are involved: the inertial frame, the aircraft-body frame, and the airspeed frame. The coordinate transformations between these frames are therefore directly related to the accuracy of the tracking-error computation, which is where the error functions on SO(3) are used. For example, in some works the guidance strategy is implemented based on a mixed structure of attitude loops and guidance loops with controllers of the forces and moments [13, 14]. Another instance is the moving-frame guidance method, which transforms the ordinary error functions from the inertial frame to a moving frame by orthogonal matrices belonging to SO(3) [15].

Much research has been done on the formulation of a moving frame of a given trajectory, recently in [16–19] and previously in [20, 21]. However, designing the error functions of a moving frame is difficult because interdisciplinary knowledge, such as Lie group theory, is involved. Some works indicate that analyses of Lie groups can be simplified by screw algebra theory [22]. These analyses are particularly important in the tracking process of aircraft [23, 24]. So in this paper some discussions are made to provide clear relationships between the Lie groups SO(3) and SE(3) and their Lie algebras so(3) and se(3). Then some features of the error functions on SO(3) are proved before a newly designed error function on SE(3) is proposed. Thus a new way of constructing error functions is presented.
The effects of the different error functions are tested in the simulation with a 6-DOF UAV model.
According to the rotation matrix RW2B from wind frame to body frame, along with the relationship Xbody=RW2BXwind, one has the equation(6)uvw=RW2BV00=Vcos⁡αcos⁡βVsin⁡βVsin⁡αcos⁡β.With (1)~(4) the 12 differential equations are obtained; however it is not adequate to establish a complete nonlinear model of a UAV. More additional parts are needed. Figure 1 shows the inner structure of the UAV dynamic model.Figure 1 The inner structure of the UAV model.Hereδe, δa, and δr are the angular deviations of the elevator, ailerons, and rudder, δT is the opening degree of throttle, and Mach represents Mach number. As shown in Figure 1, the actuators are δe, δa, δr, and δT rather than forces and moments. Then, it is viable to construct a simulation model of the given aircraft. By (1)~(4) a nonlinear state-space system can be obtained with state variables defined by XT=u,v,w,ϕ,θ,ψ,p,q,r,xg,yg,h and control input vector as UT=δe,δα,δr,δT. ### 2.2. Adjoint Representations of Lie Algebras Before the features of the error functions on SO(3) and SE(3) are discussed, the features of SO(3) and SE(3) themselves should be made clear. 
In the following part, some basic concepts of screw algebra and Lie group theory are discussed in detail.

According to the screw algebra theory of rigid body motions, the definitions and adjoint representations of the 3-dimensional special orthogonal group SO3, the 3-dimensional special Euclidean group SE3, and the corresponding Lie algebras so3 and se3 are presented as follows.

(I) The 3×3 matrix adjoint representation of so3 is as follows.

so3 is the Lie algebra of the 3-dimensional special orthogonal group SO3, the adjoint representation of which has the form of a skew-symmetric matrix:

(7) ad s = As = s∧ = [[0, −sz, sy], [sz, 0, −sx], [−sy, sx, 0]].

Since the Lie algebra so3 is represented as a 3×3 skew-symmetric matrix, one basis of this 3-dimensional vector space of matrices is

(8) ad s1 = s1∧ = [[0, 0, 0], [0, 0, −1], [0, 1, 0]], ad s2 = s2∧ = [[0, 0, 1], [0, 0, 0], [−1, 0, 0]], ad s3 = s3∧ = [[0, −1, 0], [1, 0, 0], [0, 0, 0]],

where s1 = (1, 0, 0)ᵀ, s2 = (0, 1, 0)ᵀ, and s3 = (0, 0, 1)ᵀ. Any element belonging to so3 can be represented as a linear combination of this basis.

(II) The standard and adjoint representations of se3 are as follows.

se3 is the Lie algebra of the 3-dimensional special Euclidean group SE3. SEn, also denoted E+n, is defined to describe rigid body motions including translations and rotations, which is based on an identification of a rigid body motion with a curve in the Euclidean group.

The standard 4×4 matrix representation of se3 is

(9) E = [[s∧, s0], [0ᵀ, 0]],

where s∧ ∈ so3 and s0 ∈ R3. These elements belong to a space which is a subset of R4×4.
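The hat map of (7) and its inverse (the vee map ∨ used later in the paper) can be sketched as follows; the defining property ŝ y = s × y and the skew-symmetry of the result are used as checks:

```python
import numpy as np

def hat(s):
    """so(3) hat map, eq. (7): 3-vector -> skew-symmetric matrix."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def vee(A):
    """Inverse (vee) map: skew-symmetric matrix -> 3-vector."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])
```

Any element of so3 is then a linear combination of `hat(s1)`, `hat(s2)`, `hat(s3)` with the basis vectors of (8).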
The generators of this 6-dimensional vector space are

(10) E1 = [[0,0,0,0],[0,0,−1,0],[0,1,0,0],[0,0,0,0]], E2 = [[0,0,1,0],[0,0,0,0],[−1,0,0,0],[0,0,0,0]], E3 = [[0,−1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], E4 = [[0,0,0,1],[0,0,0,0],[0,0,0,0],[0,0,0,0]], E5 = [[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,0,0,0]], E6 = [[0,0,0,0],[0,0,0,0],[0,0,0,1],[0,0,0,0]].

Also there is a 6×6 matrix adjoint representation of se3, defined as

(11) U = ad S = [[s∧, 0], [s0∧, s∧]],

where the operator ad S ∈ R6×6 is isomorphic to E and the generators are

(12) ad S1 = [[s1∧, 0], [0, s1∧]], ad S2 = [[s2∧, 0], [0, s2∧]], ad S3 = [[s3∧, 0], [0, s3∧]], ad S4 = [[0, 0], [s1∧, 0]], ad S5 = [[0, 0], [s2∧, 0]], ad S6 = [[0, 0], [s3∧, 0]].

We can see that the Lie algebra so3 ≅ R3 is a subspace of se3 ≅ R6, where the symbol ≅ means isomorphic.

(III) The exponential mapping is as follows.

The exponential mapping establishes a connection between so3 and SO3, and between se3 and SE3 as well. According to the Rodrigues equation, when the rotation axis s and the revolute joint angle θ are given, the rotation matrix R can be obtained as

(13) R = e^(θAs) = I + sin θ · As + (1 − cos θ) · As²,

where s ∈ R3, As = s∧ ∈ so3, and R ∈ SO3. Formula (13) presents the exponential mapping from so3 to SO3, the proof of which can be found in the literature [22].

Similarly, the exponential mapping from se3 to SE3 is defined as

(14) H = e^(θE) = exp [[θAs, θs0], [0ᵀ, 0]] = [[R, d], [0ᵀ, 1]] = [[e^(θAs), V s0], [0ᵀ, 1]],

where E is the standard 4×4 matrix representation of se3 and

(15) V = θI + (1 − cos θ)As + (θ − sin θ)As².

(IV) The relationship between SO3 and SE3 is as follows.

The special Euclidean group SE3 is a closed subgroup of the 3-dimensional affine group Aff3. SE3 can be represented as a semidirect product of the special orthogonal group SO3 and the translation group T3; that is,

(16) SE3 ≅ SO3 ∝ T3.

The geometric meaning of the above semidirect product is a rotation motion acting on a translation.

Furthermore, a 6×6 finite displacement screw matrix is defined as

(17) N = [[R, 0], [AR, R]] = [[I, 0], [A, I]] [[R, 0], [0, R]] = Nt Nr,

where the rotation matrix R ∈ SO3 and A is a skew-symmetric matrix of the translation action. Then we can see that

(18) det N = det [[R, 0], [AR, R]] = det R · det R = 1.

So the finite displacement screw matrix belongs to the special linear group SLn, which is a subgroup of the general linear group GLn. Also N is an element of the Lie group SE3.

## 3. Error Functions Defined on SO(3) and SE(3)

### 3.1.
General Situation

In the beginning of this section, an example is introduced to show the features of equations with matrices belonging to SE3. The following equations are given:

(19) R˙ = RΩ∧, P˙ = RV,

where R ∈ SO3 and P ∈ R3. Introduce the matrices P, G defined by

(20) P = [[R, P], [0₁ₓ₃, 1]], G = [[Ω∧, V], [0₁ₓ₃, 0]],

where P ∈ SE3 and G ∈ se3 are both 4×4 matrices. The following equation holds:

(21) P˙ = P·G.

Also, there is a 6×6 matrix representation of elements of se3:

(22) G̃ = [[Ω∧, 0], [V∧, Ω∧]].

These two representations by 4×4 and 6×6 matrices, that is, G and G̃, are isomorphic images of the same element of se3, the Lie algebra of SE3. It is convenient to choose whichever form is needed in a given situation. However, it is not difficult to verify that the 6×6 matrix representation of SE3 does not satisfy (21).

This is different from the previous error functions, which are defined on SO3, the rotation subgroup of SE3, such as

(23) Φ(R, Rd) = ½ tr(I − RdᵀR).

In this paper an attempt is made to define an error function directly on SE3, so that the error function includes both the information of the rotation matrix and that of the position or speed vectors. The new error function on SE3 is defined by

(24) Ψ(R, Rd, P, Pd) = ½ tr((P − Pd)ᵀP).

Actually, (24) has a close relationship with Φ(R, Rd). Since

(25) (P − Pd)ᵀP = [[Rᵀ − Rdᵀ, 0₃ₓ₁], [Pᵀ − Pdᵀ, 1]] [[R, P], [0₁ₓ₃, 1]] = [[(Rᵀ − Rdᵀ)R, (Rᵀ − Rdᵀ)P], [(Pᵀ − Pdᵀ)R, (Pᵀ − Pdᵀ)P]],

thus

(26) Ψ(R, Rd, P, Pd) = ½ tr((P − Pd)ᵀP) = ½ tr([[I − RdᵀR, 0], [0, P² − PdᵀP]]) = ½ tr(I − RdᵀR) + ½ (P² − PdᵀP).

Hence one can see that, with an initial position P0 and a trajectory Pd, as long as negative feedback of the position signals is guaranteed, the position error ΔP is certain to decrease as the system tends to the steady state. So the term P² − PdᵀP is bounded. Let sup(P² − PdᵀP) = D; then the domain of attraction of Ψ can be regarded as a linear manifold of Φ. That means some features of the error function Φ defined on SO3 will still be helpful.

### 3.2.
Error Function on SO(3)

To choose the tracking error vectors eR and eΩ reasonably, let

(27) Ψ̄ = ½ tr(I − RdᵀR) + D.

Differentiating (27), we have

(28) Ψ̄˙ = Φ˙ = −½ tr(RdᵀR˙) = −½ tr(RdᵀRΩ∧).

Before further discussion, the following properties of 3rd-order skew-symmetric matrices are presented:

(29) (I) tr(Ax∧) = ½ tr(x∧(A − Aᵀ)),
(30) (II) ½ tr(x∧(A − Aᵀ)) = −xᵀ(A − Aᵀ)∨,
(31) (III) x·(y∧z) = y·(z∧x) (property of the vector mixed product),
(32) (IV) x∧y = x×y = −y×x = −y∧x,
(33) (V) x∧(y∧z) = x×(y×z) = y(x·z) − z(x·y) (property of the vector triple product),
(34) (VI) x∧A + Aᵀx∧ = ((tr(A)I₃ₓ₃ − A)ᵀx)∧,
(35) (VII) Rx∧Rᵀ = (Rx)∧,
(36) (VIII) R˙ − R˙d(RdᵀR) = R(Ω − RᵀRdΩd)∧.

Proofs of (29)~(36) are given in [23, 24]; hence only the proofs of (29), (30), and (36) are given here. First of all, Theorem 1 is presented.

Theorem 1. Let A, B be n×m and m×n matrices, respectively; then the following equation holds:

(37) tr(AB) = tr(BA).

Proof. Let

(38) A = (aij), i = 1, 2, …, n, j = 1, 2, …, m, and B = (bij), i = 1, 2, …, m, j = 1, 2, …, n.

Thus AB = (cij) is a square matrix of order n, where cij = Σ_{k=1}^{m} aik bkj, i, j = 1, 2, …, n, and BA = (dij) is a square matrix of order m, where dij = Σ_{k=1}^{n} bik akj, i, j = 1, 2, …, m. Then

(39) tr(AB) = Σ_{i=1}^{n} cii = Σ_{i=1}^{n} Σ_{k=1}^{m} aik bki, tr(BA) = Σ_{i=1}^{m} dii = Σ_{i=1}^{m} Σ_{k=1}^{n} bik aki = Σ_{k=1}^{n} Σ_{i=1}^{m} aki bik = Σ_{i=1}^{n} Σ_{k=1}^{m} aik bki.

So tr(AB) = tr(BA). Proof finished.

In addition, by the definition of the trace of a matrix, for any square matrix A we obviously have

(40) tr(A) = tr(Aᵀ).

Then the proof of (29) is presented as follows.

Proof of (29). By (37), (40), and the property of skew-symmetric matrices that

(41) (x∧)ᵀ = −x∧,

we have

(42) tr(Ax∧) = ½(tr(x∧A) + tr(Ax∧)) = ½(tr(x∧A) + tr((x∧)ᵀAᵀ)) = ½(tr(x∧A) + tr(−x∧Aᵀ)) = ½(tr(x∧A) − tr(x∧Aᵀ)) = ½ tr(x∧(A − Aᵀ)).

Proof finished.

Proof of (30). For a 3rd-order square matrix A, it is easy to see that A − Aᵀ is a skew-symmetric matrix.
Denoting(43)x=x1,x2,x3T,A-AT∨=z=z1,z2,z3T,then(44)12trx∧A-AT=12tr0-x3x2x30-x1-x2x100-z3z2z30-z1-z2z10=12-x2z2-x3z3+-x1z1-x3z3+-x1z1-x2z2=-x1z1+x2z2+x3z3=-xTA-AT∨.Proof finished.By the definition of the inner product, one has that(45)-xTA-AT∨=-A-AT∨·x.By (29), (30), and (45), the following equation holds:(46)trAx∧=-A-AT∨·x.With regard to (46), (28) can be rewritten as(47)Ψ-˙=-12trRdTRΩ∧=12RdTR-RTRd∨Ω,where RdTR-RTRd∈so3 is a skew-symmetric matrix, ∨ is the inverse mapping of the hat mapping ∧. Thus, the tracking error function of attitude can be defined as(48)eR=12RdTR-RTRd∨.Then the proof of (36) is presented as follows.Proof of (36). According to the rule of finding the derivatives of the rotation matrices with respect to time, we have that(49)R˙=RΩ∧,R˙d=RdΩd∧.According to (35)(50)R˙-R˙dRdTR=RΩ∧-RdΩd∧RdTR=RΩ∧RT-RdΩd∧RdTR=RΩ∧-RdΩd∧R=RΩ-RdΩd∧R=RΩ-RTRdΩd∧R=RΩ-RTRdΩd∧RTR=RΩ-RTRdΩd∧.Proof finished.So, we can choose(51)eΩ=Ω-RTRdΩd,as the tracking error function of angular velocity vector. Actually, eΩ is the angular velocity of the rotation matrix RdTR, which is represented in the body frame, because of the following formulation:(52)dRdTRdt=RdTReΩ.The proof of (52) is given as follows.Proof of (52). (53) d R d T R d t = d R d T d t R + R d T d R d t = - Ω d ∧ R d T R + R d T R Ω ∧ = R d T R Ω ∧ - R d T R R T R d Ω d ∧ R d T R = R d T R Ω ∧ - R T R d Ω d ∧ = R d T R Ω - R T R d Ω d ∧ = R d T R e Ω .Proof finished.In getting the above conclusion the following equations are used:(54)R˙d=RdΩd∧,dRdTdt=R˙dT=RdΩd∧T=Ωd∧TRdT=-Ωd∧RdT. ### 3.3. Error Function on SE(3) As mentioned above, the error functionΨ is defined as (24) by the 4×4 adjoint matrix representation of elements on SE(3). The benefit of this adjoint representation rests with the simplicity of defining the semidirect product. However, the form of 6×6 matrix representation is adopted here for the convenience of calculation. 
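Before turning to the 6×6 computations, the SO(3) error functions just derived (Φ from (23) and e_R from (48)) can be sketched numerically, using the Rodrigues formula (13) to build test rotations. The function names are illustrative, not from the paper:

```python
import numpy as np

def hat(s):
    """so(3) hat map, eq. (7)."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def vee(A):
    """Inverse (vee) map of a skew-symmetric matrix."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def rodrigues(s, theta):
    """Exponential map so(3) -> SO(3), eq. (13):
    R = I + sin(theta)*As + (1 - cos(theta))*As^2."""
    A = hat(np.asarray(s, dtype=float) / np.linalg.norm(s))
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def attitude_errors(R, Rd):
    """Phi = 0.5*tr(I - Rd^T R)  (eq. 23) and
    e_R = 0.5*(Rd^T R - R^T Rd)^vee  (eq. 48)."""
    Phi = 0.5 * np.trace(np.eye(3) - Rd.T @ R)
    e_R = 0.5 * vee(Rd.T @ R - R.T @ Rd)
    return Phi, e_R
```

For R a rotation by θ about the z-axis and Rd = I, the closed forms are Φ = 1 − cos θ and e_R = (0, 0, sin θ), which makes a convenient correctness check.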
For a 6×6 matrix N ∈ SE(3), according to the principle of Chasles motion decomposition, one has

(55) N = [[R, 0], [AR, R]] = Nl Nc = [[I, 0], [lAs, I]] [[R, 0], [re∧R, R]],

where R ∈ SO3, A ∈ so3, and

(56) A = Ae + Al = Ae + lAs = re∧ + lAs,

l being defined through the Frobenius norm of matrices. By the definition of the Frobenius norm, for a matrix A ∈ Rm×n,

(57) ‖A‖F ≜ (Σ_{i=1}^{m} Σ_{j=1}^{n} aij²)^(1/2) = (tr(AᵀA))^(1/2).

For a 3rd-order skew-symmetric matrix Ax = x∧ = [[0, −x3, x2], [x3, 0, −x1], [−x2, x1, 0]] obtained by the hat mapping, the Frobenius norm is

(58) ‖Ax‖F = (2(x1² + x2² + x3²))^(1/2).

Sometimes it is necessary to change the pitch parameter of a screw; when the variable h is added, with Nh = [[I, 0], [hI, I]], then

(59) Ñ = Nh N = [[I, 0], [hI, I]] [[R, 0], [AR, R]] = [[R, 0], [ÃR, R]],

where Ã = hI + A. From (59) it can be seen that Ñ ∉ SE(3), because Ã ∉ so3. Before further discussion of Ñ, another theorem is presented.

Theorem 2 (Laplace's expansion theorem). If k rows (or k columns) of an n-order determinant D are selected, where 1 ≤ k ≤ n − 1, then the sum of the products of all the k-order subdeterminants formed from those k rows (or k columns) with their corresponding algebraic cofactors equals the value of the determinant D.

Detailed discussions of Laplace's expansion theorem can easily be found in standard texts on matrix theory or linear algebra, so the proof is omitted here. According to Theorem 2, for a block lower (upper) triangular matrix

(60) A = [[B_{m×m}, 0], [∗, C_{n×n}]]

or

(61) A = [[B_{m×m}, ∗], [0, C_{n×n}]],

among all the m-order subdeterminants formed from the first m rows of det A, only one is nonzero.
Thus, by expansion of the first m rows, the following deduction is obtained:

(62) det A = det B · det C.

So det Ñ = det [[R, 0], [ÃR, R]] = det R · det R = 1, and we can see that Ñ ∈ SL3 (the special linear group), which is a subgroup of the general linear group GL3.

Similar to (23) and (24), a new error function is defined by

(63) Ξ1 = tr(NdᵀN − NᵀNd),

where, with regard to (59), Nd = [[RH2I, 0], [ÃH RH2I, RH2I]] ∈ SL(3), ÃH = AH + hI = ωH∧ + hI, ωH is the Darboux vector of the frame H with ωH∧ ∈ so3, N = [[RB2I, 0], [AB RB2I, RB2I]] ∈ SE(3), and AB = (ωDarboux∣Bishop)∧. Substituting into (63), we get

(64) Ξ1 = tr([[RH2Iᵀ, RH2IᵀÃHᵀ], [0, RH2Iᵀ]] [[RB2I, 0], [AB RB2I, RB2I]] − [[RB2Iᵀ, RB2IᵀABᵀ], [0, RB2Iᵀ]] [[RH2I, 0], [ÃH RH2I, RH2I]]) = tr([[RH2IᵀRB2I + RH2IᵀÃHᵀAB RB2I, RH2IᵀÃHᵀRB2I], [RH2IᵀAB RB2I, RH2IᵀRB2I]] − [[RB2IᵀRH2I + RB2IᵀABᵀÃH RH2I, RB2IᵀABᵀRH2I], [RB2IᵀÃH RH2I, RB2IᵀRH2I]]) = tr(RH2IᵀRB2I + RH2IᵀÃHᵀAB RB2I − RB2IᵀRH2I − RB2IᵀABᵀÃH RH2I + RH2IᵀRB2I − RB2IᵀRH2I) = tr(2(RH2IᵀRB2I − RB2IᵀRH2I) + RH2IᵀÃHᵀAB RB2I − RB2IᵀABᵀÃH RH2I).

Let M1 = RH2IᵀRB2I − RB2IᵀRH2I and M2 = RH2IᵀÃHᵀAB RB2I − RB2IᵀABᵀÃH RH2I; then Ξ1 = tr(2M1 + M2). Obviously, M1 and M2 are both skew-symmetric matrices belonging to so3. Since so3 is closed under addition, 2M1 + M2 ∈ so3. Another vector ωHB = ωH − ωB is defined, where ωB = AB∨ = ωDarboux∣Bishop, and another error function is defined by

(65) Ξ2 = ½ tr(ωHB∧(2M1 + M2)).

By (30),

(66) Ξ2 = −ωHBᵀ(2M1 + M2)∨.

## 4. Simulations and Analysis

Figure 2 shows an overview structure of the whole flight control system.
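The algebraic structure behind Ξ2 in Section 3.3 (skew-symmetry of M1, the Frobenius-norm identity (58), and identity (30) linking the trace form (65) to the vector form (66)) can be checked numerically. This sketch drops the M2 term and uses arbitrary test rotations, so it only illustrates the structure, not the paper's frame conventions:

```python
import numpy as np

def hat(s):
    """so(3) hat map, eq. (7)."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def vee(A):
    """Inverse (vee) map of a skew-symmetric matrix."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def rodrigues(s, theta):
    """Exponential map so(3) -> SO(3), eq. (13)."""
    A = hat(np.asarray(s, dtype=float) / np.linalg.norm(s))
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

# Two arbitrary test rotations standing in for R_H2I and R_B2I.
Rh = rodrigues([1.0, 0.0, 0.0], 0.2)
Rb = rodrigues([0.0, 1.0, 0.0], 0.5)

M1 = Rh.T @ Rb - Rb.T @ Rh           # skew-symmetric, as claimed after eq. (64)
w = np.array([0.1, -0.2, 0.3])       # stand-in for omega_HB
Xi2 = 0.5 * np.trace(hat(w) @ M1)    # eq. (65) with the M2 term omitted
```

The test below checks that M1 is skew, that Ξ2 equals the vector form −ωᵀ(M1)∨ of (66), and that ‖hat(w)‖_F = √2 ‖w‖ as in (58) (`numpy.linalg.norm` of a matrix is the Frobenius norm by default).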
It can be seen that there are inner loops (attitude loops and trajectory loops), since the 6-DOF model of the UAV is used. However, the design of the inner loops is independent of the design of the error functions, so the inner loop controllers are all chosen as ordinary ones.

Figure 2. Overview structure of the devised flight control system.

In the simulations, the employed UAV model originates from an improved trial type of China's "Sharp Sword" unmanned combat aerial vehicle. Its three-view drawing is shown in Figure 3.

Figure 3. Three-view drawing of the UAV used in the simulations.

The main data of the UAV are shown in Table 1. From Table 1 it can be seen that the UAV has a considerable mass of 2300 kg; thus the target trajectory is also a large curve.

Table 1. Data of the UAV in simulations.

| Parameter | Value |
| --- | --- |
| Aerodynamic configuration | Flying-wing configuration |
| Mass of UAV | 2.3 × 10³ kg |
| Wing span of UAV | 8.76 m |
| Max thrust (single engine of two) | 500 kgf |
| Max Mach | 0.85 |
| Maximum flight altitude | 15 km |
| Inertia properties | Ix = 2.6242 × 10³, Iy = 4.5183 × 10³, Iz = 8.9088 × 10³, Ixy = Izy = Ixz = 0 |

The target trajectory is chosen as an elliptic cylinder helix extending along the horizontal direction. The expression of the given trajectory with respect to a time parameter is defined as

(67) r(t) = (a1·t + c1, a2·cos(b1·t) + c2, a3·sin(b2·t) + c3)ᵀ,

where a1, a2, a3, b1, b2, c1, c2, and c3 are all coefficients. The tracking action begins from the moment the UAV arrives in a small neighborhood of any point of the target trajectory. The initial body-frame speed vector is V0 = (200, 0, 0) and the initial attitude angle vector is (ϕ, θ, ψ)0 = (0, 2, 0), where the units are, respectively, m/s and degrees. According to the initial condition of the UAV, the values of the coefficients of (67) are finally chosen as

(68) a1 = 200, a2 = 300, a3 = −250, b1 = 0.1, b2 = 0.1, c1 = 0, c2 = −300, c3 = −3000.

The curve of the target trajectory r(t) is shown in Figure 4(a).

Figure 4. (a) Curve of the target trajectory. (b) A comparison of the forward errors.
(c) Curve of the pitch angle. (d) Force of the thrust.

Because the longitudinal and lateral channels of the flight control system are similar, only the tracking error along the x-axis is taken as an example here. In Figure 4(b), the magnitude of the tracking error is largest with the ordinary error function. When the error function on SO(3) is added, the tracking error decreases significantly; when the error function on SE(3) is added, the tracking error decreases further. It should be noted that in Figure 4(b) the condition "error function on SE(3)" means "error function on SE(3) is added," so the previous error functions are still being used.

Figure 4(c) shows the tracking curve of the pitch angle θ. The pitch angle is limited to reasonable ranges with amplitude limits of 20° and 30°, and in the simulation it stays within ±8°. The slight buffeting near the vertex of the pitch-angle curve is related to the inner loop controllers. Figure 4(d) shows the curve of the thrust T of a single engine (there are two engines). Much of the literature supposes that the thrust of the aircraft is large enough; in practice, however, an excessively large thrust makes it difficult to reduce the forward speed rapidly. The UAV mass is about 2300 kg, and in the simulation the maximum thrust-to-weight ratio is about 0.32. The first 30 s of Figure 4(d) show a process of rapid convergence of the forward tracking error. The thrust is controlled directly by the opening degree of the throttle, which is constrained to the closed interval [0, 1].

## 5. Conclusions

Based on the nonlinear model of a UAV, a 3D trajectory tracking method is devised. The features of the error functions on SO(3) and SE(3) have been discussed. The tracking performance of the flight control system is tested in numerical simulation. The simulation results show satisfactory tracking performance, so the error functions designed in this paper are feasible for the UAV tracking process.
The design of the new error function on SE(3) provides a new way of constructing error functions for solving the guidance problem of the UAV.

---

*Source: 1016530-2017-01-04.xml*
2017
# Unusual Presentation of a Sigmoid Mass with Chicken Bone Impaction in the Setting of Metastatic Lung Cancer **Authors:** Ziad Zeidan; Zarnie Lwin; Harish Iswariah; Sheyna Manawwar; Anitha Karunairajah; Manju Dashini Chandrasegaram **Journal:** Case Reports in Surgery (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1016534 --- ## Abstract Background. Ingestion of foreign bodies can cause various gastrointestinal tract complications including abscess formation, bowel obstruction, fistulae, haemorrhage, and perforation. While these foreign body-related complications can occur in normal bowel, diseased bowel from inflammation, strictures, or malignancy can cause diagnostic difficulties. Endoscopy is useful in visualising the bowel from within, providing views of the mucosa and malignancies arising from here, but its ability in diagnosing extramural malignancies arising beyond or external to the mucosa of the bowel as in the case of metastatic extramural disease can be limited. Case Summary. We present the case of a 60-year-old female with an impacted chicken bone in the sigmoid colon with formation of a sigmoid mass, on a background of metastatic lung cancer. On initial diagnosis of her lung cancer, there was mild Positron Emission Tomography (PET) avidity in the sigmoid colon which had been evaluated earlier in the year with a colonoscopy with findings of diverticular disease. Subsequent computed tomography (CT) scans demonstrated thickening of the sigmoid colon with a structure consistent with a foreign body distal to this colonic thickening. A repeat PET scan revealed an intensely fluorodeoxyglucose (FDG) avid mass in the sigmoid colon which was thought to be inflammatory. She was admitted for a flexible sigmoidoscopy and removal of the foreign body which was an impacted chicken bone. She had a fall and suffered a fractured hip. 
During her admission for her hip fracture, she had an exacerbation of her abdominal pain. She developed a large bowel obstruction, requiring laparotomy and Hartmann’s procedure to resect the sigmoid mass. Histopathology confirmed metastatic lung cancer to the sigmoid colon. Conclusion. This unusual presentation highlights the challenges of diagnosing ingested foreign bodies in patients with metastatic disease.

---

## Body

## 1. Introduction

Around 20% of ingested foreign bodies fail to pass through the gastrointestinal tract [1]. These can result in complications such as abscess formation, bowel obstruction, fistulae, haemorrhage, and perforation [2]. These complications can present in a variety of clinical scenarios. The purpose of this case report is to highlight one scenario in which an ingested foreign body may present, to outline the challenges of reaching the diagnosis, and to describe the possible limitations of endoscopic investigations in diagnosing a colonic malignancy.

Our patient had an impacted chicken bone in the sigmoid colon in the setting of metastatic non-small-cell lung cancer. This was investigated radiologically and found to be an intensely FDG-PET avid mass, initially presumed to be either an inflammatory mass related to the chicken bone impaction or metastatic disease related to her lung cancer. The mass appeared to resolve upon removal of the chicken bone; however, she re-presented later with a subacute large bowel obstruction related to the sigmoid mass, which was found at surgery to be metastatic lung cancer. Our case therefore highlights the difficulties of establishing a diagnosis in this complex setting.

In this case report, we also present a literature review of chicken bones in the colon and examine patterns across the reported presentations. PubMed and Google Scholar were both searched using the terms “chicken bone” AND “bowel” OR “large bowel” OR “colon”.
The results were systematically reviewed to include only case reports of chicken bones in the large bowel, and the details of each case were analysed for the purposes of the literature review.

## 2. Case Presentation

We present the case of a 60-year-old lady who initially presented with a Pseudomonas empyema and a right hilar mass. Initial diagnostic bronchoscopy revealed no endobronchial lesion. She was treated by the respiratory and infectious diseases teams with decortication and antibiotics, which resulted in marked clinical improvement. Follow-up imaging showed a persistent right hilar mass, necessitating a repeat diagnostic bronchoscopy and biopsy. This revealed a non-small-cell lung cancer (NSCLC) adenocarcinoma which was EGFR and ALK negative.

Baseline staging imaging revealed that she had metastatic disease with a right lung primary lesion, mediastinal nodes, and adrenal, frontal skull bone, and left pelvic bone metastases (T4N2M1c). She underwent an FDG-PET scan as part of her staging investigations in June 2017, revealing an area of intense heterogeneous FDG-PET avidity in the sigmoid colon. This was suspicious for a metastatic deposit or a complication secondary to diverticular disease (Figure 1). However, a colonoscopy done 6 months prior had been normal. A CT scan demonstrated a focal area of thickening of the sigmoid colon (Figure 2); however, given the recent colonoscopy findings, malignancy was deemed less likely in this situation.

Figure 1 FDG-PET scan with an extensive right upper lobar and mediastinal mass in keeping with primary non-small-cell lung cancer (arrow).
Intense heterogeneous uptake in the sigmoid colon (white arrow), which could represent a synchronous malignancy or a complication secondary to diverticular disease.

Figure 2 Axial CT highlighting a focal area of thickening in the wall of the sigmoid colon with surrounding diverticula.

The patient had minimal comorbidities, and palliative systemic treatment, including radiation, was organised. She proceeded to carboplatin plus gemcitabine chemotherapy and completed 4 cycles in September 2017. She received palliative radiation to the right frontal bone and left pelvis metastatic deposits. She was then commenced on maintenance pemetrexed chemotherapy in October 2017.

In March 2018, she had a repeat colonoscopy, which revealed two polyps and evidence of diverticulosis in the sigmoid and descending colon. The polyps were removed, and histopathology revealed no evidence of malignancy.

In April 2018, she developed asymptomatic low-volume brain metastases in the left temporal, left occipital, and right posterior frontal lobes, ranging from 3 mm to 16 mm in diameter. She underwent gamma knife treatment to these lesions and proceeded to Nivolumab immunotherapy in April 2018.

After 2 cycles of Nivolumab, our patient developed mild lower abdominal pain, which she reported during her outpatient oncology visits. This had been diagnosed as diverticulitis by her general practitioner, who commenced antibiotic treatment.

A CT scan demonstrated circumferential thickening of the bowel wall in the sigmoid colon and an intraluminal tubular structure distal to this, suspicious for a foreign body (Figures 3 and 4). The patient could not remember ingesting anything unusual, nor ingesting a bone. She also did not have any further colonic instrumentation after her colonoscopy. There was some thought that this may have been a clip from her colonoscopy, although the appearance of the foreign body was not consistent with this.
Nivolumab was ceased and antibiotics were continued.

Figure 3 Sagittal CT highlighting a hyperdense tubular foreign object in the sigmoid colon (arrow), in addition to displaying a mass-like thickening of the sigmoid colon.

Figure 4 Axial CT scan. The arrow points out a cross-sectional image of the foreign body in question, while the outlined area represents the mass-like thickening of the sigmoid colon proximal to the foreign body.

The patient continued to eat normally during this time and reported no changes in her bowel habits. She had no fevers, and the only abnormality on her blood results was a raised C-reactive protein. The clinical decision was to follow this closely with serial imaging. Progress imaging 2 weeks later confirmed persistence of the foreign body. Consequently, our patient was admitted for intravenous antibiotics due to ongoing lower abdominal and suprapubic pain. A repeat FDG-PET-CT scan revealed an intensely FDG avid mass in the midsigmoid colon (Figure 5). The increase in size of the mass was concerning for a primary neoplasm or an extramural metastatic deposit from our patient’s advanced lung cancer, given that her colonoscopy had revealed no mucosal neoplasm.

Figure 5 FDG-PET scan of our patient, following two weeks of serial radiological imaging to monitor the foreign body. An intensely FDG-PET avid mass in the sigmoid colon was highlighted on imaging (arrow).

Despite these findings, it was still possible that this was secondary to an inflammatory rather than a neoplastic process. The patient was scheduled for a flexible sigmoidoscopy to evaluate the intracolonic foreign body. This revealed a chicken bone impacted in the sigmoid colon (Figure 6). The extent of the inflammation was such that the scope could not be passed 10 cm beyond the chicken bone. Nevertheless, the bone was easily removed with a snare.
Imaging was conducted after 3 days to ensure there was no perforation or complication as a result of the procedure, given our patient’s concomitant chemotherapy, following which she was discharged.

Figure 6 Image of chicken bone retrieved with flexible sigmoidoscopy. The bone measured 6 cm in length with no apparent sharp edges.

The patient unfortunately re-presented the day after discharge with a hip fracture following a mechanical fall. She underwent a hip replacement and during her postoperative recovery developed more abdominal pain. A further CT scan raised concern that the mass had become an intramural abscess, with images displaying some gas locules within it (Figure 7). She was managed with a further 2 weeks of intravenous antibiotics. Progress imaging revealed little change in the mass, and the antibiotics were ceased.

Figure 7 Coronal CT of the abdomen and pelvis highlighting an intramural sigmoid mass (a), with the appearance of abscess transformation and gas locules evident (b).

She was discharged and remained well for the first week following her discharge. The following week, she developed worsening pain, fevers, and a subacute large bowel obstruction. She underwent an emergency laparotomy, at which time she was found to have a large, fungating, hard mass densely adherent to the bladder. She underwent a resection of this sigmoid mass along with a contiguous segment of the bladder (Figure 8). The segment of the bladder was repaired, and an end colostomy was fashioned. Histopathology confirmed that this mass was a large deposit of metastatic lung cancer (Figure 9).

Figure 8 Image of resected sigmoid mass, following laparotomy. Histopathology confirmed the mass to be metastatic lung cancer.

Figure 9 Photomicrograph of patient’s resected sigmoid mass. (a) H&E staining displaying atypical tumour cells and areas of necrosis under 100x magnification.
(b) Specimen under 100x magnification with CK7 staining outlining a diffuse distribution of tumour cells.

Unfortunately, during the course of her recovery, our patient had another fall and broke her other hip. She has since had this hip replaced, has recovered from her surgery, and is managing her stoma. She underwent further rehabilitation and was discharged home. She remains on systemic treatment for metastatic lung cancer.

## 3. Discussion

Our case represents a rare and unusual presentation of an impacted chicken bone in the setting of a sigmoid mass. Thirty-six reports of complications resulting from chicken bones in the large bowel were identified in the English literature (Table 1). The sigmoid colon was implicated in 22 of these 36 case reports. This is not surprising, as the rectosigmoid junction is one of the narrowest regions of the gastrointestinal tract and hence the most likely site at which complications from ingested foreign bodies present [3].

Table 1 Case reports of ingested chicken bones in the large bowel derived from the English literature [3–37].

| Author and country | Patient | Presentation | Investigations | Diagnosis | Management |
|---|---|---|---|---|---|
| Glasson et al. [3]; Wagga Wagga, Australia | 70-year-old, male | Abdominal pain; weight loss; altered bowel habits | Full blood count; CT abdomen; abdominal X-ray; laparotomy | Perforated sigmoid diverticulum with fibrous adhesions to the ileocaecal junction | Subtotal colectomy with ileorectal anastomosis |
| Werner and Gallegos-Orozco [4]; Arizona, USA | 65-year-old, female | Fatigue; nausea; pyrexia | Abdominal examination; full blood count; CT | Sigmoid perforation with hepatic abscesses | Colonoscopy and antibiotics |
| McGregor et al. [5]; Kansas, USA | 86-year-old, male | Left-sided abdominal pain; vomiting; anorexia | Abdominal X-ray; colonoscopy | Sigmoid perforation with peritonitis and adhesions; underlying adenocarcinoma | Sigmoid resection with end colostomy and Hartmann’s pouch |
| Girelli and Colombo [6]; Arsizio, Italy | 70-year-old, male | Severe rectal bleeding | Endoscopy; colonoscopy | Bone impacted in hepatic flexure | Removal with polypectomy snare |
| Coyte et al. [7]; Glasgow, UK | 76-year-old, male | Abdominal pain; vomiting; pyrexia | Erect chest X-ray; CT | Small and large bowel perforation | Resection of midjejunum and sigmoid |
| Terrace et al. [8]; Edinburgh, UK | 85-year-old, male | Left lower quadrant pain; diarrhoea | Erect chest X-ray; CT abdomen and pelvis | Sigmoid perforation with distal adenocarcinoma | Anterior resection with colorectal anastomosis |
| Mesina et al. [9]; Craiova, Romania | 52-year-old, female | Left perianal pain with swelling; pyrexia | Physical examination | Ischiorectal abscess | Tear-drop incision |
| Cardoso et al. [10]; Setubal, Portugal | 80-year-old, male | Vomiting; diarrhoea; pyrexia | Full blood count; abdominal ultrasound; CT | Hepatic abscess; bone located in the ascending colon | Colonoscopy |
| Park et al. [11]; Seoul, South Korea | 68-year-old, female | Anal pain and bleeding; constipation | Digital rectal examination; abdominal X-ray; CT abdomen and pelvis | Stercoral ulcer of the rectum | Flexible sigmoidoscopy and sucralfate enema postoperatively |
| Akhtar et al. [12]; Belfast, UK | 46-year-old, male | Abdominal pain; vomiting | Full blood count; erect chest X-ray | Sigmoid perforation | Laparotomy with repair of perforation |
| Vardaki et al. [13]; Athens, Greece | 69-year-old, male | Abdominal pain | Full blood count; CT abdomen | Sigmoid perforation with underlying carcinoma | Open surgery |
| Rasheed et al. [14]; Massachusetts, USA | 59-year-old, male | Left lower quadrant pain | CT abdomen | Sigmoid perforation | Surgical management |
| Kornprat et al. [15]; Graz, Austria | 82-year-old, female | Sepsis; severe abdominal pain | Full blood count; CT abdomen and pelvis | Perforated sigmoid diverticulum; phlegmonous inflammation of the abdominal wall | Emergency Hartmann’s procedure with necrectomy of the abdominal wall |
| Clements et al. [16]; Virginia, USA | 66-year-old, male | Sepsis; anuria | Urine/blood cultures; renal ultrasound; CT KUB; colonoscopy | Colovesical fistula with submucosal/intramural haemorrhage in the sigmoid colon | Low anterior resection with primary anastomosis and bladder repair |
| Tay et al. [17]; Singapore | 73-year-old, male | Irreducible left inguinal hernia; abdominal pain | CT abdomen; exploratory laparotomy | Perforated sigmoid colon | Sigmoid colectomy |
| Joglekar et al. [18]; Great Yarmouth, UK | 47-year-old, male | Abdominal pain; diarrhoea | Full blood count; urine dipstick; laparotomy | Perforated sigmoid colon | Repair of perforation |
| Bleich [19]; Connecticut, USA | 54-year-old, female | Left lower quadrant pain; pyrexia | Full blood count; CT abdomen (IV and oral contrast) | Impacted chicken bone in sigmoid diverticulum | Flexible sigmoidoscopy and oral antibiotics |
| Brucculeri et al. [20]; Monserrato, Italy | 75-year-old, female | Lower abdominal pain | CT abdomen (with and without contrast); flexible sigmoidoscopy | Impacted chicken bone across the diameter of the colon wall | Laser source contact and removal of divided bone with forceps |
| Domínguez-Jiménez and Jaén-Reyes [21]; Andujar, Spain | 79-year-old, female | Asymptomatic, presenting for programmed colonoscopy | Colonoscopy; subsequent CT abdomen | Perforated sigmoid diverticulum with thickening of the right pelvic fascia | Conservative management; patient expelled bone in faeces after 2 months |
| Khan et al. [22]; Craigavon, Northern Ireland | 56-year-old, male | Painful haematuria; polyuria; pneumaturia | Urine culture; IV urogram; CT scan; cystoscopy | Colovesical fistula secondary to perforated colon wall | Surgical exploration with resection of perforated bowel |
| Lubel and Wiley [23]; Woodville South, Australia | 54-year-old, female | Persistent lower abdominal pain; rectal mucous | Colonoscopy | Chicken bone impacted in inflamed diverticula | Laparotomy with sigmoid resection |
| Mapelli et al. [24]; Louisiana, USA | 72-year-old, female | Left lower quadrant pain; anorexia | Abdominal ultrasound; colonoscopy | Perforation of the sigmoid colon | Resection of the sigmoid colon |
| Milivojevic et al. [25]; Belgrade, Serbia | 75-year-old, female | Nausea; left lower quadrant pain; fever | Abdominal X-ray; colonoscopy | Impacted chicken bone in the sigmoid colon | Colonoscopy |
| Owen et al. [26]; London, UK | 65-year-old, male | Severe lower abdominal pain; dehydration; pyrexia | Erect chest X-ray; CT abdomen; laparoscopy | Sigmoid perforation | Colonoscopy with insertion of abdominal drain |
| Rabb et al. [27]; South Yorkshire, UK | 69-year-old, male | Asymptomatic; bowel cancer screening (positive for faecal occult blood) | Colonoscopy | Impaction of chicken bone in bowel diverticulum | Laparoscopic sigmoid colectomy |
| Rex and Bilotta [28]; Indiana, USA | 73-year-old, male | Lower abdominal pain | Colonoscopy | Impacted chicken bone across two diverticula | Colonoscopy |
| Rex and Bilotta [28]; Indiana, USA | 81-year-old, female | Lower abdominal pain; positive faecal occult blood | Barium enema; colonoscopy | Impacted chicken bone in sigmoid diverticula | Colonoscopy |
| Tarnasky et al. [29]; North Carolina, USA | 80-year-old, female | Chronic diarrhoea; positive faecal occult blood | Abdominal examination; colonoscopy | Perforated sigmoid colon due to impacted chicken bone | Colonoscopy |
| Chen et al. [30]; Sydney, Australia | 84-year-old, female | Lower abdominal pain | Colonoscopy; CT abdomen | Impacted chicken bone across the diameter of the colon lumen | Nd:YAG laser/colonoscopy |
| Ross et al. [31]; Glasgow, UK | 87-year-old, female | Severe bleeding per anus | CT abdomen with angiogram | Impacted chicken bone in sigmoid diverticulum with arterial bleeding | Flexible sigmoidoscopy |
| Elmoghrabi et al. [32]; Michigan, USA | 70-year-old, female | Lower abdominal, pelvic, and rectal pain | Abdominal X-ray; CT abdomen/pelvis | Large rectal stricture secondary to impacted chicken bone | Resection of the rectum and distal sigmoid colon |
| Davies [33]; Cardiff, UK | 31-year-old, male | Severe rectal pain | Abdominal X-ray | Perforated rectum immediately proximal to anal margin | Digital removal followed by proctosigmoidoscopy |
| Jeen et al. [34]; Seoul, South Korea | 73-year-old, female | Abdominal cramping; diarrhoea | Colonoscopy | Chicken bone impaction in the sigmoid colon across the lumen diameter | Balloon dilatation and extraction |
| Osler et al. [35]; New York, USA | 78-year-old, female | Abdominal pain; nausea; vomiting | CT abdomen; exploratory laparotomy | Sigmoid perforation distal to colonic carcinoma | Hartmann’s resection with end sigmoid colostomy |
| Moreira et al. [36]; Pennsylvania, USA | 31-year-old, male | Pain around the anal canal and scrotum; pyrexia | Open surgery | Necrotising fasciitis with perianal and scrotal abscesses | Debridement and antibiotic therapy |
| Muñoz et al. [37]; Baracaldo, Spain | 67-year-old, male | Left lower quadrant abdominal pain; tenesmus | Abdominal X-ray; barium enema | Impacted chicken bone in the sigmoid colon | Colonoscopy |

In our review of the 36 reported cases of complications from chicken bone ingestion, nonspecific abdominal pain was a common presenting complaint, much as in our patient. Radiology formed a cornerstone of the workup of patients with ingested foreign bodies, with CT and abdominal X-ray the most commonly ordered investigations. Ultimately, endoscopy was the most common means of obtaining a definitive diagnosis while concomitantly managing the condition. Surgery, however, was necessary where the chicken bone had led to serious complications.

Among these complications, bowel perforation occurred in 19 of the 36 case reports analysed. A history of gastrointestinal disease, such as diverticulosis or colonic malignancy, predisposes individuals to complications of ingested foreign bodies, especially perforation [1]. In our patient, the foreign body persisted in the sigmoid colon just beyond the mass and did not cause a perforation despite her known diverticulosis.

A review of the literature revealed that patients with a history of alcoholism, dentures, or sensory neuropathy are most at risk of swallowing a foreign body [2].
Our patient did have dentures, which may well have predisposed her to accidentally ingesting a chicken bone that she could not recall at the time.

Our patient was on an immune checkpoint inhibitor for her lung cancer. Immune checkpoint inhibitors have known immune-related gastrointestinal toxicities such as diarrhoea and colitis, and rare cases of bowel perforation requiring colostomy have been reported in the literature [38]. The development of an intramural sigmoid abscess in our patient following the chicken bone removal could be attributed solely to the chicken bone impaction and its subsequent removal: the impacted bone may have affected or breached the luminal integrity of the bowel focally, leading to abscess formation on its removal. Equally, tumour response in the metastatic lung cancer deposit in the sigmoid colon could hypothetically have contributed to a breach in the colonic integrity and the formation of the intramural abscess. In our patient, both factors may have played a role in the abscess formation and the ensuing colonic obstruction that necessitated surgery.

To our knowledge, this is the first case of chicken bone impaction in the setting of metastatic lung cancer to the sigmoid colon. It illustrates the difficulty, even with modern imaging and FDG-PET, of differentiating inflammatory from neoplastic processes in the bowel. While colonoscopy is useful in visualising the bowel from within and is crucial in diagnosing malignancies arising from the bowel mucosa, its ability to diagnose extramural malignancies arising beyond or external to the mucosa, as in metastatic extramural disease, can be limited.

## 4. Conclusion

Ingested foreign bodies mostly present with nonspecific abdominal pain, and while the majority are managed surgically, they can sometimes be retrieved endoscopically.
In the large bowel, the sigmoid colon is the most common site of complications arising from ingested chicken bones. The literature review identified that perforation of the bowel tends to occur in the setting of diverticular disease and malignancy. Our case reflects the diagnostic complexity of an ingested foreign body in a patient with metastatic disease, despite modern radiological investigative modalities and endoscopy. This report highlights the value of keeping ingested foreign bodies in mind when formulating differential diagnoses for nonspecific abdominal pain. It also identifies a key area of oncological practice in which rigorous follow-up is essential for detecting metastases from primary malignancies to distant organ sites.

---

*Source: 1016534-2019-06-26.xml*
1016534-2019-06-26_1016534-2019-06-26.md
24,932
Unusual Presentation of a Sigmoid Mass with Chicken Bone Impaction in the Setting of Metastatic Lung Cancer
Ziad Zeidan; Zarnie Lwin; Harish Iswariah; Sheyna Manawwar; Anitha Karunairajah; Manju Dashini Chandrasegaram
Case Reports in Surgery (2019)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2019/1016534
1016534-2019-06-26.xml
--- ## Abstract Background. Ingestion of foreign bodies can cause various gastrointestinal tract complications including abscess formation, bowel obstruction, fistulae, haemorrhage, and perforation. While these foreign body-related complications can occur in normal bowel, diseased bowel from inflammation, strictures, or malignancy can cause diagnostic difficulties. Endoscopy is useful in visualising the bowel from within, providing views of the mucosa and malignancies arising from here, but its ability in diagnosing extramural malignancies arising beyond or external to the mucosa of the bowel as in the case of metastatic extramural disease can be limited. Case Summary. We present the case of a 60-year-old female with an impacted chicken bone in the sigmoid colon with formation of a sigmoid mass, on a background of metastatic lung cancer. On initial diagnosis of her lung cancer, there was mild Positron Emission Tomography (PET) avidity in the sigmoid colon which had been evaluated earlier in the year with a colonoscopy with findings of diverticular disease. Subsequent computed tomography (CT) scans demonstrated thickening of the sigmoid colon with a structure consistent with a foreign body distal to this colonic thickening. A repeat PET scan revealed an intensely fluorodeoxyglucose (FDG) avid mass in the sigmoid colon which was thought to be inflammatory. She was admitted for a flexible sigmoidoscopy and removal of the foreign body which was an impacted chicken bone. She had a fall and suffered a fractured hip. During her admission for her hip fracture, she had an exacerbation of her abdominal pain. She developed a large bowel obstruction, requiring laparotomy and Hartmann’s procedure to resect the sigmoid mass. Histopathology confirmed metastatic lung cancer to the sigmoid colon. Conclusion. This unusual presentation highlights the challenges of diagnosing ingested foreign bodies in patients with metastatic disease. --- ## Body ## 1. 
Introduction Around 20% of ingested foreign bodies fail to pass through the gastrointestinal tract [1]. These can result in complications such as abscess formation, bowel obstruction, fistulae, haemorrhage, and perforation [2]. These complications can present in a variety of different clinical scenarios. The purpose of this case report was to highlight a scenario in which an ingested foreign body may present, and to outline the challenges of reaching the diagnosis, along with outlining the possible limitations of endoscopic investigations in diagnosing a colonic malignancy.Our patient had an impacted chicken bone in the sigmoid colon in the setting of metastatic non-small-cell lung cancer. This was investigated radiologically and found to be an intensely FDG-PET avid mass, initially presumed to be either an inflammatory mass related to the chicken bone impaction or metastatic disease related to her lung cancer. This mass appeared to resolve upon removal of the chicken bone; however, she represented later with a subacute large bowel obstruction related to the sigmoid mass which was found to be metastatic lung cancer at surgery. Consequently, our case highlights the difficulties of establishing a diagnosis in this complex case.In this case report, we present a literature review of colonic chicken bones and investigate similar patterns across the various presentations reported. PubMed and Google Scholar were both utilised to identify the search terms “chicken bone” AND “bowel” OR “large bowel” OR “colon”. The results were systematically reviewed to include only case reports of chicken bones in the large bowel, while the details of each case were analysed for the purposes of the literature review. ## 2. Case Presentation We present the case of a 60-year-old lady who initially presented with a pseudomonas empyema and a right hilar mass. Initial diagnostic bronchoscopy revealed no endobronchial lesion. 
She was treated under the respiratory and infectious diseases’ teams with decortication and antibiotics which resulted in marked clinical improvement. Follow-up imaging showed a persistent right hilar mass, necessitating a repeat diagnostic bronchoscopy and biopsy. This revealed a non-small-cell lung cancer (NSCLC) adenocarcinoma which was EGFR and ALK negative.Baseline staging imaging revealed that she had metastatic disease with a right lung primary lesion, mediastinal nodes, and adrenal, frontal skull bone, and left pelvic bone metastases (T4N2M1c). She underwent an FDG-PET scan as part of her staging investigations in June 2017, revealing an area of intense heterogenous FDG-PET avidity in the sigmoid colon. This was suspicious for a metastatic deposit or a complication secondary to diverticular disease (Figure1). However, a colonoscopy done 6 months prior had been normal. A CT scan was performed which demonstrated a focal area of thickening of the sigmoid colon (Figure 2); however, given the recent colonoscopy findings, the possibility of malignancy was deemed less likely in this situation.Figure 1 FDG-PET scan with an extensive right upper lobar and mediastinal mass in keeping with primary non-small-cell lung cancer (arrow). Intense heterogenous uptake in the sigmoid colon (white arrow), which could represent a synchronous malignancy or complication secondary to diverticular disease.Figure 2 Axial CT highlighting a focal area of thickening in the wall of the sigmoid colon with surrounding diverticula.The patient had minimal comorbidities and palliative systemic treatment, including radiation, was organised. She proceeded to carboplatin plus gemcitabine chemotherapy and completed 4 cycles in September 2017. She received palliative radiation to the right frontal bone and left pelvis metastatic deposits. 
She was then commenced on maintenance pemetrexed chemotherapy in October 2017.In March 2018, she had a repeat colonoscopy, which revealed two polyps and evidence of diverticulosis in the sigmoid and descending colon. The polyps were removed, and histopathology revealed no evidence of malignancy.In April 2018, she developed asymptomatic low-volume brain metastases in the left temporal, left occipital, and right posterior frontal lobes ranging from 3 mm to 16 mm in diameter. She underwent gamma knife treatment to these lesions and proceeded to Nivolumab immunotherapy in April 2018.After 2 cycles of Nivolumab, our patient developed mild lower abdominal pain, which she complained of during her outpatient oncology visits. This had been diagnosed as diverticulitis by her general practitioner, who commenced antibiotic treatment.A CT scan demonstrated circumferential thickening of the bowel wall in the sigmoid colon and a suspicious-looking intraluminal tubular structure distal to this, suspicious for a foreign body (Figures3 and 4). The patient could not remember ingesting anything unusual or ingesting a bone. She, also, did not have any further colonic instrumentation after her colonoscopy. There was some thought that this may have been a clip from her colonoscopy, although the appearance of the foreign body was not consistent with this. Nivolumab was ceased and antibiotics were continued.Figure 3 Sagittal CT highlighting a hyperdense tubular foreign object in the sigmoid colon (arrow), in addition to displaying a mass-like thickening of the sigmoid colon.Figure 4 Axial CT scan. Arrow points out a cross-sectional image of the foreign body in question. Meanwhile, the area outlined represents the mass-like thickening of the sigmoid colon proximal to the foreign body.The patient continued to eat normally during this time and reported no changes in her bowel habits. She had no fevers and the only abnormality on her blood results was a raised C-reactive protein. 
The clinical decision was to follow this closely with serial imaging. Progress imaging 2 weeks later confirmed persistence of this foreign body. Consequently, our patient was admitted due to ongoing lower abdominal and suprapubic pain and for intravenous antibiotics. A repeat FDG-PET-CT scan was conducted, revealing an intensely FDG avid mass in the midsigmoid colon (Figure5). The increase in size of the mass was concerning for a primary neoplasm or an extramural metastatic deposit from our patient’s advanced lung cancer, given she had a colonoscopy which revealed no mucosal neoplasm.Figure 5 FDG-PET scan of our patient, following two weeks of serial radiological imaging, to monitor the foreign body. An intensely FDG-PET avid mass in the sigmoid colon was highlighted on imaging (arrow).Despite these findings, it was still possible that this was secondary to an inflammatory rather than a neoplastic process. The patient was scheduled for a flexible sigmoidoscopy to evaluate the intracolonic foreign body. This revealed a chicken bone impacted in the sigmoid colon (Figure6). The extent of the inflammation was such that the scope could not be passed 10 cm beyond the chicken bone. Nevertheless, the bone was easily removed with a snare. Imaging was conducted after 3 days to ensure there was no perforation or complication, as a result of procedure, given our patient’s concomitant chemotherapy, following which she was discharged.Figure 6 Image of chicken bone retrieved with flexible sigmoidoscopy. The bone measured 6 cm in length with no apparent sharp edges.The patient unfortunately represented the day after discharge with a hip fracture following a mechanical fall. She underwent a hip replacement and during her postoperative recovery developed more abdominal pain. A further CT scan raised concern that this mass had become an intramural abscess with images displaying some gas locules within it (Figure7). She was managed with further intravenous antibiotics for 2 weeks. 
Progress imaging had revealed little change in the mass, and the antibiotics were ceased. Figure 7 Coronal CT of the abdomen and pelvis highlighting an intramural sigmoid mass (a). Furthermore, the appearance of abscess transformation was noted, with gas locules evident (b). (a) (b) She was discharged and remained well for the first week following her discharge. The following week, she developed worsening pain, fevers, and a subacute large bowel obstruction. She underwent an emergency laparotomy, at which time she was found to have a large, fungating, hard mass that was densely adherent to the bladder. She underwent a resection of this sigmoid mass along with a contiguous segment of the bladder (Figure 8). The segment of the bladder was repaired, and an end colostomy was fashioned. Histopathology confirmed that this mass was a large deposit of metastatic lung cancer (Figure 9). Figure 8 Image of resected sigmoid mass, following laparotomy. Histopathology confirmed the mass to be metastatic lung cancer. Figure 9 Photomicrograph of the patient’s resected sigmoid mass. (a) H&E staining displaying atypical tumour cells and areas of necrosis under 100x magnification. (b) Specimen under 100x magnification with CK7 staining outlining a diffuse distribution of tumour cells. (a) (b) Unfortunately, during the course of her recovery, our patient had another fall and broke her other hip. She has since had this hip replaced, has recovered from her surgery, and is managing her stoma. She underwent further rehabilitation and was discharged home. She remains on systemic treatment for metastatic lung cancer. ## 3. Discussion Our case represents a rare and unusual presentation of an impacted chicken bone in the setting of a sigmoid mass. Thirty-six reports of complications as a result of chicken bones in the large bowel were identified in the English literature (Table 1). The sigmoid colon was implicated in 22 of these 36 case reports. 
This is not surprising, as the rectosigmoid junction represents one of the narrowest regions in the gastrointestinal tract and hence the more likely area where complications from ingested foreign bodies may present [3].

Table 1: Case reports of ingested chicken bones in the large bowel derived from the English literature [3–37].

| Author and country | Patient | Presentation | Investigations | Diagnosis | Management |
| --- | --- | --- | --- | --- | --- |
| Glasson et al. [3]; Wagga Wagga, Australia | 70-year-old, male | (i) Abdominal pain (ii) Weight loss (iii) Altered bowel habits | (i) Full blood count (ii) CT abdomen (iii) Abdominal X-ray (iv) Laparotomy | Perforated sigmoid diverticulum with fibrous adhesions to the ileocaecal junction | Subtotal colectomy with ileorectal anastomosis |
| Werner and Gallegos-Orozco [4]; Arizona, USA | 65-year-old, female | (i) Fatigue (ii) Nausea (iii) Pyrexia | (i) Abdominal examination (ii) Full blood count (iii) CT | Sigmoid perforation with hepatic abscesses | Colonoscopy and antibiotics |
| McGregor et al. [5]; Kansas, USA | 86-year-old, male | (i) Left-sided abdominal pain (ii) Vomiting (iii) Anorexia | (i) Abdominal X-ray (ii) Colonoscopy | (i) Sigmoid perforation with peritonitis and adhesions (ii) Underlying adenocarcinoma | Sigmoid resection with end colostomy and Hartmann’s pouch |
| Girelli and Colombo [6]; Arsizio, Italy | 70-year-old, male | (i) Severe rectal bleeding | (i) Endoscopy (ii) Colonoscopy | Bone impacted in hepatic flexure | Removal with polypectomy snare |
| Coyte et al. [7]; Glasgow, UK | 76-year-old, male | (i) Abdominal pain (ii) Vomiting (iii) Pyrexia | (i) Erect chest X-ray (ii) CT | Small and large bowel perforation | Resection of midjejunum and sigmoid |
| Terrace et al. [8]; Edinburgh, UK | 85-year-old, male | (i) Left lower quadrant pain (ii) Diarrhoea | (i) Erect chest X-ray (ii) CT abdomen and pelvis | Sigmoid perforation with distal adenocarcinoma | Anterior resection with colorectal anastomosis |
| Mesina et al. [9]; Craiova, Romania | 52-year-old, female | (i) Left perianal pain with swelling (ii) Pyrexia | (i) Physical examination | Ischiorectal abscess | Tear-drop incision |
| Cardoso et al. [10]; Setubal, Portugal | 80-year-old, male | (i) Vomiting (ii) Diarrhoea (iii) Pyrexia | (i) Full blood count (ii) Abdominal ultrasound (iii) CT | Hepatic abscess, bone located in the ascending colon | Colonoscopy |
| Park et al. [11]; Seoul, South Korea | 68-year-old, female | (i) Anal pain and bleeding (ii) Constipation | (i) Digital rectal examination (ii) Abdominal X-ray (iii) CT abdomen and pelvis | Stercoral ulcer of the rectum | Flexible sigmoidoscopy and sucralfate enema postoperatively |
| Akhtar et al. [12]; Belfast, UK | 46-year-old, male | (i) Abdominal pain (ii) Vomiting | (i) Full blood count (ii) Erect chest X-ray | Sigmoid perforation | Laparotomy with repair of perforation |
| Vardaki et al. [13]; Athens, Greece | 69-year-old, male | (i) Abdominal pain | (i) Full blood count (ii) CT abdomen | Sigmoid perforation with underlying carcinoma | Open surgery |
| Rasheed et al. [14]; Massachusetts, USA | 59-year-old, male | (i) Left lower quadrant pain | (i) CT abdomen | Sigmoid perforation | Surgical management |
| Kornprat et al. [15]; Graz, Austria | 82-year-old, female | (i) Sepsis (ii) Severe abdominal pain | (i) Full blood count (ii) CT abdomen and pelvis | Perforated sigmoid diverticulum; phlegmonous inflammation of the abdominal wall | Emergency Hartmann’s procedure with necrectomy of the abdominal wall |
| Clements et al. [16]; Virginia, USA | 66-year-old, male | (i) Sepsis (ii) Anuria | (i) Urine/blood cultures (ii) Renal ultrasound (iii) CT KUB (iv) Colonoscopy | Colovesical fistula with submucosal/intramural haemorrhage in the sigmoid colon | Low anterior resection with primary anastomosis and bladder repair |
| Tay et al. [17]; Singapore | 73-year-old, male | (i) Irreducible left inguinal hernia (ii) Abdominal pain | (i) CT abdomen (ii) Exploratory laparotomy | Perforated sigmoid colon | Sigmoid colectomy |
| Joglekar et al. [18]; Great Yarmouth, UK | 47-year-old, male | (i) Abdominal pain (ii) Diarrhoea | (i) Full blood count (ii) Urine dipstick (iii) Laparotomy | Perforated sigmoid colon | Repair of perforation |
| Bleich [19]; Connecticut, USA | 54-year-old, female | (i) Left lower quadrant pain (ii) Pyrexia | (i) Full blood count (ii) CT abdomen (IV and oral contrast) | Impacted chicken bone in sigmoid diverticulum | Flexible sigmoidoscopy and oral antibiotics |
| Brucculeri et al. [20]; Monserrato, Italy | 75-year-old, female | Lower abdominal pain | (i) CT abdomen (with and without contrast) (ii) Flexible sigmoidoscopy | Impacted chicken bone across the diameter of the colon wall | Laser source contact and removal of divided bone with forceps |
| Domínguez-Jiménez and Jaén-Reyes [21]; Andujar, Spain | 79-year-old, female | Asymptomatic, presenting for programmed colonoscopy | (i) Colonoscopy (ii) Subsequent CT abdomen | Perforated sigmoid diverticulum with thickening of the right pelvic fascia | Conservative management; patient expelled bone in faeces after 2 months |
| Khan et al. [22]; Craigavon, Northern Ireland | 56-year-old, male | (i) Painful haematuria (ii) Polyuria (iii) Pneumaturia | (i) Urine culture (ii) IV urogram (iii) CT scan (iv) Cystoscopy | Colovesical fistula, secondary to perforated colon wall | Surgical exploration with resection of perforated bowel |
| Lubel and Wiley [23]; Woodville South, Australia | 54-year-old, female | (i) Persistent lower abdominal pain (ii) Rectal mucous | (i) Colonoscopy | Chicken bone impacted in inflamed diverticula | Laparotomy with sigmoid resection |
| Mapelli et al. [24]; Louisiana, USA | 72-year-old, female | (i) Left lower quadrant pain (ii) Anorexia | (i) Abdominal ultrasound (ii) Colonoscopy | Perforation of the sigmoid colon | Resection of the sigmoid colon |
| Milivojevic et al. [25]; Belgrade, Serbia | 75-year-old, female | (i) Nausea (ii) Left lower quadrant pain (iii) Fever | (i) Abdominal X-ray (ii) Colonoscopy | Impacted chicken bone in the sigmoid colon | Colonoscopy |
| Owen et al. [26]; London, UK | 65-year-old, male | (i) Severe lower abdominal pain (ii) Dehydration (iii) Pyrexia | (i) Erect chest X-ray (ii) CT abdomen (iii) Laparoscopy | Sigmoid perforation | Colonoscopy with insertion of abdominal drain |
| Rabb et al. [27]; South Yorkshire, UK | 69-year-old, male | (i) Asymptomatic (ii) Bowel cancer screening (positive for faecal occult blood) | (i) Colonoscopy | Impaction of chicken bone in bowel diverticulum | Laparoscopic sigmoid colectomy |
| Rex and Bilotta [28]; Indiana, USA | 73-year-old, male | (i) Lower abdominal pain | (i) Colonoscopy | Impacted chicken bone across two diverticula | Colonoscopy |
| Rex and Bilotta [28]; Indiana, USA | 81-year-old, female | (i) Lower abdominal pain (ii) Positive faecal occult blood | (i) Barium enema (ii) Colonoscopy | Impacted chicken bone in sigmoid diverticula | Colonoscopy |
| Tarnasky et al. [29]; North Carolina, USA | 80-year-old, female | (i) Chronic diarrhoea (ii) Positive faecal occult blood | (i) Abdominal examination (ii) Colonoscopy | Perforated sigmoid colon, due to impacted chicken bone | Colonoscopy |
| Chen et al. [30]; Sydney, Australia | 84-year-old, female | (i) Lower abdominal pain | (i) Colonoscopy (ii) CT abdomen | Impacted chicken bone across the diameter of the colon lumen | Nd:YAG laser/colonoscopy |
| Ross et al. [31]; Glasgow, UK | 87-year-old, female | (i) Severe bleeding per anus | (i) CT abdomen with angiogram | Impacted chicken bone in sigmoid diverticulum with arterial bleeding | Flexible sigmoidoscopy |
| Elmoghrabi et al. [32]; Michigan, USA | 70-year-old, female | (i) Lower abdominal, pelvic, and rectal pain | (i) Abdominal X-ray (ii) CT abdomen/pelvis | Large rectal stricture secondary to impacted chicken bone | Resection of the rectum and distal sigmoid colon |
| Davies [33]; Cardiff, UK | 31-year-old, male | (i) Severe rectal pain | (i) Abdominal X-ray | Perforated rectum immediately proximal to anal margin | Digital removal followed by proctosigmoidoscopy |
| Jeen et al. [34]; Seoul, South Korea | 73-year-old, female | (i) Abdominal cramping (ii) Diarrhoea | (i) Colonoscopy | Chicken bone impaction in the sigmoid colon across the lumen diameter | Balloon dilatation and extraction |
| Osler et al. [35]; New York, USA | 78-year-old, female | (i) Abdominal pain (ii) Nausea (iii) Vomiting | (i) CT abdomen (ii) Exploratory laparotomy | Sigmoid perforation distal to colonic carcinoma | Hartmann’s resection with end sigmoid colostomy |
| Moreira et al. [36]; Pennsylvania, USA | 31-year-old, male | (i) Pain around the anal canal and scrotum (ii) Pyrexia | (i) Open surgery | Necrotising fasciitis with perianal and scrotal abscesses | Debridement and antibiotic therapy |
| Muñoz et al. [37]; Baracaldo, Spain | 67-year-old, male | (i) Left lower quadrant abdominal pain (ii) Tenesmus | (i) Abdominal X-ray (ii) Barium enema | Impacted chicken bone in the sigmoid colon | Colonoscopy |

In our review of the 36 reported cases of complications from chicken bone ingestion, nonspecific abdominal pain was a common presenting complaint throughout, in similar fashion to how our patient presented. 
Radiology formed a cornerstone of the workup of patients with ingested foreign bodies, with CT and X-ray of the abdomen standing out as the most commonly organised investigations. Ultimately, endoscopy served as the most common means of gaining a definitive diagnosis while concomitantly managing the condition. Surgery, however, was necessary in cases where the chicken bone had led to serious complications. In terms of these complications from ingested chicken bones, bowel perforation was noted to occur in 19 of the 36 case reports analysed. A history of gastrointestinal disease, such as diverticulosis and colonic malignancy, predisposes individuals to complications of ingested foreign bodies, especially perforation [1]. In our patient, the foreign body persisted in its location in the sigmoid colon just beyond the mass and did not cause a perforation despite her known diverticulosis. A review of the literature revealed that patients with a history of alcoholism, dentures, or sensory neuropathy are most at risk of swallowing a foreign body [2]. Our patient did have dentures, which may very well have predisposed her to accidentally ingesting a chicken bone, which at the time she could not recall. Our patient was on an immune checkpoint inhibitor for her lung cancer. Immune checkpoint inhibitors have known immune-related gastrointestinal toxicities such as diarrhoea and colitis. Rare cases of bowel perforation requiring colostomy have been reported in the literature [38]. The development of an intramural sigmoid abscess in our patient may have been attributable solely to the chicken bone impaction and subsequent removal: it is possible that the impacted chicken bone affected or breached the luminal integrity of the bowel focally, leading to abscess formation on its removal. 
Equally, hypothetically, tumour response in the metastatic lung cancer deposit in the sigmoid colon could have contributed to some degree to a breach in the colonic integrity and the formation of the intramural abscess. In our patient, both factors may have played a role in the abscess formation and ensuing colonic obstruction that necessitated surgery. To our knowledge, this is the first case of chicken bone impaction in the setting of metastatic lung cancer to the sigmoid colon. It also illustrates the difficulty, even with modern imaging and FDG-PET, of differentiating inflammatory from neoplastic processes in the bowel. This case highlights that while colonoscopy is useful in visualising the bowel from within and is crucial in diagnosing malignancies arising from the bowel mucosa, its ability to diagnose extramural malignancies arising beyond the bowel mucosa, as in metastatic extramural disease, can be limited. ## 4. Conclusion Foreign bodies mostly present with nonspecific abdominal pain, and while the majority are managed surgically, they can sometimes be retrieved endoscopically. In the large bowel, the sigmoid colon is the most common site of complications arising from ingested chicken bones. The literature review identified that perforation of the bowel tends to occur in the setting of diverticular disease and malignancy. Our case reflects the diagnostic complexity in a patient with an ingested foreign body in the setting of metastatic disease, despite modern radiological investigative modalities and endoscopy. This report highlights the value of keeping ingested foreign bodies in mind when formulating differential diagnoses for nonspecific abdominal pain. At the same time, it identifies a key area in oncological practice, where rigorous follow-up is essential for screening for metastasis of primary malignancies to distant organ sites. --- *Source: 1016534-2019-06-26.xml*
2019
# A Novel Adjuvant “Sublancin” Enhances Immune Response in Specific Pathogen-Free Broiler Chickens Inoculated with Newcastle Disease Vaccine **Authors:** Yangke Liu; Jiang Zhang; Shuai Wang; Yong Guo; Tao He; Rui Zhou **Journal:** Journal of Immunology Research (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1016567 --- ## Abstract Sublancin is a glycosylated antimicrobial peptide produced by Bacillus subtilis 168 possessing antibacterial and immunomodulatory activities. This study aimed to investigate the effects of sublancin on immune functions and serum antibody titer in specific pathogen-free (SPF) broiler chickens vaccinated with Newcastle disease (ND) vaccine. For this purpose, 3 experiments were performed. Experiment 1: SPF broiler chicks (14 days old) were randomly allotted to 1 of 7 groups, comprising a blank control (BC), a vaccine control (VC), and 5 groups (groups 3-7) that were vaccinated and supplemented with sublancin at 5, 15, 30, 45, and 60 mg activity/L of water, respectively. The vaccinated groups (2-7) received ND vaccine by the intranasal and intraocular routes on day 14. On 7, 14, 21, and 28 days post vaccination (dpv), blood samples were collected for the determination of serum hemagglutination inhibition (HI) antibody titer. Experiment 2: SPF broiler chicks were divided into 1 of 3 groups, i.e., blank control (BC), vaccine control (VC), and sublancin treatment (ST). On 7, 14, and 21 dpv, blood samples were collected for measuring HI antibody titer by micromethod. Experiment 3: the design of this experiment was the same as that of experiment 2. On 7 and 21 dpv, pinocytosis of peritoneal macrophages, B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation were carried out. 
It was noted that sublancin promoted B lymphocyte proliferation, increased the proportion of CD4+ T lymphocyte subpopulations, and enhanced the antibody titer in broiler chickens. In addition, it was also observed that sublancin has the potential to induce the secretion of IFN-γ, IL-10, and IL-4. In conclusion, these findings suggested that sublancin could promote both humoral and cellular immune responses and has the potential to be a promising vaccine adjuvant. --- ## Body ## 1. Introduction Infectious diseases, especially viral diseases, remain one of the most critical challenges in the poultry industry, partly due to the genetic variation of viruses or the inferior quality of vaccines. It is widely recognized that the application of vaccines coupled with an immunopotentiator could improve the efficacy of vaccination [1]. However, commonly used adjuvants, e.g., aluminum and oil emulsion, are reported to produce side effects such as carcinogenesis and strong local stimulation, or to fail to enhance the immunogenicity of weak antigens [2]. Hence, the development of a new type of adjuvant with low toxicity and high efficiency could be of great significance and of immediate practical value in safeguarding health in the poultry industry. Antimicrobial peptides (AMPs) are a diverse group of naturally occurring molecules that provide immediate and nonspecific defense against invading pathogens [3]. A number of studies have pointed out that AMPs participate in the modulation of the immune response [4, 5]. These immunopotentiating properties of AMPs make them suitable candidates for adjuvant design. Sublancin is a 37-amino acid AMP isolated from Bacillus subtilis 168 with high stability [6]. In our previous studies, we noted that sublancin mitigated Clostridium perfringens-induced necrotic enteritis in broilers, mainly by dampening the inflammatory response [7]. 
Importantly, we also found that sublancin possesses the ability to activate macrophages, thereby protecting mice from cyclophosphamide-induced immunosuppression [8]. In addition, intragastric administration of sublancin induced a mixed Th1/Th2 immune response in ovalbumin-immunized mice [9]. These reports suggest that sublancin could be a promising immunomodulator. However, the immunomodulatory effects of sublancin on SPF broiler chickens remain poorly understood, and whether sublancin can improve the immune response to ND vaccine in SPF broilers is not yet known. Although AMPs can improve cellular and humoral immunity in animals [4], whether sublancin exhibits similar effects in SPF chickens remains to be investigated. Therefore, the present study evaluated the effects of sublancin on the humoral and cellular immune responses to ND vaccine in SPF broilers. ## 2. Material and Methods All experiments involving animals were approved by the China Agricultural University Institutional Animal Care and Use Committee (ID: SKLAB-B-2010-003). ### 2.1. Preparation of Sublancin Sublancin was produced in our laboratory using a highly efficient expression system involving Bacillus subtilis 800 as described previously [10]. The amino acid sequence of sublancin was determined as GLGKAQCAALWLQCASGGTIGCGGGAVACQNYRQFCR, and the peptide purity was >99.6% as determined by high-performance liquid chromatography. Sublancin was produced as lyophilized powder and stored at –20°C until further use. ### 2.2. Animals Fourteen-day-old SPF broiler chicks were obtained from the Quality Control Department of Beijing Merial Vital Laboratory Animal Technology Co., Ltd. (Beijing, China) and were housed under standard conditions of temperature (22-26°C), relative humidity (40-65%), and light intensity (150-300 lux). 
The broilers were fed Co60-irradiated sterile complete feed (Beijing Keao Feed Co., Ltd, Beijing, China), while clean and fresh water was available ad libitum. ### 2.3. Experimental Design #### 2.3.1. Experiment 1 Ninety-one 14-day-old SPF broiler chicks were randomly allotted to 1 of 7 groups with 13 chicks per treatment. The treatments included a blank control (BC), a vaccine control (VC), and 5 sublancin treatments in which sublancin was supplemented at 5, 15, 30, 45, and 60 mg activity/L of water, respectively. Briefly, soluble sublancin powder was mixed into a 1-L drinking barrel in each group at the corresponding rate. Fresh sublancin was administered daily throughout the experiment. When the barrel containing sublancin was emptied, purified water without treatment was added to the barrel for the remainder of the day. The broilers in the BC and VC treatments had access to purified water without sublancin all day. All the broilers except the BC group were vaccinated with LaSota ND vaccine by the intranasal and intraocular routes on day 14. On 7, 14, 21, and 28 dpv, blood samples were collected from the brachial vein for the determination of serum HI antibody titer by micromethod. #### 2.3.2. Experiment 2 Thirty 14-day-old SPF broiler chicks were divided into 3 groups with 10 chicks in each group. The experimental treatments were similar to Exp. 1, except only one sublancin treatment was used in this experiment. In the ST group, birds were provided purified water mixed with sublancin at 30 mg activity/L of water and vaccinated with ND vaccine as in experiment 1. On 7, 14, and 21 dpv, blood samples from the brachial vein were collected for the determination of HI antibody titer by micromethod. #### 2.3.3. Experiment 3 Thirty-six 14-day-old SPF broiler chicks were randomly allocated to 3 groups with 12 chicks in each group. 
The design of this experiment was the same as that of experiment 2. On 7 and 21 dpv, 6 chickens per group were selected randomly for the determination of pinocytosis of peritoneal macrophages, B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation. ### 2.4. Serum HI Antibody Assay Blood samples (0.5 mL per chick) were collected from the brachial vein, put into 2 mL Eppendorf tubes, and allowed to clot at 37°C for 2 h. Serum was separated by centrifugation at 3000 rpm for 15 min for the determination of HI antibody. The serum HI antibody assay was performed as previously described [11]. The geometric mean titer was presented as the reciprocal log2 value of the highest dilution that displayed HI. ### 2.5. Determination of Pinocytosis of Peritoneal Macrophages Peritoneal cells were harvested by peritoneal lavage with 20 mL RPMI-1640 (Gibco) medium. The cell-rich lavage fluid was aspirated and centrifuged at 1500 rpm for 15 min. The pellet was resuspended at 1 × 10⁶ cells/mL in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) and 100 units/mL penicillin/streptomycin (Life Technologies) and seeded in 96-well plates at 100 μL/well. Cells were purified by adherence to the culture plates for 3 h. Thereafter, the culture medium was discarded and 100 μL/well of 0.075% neutral red was added and incubated for 1 h. After washing with PBS 3 times, 200 μL of lysis solution (alcohol : acetic acid, 1 : 1 v/v) was added into each well and maintained at 37°C for 10 min. The absorbance was measured at 570 nm by a microplate reader (IMARK type, Bio-Rad, USA). ### 2.6. Proliferation Assay of B Lymphocytes Blood samples from the heart were collected and then carefully layered on the surface of the lymphocyte separation medium. After centrifugation at 1500 rpm for 15 min, a white cloud-like lymphocyte band was collected and washed twice with RPMI-1640 medium. 
The cell pellet was resuspended at1×106 cells/mL with RPMI-1640 medium and seeded in 96-well plates at 80 μL per well, then another 20 μL LPS (10 μg/mL) was added. The plates were incubated at 37°C in a humidified atmosphere with 5% CO2. After 44 h, 20 μL of MTT (5 μg/mL) was added into each well. The plates were reincubated for 4 h and then centrifuged at 1500 rpm for 10 min. The supernatant was removed carefully, and 100 μL of DMSO was added into each well. The absorbance at 450 nm was measured by a microplate auto reader as the index of B lymphocyte proliferation. ### 2.7. Measurement of CD4+ and CD8+ T Cells Cellular populations in the peripheral blood from the broilers were analyzed using flow cytometry. The lymphocytes were stained with CD3-PE, CD4-FITC, and CD8-SPRD at 4°C for 30 min and then analyzed by flow cytometry (Gallios, Beckman Coulter, Brea, CA, USA). The antibodies were purchased from Southern Biotech. ### 2.8. Serum Cytokine Quantitation Blood samples from the brachial vein were allowed to clot at 37°C for 2 h and subsequently centrifuged at 3000 rpm for 15 min to separate the serum. The concentrations of INF-γ, IL-2, IL-4, and IL-10 in serum were measured using commercially available chicken Enzyme-Linked Immunosorbent Assay (ELISA) kits (Cusabio Biotech Company, Wuhan, China). ### 2.9. Statistical Analysis All the data were analyzed by ANOVA using SPSS Version 20.0 (SPSS Inc., Chicago, IL). Statistical differences among treatments were determined using Duncan’s Multiple Range Test. Results are presented asmeans±SD. P value < 0.05 was considered significant. ## 2.1. Preparation of Sublancin Sublancin was produced in our laboratory using a highly efficient expression system involvingBacillus subtilis 800 as described previously [10]. The amino acid sequence of sublancin was determined as GLGKAQCAALWLQCASGGTIGCGGGAVACQNYRQFCR, and the peptide purity was >99.6% as determined by high-performance liquid chromatography. 
Sublancin was produced as lyophilized powder and stored at –20°C until further use. ## 2.2. Animals Fourteen-day-old SPF broiler chicks were obtained from the Quality Control Department of Beijing Merial Vital Laboratory Animal Technology Co., Ltd. (Beijing, China) and were housed under standard conditions of temperature (22-26°C), relative humidity (40-65%), and light intensity (150-300 lux). The broilers were fed with Co60-irradiated sterile nutritious feed in Complete Feed (Beijing Keao Feed Co., Ltd, Beijing, China) while clean and fresh water was made available ad libitum. ## 2.3. Experimental Design ### 2.3.1. Experiment 1 Ninety-one, 14-day-old SPF broiler chicks were randomly allotted to 1 of 7 groups with 13 chicks in each treatment. The treatments included a blank control (BC), vaccine control (VC), and 5 sublancin treatments in which sublancin was supplemented at 5, 15, 30, 45, and 60 mg activity/L of water, respectively. Briefly, soluble sublancin powder was mixed in 1-L drinking barrel located in each group at the rate of 5, 15, 30, 45, and 60 mg activity/L of water. Fresh sublancin was administered daily throughout the experiment. When the barrel containing sublancin was emptied, purified water without treatment was added to the barrel for the remainder of the day. The broilers in the BC and VC treatments had access to purified water without sublancin treatment all day. All the broilers except the BC group were vaccinated with LaSota ND vaccine by intranasal and intraocular routes at the 14th day. On 7, 14, 21, and 28 dpv, the blood samples were collected from the brachial vein for the determination of serum HI antibody titer by micromethod. ### 2.3.2. Experiment 2 Thirty, 14-day-old SPF broiler chicks were divided into 1 of 3 groups with 10 chicks in each group. The experimental treatments were similar to Exp. 1 except only one sublancin treatment was used in this experiment. 
In the ST group, birds were provided purified water mixed with sublancin at 30 mg activity/L of water and vaccinated with ND vaccine as in experiment 1. On 7, 14, and 21 dpv, the blood samples from the brachial vein were collected for the determination of HI antibody titer by micromethod. ### 2.3.3. Experiment 3 Thirty-six, 14-day-old SPF broiler chicks were randomly allocated to 1 of 3 groups with 12 chicks in each group. The design of this experiment was the same as that of experiment 2. On 7 and 21 dpv, 6 chickens per group were selected randomly for the determination of pinocytosis of peritoneal macrophages, B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation. ## 2.3.1. Experiment 1 Ninety-one, 14-day-old SPF broiler chicks were randomly allotted to 1 of 7 groups with 13 chicks in each treatment. The treatments included a blank control (BC), vaccine control (VC), and 5 sublancin treatments in which sublancin was supplemented at 5, 15, 30, 45, and 60 mg activity/L of water, respectively. Briefly, soluble sublancin powder was mixed in 1-L drinking barrel located in each group at the rate of 5, 15, 30, 45, and 60 mg activity/L of water. Fresh sublancin was administered daily throughout the experiment. When the barrel containing sublancin was emptied, purified water without treatment was added to the barrel for the remainder of the day. The broilers in the BC and VC treatments had access to purified water without sublancin treatment all day. All the broilers except the BC group were vaccinated with LaSota ND vaccine by intranasal and intraocular routes at the 14th day. On 7, 14, 21, and 28 dpv, the blood samples were collected from the brachial vein for the determination of serum HI antibody titer by micromethod. ## 2.3.2. Experiment 2 Thirty, 14-day-old SPF broiler chicks were divided into 1 of 3 groups with 10 chicks in each group. The experimental treatments were similar to Exp. 
1 except only one sublancin treatment was used in this experiment. In the ST group, birds were provided purified water mixed with sublancin at 30 mg activity/L of water and vaccinated with ND vaccine as in experiment 1. On 7, 14, and 21 dpv, the blood samples from the brachial vein were collected for the determination of HI antibody titer by micromethod. ## 2.3.3. Experiment 3 Thirty-six, 14-day-old SPF broiler chicks were randomly allocated to 1 of 3 groups with 12 chicks in each group. The design of this experiment was the same as that of experiment 2. On 7 and 21 dpv, 6 chickens per group were selected randomly for the determination of pinocytosis of peritoneal macrophages, B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation. ## 2.4. Serum HI Antibody Assay Blood samples (0.5 mL per chick) were collected from the brachial vein, put into 2 mL Eppendorf tubes, and allowed to clot at 37°C for 2 h. Serum was separated by centrifugation at 3000 rpm for 15 min for the determination of HI antibody. Serum HI antibody assay was performed as previously described [11]. The geometric mean titer was presented as reciprocal log2 values of the highest dilution that displayed HI. ## 2.5. Determination of Pinocytosis of Peritoneal Macrophages Peritoneal cells were harvested by peritoneal lavage with 20 mL RPMI-1640 (Gibco) medium. The cell-rich lavage fluid was aspirated and centrifuged at 1500 rpm for 15 min. The pellet was resuspended at1×106 cells/mL in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) and 100 units/mL penicillin/streptomycin (Life Technologies) and seeded in 96-well plates at 100 μL/well. Cells were purified by adherence to culture plates for 3 h. Thereafter, the culture medium was discarded and 100 mL/well of 0.075% neutral red was added and incubated for 1 h. 
After washing with PBS for 3 times, 200 μL of lysis solution (alcohol : acetic acid, 1 : 1 v/v) was added into each well and maintained at 37°C for 10 min. The absorbance was measured at 570 nm by a microplate reader (IMARK type, Bio-Rad, USA). ## 2.6. Proliferation Assay of B Lymphocyte Blood samples from the heart were collected and then carefully layered on the surface of the lymphocyte separation medium. After centrifugation at 1500 rpm for 15 min, a white cloud-like lymphocyte band was collected and washed twice with RPMI-1640 medium. The cell pellet was resuspended at1×106 cells/mL with RPMI-1640 medium and seeded in 96-well plates at 80 μL per well, then another 20 μL LPS (10 μg/mL) was added. The plates were incubated at 37°C in a humidified atmosphere with 5% CO2. After 44 h, 20 μL of MTT (5 μg/mL) was added into each well. The plates were reincubated for 4 h and then centrifuged at 1500 rpm for 10 min. The supernatant was removed carefully, and 100 μL of DMSO was added into each well. The absorbance at 450 nm was measured by a microplate auto reader as the index of B lymphocyte proliferation. ## 2.7. Measurement of CD4+ and CD8+ T Cells Cellular populations in the peripheral blood from the broilers were analyzed using flow cytometry. The lymphocytes were stained with CD3-PE, CD4-FITC, and CD8-SPRD at 4°C for 30 min and then analyzed by flow cytometry (Gallios, Beckman Coulter, Brea, CA, USA). The antibodies were purchased from Southern Biotech. ## 2.8. Serum Cytokine Quantitation Blood samples from the brachial vein were allowed to clot at 37°C for 2 h and subsequently centrifuged at 3000 rpm for 15 min to separate the serum. The concentrations of INF-γ, IL-2, IL-4, and IL-10 in serum were measured using commercially available chicken Enzyme-Linked Immunosorbent Assay (ELISA) kits (Cusabio Biotech Company, Wuhan, China). ## 2.9. Statistical Analysis All the data were analyzed by ANOVA using SPSS Version 20.0 (SPSS Inc., Chicago, IL). 
Statistical differences among treatments were determined using Duncan’s Multiple Range Test. Results are presented as means ± SD. A P value < 0.05 was considered significant. ## 3. Results ### 3.1. Experiment 1 #### 3.1.1. The Dynamic Changes of Antibody Titer The dynamic changes of antibody titer in experiment 1 are presented in Figure 1. On 21 dpv, the sublancin treatments with 30 and 60 mg activity/L of water significantly increased (P<0.05) the antibody titer compared with the VC group. A numerical increase in antibody titer was observed in the 5 sublancin treatments compared with the VC group on 7, 14, and 28 dpv, although the differences were not statistically significant. Overall, compared with the VC group, the sublancin treatments increased the antibody titer by 1.72% to 40%. Figure 1 The dynamic variation of HI antibody titer in each group (log2) in Exp. 1. a,b Bars in the same day without the same superscripts differ significantly (P<0.05). ### 3.2. Experiment 2 #### 3.2.1. Effect of Sublancin on Serum ND Antibody Titers Figure 2 shows the effect of sublancin on serum ND HI antibody titers in experiment 2. In agreement with the results of experiment 1, the antibody titers in the sublancin treatment with 30 mg activity/L of water were significantly higher (P<0.05) than those in the VC group on 21 dpv. On 7 and 14 dpv, the sublancin treatment with 30 mg activity/L of water resulted in numerical increases in antibody titers of 11.76% and 21.15% compared with the VC group, although the differences were not statistically significant. Figure 2 The dynamic changes of antibody titer in each group (log2) in Exp. 2. a,b Bars in the same day without the same superscripts differ significantly (P<0.05). ### 3.3. Experiment 3 #### 3.3.1. Effect of Sublancin on Pinocytosis of Peritoneal Macrophages The pinocytosis activity of broiler peritoneal macrophages was examined by the uptake of neutral red.
As shown in Figure 3, the sublancin treatment with 30 mg activity/L of water had no significant effect on the pinocytosis activity compared with the BC and VC groups on 7 and 21 dpv. Figure 3 Effect of sublancin on pinocytosis of peritoneal macrophages in Exp. 3. #### 3.3.2. The Dynamic Changes of B Lymphocyte Proliferation The dynamic changes of the A450 value are presented in Figure 4. On 7 dpv, the A450 values did not differ among the 3 groups. However, on 21 dpv, the A450 values in the sublancin treatment with 30 mg activity/L of water were higher than those in the BC and VC groups (P<0.05). Figure 4 The changes of B lymphocyte proliferation in each group in Exp. 3. a,b Bars in the same day without the same superscripts differ significantly (P<0.05). #### 3.3.3. Effect of Sublancin on T Lymphocyte Subpopulations The CD4+ and CD8+ subsets of T lymphocytes are primarily involved in the immune responses to specific antigenic challenges. We found that the percentage of CD8+ peripheral blood lymphocytes did not differ among the groups (P>0.05) on 7 and 21 dpv. However, the percentage of CD4+ peripheral blood lymphocytes in the sublancin treatment was higher (P<0.05) than that in the BC and VC groups on 7 dpv (Figure 5). Likewise, the CD4+/CD8+ ratios were higher (P<0.05) than those in the BC and VC groups on 21 dpv. Figure 5 The dynamic changes of CD4+, CD8+, and CD4+/CD8+ T lymphocyte subpopulations in each group in Exp. 3. a,b Bars in the same day without the same superscripts differ significantly (P<0.05). (a) (b) (c) #### 3.3.4. Effect of Sublancin on Cytokine Production As shown in Figure 6, on 7 dpv, a numerical increase in serum concentrations of IFN-γ and IL-10 was observed in the sublancin treatment compared with the BC group, although the difference was not statistically significant (P>0.05).
On 21 dpv, the IL-4 concentration in the sublancin treatment also showed a numerical increase compared with that in the BC group (P>0.05). Figure 6 The changes of serum (a) IFN-γ, (b) IL-2, (c) IL-4, and (d) IL-10 concentrations in each group in Exp. 3. (a) (b) (c) (d) ## 4. Discussion Naturally occurring AMPs exhibit antibacterial properties and are also suggested to possess immune-enhancing activities [12], which make them promising adjuvant candidates for vaccine design. Ribosomally synthesized and posttranslationally modified peptides are a fast-expanding class of natural products that display a wide range of interesting biological activities. Sublancin is a member of the glycocin family containing 2 α-helices and a well-defined interhelical loop connected by an S-glucosidic linkage to Cys [13]. Mature sublancin has a molecular mass of 3879.8 Da [10]. It has previously been reported that sublancin possesses immunomodulatory properties [8, 14]. Acquired immunity, comprising humoral and cellular immunity, is an integral component of bird health. Humoral immunity mediated by B lymphocytes is a crucial immune reaction against infections, so a change in the antibody titer reflects the state of humoral immunity in animals [15]. In our study, sublancin significantly increased the serum ND antibody titers compared with the VC group, suggesting that sublancin could promote humoral immunity. It is well known that B cells are primarily responsible for humoral immunity, whereas T cells participate in cellular immunity. B lymphocytes act mainly by differentiating into effector B cells that secrete antibodies, which bind and eliminate antigens, thereby participating in the humoral immune response of the body. In addition, cytokines can also be released to participate in immune regulation [16]. We noted that sublancin treatment with 30 mg activity/L of water significantly increased the proliferation of B lymphocytes, indicating that B lymphocytes were activated by sublancin.
To further test the efficacy of sublancin on cellular immunity, we determined the amounts of CD4+ and CD8+ T lymphocyte subpopulations. CD4+ T lymphocytes can be activated by immunoreactive reactions with polypeptide antigens presented by major histocompatibility complex class II molecules. CD8+ T lymphocytes recognize antigens presented by major histocompatibility complex class I molecules and directly kill infected or variant cells. The number and status of CD4+ and CD8+ T lymphocytes directly reflect the immune status of the body [17]. Generally, the CD4+/CD8+ ratio remains relatively stable, so the proportions of CD4+ and CD8+ T lymphocyte subsets in peripheral blood, together with the ability to produce cytokines, can be measured to assess the immune status of the body [18]. Our results showed that the broilers receiving sublancin at 30 mg activity/L of water had an increased percentage of CD4+ T lymphocytes on days 7 and 21 after the vaccination. The CD4+/CD8+ ratio was significantly increased on days 7 and 21 after the vaccination. These results are in agreement with Xiaofei et al. [19], who reported that a compound mucosal immune adjuvant can increase the percentage of CD4+ T and CD8+ T lymphocytes in chickens orally vaccinated with attenuated Newcastle disease vaccine. Phagocytosis is one of the primary functions of macrophages, and this process is extremely crucial in excluding foreign bodies [20]. In the present study, we evaluated the phagocytic activity of macrophages by phagocytic index via neutral red uptake. The results showed that sublancin treatment with 30 mg activity/L of water had no significant effect on the phagocytic activity of macrophages compared with that in the BC and VC groups. These findings suggested that sublancin had no effect on the regulation of macrophages in SPF broilers.
In contrast, our previous study in mice demonstrated that oral administration of sublancin could enhance the phagocytic activity of peritoneal macrophages under normal conditions and attenuate the cyclophosphamide-induced inhibition of peritoneal macrophage phagocytic activity [8]. This discrepancy is most likely due to a species difference or to the physiological state of the birds. In addition to stimulating the proliferation of immune cells, sublancin has the potential to induce the secretion of IFN-γ, IL-10, and IL-4. The Th1 cytokine IFN-γ provides protective immunity against intracellular infections by organisms including bacteria, viruses, and protozoa [21], whereas IL-10 and IL-4 participate in the Th2 immune response. In this study, sublancin was administered via the oral route; thus, the possibility of a loss of glucosylation, reduction of disulfides, and/or attack by endogenous proteases on this peptide during its transit through the intestine cannot be ignored. Such reactions would modify some or all characteristics of the mature sublancin. Therefore, it can be postulated that the observed effects of sublancin in the present study might be due to the action of a partially modified mature sublancin or sublancin-derived peptides. ## 5. Conclusion In summary, our study demonstrated that sublancin exhibited immunostimulatory properties: it effectively activated B lymphocytes, increased the CD4+/CD8+ ratio, enhanced the ability to respond to antigens, and consequently increased the serum ND antibody titers in SPF broilers. Hence, the present study suggests that sublancin is a potential candidate vaccine adjuvant. ---
# A Novel Adjuvant “Sublancin” Enhances Immune Response in Specific Pathogen-Free Broiler Chickens Inoculated with Newcastle Disease Vaccine

**Authors:** Yangke Liu; Jiang Zhang; Shuai Wang; Yong Guo; Tao He; Rui Zhou

**Journal:** Journal of Immunology Research (2019)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2019/1016567
--- ## Abstract Sublancin is a glycosylated antimicrobial peptide produced by Bacillus subtilis 168 possessing antibacterial and immunomodulatory activities. This study was aimed at investigating the effects of sublancin on immune functions and serum antibody titer in specific pathogen-free (SPF) broiler chickens vaccinated with Newcastle disease (ND) vaccine. For this purpose, 3 experiments were performed. Experiment 1: SPF broiler chicks (14 days old) were randomly allotted to 1 of 7 groups, comprising a blank control (BC), a vaccine control (VC), and 5 groups (3-7) that were vaccinated and supplemented with sublancin at 5, 15, 30, 45, and 60 mg activity/L of water, respectively. The vaccinated groups (2-7) received ND vaccine by intranasal and intraocular routes on day 14. On 7, 14, 21, and 28 days post vaccination (dpv), blood samples were collected for the determination of serum hemagglutination inhibition (HI) antibody titer. Experiment 2: SPF broiler chicks were divided into 1 of 3 groups, i.e., blank control (BC), vaccine control (VC), and sublancin treatment (ST). On 7, 14, and 21 dpv, blood samples were collected for measuring HI antibody titer by micromethod. Experiment 3: the design of this experiment was the same as that of experiment 2. On 7 and 21 dpv, pinocytosis of peritoneal macrophages, B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation were carried out. It was noted that sublancin promoted B lymphocyte proliferation, increased the proportion of CD4+ T lymphocyte subpopulations, and enhanced the antibody titer in broiler chickens. In addition, it was also observed that sublancin has the potential to induce the secretion of IFN-γ, IL-10, and IL-4. In conclusion, these findings suggested that sublancin could promote both humoral and cellular immune responses and has the potential to be a promising vaccine adjuvant. --- ## Body ## 1.
Introduction Infectious diseases, especially viral diseases, remain one of the most critical challenges in the poultry industry, partly due to the genetic variation of viruses or the inferior quality of the vaccines. It is widely recognized that the application of vaccines coupled with an immunopotentiator could improve the efficacy of vaccination [1]. However, commonly used adjuvants, e.g., aluminum and oil emulsion, are reported to produce some side effects, such as carcinogenesis, strong local stimulation, or failure to enhance the immunogenicity of weak antigens [2]. Hence, the development of a new type of adjuvant with low toxicity and high efficiency could be of great significance and of immediate practical value in safeguarding against health-associated risk factors in the poultry industry. Antimicrobial peptides (AMPs) are naturally occurring molecules which provide immediate and nonspecific defense against invading pathogens [3]. A number of studies pointed out that AMPs participate in the modulation of the immune response [4, 5]. These immunopotentiating properties of AMPs make them suitable candidates for adjuvant design. Sublancin is a highly stable 37-amino acid AMP isolated from Bacillus subtilis 168 [6]. In our previous studies, we noted that sublancin alleviated Clostridium perfringens-induced necrotic enteritis in broilers, mainly by alleviating the inflammatory response [7]. Importantly, we also found that sublancin possesses the ability to activate macrophages, thereby protecting mice from cyclophosphamide-induced immunosuppression [8]. In addition, intragastric administration of sublancin induced a mixed Th1 and Th2 immune response in ovalbumin-immunized mice [9]. These reports elucidated that sublancin could be a promising immunomodulator. However, the immunomodulatory effects of sublancin on SPF broiler chickens remain poorly understood.
Additionally, whether sublancin can improve the immune response to ND vaccine in SPF broilers remains unknown. Although AMPs can improve the cellular and humoral immunity in animals [4], whether sublancin exhibits similar effects in SPF chickens remains to be investigated. Therefore, the present study evaluated the effects of sublancin on the immune response, for inducing humoral and cellular immunity against ND vaccine in SPF broilers. ## 2. Material and Methods All experiments involving animals were approved by the China Agricultural University Institutional Animal Care and Use Committee (ID: SKLAB-B-2010-003). ### 2.1. Preparation of Sublancin Sublancin was produced in our laboratory using a highly efficient expression system involving Bacillus subtilis 800, as described previously [10]. The amino acid sequence of sublancin was determined as GLGKAQCAALWLQCASGGTIGCGGGAVACQNYRQFCR, and the peptide purity was >99.6% as determined by high-performance liquid chromatography. Sublancin was produced as a lyophilized powder and stored at –20°C until further use. ### 2.2. Animals Fourteen-day-old SPF broiler chicks were obtained from the Quality Control Department of Beijing Merial Vital Laboratory Animal Technology Co., Ltd. (Beijing, China) and were housed under standard conditions of temperature (22-26°C), relative humidity (40-65%), and light intensity (150-300 lux). The broilers were fed Co-60-irradiated sterile nutritious Complete Feed (Beijing Keao Feed Co., Ltd, Beijing, China), while clean and fresh water was made available ad libitum. ### 2.3. Experimental Design #### 2.3.1. Experiment 1 Ninety-one 14-day-old SPF broiler chicks were randomly allotted to 1 of 7 groups with 13 chicks in each treatment. The treatments included a blank control (BC), vaccine control (VC), and 5 sublancin treatments in which sublancin was supplemented at 5, 15, 30, 45, and 60 mg activity/L of water, respectively.
Briefly, soluble sublancin powder was mixed in a 1-L drinking barrel in each group at the rate of 5, 15, 30, 45, or 60 mg activity/L of water. Fresh sublancin was administered daily throughout the experiment. When the barrel containing sublancin was emptied, purified water without treatment was added to the barrel for the remainder of the day. The broilers in the BC and VC treatments had access to purified water without sublancin treatment all day. All the broilers except the BC group were vaccinated with LaSota ND vaccine by intranasal and intraocular routes on day 14. On 7, 14, 21, and 28 dpv, blood samples were collected from the brachial vein for the determination of serum HI antibody titer by micromethod. #### 2.3.2. Experiment 2 Thirty 14-day-old SPF broiler chicks were divided into 1 of 3 groups with 10 chicks in each group. The experimental treatments were similar to those in Exp. 1, except that only one sublancin treatment was used in this experiment. In the ST group, birds were provided purified water mixed with sublancin at 30 mg activity/L of water and vaccinated with ND vaccine as in experiment 1. On 7, 14, and 21 dpv, blood samples from the brachial vein were collected for the determination of HI antibody titer by micromethod. #### 2.3.3. Experiment 3 Thirty-six 14-day-old SPF broiler chicks were randomly allocated to 1 of 3 groups with 12 chicks in each group. The design of this experiment was the same as that of experiment 2. On 7 and 21 dpv, 6 chickens per group were selected randomly for the determination of pinocytosis of peritoneal macrophages, B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation. ### 2.4. Serum HI Antibody Assay Blood samples (0.5 mL per chick) were collected from the brachial vein, put into 2 mL Eppendorf tubes, and allowed to clot at 37°C for 2 h. Serum was separated by centrifugation at 3000 rpm for 15 min for the determination of HI antibody.
Serum HI antibody assay was performed as previously described [11]. The geometric mean titer was presented as the reciprocal log2 value of the highest dilution that displayed HI. ### 2.5. Determination of Pinocytosis of Peritoneal Macrophages Peritoneal cells were harvested by peritoneal lavage with 20 mL RPMI-1640 (Gibco) medium. The cell-rich lavage fluid was aspirated and centrifuged at 1500 rpm for 15 min. The pellet was resuspended at 1×10⁶ cells/mL in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) and 100 units/mL penicillin/streptomycin (Life Technologies) and seeded in 96-well plates at 100 μL/well. Cells were purified by adherence to culture plates for 3 h. Thereafter, the culture medium was discarded and 100 μL/well of 0.075% neutral red was added and incubated for 1 h. After washing 3 times with PBS, 200 μL of lysis solution (alcohol:acetic acid, 1:1 v/v) was added into each well and maintained at 37°C for 10 min. The absorbance was measured at 570 nm by a microplate reader (iMark, Bio-Rad, USA). ### 2.6. Proliferation Assay of B Lymphocyte Blood samples from the heart were collected and then carefully layered on the surface of the lymphocyte separation medium. After centrifugation at 1500 rpm for 15 min, the white cloud-like lymphocyte band was collected and washed twice with RPMI-1640 medium. The cell pellet was resuspended at 1×10⁶ cells/mL with RPMI-1640 medium and seeded in 96-well plates at 80 μL per well; 20 μL of LPS (10 μg/mL) was then added to each well. The plates were incubated at 37°C in a humidified atmosphere with 5% CO2. After 44 h, 20 μL of MTT (5 μg/mL) was added into each well. The plates were reincubated for 4 h and then centrifuged at 1500 rpm for 10 min. The supernatant was removed carefully, and 100 μL of DMSO was added into each well. The absorbance at 450 nm was measured by an automatic microplate reader as the index of B lymphocyte proliferation. ### 2.7.
Measurement of CD4+ and CD8+ T Cells Cellular populations in the peripheral blood from the broilers were analyzed using flow cytometry. The lymphocytes were stained with CD3-PE, CD4-FITC, and CD8-SPRD at 4°C for 30 min and then analyzed by flow cytometry (Gallios, Beckman Coulter, Brea, CA, USA). The antibodies were purchased from Southern Biotech. ### 2.8. Serum Cytokine Quantitation Blood samples from the brachial vein were allowed to clot at 37°C for 2 h and subsequently centrifuged at 3000 rpm for 15 min to separate the serum. The concentrations of INF-γ, IL-2, IL-4, and IL-10 in serum were measured using commercially available chicken Enzyme-Linked Immunosorbent Assay (ELISA) kits (Cusabio Biotech Company, Wuhan, China). ### 2.9. Statistical Analysis All the data were analyzed by ANOVA using SPSS Version 20.0 (SPSS Inc., Chicago, IL). Statistical differences among treatments were determined using Duncan’s Multiple Range Test. Results are presented asmeans±SD. P value < 0.05 was considered significant. ## 2.1. Preparation of Sublancin Sublancin was produced in our laboratory using a highly efficient expression system involvingBacillus subtilis 800 as described previously [10]. The amino acid sequence of sublancin was determined as GLGKAQCAALWLQCASGGTIGCGGGAVACQNYRQFCR, and the peptide purity was >99.6% as determined by high-performance liquid chromatography. Sublancin was produced as lyophilized powder and stored at –20°C until further use. ## 2.2. Animals Fourteen-day-old SPF broiler chicks were obtained from the Quality Control Department of Beijing Merial Vital Laboratory Animal Technology Co., Ltd. (Beijing, China) and were housed under standard conditions of temperature (22-26°C), relative humidity (40-65%), and light intensity (150-300 lux). The broilers were fed with Co60-irradiated sterile nutritious feed in Complete Feed (Beijing Keao Feed Co., Ltd, Beijing, China) while clean and fresh water was made available ad libitum. ## 2.3. 
Experimental Design ### 2.3.1. Experiment 1 Ninety-one, 14-day-old SPF broiler chicks were randomly allotted to 1 of 7 groups with 13 chicks in each treatment. The treatments included a blank control (BC), vaccine control (VC), and 5 sublancin treatments in which sublancin was supplemented at 5, 15, 30, 45, and 60 mg activity/L of water, respectively. Briefly, soluble sublancin powder was mixed in 1-L drinking barrel located in each group at the rate of 5, 15, 30, 45, and 60 mg activity/L of water. Fresh sublancin was administered daily throughout the experiment. When the barrel containing sublancin was emptied, purified water without treatment was added to the barrel for the remainder of the day. The broilers in the BC and VC treatments had access to purified water without sublancin treatment all day. All the broilers except the BC group were vaccinated with LaSota ND vaccine by intranasal and intraocular routes at the 14th day. On 7, 14, 21, and 28 dpv, the blood samples were collected from the brachial vein for the determination of serum HI antibody titer by micromethod. ### 2.3.2. Experiment 2 Thirty, 14-day-old SPF broiler chicks were divided into 1 of 3 groups with 10 chicks in each group. The experimental treatments were similar to Exp. 1 except only one sublancin treatment was used in this experiment. In the ST group, birds were provided purified water mixed with sublancin at 30 mg activity/L of water and vaccinated with ND vaccine as in experiment 1. On 7, 14, and 21 dpv, the blood samples from the brachial vein were collected for the determination of HI antibody titer by micromethod. ### 2.3.3. Experiment 3 Thirty-six, 14-day-old SPF broiler chicks were randomly allocated to 1 of 3 groups with 12 chicks in each group. The design of this experiment was the same as that of experiment 2. 
On 7 and 21 dpv, 6 chickens per group were randomly selected for the determination of pinocytosis of peritoneal macrophages, the B lymphocyte proliferation assay, measurement of CD4+ and CD8+ T cells, and serum cytokine quantitation.

## 2.4. Serum HI Antibody Assay

Blood samples (0.5 mL per chick) were collected from the brachial vein, put into 2 mL Eppendorf tubes, and allowed to clot at 37°C for 2 h. Serum was separated by centrifugation at 3000 rpm for 15 min for the determination of HI antibody. The serum HI antibody assay was performed as previously described [11]. The geometric mean titer was presented as the reciprocal log2 value of the highest dilution that displayed HI.

## 2.5. Determination of Pinocytosis of Peritoneal Macrophages

Peritoneal cells were harvested by peritoneal lavage with 20 mL RPMI-1640 (Gibco) medium. The cell-rich lavage fluid was aspirated and centrifuged at 1500 rpm for 15 min. The pellet was resuspended at 1×10⁶ cells/mL in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) and 100 units/mL penicillin/streptomycin (Life Technologies) and seeded in 96-well plates at 100 μL/well. Cells were purified by adherence to the culture plates for 3 h. Thereafter, the culture medium was discarded, and 100 μL/well of 0.075% neutral red was added and incubated for 1 h. After washing with PBS 3 times, 200 μL of lysis solution (alcohol : acetic acid, 1 : 1 v/v) was added into each well and maintained at 37°C for 10 min. The absorbance was measured at 570 nm by a microplate reader (iMark, Bio-Rad, USA).

## 2.6. Proliferation Assay of B Lymphocytes

Blood samples from the heart were collected and then carefully layered on the surface of the lymphocyte separation medium. After centrifugation at 1500 rpm for 15 min, the white cloud-like lymphocyte band was collected and washed twice with RPMI-1640 medium.
The cell pellet was resuspended at 1×10⁶ cells/mL in RPMI-1640 medium and seeded in 96-well plates at 80 μL per well; then 20 μL of LPS (10 μg/mL) was added. The plates were incubated at 37°C in a humidified atmosphere with 5% CO2. After 44 h, 20 μL of MTT (5 μg/mL) was added into each well. The plates were reincubated for 4 h and then centrifuged at 1500 rpm for 10 min. The supernatant was removed carefully, and 100 μL of DMSO was added into each well. The absorbance at 450 nm was measured by a microplate autoreader as the index of B lymphocyte proliferation.

## 2.7. Measurement of CD4+ and CD8+ T Cells

Cellular populations in the peripheral blood of the broilers were analyzed using flow cytometry. The lymphocytes were stained with CD3-PE, CD4-FITC, and CD8-SPRD at 4°C for 30 min and then analyzed by flow cytometry (Gallios, Beckman Coulter, Brea, CA, USA). The antibodies were purchased from Southern Biotech.

## 2.8. Serum Cytokine Quantitation

Blood samples from the brachial vein were allowed to clot at 37°C for 2 h and subsequently centrifuged at 3000 rpm for 15 min to separate the serum. The concentrations of IFN-γ, IL-2, IL-4, and IL-10 in serum were measured using commercially available chicken enzyme-linked immunosorbent assay (ELISA) kits (Cusabio Biotech Company, Wuhan, China).

## 2.9. Statistical Analysis

All data were analyzed by ANOVA using SPSS Version 20.0 (SPSS Inc., Chicago, IL). Statistical differences among treatments were determined using Duncan's Multiple Range Test. Results are presented as means ± SD. A P value < 0.05 was considered significant.

## 3. Results

### 3.1. Experiment 1

#### 3.1.1. The Dynamic Changes of Antibody Titer

The dynamic changes of antibody titer in experiment 1 are presented in Figure 1. On 21 dpv, the sublancin treatments with 30 and 60 mg activity/L of water significantly increased (P<0.05) the antibody titer compared with the VC group.
A numerical increase in antibody titer was observed in the 5 sublancin treatments compared with the VC group on 7, 14, and 28 dpv, although there was no statistical difference. Overall, compared with the VC group, the sublancin treatments increased the antibody titer by 1.72–40%.

Figure 1: The dynamic variation of HI antibody titer in each group (log2) in Exp. 1. a,b: Bars in the same day without the same superscripts differ significantly (P<0.05).

### 3.2. Experiment 2

#### 3.2.1. Effect of Sublancin on Serum ND Antibody Titers

Figure 2 shows the effect of sublancin on serum ND HI antibody titers in experiment 2. In agreement with the results of experiment 1, the antibody titers in the sublancin treatment with 30 mg activity/L of water were significantly higher (P<0.05) than those in the VC group on 21 dpv. On 7 and 14 dpv, the sublancin treatment with 30 mg activity/L of water resulted in a numerical increase in antibody titers by 11.76 and 21.15% compared with the VC group, although there was no statistical difference.

Figure 2: The dynamic changes of antibody titer in each group (log2) in Exp. 2. a,b: Bars in the same day without the same superscripts differ significantly (P<0.05).

### 3.3. Experiment 3

#### 3.3.1. Effect of Sublancin on Pinocytosis of Peritoneal Macrophages

The pinocytosis activity of broiler peritoneal macrophages was examined by the uptake of neutral red. As shown in Figure 3, the sublancin treatment with 30 mg activity/L of water had no significant effect on the pinocytosis activity compared with the BC and VC groups on 7 and 21 dpv.

Figure 3: Effect of sublancin on pinocytosis of peritoneal macrophages in Exp. 3.

#### 3.3.2. The Dynamic Changes of B Lymphocyte Proliferation

The dynamic changes of the A450 value are presented in Figure 4. On 7 dpv, the A450 values did not differ among the 3 groups.
However, on 21 dpv, the A450 values in the sublancin treatment with 30 mg activity/L of water were higher than those in the BC and VC groups (P<0.05).

Figure 4: The changes of B lymphocyte proliferation in each group in Exp. 3. a,b: Bars in the same day without the same superscripts differ significantly (P<0.05).

#### 3.3.3. Effect of Sublancin on T Lymphocyte Subpopulations

The CD4+ and CD8+ subsets of T lymphocytes are primarily involved in the immune responses to specific antigenic challenges. We found that the percentage of CD8+ peripheral blood lymphocytes remained unchanged between the groups (P>0.05) on 7 and 21 dpv. However, the percentage of CD4+ peripheral blood lymphocytes in the sublancin treatment was higher (P<0.05) than that in the BC and VC groups on 7 dpv (Figure 5). Likewise, the CD4+/CD8+ values were higher (P<0.05) than those in the BC and VC groups on 21 dpv.

Figure 5: The dynamic changes of CD4+, CD8+, and CD4+/CD8+ T lymphocyte subpopulations in each group in Exp. 3. a,b: Bars in the same day without the same superscripts differ significantly (P<0.05).

#### 3.3.4. Effect of Sublancin on Cytokine Production

As shown in Figure 6, on 7 dpv, a numerical increase in the serum concentrations of IFN-γ and IL-10 was observed in the sublancin treatment compared with the BC group, although there was no statistical difference (P>0.05). On 21 dpv, the IL-4 concentration in the sublancin treatment also showed a numerical increase compared with that in the BC group (P>0.05).

Figure 6: The changes of serum (a) IFN-γ, (b) IL-2, (c) IL-4, and (d) IL-10 concentrations in each group in Exp. 3.

## 4. Discussion

Naturally occurring AMPs exhibit antibacterial properties and are also suggested to possess immune-enhancing activities [12], which make them promising adjuvant candidates for vaccine design. Ribosomally synthesized and posttranslationally modified peptides are a fast-expanding class of natural products that display a wide range of interesting biological activities. Sublancin is a member of the glycocin family containing 2 α-helices and a well-defined interhelical loop connected by an S-glucosidic linkage to Cys [13]. Mature sublancin has a molecular mass of 3879.8 Da [10].
It has previously been reported that sublancin possesses immunomodulatory properties [8, 14]. Acquired immunity, comprising humoral and cellular immunity, constitutes an integral component of a bird's health. Humoral immunity mediated by B lymphocytes is a crucial immune reaction against infections, and thus a change in the antibody titer reflects the state of humoral immunity in animals [15]. In our study, sublancin significantly increased the serum ND antibody titers compared with the VC group, suggesting that sublancin could promote humoral immunity. It is well known that B cells are primarily responsible for humoral immunity, whereas T cells participate in cellular immunity. B lymphocytes mainly eliminate antigens through the binding of antibodies secreted by effector B cells, thereby participating in the humoral immune process of the body. In addition, cytokines can be released to participate in immune regulation [16]. We noted that the sublancin treatment with 30 mg activity/L of water significantly increased the proliferation of B lymphocytes, indicating that B lymphocytes were activated by sublancin. To further test the efficacy of sublancin on cellular immunity, we determined the amounts of the CD4+ and CD8+ T lymphocyte subpopulations. CD4+ T lymphocytes can be activated by immunoreactive reactions with polypeptide antigens presented by major histocompatibility complex class II molecules. CD8+ T lymphocytes recognize antigens presented by major histocompatibility complex class I molecules and directly kill infected or variant cells. The number and status of CD4+ and CD8+ T lymphocytes directly reflect the immune status of the body [17]. Generally, the CD4+/CD8+ ratio remains relatively stable, so the value and proportion of the CD4+/CD8+ T lymphocyte subsets in the peripheral blood, along with the ability to produce cytokines, can be measured to assess the immune status of the body's cells [18].
Our results showed that the broilers receiving sublancin at 30 mg activity/L of water had an increased value of CD4+ T lymphocytes on days 7 and 21 after vaccination. The value of CD4+/CD8+ was significantly increased on days 7 and 21 after vaccination. These results are in agreement with Xiaofei et al. [19], who reported that a compound mucosal immune adjuvant can increase the percentages of CD4+ and CD8+ T lymphocytes in chickens orally vaccinated with attenuated Newcastle disease vaccine. Phagocytosis is one of the primary functions of macrophages, and this process is extremely crucial in excluding foreign bodies [20]. In the present study, we evaluated the phagocytic activity of macrophages by the phagocytic index via neutral red uptake. The results showed that the sublancin treatment with 30 mg activity/L of water had no significant effect on the phagocytic activity of macrophages compared with that in the BC and VC groups. These findings suggest that sublancin had no effect on the regulation of macrophages in SPF broilers. On the contrary, our previous study in mice demonstrated that oral administration of sublancin could enhance the phagocytic activity of peritoneal macrophages under normal conditions and attenuate the cyclophosphamide-induced inhibition of peritoneal macrophage phagocytic activity [8]. This discrepancy is most likely due to a species difference or the physiological state of the birds. In addition to stimulating the proliferation of immune cells, sublancin has the potential to induce the secretion of IFN-γ, IL-10, and IL-4. The Th1 cytokine IFN-γ provides protective immunity against intracellular infections by organisms including bacteria, viruses, and protozoa [21], whereas IL-10 and IL-4 participate in the Th2 immune response.
In this study, sublancin was administered via the oral route; thus the possibility of a loss of glucosylation, reduction of disulfides, and/or attack by endogenous proteases on this peptide during its transit through the intestine cannot be ignored. Such reactions would modify some or all characteristics of the mature sublancin. Therefore, it can be postulated that the observed effects of sublancin in the present study might be due to the action of a partially modified mature sublancin or sublancin-derived peptides.

## 5. Conclusion

In summary, our study demonstrated that sublancin exhibited immunostimulatory properties: it effectively activated B lymphocytes, increased the value of CD4+/CD8+, enhanced the ability to respond to antigens, and consequently increased the serum ND antibody titers in SPF broilers. Hence, the present study suggests that sublancin is a potential candidate vaccine adjuvant.

---

*Source: 1016567-2019-12-01.xml*
2019
# Key Frame Extraction for Sports Training Based on Improved Deep Learning

**Authors:** Changhai Lv; Junfeng Li; Jian Tian
**Journal:** Scientific Programming (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1016574

---

## Abstract

With rapid technological advances in sports, the number of athletes is growing steadily. For sports professionals, it is essential to monitor and analyze athletes' poses during training. Key frame extraction from training videos plays a significant role in easing the analysis of sports training videos. This paper develops a sports action classification system for accurately classifying athletes' actions. The key video frames are extracted from the sports training video to highlight the distinct actions in sports training. Subsequently, a fully convolutional network (FCN) is used to extract the region of interest (ROI) for pose detection in frames, followed by the application of a convolutional neural network (CNN) to estimate the pose probability of each frame. Moreover, a distinct key frame extraction approach is established that extracts key frames based on the probability differences between neighboring frames. The experimental results show that the proposed method performs well and can recognize an athlete's posture with an average classification rate of 98%. The experimental results and analysis validate that the proposed key frame extraction method outperforms its counterparts in key pose probability estimation and key pose extraction.

---

## Body

## 1. Introduction

With the advent of artificial intelligence, performance analysis in sport has undergone significant changes in recent years. In general, manual analysis performed by trained sports analysts has drawbacks: it is time-consuming, subjective in nature, and prone to human error.
Objective measurement and assessment of sports actions are indispensable for understanding the physical and technical demands related to sports performance [1]. Intelligent sports action recognition methods are developed to provide objective analysis and evaluation in sport, improve the accuracy of sports performance analysis, and validate the efficiency of training programs. Common sports action recognition systems can be developed using advanced machine learning methods to process the data collected via computer vision systems and wearable sensors [2]. Sports activities recorded through a computer vision system can be used for athlete action detection, movement analysis, and pose estimation [3]. Vision-based sports action recognition can provide real-time feedback for athletes and coaches. However, players' actions in sports videos are complex and skillful. Compared with daily activities, the analysis of sports videos is more challenging, because players perform rapid, continuous actions within the camera view, which degrades action recognition performance [4]. In sports video analysis and processing for action recognition, extracting pertinent, basic information is a mandatory task. If the video is large, it is hard to process the whole video in a short time while preserving its semantics [5]. The extraction of key frames is a prime step of video analysis. A key frame provides eloquent information and serves as a summary of the entire video sequence [6]. A video is normally recorded at 30 frames per second and thus contains redundant information for a particular computer vision recognition task. Key frame detection is mainly applied in video summarization and visual localization in videos. Using all the frames of a video requires more computational resources and memory.
In many computer vision applications, one or a few key frames may be enough to accomplish the desired recognition results [3]. Key frames are used in many applications such as search, information retrieval, and scene analysis in videos [7]. A video has a composite structure and is made of several scenes, shots, and frames. Figure 1 shows the division of a video into shots and frames. In many video and image processing tasks, such as scene analysis and sequence summarization, it is essential to analyze the complete video. During the analysis of videos, the major steps are scene segmentation, shot boundary detection, and key frame extraction [8, 9]. A shot is a contiguous, adjacent combination of frames recorded by a camera. The key objective of extracting key frames is to obtain the unique frames in a video and prepare the video sequences for quick processing [10]. In this paper, we propose an effective method for extracting key frames from athlete sports videos that is accurate, fast, and efficient. The proposed key frame extraction model takes a long sports action video as input and extracts the key frames that best represent the sports action for recognition. We introduce an improved convolutional neural network method to detect key frames in athletes' videos. We performed experiments on an athlete training video dataset to show the effectiveness of our method for key frame detection.

Figure 1: Structure of a sport video.

We structured the rest of the paper as follows. In Section 2, related work is presented. Section 3 provides the details of the proposed method. Sections 4 and 5 present the experimental results and conclusion, respectively.

## 2. Related Work

With the advancement of sports, competition in sports is becoming a base for developing people's social life and emotions. In order to enhance the competitive skills of athletes, active investigation of sports training is one of the central issues.
Many previous analysis methods in this field depend on a segmentation-based approach [11]. These methods usually extract visual features from videos. One of the first attempts discovered local minimum changes within videos with respect to the similarity between consecutive frames. Later, other works augmented this approach by using key point detection for local feature extraction and combining the key points to find the key frames [12]. All of these methods share a common shortcoming: they extract redundant frames rather than fully covering the video contents. Another group of traditional methods is based on feature clusters and detects the key video frames by predicting a prominent frame in individual clusters. Zhuang et al. [13] employed the joint entropy (JE) and mutual information (MI) between successive video frames to detect key frames. Tang et al. [14] developed a clustering method for recognizing key frames using visual content and motion analysis. A frame extraction method for hand gesture image recognition using image entropy and density clustering was presented in [15]. Cun et al. [16] developed a method for the extraction of key frames using spectral clustering, in which the feature locality in the video sequence was captured using a graph as an alternative to relying on a similarity measure shared between two images. To overcome the shortcomings of traditional frame detection methods, recent works have focused on deep learning to perform key frame recognition in videos [17]. Deep learning has made great breakthroughs in applications such as speech recognition, vision-based systems, human activity recognition, and image classification [18]. Deep learning models simulate human neurons and combine low- and high-level features to describe and understand objects [19].
Deep learning stands in contrast to "shallow learning." The major difference is that a deep model contains several nonlinear operations and more neural network layers [20]. "Shallow learning" relies on manual feature extraction and ultimately obtains single-layer features. Deep learning extracts different levels of features from the original signal, from shallow to deep. In addition, deep learning can learn deeper and more complex features to better represent the image, which benefits classification and other tasks. A deep learning architecture comprises a large number of neurons, each of which is connected with other neurons, and learning proceeds by updating the weights through continuous iteration. Deep neural networks (DNNs) have a deep structure that stacks multiple single-layer nonlinear networks. At present, the common networks can be categorized into feedback deep networks (FBDN), bidirectional deep networks (BDDN) [21], and feedforward deep networks (FFDN). Different supervised and unsupervised deep learning methods have been suggested for key frame detection in sports videos, which considerably enhance the performance of action recognition systems. Yang et al. [22] employed generative adversarial networks for the detection of key frames in videos: CNNs were employed to extract discriminative features, which were encoded using long short-term memory (LSTM) networks. Another approach, using bidirectional long short-term memory (Bi-LSTM), was introduced in [23]; the method was effective for automatically extracting and highlighting the key frames of videos. Huang and Wang [24] proposed a two-stream CNN approach to detect the key frames for action recognition. Likewise, Jian et al. [25] devised a unique key frame and shot selection model for video summarization. Wen et al.
[26] employed a frame extraction system that estimates the pose probability of each neighboring frame in a sports video. Moreover, Wu et al. [27] presented a video generation approach based on key frames. In this study, we propose an improved key frame extraction technique for sports action recognition using a convolutional neural network. An FCN is applied to obtain the ROI for more accurate pose detection in frames, followed by the application of a CNN to estimate the pose probability of individual frames.

## 3. Methods

### 3.1. Overview of CNN

A CNN is an artificial neural network that mimics the human brain and can handle the training and learning of layered network structures. A CNN uses local receptive fields to acquire autonomous learning capability and to process huge volumes of image data. A CNN is a specific type of FFDN and is extensively used for image recognition. A CNN represents image data in the form of multidimensional arrays or matrices. It processes each slice of an input image and assigns weights to each neuron based on the role of its receptive field. Weight sharing and pooling reduce the dimension of the image features, reduce the complexity of parameter adjustment, and improve the stability of the network structure. Finally, prominent features are generated for classification, which is why CNNs are broadly used for object detection and image classification. A CNN primarily comprises the input layer, convolution layers, pooling layers, fully connected layers, and the output layer. The input image is given to the input layer for processing. The convolution layer performs a convolution operation over the input matrix, processing the input image for feature extraction. The function of the pooling layer is to take the maximum value of the pixels in each target area of the input image, condensing the resolution of the feature map and helping to avoid overfitting.
The fully connected layer is composed of one or more neurons, each of which is linked to all the neurons in the preceding layer. The obtained feature vector is mapped to the output layer to facilitate classification. The function of the output layer is to classify the feature vectors mapped from the fully connected layer and create a one-dimensional output vector with dimensions equal to the number of classes.

### 3.2. Deep Key Frame Extraction

#### 3.2.1. Proposed Algorithm

In this section, we provide the details of the proposed deep key frame extraction method for sports training. The method is based on athlete skeleton extraction. As illustrated in Figure 2, the proposed frame extraction technique consists of four steps: preprocessing of the athlete training video, ROI extraction based on the FCN, skeleton and feature extraction, and CNN-based key frame extraction. The proposed deep frame extraction method examines the poses of athletes in training videos. It first divides input videos into frame sequences and then explores the ROI. The FCN is applied for the extraction of the foreground features of the athlete. Next, all the video frames are cropped according to the ROI extracted in the first frame.

Figure 2: Algorithm framework.

#### 3.2.2. Extracting Athletes' Skeletons

We used the ROI image extracted by the FCN network and the previously labeled ground truth to make the training data of the deep skeleton network. The original training image and the labeled ground truth are shown in Figure 3.

Figure 3: Original image and ground truth.

The Matlab (R2015a) software was used to extract the athletes' skeleton information from the ground truth. The Matlab `bwmorph` function was applied to perform morphological operations on all images. The general syntax of the Matlab `bwmorph` function is as follows: `BW2 = bwmorph(BW, operation, n)`, which applies the morphological operation n times; n can be Inf, in which case the operation is repeated until the image no longer changes.
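As a rough pure-Python illustration (not the authors' code and not a substitute for Matlab's implementation), the repeat-until-stable semantics of `n = Inf` can be sketched using the "clean" operation (remove isolated pixels) described in Table 1; the function names, the list-of-lists image representation, and the 8-neighborhood convention are assumptions of this sketch:

```python
def clean(img):
    """One pass of bwmorph's 'clean' operation: a 1-pixel with no
    8-connected neighbor equal to 1 is treated as isolated and set to 0."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if img[r][c] == 1:
                has_neighbor = any(
                    img[rr][cc] == 1
                    for rr in range(max(r - 1, 0), min(r + 2, h))
                    for cc in range(max(c - 1, 0), min(c + 2, w))
                    if (rr, cc) != (r, c)
                )
                if not has_neighbor:
                    out[r][c] = 0
    return out


def bwmorph_inf(img, operation):
    """Mimic BW2 = bwmorph(BW, operation, Inf): repeat the operation
    until the image no longer changes."""
    while True:
        nxt = operation(img)
        if nxt == img:
            return nxt
        img = nxt
```

For example, `bwmorph_inf([[1, 0, 0], [0, 0, 0], [0, 1, 1]], clean)` removes the isolated pixel in the top-left corner while keeping the connected pair; "clean" happens to converge in one pass, but the loop structure is the same one that iterative operations such as repeated erosion rely on.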
Table 1 lists some of the morphological operations that `bwmorph` can perform on images.

Table 1: `bwmorph` morphological operations on images.

| Operation | Description |
| --- | --- |
| `bothat` | Morphological "bottom-hat" transformation: returns the morphological closing of the image minus the original image (closing: dilation followed by erosion) |
| `bridge` | Bridges disconnected pixels: a pixel is set to 1 if it has two nonzero, unconnected (8-neighborhood) neighbors |
| `clean` | Removes isolated pixels (a 1 surrounded by 0s) |
| `close` | Performs morphological closing (dilation followed by erosion) |
| `diag` | Diagonal filling, used to eliminate 8-connectivity of the background |
| `dilate` | Performs dilation with the structuring element `ones(3)` |
| `erode` | Performs erosion with the structuring element `ones(3)` |
| `fill` | Fills isolated interior pixels (a 0 surrounded by 1s) |

Different morphological operations can be selected to generate the athlete's skeleton information. The skeleton information for the four key postures is shown in Figure 4.

Figure 4: (a) Original picture and ground-truth athlete skeleton; (b) knee-lead original and ground-truth skeleton; (c) original and ground-truth skeleton; (d) highest-point original and ground-truth skeleton.

As the skeleton maps show, the four key postures yield clearly distinct skeleton information. The 373 labeled images were used to extract athlete skeleton information serving as labels for training the deep skeleton network.

#### 3.2.3. Generation of Athlete Skeleton Information

We prepared the training and test files, as shown in Figure 5.
The left side represents the original image, whereas the right side is the ground truth.

Figure 5: Training parameters.

Because the deep skeleton network is adapted from the VGG (Visual Geometry Group) network, some VGG parameters are reused. VGG is a conventional CNN architecture built from blocks, each consisting of 2D convolution and max-pooling layers. As in FCN training, the deep skeleton network differs from a traditional single-label classification network in that it uses the athlete-skeleton image itself as the label.

After 20,000 iterations, the trained model was obtained. A test set was randomly selected to evaluate recognition performance. The predicted value of each pixel was normalized and rendered as a predicted gray image; the original and predicted images are shown in Figure 6.

Figure 6: Original images and predicted results.

The white portion of the figure indicates the athletes' skeleton information: the higher the pixel value, the more likely the pixel belongs to the skeleton. Next, the non-maximum suppression (NMS) algorithm is used to localize the skeleton pixels. NMS is used in several image processing tasks; it is a family of algorithms that selects one entity out of many candidates. We take the three-neighborhood case as an example to introduce its implementation.

NMS in three neighborhoods decides whether an element I[x] (2 <= x <= w−1) of a one-dimensional array I[1..w] is greater than its left neighbor I[x−1] and its right neighbor I[x+1] (Algorithm 1).

Algorithm 1: NMS for three neighborhoods.
(1) x = 2
(2) while x <= w−1 do
(3)   if I[x] > I[x+1] then
(4)     if I[x] >= I[x−1] then
(5)       MaximumAt(x)
(6)   else
(7)     x = x+1
(8)     while x <= w−1 and I[x] <= I[x+1] do
(9)       x = x+1
(10)    if x <= w−1 then
(11)      MaximumAt(x)
(12)  x = x+2

Lines 3–5 of the algorithm check whether the current element is greater than both of its neighbor elements.
If the condition is met, the element is a local maximum. For a maximum at x, we know I[x] > I[x+1], so the element at position x+1 need not be examined; the scan jumps directly to position x+2, corresponding to line 12 of the algorithm. If I[x] fails the test on line 3, its right neighbor I[x+1] becomes the maximum candidate (line 7), and the scan climbs the monotonically increasing run to the right until an element satisfying I[x] > I[x+1] is found. If at that point x <= w−1 still holds, that element is a local maximum (lines 10–11).

We applied the NMS routine from the MATLAB toolkit to the output of the deep skeleton network to determine which pixels are likely to belong to the athletes' skeleton. The predicted results and NMS results are shown in Figure 7; a test image including the athlete skeleton is shown in Figure 8.

Figure 7: Prediction results and NMS results.

Figure 8: Test image including the athlete skeleton.
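For reference, Algorithm 1 above can be transcribed into a short 0-indexed Python function. This is an illustrative port, not the MATLAB NMS routine the authors used.

```python
def nms_1d(I):
    """Three-neighborhood non-maximum suppression scan.

    Returns the indices x of local maxima, i.e. I[x-1] <= I[x] > I[x+1],
    skipping ahead two positions after each detected maximum, as in
    Algorithm 1 (converted from 1-indexed to 0-indexed).
    """
    w = len(I)
    maxima = []
    x = 1
    while x < w - 1:
        if I[x] > I[x + 1]:
            if I[x] >= I[x - 1]:
                maxima.append(x)          # lines 3-5: immediate maximum
        else:
            x += 1                        # line 7: right neighbor is candidate
            while x < w - 1 and I[x] <= I[x + 1]:
                x += 1                    # lines 8-9: climb the increasing run
            if x < w - 1:
                maxima.append(x)          # lines 10-11
        x += 2                            # line 12: skip the next element
    return maxima
```

For example, `nms_1d([0, 2, 1, 3, 5, 4, 4, 6, 1])` finds the peaks at indices 1, 4, and 7.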
## 4. Results

In this section, we report an experimental analysis confirming the performance of the proposed key frame extraction method. Experiments were performed on sports videos collected from the Chinese Administration of Sports; all videos contain the four key athlete poses.

### 4.1. CNN-Based Key Pose Estimation

The proposed key frame extraction method feeds the ROI of the extracted video frames to a CNN to predict the probability of each pose. In all sports videos there are four groups of key poses. The CNN model computed a pose probability for every frame, with each frame judged as an accurate or inaccurate instance of a pose. Table 2 gives the classification results of 4 subjects corresponding to the 4 poses in the sports action videos. In total, 612 image frames were tested. Table 2 lists the numbers of correctly and wrongly predicted frames and the associated accuracy, sensitivity, and specificity for each pose predicted by the CNN.
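The accuracy figures in Table 2 can be recomputed directly from the frame counts as accuracy = correct/total (sensitivity and specificity would additionally require per-class true/false positive and negative counts, which the table does not break out). A quick arithmetic check:

```python
# Frame counts taken from Table 2: pose -> (total frames, correct frames).
table2 = {
    "Pose 1": (169, 166),
    "Pose 2": (130, 125),
    "Pose 3": (155, 152),
    "Pose 4": (158, 155),
}

# Accuracy as a percentage, rounded to one decimal place.
accuracy = {pose: round(100.0 * correct / total, 1)
            for pose, (total, correct) in table2.items()}
# Poses 1, 3, and 4 reproduce the reported 98.2 / 98.1 / 98.1 exactly;
# Pose 2 computes to 96.2, while Table 2 lists 96.1 (125/130 = 96.15...,
# so the reported entry appears to be truncated rather than rounded).
```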
It is evident that the accuracy, sensitivity, and specificity of the pose probabilities estimated by the proposed model exceed 90% for all poses, which provides a solid basis for the final key pose extraction in sports training.

Table 2: Test accuracy of four key frames.

| | Total | Correct | Wrong | Accuracy (%) | Sensitivity (%) | Specificity (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Pose 1 | 169 | 166 | 3 | 98.2 | 92.4 | 94.7 |
| Pose 2 | 130 | 125 | 5 | 96.1 | 90.4 | 95.3 |
| Pose 3 | 155 | 152 | 3 | 98.1 | 96.3 | 98.3 |
| Pose 4 | 158 | 155 | 3 | 98.1 | 97.6 | 97.7 |

### 4.2. Experimental Comparison

To verify the advantage of the proposed key frame extraction method, we compared its results with existing pose estimation methods; the comparison is shown in Table 3. Relative to traditional deep learning methods, our method improves markedly: because the athlete skeleton is extracted from the key objects, the feature expression of human posture is enhanced and accuracy rises. This indicates that skeleton extraction of key objects can improve classification accuracy. Wu et al. [27] achieved an accuracy of 90.6%, and Jian et al. [28] reported 97.4%. Compared with these two methods, the proposed key frame extraction method achieved the highest average accuracy, 97.7%, over all pose categories.

Table 3: Experimental comparison.

| Method | Accuracy (%) |
| --- | --- |
| Wu et al. [27] | 90.6 |
| Jian et al. [28] | 97.4 |
| Proposed skeleton-based method | 97.7 |

### 4.3. Key Frame Extraction

Figure 9 shows the probability distribution produced by the proposed skeleton-based key frame extraction for four groups of poses from training videos. The unique characteristics of each pose are properly captured, and all four poses are estimated well. The method also performs strongly on every type of key posture.
The method combines the FCN and the CNN to extract the ROI and distinctive features, laying the foundation for key frame extraction from sports videos, and further confirms that the proposed skeleton-based method outperforms other key frame extraction methods. The video test results are shown in Figure 9.

Figure 9: Video test results.

## 5. Conclusion

Object detection and behavior understanding in video have become focal problems in the field of machine vision. Sports video contains a great deal of information about human movement that is complex and highly skilled; compared with analyzing everyday human movements, analyzing and recognizing movements in sports video is more challenging. In this study, we have presented a deep key frame extraction method for sports video analysis based on a CNN. The method extracts the key postures of athletes' training, to help coaches deliver more professional training, and puts forward a technique for extracting the skeleton information of human athletes.
The human body in sports training videos is represented through the extracted athlete skeleton, which strengthens the feature expression and the accuracy of key frame extraction. The experimental results and analysis validate that the proposed skeleton-based key frame extraction method outperforms its counterparts in key pose probability estimation and key pose extraction.

---

*Source: 1016574-2021-09-02.xml*
--- ## Abstract With the rapid technological advances in sports, the number of athletics increases gradually. For sports professionals, it is obligatory to oversee and explore the athletics pose in athletes’ training. Key frame extraction of training videos plays a significant role to ease the analysis of sport training videos. This paper develops a sports actions’ classification system for accurately classifying athlete’s actions. The key video frames are extracted from the sports training video to highlight the distinct actions in sports training. Subsequently, a fully convolutional network (FCN) is used to extract the region of interest (ROI) pose detection of frames followed by the application of a convolution neural network (CNN) to estimate the pose probability of each frame. Moreover, a distinct key frame extraction approach is established to extract the key frames considering neighboring frames’ probability differences. The experimental results determine that the proposed method showed better performance and can recognize the athlete’s posture with an average classification rate of 98%. The experimental results and analysis validate that the proposed key frame extraction method outperforms its counterparts in key pose probability estimation and key pose extraction. --- ## Body ## 1. Introduction With the advent of artificial intelligence, performance analysis in sport has undergone significant changes in recent years. In general, manual analysis performed by trained sports analysts has some drawbacks such as being time-consuming, subjective in nature, and prone to human errors. Objective measurement and assessment for sports actions are indispensable to understand the physical and technical demands related to sports performance [1]. Intelligent sports action recognition methods are developed to provide objective analysis and evaluation in sport and improve the accuracy of sports performance analysis and validate the efficiency of training programs. 
Common sports action recognition systems can be developed using advanced machine learning methods to process the data collected via computer vision systems and wearable sensors [2]. Sports activities recorded through a computer vision system can be used for athlete action detection, movement analysis, and pose estimation [3]. The vision-based sports action recognition can provide real-time feedback for athletes and coaches. However, the player's actions in sports videos are more complex and skillful. Compared with daily activities, the analysis of sports videos is more challenging. This is because, the players while playing perform rapid and consistent actions within the camera view, thus degrading the action recognition performance [4].In sports video analysis and processing for action recognition, pertinent and basic information extraction is a mandatory task. If the video is large, then it is hard to process the whole video in a short time while preserving its semantics [5]. The extraction of the key frame is a prime step of video analysis. The key frame provides eloquent information and is a summary of the entire video sequence [6]. A video is normally recorded 30 frames per second and contains additional information for the recognition of a particular computer vision task. Key frame detection is mainly applied in video summarization and visual localization in videos. To use all the frames of a video, more computational resources and memory are required. In many computer vision applications, one or few key frames may be enough to accomplish the desired recognition results [3].The key frames are applied in many applications such as searching, information retrieval, and scene analysis in videos [7]. The video represents a composite structure and is made of several scenes, shots, and several frames. Figure 1 shows the division of video into shots and frames. 
In many video and image processing tasks, such as scene analysis and sequence summarization, it is essential to perform an analysis of the complete video. During the analysis of videos, the major steps are scene segmentation, detection of shot margin, and key frame extraction [8, 9]. The shot is a contiguous, adjacent combination of frames recorded by a camera. The key objective of extracting key frames is to extract unique frames in a video and prepare the video sequences for quick processing [10]. In this paper, we propose an effective method for the extraction of a key frame from athlete sports video, which is accurate, fast, and efficient. The proposed key frame extraction model uses a long sports action video as input and extracts the key frames, which can better represent the sports action for recognition. We introduced an improved convolution neural network method to detect key frames in athletes’ videos. We performed experiments on athletes’ training video dataset to show the triumph of our method for key frames’ detection.Figure 1 Structure of sport video.We structured the rest of the paper as follows. In Section2, related work is presented. Section 3 provides the detail of the proposed method. Sections 4 and 5 are about the experimental results and conclusion, respectively. ## 2. Related Work With the advancement of sports, competition in sports is becoming a base to develop people’s social life and emotions. In order to enhance the competitive skills of athletes, active investigation of sports training is one of the central issues. Many previous analysis methods in this field depend on using a segmentation-based approach [11]. These methods usually extract visual features from videos. One of the first attempts discovered local minimum changes within videos concerning similarity between the consecutive frames. 
Later on, other works augmented this approach by using key point detection for local feature extraction and combining the key points to find the key frames [12]. All of these methods share a common shortcoming: they extract redundant frames rather than fully covering the video content.

Another group of traditional methods is based on feature clustering and detects key video frames by predicting a prominent frame within each cluster. Zhuang et al. [13] employed the joint entropy (JE) and mutual information (MI) between successive video frames to detect key frames. Tang et al. [14] developed a clustering method for recognizing key frames using visual content and motion analysis. A frame extraction method for hand gesture image recognition using image entropy and density clustering was presented in [15]. Cun et al. [16] developed a method for extracting key frames using spectral clustering, capturing feature locality in the video sequence with a graph instead of relying on a pairwise similarity measure between two images.

To overcome the shortcomings of traditional frame detection methods, recent works have focused on deep learning for key frame recognition in videos [17]. Deep learning has achieved great breakthroughs in speech recognition, vision-based systems, human activity recognition, and image classification [18]. Deep learning models simulate human neurons and combine low- and high-level features to describe and understand objects [19]. Deep learning stands in contrast to "shallow learning": a deep model contains several nonlinear operations and more neural network layers [20], whereas shallow learning relies on manual feature extraction and ultimately obtains single-layer features. Deep learning extracts features at different levels from the original signal, from shallow to deep.
In addition, deep learning can learn deeper and more complex features that better represent the image, which benefits classification and other tasks. A deep learning model comprises a large number of neurons, each connected with other neurons, and learning proceeds by updating the connection weights through continuous iteration. Deep neural networks (DNNs) stack multiple single-layer nonlinear networks. The more common architectures can be categorized into feedback deep networks (FBDN), bidirectional deep networks (BDDN) [21], and feedforward deep networks (FFDN).

Different supervised and unsupervised deep learning methods have been suggested for key frame detection in sports videos, which considerably enhance the performance of action recognition systems. Yang et al. [22] employed generative adversarial networks for detecting key frames in videos; CNNs extracted discriminative features, which were then encoded using long short-term memory (LSTM) networks. Another approach, using bidirectional long short-term memory (Bi-LSTM), was introduced in [23] and proved effective for automatically extracting and highlighting the key frames of a video. Huang and Wang [24] proposed a two-stream CNN approach to detect key frames for action recognition. Likewise, Jian et al. [25] devised a unique key frame and shot selection model for video summarization. Wen et al. [26] employed a frame extraction system that estimates the pose probability of each neighboring frame in a sports video. Moreover, Wu et al. [27] presented a video generation approach based on key frames. In this study, we propose an improved key frame extraction technique for sports action recognition using a convolutional neural network.
An FCN is applied to obtain the ROI for more accurate pose detection in each frame, followed by a CNN that estimates the pose probability of individual frames.

## 3. Methods

### 3.1. Overview of CNN

A CNN is an artificial neural network that mimics the human brain and can handle the training and learning of layered network structures. A CNN uses local receptive fields to acquire autonomous learning capability and to process large volumes of image data. The CNN is a specific type of FFDN and is extensively used for image recognition. A CNN represents image data as multidimensional arrays (matrices); it processes each patch of an input image and assigns weights to each neuron according to the role of its receptive field. Weight sharing and pooling reduce the dimensionality of the image features, reduce the complexity of parameter tuning, and improve the stability of the network structure. Finally, prominent features are generated for classification, which is why CNNs are broadly used for object detection and image classification.

A CNN primarily comprises an input layer, convolution layers, pooling layers, fully connected layers, and an output layer. The input image is passed to the input layer for processing. The convolution layer performs a convolution operation over the input matrix to extract features from the input image. The pooling layer takes the maximum value of the pixels in each target area of its input, condensing the resolution of the feature map and helping to avoid overfitting. A fully connected layer is composed of neurons, each linked with all the neurons in the preceding layer; the resulting feature vector is mapped to the output layer to facilitate classification.
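The convolution and max pooling operations described above can be sketched in NumPy. This is a toy single-channel version for illustration, not the paper's implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation) of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is the weighted sum over the kernel's receptive field.
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(feature, size=2):
    """Non-overlapping max pooling, shrinking each spatial dimension by `size`."""
    h, w = feature.shape[0] // size, feature.shape[1] // size
    return feature[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Real CNN layers additionally stack many kernels (channels) and interleave nonlinearities, but the sliding-window arithmetic is exactly this.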
The output layer classifies the feature vectors mapped from the fully connected layer and produces a one-dimensional output vector whose dimension equals the number of classes.

### 3.2. Deep Key Frame Extraction

#### 3.2.1. Proposed Algorithm

In this section, we detail the proposed deep key frame extraction method for sports training, which is based on athlete skeleton extraction. As illustrated in Figure 2, the proposed frame extraction technique consists of four steps: preprocessing of the athlete training video, FCN-based ROI extraction, skeleton and feature extraction, and CNN-based key frame extraction. The proposed method examines the poses of athletes in training videos: it first divides input videos into frame sequences and then locates the ROI. An FCN is applied to extract the foreground features of the athlete, and all video frames are then cropped according to the ROI extracted in the first frame.

Figure 2 Algorithm framework.

#### 3.2.2. Extracting Athletes' Skeletons

We used the ROI images extracted by the FCN network and the previously labeled ground truth to build the training data of the deep skeleton network. The original training image and the labeled ground truth are shown in Figure 3.

Figure 3 Original and ground truth.

Matlab (R2015a) was used to extract the athletes' skeleton information from the ground truth. The Matlab `bwmorph` function was applied to perform morphological operations on all images. Its general syntax is `BW2 = bwmorph(BW, operation, n)`, which applies the morphological operation n times; n can be `inf`, in which case the operation is repeated until the image no longer changes. Table 1 lists some of the morphological operations that can be performed on images.

Table 1 bwmorph morphological operations on images.
| Operation | Description |
| --- | --- |
| Bothat | Morphological "bottom-hat" transformation; the returned image is the original image minus its morphological closing (closing: dilate first, then erode) |
| Bridge | Bridges disconnected pixels: a pixel is set to 1 if it has two nonzero unconnected (8-neighborhood) neighbors |
| Clean | Removes isolated pixels (individual 1s surrounded by 0s) |
| Close | Performs morphological closing (dilation followed by erosion) |
| Diag | Uses diagonal fill to eliminate 8-connectivity of the background |
| Dilate | Performs dilation using the structuring element ones(3) |
| Erode | Performs erosion using the structuring element ones(3) |
| Fill | Fills isolated interior pixels (0s surrounded by 1s) |

Different morphological operations can be selected to generate the athlete's skeleton information. The skeleton information for the four key postures is shown in Figure 4.

Figure 4 (a) The original picture of the athlete and the ground truth of the athlete's skeleton. (b) The original picture of the knee lead and the ground truth of the athlete's skeleton. (c) The original picture and the ground truth of the athlete's skeleton. (d) The original picture of the highest point and the ground truth of the athlete's skeleton.

As the skeleton maps show, the four key postures have distinct skeleton information. The 373 labeled images were used to extract athlete skeleton information as labels for training the deep skeleton network.

#### 3.2.3. Generation of Athlete Skeleton Information

We prepared the training and test files as shown in Figure 5, where the left side is the original image and the right side is the ground truth.

Figure 5 Training parameter.

Because the network is adapted from the VGG (Visual Geometry Group) network, some parameters of the VGG network are selected.
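The bwmorph-style operations listed in Table 1 can be approximated in plain NumPy. A sketch of three of them (dilate, erode, clean) using zero-padded 3×3 neighborhoods; the helper name is ours:

```python
import numpy as np

def _neighborhood_stack(img):
    """Stack the 3x3 neighborhood of every pixel (zero-padded borders)."""
    p = np.pad(img, 1)
    return np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)])

def dilate(img):
    """bwmorph 'dilate' analog: max over a ones(3) structuring element."""
    return _neighborhood_stack(img).max(axis=0)

def erode(img):
    """bwmorph 'erode' analog: min over a ones(3) structuring element."""
    return _neighborhood_stack(img).min(axis=0)

def clean(img):
    """bwmorph 'clean' analog: remove isolated 1-pixels (all 8 neighbors are 0)."""
    p = np.pad(img, 1)
    neighbors = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                          for i in range(3) for j in range(3) if (i, j) != (1, 1)])
    return img & (neighbors.max(axis=0) > 0)
```

MATLAB's `bwmorph` offers many more operations (including `'skel'`, the skeletonization used here); the sketch only illustrates the neighborhood mechanics shared by all of them.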
VGG is a conventional CNN architecture consisting of blocks, each made up of 2D convolution and max pooling layers. As in FCN training, the deep skeleton network differs from a traditional single-label classification network in that it uses the athlete skeleton image as the label. After 20,000 iterations, the trained model is obtained. A randomly selected test set was used to evaluate recognition performance. The predicted value of each pixel, after normalization, is rendered as a predicted gray image; the original and predicted images are shown in Figure 6.

Figure 6 Original and predicted results.

The white portion in the figure indicates athlete skeleton information: the higher a pixel's value, the more likely it belongs to the athlete's skeleton. Next, the non-maximum suppression (NMS) algorithm is used to localize the skeleton. NMS is used in several image processing tasks; it is a family of algorithms that selects one entity out of many candidates. We take the three-neighborhood case as an example. NMS in three neighborhoods judges whether an element I[x] (2 ≤ x ≤ w−1) of a one-dimensional array I[1..w] is greater than its left neighbor I[x−1] and its right neighbor I[x+1] (Algorithm 1).

Algorithm 1: NMS for three neighborhoods.
(1) x = 2
(2) While x ≤ w−1 do
(3)   If I[x] > I[x+1] then
(4)     If I[x] ≥ I[x−1] then
(5)       MaximumAt(x)
(6)   Else
(7)     x = x + 1
(8)     While x ≤ w−1 AND I[x] ≤ I[x+1] do
(9)       x = x + 1
(10)    If x ≤ w−1 then
(11)      MaximumAt(x)
(12)  x = x + 2

Lines 3–5 check whether the current element is greater than its right neighbor and no smaller than its left neighbor; if so, the element is a maximum point. For a maximum point I[x] it is known that I[x] > I[x+1], so there is no need to examine the element at position x + 1.
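A direct Python translation of this three-neighborhood scan (0-based indexing, so the interior positions are 1 … w−2):

```python
def nms_3neighborhood(I):
    """Indices of local maxima of sequence I via the three-neighborhood scan.

    A maximum at x satisfies I[x] >= I[x-1] and I[x] > I[x+1]. Whenever such a
    candidate is examined, position x+1 cannot also be a maximum, so the scan
    jumps straight to x+2.
    """
    maxima = []
    w = len(I)
    x = 1
    while x <= w - 2:
        if I[x] > I[x + 1]:
            if I[x] >= I[x - 1]:
                maxima.append(x)
            x += 2
        else:
            # Climb the non-decreasing run to its right end; the climb itself
            # guarantees I[x] >= I[x-1] when it stops inside the array.
            x += 1
            while x <= w - 2 and I[x] <= I[x + 1]:
                x += 1
            if x <= w - 2:
                maxima.append(x)
            x += 2
    return maxima
```

Each element is compared against its right neighbor at most once, so the scan is linear with fewer than one comparison per element on average.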
Instead, the scan jumps directly to position x + 2, corresponding to line 12 of the algorithm. If the element I[x] does not satisfy the condition on line 3, its right neighbor I[x+1] is taken as the next maximum candidate, corresponding to line 7. The scan then climbs monotonically to the right until an element satisfying I[x] > I[x+1] is found; if x ≤ w−1, this point is a maximum, corresponding to lines 10-11. We applied the NMS routine from the MATLAB toolkit to the output of the deep skeleton network and thereby determined the pixels most likely to belong to the athlete's skeleton. The predicted results and NMS results are shown in Figure 7, and a test image including the athlete skeleton is shown in Figure 8.

Figure 7 Prediction results and NMS results.

Figure 8 Test effect picture including athlete skeleton.

## 4. Results

In this section, we evaluate the performance of the proposed key frame extraction method experimentally. The experiments use sports videos collected from the Chinese Administration of Sports; all videos contain the four key athlete poses.

### 4.1. CNN-Based Key Pose Estimation

The proposed key frame extraction method feeds the ROI of the extracted video frames to a CNN to predict the probabilities of all poses. Each sports video contains four groups of key poses, and the CNN computes, for every frame, the probability of its pose being estimated accurately or inaccurately. Table 2 provides the classification results of four subjects corresponding to the four poses of the sport action videos. In total, 612 image frames were tested. Table 2 reports the number of correctly and wrongly predicted frames and the associated accuracy, sensitivity, and specificity for all poses predicted by the CNN.
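The per-pose accuracy, sensitivity, and specificity in Table 2 follow from one-vs-rest confusion counts. As a sketch (the counts below are hypothetical, not the paper's actual confusion matrix):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from one-vs-rest counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)        # fraction of actual positives detected
    specificity = tn / (tn + fp)        # fraction of actual negatives rejected
    return accuracy, sensitivity, specificity

# Hypothetical counts for one pose class treated as "positive".
acc, sens, spec = binary_metrics(tp=90, fp=5, tn=100, fn=5)
```

Reporting all three is useful here because the four pose classes are imbalanced (130–169 frames each), so accuracy alone could hide a weak class.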
It is evident that the accuracy, sensitivity, and specificity of the pose probabilities estimated by the proposed model exceed 90% for all poses, which provides a basis for the final key pose extraction in sports training.

Table 2 Test accuracy of four key frames.

| | Total | Correct | Wrong | Accuracy (%) | Sensitivity (%) | Specificity (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Pose 1 | 169 | 166 | 3 | 98.2 | 92.4 | 94.7 |
| Pose 2 | 130 | 125 | 5 | 96.1 | 90.4 | 95.3 |
| Pose 3 | 155 | 152 | 3 | 98.1 | 96.3 | 98.3 |
| Pose 4 | 158 | 155 | 3 | 98.1 | 97.6 | 97.7 |

### 4.2. Experimental Comparison

To verify the superiority of the proposed key frame extraction method, we compared the obtained results with existing pose estimation methods; the comparison is shown in Table 3. Compared with traditional deep learning methods, our method is a substantial improvement: because the athlete skeleton information is extracted from the key objects, the feature expression of the human posture is enhanced and the accuracy improves. Extracting athlete skeletons from key objects can thus improve classification accuracy. Wu et al. [27] achieved an accuracy of 90.6%, whereas Jian et al. [28] reported 97.4%. Compared with these two methods, the proposed key frame extraction method achieved the highest average accuracy, 97.7%, across all pose categories.

Table 3 Experimental comparison.

| Method | Accuracy (%) |
| --- | --- |
| Wu et al. [27] | 90.6 |
| Jian et al. [28] | 97.4 |
| Proposed skeleton-based method | 97.7 |

### 4.3. Key Frame Extraction

Figure 9 shows the probability distribution of the proposed skeleton-based key frame extraction for the four groups of poses from the training videos. The unique characteristics of each pose are properly captured, and all four poses are estimated well. In addition, the proposed method performs strongly on every type of key posture.
It combines an FCN with a CNN to extract the ROI and distinctive features, laying the foundation for key frame extraction from sports videos, and it further confirms that the proposed skeleton-based method outperforms other key frame extraction methods. The test results on the video are shown in Figure 9.

Figure 9 Test the results of the video.

## 5. Conclusion

Object detection and behavior understanding in video have become a research focus in the field of machine vision. Sports video contains a great deal of information on human movement that is complex and highly skilled; compared with the analysis of daily human movements, analyzing and recognizing movements in sports video is more challenging. In this study, we have presented a CNN-based deep key frame extraction method for sports video analysis. The method extracts the key postures of athletes' training, assisting coaches in providing more professional training, and puts forward a method for extracting athletes' skeleton information.
The human body in the sports training action videos is represented through the extracted athlete skeletons, which enhances the feature expression and the accuracy of key frame extraction. The experimental results and analysis validate that the proposed skeleton-based key frame extraction method outperforms its counterparts in key pose probability estimation and key pose extraction. --- *Source: 1016574-2021-09-02.xml*
2021
# Studying the Regional Cyberspace by Exploiting Internet Sequential Information Flows **Authors:** Biao Jin; Jin-ming Sha; Jian-wan Ji; Yi-su Liu; Wu-heng Yang **Journal:** Mathematical Problems in Engineering (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1016587 --- ## Abstract The study of cyberspace faces the challenges of data shortage and model verification. This paper proposes a method to explore the regional cyberspace by employing Internet sequential information flows crawled from social network platforms. Compared with previous studies, which use only one type of data source for analysis, the main contribution of this manuscript is a scheme in which one kind of Internet information flow is used to extract cyberspace features while relevant data collected from another network platform is used for verification. Moreover, starting from a measurement of the informatization level of a region, a modified gravity model is designed by adding the informatization level to the traditional model. An information association matrix based on the improved gravity model is then constructed for analyzing the characteristics of cyberspace. To demonstrate the efficiency of the approach, Fuzhou city is taken as the regional sample. The reasonable results indicate that the proposed approach is practical for studying regional cyberspace. --- ## Body ## 1. Introduction By breaking through the limits of space and time, the relationships and interactive behaviors among humans have extended from the realistic geo-space into cyberspace. Cyberspace is the communication and information space created by computers, an abstract concept in the fields of philosophy and computer science. It takes information flows as its study data, whereas realistic geo-space is based on material flows. The research of cyberspace has received wide attention.
Internationally, research on cyberspace mainly focuses on three aspects: (1) cyberspace security; (2) access control mechanisms and communication protocols in cyberspace; (3) the study of spatial network patterns based on sequential information in cyberspace. For example, Clark [1–7] designed trustworthy mechanisms and access control models to ensure the security of cyberspace. Chawki [8] tried to find the balance between privacy and security. Iyer [9] focused on the "smart grid" and used cryptography and key management techniques to counter attacks on cyber security. Wechsler [10] advanced new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services. Slonim [11] presented a novel sequential clustering algorithm motivated by the Information Bottleneck method. Prinzie [12] tried to overcome the inability to capture sequential patterns by modeling sequential independent variables with sequence-analysis methods. Mcculloch [13] and Mishra [14, 15] used sequential information flows to diagnose Swiss inflation in real time. Meanwhile, Tijsseling [16] presented a variant of the Categorization-and-Learning-Module network that is capable of categorizing sequential information with feedback. Copeland [17] studied the effect of sequential information arrival on asset prices. Mishra [18] utilized the Sequence and Set Similarity Measure with a rough-set-based similarity upper approximation clustering algorithm to group web users by their navigational patterns. Lottes [19] proposed a novel crop-weed classification system that relies on a fully convolutional network with an encoder-decoder structure and incorporates spatial information by considering image sequences. In mainland China, the study of cyberspace mainly focuses on two aspects: (1) the relationship between cyberspace and realistic geo-space; (2) the characteristics of cyberspace in specific areas.
For instance, based on the Internet infrastructure, Wang [20–22] discussed the relationship between the Internet geographical structure and the Internet urban system of China. Bakis [23], Dong [24], and Sun [25] conducted a comprehensive analysis of the hierarchical structure and information flow patterns, and revealed the spatial distribution of the Internet network structure of China. By linking micro-blog users with geography, Zhen [26], Wang [27], and Chen [28] studied the centrality of nodes in the networks and the consistency of the whole network. Although Zhang [29–32] explored some methodologies for mining the relationship between geo-space and cyberspace by using information flows, it is difficult to obtain the sequential information of cyberspace in mainland China except by cooperating with the data owners (usually government agencies). Therefore, researchers often obtain data from various statistical yearbooks and annual public reports. In this paper, we focus on the use of sequential information crawled from social networks to analyze the pattern and characteristics of cyberspace. Currently, there are two main shortcomings in prior research on cyberspace: (1) many scholars conducted their studies based on only one kind of sequential information flow, which makes their conclusions less convincing; (2) when exploring the linkages among studied regions, they often directly use the classical gravity model, which ignores the attributes of the studied regions themselves. To fix the first weakness, this paper adopts a scheme in which one kind of Internet information flow (data from Sina micro-blog) is used to extract cyberspace features while relevant data collected from another network platform (the Baidu Index) is used for verification.
Aiming at the second shortcoming, we first propose a method to measure the informatization level of a region; then the classical gravity model is improved by introducing attributes of the studied regions themselves; finally, an information association matrix is built on the improved gravity model. By inputting the information association matrix into network analysis tools (e.g., the UCINET software) and selecting an appropriate evaluation indicator (e.g., the degree centrality of nodes), the important nodes in the network space can be detected. In order to explore the efficiency of the approach, Fuzhou city, the capital of Fujian province, is considered as the region of interest for verification. Specifically, we first use a crawler to grab information about Sina micro-blog users, such as their registration addresses and other fundamental information. Then, for those users whose registration addresses are Fuzhou, we also grab the information in their concern lists and concerned lists to analyze their social relationships. Based on the obtained data, the intensities of active connection, passive connection, and total connection are used to study the spatial pattern of cyberspace in Fuzhou city. To make the conclusions more convincing and more credible, data from the Baidu Index is used for verification. From the data collected from the Baidu Index, we can get the number of times that one research unit is retrieved by another. Finally, we explore some possible factors which may have impacts on the pattern and characteristics of cyberspace. ## 2.
Measurement Method of the Informatization Development Level of Provinces in Mainland China

The regional informatization level is one of the most important factors that may affect the spatial pattern of cyberspace, so in this section we propose a method for measuring the informatization level of a province. We selected indicators that reflect the regional informatization level well, according to the following steps.

Step 1. Obtain 186 indicators from the China National Information Center that can be used to describe the development level of the information society.

Step 2. Use word cloud analysis tools to count the frequency of the keywords contained in the 186 indicators. Then, the 42 indicators that contain the higher-frequency keywords remain.

Step 3. Calculate the correlation coefficient and the variation coefficient of each of the 42 indicators. Then, 17 indicators with poor correlation or high redundancy are eliminated.

Step 4. Carry out the KMO test and factor analysis on the remaining 25 indicators. Finally, 9 indicators (shown in Table 1) remain and are used to evaluate the informatization development level of a region.

Table 1: Detailed information of the nine indicators.

(1) Per capita GDP; (2) number of express deliveries; (3) income of the express business; (4) number of colleges and universities; (5) length of optical cable lines; (6) number of IPv4 addresses; (7) comprehensive coverage rate of TV programs; (8) number of computers per hundred people; (9) number of students in colleges and universities.

After that, the Standard Deviation method, CRITIC (Criteria Importance Through Intercriteria Correlation), and the Entropy Weight method are used for calculating the weight of each indicator for each province.
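The screening in Step 3 can be sketched as follows; the sample indicator values and the two thresholds are illustrative assumptions, not the paper's actual data:

```python
import statistics

def variation_coefficient(values):
    """Coefficient of variation: population std dev over mean."""
    return statistics.pstdev(values) / statistics.mean(values)

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# hypothetical indicator values over four research units
indicators = {
    "per_capita_gdp": [4.1, 5.2, 3.8, 6.0],
    "gdp_duplicate":  [4.2, 5.1, 3.9, 6.1],   # nearly redundant
    "tv_coverage":    [2.0, 2.0, 2.0, 2.1],   # almost no variation
}

# drop indicators whose values barely vary across units
kept = {k: v for k, v in indicators.items()
        if variation_coefficient(v) > 0.05}

# drop one of each highly correlated (redundant) pair
names = list(kept)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if a in kept and b in kept and pearson(kept[a], kept[b]) > 0.95:
            del kept[b]
print(sorted(kept))
```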
The calculation formulas of each method are as follows.

(1) Standard Deviation method:

$$\delta_j=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(v_i-\bar{v}\right)^2},\qquad W_{j\_SD}=\frac{\delta_j}{\sum_{j=1}^{n}\delta_j}\tag{1}$$

where $W_{j\_SD}$ is the weight of indicator $j$ in the indicator system, $n$ stands for the number of research units, $v_i$ represents the value of indicator $j$ for unit $i$, and $\bar{v}$ stands for the arithmetic mean of indicator $j$.

(2) CRITIC (Criteria Importance Through Intercriteria Correlation) method:

$$r_{ij}=\frac{\sum_{k=1}^{n}\left(x_{ki}-\bar{x}_i\right)\left(x_{kj}-\bar{x}_j\right)}{\sqrt{\sum_{k=1}^{n}\left(x_{ki}-\bar{x}_i\right)^{2}\,\sum_{k=1}^{n}\left(x_{kj}-\bar{x}_j\right)^{2}}},\qquad C_j=\delta_j\sum_{i=1}^{n}\left(1-r_{ij}\right),\qquad W_{j\_CRITIC}=\frac{C_j}{\sum_{j=1}^{n}C_j}\tag{2}$$

where $\delta_j$ and $n$ have the same meanings as in formula (1) and $r_{ij}$ is the correlation coefficient between indicator $j$ and indicator $i$.

(3) Entropy Weight method:

$$P_{ij}=\frac{d_{ij}}{\sum_{i=1}^{m}d_{ij}},\qquad E_j=-\frac{1}{\ln m}\sum_{i=1}^{m}P_{ij}\ln P_{ij},\qquad W_{j\_EW}=\frac{1-E_j}{n-\sum_{j=1}^{n}E_j}\tag{3}$$

where $d_{ij}$ stands for the normalized value of indicator $j$ for unit $i$, $m$ is the number of research units, and $n$ has the same meaning as in formula (1).

Then $W_j$ is used as the final weight of indicator $j$; its value is calculated by formula (4):

$$W_j=\frac{W_{j\_SD}+W_{j\_CRITIC}+W_{j\_EW}}{3}\tag{4}$$

Finally, the composite informatization score of each province on these indicators is calculated according to formula (5):

$$\mathrm{Score}_i=\sum_{j=1}^{9}W_j\,VP_{ij}\tag{5}$$

where $VP_{ij}$ represents the value of indicator $j$ of province $i$, and $i=1,2,3,\dots,31$.

## 3. Construction of Information Association Matrix Based on Improved Gravity Model

The gravity model is widely used for measuring spatial interaction capability; its formula is:

$$G_{ij}=k\,\frac{M_i M_j}{d_{ij}^{\,c}}\tag{6}$$

where $d_{ij}$ stands for the distance between unit $i$ and unit $j$, $k$ and $c$ represent the gravity coefficient and the distance attenuation coefficient, respectively, and the meaning of $M_i$ ($M_j$) varies across applications.
For example, if we study the intensity of communication between two regions, $M_i$ ($M_j$) can be the number of calls made by the mobile users in the two regions. When using the above model, researchers usually focus only on the connections between nodes and ignore the attributes of the nodes themselves. Assume the following cases (Figure 1).

Figure 1: The interpretation of relevant variables in the scenario assumption.

(1) The interactions between A and B and the interactions between C and D occur in the same time periods.

(2) The number of times that A actively interacts with B equals the number of times that B actively interacts with A: $t_{AB}=t_{BA}=10$.

(3) The number of times that C actively interacts with D differs from the number of times that D actively interacts with C: $t_{CD}\neq t_{DC}$, with $t_{CD}=100$ and $t_{DC}=1$.

(4) The distance between A and B equals that between C and D: $d_{AB}=d_{CD}$.

If we used the classical gravity model, we would conclude that the interaction intensity between A and B is the same as that between C and D. This is clearly incorrect, because it ignores essential attributes of the research objects. In view of the above analysis, and combined with the actual situation, we modify the model as follows:

$$Y_{ij}=g_Y\,\frac{\mathrm{Score}_i\cdot\mathrm{Score}_j}{d_Y^{\,r}}\,R_{ij}\tag{7}$$

where $Y_{ij}$ stands for the intensity of information flow between province $i$ and province $j$, $g_Y$ is the gravity coefficient of the information network, $r$ is the distance attenuation factor of information space, $d_Y$ is the shortest road distance between province $i$ and province $j$, and $R_{ij}$ represents the intensity of network concern between province $i$ and province $j$. Following the parameter estimation method in Wang [31, 32], we set the values of $g_Y$ and $r$ to 0.85 and 1, while the value of $d_Y$ is obtained from Baidu map. We take the average number of searches between the corresponding provinces from January 1, 2016, to January 1, 2017, as the value of $R_{ij}$.
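As a minimal end-to-end sketch, formulas (1)-(5) can feed the province scores into the modified gravity model of formula (7). All numeric inputs below are illustrative assumptions, not the paper's data; only the parameter values $g_Y=0.85$ and $r=1$ come from the text:

```python
import math

# --- formulas (1)-(5): combined indicator weights and province scores ---
# rows = provinces (research units), columns = indicators,
# values already normalized to (0, 1]
data = [[0.9, 0.2],
        [0.5, 0.8],
        [0.1, 0.6]]
n_rows, n_cols = len(data), len(data[0])
cols = [[row[j] for row in data] for j in range(n_cols)]

def std(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# (1) standard-deviation weights
sd = [std(c) for c in cols]
w_sd = [s / sum(sd) for s in sd]

# (2) CRITIC weights: variability times conflict with the other indicators
crit = [sd[j] * sum(1 - corr(cols[j], cols[i]) for i in range(n_cols))
        for j in range(n_cols)]
w_critic = [c / sum(crit) for c in crit]

# (3) entropy weights
ent = []
for c in cols:
    p = [x / sum(c) for x in c]
    ent.append(-sum(pi * math.log(pi) for pi in p) / math.log(n_rows))
w_ew = [(1 - e) / (n_cols - sum(ent)) for e in ent]

# (4) final weight: average of the three weighting schemes
w = [(a + b + c) / 3 for a, b, c in zip(w_sd, w_critic, w_ew)]

# (5) composite informatization score per province
scores = [sum(wj * row[j] for j, wj in enumerate(w)) for row in data]

# --- formula (7): modified gravity model with g_Y = 0.85, r = 1 ---
def information_flow(score_i, score_j, d_km, r_ij, g=0.85, r=1.0):
    """Y_ij = g_Y * (Score_i * Score_j / d_Y**r) * R_ij."""
    return g * (score_i * score_j / d_km ** r) * r_ij

# directed concern intensities make Y asymmetric even at equal distance
y_01 = information_flow(scores[0], scores[1], d_km=800, r_ij=100)
y_10 = information_flow(scores[1], scores[0], d_km=800, r_ij=1)
print(w, scores, y_01, y_10)
```

Note how the asymmetric $R_{ij}$ reproduces the C/D scenario above: the same two provinces at the same distance yield very different flows in the two directions.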
Then, for the sake of comparison, we standardize the value of $Y_{ij}$ using formula (8):

$$Y'_{ij}=\frac{Y_{ij}-Y_{\min}}{Y_{\max}-Y_{\min}}\tag{8}$$

Finally, we construct the information association matrix:

$$M_Y=\begin{pmatrix}
Y'_{11} & Y'_{12} & \cdots & Y'_{1j} & \cdots & Y'_{1,n-1} & Y'_{1n}\\
Y'_{21} & Y'_{22} & \cdots & Y'_{2j} & \cdots & Y'_{2,n-1} & Y'_{2n}\\
\vdots & \vdots & \ddots & \vdots & & \vdots & \vdots\\
Y'_{i1} & Y'_{i2} & \cdots & Y'_{ij} & \cdots & Y'_{i,n-1} & Y'_{in}\\
\vdots & \vdots & & \vdots & \ddots & \vdots & \vdots\\
Y'_{n-1,1} & Y'_{n-1,2} & \cdots & Y'_{n-1,j} & \cdots & Y'_{n-1,n-1} & Y'_{n-1,n}\\
Y'_{n1} & Y'_{n2} & \cdots & Y'_{nj} & \cdots & Y'_{n,n-1} & Y'_{nn}
\end{pmatrix}\tag{9}$$

In this matrix, $Y'_{nj}$, for example, stands for the standardized intensity of information flow between province $n$ and province $j$. By inputting this matrix into network analysis software, we can find the important nodes in the network.

## 4. Research Object and Experimental Data

In this section, Fuzhou city, the capital of Fujian province in mainland China, is used as an example for model verification. The sequential information flows crawled from the Sina micro-blog platform and the Baidu website are applied to study the pattern and characteristics of the cyberspace of Fuzhou.

### 4.1. Brief Introduction of the Research Object

On May 6, 2009, the State Council of the People's Republic of China issued the "Opinions on supporting Fujian province to speed up the construction of the Economic Zone on the West Coast of China (EZWCC)." As an important part of China's coastal economic zone, the EZWCC is separated from Taiwan by the Taiwan Strait and lies between the Yangtze River Delta to the north and the Pearl River Delta to the south. It occupies an important position in the layout of regional economic development. The EZWCC is composed of parts of Fujian, Guangdong, Zhejiang, and Jiangxi provinces. Fujian province is the most closely related to Taiwan because of their geographical proximity and historical and cultural similarities; with this unique advantage, Fujian province occupies the dominant position in the EZWCC. As the capital of Fujian province, Fuzhou city is a hometown of many overseas Chinese; people of Fuzhou descent are distributed all around the world. Fuzhou and Taiwan face each other across the Taiwan Strait.
People in these two regions have close links with each other. As one of the important city nodes of the EZWCC, the development of Fuzhou has received attention from both the local government and the central government of China. In this section, Fuzhou is chosen as the studied region, and the pattern and characteristics of its cyberspace are analyzed.

### 4.2. Principal Data Acquisition and Preprocessing

In order to analyze the cyberspace pattern of Fuzhou more accurately, two kinds of actual network information flows were chosen as the principal experimental data.

#### 4.2.1. Data about Sina Micro-Blog Users

Micro-blog is a completely open network interaction platform for public participation. Research on cyberspace based on micro-blog reveals the communication characteristics among people more clearly and reflects the influence of information on human relation networks more directly. The geographical attributes of micro-blog users provide the basis for associating cyberspace with realistic geo-space. According to the report of QuestMobile [33], the number of monthly active users (MAU) of Sina micro-blog increased by more than 45% and reached 341 million by the end of 2016. Table 2 shows that the MAU of Sina micro-blog ranks first. Considering the availability of Sina micro-blog users' location information, Sina micro-blog is finally chosen as the principal data source.

Table 2: The value list of social apps in 2016.

| Order | App's name | MAU (million) | Growth rate | Proportion of high-value users |
|-------|------------|---------------|-------------|--------------------------------|
| 1 | Sina micro-blog | 341.18 | 45.70% | 76.30% |
| 2 | QQ Zone | 92.60 | -35.80% | 70.20% |
| 3 | Momo | 70.03 | 19.50% | 68.70% |
| 4 | Tantan | 15.89 | 128.90% | 76.90% |
| 5 | OPPO paradise | 11.87 | -34.80% | 53.50% |
| 6 | Her community | 7.41 | -29.00% | 69.10% |
| 7 | SNOW | 7.06 | 16692.60% | 75.60% |
| 8 | Douban | 5.18 | -42.90% | 89.70% |
| 9 | Idol | 4.65 | -4.50% | 74.00% |
| 10 | Apploving | 3.95 | 25.80% | 79.30% |

In this paper, the information of one thousand users (OTU) is crawled.
These users meet the following three conditions: (1) their registration addresses are Fuzhou; (2) they are ordinary users rather than celebrities or "big V" users (verified users who have more than 500,000 fans and use the micro-blog mainly for commercial or personal propaganda rather than for socializing); (3) they are active users who not only are concerned by one hundred to five hundred fans but also actively concern one hundred to five hundred other users. According to the report of CNNIC [34], the proportions of Internet users of different ages at the end of 2016 are shown in Table 3. Accordingly, in order to make the sample more reasonable, the numbers of sampled users in these age ranges are 234, 303, 232, and 231, respectively.

Table 3: The proportions of Internet users of all ages.

| Age range | Proportion |
|-----------|------------|
| age ≤ 19 | 23.4% |
| 20 ≤ age ≤ 29 | 30.3% |
| 30 ≤ age ≤ 39 | 23.2% |
| age ≥ 40 | 23.1% |

For the one thousand users, we not only grab their basic information (such as IDs, nicknames, sexes, registered addresses, character signatures, birthdays, marital status, and home links), but also obtain the registered addresses of the top 100 users in the OTU's concern lists and concerned lists. Finally, 92015 users concerned about the OTU and 55449 users that concern the OTU are found; among these users, there are 10451 pairs of friends. There are three kinds of relationships between these users: active concern, passive concern, and mutual concern. Unilateral concern (or being concerned) is a weak relationship, while mutual concern is a strong relationship. If user B concerns user A, then the direction of information flow can be described as from A to B. Three indicators are used to evaluate the intensity of the network information flows between Fuzhou city and other regions.
These three indicators are the intensity of active connection (the number of concerns, $X_a$), the intensity of passive connection (the number of times being concerned, $X_b$), and the intensity of total connection (the sum of concern and being concerned, $X_a+X_b$). The meanings of the three indicators are given in Table 4. All of the top 100 users in the OTU's concern lists and concerned lists were classified and counted according to their registered addresses and relationships; the results are shown in Table 5.

Table 4: The three indicators and their meanings.

| Indicator | Meaning |
|-----------|---------|
| Intensity of active connection ($X_a$) | Measures the active connection between Fuzhou and the other research units; the greater the value, the greater the impact of those units on Fuzhou |
| Intensity of passive connection ($X_b$) | Measures the interest of other research units in Fuzhou; a larger value indicates that the units are more willing to accept information from Fuzhou |
| Intensity of total connection ($X_a+X_b$) | Measures the total linkage between Fuzhou and the other research units; the larger the value, the greater the intensity of interaction |

Table 5: Classification statistics of the users at the provincial level.
| Province | Concern ($X_a$) | Be concerned ($X_b$) | Total ($X_a+X_b$) |
|----------|-----------------|----------------------|--------------------|
| Fujian | 21093 | 15353 | 36446 |
| Beijing | 23775 | 2592 | 26367 |
| Foreign | 7814 | 2866 | 10680 |
| Guangdong | 7450 | 3197 | 10647 |
| Shanghai | 6234 | 1702 | 7936 |
| Zhejiang | 3481 | 1429 | 4910 |
| Jiangsu | 2083 | 1361 | 3444 |
| Hongkong | 1740 | 485 | 2225 |
| Taiwan | 1763 | 353 | 2116 |
| Sichuan | 1261 | 845 | 2106 |
| Shandong | 998 | 1064 | 2062 |
| Hubei | 911 | 836 | 1747 |
| Hunan | 786 | 846 | 1632 |
| Henan | 667 | 891 | 1558 |
| Liaoning | 635 | 693 | 1328 |
| Chongqing | 699 | 617 | 1316 |
| Anhui | 575 | 679 | 1254 |
| Hebei | 478 | 721 | 1199 |
| Jiangxi | 406 | 655 | 1061 |
| Shaanxi | 458 | 597 | 1055 |
| Yunnan | 405 | 568 | 973 |
| Heilongjiang | 324 | 602 | 926 |
| Tianjin | 427 | 491 | 918 |
| Guangxi | 321 | 573 | 894 |
| Shanxi | 251 | 516 | 767 |
| Jilin | 255 | 493 | 748 |
| Guizhou | 187 | 423 | 610 |
| Gansu | 179 | 424 | 603 |
| Neimenggu | 182 | 378 | 560 |
| Xinjiang | 146 | 399 | 545 |
| Hainan | 154 | 350 | 504 |
| Macao | 138 | 225 | 363 |
| Ningxia | 61 | 282 | 343 |
| Qinghai | 61 | 277 | 338 |
| Xizang | 83 | 244 | 327 |

In this paper, the network information flows between Fuzhou city and the other provinces in mainland China are studied. To make $X_a$, $X_b$, and $X_a+X_b$ comparable, maximum-value standardization is carried out using formulas (10):

$$X'_a=\frac{X_a}{\max\left(X_a\right)}\times 100,\qquad X'_b=\frac{X_b}{\max\left(X_b\right)}\times 100,\qquad \left(X_a+X_b\right)'=\frac{X_a+X_b}{\max\left(X_a+X_b\right)}\times 100\tag{10}$$

Because the users in the concern lists and concerned lists are dominated by Fujian province, Fujian province is analyzed separately; when the values of $\max(X_a)$, $\max(X_b)$, and $\max(X_a+X_b)$ are selected, Fujian province is excluded.

#### 4.2.2. Data about the Baidu Index

The Baidu Index (https://index.baidu.com/) is one of the most important data sharing and statistical analysis platforms. It records a large amount of Internet users' behavior data. With its help, we can obtain the number of times that one specified keyword was retrieved in different areas and at different times, which can reflect the intensity of the connection between two regions to some extent [25, 35–39]. In this paper, the indexes between Fuzhou city and all the provinces in mainland China are obtained.
The results are shown in Table 6.

Table 6: Baidu indexes between Fuzhou and all the provinces in mainland China.

| Unit A:Unit B | Baidu index | Unit A:Unit B | Baidu index | Total |
|---------------|-------------|---------------|-------------|-------|
| Fujian:Fuzhou | 1793 | Fuzhou:Fujian | 524 | 2317 |
| Guangdong:Fuzhou | 653 | Fuzhou:Guangdong | 167 | 820 |
| Shanghai:Fuzhou | 417 | Fuzhou:Shanghai | 362 | 779 |
| Beijing:Fuzhou | 421 | Fuzhou:Beijing | 304 | 725 |
| Zhejiang:Fuzhou | 534 | Fuzhou:Zhejiang | 167 | 701 |
| Jiangsu:Fuzhou | 479 | Fuzhou:Jiangsu | 202 | 681 |
| Sichuan:Fuzhou | 406 | Fuzhou:Sichuan | 193 | 599 |
| Shandong:Fuzhou | 316 | Fuzhou:Shandong | 182 | 498 |
| Anhui:Fuzhou | 249 | Fuzhou:Anhui | 216 | 465 |
| Hunan:Fuzhou | 233 | Fuzhou:Hunan | 167 | 400 |
| Shanxi:Fuzhou | 240 | Fuzhou:Shanxi | 150 | 390 |
| Tianjin:Fuzhou | 186 | Fuzhou:Tianjin | 194 | 380 |
| Hebei:Fuzhou | 248 | Fuzhou:Hebei | 126 | 374 |
| Henan:Fuzhou | 316 | Fuzhou:Henan | 57 | 373 |
| Liaoning:Fuzhou | 222 | Fuzhou:Liaoning | 145 | 367 |
| Guangxi:Fuzhou | 186 | Fuzhou:Guangxi | 167 | 353 |
| Jiangxi:Fuzhou | 280 | Fuzhou:Jiangxi | 67 | 347 |
| Hubei:Fuzhou | 279 | Fuzhou:Hubei | 50 | 329 |
| Hainan:Fuzhou | 127 | Fuzhou:Hainan | 174 | 301 |
| Jilin:Fuzhou | 167 | Fuzhou:Jilin | 133 | 300 |
| Chongqing:Fuzhou | 180 | Fuzhou:Chongqing | 90 | 270 |
| Ningxia:Fuzhou | 87 | Fuzhou:Ningxia | 164 | 251 |
| Heilongjiang:Fuzhou | 197 | Fuzhou:Heilongjiang | 45 | 242 |
| Shaanxi:Fuzhou | 191 | Fuzhou:Shaanxi | 49 | 240 |
| Qinghai:Fuzhou | 75 | Fuzhou:Qinghai | 161 | 236 |
| Yunnan:Fuzhou | 167 | Fuzhou:Yunnan | 67 | 234 |
| Guizhou:Fuzhou | 158 | Fuzhou:Guizhou | 66 | 224 |
| Xizang:Fuzhou | 41 | Fuzhou:Xizang | 169 | 210 |
| Neimenggu:Fuzhou | 142 | Fuzhou:Neimenggu | 50 | 192 |
| Gansu:Fuzhou | 140 | Fuzhou:Gansu | 50 | 190 |
| Xinjiang:Fuzhou | 127 | Fuzhou:Xinjiang | 56 | 183 |

There would be greater contingency if we used only the number of passive retrievals to verify the intensity of passive concern, or only the number of active retrievals to verify the intensity of active concern. Therefore, the sum of passive and active retrievals is used to verify the final intensity of the total connection between two research units.
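The maximum-value standardization of formulas (10), with the dominant Fujian row excluded before taking the maxima as described above, can be sketched on a small subset of the Table 5 counts:

```python
# Sketch of formulas (10): maximum-value standardization of the connection
# counts, excluding the dominant Fujian entry before taking the maxima.
# The counts below are a subset of Table 5.

counts = {                       # province: (X_a, X_b)
    "Fujian":    (21093, 15353),
    "Beijing":   (23775, 2592),
    "Guangdong": (7450, 3197),
    "Shanghai":  (6234, 1702),
}

others = {k: v for k, v in counts.items() if k != "Fujian"}
max_a = max(a for a, _ in others.values())
max_b = max(b for _, b in others.values())
max_t = max(a + b for a, b in others.values())

standardized = {
    k: (100 * a / max_a, 100 * b / max_b, 100 * (a + b) / max_t)
    for k, (a, b) in others.items()
}
print({k: tuple(round(x, 1) for x in v) for k, v in standardized.items()})
```

On this subset, Beijing sets the maxima for $X_a$ and $X_a+X_b$ and Guangdong for $X_b$, so each receives a score of 100 on the corresponding indicator.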
Brief Introduction of the Research Object In May 6, 2009, the State Council of the People’s Republic of China issued the “opinions on supporting Fujian province to speed up the construction of the Economic Zone on the West Coast of China (EZWCC).” As an important part of China’s coastal economic zone, the EZWCC is separated by the Taiwan Strait, north of the Yangtze River Delta and south of the Pearl River Delta. It occupies an important position in the layout of regional economic development. The EZWCC is composed of Fujian, Guangdong, Zhejiang, and Jiangxi provinces. Fujian province is the most closely related to Taiwan, because of their geographical proximities and historical and cultural similarities. With this unique advantage, Fujian province occupies the dominant position in the EZWCC.As the capital of Fujian province, Fuzhou city is a hometown of overseas Chinese people. There are many overseas Chinese people with Fuzhou descent distributed all around the world. Fuzhou and Taiwan face each other across the Taiwan Strait. People in these two regions have close links with each other. As one of the important city nodes of the EZWCC, the development of Fuzhou has received attention from the local government and also the Chinese government.In this section, Fuzhou is chosen as the studied region, and then the pattern and characteristics of its cyberspace are analyzed. ## 4.2. Principle Data Acquisition and Preprocessing In order to analyze the cyberspace pattern of Fuzhou more accurately, two kinds of actual network information flows were chosen as principal experimental data. ### 4.2.1. Data about Sina Micro-Blog Users Micro-blog is a completely open network interaction platform for public participation. The research of cyberspace based on micro-blog reveals the communication characteristic among people more clearly and reflects the influence of information on human relations networks more directly. 
Geographical attributes of micro-blog users provide the basis for the association of cyberspace and realistic geo-space.According to the report of QuestMobile [33], the number of monthly active users (MAU) of Sina micro-blog has increased more than 45% and reached up to 341 million by the end of 2016. Table 2 shows that the MAU of Sina micro-blog is top-1. Considering the availability of Sina micro-blog users’ location information, Sina micro-blog is chosen finally as the principle data source.Table 2 The value list of Social App in 2016. Order Number App’s Name MAU(million) Growth Rate Proportion of the high value users 1 Sina micro-blog 341.18 45.70% 76.30% 2 QQ Zone 92.60 -35.80% 70.20% 3 Momo 70.03 19.50% 68.70% 4 Tantan 15.89 128.90% 76.90% 5 OPPO paradise 11.87 -34.80% 53.50% 6 Her community 7.41 -29.00% 69.10% 7 SNOW 7.06 16692.60% 75.60% 8 Douban 5.18 -42.90% 89.70% 9 Idol 4.65 -4.50% 74.00% 10 Apploving 3.95 25.80% 79.30%In this paper, one thousand users’ (OTU) information is grabbed. These users meet the following three conditions: (1) their registration addresses are Fuzhou; (2) they are ordinary users rather than celebrities or big V (the verified users who have more than 500,000 fans and use the micro-blog mainly for commercial or personal propaganda rather than sociality); (3) they are active users who not only are concerned about one hundred to five hundred fans but also concern one hundred to five hundred other users actively. According to the report of CNNIC [34], the proportions of Internet users of different ages at the end of 2016 are shown in Table 3. Accordingly, in order to make the sample more reasonable, the numbers of sampled users in these ranges are 234, 303, 232, and 231.Table 3 The proportions of Internet users of all ages. 
Age range The proportions age≤19 23.4% 20≤age≤29 30.3% 30≤age≤39 23.2% age≥40 23.1%For the one thousand users, we not only grab their basic information (such as IDs, nicknames, sexes, registered addresses, character signatures, birthdays, marital status, and home links), but also obtain the registered addresses of the top 100 users in OUT’s concern lists and concerned lists. Finally, 92015 users concerned about the OTU and 55449 users that concern the OTU are found. Among these users, there are 10451 pairs of friends.There are three kinds of relationships between these users: active concern, passive concern, and mutual concern. Unilateral concern or be concerned model is a weak relationship, while the mutual concern model is a strong relationship. If user B is concerned about user A, then the direction of information flows can be described from A to B.Three indicators are used to evaluate the intensity of the network information flows between Fuzhou city and other regions. These three indicators are the intensity of active connection (the number of concerns,Xa), the intensity of passive connections (the number of be concerned, Xb), and the intensity of total connections (sum of concern and be concerned, Xa+Xb). The meanings of the three indicators are shown in Table 4. All of the top 100 users in OUT’s concern lists and concerned lists were classified and counted according to their registered addresses and relationships. The results are shown in Table 5.Table 4 Three indicators and their meanings. 
Indicators’ names Meanings of indicators the intensity of active connection (Xa) used to measure the active connection between Fuzhou and other research units; the greater the value means the research units have greater impacts on Fuzhou the intensity of passive connection (Xb) used to measure the interest of other research units in Fuzhou; the larger the value indicates that the units are more willing to accept the information from Fuzhou the intensity of total connection(Xa+Xb) used to measure the intensity of the total linkage between Fuzhou and other research units; the larger the value, the greater the intensity of interactionTable 5 Classification statistics of the users in provincial level. Provinces Concern (Xa) Be Concerned (Xb) Total(Xa+Xb) Provinces Concern (Xa) Be Concerned (Xb) Total(Xa+Xb) Fujian 21093 15353 36446 Jiangxi 406 655 1061 Beijing 23775 2592 26367 Shaanxi 458 597 1055 Foreign 7814 2866 10680 Yunnan 405 568 973 Guangdong 7450 3197 10647 Heilongjiang 324 602 926 Shanghai 6234 1702 7936 Tianjin 427 491 918 Zhejiang 3481 1429 4910 Guangxi 321 573 894 Jiangsu 2083 1361 3444 Shanxi 251 516 767 Hongkong 1740 485 2225 Jilin 255 493 748 Taiwan 1763 353 2116 Guizhou 187 423 610 Sichuan 1261 845 2106 Gansu 179 424 603 Shangdong 998 1064 2062 Neimenggu 182 378 560 Hubei 911 836 1747 Xinjiang 146 399 545 Hunan 786 846 1632 Hainan 154 350 504 Henan 7667 891 1558 Macao 138 225 363 Liaoning 635 693 1328 Ningxia 61 282 343 Chongqing 699 617 1316 Qinghai 61 277 338 Anhui 575 679 1254 Xizang 83 244 327 Hebei 478 721 1199In this paper, the network information flows between Fuzhou city and other provinces in mainland China are studied. To makeXa, Xb and (Xa+Xb) comparable, maximum value standardization was carried out by using formulas (10).(10)Xa′=Xamax⁡Xa∗100Xb′=Xbmax⁡Xb∗100Xa+Xb′=Xa+Xbmax⁡Xa+Xb∗100Because users in the concern lists and concerned lists are dominated by Fujian province, Fujian province is analyzed separately. 
### 4.2.1. Data about Sina Micro-Blog Users

Micro-blog is a completely open network interaction platform for public participation. Research on cyberspace based on micro-blog reveals the communication characteristics among people more clearly and reflects the influence of information on human relation networks more directly. The geographical attributes of micro-blog users provide the basis for associating cyberspace with realistic geo-space.

According to the report of QuestMobile [33], the number of monthly active users (MAU) of Sina micro-blog increased by more than 45% and reached 341 million by the end of 2016. Table 2 shows that Sina micro-blog ranks first in MAU among social apps. Considering the availability of Sina micro-blog users' location information, Sina micro-blog was finally chosen as the principal data source.

**Table 2.** The value list of social apps in 2016.

| Order | App's name | MAU (million) | Growth rate | Proportion of high-value users |
|---|---|---|---|---|
| 1 | Sina micro-blog | 341.18 | 45.70% | 76.30% |
| 2 | QQ Zone | 92.60 | -35.80% | 70.20% |
| 3 | Momo | 70.03 | 19.50% | 68.70% |
| 4 | Tantan | 15.89 | 128.90% | 76.90% |
| 5 | OPPO paradise | 11.87 | -34.80% | 53.50% |
| 6 | Her community | 7.41 | -29.00% | 69.10% |
| 7 | SNOW | 7.06 | 16692.60% | 75.60% |
| 8 | Douban | 5.18 | -42.90% | 89.70% |
| 9 | Idol | 4.65 | -4.50% | 74.00% |
| 10 | Apploving | 3.95 | 25.80% | 79.30% |

In this paper, one thousand users' (OTU) information is grabbed.
These users meet the following three conditions: (1) their registration addresses are Fuzhou; (2) they are ordinary users rather than celebrities or "big V" users (verified users who have more than 500,000 fans and use micro-blog mainly for commercial or personal propaganda rather than social interaction); (3) they are active users who both are followed by one hundred to five hundred fans and actively follow one hundred to five hundred other users. According to the report of CNNIC [34], the proportions of Internet users of different ages at the end of 2016 are shown in Table 3. Accordingly, in order to make the sample more reasonable, the numbers of sampled users in these age ranges are 234, 303, 232, and 231.

**Table 3.** The proportions of Internet users of all ages.

| Age range | Proportion |
|---|---|
| age ≤ 19 | 23.4% |
| 20 ≤ age ≤ 29 | 30.3% |
| 30 ≤ age ≤ 39 | 23.2% |
| age ≥ 40 | 23.1% |

For the one thousand users, we not only grab their basic information (such as IDs, nicknames, sexes, registered addresses, character signatures, birthdays, marital status, and home links), but also obtain the registered addresses of the top 100 users in the OTU's concern lists and concerned lists. Finally, 92,015 users in the OTU's concerned lists and 55,449 users in the OTU's concern lists are found. Among these users, there are 10,451 pairs of friends.

There are three kinds of relationships between these users: active concern, passive concern, and mutual concern. A unilateral concern or be-concerned relation is a weak relationship, while mutual concern is a strong relationship. If user B is concerned about user A, then the direction of information flow can be described as from A to B.

Three indicators are used to evaluate the intensity of the network information flows between Fuzhou city and other regions.
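A side note on the sampling above: the age-band counts (234, 303, 232, 231) follow from applying the Table 3 proportions to the 1000-user sample. A minimal sketch, assuming largest-remainder rounding (the paper states only the final counts, so the rounding rule and function name are illustrative):

```python
# Allocate a fixed sample size across age bands according to given proportions.
# Largest-remainder rounding is an assumption; the paper reports only the
# resulting counts (234, 303, 232, 231).
def allocate(total, proportions):
    raw = [total * p for p in proportions]
    counts = [int(r) for r in raw]            # floor of each exact share
    leftover = total - sum(counts)
    # hand the remaining units to the largest fractional parts
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# CNNIC 2016 proportions for age<=19, 20-29, 30-39, age>=40 (Table 3)
props = [0.234, 0.303, 0.232, 0.231]
print(allocate(1000, props))  # -> [234, 303, 232, 231]
```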
These three indicators are the intensity of active connection (the number of concerns, Xa), the intensity of passive connection (the number of times being concerned, Xb), and the intensity of total connection (the sum of concern and being concerned, Xa+Xb). The meanings of the three indicators are shown in Table 4. All of the top 100 users in the OTU's concern lists and concerned lists were classified and counted according to their registered addresses and relationships. The results are shown in Table 5.

**Table 4.** Three indicators and their meanings.

| Indicator | Meaning |
|---|---|
| Intensity of active connection (Xa) | Measures the active connection between Fuzhou and other research units; the greater the value, the greater the impact the research unit has on Fuzhou |
| Intensity of passive connection (Xb) | Measures the interest of other research units in Fuzhou; the larger the value, the more willing the unit is to accept information from Fuzhou |
| Intensity of total connection (Xa+Xb) | Measures the intensity of the total linkage between Fuzhou and other research units; the larger the value, the greater the intensity of interaction |

**Table 5.** Classification statistics of the users at the provincial level.
| Province | Concern (Xa) | Be concerned (Xb) | Total (Xa+Xb) | Province | Concern (Xa) | Be concerned (Xb) | Total (Xa+Xb) |
|---|---|---|---|---|---|---|---|
| Fujian | 21093 | 15353 | 36446 | Jiangxi | 406 | 655 | 1061 |
| Beijing | 23775 | 2592 | 26367 | Shaanxi | 458 | 597 | 1055 |
| Foreign | 7814 | 2866 | 10680 | Yunnan | 405 | 568 | 973 |
| Guangdong | 7450 | 3197 | 10647 | Heilongjiang | 324 | 602 | 926 |
| Shanghai | 6234 | 1702 | 7936 | Tianjin | 427 | 491 | 918 |
| Zhejiang | 3481 | 1429 | 4910 | Guangxi | 321 | 573 | 894 |
| Jiangsu | 2083 | 1361 | 3444 | Shanxi | 251 | 516 | 767 |
| Hongkong | 1740 | 485 | 2225 | Jilin | 255 | 493 | 748 |
| Taiwan | 1763 | 353 | 2116 | Guizhou | 187 | 423 | 610 |
| Sichuan | 1261 | 845 | 2106 | Gansu | 179 | 424 | 603 |
| Shandong | 998 | 1064 | 2062 | Neimenggu | 182 | 378 | 560 |
| Hubei | 911 | 836 | 1747 | Xinjiang | 146 | 399 | 545 |
| Hunan | 786 | 846 | 1632 | Hainan | 154 | 350 | 504 |
| Henan | 667 | 891 | 1558 | Macao | 138 | 225 | 363 |
| Liaoning | 635 | 693 | 1328 | Ningxia | 61 | 282 | 343 |
| Chongqing | 699 | 617 | 1316 | Qinghai | 61 | 277 | 338 |
| Anhui | 575 | 679 | 1254 | Xizang | 83 | 244 | 327 |
| Hebei | 478 | 721 | 1199 |  |  |  |  |

In this paper, the network information flows between Fuzhou city and the other provinces in mainland China are studied. To make Xa, Xb, and (Xa+Xb) comparable, maximum-value standardization was carried out using formulas (10):

$$X_a' = \frac{X_a}{\max(X_a)}\times 100,\qquad X_b' = \frac{X_b}{\max(X_b)}\times 100,\qquad (X_a+X_b)' = \frac{X_a+X_b}{\max(X_a+X_b)}\times 100 \tag{10}$$

Because the users in the concern lists and concerned lists are dominated by Fujian province, Fujian province is analyzed separately: when the values of max(Xa), max(Xb), and max(Xa+Xb) are selected, Fujian province is excluded.

### 4.2.2. Data about Baidu Index

The Baidu Index (https://index.baidu.com/) is one of the most important data-sharing and statistical-analysis platforms. It records a large amount of Internet users' behavior data. With its help, we can obtain the number of times that a specified keyword was retrieved in different areas and at different times, which can reflect the intensity of the connection between two regions to some extent [25, 35–39]. In this paper, the indexes between Fuzhou city and all the provinces in mainland China are obtained.
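Before turning to the Baidu data, the maximum-value standardization of formula (10) above can be sketched in a few lines. The rule that Fujian is excluded when the maximum is selected is taken from the text; the function and variable names are illustrative:

```python
# Maximum-value standardization of formula (10): X' = X / max(X) * 100,
# where Fujian is excluded when selecting the maximum (per the text,
# Fujian dominates the raw counts and is analyzed separately).
def standardize(values, exclude=("Fujian",)):
    m = max(v for k, v in values.items() if k not in exclude)
    return {k: v / m * 100 for k, v in values.items()}

# A few Xa values from Table 5; Beijing holds the maximum once Fujian is excluded
xa = {"Fujian": 21093, "Beijing": 23775, "Guangdong": 7450, "Shanghai": 6234}
xa_std = standardize(xa)
print(round(xa_std["Beijing"], 1))    # -> 100.0
print(round(xa_std["Guangdong"], 1))  # -> 31.3
```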
The results are shown in Table 6.

**Table 6.** Baidu indexes between Fuzhou and all the provinces in mainland China.

| Unit A : Unit B | Baidu index | Unit A : Unit B | Baidu index | Total |
|---|---|---|---|---|
| Fujian : Fuzhou | 1793 | Fuzhou : Fujian | 524 | 2317 |
| Guangdong : Fuzhou | 653 | Fuzhou : Guangdong | 167 | 820 |
| Shanghai : Fuzhou | 417 | Fuzhou : Shanghai | 362 | 779 |
| Beijing : Fuzhou | 421 | Fuzhou : Beijing | 304 | 725 |
| Zhejiang : Fuzhou | 534 | Fuzhou : Zhejiang | 167 | 701 |
| Jiangsu : Fuzhou | 479 | Fuzhou : Jiangsu | 202 | 681 |
| Sichuan : Fuzhou | 406 | Fuzhou : Sichuan | 193 | 599 |
| Shandong : Fuzhou | 316 | Fuzhou : Shandong | 182 | 498 |
| Anhui : Fuzhou | 249 | Fuzhou : Anhui | 216 | 465 |
| Hunan : Fuzhou | 233 | Fuzhou : Hunan | 167 | 400 |
| Shanxi : Fuzhou | 240 | Fuzhou : Shanxi | 150 | 390 |
| Tianjin : Fuzhou | 186 | Fuzhou : Tianjin | 194 | 380 |
| Hebei : Fuzhou | 248 | Fuzhou : Hebei | 126 | 374 |
| Henan : Fuzhou | 316 | Fuzhou : Henan | 57 | 373 |
| Liaoning : Fuzhou | 222 | Fuzhou : Liaoning | 145 | 367 |
| Guangxi : Fuzhou | 186 | Fuzhou : Guangxi | 167 | 353 |
| Jiangxi : Fuzhou | 280 | Fuzhou : Jiangxi | 67 | 347 |
| Hubei : Fuzhou | 279 | Fuzhou : Hubei | 50 | 329 |
| Hainan : Fuzhou | 127 | Fuzhou : Hainan | 174 | 301 |
| Jilin : Fuzhou | 167 | Fuzhou : Jilin | 133 | 300 |
| Chongqing : Fuzhou | 180 | Fuzhou : Chongqing | 90 | 270 |
| Ningxia : Fuzhou | 87 | Fuzhou : Ningxia | 164 | 251 |
| Heilongjiang : Fuzhou | 197 | Fuzhou : Heilongjiang | 45 | 242 |
| Shaanxi : Fuzhou | 191 | Fuzhou : Shaanxi | 49 | 240 |
| Qinghai : Fuzhou | 75 | Fuzhou : Qinghai | 161 | 236 |
| Yunnan : Fuzhou | 167 | Fuzhou : Yunnan | 67 | 234 |
| Guizhou : Fuzhou | 158 | Fuzhou : Guizhou | 66 | 224 |
| Xizang : Fuzhou | 41 | Fuzhou : Xizang | 169 | 210 |
| Neimenggu : Fuzhou | 142 | Fuzhou : Neimenggu | 50 | 192 |
| Gansu : Fuzhou | 140 | Fuzhou : Gansu | 50 | 190 |
| Xinjiang : Fuzhou | 127 | Fuzhou : Xinjiang | 56 | 183 |

There may be greater contingency if we use only the number of passive retrievals to measure the intensity of passive concern, or only the number of active retrievals to measure the intensity of active concern. So the sum of passive retrievals and active retrievals was used to verify the final intensity of total connections between two research units.

## 5. Spatial Pattern Analysis of Cyberspace in Fuzhou

### 5.1. Spatial Difference Analysis on the Intensity of the Network Information Flows

*Cyberspace Breaks through the Limitation of Time and Geographical Space.* Table 5 shows that the cyberspace of Fuzhou is very wide; the users, whether in the concern lists or in the concerned lists, are distributed across all provinces in mainland China. As Fuzhou is a famous hometown of overseas Chinese people, there were also network information flows between Fuzhou users and some foreigners.

*There Were Grade Differences in Cyberspace.* Taking into account the actual situation of the object region and the data selected in this paper, all provinces in mainland China are divided into seven levels by the values of Xa, Xb, and (Xa+Xb), and into five levels by the values of Xa′, Xb′, and (Xa+Xb)′. The results are shown in Figures 2(a), 2(b), and 2(c). It is necessary to explain that a solid line with an arrow in Figures 2(a) and 2(b) shows the direction of concern. For example, "Fujian ← Xinjiang" means that Xinjiang takes the initiative to pay attention to Fujian, which also means that Fujian is paid attention to by Xinjiang. A solid line without an arrow in Figure 2(c) means the total intensity of attention between two units. In all three figures, the thicker the solid line, the stronger the attention.

Figure 2: Grade classification at the provincial level. (a) Passive connections; (b) active connections; (c) total connections.

In general, there is an obvious grading phenomenon in the intensity of connections, such that from east to west the intensities become gradually weaker: (1) the provinces with the highest intensity are mainly distributed in the southeast coastal area; (2) most of the central provinces are at the middle level; (3) the western provinces are at the lowest level.

*Regional Embedding Still Exists in Cyberspace.*
First, the results of grade division show that although cyberspace breaks through the limits of time and geographical space, the geographic distance factor still has some influence on the spatial pattern of cyberspace; the distance attenuation phenomenon also exists in cyberspace to some extent. Second, among these research units, Fujian province has the strongest connection with Fuzhou in the intensity of passive connection and of total connection. Although information technology has compressed the space-time distance and expanded the scope of social communication, information connections within the local domain occupy the dominant position in Fuzhou's cyberspace because of geographical proximity and social-cultural similarity.

*The Information Flow in Cyberspace Is Asymmetric.* "Information potential" refers to the capability of picking up, using, transmitting, formulating, aggregating, and processing information. Differences in "information potential" lead to the asymmetry of information flow. Regions are more likely to establish contact with regions of higher "information potential." For example, Beijing, Shanghai, Zhejiang, and Jiangsu pay far less attention to Fuzhou than Fuzhou pays to them.

*The Pattern of Cyberspace Can Be Affected by Population Flow (Labor Input and Output).* Taking Sichuan province as an example, the distance between Sichuan and Fuzhou is much greater than that between Jiangxi and Fuzhou, but the connection intensity of the former pair is stronger than that of the latter. This is primarily because Sichuan is a major labor-exporting province in mainland China; many of its laborers have moved to Fuzhou, Xiamen, Quanzhou, and other cities in Fujian province. Flows of population inevitably bring about information flows.

*The Pattern of Cyberspace Has a High Correlation with the Regional Economic Development Pattern.*
Excluding Fujian province, strong connections occur between Fuzhou and some economically developed areas, such as Zhejiang, Shanghai, Guangdong, Jiangsu, and Beijing. This means that the pattern of cyberspace is also affected by the level of economic development, primarily because developed areas have higher "information potential." The influence of "information potential" is sometimes even greater than that of geographic distance: taking Beijing as an example, Fuzhou pays much more active attention to Beijing than to Fujian province.

Then, the information association matrix DijY is input into the UCINET software. If there is an information connection between two provinces, a connection line is drawn between them. In this way, the information spatial association network diagram at the provincial level in mainland China is generated. In this paper, degree centrality is used to evaluate the importance of nodes in the information network: the higher a node's degree centrality, the larger the area of the graph used to depict that node. Finally, we obtain Figure 3, which shows that economically developed provinces, such as Beijing, Shanghai, Guangdong, and Zhejiang, are usually the key nodes in cyberspace. Other provinces with relatively sluggish economic development are willing to contact these key nodes more frequently.

Figure 3: The information spatial association network diagram at the provincial level in mainland China.

### 5.2. Verifying the Effectiveness of Grade Division

All the analysis results in Section 5.1 are based on the grade division. Therefore, in this section, another kind of data (data from the Baidu Index) is used to verify the effectiveness of the grade division. From Tables 5 and 6, the rankings of the number of total connections and of total retrievals for each province can be obtained. The detailed rankings are shown in Table 7.

**Table 7.** Total connection rankings and total retrieval rankings.
| Province | Total connection ranking | Total retrieval ranking | Province | Total connection ranking | Total retrieval ranking |
|---|---|---|---|---|---|
| Anhui | 14 | 9 | Fujian | 1 | 1 |
| Guangdong | 3 | 2 | Guizhou | 24 | 27 |
| Hebei | 15 | 13 | Heilongjiang | 19 | 23 |
| Hunan | 10 | 10 | Jiangsu | 6 | 6 |
| Liaoning | 12 | 15 | Ningxia | 29 | 22 |
| Shandong | 8 | 8 | Shanxi | 17 | 11 |
| Sichuan | 7 | 7 | Xizang | 31 | 28 |
| Yunnan | 18 | 26 | Chongqing | 13 | 21 |
| Beijing | 2 | 4 | Gansu | 25 | 30 |
| Guangxi | 21 | 16 | Hainan | 28 | 19 |
| Henan | 11 | 14 | Hubei | 9 | 18 |
| Jilin | 23 | 20 | Jiangxi | 16 | 17 |
| Neimenggu | 26 | 29 | Qinghai | 30 | 25 |
| Shaanxi | 22 | 24 | Shanghai | 4 | 3 |
| Tianjin | 20 | 12 | Xinjiang | 27 | 31 |
| Zhejiang | 5 | 5 |  |  |  |

Figure 4 is visualized based on the information in Table 7. The correlation between the two discrete ranking curves is then calculated according to formula (11), and 87.10% is achieved:

$$\operatorname{Correl}(X,Y)=\frac{\sum_{i=1}^{31}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{31}(x_i-\bar{x})^{2}\sum_{i=1}^{31}(y_i-\bar{y})^{2}}} \tag{11}$$

Figure 4: Correlation between total connection rankings and total retrieval rankings.

Although there are small differences in some provinces' rankings between the two kinds of data, most provinces do not change much in their rankings. The same conclusion can be reached by comparing Figure 2(c) with Figure 5. Therefore, the conclusions obtained from the first set of data in Section 5.1 have high credibility. What needs to be explained is that the provinces or cities in Figure 5 are classified according to the total intensity of mutual retrieval between them and Fuzhou city.

Figure 5: Grade division based on total retrieval.

### 5.3. Analysis of the Possible Influential Factors

In this section, some influential factors that can be quantified and may have impacts on the pattern of cyberspace in Fuzhou are analyzed. Geographic distance, the "Internet plus" index, and the regional informatization development level are considered the most likely influential factors affecting the pattern of cyberspace. The "Internet plus" index is an important indicator of the level of regional development.
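The 87.10% figure of formula (11) in Section 5.2 can be reproduced directly from the Table 7 rankings; since both columns are complete rankings of the 31 provinces, the Pearson correlation of formula (11) coincides with Spearman's rank correlation. A minimal sketch in plain Python (no SPSS required):

```python
# Pearson correlation of formula (11), applied to the two ranking columns
# of Table 7 (total connection ranking, total retrieval ranking).
from math import sqrt

def correl(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

# (connection ranking, retrieval ranking) pairs from Table 7
ranks = [(14, 9), (1, 1), (3, 2), (24, 27), (15, 13), (19, 23), (10, 10),
         (6, 6), (12, 15), (29, 22), (8, 8), (17, 11), (7, 7), (31, 28),
         (18, 26), (13, 21), (2, 4), (25, 30), (21, 16), (28, 19), (11, 14),
         (9, 18), (23, 20), (16, 17), (26, 29), (30, 25), (22, 24), (4, 3),
         (20, 12), (27, 31), (5, 5)]
x, y = zip(*ranks)
print(f"{correl(x, y):.2%}")  # -> 87.10%
```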
It consists of four subindexes: "Internet plus infrastructure," "Internet plus industry," "Internet plus innovation," and "Internet plus smart city." Further, these subindexes consist of 14 first-class indexes and 135 second-class indexes. Its content covers social media, news, video, cloud computing, and 19 major subindustries of the three industries. It uses Tencent users' digital economic behavior as basic data and also collects data from Didi Taxi, Meituan Dianping, Jingdong Mall, Ctrip, and some other Internet companies. Because the data are very comprehensive, the index can reflect the degree of integration of the Internet with all walks of life and the ability to utilize the Internet.

Geographic distances between Fuzhou city and the other research units are measured with the help of Baidu Map; the values of the "Internet plus" indexes and the rankings for each research unit are obtained from T. R. Institute [40]; and the informatization development level of each research unit is calculated with the method proposed in Section 2. These factors are shown in Tables 8–10.

**Table 8.** Distances between Fuzhou and other provinces.

| Research unit | Distance (km) | Research unit | Distance (km) |
|---|---|---|---|
| Zhejiang | 554.7 | Jiangxi | 626.5 |
| Shanghai | 775.9 | Guangdong | 865.3 |
| Anhui | 875.0 | Hunan | 942.0 |
| Jiangsu | 1005.2 | Hubei | 1167.5 |
| Henan | 1295.7 | Guangxi | 1375.8 |
| Shandong | 1422.8 | Guizhou | 1560.0 |
| Hainan | 1587.1 | Shanxi | 1712.6 |
| Hebei | 1758.9 | Chongqing | 1760.7 |
| Tianjin | 1772.2 | Shanxi | 1812.9 |
| Beijing | 1887.1 | Sichuan | 2044.2 |
| Yunnan | 2098.7 | Ningxia | 2263.8 |
| Liaoning | 2406.9 | Neimenggu | 2468.2 |
| Gansu | 2547.2 | Jilin | 2694.9 |
| Qinghai | 2996.2 | Heilongjiang | 3277.6 |
| Xizang | 4097.6 | Xinjiang | 4375.1 |
**Table 9.** "China Internet plus" indexes 2016 for each province.

| Ranking | Province | Index | Ranking | Province | Index |
|---|---|---|---|---|---|
| 1 | Guangdong | 18.072 | 17 | Guangxi | 1.856 |
| 2 | Beijing | 11.256 | 18 | Jiangxi | 1.764 |
| 3 | Shanghai | 6.179 | 19 | Shanxi | 1.702 |
| 4 | Zhejiang | 5.512 | 20 | Yunnan | 1.586 |
| 5 | Jiangsu | 5.031 | 21 | Tianjin | 1.491 |
| 6 | Fujian | 3.967 | 22 | Heilongjiang | 1.45 |
| 7 | Shandong | 3.709 | 23 | Hainan | 1.246 |
| 8 | Sichuan | 3.591 | 24 | Jilin | 1.23 |
| 9 | Henan | 3.202 | 25 | Neimenggu | 1.166 |
| 10 | Chongqing | 3.021 | 26 | Guizhou | 1.165 |
| 11 | Hunan | 2.884 | 27 | Xinjiang | 0.933 |
| 12 | Hubei | 2.797 | 28 | Gansu | 0.917 |
| 13 | Hebei | 2.579 | 29 | Ningxia | 0.785 |
| 14 | Liaoning | 2.239 | 30 | Qinghai | 0.457 |
| 15 | Shanxi | 2.153 | 31 | Xizang | 0.35 |
| 16 | Anhui | 2.069 |  |  |  |

**Table 10.** Composite scores and rankings for each province.

| Province | Composite score | Information level ranking | Province | Composite score | Information level ranking |
|---|---|---|---|---|---|
| Guangdong | 0.8322 | 1 | Shaanxi | 0.2879 | 17 |
| Jiangsu | 0.614 | 2 | Heilongjiang | 0.2651 | 18 |
| Zhejiang | 0.5805 | 3 | Jiangxi | 0.2471 | 19 |
| Shanghai | 0.5223 | 4 | Shanxi | 0.2348 | 20 |
| Beijing | 0.4595 | 5 | Guangxi | 0.2314 | 21 |
| Shandong | 0.4173 | 6 | Yunnan | 0.2305 | 22 |
| Sichuan | 0.3704 | 7 | Chongqing | 0.2253 | 23 |
| Hubei | 0.3584 | 8 | Jilin | 0.2162 | 24 |
| Hunan | 0.3245 | 9 | Xinjiang | 0.1724 | 25 |
| Fujian | 0.3242 | 10 | Gansu | 0.1677 | 26 |
| Henan | 0.3232 | 11 | Qinghai | 0.1599 | 27 |
| Hebei | 0.3212 | 12 | Guizhou | 0.1527 | 28 |
| Liaoning | 0.3159 | 13 | Ningxia | 0.1474 | 29 |
| Neimenggu | 0.3133 | 14 | Xizang | 0.0971 | 30 |
| Tianjin | 0.3099 | 15 | Hainan | 0.0866 | 31 |
| Anhui | 0.3059 | 16 |  |  |  |

SPSS (Statistical Product and Service Solutions) software is used to analyze the correlation between the rankings of the three factors and the rankings of the connection intensities. The analysis results are shown in Table 11.

**Table 11.** Pearson correlations at the provincial level.
|  | Information level rankings | Distance rankings | "China Internet plus index" rankings | Connection intensity rankings |
|---|---|---|---|---|
| Information level rankings | 1 | .634 | .874 | .948 |
| Distance rankings | .634 | 1 | .681 | .646 |
| "China Internet plus index" rankings | .874 | .681 | 1 | .972 |
| Connection intensity rankings | .948 | .646 | .972 | 1 |

Table 11 shows that high correlations exist between the connection intensity and the "Internet plus" index, as well as between the connection intensity and the informatization level; the "Internet plus" index has the highest positive correlation with the intensities. The results of the correlation analysis also illustrate that there is a certain correlation between the distance factor and the connection intensity; however, distance is no longer the primary factor. Some other factors, such as the government's economic policies and cultural similarities, can also affect the connection intensity, but they are difficult to quantify.

## 6. Conclusion
Conclusion This paper points out the shortcomings of existing works in the field of cyberspace: (1) researchers conduct their studies only based on one kind of sequential information flow, which makes their conclusions less convincing; (2) the study of the cyberspace pattern based on the sequential information flow usually directly employs the classical gravity model but ignores the attributes of the studied objects themselves. To overcome these weaknesses, we hold the idea that it is necessary to use some different kinds of sequential information flows to analyze this problem. We also advocate that we should consider the attributes of our study objects themselves when the gravity model is used. Accordingly, we proposed a method for measuring the informatization level of a region and improved the classical gravity model by adding the score of informatization level to the classical formula. And then, we constructed an information association matrix based on the improved gravity model. Finally, we took Fuzhou city as our study object and focused on its cyberspace characteristics. Experiments in this paper are conducted on two kinds of sequential information flows, data about Sina micro-blog users and data of Baidu Index.According to our experimental result, the following conclusions can be drawn:First, cyberspace breaks through the limit of geographical distance and has a wider range of communication.Second, there is an obvious grade difference in cyberspace. From the east of China to the west, the intensities of the total connections decrease gradually.Third, the social communication mode in realistic geo-space has also been brought into cyberspace, such that the local domain information still occupied the dominant position in cyberspace.Fourth, the economically developed provinces usually are the principle node in network information space.Fifth, the information flows in cyberspace are asymmetric. 
Areas with low information potential are easily attracted by areas with higher information potential, and economically backward areas are more willing to establish active contacts with economically developed areas. Many factors can affect the pattern of cyberspace; by analyzing these factors comprehensively, the characteristics and future of cyberspace can be understood and grasped more accurately.

---
# Studying the Regional Cyberspace by Exploiting Internet Sequential Information Flows

**Authors:** Biao Jin; Jin-ming Sha; Jian-wan Ji; Yi-su Liu; Wu-heng Yang
**Journal:** Mathematical Problems in Engineering (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1016587
---

## Abstract

The study of cyberspace faces the challenges of data shortage and model verification. This paper proposes a method to explore regional cyberspace by employing Internet sequential information flows crawled from social network platforms. Compared with previous studies, which use only one type of data source for analysis, the main contribution of this manuscript is a scheme that uses one kind of Internet information flow to extract cyberspace features while relevant data collected from another network platform are used for verification. Moreover, starting from measuring the informatization level of a region, a modified gravity model is designed by adding the value of the informatization level to the traditional method. Then, an information association matrix based on the improved gravity model is constructed for analyzing the characteristics of cyberspace. To demonstrate the efficiency of the approach, Fuzhou city is considered as a regional sample of interest in this paper. The reasonable results indicate that the proposed approach is practical for studying regional cyberspace.

---

## Body

## 1. Introduction

By breaking through the limits of space and time, the relationships and interactive behaviors among humans have extended from realistic geo-space to cyberspace. Cyberspace is the communication and information space created by computers, an abstract concept in the fields of philosophy and computer science. It takes information flows as its study data, while realistic geo-space is based on material flows.

Research on cyberspace has received wide attention. Internationally, research on cyberspace mainly focuses on three aspects: (1) cyberspace security; (2) the access control mechanisms and communication protocols in cyberspace; (3) the study of the spatial network pattern based on sequential information in cyberspace. For example, Clark [1–7] designed trustworthy mechanisms and access control models to ensure the security of cyberspace.
Chawki [8] tried to find the balance between privacy and security. Iyer [9] focused on the “smart grid” and used cryptography and key management techniques to overcome attacks on cyber security. Wechsler [10] advanced new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services. Slonim [11] presented a novel sequential clustering algorithm motivated by the Information Bottleneck method. Prinzie [12] tried to overcome the inability to capture sequential patterns by modeling sequential independent variables with sequence-analysis methods. Mcculloch [13] and Mishra [14, 15] used sequential information flows to diagnose Swiss inflation in real time. Meanwhile, Tijsseling [16] presented a variant of the Categorization-and-Learning-Module network, which is capable of categorizing sequential information with feedback. Copeland [17] studied the effect of sequential information arrival on asset prices. Mishra [18] utilized the Sequence and Set Similarity Measure with a rough-set-based similarity upper approximation clustering algorithm to group web users based on their navigational patterns. Lottes [19] proposed a novel crop-weed classification system that relies on a fully convolutional network with an encoder-decoder structure and incorporates spatial information by considering image sequences.

In mainland China, the study of cyberspace mainly focuses on two aspects: (1) the relationship between cyberspace and realistic geo-space; (2) the characteristics of cyberspace in specific areas. For instance, based on the Internet infrastructure, Wang [20–22] discussed the relationship between the Internet geographical structure and the Internet urban system of China. Bakis [23], Dong [24], and Sun [25] conducted comprehensive analyses of the hierarchical structure and information flow patterns and revealed the spatial distribution of the Internet network structure of China.
By linking the relationship between micro-blog users and geography, Zhen [26], Wang [27], and Chen [28] studied the centrality of nodes in the networks and the consistency of the whole network. Although Zhang [29–32] explored some methodologies for mining the relationship between geo-space and cyberspace by using information flows, it is difficult to obtain the sequential information of cyberspace in mainland China except by cooperating with the data owners (usually government agencies). Therefore, researchers often obtain data from various statistical yearbooks and annual public reports. In this paper, we focus on the use of sequential information crawled from social networks to analyze the pattern and characteristics of cyberspace.

Currently, there are two main shortcomings in prior research on cyberspace: (1) many scholars conducted their studies based on only one kind of sequential information flow, which makes their conclusions less convincing; (2) when exploring the linkages among studied regions, they often directly use the classical gravity model, which ignores the attributes of the studied regions themselves. In order to fix the first weakness, this paper adopts the scheme of using one kind of Internet information flow (data of Sina micro-blog) to extract cyberspace features while relevant data collected from the other network platform (Baidu Index) are used for verification. Aiming at the second shortcoming, a method is first proposed to measure the informatization level of a region; then the classical gravity model is improved by introducing some attributes of the studied regions themselves; finally, an information association matrix is built based on the improved gravity model.
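As a minimal illustration of how important nodes can be screened out of such an association matrix, the Python sketch below computes the weighted degree centrality of each node as the row sum of its connection intensities, which is the kind of degree indicator a valued-network tool would report. The matrix values are invented for illustration only; the paper's actual matrices are 31 × 31.

```python
import numpy as np

# Hypothetical 4-node standardized information association matrix
# (symmetric, zero diagonal); values are illustrative only.
M = np.array([
    [0.0, 0.9, 0.3, 0.1],
    [0.9, 0.0, 0.5, 0.2],
    [0.3, 0.5, 0.0, 0.4],
    [0.1, 0.2, 0.4, 0.0],
])

# Weighted degree centrality: the row sum of connection intensities.
degree = M.sum(axis=1)

# Rank the nodes from most to least central.
ranking = np.argsort(-degree)
```

With these toy values, node 1 has the largest total intensity and therefore comes first in `ranking`.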
By inputting the information association matrix into network analysis tools (e.g., the UCINET software) and selecting an appropriate evaluation indicator (e.g., the degree centrality of nodes), the important nodes in the network space can be detected.

In order to explore its efficiency, Fuzhou city, the capital of Fujian province, is considered as a region of interest for approach verification. Specifically, we first use a crawler to grab information about Sina micro-blog users, such as their registration addresses and other fundamental information. Then, for those users whose registration addresses are Fuzhou, we also grab the information in their concern lists and concerned lists to analyze their social relationships. Based on the obtained data, the intensities of active connection, passive connection, and total connection are used to study the spatial pattern of cyberspace in Fuzhou city. To make the conclusions more convincing and credible, the data from the Baidu Index are used for verification. According to the data collected from the Baidu Index, we can get the number of times that one research unit is retrieved by another. Finally, we explore some possible factors which may have impacts on the pattern and characteristics of cyberspace.

## 2. Measurement Method of Informatization Development Level of Provinces in Mainland China

The regional informatization level is one of the most important factors that may affect the spatial pattern of cyberspace, so we propose a method for measuring the informatization level of a province in this section. In this paper, we selected indicators that can well reflect the regional informatization level according to the following steps.

Step 1. Obtain 186 indicators from the China National Information Center which can be used for describing the development level of the information society.

Step 2. Use word cloud analysis tools to count the frequency of keywords contained in the 186 indicators.
Then, 42 indicators that contain the keywords with higher frequency remain.

Step 3. The correlation coefficient and variation coefficient of each of the 42 indicators are calculated. Then, 17 indicators with poor correlation or high redundancy are eliminated.

Step 4. The KMO test and factor analysis are carried out on the remaining 25 indicators. Finally, 9 indicators (shown in Table 1) remain and are used to evaluate the level of information development in a region.

Table 1 Detailed information of the nine indicators.

| Indexes | Indexes | Indexes |
| --- | --- | --- |
| (1) Per capita GDP | (2) The number of express | (3) Income of express business |
| (4) Number of Colleges and Universities | (5) Length of optical cable line | (6) Number of IPv4 addresses |
| (7) Comprehensive coverage rate of TV programs | (8) The number of computers used by every hundred people | (9) The number of students in Colleges and Universities |

After that, the Standard Deviation method, CRITIC (Criteria Importance through Inter-Criteria Correlation), and the Entropy Weight method are used to calculate the weight of each indicator.
The calculation formulas of each method are as follows.

*(1) Standard Deviation Method*

$$\delta_j=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(v_i-\bar v\right)^{2}},\qquad W_{j\_SD}=\frac{\delta_j}{\sum_{j=1}^{n}\delta_j}\tag{1}$$

where $W_{j\_SD}$ is the weight of indicator $j$ in the indicator system, $n$ stands for the number of research units, $v_i$ represents the value of indicator $j$ for research unit $i$, and $\bar v$ stands for the arithmetic mean of indicator $j$.

*(2) CRITIC (Criteria Importance through Inter-Criteria Correlation) Method*

$$r_{ij}=\frac{\sum_{k=1}^{n}\left(x_{ki}-\bar x_i\right)\left(x_{kj}-\bar x_j\right)}{\sqrt{\sum_{k=1}^{n}\left(x_{ki}-\bar x_i\right)^{2}}\sqrt{\sum_{k=1}^{n}\left(x_{kj}-\bar x_j\right)^{2}}},\qquad C_j=\delta_j\sum_{i=1}^{n}\left(1-r_{ij}\right),\qquad W_{j\_CRITIC}=\frac{C_j}{\sum_{j=1}^{n}C_j}\tag{2}$$

where $\delta_j$ and $n$ have the same meanings as in formula (1) and $r_{ij}$ is the correlation coefficient between indicator $j$ and indicator $i$.

*(3) Entropy Weight Method*

$$P_{ij}=\frac{d_{ij}}{\sum_{i=1}^{m}d_{ij}},\qquad E_j=-\frac{1}{\ln m}\sum_{i=1}^{m}P_{ij}\ln P_{ij},\qquad W_{j\_EW}=\frac{1-E_j}{n-\sum_{j=1}^{n}E_j}\tag{3}$$

where $d_{ij}$ stands for the normalized value of indicator $j$ for research unit $i$, and $n$ has the same meaning as in formula (1).

Then $W_j$ is used as the final weight of indicator $j$; its value is calculated using formula (4):

$$W_j=\frac{W_{j\_SD}+W_{j\_CRITIC}+W_{j\_EW}}{3}\tag{4}$$

Finally, the composite information score of each province on these indicators is calculated according to formula (5):

$$Score_i=\sum_{j=1}^{9}W_j\cdot VP_{ij}\tag{5}$$

where $VP_{ij}$ represents the value of indicator $j$ of province $i$, and $i=1,2,3,\dots,31$.

## 3. Construction of Information Association Matrix Based on Improved Gravity Model

The gravity model is a widely used model for measuring spatial interaction capability; its formula is:

$$G_{ij}=k\,\frac{M_i M_j}{d_{ij}^{\,c}}\tag{6}$$

where $d_{ij}$ stands for the distance between unit $i$ and unit $j$, and $k$ and $c$ represent the coefficient of gravity and the distance attenuation coefficient, respectively, while the meaning of $M_i$ ($M_j$) varies in different applications.
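To make the weighting scheme of formulas (1)–(5) concrete, here is a minimal Python sketch of the three weighting methods and their combination, run on a small hypothetical normalized data matrix (4 research units × 3 indicators; the paper itself uses 31 provinces × 9 indicators, so the numbers below are stand-ins):

```python
import numpy as np

# Hypothetical normalized data: rows are research units, columns are indicators.
X = np.array([
    [0.9, 0.2, 0.4],
    [0.6, 0.8, 0.1],
    [0.3, 0.5, 0.7],
    [0.1, 0.9, 0.6],
])
m, n_ind = X.shape  # m units, n_ind indicators

# (1) Standard Deviation method: weight proportional to the indicator's dispersion.
sd = X.std(axis=0)
w_sd = sd / sd.sum()

# (2) CRITIC: dispersion times conflict (1 - correlation) with the other indicators.
r = np.corrcoef(X, rowvar=False)       # indicator-by-indicator correlation matrix
C = sd * (1.0 - r).sum(axis=0)
w_critic = C / C.sum()

# (3) Entropy Weight method: informative (low-entropy) indicators weigh more.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(m)
w_ew = (1.0 - E) / (1.0 - E).sum()

# (4) Final weight: the average of the three schemes.
W = (w_sd + w_critic + w_ew) / 3.0

# (5) Composite informatization score of each research unit.
scores = X @ W
```

Each of the three weight vectors sums to one, so their average does as well; `scores` then ranks the units exactly as formula (5) prescribes.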
For example, if we study the intensity of communication between two regions, the meaning of $M_i$ ($M_j$) can be the number of calls made by mobile users in the two regions.

When using the above model, researchers usually focus only on the connections between nodes but ignore the attributes of the nodes themselves. Assume the following cases (Figure 1).

Figure 1 The interpretation of relevant variables in the scenario assumption.

(1) The interactions between $A$ and $B$ and the interactions between $C$ and $D$ occur in the same time periods.
(2) The number of times that $A$ actively interacts with $B$ is the same as the number of times that $B$ actively interacts with $A$: $t_{AB}=t_{BA}=10$.
(3) The number of times that $C$ actively interacts with $D$ is different from the number of times that $D$ actively interacts with $C$: $t_{CD}\neq t_{DC}$, with $t_{CD}=100$ and $t_{DC}=1$.
(4) The distance between $A$ and $B$ is the same as that between $C$ and $D$: $d_{AB}=d_{CD}$.

Then, if we use the classical formula of the gravity model, we would conclude that the interaction intensity between $A$ and $B$ is the same as that between $C$ and $D$. This is clearly incorrect, because it ignores the essential attributes of the research objects.

In view of the above analysis, and combined with the actual situation, we modify the model as follows:

$$Y_{ij}=g_Y\,\frac{Score_i\cdot Score_j}{d_Y^{\,r}}\,R_{ij}\tag{7}$$

where $Y_{ij}$ stands for the intensity of information flow between province $i$ and province $j$, $g_Y$ is the gravity coefficient of the information network, $r$ is the distance attenuation factor of information space, $d_Y$ is the shortest road distance between province $i$ and province $j$, and $R_{ij}$ represents the intensity of network concern between province $i$ and province $j$. In this paper, we use the parameter estimation method in Wang [31, 32] and set the values of $g_Y$ and $r$ to 0.85 and 1, while the value of $d_Y$ is obtained from Baidu map. We take the average number of searches between the corresponding provinces from January 1, 2016 to January 1, 2017 as the value of $R_{ij}$.
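A minimal numeric sketch of the improved model (7), followed by the min-max standardization the paper applies next, might look as follows. The scores, distances, and search intensities are illustrative stand-ins (loosely shaped like the paper's tables), with $g_Y=0.85$ and $r=1$ as in the paper:

```python
import numpy as np

# Illustrative inputs for 3 research units (not the paper's real data):
# informatization scores from Section 2, shortest road distances d_Y (km),
# and mutual network-concern intensities R_ij.
score = np.array([0.83, 0.61, 0.23])
d_Y = np.array([[np.inf, 554.7, 875.0],
                [554.7, np.inf, 942.0],
                [875.0, 942.0, np.inf]])   # inf on the diagonal: no self-link
R = np.array([[0.0, 2317.0, 820.0],
              [2317.0, 0.0, 701.0],
              [820.0, 701.0, 0.0]])
g_Y, r = 0.85, 1.0  # parameter values used in the paper

# Improved gravity model (7): Y_ij = g_Y * Score_i * Score_j / d_Y^r * R_ij
Y = g_Y * np.outer(score, score) / d_Y**r * R

# Min-max standardization, after which the standardized values form the
# information association matrix M_Y.
M_Y = (Y - Y.min()) / (Y.max() - Y.min())
```

Because both the distance and the concern matrices are symmetric here, the resulting association matrix is symmetric as well, with the strongest pair standardized to 1.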
Then, for the sake of comparison, we standardize the value of $Y_{ij}$ using formula (8):

$$Y'_{ij}=\frac{Y_{ij}-Y_{\min}}{Y_{\max}-Y_{\min}}\tag{8}$$

Finally, we construct the information association matrix as follows:

$$M_Y=\begin{pmatrix}
Y'_{11} & Y'_{12} & \cdots & Y'_{1j} & \cdots & Y'_{1,n-1} & Y'_{1n}\\
Y'_{21} & Y'_{22} & \cdots & Y'_{2j} & \cdots & Y'_{2,n-1} & Y'_{2n}\\
\vdots & \vdots & \ddots & \vdots & & \vdots & \vdots\\
Y'_{i1} & Y'_{i2} & \cdots & Y'_{ij} & \cdots & Y'_{i,n-1} & Y'_{in}\\
\vdots & \vdots & & \vdots & \ddots & \vdots & \vdots\\
Y'_{n-1,1} & Y'_{n-1,2} & \cdots & Y'_{n-1,j} & \cdots & Y'_{n-1,n-1} & Y'_{n-1,n}\\
Y'_{n1} & Y'_{n2} & \cdots & Y'_{nj} & \cdots & Y'_{n,n-1} & Y'_{nn}
\end{pmatrix}\tag{9}$$

In this matrix, for example, $Y'_{nj}$ stands for the standardized intensity of information flow between province $n$ and province $j$. By inputting this matrix into network analysis software, we can find the important nodes in the network.

## 4. Research Object and Experimental Data

In this section, Fuzhou city, the capital of Fujian Province in mainland China, is used as an example for model verification. The sequential information flows that we grab from the Sina micro-blog platform and the Baidu website are applied to study the pattern and characteristics of the cyberspace of Fuzhou.

### 4.1. Brief Introduction of the Research Object

On May 6, 2009, the State Council of the People’s Republic of China issued the “opinions on supporting Fujian province to speed up the construction of the Economic Zone on the West Coast of China (EZWCC).” As an important part of China’s coastal economic zone, the EZWCC is separated from Taiwan by the Taiwan Strait, with the Yangtze River Delta to its north and the Pearl River Delta to its south. It occupies an important position in the layout of regional economic development. The EZWCC is composed of Fujian, Guangdong, Zhejiang, and Jiangxi provinces. Fujian province is the most closely related to Taiwan because of their geographical proximity and historical and cultural similarities. With this unique advantage, Fujian province occupies the dominant position in the EZWCC.

As the capital of Fujian province, Fuzhou city is a hometown of overseas Chinese people; there are many overseas Chinese of Fuzhou descent distributed all around the world. Fuzhou and Taiwan face each other across the Taiwan Strait.
People in these two regions have close links with each other. As one of the important city nodes of the EZWCC, the development of Fuzhou has received attention from both the local government and the Chinese government.

In this section, Fuzhou is chosen as the studied region, and the pattern and characteristics of its cyberspace are analyzed.

### 4.2. Principal Data Acquisition and Preprocessing

In order to analyze the cyberspace pattern of Fuzhou more accurately, two kinds of actual network information flows were chosen as the principal experimental data.

#### 4.2.1. Data about Sina Micro-Blog Users

Micro-blog is a completely open network interaction platform for public participation. Research on cyberspace based on micro-blog reveals the communication characteristics among people more clearly and reflects the influence of information on human relation networks more directly. The geographical attributes of micro-blog users provide the basis for associating cyberspace with realistic geo-space.

According to the report of QuestMobile [33], the number of monthly active users (MAU) of Sina micro-blog increased by more than 45% and reached 341 million by the end of 2016. Table 2 shows that the MAU of Sina micro-blog ranks first. Considering the availability of Sina micro-blog users’ location information, Sina micro-blog is finally chosen as the principal data source.

Table 2 The value list of social apps in 2016.

| Order Number | App’s Name | MAU (million) | Growth Rate | Proportion of high-value users |
| --- | --- | --- | --- | --- |
| 1 | Sina micro-blog | 341.18 | 45.70% | 76.30% |
| 2 | QQ Zone | 92.60 | -35.80% | 70.20% |
| 3 | Momo | 70.03 | 19.50% | 68.70% |
| 4 | Tantan | 15.89 | 128.90% | 76.90% |
| 5 | OPPO paradise | 11.87 | -34.80% | 53.50% |
| 6 | Her community | 7.41 | -29.00% | 69.10% |
| 7 | SNOW | 7.06 | 16692.60% | 75.60% |
| 8 | Douban | 5.18 | -42.90% | 89.70% |
| 9 | Idol | 4.65 | -4.50% | 74.00% |
| 10 | Apploving | 3.95 | 25.80% | 79.30% |

In this paper, one thousand users’ (OTU) information is grabbed.
These users meet the following three conditions: (1) their registration addresses are Fuzhou; (2) they are ordinary users rather than celebrities or “big V” users (verified users who have more than 500,000 fans and use the micro-blog mainly for commercial or personal propaganda rather than sociality); (3) they are active users who both are concerned by one hundred to five hundred fans and actively concern one hundred to five hundred other users. According to the report of CNNIC [34], the proportions of Internet users of different ages at the end of 2016 are shown in Table 3. Accordingly, in order to make the sample more reasonable, the numbers of sampled users in these age ranges are 234, 303, 232, and 231, respectively.

Table 3 The proportions of Internet users of all ages.

| Age range | Proportion |
| --- | --- |
| age ≤ 19 | 23.4% |
| 20 ≤ age ≤ 29 | 30.3% |
| 30 ≤ age ≤ 39 | 23.2% |
| age ≥ 40 | 23.1% |

For the one thousand users, we not only grab their basic information (such as IDs, nicknames, sexes, registered addresses, character signatures, birthdays, marital status, and home links), but also obtain the registered addresses of the top 100 users in the OTU’s concern lists and concerned lists. Finally, 92015 users concerned about the OTU and 55449 users that concern the OTU are found. Among these users, there are 10451 pairs of friends.

There are three kinds of relationships between these users: active concern, passive concern, and mutual concern. Unilateral concern (or being concerned) is a weak relationship, while mutual concern is a strong relationship. If user B is concerned about user A, then the direction of information flow can be described as from A to B.

Three indicators are used to evaluate the intensity of the network information flows between Fuzhou city and other regions.
These three indicators are the intensity of active connection (the number of concerns, $X_a$), the intensity of passive connection (the number of times being concerned, $X_b$), and the intensity of total connection (the sum of concern and being concerned, $X_a+X_b$). The meanings of the three indicators are shown in Table 4. All of the top 100 users in the OTU’s concern lists and concerned lists were classified and counted according to their registered addresses and relationships. The results are shown in Table 5.

Table 4 Three indicators and their meanings.

| Indicator’s name | Meaning of indicator |
| --- | --- |
| Intensity of active connection ($X_a$) | Used to measure the active connection between Fuzhou and other research units; a greater value means the research unit has a greater impact on Fuzhou |
| Intensity of passive connection ($X_b$) | Used to measure the interest of other research units in Fuzhou; a larger value indicates that the unit is more willing to accept information from Fuzhou |
| Intensity of total connection ($X_a+X_b$) | Used to measure the intensity of the total linkage between Fuzhou and other research units; the larger the value, the greater the intensity of interaction |

Table 5 Classification statistics of the users at the provincial level.
| Provinces | Concern ($X_a$) | Be Concerned ($X_b$) | Total ($X_a+X_b$) | Provinces | Concern ($X_a$) | Be Concerned ($X_b$) | Total ($X_a+X_b$) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fujian | 21093 | 15353 | 36446 | Jiangxi | 406 | 655 | 1061 |
| Beijing | 23775 | 2592 | 26367 | Shaanxi | 458 | 597 | 1055 |
| Foreign | 7814 | 2866 | 10680 | Yunnan | 405 | 568 | 973 |
| Guangdong | 7450 | 3197 | 10647 | Heilongjiang | 324 | 602 | 926 |
| Shanghai | 6234 | 1702 | 7936 | Tianjin | 427 | 491 | 918 |
| Zhejiang | 3481 | 1429 | 4910 | Guangxi | 321 | 573 | 894 |
| Jiangsu | 2083 | 1361 | 3444 | Shanxi | 251 | 516 | 767 |
| Hongkong | 1740 | 485 | 2225 | Jilin | 255 | 493 | 748 |
| Taiwan | 1763 | 353 | 2116 | Guizhou | 187 | 423 | 610 |
| Sichuan | 1261 | 845 | 2106 | Gansu | 179 | 424 | 603 |
| Shandong | 998 | 1064 | 2062 | Neimenggu | 182 | 378 | 560 |
| Hubei | 911 | 836 | 1747 | Xinjiang | 146 | 399 | 545 |
| Hunan | 786 | 846 | 1632 | Hainan | 154 | 350 | 504 |
| Henan | 667 | 891 | 1558 | Macao | 138 | 225 | 363 |
| Liaoning | 635 | 693 | 1328 | Ningxia | 61 | 282 | 343 |
| Chongqing | 699 | 617 | 1316 | Qinghai | 61 | 277 | 338 |
| Anhui | 575 | 679 | 1254 | Xizang | 83 | 244 | 327 |
| Hebei | 478 | 721 | 1199 | | | | |

In this paper, the network information flows between Fuzhou city and the other provinces in mainland China are studied. To make $X_a$, $X_b$, and $(X_a+X_b)$ comparable, maximum-value standardization was carried out using formulas (10):

$$X'_a=\frac{X_a}{\max(X_a)}\times 100,\qquad X'_b=\frac{X_b}{\max(X_b)}\times 100,\qquad (X_a+X_b)'=\frac{X_a+X_b}{\max(X_a+X_b)}\times 100\tag{10}$$

Because the users in the concern lists and concerned lists are dominated by Fujian province, Fujian province is analyzed separately; when the values of $\max(X_a)$, $\max(X_b)$, and $\max(X_a+X_b)$ are selected, Fujian province is excluded.

#### 4.2.2. Data about Baidu Index

The Baidu Index (https://index.baidu.com/) is one of the most important data sharing and statistical analysis platforms. It records a large amount of Internet users’ behavior data. With its help, we can obtain the number of times that one specified keyword was retrieved in different areas and at different times, which can reflect the intensity of the connection between two regions to some extent [25, 35–39]. In this paper, the indexes between Fuzhou city and all the provinces in mainland China are obtained.
The results are shown in Table 6.

Table 6 Baidu indexes between Fuzhou and all the provinces in mainland China.

| Unit A:Unit B | Baidu Index | Unit A:Unit B | Baidu Index | Total |
| --- | --- | --- | --- | --- |
| Fujian:Fuzhou | 1793 | Fuzhou:Fujian | 524 | 2317 |
| Guangdong:Fuzhou | 653 | Fuzhou:Guangdong | 167 | 820 |
| Shanghai:Fuzhou | 417 | Fuzhou:Shanghai | 362 | 779 |
| Beijing:Fuzhou | 421 | Fuzhou:Beijing | 304 | 725 |
| Zhejiang:Fuzhou | 534 | Fuzhou:Zhejiang | 167 | 701 |
| Jiangsu:Fuzhou | 479 | Fuzhou:Jiangsu | 202 | 681 |
| Sichuan:Fuzhou | 406 | Fuzhou:Sichuan | 193 | 599 |
| Shandong:Fuzhou | 316 | Fuzhou:Shandong | 182 | 498 |
| Anhui:Fuzhou | 249 | Fuzhou:Anhui | 216 | 465 |
| Hunan:Fuzhou | 233 | Fuzhou:Hunan | 167 | 400 |
| Shanxi:Fuzhou | 240 | Fuzhou:Shanxi | 150 | 390 |
| Tianjin:Fuzhou | 186 | Fuzhou:Tianjin | 194 | 380 |
| Hebei:Fuzhou | 248 | Fuzhou:Hebei | 126 | 374 |
| Henan:Fuzhou | 316 | Fuzhou:Henan | 57 | 373 |
| Liaoning:Fuzhou | 222 | Fuzhou:Liaoning | 145 | 367 |
| Guangxi:Fuzhou | 186 | Fuzhou:Guangxi | 167 | 353 |
| Jiangxi:Fuzhou | 280 | Fuzhou:Jiangxi | 67 | 347 |
| Hubei:Fuzhou | 279 | Fuzhou:Hubei | 50 | 329 |
| Hainan:Fuzhou | 127 | Fuzhou:Hainan | 174 | 301 |
| Jilin:Fuzhou | 167 | Fuzhou:Jilin | 133 | 300 |
| Chongqing:Fuzhou | 180 | Fuzhou:Chongqing | 90 | 270 |
| Ningxia:Fuzhou | 87 | Fuzhou:Ningxia | 164 | 251 |
| Heilongjiang:Fuzhou | 197 | Fuzhou:Heilongjiang | 45 | 242 |
| Shaanxi:Fuzhou | 191 | Fuzhou:Shaanxi | 49 | 240 |
| Qinghai:Fuzhou | 75 | Fuzhou:Qinghai | 161 | 236 |
| Yunnan:Fuzhou | 167 | Fuzhou:Yunnan | 67 | 234 |
| Guizhou:Fuzhou | 158 | Fuzhou:Guizhou | 66 | 224 |
| Xizang:Fuzhou | 41 | Fuzhou:Xizang | 169 | 210 |
| Neimenggu:Fuzhou | 142 | Fuzhou:Neimenggu | 50 | 192 |
| Gansu:Fuzhou | 140 | Fuzhou:Gansu | 50 | 190 |
| Xinjiang:Fuzhou | 127 | Fuzhou:Xinjiang | 56 | 183 |

There may be greater contingency if we used only the number of passive retrievals to verify the intensity of passive concern, or only the number of active retrievals to verify the intensity of active concern. Therefore, the sum of passive and active retrievals was used to verify the final intensity of the total connections between two research units.
### 4.2.1. Data about Sina Micro-Blog Users

Micro-blog is a completely open network interaction platform for public participation. Research on cyberspace based on micro-blog reveals the communication characteristics among people more clearly and reflects the influence of information on human relationship networks more directly. The geographical attributes of micro-blog users provide the basis for associating cyberspace with realistic geo-space.

According to the report of QuestMobile [33], the number of monthly active users (MAU) of Sina micro-blog grew by more than 45% and reached 341 million by the end of 2016. Table 2 shows that Sina micro-blog ranks first by MAU. Considering the availability of Sina micro-blog users' location information, Sina micro-blog is finally chosen as the principal data source.

Table 2: The value list of social apps in 2016.

| Rank | App name | MAU (million) | Growth rate | Proportion of high-value users |
|---|---|---|---|---|
| 1 | Sina micro-blog | 341.18 | 45.70% | 76.30% |
| 2 | QQ Zone | 92.60 | -35.80% | 70.20% |
| 3 | Momo | 70.03 | 19.50% | 68.70% |
| 4 | Tantan | 15.89 | 128.90% | 76.90% |
| 5 | OPPO paradise | 11.87 | -34.80% | 53.50% |
| 6 | Her community | 7.41 | -29.00% | 69.10% |
| 7 | SNOW | 7.06 | 16692.60% | 75.60% |
| 8 | Douban | 5.18 | -42.90% | 89.70% |
| 9 | Idol | 4.65 | -4.50% | 74.00% |
| 10 | Apploving | 3.95 | 25.80% | 79.30% |

In this paper, one thousand users' (OTU) information is grabbed.
These users meet the following three conditions: (1) their registration addresses are Fuzhou; (2) they are ordinary users rather than celebrities or "big V" users (verified users with more than 500,000 fans who use micro-blog mainly for commercial or personal propaganda rather than for socializing); (3) they are active users who are followed by one hundred to five hundred fans and also actively follow one hundred to five hundred other users. According to the report of CNNIC [34], the proportions of Internet users of different ages at the end of 2016 are shown in Table 3. Accordingly, to make the sample more representative, the numbers of sampled users in these four age ranges are 234, 303, 232, and 231, respectively.

Table 3: The proportions of Internet users of all ages.

| Age range | Proportion |
|---|---|
| age ≤ 19 | 23.4% |
| 20 ≤ age ≤ 29 | 30.3% |
| 30 ≤ age ≤ 39 | 23.2% |
| age ≥ 40 | 23.1% |

For the one thousand users, we grab not only their basic information (such as IDs, nicknames, sexes, registered addresses, character signatures, birthdays, marital status, and home links) but also the registered addresses of the top 100 users in the OTU's concern lists and concerned lists. In total, 92015 users in the OTU's concern lists and 55449 users in the OTU's concerned lists are found; among these users, there are 10451 pairs of friends.

There are three kinds of relationships between these users: active concern, passive concern, and mutual concern. A unilateral concern (or be-concerned) relation is a weak relationship, while mutual concern is a strong relationship. If user B is concerned about user A, then the direction of information flow can be described as from A to B. Three indicators are used to evaluate the intensity of the network information flows between Fuzhou city and other regions.
These three indicators are the intensity of active connection (the number of concerns, Xa), the intensity of passive connection (the number of times of being concerned, Xb), and the intensity of total connection (the sum of concern and be-concerned counts, Xa + Xb). The meanings of the three indicators are shown in Table 4. All of the top 100 users in the OTU's concern lists and concerned lists were classified and counted according to their registered addresses and relationships; the results are shown in Table 5.

Table 4: Three indicators and their meanings.

| Indicator | Meaning |
|---|---|
| Intensity of active connection (Xa) | Measures the active connection between Fuzhou and other research units; the greater the value, the greater the research unit's impact on Fuzhou |
| Intensity of passive connection (Xb) | Measures the interest of other research units in Fuzhou; the larger the value, the more willing the unit is to accept information from Fuzhou |
| Intensity of total connection (Xa + Xb) | Measures the intensity of the total linkage between Fuzhou and other research units; the larger the value, the greater the intensity of interaction |

Table 5: Classification statistics of the users at the provincial level.
| Province/region | Concern (Xa) | Be concerned (Xb) | Total (Xa + Xb) |
|---|---|---|---|
| Fujian | 21093 | 15353 | 36446 |
| Beijing | 23775 | 2592 | 26367 |
| Foreign | 7814 | 2866 | 10680 |
| Guangdong | 7450 | 3197 | 10647 |
| Shanghai | 6234 | 1702 | 7936 |
| Zhejiang | 3481 | 1429 | 4910 |
| Jiangsu | 2083 | 1361 | 3444 |
| Hongkong | 1740 | 485 | 2225 |
| Taiwan | 1763 | 353 | 2116 |
| Sichuan | 1261 | 845 | 2106 |
| Shandong | 998 | 1064 | 2062 |
| Hubei | 911 | 836 | 1747 |
| Hunan | 786 | 846 | 1632 |
| Henan | 667 | 891 | 1558 |
| Liaoning | 635 | 693 | 1328 |
| Chongqing | 699 | 617 | 1316 |
| Anhui | 575 | 679 | 1254 |
| Hebei | 478 | 721 | 1199 |
| Jiangxi | 406 | 655 | 1061 |
| Shaanxi | 458 | 597 | 1055 |
| Yunnan | 405 | 568 | 973 |
| Heilongjiang | 324 | 602 | 926 |
| Tianjin | 427 | 491 | 918 |
| Guangxi | 321 | 573 | 894 |
| Shanxi | 251 | 516 | 767 |
| Jilin | 255 | 493 | 748 |
| Guizhou | 187 | 423 | 610 |
| Gansu | 179 | 424 | 603 |
| Neimenggu | 182 | 378 | 560 |
| Xinjiang | 146 | 399 | 545 |
| Hainan | 154 | 350 | 504 |
| Macao | 138 | 225 | 363 |
| Ningxia | 61 | 282 | 343 |
| Qinghai | 61 | 277 | 338 |
| Xizang | 83 | 244 | 327 |

In this paper, the network information flows between Fuzhou city and the other provinces in mainland China are studied. To make Xa, Xb, and (Xa + Xb) comparable, maximum-value standardization is carried out using formula (10):

$$
X_a' = \frac{X_a}{\max X_a}\times 100,\qquad
X_b' = \frac{X_b}{\max X_b}\times 100,\qquad
(X_a+X_b)' = \frac{X_a+X_b}{\max\,(X_a+X_b)}\times 100.
\tag{10}
$$

Because the users in the concern lists and concerned lists are dominated by Fujian province, Fujian province is analyzed separately; when the values of max(Xa), max(Xb), and max(Xa + Xb) are selected, Fujian province is excluded.

### 4.2.2. Data about Baidu Index

The Baidu Index (https://index.baidu.com/) is one of the most important data-sharing and statistical-analysis platforms; it records a large amount of Internet users' behavior data. With its help, we can obtain the number of times that a specified keyword was retrieved in different areas and at different times, which can reflect the intensity of the connection between two regions to some extent [25, 35–39]. In this paper, the indexes between Fuzhou city and all the provinces in mainland China are obtained.
The results are shown in Table 6.

Table 6: Baidu indexes between Fuzhou and all the provinces in mainland China.

| Unit A | Baidu index (A:Fuzhou) | Baidu index (Fuzhou:A) | Total |
|---|---|---|---|
| Fujian | 1793 | 524 | 2317 |
| Guangdong | 653 | 167 | 820 |
| Shanghai | 417 | 362 | 779 |
| Beijing | 421 | 304 | 725 |
| Zhejiang | 534 | 167 | 701 |
| Jiangsu | 479 | 202 | 681 |
| Sichuan | 406 | 193 | 599 |
| Shandong | 316 | 182 | 498 |
| Anhui | 249 | 216 | 465 |
| Hunan | 233 | 167 | 400 |
| Shanxi | 240 | 150 | 390 |
| Tianjin | 186 | 194 | 380 |
| Hebei | 248 | 126 | 374 |
| Henan | 316 | 57 | 373 |
| Liaoning | 222 | 145 | 367 |
| Guangxi | 186 | 167 | 353 |
| Jiangxi | 280 | 67 | 347 |
| Hubei | 279 | 50 | 329 |
| Hainan | 127 | 174 | 301 |
| Jilin | 167 | 133 | 300 |
| Chongqing | 180 | 90 | 270 |
| Ningxia | 87 | 164 | 251 |
| Heilongjiang | 197 | 45 | 242 |
| Shaanxi | 191 | 49 | 240 |
| Qinghai | 75 | 161 | 236 |
| Yunnan | 167 | 67 | 234 |
| Guizhou | 158 | 66 | 224 |
| Xizang | 41 | 169 | 210 |
| Neimenggu | 142 | 50 | 192 |
| Gansu | 140 | 50 | 190 |
| Xinjiang | 127 | 56 | 183 |

Using only the number of passive retrievals to verify the intensity of passive concern, or only the number of active retrievals to verify the intensity of active concern, could introduce considerable contingency. Therefore, the sum of passive and active retrievals is used to verify the final intensity of total connection between two research units.

## 5. Spatial Pattern Analysis of Cyberspace in Fuzhou

### 5.1. Spatial Difference Analysis on the Intensity of the Network Information Flows

*Cyberspace breaks through the limitation of time and geographical space.* Table 5 shows that the cyberspace of Fuzhou is very wide: the users in both the concern lists and the concerned lists are distributed across all provinces in mainland China. Moreover, since Fuzhou is a famous hometown of overseas Chinese, there are also network information flows between Fuzhou users and some foreign users.

*There are grade differences in cyberspace.* Taking into account the actual situation of the study region and the data selected in this paper, all provinces in mainland China are divided into seven levels by the values of Xa, Xb, and (Xa + Xb), and into five levels by the values of Xa′, Xb′, and (Xa + Xb)′. The results are shown in Figures 2(a), 2(b), and 2(c). The solid lines with arrows in Figures 2(a) and 2(b) show the direction of concern; for example, "Fujian←Xinjiang" means that Xinjiang takes the initiative to pay attention to Fujian, that is, Fujian is paid attention to by Xinjiang. The solid lines without arrows in Figure 2(c) indicate the total intensity of attention between two units. In all three figures, the thicker the solid line, the stronger the attention.

Figure 2: Grade classification at the provincial level. (a) Passive connections; (b) active connections; (c) total connections.

In general, there is an obvious grading phenomenon in the intensity of connections, with intensities weakening gradually from east to west: (1) the provinces with the highest intensity are mainly distributed in the southeast coastal area; (2) most of the central provinces are at the middle level; (3) the western provinces are at the lowest level.

*Regional embedding still exists in cyberspace.*
First, the results of the grade division show that although cyberspace breaks through the limits of time and geographical space, the geographic distance factor still has some influence on the spatial pattern of cyberspace; the distance attenuation phenomenon exists in cyberspace to some extent. Second, among the research units, Fujian province has the strongest connection with Fuzhou in both the intensity of passive connection and the intensity of total connection. Although information technology has compressed space-time distance and expanded the scope of social communication, information connections within the local domain occupy the dominant position in Fuzhou's cyberspace because of geographical proximity and sociocultural similarity.

*The information flow in cyberspace is asymmetric.* "Information potential" refers to the capability of picking up, using, transmitting, formulating, aggregating, and processing information. Differences in "information potential" lead to asymmetry in information flows: regions are more likely to establish contact with other regions that have higher "information potential." For example, Beijing, Shanghai, Zhejiang, and Jiangsu pay far less attention to Fuzhou than Fuzhou pays to them.

*The pattern of cyberspace can be affected by population flow (labor input and output).* Taking Sichuan province as an example, the distance between Sichuan and Fuzhou is much greater than that between Jiangxi and Fuzhou, yet the connection intensity of the former pair is stronger than that of the latter. This is primarily because Sichuan is a major labor-exporting province in mainland China, and much of its labor has flowed to Fuzhou, Xiamen, Quanzhou, and other cities in Fujian province. Population flows inevitably bring about information flows.

*The pattern of cyberspace is highly correlated with the regional economic development pattern.*
Excluding Fujian province, strong connections occur between Fuzhou and some economically developed areas, such as Zhejiang, Shanghai, Guangdong, Jiangsu, and Beijing. This means that the pattern of cyberspace is also affected by the level of economic development, primarily because developed areas have higher "information potential." The influence of "information potential" is sometimes even greater than that of geographic distance; taking Beijing as an example, Fuzhou is concerned about Beijing much more actively than about Fujian province.

Then, the information association matrix $D_{ij}^{Y}$ is input into the UCINET software. If there is an information connection between two provinces, a connection line is drawn between them; in this way, the information spatial association network diagram at the provincial level in mainland China is generated. The degree centrality is used to evaluate the importance of nodes in the information network: the higher a node's degree centrality, the larger the area of the graph used to depict it. The result, Figure 3, shows that economically developed provinces, such as Beijing, Shanghai, Guangdong, and Zhejiang, are usually the key nodes in cyberspace, and that provinces with relatively sluggish economic development are willing to contact these key nodes more frequently.

Figure 3: The information spatial association network diagram at the provincial level in mainland China.

### 5.2. Verifying the Effectiveness of Grade Division

All the analysis results in Section 5.1 are based on the grade division. Therefore, in this section, another kind of data (from the Baidu Index) is used to verify the effectiveness of the grade division. From Tables 5 and 6, the rankings of the numbers of total connections and total retrievals for each province can be obtained; the detailed rankings are shown in Table 7.

Table 7: Total connection rankings and total retrieval rankings.
| Province | Total connection ranking | Total retrieval ranking |
|---|---|---|
| Anhui | 14 | 9 |
| Fujian | 1 | 1 |
| Guangdong | 3 | 2 |
| Guizhou | 24 | 27 |
| Hebei | 15 | 13 |
| Heilongjiang | 19 | 23 |
| Hunan | 10 | 10 |
| Jiangsu | 6 | 6 |
| Liaoning | 12 | 15 |
| Ningxia | 29 | 22 |
| Shandong | 8 | 8 |
| Shanxi | 17 | 11 |
| Sichuan | 7 | 7 |
| Xizang | 31 | 28 |
| Yunnan | 18 | 26 |
| Chongqing | 13 | 21 |
| Beijing | 2 | 4 |
| Gansu | 25 | 30 |
| Guangxi | 21 | 16 |
| Hainan | 28 | 19 |
| Henan | 11 | 14 |
| Hubei | 9 | 18 |
| Jilin | 23 | 20 |
| Jiangxi | 16 | 17 |
| Neimenggu | 26 | 29 |
| Qinghai | 30 | 25 |
| Shaanxi | 22 | 24 |
| Shanghai | 4 | 3 |
| Tianjin | 20 | 12 |
| Xinjiang | 27 | 31 |
| Zhejiang | 5 | 5 |

Figure 4 is visualized based on the information in Table 7. The correlation between the two discrete ranking curves is then calculated according to formula (11), and a value of 87.10% is obtained:

$$
\mathrm{Correl}(X,Y)=\frac{\sum_{i=1}^{31}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{\sqrt{\sum_{i=1}^{31}\left(x_i-\bar{x}\right)^2\sum_{i=1}^{31}\left(y_i-\bar{y}\right)^2}}.
\tag{11}
$$

Figure 4: Correlation between total connection rankings and total retrieval rankings.

Although the two kinds of data produce small differences in some provinces' rankings, most provinces do not change much in rank. The same conclusion can be reached by comparing Figure 2(c) with Figure 5. Therefore, the conclusions obtained from the first data set in Section 5.1 have high credibility. Note that the provinces or cities in Figure 5 are classified according to the total intensity of mutual retrieval between them and Fuzhou city.

Figure 5: Grade division based on total retrieval.

### 5.3. Analysis of the Possible Influential Factors

In this section, influential factors that can be quantified and may affect the pattern of cyberspace in Fuzhou are analyzed. Geographic distance, the "Internet plus" index, and the regional informatization development level are considered the most likely influential factors. The "Internet plus" index is an important indicator of the level of regional development.
It consists of four subindexes: "Internet plus infrastructure," "Internet plus industry," "Internet plus innovation," and "Internet plus smart city." These subindexes in turn consist of 14 first-class indexes and 135 second-class indexes, covering social networking, news, video, cloud computing, and the 19 major subindustries of the three industries. The index uses Tencent users' digital economic behavior as basic data and also collects data from Didi Taxi, Meituan Dianping, Jingdong Mall, Ctrip, and some other Internet companies. Because the data are very comprehensive, the index can reflect the degree to which the Internet is combined with all walks of life and how well the Internet is utilized.

Geographic distances between Fuzhou city and the other research units are measured with the help of Baidu Map; the values and rankings of the "Internet plus" index for each research unit are obtained from T. R. Institute [40]; and the informatization development level of each research unit is calculated with the method proposed in Section 2. These factors are shown in Tables 8–10.

Table 8: Distances between Fuzhou and other provinces.

| Research unit | Distance (km) |
|---|---|
| Zhejiang | 554.7 |
| Jiangxi | 626.5 |
| Shanghai | 775.9 |
| Guangdong | 865.3 |
| Anhui | 875.0 |
| Hunan | 942.0 |
| Jiangsu | 1005.2 |
| Hubei | 1167.5 |
| Henan | 1295.7 |
| Guangxi | 1375.8 |
| Shandong | 1422.8 |
| Guizhou | 1560.0 |
| Hainan | 1587.1 |
| Shaanxi | 1712.6 |
| Hebei | 1758.9 |
| Chongqing | 1760.7 |
| Tianjin | 1772.2 |
| Shanxi | 1812.9 |
| Beijing | 1887.1 |
| Sichuan | 2044.2 |
| Yunnan | 2098.7 |
| Ningxia | 2263.8 |
| Liaoning | 2406.9 |
| Neimenggu | 2468.2 |
| Gansu | 2547.2 |
| Jilin | 2694.9 |
| Qinghai | 2996.2 |
| Heilongjiang | 3277.6 |
| Xizang | 4097.6 |
| Xinjiang | 4375.1 |

Table 9: "China Internet plus" indexes 2016 for each province.
| Ranking | Province | "China Internet plus" index |
|---|---|---|
| 1 | Guangdong | 18.072 |
| 2 | Beijing | 11.256 |
| 3 | Shanghai | 6.179 |
| 4 | Zhejiang | 5.512 |
| 5 | Jiangsu | 5.031 |
| 6 | Fujian | 3.967 |
| 7 | Shandong | 3.709 |
| 8 | Sichuan | 3.591 |
| 9 | Henan | 3.202 |
| 10 | Chongqing | 3.021 |
| 11 | Hunan | 2.884 |
| 12 | Hubei | 2.797 |
| 13 | Hebei | 2.579 |
| 14 | Liaoning | 2.239 |
| 15 | Shaanxi | 2.153 |
| 16 | Anhui | 2.069 |
| 17 | Guangxi | 1.856 |
| 18 | Jiangxi | 1.764 |
| 19 | Shanxi | 1.702 |
| 20 | Yunnan | 1.586 |
| 21 | Tianjin | 1.491 |
| 22 | Heilongjiang | 1.45 |
| 23 | Hainan | 1.246 |
| 24 | Jilin | 1.23 |
| 25 | Neimenggu | 1.166 |
| 26 | Guizhou | 1.165 |
| 27 | Xinjiang | 0.933 |
| 28 | Gansu | 0.917 |
| 29 | Ningxia | 0.785 |
| 30 | Qinghai | 0.457 |
| 31 | Xizang | 0.35 |

Table 10: Composite scores and rankings for each province.

| Province | Composite score | Information level ranking |
|---|---|---|
| Guangdong | 0.8322 | 1 |
| Jiangsu | 0.614 | 2 |
| Zhejiang | 0.5805 | 3 |
| Shanghai | 0.5223 | 4 |
| Beijing | 0.4595 | 5 |
| Shandong | 0.4173 | 6 |
| Sichuan | 0.3704 | 7 |
| Hubei | 0.3584 | 8 |
| Hunan | 0.3245 | 9 |
| Fujian | 0.3242 | 10 |
| Henan | 0.3232 | 11 |
| Hebei | 0.3212 | 12 |
| Liaoning | 0.3159 | 13 |
| Neimenggu | 0.3133 | 14 |
| Tianjin | 0.3099 | 15 |
| Anhui | 0.3059 | 16 |
| Shaanxi | 0.2879 | 17 |
| Heilongjiang | 0.2651 | 18 |
| Jiangxi | 0.2471 | 19 |
| Shanxi | 0.2348 | 20 |
| Guangxi | 0.2314 | 21 |
| Yunnan | 0.2305 | 22 |
| Chongqing | 0.2253 | 23 |
| Jilin | 0.2162 | 24 |
| Xinjiang | 0.1724 | 25 |
| Gansu | 0.1677 | 26 |
| Qinghai | 0.1599 | 27 |
| Guizhou | 0.1527 | 28 |
| Ningxia | 0.1474 | 29 |
| Xizang | 0.0971 | 30 |
| Hainan | 0.0866 | 31 |

SPSS (Statistical Product and Service Solutions) software is used to analyze the correlation between the rankings of the three factors and the rankings of the connection intensities. The analysis results are shown in Table 11.

Table 11: Pearson correlations at the provincial level.
| | Information level rankings | Distance rankings | "China Internet plus index" rankings | Connection intensity rankings |
|---|---|---|---|---|
| Information level rankings | 1 | .634 | .874 | .948 |
| Distance rankings | .634 | 1 | .681 | .646 |
| "China Internet plus index" rankings | .874 | .681 | 1 | .972 |
| Connection intensity rankings | .948 | .646 | .972 | 1 |

Table 11 shows that high correlations exist between the connection intensity and the "Internet plus" index, as well as between the connection intensity and the information level; the "Internet plus" index has the highest positive correlation with the intensities. The correlation analysis also illustrates that there is a certain correlation between the distance factor and the connection intensity, but distance is no longer the primary factor. Some other factors, such as the government's economic policies and cultural similarities, can also affect the connection intensity, but they are difficult to quantify.
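The 87.10% figure reported in Section 5.2 can be reproduced directly from formula (11) and the rankings transcribed from Table 7. The sketch below is our own minimal Python check, not the authors' SPSS workflow; since both columns are pure rankings (permutations of 1–31), the Pearson coefficient of formula (11) coincides here with a Spearman rank correlation.

```python
import math

# Rankings transcribed from Table 7:
# province -> (total connection ranking, total retrieval ranking)
rankings = {
    "Anhui": (14, 9), "Fujian": (1, 1), "Guangdong": (3, 2),
    "Guizhou": (24, 27), "Hebei": (15, 13), "Heilongjiang": (19, 23),
    "Hunan": (10, 10), "Jiangsu": (6, 6), "Liaoning": (12, 15),
    "Ningxia": (29, 22), "Shandong": (8, 8), "Shanxi": (17, 11),
    "Sichuan": (7, 7), "Xizang": (31, 28), "Yunnan": (18, 26),
    "Chongqing": (13, 21), "Beijing": (2, 4), "Gansu": (25, 30),
    "Guangxi": (21, 16), "Hainan": (28, 19), "Henan": (11, 14),
    "Hubei": (9, 18), "Jilin": (23, 20), "Jiangxi": (16, 17),
    "Neimenggu": (26, 29), "Qinghai": (30, 25), "Shaanxi": (22, 24),
    "Shanghai": (4, 3), "Tianjin": (20, 12), "Xinjiang": (27, 31),
    "Zhejiang": (5, 5),
}

def pearson(x, y):
    """Pearson correlation coefficient, exactly as in formula (11)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [v[0] for v in rankings.values()]
y = [v[1] for v in rankings.values()]
r = pearson(x, y)
print(f"Correl(X, Y) = {r:.4f}")  # -> Correl(X, Y) = 0.8710
```

The result matches the 87.10% stated in the text, which also confirms that the Table 7 rankings are internally consistent.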
For example, “Fujian←Xinjiang” means thatXijiang takes the initiative to pay attention toFujian, which also means thatFujian is paid attention to byXinjiang. The solid line without arrow in Figure 2(c) means the total intensity of attention between two units. In all of these three figures, the thicker the solid line is, the stronger the attention is.Figure 2 Grade classification at provincial level. (a) Passive connections (b) Active connections (c) Total connectionsIn general, there is an obvious grading phenomenon in the intensity of connections, such that, from the east to the west, the intensities become gradually weaker: (1) the provinces with the highest intensity are mainly distributed in the southeast coastal area; (2) most of the central provinces are in the middle level; (3) the western provinces are at the lowest level.Regional Embedding Still Exists in Cyberspace. First, the results of grade division show that although cyberspace breaks through the limit of time and geographical space, the geographic distance factor still has some influence on the spatial pattern of cyberspace. The distance attenuation phenomenon also exists in cyberspace to some extent. Second, among these research units, Fujian province has the strongest connection with Fuzhou in the intensity of passive connection and total connection. Although information technology has compressed the space-time distance and expanded the scope of social communication, the information connections within the local domain occupied the dominant position in its cyberspace because of their geographical proximity and social cultural similarity.The Information Flow in Cyberspace Is Asymmetric. “Information potential” refers to the capability to picking up, using, transmitting, formulating, aggregating, and processing information. The difference of “information potential” also leads to the asymmetry of information flow. 
Regions are more likely to establish contact with other regions with higher “information potential.” For example, Beijing, Shanghai, Zhejiang, and Jiangsu are far less concerned about Fuzhou than Fuzhou’s attention to them.The Pattern of Cyberspace Can Be Affected by Population Flow (Labor Input and Output). Taking Sichuan province as an example, the distance between Sichuan and Fuzhou is much farther than that between Jiangxi and Fuzhou, but the connection intensity between the former two is stronger than that of the latter. These results are primarily because Sichuan is a big province of labor output in mainland China. Most of the laborers have been outputting to Fuzhou, Xiamen, Quanzhou, and some other cities in Fujian province. The flows of population will inevitably bring about information flows.The Pattern of Cyberspace Has a High Correlation with Regional Economic Development Pattern. Excluding Fujian province, strong connections occurred between Fuzhou and some economically developed areas, such as Zhejiang, Shanghai, Guangdong, Jiangsu, and Beijing. It means that the pattern of cyberspace is also affected by the level of economic development. It is primarily because the developed areas have higher “information potential.” The influence of “information potential” sometimes is even greater than that of geographic distance. Taking Beijing as an example, Fuzhou is concerned about Beijing much more actively than Fujian province.Then, the information association matrixDijY is inputted into the UCINET software. If there is information connection between two provinces, there will be a connection line between them. In this way, the information spatial association network diagram at provincial level in mainland China is generated. In this paper, the degree centrality is used to evaluate the importance of nodes in the information network. If the degree centrality of nodes is higher, then the larger area of the graph is used to describe the node. 
Finally, we get Figure 3, which shows that economically developed provinces, such as Beijing, Shanghai, Guangdong, and Zhejiang, usually are the key nodes in the cyberspace. Other provinces with relatively sluggish economic development are willing to contact these key nodes more frequently.Figure 3 The information spatial association network diagram at provincial level in mainland China. ## 5.2. Verifying the Effectiveness of Grade Division All the analysis results in Section5.1 are based on the grade division. Therefore, in this section, another kind of data (data from Baidu Index) is used to verify the effectiveness of grade division.From Tables5 and 6, the rankings of the number of total connections and total retrievals for each province can be obtained. The detailed rankings are shown in Table 7.Table 7 Total connection rankings and total retrieval rankings. Provinces Total connection rankings Total retrieval rankings Provinces Total connection rankings Total retrieval rankings Anhui 14 9 Fujian 1 1 Guangdong 3 2 Guizhou 24 27 Hebei 15 13 Heilongjiang 19 23 Hunan 10 10 Jiangsu 6 6 Liaoning 12 15 Ningxia 29 22 Shandong 8 8 Shanxi 17 11 Sichuan 7 7 Xizang 31 28 Yunnan 18 26 Chongqing 13 21 Beijing 2 4 Gansu 25 30 Guangxi 21 16 Hainan 28 19 Henan 11 14 Hubei 9 18 Jilin 23 20 Jiangxi 16 17 Neimenggu 26 29 Qianghai 30 25 Shaanxi 22 24 Shanghai 4 3 Tianjin 20 12 Xinjiang 27 31 Zhejiang 5 5Figure4 is visualized based on the information in Table 7. And then the correlation between the two discrete curves is calculated according to formula (11) and 87.10% is achieved.(11)CorrelX,Y=∑i=131xi-x-yi-y-∑i=131xi-x-2∑i=131yi-y-2Figure 4 Correlation between total connection rankings and total retrieval rankings.Although there are little differences in some provinces’ rankings by using the two different kinds of data, most of the provinces do not change too much in their rankings. The same conclusions can be achieved by comparing Figure2(c) with Figure 5. 
Therefore, the conclusions obtained based on the first set of data in Section 5.1 have high credibility. What needs to be explained is that the provinces or cities in Figure 5 are classified according to the total intensity of mutual retrieval between them and Fuzhou city.Figure 5 Grade division based on total retrieval. ## 5.3. Analysis of the Possible Influential Factors In this section, some influential factors which can be quantified and may have impacts on the pattern of cyberspace in Fuzhou are analyzed.Geographic distance, the “Internet plus” index, and regional informatization development level are considered as the most likely influential factors that can affect the pattern of cyberspace. The “Internet plus” index is an important indicator to reflect the level of regional development. It consists of four subindexes: “Internet plus infrastructure,” “Internet plus industry,” “Internet plus innovation,” and “Internet plus smart city.” Further, these subindexes consists of 14 first-class indexes and 135 second-class indexes. Its content covers social, news, video, cloud computing, and the 19 major subindustries of the three industries. It uses the Tencent users’ digital economic behavior as basic data and collects data from Didi Taxi, Meituan Dianping, Jingdong Mall, Ctrip, and some other Internet companies. Because the data are very comprehensive, it can reflect the degree of combination of Internet and all walks of life and the ability of the Internet utilization.Geographic distances between Fuzhou city and other research units are measured with the help of Baidu map, the value of “Internet plus” indexes and rankings for each research unit are obtained from T. R. Institue [40], and the informatization development level of each research unit can be calculated with the method that we proposed in Section 2. These factors are shown in Tables 8–10.Table 8 Distances between Fuzhou and other provinces. 
Research Unit Distance(km) Research Unit Distance(km) Research Unit Distance(km) Research Unit Distance(km) Zhejiang 554.7 Jiangxi 626.5 Shanghai 775.9 Guangdong 865.3 Anhui 875.0 Hunan 942.0 Jiangsu 1005.2 Hubei 1167.5 Henan 1295.7 Guangxi 1375.8 Shandong 1422.8 Guizhou 1560.0 Hainan 1587.1 Shanxi 1712.6 Hebei 1758.9 Chongqing 1760.7 Tianjin 1772.2 Shanxi 1812.9 Beijing 1887.1 Sichaun 2044.2 Yunnan 2098.7 Ningxia 2263.8 Liaoning 2406.9 Neimenggu 2468.2 Gansu 2547.2 Jilini 2694.9 Qinghai 2996.2 Heilongjiang 3277.6 Xizang 4097.6 Xinjiang 4375.1Table 9 “China Internet plus” indexes 2016 for each province. Rankings Provinces “China Internet plus” indexes Rankings Provinces “China Internet plus” indexes Rankings Provinces “China Internet plus” indexes 1 Guangdong 18.072 2 Beijing 11.256 3 Shanghai 6.179 4 Zhejiang 5.512 5 Jiangsu 5.031 6 Fujian 3.967 7 Shandong 3.709 8 Sichuan 3.591 9 Henan 3.202 10 Chongqing 3.021 11 Hunan 2.884 12 Hubei 2.797 13 Hebei 2.579 14 Liaoning 2.239 15 Shanxi 2.153 16 Anhui 2.069 17 Guangxi 1.856 18 Jiangxi 1.764 19 Shanxi 1.702 20 Yunnan 1.586 21 Tainjin 1.491 22 Heilongjiang 1.45 23 Hainan 1.246 24 Jilin 1.23 25 Neimenggu 1.166 26 Guizhou 1.165 27 Xinjiang 0.933 28 Gansu 0.917 29 Ningxia 0.785 30 Qinghai 0.457 31 Xizang 0.35Table 10 Composite scores and rankings for each province. 
| Province | Composite score | Information level ranking | Province | Composite score | Information level ranking |
|---|---|---|---|---|---|
| Guangdong | 0.8322 | 1 | Shaanxi | 0.2879 | 17 |
| Jiangsu | 0.614 | 2 | Heilongjiang | 0.2651 | 18 |
| Zhejiang | 0.5805 | 3 | Jiangxi | 0.2471 | 19 |
| Shanghai | 0.5223 | 4 | Shanxi | 0.2348 | 20 |
| Beijing | 0.4595 | 5 | Guangxi | 0.2314 | 21 |
| Shandong | 0.4173 | 6 | Yunnan | 0.2305 | 22 |
| Sichuan | 0.3704 | 7 | Chongqing | 0.2253 | 23 |
| Hubei | 0.3584 | 8 | Jilin | 0.2162 | 24 |
| Hunan | 0.3245 | 9 | Xinjiang | 0.1724 | 25 |
| Fujian | 0.3242 | 10 | Gansu | 0.1677 | 26 |
| Henan | 0.3232 | 11 | Qinghai | 0.1599 | 27 |
| Hebei | 0.3212 | 12 | Guizhou | 0.1527 | 28 |
| Liaoning | 0.3159 | 13 | Ningxia | 0.1474 | 29 |
| Neimenggu | 0.3133 | 14 | Xizang | 0.0971 | 30 |
| Tianjin | 0.3099 | 15 | Hainan | 0.0866 | 31 |
| Anhui | 0.3059 | 16 | | | |

SPSS (Statistical Product and Service Solutions) software is used to analyze the correlation between the rankings of the three factors and the rankings of the connection intensities. The analysis results are shown in Table 11.

Table 11 Pearson correlation at the provincial level.

| | Information level rankings | Distance rankings | "China Internet plus index" rankings | Connection intensity rankings |
|---|---|---|---|---|
| Information level rankings | 1 | .634 | .874 | .948 |
| Distance rankings | .634 | 1 | .681 | .646 |
| "China Internet plus index" rankings | .874 | .681 | 1 | .972 |
| Connection intensity rankings | .948 | .646 | .972 | 1 |

Table 11 shows that high correlations exist between the connection intensity and the "Internet plus" index, as well as between the connection intensity and the information level. The "Internet plus" index has the highest positive correlation with the intensities. The correlation analysis also illustrates a certain correlation between the distance factor and the connection intensity; however, distance is no longer the primary factor. Some other factors, such as the government's economic policies and cultural similarities, can also affect the connection intensity, but they are difficult to quantify.

## 6.
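The ranking correlations reported in Table 11 were computed with SPSS, but the same Pearson statistic can be reproduced with a short script. The sketch below uses a plain-Python Pearson coefficient on two illustrative rank vectors; the data arrays are placeholders for illustration, not the study's actual rankings.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical rankings of five provinces on two indicators
# (placeholders, not the study's data).
internet_plus_rank = [1, 2, 3, 4, 5]
connection_rank = [1, 3, 2, 4, 5]

print(round(pearson(internet_plus_rank, connection_rank), 3))  # 0.9
```

Note that Pearson correlation applied to rankings is equivalent to Spearman's rank correlation on the underlying scores, which is why rankings rather than raw index values enter Table 11.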
Conclusion

This paper points out two shortcomings of existing work in the field of cyberspace: (1) researchers conduct their studies based on only one kind of sequential information flow, which makes their conclusions less convincing; (2) studies of the cyberspace pattern based on sequential information flows usually employ the classical gravity model directly but ignore the attributes of the studied objects themselves. To overcome these weaknesses, we hold that it is necessary to use several different kinds of sequential information flows to analyze this problem, and we advocate considering the attributes of the study objects themselves when the gravity model is used. Accordingly, we proposed a method for measuring the informatization level of a region and improved the classical gravity model by adding the informatization-level score to the classical formula. We then constructed an information association matrix based on the improved gravity model. Finally, we took Fuzhou city as our study object and focused on its cyberspace characteristics. Experiments in this paper were conducted on two kinds of sequential information flows: data about Sina micro-blog users and data from the Baidu Index.

According to our experimental results, the following conclusions can be drawn. First, cyberspace breaks through the limit of geographical distance and has a wider range of communication. Second, there is an obvious grade difference in cyberspace: from the east of China to the west, the intensities of the total connections decrease gradually. Third, the social communication mode of realistic geo-space has also been brought into cyberspace, such that local information still occupies the dominant position in cyberspace. Fourth, the economically developed provinces are usually the principal nodes in network information space. Fifth, the information flows in cyberspace are asymmetric.
Areas with low information potential are easily attracted by areas with higher information potential, and economically backward areas are more willing to establish active contacts with economically developed areas. Many factors can affect the pattern of cyberspace. By analyzing these factors comprehensively, the characteristics and future of cyberspace can be understood and grasped more accurately. --- *Source: 1016587-2018-10-08.xml*
2018
# Metallothionein as an Anti-Inflammatory Mediator

**Authors:** Ken-ichiro Inoue; Hirohisa Takano; Akinori Shimada; Masahiko Satoh
**Journal:** Mediators of Inflammation (2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101659

---

## Abstract

The integration of knowledge concerning the regulation of MT, a highly conserved, low molecular weight, cysteine-rich metalloprotein, with its proposed functions is necessary to clarify how MT affects cellular processes. MT expression is induced/enhanced in various tissues by a number of physiological mediators, and the cellular accumulation of MT depends on the availability of cellular zinc derived from the diet. MT modulates the binding and exchange/transport of heavy metals such as zinc, cadmium, or copper under physiological conditions, provides cytoprotection against their toxicities, and modulates the release of gaseous mediators such as hydroxyl radicals or nitric oxide. In addition, MT reportedly affects a number of cellular processes, such as gene expression, apoptosis, proliferation, and differentiation. From a genetic standpoint, the apparently healthy status of MT-deficient mice argues against an essential biological role for MT; however, this molecule may be critical in cells, tissues, and organs in times of stress, since MT expression is also evoked/enhanced by various stresses. In particular, because metallothionein (MT) is induced by inflammatory stress, roles in inflammation are implied; indeed, MT expression in various organs/tissues can be enhanced by inflammatory stimuli, implicating it in inflammatory diseases. In this paper, we review the role of MT in various inflammatory conditions.

---

## Body

## 1.
Introduction

Metallothioneins (MTs) were discovered as cadmium-binding proteins in horse kidney approximately five decades ago and were later characterized as low molecular weight proteins with a high cysteine content and a high affinity for biologically important divalent metals, such as zinc and copper, and unimportant ones, such as cadmium and mercury (Margoshes) [1]. Because of their high metal content and unusual bioinorganic structure, they are classified as metalloproteins [2]. MTs are unusually rich in cysteine residues that coordinate multiple zinc and copper atoms under physiological conditions.

## 2. Classification

In mice, there are 4 MT genes that reside in a 50-kb region on chromosome 8 [3]. The mouse MT-I and MT-II genes are expressed at all stages of development in many cell types of most organs; they are coordinately regulated by metals, glucocorticoids, and inflammatory stress [4]. MT-III is expressed predominantly in neurons, but also in glia and male reproductive organs [5–7]. MT-IV is expressed in differentiating stratified squamous epithelial cells [3]. All four MT genes are expressed in the maternal deciduum [8]. In humans, by contrast, MTs are encoded by a family of genes consisting of 10 functional MT isoforms, and the encoded proteins are conventionally subdivided into 4 groups: MT-1, MT-2, MT-3, and MT-4 [9]. While a single MT-2A gene encodes the MT-2 protein, the MT-1 protein comprises many subtypes encoded by a set of MT-1 genes (MT-1A, MT-1B, MT-1E, MT-1F, MT-1G, MT-1H, and MT-1X), accounting for the microheterogeneity of the MT-1 protein [2].
As shown above, there are multiple MT genes expressed in distinct patterns, suggesting that they possess important functions; however, whether they have redundant or divergent functions under physiological and pathological conditions is not fully understood. The known functions of MTs include metalloregulatory roles in cell growth, differentiation, and apoptosis, and the enhanced synthesis of MTs in rapidly proliferating tissues implies a crucial role in normal and neoplastic cell growth [10].

## 3. Characteristics

These intracellular proteins are characterized by their unusually high cysteine content (30%) and lack of aromatic amino acids. Because of their thiol-rich content, MTs can bind a number of trace metals, such as cadmium, mercury, platinum, and silver, and protect cells and tissues against the toxicity of these metals. Furthermore, MTs are among the most abundant components interacting with the biologically essential metals zinc and copper. MT metal-thiolate clusters, being dynamic and of high affinity, also facilitate metal exchange in tissues [11]. MTs are present in a great variety of eukaryotes [12], functioning as antioxidants; they also play a protective role against hydroxyl free radicals. This is relevant in tumors known to be markedly radiosensitive, for which radiotherapy is the treatment of choice [13].

## 4. Function under Physiological Conditions

The putative functions of MT include intracellular metal metabolism and/or storage, metal donation to target apometalloproteins (especially zinc finger proteins and enzymes), metal detoxification, and protection against oxidants and electrophiles [14]. Evidence for these functions originally came from traditional animal, cell culture, and in vitro models. These studies have since been supported by experiments using murine models with targeted deletion or transgenic overexpression of MT genes. MT most likely functions in the regulation of zinc metabolism [14].
Elevations of dietary zinc induce/enhance intestinal MT [15], whereas maximal intestinal Zn accumulation seems to depend on MT synthesis [16]. MT (−/−) mice accumulate less zinc in the distal gastrointestinal tract when fed a high zinc diet [17]. In most studies, zinc absorption was inversely related to the intestinal MT content after MT was induced by dietary or parenteral zinc, or by fasting [14]. Studies using transgenic and knockout mice have confirmed that MT can alter the processing of zinc taken orally, because the serum zinc concentration was inversely related to the intestinal MT level in these mice after single oral doses of zinc [17, 18]. In turn, urinary Zn excretion levels measured during a fast or Zn intake restriction were greater in MT (−/−) mice than in MT (+/+) mice [19]. Likewise, the increase in hepatic Zn concentration after the administration of lipopolysaccharide (LPS) was found in MT (+/+) but not in MT (−/−) mice [20]. These results suggest that MT has the ability to retain Zn under physiological and pathological conditions. On the other hand, the tissue Zn concentration was reduced, and the sensitivity to Zn deficiency during pregnancy was enhanced, in MT (−/−) mice [21]; further, Zn deficiency caused abnormalities in neonate kidney differentiation in MT (−/−) mice [22]. Conversely, in MT-transgenic mice, Zn accumulated in female organs, and the teratogenicity of Zn deficiency during pregnancy was significantly ameliorated. Taken together, MT is likely to have Zn-metabolizing activity at the whole-organism level [23]. In addition, MT demonstrates strong antioxidant properties. MT protein levels in rodent liver [24, 25] and mRNA levels in hepatic cell lines [26] are increased following injection with compounds that result in free radical formation, for example, carbon tetrachloride, menadione, or paraquat.
An injection of ferric nitrilotriacetate, which produces reactive oxygen species (ROS), induces MT transcription in the liver and kidney [27]. These findings suggest that MT plays a role in the response to oxidative stress. Consistent with this, MT is able to scavenge a wide range of ROS, including superoxide, hydrogen peroxide, hydroxyl radicals, and nitric oxide [19, 28, 29]. In particular, the ability of MT to capture hydroxyl radicals, which are primarily responsible for the toxicity of ROS, has been shown to be about 300 times greater than that of glutathione [30], the most abundant antioxidant in the cytosol [19]. Further, metal-thiolate clusters are reportedly oxidized in vitro; thus, they could scavenge deleterious oxygen radicals. Compelling genetic evidence for this concept comes from work in yeast: yeast cells that cannot synthesize copper MTs are more sensitive to oxidative stress if they also lack superoxide dismutase, suggesting that yeast MT has antioxidant functions [31]. In addition, the expression of monkey MTs under the control of the yeast MT promoter also protects against oxidative stress [31]. Many agents that induce oxidative stress, such as chloroform, turpentine, diethyl maleate, paraquat, and H2O2, can also induce MT-I and MT-II in vitro and in vivo [24, 26, 32], strongly suggesting that MT is involved in protection against oxidative damage. Moreover, mammalian cells that express excess MTs appear to be resistant to the toxic effects of nitric oxide [33] and many electrophilic antineoplastic agents [34], which are capable of reacting with the cysteines of MT.
Further, relatively recent studies have demonstrated that MT is induced by oxidative stress-producing chemicals [35] and exhibits cytoprotection against oxidative stress-related organ damage in vivo [36, 37]. Despite the confirmed roles of MT under physiological conditions mentioned above, a complete identification of all the functions of this unique protein within an integrative context has yet to emerge, particularly under pathophysiological conditions. In particular, since proinflammatory cytokines including interleukin (IL)-1, IL-6, and interferon-γ also induce hepatic MT gene expression in vivo [38–40], the roles of MT in inflammation have attracted attention. Reports on the role of MT in inflammatory processes conflict. MT (−/−) mice were resistant to tumor necrosis factor (TNF)-induced lethal shock compared to MT (+/+) mice [38], whereas MT-I-overexpressing mice are more sensitive to the lethal effects of TNF than MT (+/+) mice [38]. In contrast, Kimura et al. reported that MT (−/−) mice are more susceptible than MT (+/+) mice to LPS-induced lethal shock in D-galactosamine (GalN)-sensitized mice, through the reduction of alpha(1)-acid glycoprotein [41]. Accordingly, the roles of MT in inflammation appear to depend on the pathophysiologic conditions (site, route, and type of stimuli). Meanwhile, inflammatory diseases such as systemic inflammatory response syndrome (including acute lung injury), allergic asthma, oxidative lung injury, and acute liver injury remain refractory and/or hinder daily life, possibly because of an incomplete understanding of their molecular targets. Thus, investigating the role of MT in these inflammatory diseases may provide hints for novel therapeutic options.

## 5. Function of MT under Pathophysiological Conditions in Inflammation

### 5.1. Role of MT in Lung Injury Related to LPS

Previous as well as our recent studies have shown the expression of MT in the lung [39, 42].
Immunohistopathological examination detected immunoreactive MT-I/II proteins in the endothelial and alveolar epithelial cells of the lungs of MT (+/+) mice, whereas they were not detected in those of MT (−/−) mice. Furthermore, the expression was confirmed to be enhanced by oxidative stimuli such as LPS and ozone (O3) exposure (data not shown). The intratracheal instillation of LPS produces a well-recognized model of acute lung injury, leading to the activation of alveolar macrophages, tissue infiltration of neutrophils, and interstitial edema [43]. Although the inhalation of LPS has been reported to induce MT expression in the lung in vivo [39, 42], there is no evidence regarding the direct contribution of MT to acute lung injury related to LPS. MT (−/−) and MT (+/+) mice were administered vehicle or LPS (125 μg/kg) intratracheally. Thereafter, the cellular profile of the bronchoalveolar lavage (BAL) fluid, pulmonary edema, lung histology, expression of proinflammatory molecules, and nuclear localization of nuclear factor-κB (NF-κB) in the lung were evaluated. As a result, MT (−/−) mice were more susceptible than MT (+/+) mice to the neutrophilic lung inflammation and lung edema induced by intratracheal challenge with LPS. After LPS challenge, MT deficiency enhanced the vacuolar degeneration of pulmonary endothelial and type I alveolar epithelial cells and caused a focal loss of the basement membrane. However, unexpectedly, LPS treatment induced no significant differences between the two genotypes either in the enhanced expression of proinflammatory cytokines and chemokines or in the activation of the NF-κB pathway in the lung. Lipid peroxide levels in the lungs were significantly higher in LPS-treated MT (−/−) than in LPS-treated MT (+/+) mice. These findings suggest that MT protects against acute lung injury related to LPS.
The effects are possibly mediated via the enhancement of pulmonary endothelial and epithelial integrity, not via inhibition of the NF-κ B pathway [44].Next, MT (−/−) and MT (+/+) mice were administered vehicle or LPS (30 mg/kg) intraperitoneally. Thereafter, coagulatory parameters, organ histology (lung, liver, and kidney), and the local expression of proinflammatory molecules were evaluated. As a result, compared with MT (+/+) mice, MT (−/−) mice showed a significant prolongation of the prothrombin time (PT) and activated partial thromboplastin time (APTT), a significant increase in the levels of fibrinogen and fibrinogen/fibrin degradation products, and a significant decrease in activated protein C, after LPS treatment. LPS induced inflammatory organ damage in the lung, kidney, and liver in both genotypes of mice. The damage including neutrophil infiltration in the organs was more prominent in MT (−/−) than MT (+/+) mice after LPS treatment. In both genotypes of mice, LPS enhanced the protein expression of interleukin (IL)-1β, IL-6, granulocyte/macrophage-colony-stimulating factor, macrophage inflammatory protein (MIP)-1α, MIP-2, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant (KC) in the lung, kidney, and liver and circulatory levels of IL-1β, IL-6, MIP-2, and KC. In overall trends, however, the levels of these proinflammatory proteins were greater in MT (−/−) than in MT (+/+) mice after LPS challenge. Our results suggest that MT protects against coagulatory and fibrinolytic disturbance and multiple organ damage including lung injury induced by LPS, at least partly, via inhibition of the local expression of proinflammatory proteins in this model (Figure 1) [45]. 
Nonetheless, its underlying mechanistic pathways, including newly described ones (e.g., Toll-like receptors [46], NALP inflammasomes [47], neurotensin [48], RANK-RANKL [49]), remain to be explored in the future.

Figure 1 Hypothesized mechanisms of cytoprotection of MT in LPS-related inflammation. Figure reproduced with some modifications with permission from FASEB Journal [45].

### 5.2. Role of MT in Allergic Inflammation

Bronchial asthma is a complex syndrome characterized by obstruction, hyperresponsiveness, and persistent inflammation of the airways. Inflammation in asthma is characterized by an accumulation of eosinophils, lymphocytes, and neutrophils in the bronchial wall and lumen [50–52]. The mechanisms via which inflammatory cells alter airway function in asthmatic conditions include the release of Th2 cytokines (IL-4, IL-5, and IL-13), chemotactic mediators such as thymus and activation-regulated chemokine, macrophage-derived chemokine, and eotaxin, and various proteases, as well as the generation of reactive oxygen species. Thus, we next determined the role of MT in allergic airway inflammation induced by ovalbumin (OVA) using MT (−/−) mice. MT (−/−) and MT (+/+) mice were intratracheally challenged with OVA (1 μg/body) biweekly 3 times. Thereafter, the cellular profile of the BAL fluid, lung histology, and expression of proinflammatory molecules in the lung were evaluated. After the final OVA challenge, significant increases were noted in the numbers of total cells, eosinophils, and neutrophils in BAL fluid in MT (−/−) mice compared to those in MT (+/+) mice. Histopathologically, in the presence of OVA, the number of inflammatory cells, including eosinophils and neutrophils, in the lung was larger in MT (−/−) than in MT (+/+) mice. The protein level of IL-1β was significantly greater in MT (−/−) than in MT (+/+) mice after OVA challenge.
Immunohistochemistry showed that the formation of 8-hydroxy-2′-deoxyguanosine, an established marker of oxidative DNA damage, and of nitrotyrosine in the lung was more intense in MT (−/−) than in MT (+/+) mice after OVA challenge. These results indicate that endogenous MT protects against allergic airway inflammation induced by OVA, at least partly, via suppression of the enhanced lung expression of IL-1β and via its antioxidative potential [53].

### 5.3. Role of MT in Oxidative Lung Injury

Ozone (O3) is a principal, highly toxic oxidant found in urban environments throughout the world. Experimental research has shown that O3 inhalation causes airway inflammation/injury in vivo [54]. Furthermore, O3 is a strong oxidizing agent that can be rapidly converted into a number of ROS, including hydrogen peroxide [55, 56]. In fact, O3-induced lung inflammation/injury comprises oxidative stress-related tissue injury [57–59]. Also, O3 exposure reportedly results in oxidative stress in the airway, possibly through the disruption of iron homeostasis [59]; iron can increase oxidant generation after O3 interaction with aqueous media and produce hydroxyl radicals [60, 61]. On the other hand, lung expression of MT is reportedly induced by O3 exposure in vivo [62, 63]. Thus, we next examined the role of MT in lung inflammation induced by subacute exposure to O3 using MT (−/−) mice. After subacute exposure to O3 (0.3 ppm), the cellular profile of BAL fluid, pulmonary edema, lung histology, and expression of proinflammatory molecules in the lung were evaluated. Exposure to O3 induced lung inflammation and enhanced vascular permeability, which was significantly greater in MT (−/−) than in MT (+/+) mice. Electron microscopically, O3 exposure induced the vacuolar degeneration of pulmonary endothelial and epithelial cells, and interstitial edema with focal loss of the basement membrane, which was more prominent in MT (−/−) than in MT (+/+) mice.
O3-induced lung expression of IL-6 was significantly greater in MT (−/−) than in MT (+/+) mice; however, lung expression of chemokines such as eotaxin, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant was comparable between the two genotypes of mice in the presence of O3. Following O3 exposure, the formation of oxidative stress-related molecules/adducts, such as heme oxygenase-1, inducible nitric oxide synthase, 8-OHdG, and nitrotyrosine, in the lung was significantly greater in MT (−/−) than in MT (+/+) mice. Collectively, MT protects against O3-induced lung inflammation, at least partly, via the regulation of pulmonary endothelial and epithelial integrity and its antioxidative property [64].

### 5.4. Role of MT in Lethal Liver Injury

The liver has high levels of Zn- and Cu-bound MT and a high capacity to regenerate. MT is reportedly involved in hepatocyte regeneration after partial hepatectomy [64–67] and chemical injury [68]. Similarly, previous studies have shown that induction of MT can protect animals from the hepatotoxicity of several chemicals, such as ethanol, carbon tetrachloride, acetaminophen, and cadmium [69]. Hepatic dysfunction due to liver disorders such as viral hepatitis, liver cirrhosis, and hepatocellular carcinoma is frequently associated with lethal coagulopathy such as disseminated intravascular coagulation (DIC). Kimura et al. previously reported, using MT (−/−) mice, that MT is protective against acute liver injury induced by LPS/D-GalN through the suppression of TNF-α production/release [41]. An animal model of acute (lethal) liver injury using LPS/D-GalN develops severe coagulopathy with histological evidence of DIC [70], quite similar to that in humans. Furthermore, most coagulatory factors, as well as MT, are produced mainly in the liver, indicating a possible role of MT in the pathogenesis of hepatic disorder-related coagulopathy. Moreover, our above-mentioned study implicated MT in the pathophysiology of coagulatory disturbance [45].
To expand the findings by Kimura et al., therefore, we explored the role of MT in coagulatory disturbance during acute liver injury induced by LPS/D-GalN. Both MT (−/−) and MT (+/+) mice were injected intraperitoneally with 30 μg/kg of LPS and 800 mg/kg of D-GalN dissolved in vehicle. Five hours after the injection, blood samples were collected and platelet counts and coagulatory parameters were measured. LPS/D-GalN challenge significantly decreased platelet number in both genotypes of mice in a time-dependent fashion as compared to vehicle challenge. However, in the presence of LPS/D-GalN, the decrease was significantly greater in MT (−/−) than in MT (+/+) mice. LPS/D-GalN challenge caused prolongation of the plasma coagulatory parameters such as PT and APTT in both genotypes of mice as compared with vehicle challenge. In the presence of LPS/D-GalN, PT and APTT were longer in MT (−/−) than in MT (+/+) mice. The level of fibrinogen significantly decreased 5 hours after LPS/D-GalN challenge in both genotypes of mice as compared to vehicle challenge. After LPS/D-GalN challenge, the level was significantly lower in MT (−/−) than in MT (+/+) mice. As compared to vehicle administration, LPS/D-GalN administration elicited an increase in the plasma level of von Willebrand factor in both genotypes of mice. Further, in the presence of LPS/D-GalN, the level was significantly greater in MT (−/−) than in MT (+/+) mice ([71]. ## 5.1. Role of MT in Lung Injury Related to LPS Previous as well as our recent studies have shown the expression of MT in the lung [39, 42]. Immunohistopathological examination led to the detection of immunoreactive MT-I/II proteins in the lungs in endothelial and alveolar epithelial cells of MT (+/+) mice, whereas they were not detected in those of MT (−/−) mice. 
Furthermore, the expression was confirmed to be enhanced by oxidative stimuli like LPS and ozone (O3) exposure (data not shown).The intratracheal instillation of LPS produces a well-recognized model of acute lung injury, leading to the activation of alveolar macroghages, tissue infiltration of neutrophils, and interstitial edema [43]. Although the inhalation of LPS has been reported to induce MT expression in the lung in vivo [39, 42], there is no evidence regarding the direct contribution of MT in acute lung injury related to LPS.MT (−/−) and MT (+/+) mice were administered vehicle or LPS (125 μg/kg) intratracheally. Thereafter, the cellular profile of the bronchoalveolar lavage (BAL) fluid, pulmonary edema, lung histology, expression of proinflammatory molecules, and nuclear localization of nuclear factor-κ B (NF-κ B) in the lung were evaluated. As a result, MT (−/−) mice were more susceptible than MT (+/+) mice to neutrophilic lung inflammation and lung edema, which was induced by intratracheal challenge with LPS. After LPS challenge, MT deficiency enhanced the vacuolar degeneration of pulmonary endothelial and type I alveolar epithelial cells, and caused a focal loss of the basement membrane. However, unexpectedly, LPS treatment induced no significant differences neither in the enhanced expression of proinflammatory cytokines and chemokines, nor in the activation of the NF-κ B pathway in the lung between the two genotypes. Lipid peroxide levels in the lungs were significantly higher in LPS-treated MT (−/−) than LPS-treated MT (+/+) mice. These findings suggest that MT protects against acute lung injury related to LPS. The effects are possibly mediated via the enhancement of pulmonary endothelial and epithelial integrity, not via inhibition of the NF-κ B pathway [44].Next, MT (−/−) and MT (+/+) mice were administered vehicle or LPS (30 mg/kg) intraperitoneally. 
Thereafter, coagulatory parameters, organ histology (lung, liver, and kidney), and the local expression of proinflammatory molecules were evaluated. As a result, compared with MT (+/+) mice, MT (−/−) mice showed a significant prolongation of the prothrombin time (PT) and activated partial thromboplastin time (APTT), a significant increase in the levels of fibrinogen and fibrinogen/fibrin degradation products, and a significant decrease in activated protein C, after LPS treatment. LPS induced inflammatory organ damage in the lung, kidney, and liver in both genotypes of mice. The damage including neutrophil infiltration in the organs was more prominent in MT (−/−) than MT (+/+) mice after LPS treatment. In both genotypes of mice, LPS enhanced the protein expression of interleukin (IL)-1β, IL-6, granulocyte/macrophage-colony-stimulating factor, macrophage inflammatory protein (MIP)-1α, MIP-2, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant (KC) in the lung, kidney, and liver and circulatory levels of IL-1β, IL-6, MIP-2, and KC. In overall trends, however, the levels of these proinflammatory proteins were greater in MT (−/−) than in MT (+/+) mice after LPS challenge. Our results suggest that MT protects against coagulatory and fibrinolytic disturbance and multiple organ damage including lung injury induced by LPS, at least partly, via inhibition of the local expression of proinflammatory proteins in this model (Figure 1) [45]. Nonetheless, its underlying mechanistic pathways including new ones (e.g., Toll-like receptors [46], NALP inflammasomes [47], neurotensin [48], RANK-RANKL [49]) remain to be explored in future.Figure 1 Hypothesized mechanisms of cytoprotection of MT in LPS-related inflammation. Figure reproduced with some modifications with permission from FASEB journal [45]. ## 5.2. 
Role of MT in Allergic Inflammation Bronchial asthma is a complex syndrome, characterized by obstruction, hyperresponsiveness, and persistent inflammation of the airways. Inflammation in asthma is characterized by an accumulation of eosinophils, lymphocytes, and neutrophils in the bronchial wall and lumen [50–52]. The mechanisms via which inflammatory cells alter airway function in asthmatic conditions include the release of Th2 cytokines (IL-4, IL-5, and IL-13) and chemotactic mediators such as thymus and activation-regulated chemokine, macrophage-derived chemokine, and eotaxin, and various proteases as well as the generation of reactive oxygen species. Thus, next, we determined the role of MT in allergic airway inflammation induced by ovalbumin (OVA) using MT (−/−) mice. MT (−/−) and MT (+/+) mice were intratracheally challenged with OVA (1 μg/body) biweekly 3 times. Thereafter, the cellular profile of the BAL fluid, lung histology, and expression of proinflammatory molecules in the lung were evaluated. After the final OVA challenge, significant increases were noted in the numbers of total cells, eosinophils, and neutrophils in BAL fluid in MT (−/−) mice compared to those in MT (+/+) mice. Histopathologically, in the presence of OVA, the number of inflammatory cells including eosinophils and neutrophils in the lung was larger in MT (−/−) than in MT (+/+) mice. The protein level of IL-1β was significantly greater in MT (−/−) than in MT (+/+) mice after OVA challenge. Immunohistochemistry showed that the formations of 8-hydroxy-2′-deoxyguanosine, a proper marker of oxidative DNA damage, and nitrotyrosine in the lung were more intense in MT (−/−) than in MT (+/+) mice after OVA challenge. These results indicate that endogenous MT protects against allergic airway inflammation induced by OVA, at least partly, via suppression of the enhanced lung expression of IL-1β and via its antioxidative potential [53]. ## 5.3. 
Role of MT in Oxidative Lung Injury Ozone (O3) is a highly toxic principal oxidant found in urban environments throughout the world. Experimental research has shown that O3 inhalation causes airway inflammation/injury in vivo [54]. Furthermore, O3 is a strong oxidizing agent that can be rapidly converted into a number of ROS, including hydrogen peroxide [55, 56]. In fact, O3-induced lung inflammation/injury comprises oxidative stress-related tissue injury [57–59]. Also, O3 exposure reportedly results in oxidative stress in the airway, possibly through the disruption of iron homeostasis [59]; iron can increase oxidant generation after O3 interaction with aqueous media and produce hydroxyl radicals [60, 61]. On the other hand, lung expression of MT is reportedly induced by O3 exposure in vivo [62, 63]. Thus, we next examined the role of MT in lung inflammation induced by subacute exposure to O3 using MT (−/−) mice. After subacute exposure to O3 (0.3 ppm), the cellular profile of BAL fluid, pulmonary edema, lung histology, and expression of proinflammatory molecules in the lung were evaluated. Exposure to O3 induced lung inflammation and enhanced vascular permeability, which was significantly greater in MT (−/−) than in MT (+/+) mice. Electron microscopically, O3 exposure induced the vacuolar degeneration of pulmonary endothelial and epithelial cells, and interstitial edema with focal loss of the basement membrane, which was more prominent in MT (−/−) than in MT (+/+) mice. O3-induced lung expression of IL-6 was significantly greater in MT (−/−) than in MT (+/+) mice; however, lung expression of the chemokines such as eotaxin, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant was comparable between both genotypes of mice in the presence of O3. 
Following O3 exposure, the formation of oxidative stress-related molecules/adducts, such as heme oxygenase-1, inducible nitric oxide synthase, 8-OHdG, and nitrotyrosine, in the lung was significantly greater in MT (−/−) than in MT (+/+) mice. Collectively, MT protects against O3-induced lung inflammation, at least partly, via the regulation of pulmonary endothelial and epithelial integrity and its antioxidative property [64]. ## 5.4. Role of MT in Lethal Liver Injury The liver has high levels of Zn- and Cu-bound MT and a high capacity to regenerate. MT is reportedly involved in hepatocyte regeneration after partial hepatectomy [64–67] and chemical injury [68]. Similarly, previous studies have shown that induction of MT can protect animals from the hepatotoxicity of several chemicals, such as ethanol, carbon tetrachloride, acetaminophen, and cadmium [69]. Hepatic dysfunction due to liver disorders such as viral hepatitis, liver cirrhosis, and hepatocellular carcinoma is frequently associated with lethal coagulopathy such as disseminated intravascular coagulation (DIC). Using MT (−/−) mice, Kimura et al. previously reported that MT is protective against acute liver injury induced by LPS/D-GalN through the suppression of TNF-α production/release [41]. An animal model of acute (lethal) liver injury using LPS/D-GalN develops severe coagulopathy with histological evidence of DIC [70], quite similar to that in humans. Furthermore, most coagulatory factors, as well as MT, are produced mainly in the liver, indicating a possible role of MT in the pathogenesis of hepatic disorder-related coagulopathy. Moreover, our abovementioned study implicated MT in the pathophysiology of coagulatory disturbance [45]. To expand the findings of Kimura et al., therefore, we explored the role of MT in coagulatory disturbance during acute liver injury induced by LPS/D-GalN. Both MT (−/−) and MT (+/+) mice were injected intraperitoneally with 30 μg/kg of LPS and 800 mg/kg of D-GalN dissolved in vehicle. 
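As an illustration only, these weight-based doses translate into absolute per-animal amounts once a body weight is assumed; the 25 g body weight and the helper function below are assumptions for the sketch, not values taken from the original study:

```python
def absolute_dose_ug(dose_ug_per_kg: float, body_weight_g: float) -> float:
    """Convert a weight-based dose (ug per kg) into an absolute per-animal dose (ug)."""
    return dose_ug_per_kg * body_weight_g / 1000.0

# LPS at 30 ug/kg and D-GalN at 800 mg/kg (= 800,000 ug/kg) for a hypothetical 25 g mouse
lps_ug = absolute_dose_ug(30.0, 25.0)                      # 0.75 ug of LPS per animal
galn_mg = absolute_dose_ug(800.0 * 1000.0, 25.0) / 1000.0  # 20.0 mg of D-GalN per animal
print(lps_ug, galn_mg)
```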
Five hours after the injection, blood samples were collected and platelet counts and coagulatory parameters were measured. LPS/D-GalN challenge significantly decreased platelet number in both genotypes of mice in a time-dependent fashion as compared to vehicle challenge. However, in the presence of LPS/D-GalN, the decrease was significantly greater in MT (−/−) than in MT (+/+) mice. LPS/D-GalN challenge caused prolongation of the plasma coagulatory parameters such as PT and APTT in both genotypes of mice as compared with vehicle challenge. In the presence of LPS/D-GalN, PT and APTT were longer in MT (−/−) than in MT (+/+) mice. The level of fibrinogen significantly decreased 5 hours after LPS/D-GalN challenge in both genotypes of mice as compared to vehicle challenge. After LPS/D-GalN challenge, the level was significantly lower in MT (−/−) than in MT (+/+) mice. As compared to vehicle administration, LPS/D-GalN administration elicited an increase in the plasma level of von Willebrand factor in both genotypes of mice. Further, in the presence of LPS/D-GalN, the level was significantly greater in MT (−/−) than in MT (+/+) mice [71]. ## 6. Conclusion MTs play important physiological roles, such as in heavy metal homeostasis and radical scavenging. Furthermore, through a genetic approach, MT has been shown to protect against various types of inflammatory conditions (including LPS-related, allergic, and oxidative) in mice, implicating MT induction/enhancement and/or zinc supplementation to induce/enhance MT as possible therapeutic options for inflammatory diseases, although additional research is needed to establish their clinical utility. --- *Source: 101659-2009-05-11.xml*
# Metallothionein as an Anti-Inflammatory Mediator

**Authors:** Ken-ichiro Inoue; Hirohisa Takano; Akinori Shimada; Masahiko Satoh
**Journal:** Mediators of Inflammation (2009)
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2009/101659
--- ## Abstract The integration of knowledge concerning the regulation of MT, a highly conserved, low-molecular-weight, cysteine-rich metalloprotein, with its proposed functions is necessary to clarify how MT affects cellular processes. MT expression is induced/enhanced in various tissues by a number of physiological mediators. The cellular accumulation of MT depends on the availability of cellular zinc derived from the diet. MT modulates the binding and exchange/transport of heavy metals such as zinc, cadmium, or copper under physiological conditions, provides cytoprotection from their toxicities, and modulates the release of gaseous mediators such as hydroxyl radicals or nitric oxide. In addition, MT reportedly affects a number of cellular processes, such as gene expression, apoptosis, proliferation, and differentiation. From a genetic standpoint, the apparently healthy status of MT-deficient mice argues against an essential biological role for MT; however, this molecule may be critical in cells/tissues/organs in times of stress, since MT expression is also evoked/enhanced by various stresses. In particular, because metallothionein (MT) is induced by inflammatory stress, its roles in inflammation are implied. Also, MT expression in various organs/tissues can be enhanced by inflammatory stimuli, implicating it in inflammatory diseases. In this paper, we review the role of MT in various inflammatory conditions. --- ## Body ## 1. Introduction Metallothioneins (MTs) were discovered as a cadmium-binding protein in horse kidney approximately five decades ago and were later characterized as low-molecular-weight proteins with a high cysteine content and a high affinity for divalent essential metals, such as zinc and copper, and nonessential ones, such as cadmium and mercury (Margoshes and Vallee) [1]. Because of their high metal content and unusual bioinorganic structure, they are classified as metalloproteins [2]. 
MTs are unusually rich in cysteine residues that coordinate multiple zinc and copper atoms under physiological conditions. ## 2. Classification In mice, there are 4 MT genes that reside in a 50-kb region on chromosome 8 [3]. The mouse MT-I and MT-II genes are expressed at all stages of development in many cell types of most organs; they are coordinately regulated by metals, glucocorticoids, and inflammatory stress [4]. MT-III is expressed predominantly in neurons, but also in glia and male reproductive organs [5–7]. MT-IV is expressed in differentiating stratified squamous epithelial cells [3]. All four MT genes are expressed in the maternal deciduum [8]. In humans, by contrast, MTs are encoded by a family of genes consisting of 10 functional MT isoforms, and the encoded proteins are conventionally subdivided into 4 groups: the MT-1, MT-2, MT-3, and MT-4 proteins [9]. While a single MT-2A gene encodes the MT-2 protein, the MT-1 protein comprises many subtypes encoded by a set of MT-1 genes (MT-1A, MT-1B, MT-1E, MT-1F, MT-1G, MT-1H, and MT-1X), accounting for the microheterogeneity of the MT-1 protein [2]. As shown above, there are multiple MT genes expressed in distinct patterns, suggesting that they possess important functions; however, whether they have redundant or divergent functions under both physiological and pathological conditions is not fully understood. The known functions of MTs include metalloregulatory roles in cell growth, differentiation, and apoptosis, and the enhanced synthesis of MTs in rapidly proliferating tissues implies a crucial role in normal and neoplastic cell growth [10]. ## 3. Characteristics These intracellular proteins are characterized by their unusually high cysteine content (approximately 30%) and lack of aromatic amino acids. Because of their thiol-rich content, MTs can bind a number of trace metals, such as cadmium, mercury, platinum, and silver, and protect cells and tissues against the toxicity of these metals. 
Furthermore, MTs are among the most abundant components interacting with the biologically essential metals zinc and copper. MT metal-thiolate clusters, being dynamic and of high affinity, also facilitate metal exchange in tissues [11]. MTs are present in a great variety of eukaryotes [12], functioning as antioxidants; they also play a protective role against hydroxyl free radicals. This is relevant in tumors that are known to be markedly radiosensitive, where radiotherapy is the treatment of choice [13]. ## 4. Function under Physiological Conditions The putative functions of MT include intracellular metal metabolism and/or storage, metal donation to target apometalloproteins (especially zinc finger proteins and enzymes), metal detoxification, and protection against oxidants and electrophiles [14]. Evidence for these functions originally came from traditional animal, cell culture, and in vitro models. Furthermore, these studies have been supported by experiments using murine models with targeted deletion or transgenic overexpression of MT genes. MT most likely functions in the regulation of zinc metabolism [14]. Elevations of dietary zinc induce/enhance intestinal MT [15], and maximal intestinal Zn accumulation seems to depend on MT synthesis [16]. MT (−/−) mice accumulate less zinc in the distal gastrointestinal tract when fed a high-zinc diet [17]. In most studies, zinc absorption was inversely related to the intestinal MT content after MT was induced by dietary zinc, parenteral zinc, or fasting [14]. Studies using transgenic and knockout mice have confirmed that MT can alter the processing of zinc taken orally, because the serum zinc concentration was inversely related to the intestinal MT level in these mice after single oral doses of zinc [17, 18]. In turn, urinary Zn excretion levels measured during a fast or Zn intake restriction were greater in MT (−/−) than in MT (+/+) mice [19]. 
Likewise, an increase in hepatic Zn concentration after the administration of lipopolysaccharide (LPS) was found in MT (+/+) but not in MT (−/−) mice [20]. These results suggest that MT has the ability to retain Zn under physiological and pathological conditions. On the other hand, tissue Zn concentration was reduced and the sensitivity to Zn deficiency during pregnancy was enhanced in MT (−/−) mice [21]; furthermore, Zn deficiency caused abnormalities in neonatal kidney differentiation in MT (−/−) mice [22]. Conversely, in MT-transgenic mice, Zn accumulated in female organs, and the teratogenicity of Zn deficiency during pregnancy was significantly ameliorated. Taken together, MT is likely to have Zn-metabolizing activity at the level of the individual [23]. In addition, MT demonstrates strong antioxidant properties. MT protein levels in rodent liver [24, 25] and mRNA levels in hepatic cell lines [26] are increased following injection of compounds that result in free radical formation, for example, carbon tetrachloride, menadione, or paraquat. An injection of ferric nitrilotriacetate, which produces reactive oxygen species (ROS), induces MT at the transcriptional level in the liver and kidney [27]. These findings suggest that MT plays a role in oxidative stress. Consistent with this, MT is able to scavenge a wide range of ROS, including superoxide, hydrogen peroxide, hydroxyl radicals, and nitric oxide [19, 28, 29]. In particular, it has been shown that the ability of MT to capture hydroxyl radicals, which are primarily responsible for the toxicity of ROS, is about 300 times greater than that of glutathione [30], the most abundant antioxidant in the cytosol [19]. Further, metal-thiolate clusters are reportedly oxidized in vitro; thus, they could scavenge deleterious oxygen radicals. Compelling genetic evidence for this concept comes from work using yeast. 
In brief, yeast cells that cannot synthesize copper MTs are more sensitive to oxidative stress if they also lack superoxide dismutase, suggesting that yeast MT has antioxidant functions [31]. In addition, the expression of monkey MTs under the control of the yeast MT promoter also protects against oxidative stress [31]. Many agents that induce oxidative stress, such as chloroform, turpentine, diethyl maleate, paraquat, and H2O2, can also induce MT-I and MT-II in vitro and in vivo [24, 26, 32]. This strongly suggests that MT is involved in protecting against oxidative damage. Conversely, mammalian cells that express excess MTs appear to be resistant to the toxic effects of nitric oxide [33] and of many electrophilic antineoplastic agents [34], which are capable of reacting with the cysteines of MT. Further, relatively recent studies have demonstrated that MT is induced by oxidative stress-producing chemicals [35] and exhibits cytoprotection against oxidative stress-related organ damage in vivo [36, 37]. Despite the confirmed roles of MT under physiological conditions, as mentioned above, a complete identification of all the functions of this unique protein within an integrative context has yet to emerge, particularly under pathophysiological conditions. In particular, since proinflammatory cytokines including interleukin (IL)-1, IL-6, and interferon-γ also induce hepatic MT gene expression in vivo [38–40], attention has focused on the roles of MT in inflammation. There are conflicting reports about the role of MT in inflammatory processes. In fact, MT (−/−) mice were resistant to tumor necrosis factor (TNF)-induced lethal shock compared to MT (+/+) mice [38], whereas MT-I-overexpressing mice are more sensitive to the lethal effects of TNF than MT (+/+) mice [38]. In contrast, Kimura et al. have reported that, in D-galactosamine (GalN)-sensitized mice, MT (−/−) mice are more susceptible than MT (+/+) mice to LPS-induced lethal shock through the reduction of alpha(1)-acid glycoprotein [41]. 
Accordingly, it seems that the roles of MT in inflammation depend on the pathophysiologic conditions (site, route, and type of stimuli). Meanwhile, to date, inflammatory diseases such as systemic inflammatory response syndrome (including acute lung injury), allergic asthma, oxidative lung injury, and acute liver injury remain refractory and/or impair daily life, possibly due to an incomplete understanding of their molecular targets. Thus, investigation of the role of MT in these inflammatory diseases may provide hints for novel therapeutic options. ## 5. Function of MT under Pathophysiological Conditions in Inflammation ### 5.1. Role of MT in Lung Injury Related to LPS Previous as well as our recent studies have shown the expression of MT in the lung [39, 42]. Immunohistopathological examination detected immunoreactive MT-I/II proteins in the endothelial and alveolar epithelial cells of the lungs of MT (+/+) mice, whereas they were not detected in those of MT (−/−) mice. Furthermore, the expression was confirmed to be enhanced by oxidative stimuli such as LPS and ozone (O3) exposure (data not shown). The intratracheal instillation of LPS produces a well-recognized model of acute lung injury, leading to the activation of alveolar macrophages, tissue infiltration of neutrophils, and interstitial edema [43]. Although the inhalation of LPS has been reported to induce MT expression in the lung in vivo [39, 42], there has been no evidence regarding the direct contribution of MT to acute lung injury related to LPS. MT (−/−) and MT (+/+) mice were administered vehicle or LPS (125 μg/kg) intratracheally. Thereafter, the cellular profile of the bronchoalveolar lavage (BAL) fluid, pulmonary edema, lung histology, expression of proinflammatory molecules, and nuclear localization of nuclear factor-κB (NF-κB) in the lung were evaluated. 
As a result, MT (−/−) mice were more susceptible than MT (+/+) mice to neutrophilic lung inflammation and lung edema induced by intratracheal challenge with LPS. After LPS challenge, MT deficiency enhanced the vacuolar degeneration of pulmonary endothelial and type I alveolar epithelial cells and caused a focal loss of the basement membrane. However, unexpectedly, LPS treatment induced no significant differences between the two genotypes, either in the enhanced expression of proinflammatory cytokines and chemokines or in the activation of the NF-κB pathway in the lung. Lipid peroxide levels in the lungs were significantly higher in LPS-treated MT (−/−) than in LPS-treated MT (+/+) mice. These findings suggest that MT protects against acute lung injury related to LPS; the effects are possibly mediated via the enhancement of pulmonary endothelial and epithelial integrity, not via inhibition of the NF-κB pathway [44]. Next, MT (−/−) and MT (+/+) mice were administered vehicle or LPS (30 mg/kg) intraperitoneally. Thereafter, coagulatory parameters, organ histology (lung, liver, and kidney), and the local expression of proinflammatory molecules were evaluated. As a result, compared with MT (+/+) mice, MT (−/−) mice showed a significant prolongation of the prothrombin time (PT) and activated partial thromboplastin time (APTT), a significant increase in the levels of fibrinogen and fibrinogen/fibrin degradation products, and a significant decrease in activated protein C after LPS treatment. LPS induced inflammatory organ damage in the lung, kidney, and liver in both genotypes of mice. The damage, including neutrophil infiltration in the organs, was more prominent in MT (−/−) than in MT (+/+) mice after LPS treatment. In both genotypes of mice, LPS enhanced the protein expression of interleukin (IL)-1β, IL-6, granulocyte/macrophage colony-stimulating factor, macrophage inflammatory protein (MIP)-1α, MIP-2, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant (KC) in the lung, kidney, and liver, as well as the circulatory levels of IL-1β, IL-6, MIP-2, and KC. 
In both genotypes of mice, LPS enhanced the protein expression of interleukin (IL)-1β, IL-6, granulocyte/macrophage-colony-stimulating factor, macrophage inflammatory protein (MIP)-1α, MIP-2, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant (KC) in the lung, kidney, and liver and circulatory levels of IL-1β, IL-6, MIP-2, and KC. In overall trends, however, the levels of these proinflammatory proteins were greater in MT (−/−) than in MT (+/+) mice after LPS challenge. Our results suggest that MT protects against coagulatory and fibrinolytic disturbance and multiple organ damage including lung injury induced by LPS, at least partly, via inhibition of the local expression of proinflammatory proteins in this model (Figure 1) [45]. Nonetheless, its underlying mechanistic pathways including new ones (e.g., Toll-like receptors [46], NALP inflammasomes [47], neurotensin [48], RANK-RANKL [49]) remain to be explored in future.Figure 1 Hypothesized mechanisms of cytoprotection of MT in LPS-related inflammation. Figure reproduced with some modifications with permission from FASEB journal [45]. ### 5.2. Role of MT in Allergic Inflammation Bronchial asthma is a complex syndrome, characterized by obstruction, hyperresponsiveness, and persistent inflammation of the airways. Inflammation in asthma is characterized by an accumulation of eosinophils, lymphocytes, and neutrophils in the bronchial wall and lumen [50–52]. The mechanisms via which inflammatory cells alter airway function in asthmatic conditions include the release of Th2 cytokines (IL-4, IL-5, and IL-13) and chemotactic mediators such as thymus and activation-regulated chemokine, macrophage-derived chemokine, and eotaxin, and various proteases as well as the generation of reactive oxygen species. Thus, next, we determined the role of MT in allergic airway inflammation induced by ovalbumin (OVA) using MT (−/−) mice. 
MT (−/−) and MT (+/+) mice were intratracheally challenged with OVA (1 μg/body) biweekly 3 times. Thereafter, the cellular profile of the BAL fluid, lung histology, and expression of proinflammatory molecules in the lung were evaluated. After the final OVA challenge, significant increases were noted in the numbers of total cells, eosinophils, and neutrophils in BAL fluid in MT (−/−) mice compared to those in MT (+/+) mice. Histopathologically, in the presence of OVA, the number of inflammatory cells including eosinophils and neutrophils in the lung was larger in MT (−/−) than in MT (+/+) mice. The protein level of IL-1β was significantly greater in MT (−/−) than in MT (+/+) mice after OVA challenge. Immunohistochemistry showed that the formations of 8-hydroxy-2′-deoxyguanosine, a proper marker of oxidative DNA damage, and nitrotyrosine in the lung were more intense in MT (−/−) than in MT (+/+) mice after OVA challenge. These results indicate that endogenous MT protects against allergic airway inflammation induced by OVA, at least partly, via suppression of the enhanced lung expression of IL-1β and via its antioxidative potential [53]. ### 5.3. Role of MT in Oxidative Lung Injury Ozone (O3) is a highly toxic principal oxidant found in urban environments throughout the world. Experimental research has shown that O3 inhalation causes airway inflammation/injury in vivo [54]. Furthermore, O3 is a strong oxidizing agent that can be rapidly converted into a number of ROS, including hydrogen peroxide [55, 56]. In fact, O3-induced lung inflammation/injury comprises oxidative stress-related tissue injury [57–59]. Also, O3 exposure reportedly results in oxidative stress in the airway, possibly through the disruption of iron homeostasis [59]; iron can increase oxidant generation after O3 interaction with aqueous media and produce hydroxyl radicals [60, 61]. On the other hand, lung expression of MT is reportedly induced by O3 exposure in vivo [62, 63]. 
Thus, we next examined the role of MT in lung inflammation induced by subacute exposure to O3 using MT (−/−) mice. After subacute exposure to O3 (0.3 ppm), the cellular profile of BAL fluid, pulmonary edema, lung histology, and expression of proinflammatory molecules in the lung were evaluated. Exposure to O3 induced lung inflammation and enhanced vascular permeability, which was significantly greater in MT (−/−) than in MT (+/+) mice. Electron microscopically, O3 exposure induced the vacuolar degeneration of pulmonary endothelial and epithelial cells, and interstitial edema with focal loss of the basement membrane, which was more prominent in MT (−/−) than in MT (+/+) mice. O3-induced lung expression of IL-6 was significantly greater in MT (−/−) than in MT (+/+) mice; however, lung expression of the chemokines such as eotaxin, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant was comparable between both genotypes of mice in the presence of O3. Following O3 exposure, the formation of oxidative stress-related molecules/adducts, such as heme oxygenase-1, inducible nitric oxide synthase, 8-OHdG, and nitrotyrosine in the lung was significantly greater in MT (−/−) than in MT (+/+) mice. Collectively, MT protects against O3-induced lung inflammation, at least partly, via the regulation of pulmonary endothelial and epithelial integrity and its antioxidative property (64). ### 5.4. Role of MT in Lethal Liver Injury Liver has high levels of Zn- and Cu-bound MT and has a high capacity to regenerate. MT is reportedly involved in hepatocyte regeneration after partial hepatectomy [64–67] and chemical injury [68]. 
Similarly, previous studies have shown that induction of MT can protect animals from hepatotoxicity of several chemicals, such as ethanol, carbon tetrachloride, acetaminophen, and cadmium [69].Hepatic dysfunction due to liver disorders such as viral hepatitis, liver cirrhosis, and hepatocellular carcinoma is frequently associated with lethal coagulopathy such as DIC. Kimura et al. previously reported that MT is protective against acute liver injury induced by LPS/D- GalN through the suppression of TNF-α production/release using MT (−/−) mice [41]. An animal model of acute (lethal) liver injury using LPS/D-GalN develops severe coagulopathy with histological evidence of DIC [70] quite similar to that in humans. Furthermore, most coagulatory factors as well as MT are produced mainly in the liver, indicating a possible role of MT in the pathogenesis of hepatic disorder-related coagulopathy. Besides, our above mentioned study has implicated MT in pathophysiology of coagulatory disturbance [45]. To expand the findings by Kimura et al., therefore, we explored the role of MT in coagulatory disturbance during acute liver injury induced by LPS/D-GalN. Both MT (−/−) and MT (+/+) mice were injected intraperitoneally with 30 μg/kg of LPS and 800 mg/kg of D-GalN dissolved in vehicle. Five hours after the injection, blood samples were collected and platelet counts and coagulatory parameters were measured. LPS/D-GalN challenge significantly decreased platelet number in both genotypes of mice in a time-dependent fashion as compared to vehicle challenge. However, in the presence of LPS/D-GalN, the decrease was significantly greater in MT (−/−) than in MT (+/+) mice. LPS/D-GalN challenge caused prolongation of the plasma coagulatory parameters such as PT and APTT in both genotypes of mice as compared with vehicle challenge. In the presence of LPS/D-GalN, PT and APTT were longer in MT (−/−) than in MT (+/+) mice. 
The level of fibrinogen significantly decreased 5 hours after LPS/D-GalN challenge in both genotypes of mice as compared to vehicle challenge. After LPS/D-GalN challenge, the level was significantly lower in MT (−/−) than in MT (+/+) mice. As compared to vehicle administration, LPS/D-GalN administration elicited an increase in the plasma level of von Willebrand factor in both genotypes of mice. Further, in the presence of LPS/D-GalN, the level was significantly greater in MT (−/−) than in MT (+/+) mice ([71]. ## 5.1. Role of MT in Lung Injury Related to LPS Previous as well as our recent studies have shown the expression of MT in the lung [39, 42]. Immunohistopathological examination led to the detection of immunoreactive MT-I/II proteins in the lungs in endothelial and alveolar epithelial cells of MT (+/+) mice, whereas they were not detected in those of MT (−/−) mice. Furthermore, the expression was confirmed to be enhanced by oxidative stimuli like LPS and ozone (O3) exposure (data not shown).The intratracheal instillation of LPS produces a well-recognized model of acute lung injury, leading to the activation of alveolar macroghages, tissue infiltration of neutrophils, and interstitial edema [43]. Although the inhalation of LPS has been reported to induce MT expression in the lung in vivo [39, 42], there is no evidence regarding the direct contribution of MT in acute lung injury related to LPS.MT (−/−) and MT (+/+) mice were administered vehicle or LPS (125 μg/kg) intratracheally. Thereafter, the cellular profile of the bronchoalveolar lavage (BAL) fluid, pulmonary edema, lung histology, expression of proinflammatory molecules, and nuclear localization of nuclear factor-κ B (NF-κ B) in the lung were evaluated. As a result, MT (−/−) mice were more susceptible than MT (+/+) mice to neutrophilic lung inflammation and lung edema, which was induced by intratracheal challenge with LPS. 
After LPS challenge, MT deficiency enhanced the vacuolar degeneration of pulmonary endothelial and type I alveolar epithelial cells, and caused a focal loss of the basement membrane. However, unexpectedly, LPS treatment induced no significant differences neither in the enhanced expression of proinflammatory cytokines and chemokines, nor in the activation of the NF-κ B pathway in the lung between the two genotypes. Lipid peroxide levels in the lungs were significantly higher in LPS-treated MT (−/−) than LPS-treated MT (+/+) mice. These findings suggest that MT protects against acute lung injury related to LPS. The effects are possibly mediated via the enhancement of pulmonary endothelial and epithelial integrity, not via inhibition of the NF-κ B pathway [44].Next, MT (−/−) and MT (+/+) mice were administered vehicle or LPS (30 mg/kg) intraperitoneally. Thereafter, coagulatory parameters, organ histology (lung, liver, and kidney), and the local expression of proinflammatory molecules were evaluated. As a result, compared with MT (+/+) mice, MT (−/−) mice showed a significant prolongation of the prothrombin time (PT) and activated partial thromboplastin time (APTT), a significant increase in the levels of fibrinogen and fibrinogen/fibrin degradation products, and a significant decrease in activated protein C, after LPS treatment. LPS induced inflammatory organ damage in the lung, kidney, and liver in both genotypes of mice. The damage including neutrophil infiltration in the organs was more prominent in MT (−/−) than MT (+/+) mice after LPS treatment. In both genotypes of mice, LPS enhanced the protein expression of interleukin (IL)-1β, IL-6, granulocyte/macrophage-colony-stimulating factor, macrophage inflammatory protein (MIP)-1α, MIP-2, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant (KC) in the lung, kidney, and liver and circulatory levels of IL-1β, IL-6, MIP-2, and KC. 
In overall trends, however, the levels of these proinflammatory proteins were greater in MT (−/−) than in MT (+/+) mice after LPS challenge. Our results suggest that MT protects against coagulatory and fibrinolytic disturbance and multiple organ damage including lung injury induced by LPS, at least partly, via inhibition of the local expression of proinflammatory proteins in this model (Figure 1) [45]. Nonetheless, its underlying mechanistic pathways including new ones (e.g., Toll-like receptors [46], NALP inflammasomes [47], neurotensin [48], RANK-RANKL [49]) remain to be explored in future.Figure 1 Hypothesized mechanisms of cytoprotection of MT in LPS-related inflammation. Figure reproduced with some modifications with permission from FASEB journal [45]. ## 5.2. Role of MT in Allergic Inflammation Bronchial asthma is a complex syndrome, characterized by obstruction, hyperresponsiveness, and persistent inflammation of the airways. Inflammation in asthma is characterized by an accumulation of eosinophils, lymphocytes, and neutrophils in the bronchial wall and lumen [50–52]. The mechanisms via which inflammatory cells alter airway function in asthmatic conditions include the release of Th2 cytokines (IL-4, IL-5, and IL-13) and chemotactic mediators such as thymus and activation-regulated chemokine, macrophage-derived chemokine, and eotaxin, and various proteases as well as the generation of reactive oxygen species. Thus, next, we determined the role of MT in allergic airway inflammation induced by ovalbumin (OVA) using MT (−/−) mice. MT (−/−) and MT (+/+) mice were intratracheally challenged with OVA (1 μg/body) biweekly 3 times. Thereafter, the cellular profile of the BAL fluid, lung histology, and expression of proinflammatory molecules in the lung were evaluated. After the final OVA challenge, significant increases were noted in the numbers of total cells, eosinophils, and neutrophils in BAL fluid in MT (−/−) mice compared to those in MT (+/+) mice. 
Histopathologically, in the presence of OVA, the number of inflammatory cells including eosinophils and neutrophils in the lung was larger in MT (−/−) than in MT (+/+) mice. The protein level of IL-1β was significantly greater in MT (−/−) than in MT (+/+) mice after OVA challenge. Immunohistochemistry showed that the formations of 8-hydroxy-2′-deoxyguanosine, a proper marker of oxidative DNA damage, and nitrotyrosine in the lung were more intense in MT (−/−) than in MT (+/+) mice after OVA challenge. These results indicate that endogenous MT protects against allergic airway inflammation induced by OVA, at least partly, via suppression of the enhanced lung expression of IL-1β and via its antioxidative potential [53]. ## 5.3. Role of MT in Oxidative Lung Injury Ozone (O3) is a highly toxic principal oxidant found in urban environments throughout the world. Experimental research has shown that O3 inhalation causes airway inflammation/injury in vivo [54]. Furthermore, O3 is a strong oxidizing agent that can be rapidly converted into a number of ROS, including hydrogen peroxide [55, 56]. In fact, O3-induced lung inflammation/injury comprises oxidative stress-related tissue injury [57–59]. Also, O3 exposure reportedly results in oxidative stress in the airway, possibly through the disruption of iron homeostasis [59]; iron can increase oxidant generation after O3 interaction with aqueous media and produce hydroxyl radicals [60, 61]. On the other hand, lung expression of MT is reportedly induced by O3 exposure in vivo [62, 63]. Thus, we next examined the role of MT in lung inflammation induced by subacute exposure to O3 using MT (−/−) mice. After subacute exposure to O3 (0.3 ppm), the cellular profile of BAL fluid, pulmonary edema, lung histology, and expression of proinflammatory molecules in the lung were evaluated. Exposure to O3 induced lung inflammation and enhanced vascular permeability, which was significantly greater in MT (−/−) than in MT (+/+) mice. 
Electron microscopically, O3 exposure induced the vacuolar degeneration of pulmonary endothelial and epithelial cells, and interstitial edema with focal loss of the basement membrane, which was more prominent in MT (−/−) than in MT (+/+) mice. O3-induced lung expression of IL-6 was significantly greater in MT (−/−) than in MT (+/+) mice; however, lung expression of chemokines such as eotaxin, macrophage chemoattractant protein-1, and keratinocyte-derived chemoattractant was comparable between both genotypes of mice in the presence of O3. Following O3 exposure, the formation of oxidative stress-related molecules/adducts, such as heme oxygenase-1, inducible nitric oxide synthase, 8-OHdG, and nitrotyrosine, in the lung was significantly greater in MT (−/−) than in MT (+/+) mice. Collectively, MT protects against O3-induced lung inflammation, at least partly, via the regulation of pulmonary endothelial and epithelial integrity and its antioxidative property [64].

## 5.4. Role of MT in Lethal Liver Injury

The liver has high levels of Zn- and Cu-bound MT and a high capacity to regenerate. MT is reportedly involved in hepatocyte regeneration after partial hepatectomy [64–67] and chemical injury [68]. Similarly, previous studies have shown that induction of MT can protect animals from the hepatotoxicity of several chemicals, such as ethanol, carbon tetrachloride, acetaminophen, and cadmium [69]. Hepatic dysfunction due to liver disorders such as viral hepatitis, liver cirrhosis, and hepatocellular carcinoma is frequently associated with lethal coagulopathy such as DIC. Kimura et al. previously reported that MT protects against acute liver injury induced by LPS/D-GalN through the suppression of TNF-α production/release, using MT (−/−) mice [41]. An animal model of acute (lethal) liver injury using LPS/D-GalN develops severe coagulopathy with histological evidence of DIC [70], quite similar to that in humans.
Furthermore, most coagulatory factors, like MT, are produced mainly in the liver, suggesting a possible role of MT in the pathogenesis of hepatic disorder-related coagulopathy. Moreover, our abovementioned study implicated MT in the pathophysiology of coagulatory disturbance [45]. To expand the findings by Kimura et al., therefore, we explored the role of MT in coagulatory disturbance during acute liver injury induced by LPS/D-GalN. Both MT (−/−) and MT (+/+) mice were injected intraperitoneally with 30 μg/kg of LPS and 800 mg/kg of D-GalN dissolved in vehicle. Five hours after the injection, blood samples were collected, and platelet counts and coagulatory parameters were measured. LPS/D-GalN challenge significantly decreased platelet number in both genotypes of mice in a time-dependent fashion as compared to vehicle challenge. However, in the presence of LPS/D-GalN, the decrease was significantly greater in MT (−/−) than in MT (+/+) mice. LPS/D-GalN challenge caused prolongation of plasma coagulatory parameters such as PT and APTT in both genotypes of mice as compared with vehicle challenge. In the presence of LPS/D-GalN, PT and APTT were longer in MT (−/−) than in MT (+/+) mice. The level of fibrinogen significantly decreased 5 hours after LPS/D-GalN challenge in both genotypes of mice as compared to vehicle challenge. After LPS/D-GalN challenge, the level was significantly lower in MT (−/−) than in MT (+/+) mice. As compared to vehicle administration, LPS/D-GalN administration elicited an increase in the plasma level of von Willebrand factor in both genotypes of mice. Further, in the presence of LPS/D-GalN, the level was significantly greater in MT (−/−) than in MT (+/+) mice [71].

## 6. Conclusion

MTs play important roles under physiological conditions, such as in heavy metal homeostasis and radical scavenging.
Furthermore, through a genetic approach, MT has been shown to protect against various types of inflammatory conditions in mice, including LPS-related, allergic, and oxidative inflammation. These findings implicate MT induction/enhancement, and/or zinc supplementation to induce/enhance MT, as possible therapeutic options for inflammatory diseases, although additional research is needed to establish their clinical utility. --- *Source: 101659-2009-05-11.xml*
2009
# Identification of Sports Athletes’ High-Strength Sports Injuries Based on NMR **Authors:** Wenyong Zhou; Huan Chu **Journal:** Scanning (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1016628 --- ## Abstract In order to study high-strength sports injuries, this paper proposes an NMR-based method for identifying sports athletes’ high-strength sports injuries. The method is evaluated through a questionnaire survey of outstanding sports dance athletes enrolled from 2019 to 2021 at the Institute of Physical Education; the athletes are 18-25 years old, with 3-5 years of sports dance training. The results show that, compared with other recognition methods, the NMR-based recognition method has higher accuracy and efficiency. Athletes are highly susceptible to injury during sports; to reduce the degree of injury, athletes should strictly follow movement standards during training so as to avoid serious injury. --- ## Body ## 1. Introduction Any injury occurring in the course of sports training is closely related to the sport and its technical characteristics. For example, sports dance requires athletes to perform many somersaults, jumps, supports, and other actions, which can easily cause injuries to the waist, shoulders, and wrists of sports dance athletes [1]. Tennis players and javelin throwers are prone to “tennis elbow.” The main causes of injury are improper training methods, poor physical fitness, wrong technical movements, athletes’ lack of self-protection awareness, insufficient attention to warm-up activities, accumulated bodily fatigue, an inappropriate environment, and poor training and competition organization [2]. Sports injury can be divided into acute sports injury and chronic injury (Figure 1).
Acute sports injury can be caused by external factors, such as fierce physical confrontation with other athletes, or by the athlete’s own factors; muscle strain and ligament strain are common in training. Acute sports injury can be classified according to the specific location of the injury, for example (1) skin damage, (2) muscle injury, (3) joint injury, and (4) nerve injury, or according to the type of injury, such as strain, dislocation, and fracture. Chronic sports injury may be caused by local overburden, accumulation of repeated minor injuries, or failure to treat acute injuries in time or with proper methods. The characteristics of chronic injury are slow onset, gradually deepening symptoms, and long recovery time, as in fatigue periostitis and patella strain [3].

Figure 1 Sports injury.

Khodov et al. pointed out that sports dance competition and training cause many injuries, mainly soft tissue injuries: the knee, ankle, waist, back, and shoulder are easily strained, and the toes are easily abraded and bruised. Most injuries to ligaments, tendons, muscles, and joint capsules were soft tissue injuries. Chronic strain, repeated accumulation of minor injuries, and unhealed major injuries may cause chronic injuries to sports dancers [4]. Novakovic et al. pointed out that the psychological causes of sports injury mainly include anxiety, stress response, personality characteristics, motivation, life events, psychological preparation, and psychological fatigue; intervention measures mainly include guiding athletes’ correct attribution, setting feasible rehabilitation goals, mastering psychological coping skills, and problem-oriented analysis [5]. Gkoura et al.
started with a mechanism analysis of sports injury; focused on the circular relationship among muscle balance, abnormal posture, movement pattern, and injury; and pointed out the key factors of posture and movement pattern, as well as the role of rehabilitative functional exercise in the human motion system and the basic principles of injury rehabilitation. They then put forward a process of rehabilitative functional exercise covering posture, movement, and muscle balance assessment, mainly including the assessment process and detailed requirements, targeted treatment methods and processes for muscle tension and muscle weakness, proprioceptive training, and the key points of integration training [6]. Siudem et al. pointed out that the hot spots of sports injury research fall into four main categories: sport-related concussion, anterior cruciate ligament injury, joint instability, and overuse injury; each line of research focuses closely on the mechanism of sports injury, injury prevention, treatment, rehabilitation, and the rehabilitation standards for return to play [7]. Derman et al. proposed a recognition method based on linear discrimination and ultrasonic image features; it has good recognition efficiency, but its recognition accuracy is relatively low [8]. Wang and Li proposed a recognition method based on improved spectral clustering, which has a certain recognition effect, but its accuracy is not high [9]. Sollerhed et al. proposed a recognition method based on wavelet coefficient Hu, which obtains a more accurate recognition effect but takes a long time [10]. Therefore, this paper studies an NMR-based recognition method and investigates the sports dancers of the Institute of Physical Education, in order to improve the accuracy and efficiency of recognition.

## 2. Athletes’ High-Strength Sports Injury Identification

Firstly, it is necessary to perform gray-scale conversion on the sports injury image.
For a color image, each pixel can be represented by 3 bytes, corresponding to the brightness of the 3 components R, G, and B [11]. When the 3 components are equal, the image is a gray-scale image; otherwise, it is a color image. The gray-scale conversion formula is

Gray(i, j) = 0.299·R(i, j) + 0.587·G(i, j) + 0.114·B(i, j). (1)

After conversion, the 24-bit representation of the image is unchanged. The main purpose of the gray conversion is to improve the efficiency of damage recognition [12]. In order to improve the accuracy of damage identification, it is necessary to extract the contour of the damaged region. In this study, mathematical morphology and adaptive thresholding are used for contour extraction, and a curve fitting method is then applied to obtain the damage contour [12]. The active contour model used is a snake model, which yields the contour of the damaged part. When the snake points reach an equilibrium position, the contour energy attains a minimum, and the obtained contour converges to the edge of the identified damaged part. Therefore, identifying the damaged part amounts to minimizing the contour energy

E(C) = αE_in(C) + βE_ex(C), (2)

where α and β are weights and E_in(C) and E_ex(C) are the internal and external energy, respectively. After the damage contour is obtained, the damaged part can be preliminarily identified using the K–L (Karhunen–Loève) transformation. From the contour, the number of damaged pixels and other relevant information can be obtained, and a digital matrix is built from this information [13]. To improve the accuracy of locating the damage, each image is arranged column by column into a 64-dimensional feature vector.
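The gray-scale conversion above can be sketched directly in code. This is a minimal illustration, assuming the image is a nested list of (R, G, B) tuples; the function name is illustrative and not from the paper.

```python
# A minimal sketch of the gray-scale conversion in Eq. (1), assuming the image
# is a nested list of (R, G, B) tuples; the function name is illustrative.
def to_grayscale(rgb_image):
    """Apply Gray = 0.299*R + 0.587*G + 0.114*B to every pixel."""
    return [
        [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
        for row in rgb_image
    ]

# Because the three weights sum to 1, a pixel whose three components are equal
# keeps (essentially) the same value, matching the remark that such pixels
# already form a gray-scale image.
image = [[(255, 0, 0), (128, 128, 128)]]
gray = to_grayscale(image)
```

The weights 0.299/0.587/0.114 are the standard luminance coefficients, so green contributes most to perceived brightness.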
For m images X = {x_1, x_2, ⋯, x_m}, the overall mean vector of the images is

μ = (1/m) Σ_{i=1}^{m} x_i. (3)

Arrange the eigenvalues of the covariance matrix in decreasing order, select the first J nonzero eigenvalues λ, and extract their corresponding eigenvectors. Selecting the eigenvectors accounting for the first 60% of the eigenvalues retains most of the damage-image information.

### 2.1. Pixel Calculation of Damage Location Based on NMR

Through the above analysis, the damage location can be preliminarily identified, but the exact location cannot be obtained. Therefore, NMR is further used to identify the damaged part more precisely and to calculate the area of the damaged region. In NMR-based image damage recognition, each candidate solution is treated as a fish, and all solutions form a solution set. There are two ways to obtain the final solution from the set: taking the cluster center as the solution, or taking the clustering result as the solution [9]. To improve recognition accuracy, this paper uses the cluster center as the solution. That is, the objective function of a fish is

J(g) = Σ_{i=1}^{E} ‖V_i − x_k‖² · d(x, y), (4)

where g is the number of cluster centers, x_k is the clustered object, and V_i are the pixel cluster centers. The clustering point minimizing J(g) is taken as the best one, which achieves the segmentation of the damage image [14]. After clustering, the gray pixel values of the image correspond to the original pixels, and coloring the pixels by cluster makes different colors in the image represent different clusters.
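The cluster-centre search behind the objective function above can be sketched with plain k-means on 1-D pixel gray values. This is a hedged illustration: the paper's fish-swarm style search for the best centres is replaced here by standard Lloyd iterations, and all names are illustrative rather than from the paper.

```python
# A minimal sketch of the cluster-centre search behind Eq. (4): plain k-means
# on 1-D pixel gray values. The fish-swarm search is replaced by standard
# Lloyd iterations; all names are illustrative, not from the paper.
def kmeans_1d(pixels, centers, iters=20):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest cluster centre.
        clusters = [[] for _ in centers]
        for p in pixels:
            k = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[k].append(p)
        # Update step: each centre moves to the mean of its cluster, which
        # lowers the summed squared distance that Eq. (4) penalizes.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of gray values, e.g. dark (damaged) vs. bright.
pixels = [12, 15, 10, 200, 210, 205]
centers, clusters = kmeans_1d(pixels, centers=[0, 255])
```

Once pixels are grouped, the representative colour of each cluster is simply the per-channel mean over its members, which matches the RGB averaging described in the text.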
Thus, the representative RGB value of each cluster can be calculated by accumulating the RGB values of the pixels in that cluster and dividing by the number of pixels.

## 3. Research and Analysis

### 3.1. Research Object

A questionnaire survey was conducted on the professional athletes of the sports dance major from 2019 to 2021 in the Institute of Physical Education [15].
The athletes were 18-25 years old, and their sports dance training period was 3-5 years.

### 3.2. Research Methods

We conducted face-to-face interviews with experts in aerobics, sports dance, sports injury, sports statistics, and sports art at the Institute of Physical Education, solicited their opinions on the research content, the questionnaire, and other aspects, and obtained valuable information. At the same time, while the questionnaire was being distributed, the coaches and principals of sports dance examinee training institutions in various colleges and universities were given an in-depth understanding of the relevant contents of this study, which also yielded valuable information [16]. In order to fully understand the sports injuries of college sports dance candidates, a questionnaire for college sports dance candidates in 2021 was designed according to a large amount of data, the opinions and suggestions of relevant experts, and the characteristics of the survey object. Ten experts (associate professors or professors) were employed to evaluate the questionnaire design, content design, and structure design on a five-grade scale: (a) very appropriate, (b) relatively appropriate, (c) average, (d) inappropriate, and (e) very inappropriate. After the first round of evaluation, the experts put forward many valuable opinions. After the questionnaire was revised, the same experts were asked to evaluate it again. The experts raised no objections to the questionnaire design, content, or structure: 27.7% of ratings were “very appropriate” and 58.7% “relatively appropriate” [17]. The reliability test adopted the retest method: two weeks after the questionnaire was issued, 50 candidates were randomly selected from the sports dance candidates and sent the questionnaire again by e-mail.
After the questionnaires were recovered, scores were assigned to each option and the test-retest correlation coefficient was calculated (r=0.882, p<0.01), indicating that the survey results have high reliability [18].

### 3.3. Data Statistics

After the questionnaires were collected, the data were checked, invalid questionnaires were eliminated, and all survey results were carefully counted. The data were analyzed with SPSS, mainly using the chi-square test, factor analysis, and other statistical methods, which provided strong data support for this paper. The questionnaire information was sorted and summarized in Excel, in which the database was established and statistical analyses were tabulated.

### 3.4. Result Analysis

Figure 2 shows the time consumption of the different recognition methods. It can be seen that, for every number of images to be recognized, the NMR-based recognition method is the fastest of the three methods. Therefore, the method studied in this paper is more efficient than the other methods, partly because the image gray-scale conversion performed before recognition improves recognition efficiency.

Figure 2 Time consumption of the damage identification method based on NMR.

#### 3.4.1. Comparison of Injury Rates of Athletes of Different Genders

Table 1 shows that the injury rate of women is 52.6% and that of men is 51%; in both Latin dance and modern dance, the injury rate of women is higher than that of men. Women’s dance steps are complex and fancy, so the demands on women’s flexibility and body coordination are higher than on men’s, and the injury rate of women is accordingly higher.

Table 1 Comparison of injury rates of athletes of different genders.
| | Number investigated | Number injured | Injury rate |
|---|---|---|---|
| Female | 123 | 66 | 52.6% |
| Male | 60 | 30 | 51% |
| Total | 183 | 96 | 52.5% |

As can be seen from Figure 3, Latin dancers suffer more injuries than modern dancers in general: the injury rate of female Latin dancers is higher than that of female modern dancers, and that of male Latin dancers is also higher than that of male modern dancers.

Figure 3 Comparison of injury rates of athletes of different dances.

The conclusions are as follows: (1) the probability of injury in Latin dance competition and training is higher than in modern dance; (2) the injury rate of women is higher than that of men, whether they are Latin dancers or modern dancers. These results are closely related to the technical style characteristics of the two dances. Compared with modern dance, Latin dance has more complex and changeable technical movements, and its music rhythm is more cheerful and passionate, all of which place higher demands on athletes [19]. Whether in Latin dance or modern dance, women’s technical movements are richer, since the dances mainly showcase the woman; women’s coordination and flexibility are highly demanded, and it is unsurprising that their injury rate is higher than that of men. According to the classification of injury nature, 112 Latin dancers were counted, including 47 men and 65 women. From Table 2, we find that the skin abrasion rate caused by Latin dance competition and training is as high as 65.6% for women and 58.6% for men. Latin dance involves varied poses: in addition to the basic dance steps, there are many different styles of modeling actions, such as the man kneeling at the end of the Paso Doble. Repeated training can easily cause skin abrasion or even subcutaneous bleeding.

Table 2 Investigation and research on injury nature of Latin dancers.
| Injury | Women: number injured | Women: injury probability | Men: number injured | Men: injury probability |
|---|---|---|---|---|
| Skin abrasion | 20 | 65.6% | 15 | 58.6% |
| Muscle strain | 10 | 51.7% | 14 | 54.2% |
| Muscle contusion | 11 | 45.6% | 11 | 43.6% |
| Ligament injury | 21 | 46.6% | 6 | 40.4% |
| Dislocation of joint | 2 | 1.8% | 1 | 4.3% |
| Fracture | 1 | 1.6% | 0 | 0 |

In Latin dance, the probability of muscle strain and muscle contusion is also very high. The probability of muscle strain in women is 51.7% and of muscle contusion 45.6%; in men, the probability of muscle strain is as high as 54.2% and of muscle contusion 43.6%. The results show that the probability of muscle strain and muscle contusion of women is higher than that of men [20]. This is because Latin dance mainly displays the woman’s dance posture; except for the Paso Doble, women’s movements are more difficult and complex than men’s. The frequent muscle control and stretching during competition and training easily cause muscle strain, while the probability of joint dislocation and fracture during competition and training is very low.

#### 3.4.2. Cause Analysis of Sports Injury

Many factors can cause sports injuries to sports dancers, such as (1) no warm-up, or perfunctory warm-up, before exercise; (2) poor technical level; (3) poor physical quality; (4) unscientific training methods with excessive exercise volume and intensity; (5) unreasonable, overly long training time; (6) choosing difficult dance movements beyond one’s own level; (7) uncoordinated cooperation between male and female partners; (8) unreasonable music rhythm; (9) declining physical fitness during competition; (10) inability to adjust one’s own state before competition; and (11) collision with other competitors during competition. From Table 3, the investigation of sports dance athletes found that the main factors leading to injury are insufficient warm-up preparation, poor physical fitness, unscientific training methods, overly difficult technical actions, and poor pre-competition condition [21]. (1) Not paying attention to warm-up activities, or insufficient warm-up, is an important cause of sports injuries. Both Latin dance and modern dance require a high degree of bodily coordination, and Latin dance in particular places very high demands on movement speed and strength. Without preparatory activities, or without systematic ones, the nervous system cannot reach the needed excitability, and stiff, uncoordinated muscles and joints easily lead to injuries. The survey found that many sports dancers are dismissive of preparatory activities; some athletes mistakenly think that doing preparatory activities makes them appear low-level and directly start training their unskilled routines or difficult moves. Another situation is that sports dance athletes lack targeted special warm-up and perform only basic preparation activities.
Sports dance comprises two categories and ten dance types with different styles; simple, uniform preparation activities cannot meet the requirements of this sport-art event. (2) The physical quality of sports dancers is poor. The investigation of the athletes at the Institute of Physical Education found that, although the institute has set up a ballet body course, the athletes generally lack basic and special physical quality training, so it is difficult to effectively improve strength, explosiveness, speed, and endurance, which is unfavorable to the development of sports dance. As sports dance competition develops continuously toward difficulty and beauty, competition becomes increasingly fierce, and athletes’ poor physical quality makes it difficult to meet the technical requirements and competition intensity. Except for a few athletes in sports dance training institutions who invite professional fitness coaches to conduct physical training, most have the same problem, or an even more serious one.

Table 3 Factors of sports injury of municipal sports dancers.

| Cause of injury | Number of persons | Proportion |
|---|---|---|
| Insufficient warm-up preparation | 45 | 23% |
| Poor physical fitness | 36 | 19.1% |
| Unscientific training methods | 31 | 16.5% |
| Overly difficult technical actions | 27 | 14.3% |
| Poor pre-competition preparation | 19 | 11.4% |
| Other | 25 | 12.7% |
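The chi-square analysis mentioned in the data-statistics subsection can be illustrated on the Table 1 counts (66 of 123 women and 30 of 60 men injured). This is a hedged, hand-rolled sketch in pure Python; a real analysis would use a statistics package, and the function name is illustrative.

```python
# A hand-rolled 2x2 chi-square test of independence on the Table 1 counts,
# illustrating the chi-square analysis the data-statistics subsection
# mentions. Pure Python sketch; names are illustrative.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic of independence for the 2x2 table [[a, b], [c, d]]."""
    table = [[a, b], [c, d]]
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of gender and injury status.
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Women: 66 injured, 57 uninjured; men: 30 injured, 30 uninjured (Table 1).
chi2 = chi_square_2x2(66, 57, 30, 30)
```

The resulting statistic is well below the 3.84 critical value for one degree of freedom at the 0.05 level, so the reported 52.6% vs. 51% gap is small relative to the sample size.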
Some athletes mistakenly believe that doing preparatory activities makes them look unskilled, and they start directly with unfamiliar routines or difficult movements. In other cases, athletes lack targeted, sport-specific warm-up preparation and perform only basic preparatory activities. Sports dance comprises two categories and ten dance types with different styles; simple, uniform preparatory activities simply cannot meet the requirements of this sport-art discipline.

(2) The physical quality of sports dancers is poor. The investigation of the athletes in the Institute of Physical Education found that, although the institute offers a ballet body course, the athletes generally lack both basic and special physical quality training, so it is difficult to effectively improve strength, explosiveness, speed, and endurance, which is unfavorable to the development of sports dance. As sports dance competition develops continuously toward greater difficulty and beauty, competition becomes increasingly fierce, and athletes' poor physical quality makes it difficult to support the technical requirements and competition intensity. Except for the few athletes in sports dance training institutions who engage professional fitness coaches, most athletes have the same problem, or an even more serious one.

Table 3: Factors of sports injury of municipal sports dancers.

| Cause of damage | Number of persons | Proportion |
|---|---|---|
| Insufficient warm-up preparation | 45 | 23% |
| Poor physical fitness | 36 | 19.1% |
| Unscientific training methods | 31 | 16.5% |
| Difficult choice of technical action | 27 | 14.3% |
| Poor preparation before the game | 19 | 11.4% |
| Other | 25 | 12.7% |

## 4. Discussion

In competition and performance, athletes must first warm up well so that the nervous system, joints, and muscles become active and excited. Training programs and plans should be formulated scientifically and systematically, and special physical quality training (strength, explosiveness, flexibility, and endurance) should be carried out in combination with the characteristics of the discipline to better support its development.
Athletes should choose technical difficulty reasonably, avoid overreaching, and resolutely follow the principle of step-by-step training. They should learn to adjust their physical state before a competition or performance: reduce heavy-load training about a week before the competition, avoid prolonged training, do some low-intensity adaptive exercises, get familiar with the music rhythm, review the competition routine with their dance partners, and adjust diet and sleep in preparation.

Preparatory activities should consist of free-hand exercises, stretching exercises, and basic step exercises. This not only improves the flexibility of joints and muscles but also markedly raises the excitability of the nervous system, effectively prevents injuries caused by stiff, uncoordinated joints and muscles or by inattention, and greatly improves training efficiency. The intensity of warm-up activities should be kept low to medium [22]: athletes should feel warm and sweat slightly without becoming tired. Given the characteristics of the sport, about 10 minutes of warm-up per daily training session is appropriate. In competition, athletes should warm up according to the actual situation, the dance types, and the weather.

In training, plans for athletes of different levels should be tailored to the individual and must follow the principles of gradual, persistent sports training. Annual, monthly, and weekly training plans should be formulated for competitions and performances, so as to avoid last-minute cramming before a competition.
In training, a greater training volume does not mean a faster improvement in technical level; improvement is a cumulative process, and excessive training only causes physical and mental fatigue, reduces enthusiasm, and increases the risk of injury.

Massaging the joints can enhance the elasticity of the ligaments and increase the joints' range of motion; for damaged joints, ligaments, and tendons in particular, it greatly accelerates recovery. Massage and relaxation can focus on the parts that are easily injured, such as the soleus, gastrocnemius, and quadriceps femoris of the lower limbs in Latin dance, or on whichever parts feel fatigued. When athletes feel very tired, their muscles and joints need massage and relaxation [23]. Massage can be carried out together with stretching after competition and training, or after bathing or before going to bed. During massage, the force should go from light to heavy, and the strength and the massaged parts should be adjusted according to the athlete's feedback.

## 5. Conclusion

Athletes are inevitably injured during sports, and identifying injury images helps improve the effectiveness of treatment. In this paper, the damaged parts are identified based on NMR; the proposed method helps improve identification efficiency and accuracy. To reduce the degree of injury, athletes should strictly follow movement standards during training so that serious injury is avoided. The strategies for dealing with the risk of acute sports injury are risk control and risk transfer.
There are two approaches to risk control: taking preventive measures before a risk event occurs, and taking mitigation measures during and after it. The main measure for transferring the risk of acute sports injury is insurance. As an advanced noninvasive, nonradioactive diagnostic method, NMR provides doctors with an effective and accurate auxiliary diagnosis, and its results are an important basis for arthroscopic examination. Its cost is high, however, and a certain proportion of false positives and false negatives remains; but with falling examination costs, the development of MRI technology, and the accumulation of clinical data, NMR will become the first choice for the early diagnosis of sports injuries.

In the future, we will compile archives of acute sports injury risk events of sports dance athletes, study the quantitative probability of such injuries, and establish a risk model of athletes' acute sports injury, in order to obtain the correlation between sports injury and sports performance.

---

*Source: 1016628-2022-07-15.xml*
# Identification of Sports Athletes’ High-Strength Sports Injuries Based on NMR

**Authors:** Wenyong Zhou; Huan Chu

**Journal:** Scanning (2022)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1016628

**Source file:** 1016628-2022-07-15.xml
---

## Abstract

In order to study high-strength sports injuries in sports, this paper proposes an NMR-based method for identifying the high-strength sports injuries of athletes. A questionnaire survey was conducted among outstanding sports dance majors at the Institute of Physical Education from 2019 to 2021; the athletes were 18-25 years old, with 3-5 years of sports dance training. The results show that, compared with other recognition methods, the NMR-based method has higher accuracy and efficiency, and thus helps improve both. Athletes are highly susceptible to injury during sports; to reduce the degree of injury, movement standards should be strictly followed during training so that serious injury is avoided.

---

## Body

## 1. Introduction

Any injury occurring in the course of sports training is closely related to the sport and its technical characteristics. For example, sports dance requires athletes to perform many somersaults, jumps, supports, and other actions, which easily cause injuries to the waist, shoulders, and wrists [1]; tennis players and javelin throwers are prone to “tennis elbow.” The main causes of injury are improper training methods, poor physical fitness, wrong technical movements, a lack of self-protection awareness, inattention to warm-up activities, accumulated bodily fatigue, an unsuitable environment, and poor organization of training and competition [2]. Sports injury can be divided into acute injury and chronic injury (Figure 1). Acute sports injury can be caused by external factors, such as fierce physical confrontation with other athletes, or by the athlete's own factors; muscle strain and ligament strain are common in training.
Acute sports injury can be classified by the specific location of the injury, namely (1) skin damage, (2) muscle injury, (3) joint injury, (4) nerve injury, and so on, or by the type of injury, such as strain, dislocation, and fracture. Chronic sports injury may be caused by local overload, by the accumulation of repeated minor injuries, or by acute injuries that were not treated in time or were treated improperly. Chronic injury is characterized by slow onset, gradually deepening symptoms, and a long recovery time, as in fatigue periostitis and patella strain [3].

Figure 1: Sports injury.

Khodov et al. pointed out that sports dance competition and training cause many injuries, mainly soft tissue injuries: the knee, ankle, waist, back, and shoulder are easily strained, and the toes are easily abraded and bruised; injuries of the ligaments, tendons, muscles, and joint capsules are mostly soft tissue injuries. Chronic strain, the repeated accumulation of minor injuries, and unhealed major injuries may cause chronic injuries in sports dancers [4]. Novakovic et al. pointed out that the psychological causes of sports injury mainly include anxiety, stress response, personality characteristics, motivation, life events, psychological preparation, and psychological fatigue; intervention measures mainly include guiding athletes toward correct attribution, setting feasible rehabilitation goals, mastering psychological coping skills, and problem-oriented analysis [5]. Gkoura et al. started from the mechanism of sports injury; focused on the circular relationship among muscle balance, abnormal posture, movement pattern, and injury; and identified the key factors of posture and movement pattern, the role of rehabilitative functional exercise for the human motor system, and the basic principles of injury rehabilitation.
They then proposed a process of functional rehabilitation exercise covering posture, movement, and muscle balance assessment, mainly including the assessment process and its detailed requirements, targeted treatment methods and processes for muscle tension and muscle weakness, proprioceptive training, and the key points of integration training [6]. Siudem et al. pointed out that the hot spots of sports injury research fall into four categories, namely sport-related concussion, anterior cruciate ligament injury, joint instability, and overuse injury, with each line of research closely focused on the mechanism of injury, injury prevention, treatment, rehabilitation, and the rehabilitation standards for returning to the field [7]. Derman et al. proposed a recognition method based on linear discrimination and ultrasonic image features; it has good recognition efficiency but relatively low accuracy [8]. Wang and Li proposed a recognition method based on improved spectral clustering, which has a certain recognition effect but limited accuracy [9]. Sollerhed et al. proposed a recognition method based on wavelet Hu coefficients, which achieves more accurate recognition but takes a long time [10]. Therefore, this paper studies an NMR-based recognition method and investigates and analyzes the sports dancers of the Institute of Physical Education, in order to improve recognition accuracy and efficiency.

## 2. Athletes’ High-Strength Sports Injury Identification

First, the sports injury image must be converted to gray scale. Each pixel of a color image can be represented by 3 bytes, corresponding to the brightness of the three components R, G, and B [11]. When the three components are equal, the image is a gray-scale image; otherwise, it is a color image.
The gray-value conversion formula is

$$\mathrm{Gray}(i,j) = 0.299 \cdot R(i,j) + 0.587 \cdot G(i,j) + 0.114 \cdot B(i,j). \tag{1}$$

After conversion, the 24-bit representation of the image is unchanged. The main purpose of the gray conversion is to improve the efficiency of damage recognition [12].

To improve the accuracy of damage identification, the damage contour must be extracted. In this study, mathematical morphology and adaptive thresholding are used for contour extraction, and a curve-fitting method yields a curve that is the damage contour [12]. The damage active contour model is a snake model, which can capture the contour of the damaged part: when the snake points are at an equilibrium position, the energy attains a minimum and the obtained contour converges to the edge of the damaged part being identified. Therefore, to identify the damaged part, the contour energy must be minimized. The contour energy is

$$E(C) = \alpha E_{\mathrm{in}}(C) + \beta E_{\mathrm{ex}}(C), \tag{2}$$

where $\alpha$ and $\beta$ are weighting values and $E_{\mathrm{in}}(C)$ and $E_{\mathrm{ex}}(C)$ are the internal energy and external energy, respectively. After the damage contour is obtained, the damaged part can be preliminarily identified using the K-L (Karhunen-Loève) transform. With the contour in hand, the number of damaged pixels and other relevant information can be obtained, and a digital matrix is built from this information [13]. To improve the accuracy of locating the damage, each image is arranged into 64 feature vectors, concatenated column by column. Given $m$ images $X = \{x_1, x_2, \cdots, x_m\}$, the overall mean vector of the images is

$$\mu = \frac{1}{m} \sum_{i=1}^{m} x_i. \tag{3}$$

The eigenvalues $\lambda$ are arranged in decreasing order; after the arrangement, the first $J$ nonzero eigenvalues are selected and their corresponding eigenvectors are extracted.
The eigenvectors of the covariance matrix are then computed from these quantities, and the first 60% of the eigenvalues are selected, so that most of the information in the damage images is retained.

### 2.1. Pixel Calculation of Damage Location Based on NMR

Through the above analysis, the damage location can be preliminarily identified, but its exact position cannot yet be obtained. Therefore, NMR is further used to identify the damaged part, so as to locate it more accurately and to calculate the area of the damaged region. When NMR is used in image damage recognition, each candidate solution is treated as a fish, and all solutions form a solution set. There are two ways to find the final solution in the set: taking the cluster center as the solution, or taking the clustering result as the solution [9]. To improve recognition accuracy, this paper takes the cluster center as the solution; that is, the objective function of the fish is

$$J(g) = \sum_{i=1}^{E} \left( V_i - x_k \right)^2 \cdot d(x, y), \tag{4}$$

where $g$ is the number of cluster centers, $x_k$ is the clustered object, and $V_i$ are the pixel cluster centers. The point at which $J(g)$ attains its minimum is taken as the best clustering point, which serves the purpose of segmenting the damage image [14]. After clustering, the gray value of each pixel corresponds to its original pixel. Color rendering is then applied to the clustering result, so that different colors in the image represent different classes; the RGB value of each class of pixels is calculated by accumulating the RGB flux of that class and dividing by the total number of pixels.
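The gray conversion of Equation (1) and the cluster-center objective of Equation (4) can be sketched in a few lines of numpy. This is a minimal illustration on synthetic data standing in for NMR images; the function names, the quantile initialization, and the plain Lloyd iteration (with the distance weight taken as 1) are assumptions of the sketch, not the paper's implementation:

```python
import numpy as np

def to_gray(rgb):
    """Weighted gray-scale conversion of Equation (1)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def cluster_gray(gray, g=3, iters=20):
    """Cluster gray values around g centers V_i, minimizing the summed
    squared distance to the assigned pixels, in the spirit of the
    objective J(g) of Equation (4)."""
    x = gray.ravel().astype(float)
    # Deterministic quantile initialization (an assumption of this sketch).
    centers = np.quantile(x, (np.arange(g) + 0.5) / g)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then move the centers.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for i in range(g):
            if np.any(labels == i):
                centers[i] = x[labels == i].mean()
    j = np.sum((x - centers[labels]) ** 2)  # value of the objective
    return centers, labels.reshape(gray.shape), j

# Synthetic stand-in for a damage image: three gray regions plus noise.
rng = np.random.default_rng(1)
img = np.zeros((30, 30, 3))
img[:10] = [50, 50, 50]
img[10:20] = [120, 120, 120]
img[20:] = [220, 220, 220]
img += rng.normal(0, 2, img.shape)

gray = to_gray(img)
centers, labels, j = cluster_gray(gray, g=3)
print(np.sort(np.round(centers)))  # three centers, near 50, 120, and 220
```

Segmenting on the clustered labels then gives candidate regions whose pixel counts, and hence areas, can be read off directly, as Section 2.1 describes.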
## 3. Research and Analysis

### 3.1. Research Object

A questionnaire survey was conducted among the professional sports dance majors of the Institute of Physical Education from 2019 to 2021 [15]. The athletes were 18-25 years old, with 3-5 years of sports dance training.

### 3.2. Research Methods

We conducted face-to-face interviews with experts in aerobics, sports dance, sports injury, sports statistics, and sports art at the Institute of Physical Education; solicited their opinions on the research content, the questionnaire, and other aspects; and obtained valuable information.
At the same time, while the questionnaire was being distributed, we held in-depth discussions with the coaches and principals of sports dance examinee training institutions in various colleges and universities about the contents of this study and obtained valuable information [16].

To fully understand the sports injuries of college sports dance candidates, a questionnaire for the 2021 college sports dance candidates was designed on the basis of a large body of data, the opinions and suggestions of relevant experts, and the characteristics of the survey population.

Ten experts (associate professors or professors) were engaged to evaluate the questionnaire's design, content, and structure on a five-grade scale: (a) very appropriate, (b) relatively appropriate, (c) average, (d) inappropriate, and (e) very inappropriate. After the first round of evaluation, the experts put forward many valuable opinions; after the questionnaire was revised, the same experts evaluated it again. The experts raised no objections to the questionnaire design, content, or structure: 27.7% considered it very appropriate and 58.7% relatively appropriate [17].

The reliability test used the retest method. Two weeks after the questionnaire was issued, 50 candidates were randomly selected from the sports dance candidates and sent the questionnaire again by e-mail. After recovery, scores were assigned to each option and the test-retest correlation coefficient was calculated (r = 0.882, p < 0.01), indicating that the survey results are highly reliable [18].

### 3.3. Data Statistics

After the questionnaires were collected, the data were examined, invalid questionnaires were eliminated, and all survey results were carefully counted. The data were analyzed with SPSS, mainly using the chi-square test, factor analysis, and other statistical methods, which provided strong data support for this paper.
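As a concrete illustration of the chi-square test mentioned above, the statistic for the 2×2 gender-by-injury table implied by Table 1 (66 of 123 women and 30 of 60 men injured) can be recomputed by hand. This is an illustrative recomputation with a hand-rolled helper, not output of the paper's SPSS analysis:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table laid out as
    rows = groups and columns = (injured, not injured)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n          # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Women: 66 injured, 57 uninjured; men: 30 injured, 30 uninjured (Table 1).
chi2 = chi_square_2x2(66, 57, 30, 30)
print(round(chi2, 3))  # → 0.216
```

With one degree of freedom, the 5% critical value for this statistic is 3.841; the per-dance and per-injury-type counts in Tables 1 and 2 could be tested in the same way.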
The questionnaire information is sorted and summarized by Excel software, and the database is established on the software, and the software is used for statistical analysis. ### 3.4. Result Analysis Figure2 shows the time-consuming results of different recognition methods. From the figure, it can be seen that when the number of images to be recognized is different, the recognition method based on NMR is the shortest among the three methods. Therefore, it can be concluded that the method studied in this paper has faster recognition efficiency than other methods. Because this method has gone through image gray conversion before recognition, this step is conducive to improve the recognition efficiency.Figure 2 Time-consuming of damage identification method based on NMR. #### 3.4.1. Comparison of Injury Rates of Athletes of Different Genders Table1 shows that the injury rate of women is 52.6% and that of men is 51%. It shows that no matter in Latin dance, the injury rate of women in modern dance is higher than that of men. Women’s dance steps are complex and fancy, so the requirements for women’s flexibility and body coordination are higher than that of men. The injury rate of women is bound to be higher than that of men.Table 1 Comparison of injury rates of athletes of different genders. FemaleMaleTotalNumber of people investigatedNumber of injuredDamage rateNumber of people investigatedNumber of injuredDamage rateNumber of people investigatedNumber of injuredDamage rate1236652.6%603051%1839652.5As can be seen from Figure3, Latin dancers suffer more injuries than modern dancers in general. The injury rate of female Latin dancers is higher than that of modern dancers, and that of male Latin dancers is also higher than that of modern dancers.Figure 3 Comparison of injury rates of athletes of different dances.So the conclusion is as follows: (1) the probability of injury in Latin dance competition training is higher than that in modern dance. 
(2) The injury rate of women is higher than that of men, whether in Latin dance or in modern dance. This result is closely related to the technical styles of the two dances. Compared with modern dance, Latin dance has more complex and changeable technical movements and a more cheerful, passionate musical rhythm, all of which place higher demands on athletes [19]. In both dances, women's technical movements are richer, since the choreography is mainly designed to showcase the female dancer; the high demands on women's coordination and flexibility make their higher injury rate almost inevitable.

Classified by the nature of injury, 112 Latin dancers were counted, comprising 47 men and 65 women. From Table 2 we find that the rate of skin abrasion caused by Latin dance competition and training is as high as 65.6% for women and 58.6% for men. Latin dance involves varied poses: beyond the basic dance steps there are many stylized modeling actions, such as the man kneeling at the end of the Paso Doble, and repeated training can easily cause skin abrasion or even subcutaneous bleeding.

Table 2: Investigation of the nature of injuries among Latin dancers.

| Injury | Women injured | Women injury rate | Men injured | Men injury rate |
|---|---|---|---|---|
| Skin abrasion | 20 | 65.6% | 15 | 58.6% |
| Muscle strain | 10 | 51.7% | 14 | 54.2% |
| Muscle contusion | 11 | 45.6% | 11 | 43.6% |
| Ligament injury | 21 | 46.6% | 6 | 40.4% |
| Dislocation of joint | 2 | 1.8% | 1 | 4.3% |
| Fracture | 1 | 1.6% | 0 | 0 |

In Latin dance, the probability of muscle strain and muscle contusion is also very high: for women, 51.7% and 45.6%, respectively; for men, as high as 54.2% and 43.6%. The results show that muscle strain and muscle contusion are frequent in both women and men [20], because Latin dance mainly displays the woman's dance posture.
Except in the bullfight dance (Paso Doble), women's dance moves are more difficult and complex than men's. The frequent muscle control and stretching required in competition and training readily cause muscle strain, whereas the probability of joint dislocation and fracture is very low.

#### 3.4.2. Cause Analysis of Sports Injury

Many factors can cause sports injuries to sports dancers, such as: (1) no warm-up, or only perfunctory warm-up, before exercise; (2) poor technical level; (3) poor physical quality; (4) unscientific training methods with excessive exercise volume and intensity; (5) unreasonable and overly long training time; (6) choosing difficult dance movements beyond one's own level; (7) uncoordinated cooperation between male and female partners; (8) unreasonable musical rhythm; (9) declining physical fitness during competition; (10) inability to adjust one's own state before competition; and (11) collisions with other competitors.

From Table 3, the investigation of sports dance athletes finds that the main factors leading to injury are insufficient warm-up preparation, poor physical fitness, unscientific training methods, overly difficult choices of technical action, and poor condition before competition [21]. (1) Neglecting warm-up activities, or warming up insufficiently, is an important cause of sports injuries. Both Latin dance and modern dance demand a high degree of bodily coordination, and Latin dance in particular places very high demands on movement speed and strength. Without systematic preparatory activities, the nervous system does not reach the necessary excitability, and stiff, uncoordinated muscles and joints easily lead to injury.
The survey finds that most sports dancers are dismissive of preparatory activities. Some athletes mistakenly believe that warming up makes them look unskilled and go straight into training unfamiliar routines or difficult movements. Others lack targeted, sport-specific warm-ups and do only basic preparation. Sports dance comprises two categories and ten dance types with different styles, and simple, one-size-fits-all preparation activities simply cannot meet the requirements of this artistic sport. (2) The physical quality of sports dancers is poor. The investigation of athletes at the Institute of Physical Education finds that, although the institute offers a ballet body course, the athletes generally lack both basic and sport-specific physical training, so it is difficult to effectively improve strength, explosiveness, speed, and endurance, which hinders their development in sports dance. As sports dance competition keeps developing toward greater difficulty and beauty, competition is becoming increasingly fierce, and poor physical quality makes it hard to support the technical requirements and competition intensity. Apart from a few athletes whose training institutions hire professional fitness coaches, most athletes have the same problem, or an even more serious one.

Table 3: Factors of sports injury of municipal sports dancers.

| Cause of injury | Number of persons | Proportion |
|---|---|---|
| Insufficient warm-up preparation | 45 | 23% |
| Poor physical fitness | 36 | 19.1% |
| Unscientific training methods | 31 | 16.5% |
| Overly difficult technical actions | 27 | 14.3% |
| Poor preparation before competition | 19 | 11.4% |
| Other | 25 | 12.7% |

## 3.1. Research Object

A questionnaire survey was conducted on professional athletes majoring in sports dance at the Institute of Physical Education from 2019 to 2021 [15].
The athletes were 18-25 years old, and their sports dance training period was 3-5 years.

## 3.2. Research Methods

We conducted face-to-face interviews with experts in aerobics, sports dance, sports injury, sports statistics, and sports art in the Institute of Physical Education; solicited their opinions on the research content, questionnaire, and other aspects; and obtained valuable information.
## 4. Discussion

In competition and performance, athletes must first warm up properly so that the nervous system and the joint muscles become active and excited. They should formulate training programs and plans scientifically and systematically, and carry out sport-specific physical training suited to the characteristics of the event, such as strength, explosiveness, flexibility, and endurance, to better support their specialty.
Athletes should choose technical difficulty reasonably, avoid overreaching, and resolutely follow the principle of step-by-step training. They should also learn to adjust their physical state before a competition or performance: reduce heavy-load training about a week beforehand, avoid prolonged sessions, do some low-intensity adaptive exercises, get familiar with the music rhythm, review the competition routine with their dance partners, and adjust diet and sleep in preparation.

Preparatory activities should consist of freehand exercises, stretching, and basic step drills. These not only improve the flexibility of joints and muscles but also markedly raise the excitability of the nervous system, effectively preventing injuries caused by stiff, uncoordinated joints and muscles or by inattention, and greatly improving training efficiency. Warm-up intensity should be kept low to medium [22]: athletes should feel warm and sweat slightly without becoming tired. Given the characteristics of the sport, about 10 minutes of warm-up in daily training is appropriate; in competition, warm-up should be adapted to the actual situation, the dance types, and the weather.

Training plans should be tailored to athletes of different levels, following the principles of gradual progression and persistence. Annual, monthly, and weekly training plans should be formulated for competitions and performances, so as to avoid last-minute cramming.
In training, a greater training volume does not mean faster technical improvement; technical level is built up cumulatively, and excessive training only causes physical and mental fatigue, reduces enthusiasm, and increases the risk of injury.

Massaging the joints enhances ligament elasticity and increases range of motion, and for damaged joints, ligaments, and tendons it can greatly accelerate recovery. Massage and relaxation can focus on injury-prone areas, such as the soleus, gastrocnemius, and quadriceps femoris of the lower limbs in Latin dance, or on whichever parts feel fatigued; when athletes feel very tired, their muscles and joints need massage and relaxation [23]. Massage can be combined with stretching after competition and training, or done after bathing or before going to bed. During the massage, force should progress from light to heavy, and the athlete's feedback should guide appropriate adjustment of the force and the parts massaged.

## 5. Conclusion

Athletes will inevitably be injured during sports, and identifying injury images helps improve treatment. In this paper, injured parts are identified based on NMR, and the proposed method improves identification efficiency and accuracy. To reduce the severity of injuries, athletes should strictly follow movement standards during training. The strategies for dealing with the risk of acute sports injury are risk control and risk transfer.
Risk control takes two forms: preventive measures before a risk event occurs, and mitigation measures during and after it. The main measure for transferring the risk of acute sports injury is insurance. As an advanced noninvasive, nonradioactive diagnostic method, NMR provides doctors with an effective, highly accurate auxiliary diagnosis, and its examination results are an important basis for arthroscopic examination. Its cost is high, however, and a certain degree of false positives and false negatives remains; but with falling examination costs, the development of MRI technology, and the accumulation of clinical data, NMR will become the first choice for early diagnosis of sports injuries.

In future work, we will compile archives of sports dance athletes' acute injury risk events, study the quantitative probability of such injuries, and establish an acute sports injury risk model, in order to determine the correlation between sports injury and sports performance.

---

*Source: 1016628-2022-07-15.xml*
2022
# The Public Opinion Evolution under Group Interaction in Different Information Features

**Authors:** Jing Wei; Yuguang Jia; Yaozeng Zhang; Hengmin Zhu; Weidong Huang

**Journal:** Complexity (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1016692

---

## Abstract

Before expressing opinions, most people consider the standpoint of their nearby friends to avoid being isolated, which may lead to a herding effect. The words of celebrities in social networks usually attract public attention and affect the opinion evolution of the entire network, producing a similar status quo. In this study, we find that key figures play a guiding role in public opinion while bearing group pressure arising from the amount of information. We therefore build a cost function on opinion changes to study the opinion evolution rules of public figures, based on the spreading scope of information and the information amount. Simulation analysis reveals that the information amount held by agents affects the converging speed of public opinion, while enhancing the ability of key nodes may be no more effective in guiding public opinion.

---

## Body

## 1. Introduction

The expression of individuals' opinions and the spreading of information are the evolutionary driving forces in public opinion networks. The rapid spreading of information allows most agents in a network to quickly evaluate information based on their own judgment, resulting in the development of their respective opinion values. Opinions develop from information interaction and vary across public opinion networks as public opinion evolves. When an agent has sufficient information about an event and has his or her own knowledge of it, that opinion can hardly be influenced by the public environment; such an agent can be regarded as a "stubborn" one.
Undoubtedly, on the Internet, the speeches of agents with many followers are also important. In public opinion networks, however, the amount of information about the event held by the group, that is, the degree of their understanding of the event, leads to varying responses to the opinions proposed by these agents. The spreading scope of information from special agents likewise affects the overall opinion evolution in the network. Additionally, for a specific event, agents' opinions divide into opposition, neutrality, and support, with finer differences in detail: agents tend to have preferences rather than fully supporting or opposing an event.

Opinion, as a carrier, is inseparable from information, and key agents, as one of the sources of information, are bound to affect the development of public opinion. To study the influence of information changes on the opinions of different agents and the influence of information dissemination scope on group opinions, we propose an opinion evolution formula based on a cost function and public opinion pressure. In theory, our work extends the continuous opinion evolution model based on the cost function, allowing us to study the rules governing changes in opinion and information specifically. In practice, our research can provide relevant departments with measures for the supervision and management of public opinion, effectively reducing the consumption of public resources and guiding its development.

The remainder of the paper is organized as follows: Section 2 reviews related literature and compares existing works with ours. In Section 3, using a growth function on information, a pressure function on public opinion, and a cost function, we construct opinion dynamic models for different agents based on the BBV network.
Section 4 compares simulation results under different information environments to further study the influence of information amount and information scope on public opinion evolution. Finally, conclusions and contributions are given in Section 5.

## 2. Literature Review

The development of social group opinions is embedded in the process of social interaction [1], and the spread of numerous opinions is accomplished through interactions between agents in society [2, 3]. Studies of the evolution of group opinions can reflect and explain various complex social phenomena, including the aggregation of group opinions and the spread of rumors, and can also be applied to group decision-making and sociology, as well as to the development of group wisdom. Examples include studying how social media and human behavior jointly influence the spread of infectious diseases through an epidemic model [1], dividing social trust into a continuous range to study how the business reputation of firms can be improved in business networks [2], or introducing game theory, treating public opinion as a continuous interval within [0, 1], discussing the conformity and manipulation behavior of agents in realistic opinion networks, and studying agent voting choices [3–6]. Hence, the study of the spread mechanism of group opinions can clarify various political, economic, and management phenomena, including popularity, the existence of minority opinions, consistency and diversity, and the leading role of the government [7–14]. To date, various studies of the dynamics of opinion evolution have been reported. Previous opinion evolution models have focused on discrete or continuous opinions, which greatly simplify the opinions expressed by agents in a social network, usually using plain interaction rules, and thus describe the most representative agent behavior and the associated interaction characteristics.
Most group opinions in discrete opinion models are classified as pro, con, and neutral, or buy and sell, left and right, usually denoted by 1, −1, and 0. Discrete models include the Ising model [15–18], the Sznajd model [19–22], and the majority decision model [23–27]. Although discrete opinions suit judgments on "right and wrong" issues, these models can hardly reflect the preference degree of agent opinions. In practice, agent opinions differ somewhat from each other, while the opinions held at a specific moment can be regarded as strong and steady. Usually, the value of an agent's opinion is normalized to a continuous closed interval [0, 1], which can reflect more information about the agent and describe the process of gradual change of the agent's opinion, such as the satisfaction evaluation of a product, the degree of confidence in a judgment, or the degree of support for or opposition to an event. These characteristics are well described by continuous opinion evolution models, including the Deffuant model [28–31], the HK (Hegselmann–Krause) model [32–34], and their extensions. A large number of extension studies have been generated along these lines. For example, considering that agents in social networks tend to be influenced by pressure from nearby agents, Dong et al. presented a novel DW model incorporating local-world opinion from individuals' common friends, in which opinion updates depend on the distance between individual opinions and network structure similarity; they analyzed the convergence of the model by simulation experiments [35]. Cheng and Yu proposed a modified HK model involving group pressure and claimed that the pressured agents can always reach a consensus in finite time [36]. Lu et al. reported that external pressure induced by public focus has negligible or weak influences [37].
Ferraioli and Ventre demonstrated that the dynamics in clique social networks always converge to consensus if the social pressure is sufficiently high [38]. Regarding the influence of agent information on the evolution of public opinion, one base agent model has a bounded-confidence mode in which information is introduced and different information-releasing modes are explored [39]. Lan et al. proposed a statistical model for the influence of network rumors on the information amount of public opinion networks [40]. However, these studies did not consider the influences of the different factors simultaneously. The above studies mainly focus on the natural evolution of group opinions, rarely study the effect of guidance from opinion leaders, and seldom consider the influence of the scope of opinion dissemination by opinion leaders on the evolution of group opinions. A comparison of related studies on the impact of group pressure and agent information amount is shown in Table 1.

Table 1: Comparison of related studies of public opinion on the impact of group pressure and agent information amount.

| References | Method | Opinion value | Information amount | Group pressure |
| --- | --- | --- | --- | --- |
| Cheng and Yu [36] | Simulation | Continuous | No | Yes |
| Lu et al. [37] | Empirical analysis | Discrete | No | Yes |
| Ferraioli and Ventre [38] | Mathematical proof | Continuous | No | Yes |
| Zhu and Hu [39] | Empirical analysis, simulation | Continuous | Yes | No |
| Lan et al. [40] | Simulation | Discrete | Yes | No |
| Current study | Simulation | Continuous | Yes | Yes |

## 3. Modeling

With the in-depth study of complex network topology, numerous network models have been proposed to describe abstract social networks. Typical network models include small-world networks [41] and scale-free networks [42–44]. Most previous models did not take the weights between nodes into account. However, on mainstream social platforms such as Weibo or Facebook, key agents, as hub nodes, not only obtain more attention but also have a stronger impact on ordinary agents.
Therefore, our work introduces the BBV network model [45] to describe the social network. In this network model, the degree, weight, and strength of agents follow power-law distributions. We assume that a social network can be abstracted as a BBV network $G=(\nu,\varepsilon,\delta)$, where $\nu=\{1,2,\ldots,k,k+1,\ldots,k+n\}$ is the node set, $\varepsilon$ is the edge set, and $\delta$ is the set of edge weights. Here, $k$ is the number of key agents and $n$ is the number of normal agents. The parameter $w_{ij}$ denotes the weight between node $i$ and a connected node $j$, which measures the relation intensity between agents in our network model. Define $I_i\in[0,1]$ as the information amount of agent $i$; $A_0=(I_1^0,I_2^0,\ldots,I_k^0,I_{k+1}^0,\ldots,I_{k+n}^0)$ represents the information amounts of all agents at the initial time, and $N_0=(I_1^0,I_2^0,\ldots,I_n^0)$ represents the information amounts of the normal agents at the initial time. In addition, to study the influence of information amount and group pressure on public opinion, we introduce some important symbols, as shown in Table 2.

Table 2: Relevant important symbols.

| Symbol | Meaning |
| --- | --- |
| $O_i$ | The opinion of agent $i$ |
| $d_i$ | The degree of agent $i$ |
| $F_i$ | The objective public opinion pressure on agent $i$ |
| $E_i$ | The actual public opinion pressure on agent $i$ |
| KAO | The key agent's opinion value |
| KAI | The information amount of key agents |
| AAI | The maximum information amount of all agents |
| NAI | The maximum information amount of normal agents |

In social networks, all agents deliver their opinions and are influenced by agents nearby. Therefore, we divide agents into key agents, neighbour agents, and other agents in our model. Key agents, such as opinion leaders, are the most important nodes in the BBV network, with the largest degree, weight, and strength. Before expressing their opinions, they consider not only their own ideas but also the public opinion environment. As agents that attract more attention, they are under pressure from public opinion, so they convey information about the event through their opinions more objectively and comprehensively.
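A rough sketch of this setup in Python follows. This is an assumption-laden stand-in, not the authors' code: a plain preferential-attachment generator with uniform random edge weights replaces the full BBV model (whose weight-reinforcement dynamics are omitted), and `build_network`, `O`, `I`, and the choice of the 50 highest-degree nodes as key agents are illustrative names and choices.

```python
import random

def build_network(n_nodes=300, m=3, seed=0):
    """Grow a scale-free graph by preferential attachment (a simple
    stand-in for the BBV model; BBV additionally reinforces edge weights,
    which is omitted here). Returns an adjacency dict and an edge->weight dict."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n_nodes)}
    weights = {}
    targets = list(range(m))   # first new node attaches to the seed nodes
    repeated = []              # node list replicated in proportion to degree
    for new in range(m, n_nodes):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
            weights[frozenset((new, t))] = rng.random()  # weight in (0, 1)
            repeated += [new, t]
        # sampling from `repeated` realizes preferential attachment
        targets = rng.sample(repeated, m)
    return adj, weights

adj, weights = build_network()

# Per-agent state: opinion O_i ~ U(0,1) and information amount I_i ~ U(0,1)
rng = random.Random(1)
O = {i: rng.random() for i in adj}
I = {i: rng.random() for i in adj}

# Key agents: the 50 highest-degree nodes (16.7% of 300, as in Section 4)
key_agents = sorted(adj, key=lambda i: len(adj[i]), reverse=True)[:50]
```

The hub nodes returned in `key_agents` then play the role of the opinion leaders described above, and `weights` supplies the relation intensities $w_{ij}$.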
Neighbour agents are the agents directly connected to the key agents. Because they are close to influential key agents, their opinions are partly influenced by the key agents while they insist on their own views. Other agents are not directly connected to key agents, so they are not sensitive to information; in other words, the information sometimes does not reach them. Notably, the opinion evolution rule of other agents is the same as that of the neighbour agents. Neighbour agents and other agents are collectively called normal agents. To study the relationship between the amount of information, the scope of spread, and the change of group opinion, we consider two modes of opinion spread. First, limited spreading: opinion interaction happens between key agents and neighbour agents only, which causes the change in group opinions. Second, wide spreading: when key agents and neighbour agents finish interacting, other agents are influenced by the connected neighbour agents and change their opinions unidirectionally at the next time step. After this process finishes, those other agents spread the opinion to the next level of other agents and influence their opinion evolution at the following time step, and so on. The opinion spread process is shown in Figure 1.

Figure 1: Information wide spreading process. (The node K represents key agents, the node Nei represents neighbour agents, the node O represents other agents, and the arrows show the direction of opinion flow.)

### 3.1. Growth Function of Information Amount

When each agent first obtains information about a specific event, he/she has a different degree of mastery of the information. Obviously, the closer an agent is to the event, the more complete the information obtained and the larger the information amount.
As the spreading distance of information increases, the degrees of distortion and misinterpretation of the news increase, which means that the information and opinions deviate from the event itself. However, through the interaction of agent opinions and information in public opinion networks, agents' information amounts increase and accumulate over time. We define $I_i\in[0,1]$ as the information amount of agent $i$. For a central agent $i$ and its neighbour agent $j$, $I_i\cup I_j\le 1$ and $I_i\cap I_j\le \min(I_i,I_j)$; that is, the maximum information amount grasped by a single agent in the network is 1, the overlapping information amount between two agents is at most the smaller of the two, and the combined information amount of the two agents, with the common part counted once, does not exceed 1. The growth rule of an agent's information amount is defined as follows; it applies to the key agents and to the neighbour agents of the key agents. When information is obtained unilaterally, the other neighbouring nodes are not taken into account and only a single information source node is considered. Here, $i$ is the central node, $j$ is the neighbour node, and $k$ is the information growth coefficient. If the affected agent is the key agent $i$,

$$I_i^{t+1}=I_i^t\,(1+k\tau),\qquad \begin{cases}\tau\in\Bigl[0,\bigl|I_i^t-\tfrac{1}{d_i}\sum_{j\in\alpha_i}I_j\bigr|\Bigr], & I_i^t\ge \tfrac{1}{d_i}\sum_{j\in\alpha_i}I_j,\\[4pt] \tau=\bigl|I_i^t-\tfrac{1}{d_i}\sum_{j\in\alpha_i}I_j\bigr|, & I_i^t< \tfrac{1}{d_i}\sum_{j\in\alpha_i}I_j.\end{cases}\tag{1}$$

When the affected node is the neighbour agent $j$,

$$I_j^{t+1}=I_j^t\,(1+k\tau),\qquad \begin{cases}\tau\in\bigl[0,|I_j^t-I_i|\bigr], & I_j^t\ge I_i,\\ \tau=|I_j^t-I_i|, & I_j^t< I_i.\end{cases}\tag{2}$$

The parameter $\tau$ in the above equations is the information difference, which reflects the agent's judgment of the gap between the information amount held by its neighbours and its own. For equation (1), $i$ is the central node, and we compare the average information amount of all neighbouring nodes of $i$ with $i$'s own.
If the information amount of the central node is greater than the neighbour average, the difference $\tau$ is drawn randomly from a limited range: the minimum amount gained by agent $i$ is 0 and the maximum is the difference value; meanwhile, the coefficient $k$ takes its smaller value. Conversely, if the information amount of node $i$ is less than the neighbour average, the information difference $\tau$ equals the gap between the two, because node $i$ believes that more useful information is available from outside, and the value of $k$ is larger. Equation (2) treats the neighbour node $j$, influenced by the information from the central node $i$, in the same way.

### 3.2. Public Opinion Pressure Function

Human beings are social, and the motivation behind behaviour is influenced by the people around. Before expressing their opinions, agents take into account the attitudes of other familiar neighbours around them. Under the influence of friends, the opinions of agents tend toward the mainstream. The pressure an agent feels is inversely proportional to the information amount it holds; that is, the more information agents have, the less likely they are to be influenced by the other agents around them. In addition, they also pay attention to their own opinions, whether being influenced by others or delivering opinions to the outside world. Define the opinion pressure output by all neighbours of agent $i$ as $\gamma_i\in[0,+\infty)$, which is based on the differences in opinion and the strengths of the relationships between agent $i$ and its neighbours. Considering the influence of the information amount the agent itself holds, we define the objective opinion pressure on agent $i$ as $F_i\in[0,+\infty)$.
The factual opinion pressure on agent $i$ is $E_i\in(0,1)$ and is influenced by $F_i$ and the parameter $a$ ($a$ is the "stress level," reflecting how much pressure an agent can absorb before he or she becomes insensitive to further growth of outside pressure). In addition, because of diminishing marginal utility in human psychology, the logistic function is suitable for describing this process; this form has been proven and widely used [46–50]. In our work, as the pressure exerted by the surrounding nodes on node $i$ increases, the factual opinion pressure $E_i$ grows quickly at first, then more slowly, and finally stabilizes. Therefore, the following three relationships hold:

$$\gamma_i=\sum_{j\in\alpha_i}w_{ij}\,|O_j-O_i|,\qquad F_i=\frac{\gamma_i}{I_i},\qquad E_i=\frac{1}{1+e^{\,a-F_i}}.\tag{3}$$

### 3.3. Cost Function of Opinion Change and Optimal Strategy Formula

When a key agent is affected by its neighbour nodes, it weighs the influence based on the relation intensity with the surrounding agents and the overall difference in opinion values, combined with the information amount it holds, to change its own opinion value. When a neighbour node receives the opinion value of the central agent, only the difference in information amount, the difference in opinion value, and the relation intensity are considered, because the information is obtained unilaterally. The interaction between a single agent and its surrounding neighbours is divided into two stages: first the central agent is affected, changes its opinion, and expresses it; then the opinions of the neighbours change. Referring to the opinion evolution rule based on a cost function proposed by Li and Zhu [51], we construct the cost function under external pressure from the neighbours and then derive the best-strategy formula describing the opinion change. The decision cost function for node $i$ to change its opinion after being influenced by surrounding nodes is shown in equation (4).
The two terms on the right-hand side represent the cost of changing one's own opinion and the cost of external pressure, respectively:

$$J_i(O_i,O_{\alpha_i})=\frac{1}{2}I_i\bigl(O_i-O_i^t\bigr)^2+\frac{1}{2}E_i\sum_{j\in\alpha_i}\bigl(O_i-O_j\bigr)^2.\tag{4}$$

Differentiating with respect to $O_i$ gives

$$\frac{\partial J_i}{\partial O_i}=I_i\bigl(O_i-O_i^t\bigr)+E_i\sum_{j\in\alpha_i}\bigl(O_i-O_j\bigr).\tag{5}$$

The cost minimization condition is $\partial J_i/\partial O_i=0$. Rearranging terms,

$$I_iO_i+E_id_iO_i=I_iO_i^t+E_i\sum_{j\in\alpha_i}O_j,\qquad O_i^{t+1}=\frac{I_i}{I_i+E_id_i}\,O_i^t+\frac{E_i}{I_i+E_id_i}\sum_{j\in\alpha_i}O_j.\tag{6}$$

For the neighbour agent $j$ of agent $i$, the cost of an opinion change is defined analogously, with $O_i$ known and $O_j$ unknown. In this case, because agent $j$ is influenced by the central agent $i$ alone, only the relationship and attribution between the two are considered. In equation (7), the first term on the right-hand side is the cost of agent $j$ changing its opinion relative to the previous moment, and the second term is the cost arising from the influence of agent $i$ on the opinion of agent $j$. The parameter $w_{ij}$ represents the strength of the relationship between the two, and the magnitude of $|I_i-I_j|$ represents the difference in their information amounts:

$$J_j(O_j,O_i)=\frac{1}{2}I_j\bigl(O_j-O_j^t\bigr)^2+\frac{1}{2}w_{ij}|I_i-I_j|\bigl(O_i-O_j\bigr)^2.\tag{7}$$

Similarly, taking the derivative with respect to $O_j$,

$$\frac{\partial J_j}{\partial O_j}=I_j\bigl(O_j-O_j^t\bigr)-w_{ij}|I_i-I_j|\bigl(O_i-O_j\bigr).\tag{8}$$

The cost minimization condition is $\partial J_j/\partial O_j=0$. Rearranging terms,

$$O_j^{t+1}=\frac{I_j}{w_{ij}|I_i-I_j|+I_j}\,O_j^t+\frac{w_{ij}|I_i-I_j|}{w_{ij}|I_i-I_j|+I_j}\,O_i.\tag{9}$$

## 4. Simulation and Discussion

A BBV network with 300 nodes is established. To make the simulation results comparable, we set the maximum weight to 1. To discuss the influence of different information features on public opinion, several situations are set up. We also introduce the wide spreading mechanism to explore whether the opinions of key agents and the spreading of information impact opinion evolution differently. Finally, we simulate the public opinion process under different situations as the information spreading scope and the information amount change. In the simulation, when the information amount of agent $i$ is greater than or equal to that of agent $j$, or the neighbour-average information amount, the information growth coefficient $k$ in formulas (1) and (2) is 0.001; otherwise it is 0.01. The coefficient $a$ in formula (3) is 5.
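The update rules of Section 3 can be sketched in code. This is a minimal illustration under stated assumptions: it follows equations (1)–(3), (6), and (9) as reconstructed here, $\gamma_i$ is taken as a sum of weighted absolute opinion differences, and the function names and the tiny star network are mine, not the authors'.

```python
import math
import random

def info_update(I_self, I_ref, rng, k_small=0.001, k_large=0.01):
    """Eqs (1)-(2): I(t+1) = I(t) * (1 + k*tau). I_ref is the neighbour-average
    information (key agent) or the central agent's amount (neighbour agent)."""
    gap = abs(I_self - I_ref)
    if I_self >= I_ref:
        tau, k = rng.uniform(0.0, gap), k_small   # already well informed
    else:
        tau, k = gap, k_large                     # much to learn from outside
    return min(1.0, I_self * (1.0 + k * tau))     # information capped at 1

def felt_pressure(o_i, I_i, nbr_opinions, nbr_weights, a=5.0):
    """Eq. (3): gamma_i = sum_j w_ij |O_j - O_i|, F_i = gamma_i / I_i,
    E_i = 1 / (1 + exp(a - F_i)) in (0, 1)."""
    gamma = sum(w * abs(o_j - o_i) for w, o_j in zip(nbr_weights, nbr_opinions))
    return 1.0 / (1.0 + math.exp(a - gamma / I_i))

def key_agent_step(o_i, I_i, E_i, nbr_opinions):
    """Eq. (6): weighted mix of own previous opinion and neighbours' opinions."""
    d = len(nbr_opinions)
    return (I_i * o_i + E_i * sum(nbr_opinions)) / (I_i + E_i * d)

def neighbour_step(o_j, I_j, o_i, I_i, w_ij):
    """Eq. (9): pull towards the key agent, scaled by tie strength and
    information gap."""
    c = w_ij * abs(I_i - I_j)
    return (I_j * o_j + c * o_i) / (I_j + c)

# One step on a star: a key agent (opinion 1, full information) and three
# poorly informed neighbours with (opinion, information) pairs.
rng = random.Random(0)
key_o, key_I = 1.0, 1.0
nbrs = [(0.2, 0.05), (0.5, 0.08), (0.3, 0.1)]
tie_w = [1.0, 1.0, 1.0]
E = felt_pressure(key_o, key_I, [o for o, _ in nbrs], tie_w)
new_key_o = key_agent_step(key_o, key_I, E, [o for o, _ in nbrs])
new_nbrs = [neighbour_step(o, I, key_o, key_I, w)
            for (o, I), w in zip(nbrs, tie_w)]
I_after = info_update(nbrs[0][1], key_I, rng)
```

With these parameters the well-informed key agent barely moves, while the poorly informed neighbours are pulled strongly toward its opinion, matching the guiding effect discussed below.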
To observe the evolution mechanism of group opinion and account for the scale-free characteristics of the network, we arrange all opinion values at each time step in descending order; the agent label on the horizontal axis therefore represents only a rank, not the identity of any particular agent, and does not track the change of any agent's opinion over time. Moreover, each time step iterates 60 times. The above describes the free evolution of the BBV network with 300 nodes. Figure 2 shows the simulation results without any changes, where the opinion values of all nodes are uniformly distributed in [0, 1] and the information amounts are uniformly distributed in [0, 1]. In the figure, the closer to red, the closer the opinion value is to 1; the closer to blue, the closer it is to 0; and the closer to green, the closer it is to 0.5. It can be observed in Figure 2(a) that the opinion values of all agents in the network converge to 0.5 over time. After 100 time steps, the final average opinion value of 0.4981 is close to 0.5. In Figure 2(b), the maximum information amount of all nodes is set to 0.1. Obviously, compared with Figure 2(a), the opinions of all agents in the network still incline toward 0.5, but the convergence is faster. These two experiments simulate the evolution of public opinion under different information amounts grasped by agents at the beginning of the event. Apparently, in the initial period of a specific event, the less information the agents have, the more easily opinions reach agreement. Imagine that in public opinion networks each agent has limited mastery of information; the agent will easily agree with the opinions of other agents under the same level of public opinion pressure from the neighbour nodes, but not vice versa.
If the truth of the event were deliberately concealed and the public did not understand the event, the public would speculate extensively and arbitrarily express their positions on it. Even with insufficient evidence and insufficient information, the public tends to believe the opinions of surrounding agents, whether they were originally for or against them. After fully understanding the opinions of both sides, agents usually settle into neutrality, and in the end the group quickly reaches agreement. In a transparent society, however, instant information publication allows each agent to understand the full picture of the event immediately. Agents then have enough information to support their own opinions and are not easily affected by other agents. As a result, the time for all opinions to converge and reach consensus is delayed.

Figure 2: Comparison of the opinion distribution of the network over time. (a) $A_0\sim U(0,1)$; (b) $A_0\sim U(0,0.1)$.

Figure 3: Comparison of the guiding effect of key agents in different situations under limited spreading; the target opinion is 1. (a) Case 1. (b) Case 2. (c) Case 3. (d) Case 4.

In the simulation, the external public opinion pressure experienced by each agent is also recorded. It is found that the key agents are the most evidently exposed to public opinion pressure from neighbour agents, while normal agents are not. When key agents hold insufficient information, the greater the actual public opinion pressure an agent feels, the more easily its opinion value changes. Over time, the opinion values of the key agents tend toward the average level of the group. At the same time, both the felt and the actual public opinion pressure decrease, and the key agents eventually converge with the group.

### 4.1. Simulation of the Guidance Effect of Key Agents

To explore the influence of key agents on public opinion, different situations are set up according to whether the information can be widely spread and whether the information amount is sufficient, and the influence of these situations on the evolution of network opinions is discussed. For comparability of the simulation results, we set an appropriate number of iterations and 16.7% (50 nodes) key agents in the network. The set of cases to simulate is shown in Table 3.

Table 3: The set of cases to simulate.

| Scenario | Limited spreading | Wide spreading |
| --- | --- | --- |
| KAO = 1, AAI = 1, $A_0\sim U(0,1)$ | Case 1 | Case A |
| KAO = 1, AAI = 0.1, $A_0\sim U(0,0.1)$ | Case 2 | Case B |
| KAO = 1, KAI = 1, NAI = 1, $N_0\sim U(0,1)$ | Case 3 | Case C |
| KAO = 1, KAI = 1, NAI = 0.1, $N_0\sim U(0,0.1)$ | Case 4 | Case D |

Note: limited spreading indicates that key agents cannot spread their opinions beyond their direct neighbours at one time; wide spreading indicates that key agents can propagate their opinions to the public at once, and the wide spreading level here is 1.

#### 4.1.1. Simulation Results of Different Cases under Limited Spreading

First, we restrict the external spread of the opinions of key agents; that is, key agents exchange opinions and information only with the neighbour agents directly connected to them. In Figure 3(a), the final average opinion value over the same period is 0.56711. Obviously, public opinion across the entire network is affected by the opinions of these 50 key agents, and the group opinion trends towards 1, though not markedly. Figure 3(b) reflects the influence of key agents on group opinion when information is insufficient across the entire network. The final average opinion value of the group over the same period is 0.61211. Compared with Figure 3(a), the impact of the key agents is relatively more obvious.
Noteworthily, under this situation almost all agents' opinions end up close to the final average group opinion, which is attributable to the group's insufficient information. A slight stratification phenomenon appears in Figure 3(c). Here the pressure on the group is relatively smaller because the agents have already grasped a certain amount of information, so it is difficult to reach consensus. In this case, the final average group opinion is greater than that in Figure 3(a) but smaller than that in Figure 3(b). This case describes whether normal agents absorb the opinions of key agents when the group has some knowledge of the event and the key agents have sufficient information. Apparently, compared with the situation in which no agent knows the event very well, the impact of the key agents is weakened. Under limited spreading of the key agents' information, the final average opinion in Figure 3(d) is 0.86014, larger than in any previous situation. When key agents have sufficient information, they guide more effectively those agents who know little about the event: these normal agents are under more public opinion pressure, while the key agents are under relatively less. When key agents have sufficient information and the others know little about an event, the group opinion can be easily guided. It is worth noting that information can be objective or subjective. We do not rule out the case in which key agents fabricate a logical "complete story," which is extremely dangerous for a sensitive public: when the group cannot fully understand the event, a seemingly perfect and flawless "story" is easily accepted by a group that knows almost nothing. In this case, the standpoints conveyed by the key agents are more likely to be recognized by the group, which obviously has a negative impact on social stability.

#### 4.1.2. Simulation Results of Different Cases under Wide Spreading

In this simulation, we allow the external wide spreading of the key agents' information; that is, the neighbours of a key agent spread the opinion value to the other agents connected to them. Of course, the spread opinion value is not the key agent's original opinion but is slightly changed by the passing nodes as it is spread again and again. The degree of external information spreading for all key agents is 1; that is, a single spreading link passes through at most one node.

In Figure 4(a), a distinct opinion stratification phenomenon finally appears. Obviously, the wide spreading of the key agents' information has a guiding effect to some extent, but some agents still do not accept the mainstream opinions. Additionally, the wide spreading of the key agents' information quickly gives some agents a certain understanding of the event information, so the actual public opinion pressure is reduced. As a result, the group opinion becomes difficult to unify.

Figure 4: Comparison of the guiding effect of key agents in different situations under wide spreading; the target opinion is 1. (a) Case A. (b) Case B. (c) Case C. (d) Case D.

Figure 4(b) shows the results of setting the maximum information amount of all agents in the network to 0.1, differing from the parameter conditions of Figure 4(a). Unexpectedly, there are almost no agents with relatively large differences in opinion in the network, and almost all nodes reach consensus. However, in this case, the average opinion value of the group is 0.64849, slightly lower than 0.65708. This can be explained by the fact that under insufficient information the actual pressure felt by agents is relatively larger, which makes them likely to change their opinions, while normal agents hardly trust key agents without sufficient evidence.
Therefore, although all agents are affected by the key agents, the effect is slightly weaker than in Figure 3(b). In this setting, wide spreading of information is even less effective than limited spreading.

In Figure 4(c), the final average opinion value of the group is higher, but the stratification phenomenon persists because some agents disagree with the mainstream opinions. When the group has a certain understanding of the information, key agents express opinions backed by sufficient information, enabling agents who know little about the event to obtain more information, and the wide spreading of opinions accelerates this process. As before, the information amount affects the actual public opinion pressure and determines whether opinions change easily.

In Figure 4(d), the group shows the highest acceptance of the key agents' opinion. Although a small number of agents hold opinions different from the mainstream in the early stage, they disappear quickly. Under sufficient information and wide spreading, normal agents with a maximum information amount of 0.1 and randomly distributed opinion values accept the opinions of the key agents extremely easily. On careful observation, an unstable, chaotic phase appears at the beginning, showing that agents hesitate when facing different opinions from each side early on. This is related to the limited information an agent holds in the early stage and to the actual public opinion pressure.

#### 4.1.3. Effectiveness Comparison

To reduce the influence of randomness on the conclusions, we simulate each case 20 times and compute the final average opinion value of the group, as shown in Table 4. A radar chart is also drawn for easy comparison.

Table 4: Results of key agents guiding the public opinion under different scenes.

| Parameter setting | Limited spreading | Wide spreading |
| --- | --- | --- |
| KAO = 1, AAI = 1, A0 ∼ U(0, 1) | 0.5740 (Case 1) | 0.6433 (Case A) |
| KAO = 1, AAI = 0.1, A0 ∼ U(0, 0.1) | 0.6175 (Case 2) | 0.6522 (Case B) |
| KAO = 1, KAI = 1, NAI = 1, N0 ∼ U(0, 1) | 0.6374 (Case 3) | 0.7692 (Case C) |
| KAO = 1, KAI = 1, NAI = 0.1, N0 ∼ U(0, 0.1) | 0.8687 (Case 4) | 0.8996 (Case D) |

Note: all values correspond to the cases in Table 3.

In general, wide spreading of the key agents' opinion value has a better effect than limited spreading. In particular, when the opinion value of a key agent is 1, its information amount is 1 with wide spreading, and the maximum information amount of the other agents is 0.1, the group opinion ends up closest to the key agents' original opinion. The difference between the effects of wide and limited spreading of the key agents' opinions is greater when the group's information amount is more adequate (Case 1(A) and Case 3(C)). To analyze the results in Figure 5, we study the influence of the group's information amount, the information spreading scope, and the initial information amount under the different cases.

Figure 5: Comparison of the guidance effects of key agents in different cases; the radar chart shows the difference between the results of limited and wide spreading.

### 4.2. The Average Information Amount on the Network and Public Opinion Pressure under Different Circumstances

Opinion evolution is a process accompanied by changes in information amount. Here, the average information amount of the group in each case is compared. Notably, the group information amount and the opinion value are not directly related: after the group opinion stabilizes, the information amount continues to grow. Therefore, the first 20 time steps in the figure are equivalent to 100 time steps in the previous cases.

Figure 6(a) clearly shows how the information amount changes with time in the different situations. The network average information amount in Case 1, Case 3, and Case 4 almost reaches 1 within the same period, while in Case 2 it does not.
In Case 1 and Case 3, the information amount of the network group is relatively sufficient, so the group information amount quickly approaches the maximum. The initial growth of the information amount in Case 2 is slow; it then increases rapidly after reaching a certain level. In Case 4, the information amount of the key agents plays an important role: although the information amount of most agents in the network is insufficient and their maximum information amount is limited to 0.1, the key agents have a sufficient information amount. Because key agents have high degree, that is, they are connected to numerous nodes, the external spread of information from the key agents is very effective, and the information amount of the group can still reach the maximum in the same period.

Figure 6: Comparison of the information amount in different scenes; over the long term, it shows the average information amount with time for the previous 8 cases.

Figure 6(b) exhibits the results under wide spreading of information. In this case, the average information amount of the network group in Case A, Case C, and Case D reaches 1 within a short period of time. Case B grows slightly for a short period and then almost stagnates. Compared with the limited-spreading cases in Figure 6(a), Cases A, C, and D in Figure 6(b) reach the maximum faster: wide spreading strongly accelerates the information acquisition of the other nodes. However, Case B does not follow this rule and shows long-term stagnation, a huge difference from Case 2 in Figure 6(a). This can be explained as follows: when the information of the key agents is insufficient but widely spread, the normal agents repeatedly receive this small amount of information and cannot understand the event fully, which results in long-term stagnation of the group's average information amount.
In this case, it can be said that the key agents hinder the information acquisition of the other agents to a certain extent. Case 2 in Figure 6(a) restricts information spreading, but it does not prevent each agent in the network from obtaining different information from various aspects. Therefore, after the information amount has accumulated to a certain extent, it explodes rather than remaining limited by the key agents' information.

Figure 7 reflects how the average public opinion pressure on the group changes with time, where E is the actual public opinion pressure the agents feel and F is the objective public opinion pressure they receive. Clearly, the convergence of opinions makes the external pressure on agents decrease rapidly. The average public opinion pressure suffered by the group is smallest in Case 3 of Figure 7(a), followed by Case 1 and Case 4; it is not difficult to see that the smaller the group's information amount, the greater the public opinion pressure. This means that sufficient information from key agents can reduce the public opinion pressure on the group to a certain extent, as verified by Case 3 and Case 4. Comparing Figures 7(a) and 7(b), the wide spreading of the key agents' information promotes convergence of the group and drives E towards 0 faster.

Figure 7: Comparison of the public opinion pressure. (a, b) The average F of different cases. (c, d) The average E of different cases. (e) The F of each agent; the blue stems show the initial F and the red stems the final F after one round of opinion evolution in Case 1. (f) The E of each agent; the blue stems show the initial E and the red stems the final E after one round of opinion evolution in Case 1.
Figures 7(e) and 7(f) compare the objective and the actual public opinion pressure on each agent at the initial time and after a round of opinion evolution. Agents with higher degrees appear more likely to suffer objective public opinion pressure. This is related to the uniform distribution of opinions among neighbours, and the large weight further amplifies it: even if the difference in opinion is small, the public opinion pressure increases under the influence of the weight. The sign of the objective public opinion pressure reflects the direction of change of the agent's opinion value: if the pressure is positive, the agent's opinion value moves towards 1, and conversely if negative. In addition, the pressure an agent feels is positively correlated with the objective pressure (compare Figures 7(a) and 7(c), and Figures 7(b) and 7(d)). After opinion evolution, the group opinion pressure is about 0, indicating that the final group opinion value and information amount have reached a steady state and each agent is minimally affected by its neighbours' opinions.

### 4.3. Guiding Effectiveness of Changing Spreading Scope and Group Maximum Information Amount

As shown in Figure 8, for Strategy 1 the guidance becomes more effective as the spreading scope changes from 0 to 1. As the spreading scope continues to increase, however, the guiding effect does not keep improving but oscillates. In Strategy 2, the guiding effect does not change significantly as the spreading scope of the key agents' information increases; from limited spreading to a spreading scope of 1, it even becomes slightly worse. This occurs because insufficient information makes the group in the network easily affected by the opinions of the surrounding nodes.
Under wide spreading with an insufficient information amount, the group readily accepts the opinions of key agents and of other agents alike, so the spreading effect of the key agents is not dramatic. In Strategy 1, where the information amount is relatively sufficient, opinions do not change easily under limited spreading because each agent already holds a certain amount of information; if the key agents' information is widely spread, more agents whose information amount has not reached the threshold can be made to change their opinions. Taking the scale-free behaviour of the BBV network into account, widely spread information will be spread repeatedly. In other words, key agents with large degree can affect most of the agents within a short spreading scope, and at the same time the key agents' opinions quickly feed back from those agents because the spreading paths are short. Therefore, continually expanding the wide spreading hardly improves the guiding effect of the key agents in the network. Here we can consider the cost for key agents to spread opinion information: there is clearly no need to expand the spreading scope blindly; it suffices to spread the key agents' information over a certain appropriate range. Relevant departments can thus save the cost of guiding the public to comment on or forward information. (For example, in live-streaming sales it is better to have more people watch the stream directly than to have viewers relay product descriptions to their friends, which is more conducive to increasing sales.) The guiding effects of Strategy 1 and Strategy 2 in Figure 8 are similar at first, but as the spreading scope expands, Strategy 1 gradually becomes better than Strategy 2 overall. This is again caused by the difference in the agents' information amount.
An agent with an insufficient information amount is impacted by its neighbour agents and finds it difficult to maintain its original opinion, and vice versa.

Figure 8: The distribution of the average opinion resulting from the guidance of key agents. The spreading scope represents how far key agents can spread their opinion at one time. Strategy 1: KAO = 1, AAI = 1; Strategy 2: KAO = 1, AAI = 0.1; Strategy 3: KAO = 1, KAI = 1, NAI = 1; Strategy 4: KAO = 1, KAI = 1, NAI = 0.1.

For Strategy 3 in Figure 8, at a wide-spreading scope of 0 the effect of limited spreading is clearly inferior to that obtained when the wide-spreading scope is 1 and continues to expand. In Strategy 3, allowing the key agents' information and opinions to be widely spread while the initial information amount is relatively insufficient lets the evolution affect the public opinion network effectively from the beginning. This scenario is more effective than Strategies 1 and 2 because the key agents have enough information to convince their surrounding neighbours. Strategy 4 is the scenario where the opinion value of the key agents is 1, their information amount is 1, and the maximum information amount of the other nodes is 0.1. Here, however, whether spreading is limited or wide, the final guiding effect is almost the same. Comparing Figures 3(b) and 4(b) reveals a notable feature: the final opinion values of all nodes in Figure 3(b) differ only slightly, while the differences in Figure 4(b) are relatively large and a less obvious stratification phenomenon appears.
The intention of wide spreading is to affect as many agents as possible in a short period of time, but after some agents accept the opinions of the key agents, the overall information amount in the network increases and agents actually feel less public opinion pressure; that is, they become stubborn and it is difficult to change their opinion values. Limited spreading is a slow process: key agents affect a small number of agents, and these agents then affect others shortly afterwards. Compared with wide spreading, this process is more stable and the opinion values converge more easily. Additionally, in contrast to key agents with an insufficient information amount, key agents with a sufficient information amount have a more stable guiding effect on the public opinion network.

An increase in the information amount means that the external public opinion pressure an agent feels is relatively reduced. Figure 9 compares the effectiveness of the different strategies under different group information amounts. Overall, the differences between the strategies are fairly obvious. For Strategy A, as the initial information amount of the group increases, the guiding effect of the key agents gradually deteriorates, with light oscillation and instability. For Strategy B, the guiding effect declines steadily as the initial group information amount increases. For Strategy C, as the initial information amount of the group increases, the public opinion pressure felt by the agents is reduced, and the guiding effect does not change significantly but oscillates at the same level. For Strategy D, the influence of the key agents decreases slowly as the group information amount increases.

Figure 9: The distribution of the average opinion resulting from the guidance of key agents. Strategy A: KAO = 1, A0 ∼ U(0, Info), limited spreading.
Strategy B: KAO = 1, KAI = 1, N0 ∼ U(0, Info), limited spreading. Strategy C: KAO = 1, A0 ∼ U(0, Info), wide spreading. Strategy D: KAO = 1, KAI = 1, N0 ∼ U(0, Info), wide spreading.

Strategies C and D give the guiding effectiveness of the key agents under varying information amounts with wide spreading, while Strategies A and B, under limited spreading, are comparatively inferior. Between Strategies A and C, only when the information amount is extremely small is the guiding effect under wide spreading inferior to that under limited spreading. When the group information amount is quite small and the key agents' information is also very small, that is, the maximum information amount is 0.1, Strategy A performs better than Strategy C. When the maximum information amount reaches 1, the key agents of Strategy B and Strategy C have similar guiding effects. At that point, Strategy B gives the result where the opinion value of the key agents is 1, their information amount is 1, and the maximum information amount of the other nodes is 1 with limited spreading (Case 3), and the final value of Strategy C is the result where the key agents' opinion is 1 and the maximum information amount of all nodes is 1 with wide spreading (Case A). Comparing Figures 3(a) and 3(c), the final average opinion values of the two are indeed similar. The value of Figure 3(c) is slightly higher than that of Figure 3(a), which is also in line with Figure 9, where Strategy C ends slightly higher than Strategy B. The stable behaviour of Strategy C arises because, when the key agents have the same information amount as the other nodes, the guiding effect of the key agents changes little no matter how the group information level varies, even if wide spreading is introduced.
The decline in Strategy A occurs because, under limited spreading, the opinion value of the key agents becomes less and less convincing and harder for the group to accept as the group's information amount grows. Strategies B and D follow the same rule. Based on the above analysis of the different strategies under varying information amounts, different strategies for guiding opinions can be chosen according to the understanding of the group and the cost involved.
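The spreading-scope mechanism discussed above, where a key agent's opinion is relayed outward hop by hop, is slightly perturbed at each relay, and stops after a given scope, can be sketched as a bounded breadth-first broadcast. This is a minimal sketch under my own assumptions: the function name, the uniform-noise perturbation, and the adjacency-dict representation are not the paper's exact rule.

```python
import random
from collections import deque

def wide_spread(adj, key_id, opinion, scope, noise=0.05, seed=1):
    """Relay a key agent's opinion outward up to `scope` extra hops.

    adj maps each node to its neighbour list. scope = 0 reproduces
    limited spreading (direct neighbours only); scope = 1 lets each
    neighbour pass the value through one further node, matching the
    wide-spreading level of 1. The relayed value drifts slightly at
    every hop, modelling the opinion being changed by passed nodes.
    """
    rng = random.Random(seed)
    received = {key_id: opinion}
    frontier = deque([(key_id, opinion, 0)])
    while frontier:
        node, value, depth = frontier.popleft()
        if depth > scope:          # nodes this deep do not relay further
            continue
        for nb in adj[node]:
            if nb in received:
                continue
            relayed = min(1.0, max(0.0, value + rng.uniform(-noise, noise)))
            received[nb] = relayed
            frontier.append((nb, relayed, depth + 1))
    return received
```

On a high-degree key agent this reaches most of the network even at a small scope, which is one way to read the observation that expanding the scope beyond a certain range yields no further guiding benefit.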
Obviously, the public opinion on entire networks is affected by the opinions of these 50 key agents where the group opinion trend is towards 1, but not evident. Figure 3(b) reflects the influence of key agents on the group opinion under information is insufficient in the entire networks. The final average opinion value of the group in the same period is 0.61211. Compared with Figure 3(a), the impact of key agents is relatively more obvious. Note worthily, the opinions of almost all agents here are finally close to the final average group opinion under this situation, which attribute to insufficient information amount on group. And a slight stratification phenomenon appears in Figure 3(c). The pressure on the group is relatively smaller because the agents have grasped a certain information amount at the same time, so it is difficult to reach consensus. In this case, the final average group opinion is greater than that in Figure 3(a) but smaller than that in Figure 3(b). The case describes whether the normal agents absorb the opinion on key agents, when the group has some cognition for the event and the key agents have sufficient information. Apparently, compared to the situation where all agents do not know the event very well, the impact on key agents is weaken. Under limited spreading on information of key agents, the average final opinion is 0.86014 in Figure 3(d) which is larger than that in any previous situation. When key agents have sufficient information, agents who do not know much about event information play a more effective role of guiding. These normal agents are under more public opinion pressure, while key agents are under relatively less public opinion pressure. When key agents have sufficient information and the other knows little about an event, the group opinion can be easily guided. It is worth noting that information is divided into objective and subjective. 
We do not rule out the case that the key agents fabricate a logical “complete story,” which is extremely dangerous for some sensitive public. Because when the group is unable to fully understand the event, a seemingly perfect and flawless “story” is easily accepted by the group that knows almost nothing. In this case, the stand points conveyed by key agents are more likely to be recognized by the group, which obviously has a certain negative impact on social stability. ### 4.1.2. Simulation Results of Different Cases under Wide Spreading In this simulation, we set external wide spreading on information of key agents, that is, the neighbour of key agent will spread the opinion value to other agents who are connected with them. Of course, the spread opinion value is not the original opinion by the key agents but will be slightly changed by the passed nodes after being spread again and again. And the degree of external spreading information on all key agents is 1; that is, a single spread link will pass through 1 node atmost.In Figure4(a), a distinct opinion stratification phenomenon finally appears. Obviously, the wide spreading of information on key agents has a guiding effect to some extent, but there are still some agents who do not recognize mainstream opinions. Additionally, the wide spreading of information on key agents will quickly make some agents have a certain degree of understanding of event information, so that the actual public opinion pressure will be reduced. This causes that the group opinion will become difficult to be unified.Figure 4 Comparison of guiding effect of key agents in different situation under wide spreading, target opinion is 1. (a) Case A. (b) Case B. (c) Case C. (d) Case D. (a)(b)(c)(d)Figure4(b) shows the results of setting the maximum information amount of all agents in the network to 0.1 differing to the parameter conditions of Figure 4(a). 
It is unexpected that there are almost no agents with relatively large differences in opinions in the network and almost all nodes reach consensus. However, in this case, the average opinion value of the group is 0.64849, slightly lower than 0.65708. The reason for this situation can be explained by the fact that under insufficient information the actual pressure felt by agents is relatively larger, which is likely to change their opinions and normal agents hardly trust key agents without sufficient evidence. Therefore, although all agents are affected by key agents, the effect is slightly weaker than that in Figure 3(b). At this time, the wide spreading on information is even less effective than limited spreading.In Figure4(c), the final average opinion value of the group is higher but there still is the stratification phenomenon because some agents disagree with the main stream opinions. When the group has a certain understanding for information, key agents express opinions with sufficient information, which enable some agents who do not know enough about the event to obtain more information. And the wide spreading of opinions accelerates this process. As above, the information amount can affect the actual pressure on public opinion and determine whether the opinions are changed easily or not.In Figure4(d), the group has the higher acceptability for the opinion on key agents. Although there are a small number of agents who have opinions different from the mainstream opinion in the early stage, they still disappear quickly. Under sufficient information and wide spreading on information, normal agents with maximum information amount of 0.1 and random distribution of the opinion value are extremely easy to accept the opinions of key agents. After careful observation, it can be found that there is an unstable chaotic phenomenon in the initial part, which shows that the agent is hesitant when face different opinions from each side in the early stage. 
It is related to the limited information the agent has in the early stage and the actual pressure on public opinions. ### 4.1.3. Effectiveness Comparison To reduce the influence of randomness on the conclusion, we simulate each case 20 times and calculate the final average opinion value on the group, as shown in Table4. Also, a radar chart is drawn for easy comparison.Table 4 The results of cases which key agents guiding the public opinion under different scenes. CasesLimited spreadingWide spreadingKAO = 1, AAI = 1,A0∼U0,10.5740 (Case 1)0.6433 (Case A)KAO = 1, AAI = 0.1,A0∼U0,0.10.6175 (Case 2)0.6522 (Case B)KAO = 1, KAI = 1, NAI = 1,N0∼U0,10.6374 (Case 3)0.7692 (Case C)KAO = 1, KAI = 1, NAI = 0.1,N0∼U0,0.10.8687 (Case 4)0.8996 (Case D)Note: all values correspond to the cases in Table3.In general, the wide spreading of value on key agents has better effect than limited spreading. In particular, when the opinion value on key agent is 1, the information amount is 1 with wide spreading and the maximum information amount of other agents is 0.1, group opinion are the closest to the original opinion on key agents. The difference between the effects of wide spreading and limited spreading of key agents’ opinions is greater when the amount of information about the group is more adequate (Case 1(A) and Case 3(C)). To analyze the results in Figure5, we study the influence of information amount on the group, information spreading scope, and the initial information amount under different cases.Figure 5 Comparison of the guidance effects of key agents in different cases, and the radar chart shows the difference in the result of limit and wide spreading. ## 4.1.1. Simulation Results of Different Cases under Limited Spreading First, we strict the external spread of opinions of key agents. It means that we only consider that key agents interact the opinions and information with neighbour agents directly connected to them. 
In Figure3(a), the final average opinion value is 0.56711 in the same period. Obviously, the public opinion on entire networks is affected by the opinions of these 50 key agents where the group opinion trend is towards 1, but not evident. Figure 3(b) reflects the influence of key agents on the group opinion under information is insufficient in the entire networks. The final average opinion value of the group in the same period is 0.61211. Compared with Figure 3(a), the impact of key agents is relatively more obvious. Note worthily, the opinions of almost all agents here are finally close to the final average group opinion under this situation, which attribute to insufficient information amount on group. And a slight stratification phenomenon appears in Figure 3(c). The pressure on the group is relatively smaller because the agents have grasped a certain information amount at the same time, so it is difficult to reach consensus. In this case, the final average group opinion is greater than that in Figure 3(a) but smaller than that in Figure 3(b). The case describes whether the normal agents absorb the opinion on key agents, when the group has some cognition for the event and the key agents have sufficient information. Apparently, compared to the situation where all agents do not know the event very well, the impact on key agents is weaken. Under limited spreading on information of key agents, the average final opinion is 0.86014 in Figure 3(d) which is larger than that in any previous situation. When key agents have sufficient information, agents who do not know much about event information play a more effective role of guiding. These normal agents are under more public opinion pressure, while key agents are under relatively less public opinion pressure. When key agents have sufficient information and the other knows little about an event, the group opinion can be easily guided. It is worth noting that information is divided into objective and subjective. 
We do not rule out the case that the key agents fabricate a logical “complete story,” which is extremely dangerous for some sensitive public. Because when the group is unable to fully understand the event, a seemingly perfect and flawless “story” is easily accepted by the group that knows almost nothing. In this case, the stand points conveyed by key agents are more likely to be recognized by the group, which obviously has a certain negative impact on social stability. ## 4.1.2. Simulation Results of Different Cases under Wide Spreading In this simulation, we set external wide spreading on information of key agents, that is, the neighbour of key agent will spread the opinion value to other agents who are connected with them. Of course, the spread opinion value is not the original opinion by the key agents but will be slightly changed by the passed nodes after being spread again and again. And the degree of external spreading information on all key agents is 1; that is, a single spread link will pass through 1 node atmost.In Figure4(a), a distinct opinion stratification phenomenon finally appears. Obviously, the wide spreading of information on key agents has a guiding effect to some extent, but there are still some agents who do not recognize mainstream opinions. Additionally, the wide spreading of information on key agents will quickly make some agents have a certain degree of understanding of event information, so that the actual public opinion pressure will be reduced. This causes that the group opinion will become difficult to be unified.Figure 4 Comparison of guiding effect of key agents in different situation under wide spreading, target opinion is 1. (a) Case A. (b) Case B. (c) Case C. (d) Case D. (a)(b)(c)(d)Figure4(b) shows the results of setting the maximum information amount of all agents in the network to 0.1 differing to the parameter conditions of Figure 4(a). 
Unexpectedly, there are almost no agents with large differences in opinion in the network, and almost all nodes reach consensus. However, in this case the average opinion value of the group is 0.64849, slightly lower than 0.65708. The explanation is that under insufficient information the actual pressure felt by agents is relatively larger, which makes them likely to change their opinions, while normal agents hardly trust key agents without sufficient evidence. Therefore, although all agents are affected by the key agents, the effect is slightly weaker than in Figure 3(b). In this setting, wide spreading of information is even less effective than limited spreading.

In Figure 4(c), the final average opinion value of the group is higher, but a stratification phenomenon remains because some agents disagree with the mainstream opinion. When the group has a certain understanding of the information, key agents expressing opinions with sufficient information enable agents who do not know enough about the event to obtain more information, and the wide spreading of opinions accelerates this process. As above, the information amount affects the actual public opinion pressure and determines how easily opinions change.

In Figure 4(d), the group shows the highest acceptance of the key agents' opinion. Although a small number of agents hold opinions different from the mainstream in the early stage, they quickly disappear. Under sufficient information and wide spreading, normal agents with a maximum information amount of 0.1 and randomly distributed opinion values accept the opinions of key agents extremely easily. On careful observation, an unstable, chaotic phase can be found at the beginning, showing that agents hesitate when facing different opinions from each side in the early stage.
This is related to the limited information the agents hold in the early stage and the actual public opinion pressure.

## 4.1.3. Effectiveness Comparison

To reduce the influence of randomness on the conclusions, we simulate each case 20 times and calculate the final average opinion value of the group, as shown in Table 4. A radar chart is also drawn for easy comparison.

Table 4: The results of cases in which key agents guide the public opinion under different scenes.

| Cases | Limited spreading | Wide spreading |
| --- | --- | --- |
| KAO = 1, AAI = 1, $A(0)\sim U(0,1)$ | 0.5740 (Case 1) | 0.6433 (Case A) |
| KAO = 1, AAI = 0.1, $A(0)\sim U(0,0.1)$ | 0.6175 (Case 2) | 0.6522 (Case B) |
| KAO = 1, KAI = 1, NAI = 1, $N(0)\sim U(0,1)$ | 0.6374 (Case 3) | 0.7692 (Case C) |
| KAO = 1, KAI = 1, NAI = 0.1, $N(0)\sim U(0,0.1)$ | 0.8687 (Case 4) | 0.8996 (Case D) |

Note: all values correspond to the cases in Table 3.

In general, wide spreading of the key agents' opinion values is more effective than limited spreading. In particular, when the opinion value of the key agents is 1, their information amount is 1 with wide spreading, and the maximum information amount of the other agents is 0.1, the group opinion is closest to the original opinion of the key agents. The difference between the effects of wide and limited spreading of the key agents' opinions is greater when the group's information amount is more adequate (Case 1(A) and Case 3(C)). To analyze the results in Figure 5, we study the influence of the group's information amount, the information spreading scope, and the initial information amount under the different cases.

Figure 5: Comparison of the guidance effects of key agents in different cases; the radar chart shows the difference between the results of limited and wide spreading.

## 4.2. The Average Information Amount on the Network and Public Opinion Pressure under Different Circumstances

Opinion evolution is a process accompanied by changes in information amount. Here, the average information amount of the group in each case is compared.
Notably, the group information amount and the opinion value are not directly related: after the group opinion stabilizes, the information amount continues to grow. Therefore, the first 20 time steps in the figure are equivalent to 100 time steps in the previous cases.

Figure 6(a) clearly shows how the information amount changes with time in several different situations. The network average information amount in Cases 1, 3, and 4 almost reaches 1 during the same period, while that in Case 2 does not. In Cases 1 and 3, the information amount of the network group is relatively sufficient, so the group information amount quickly approaches the maximum. The initial growth of information in Case 2 is slow, and it increases rapidly only after reaching a certain level. In Case 4, the information amount of the key agents plays an important role: although the information amount of most agents in the network is insufficient and their maximum information amount is limited to 0.1, the key agents have sufficient information. Because key agents have high degree, that is, they are connected to numerous nodes, the external information spread from the key agents is effective, and the information amount of the group can still reach the maximum in the same period.

Figure 6: Comparison of the information amount in different scenes. In the long term, it shows the average information amount over time and matches the previous 8 cases.

Figure 6(b) exhibits the results under wide spreading of information. In this case, the average information amount of the network group in Cases A, C, and D reaches 1 in a short period of time. Case B grows slightly for a short period and then almost stagnates. Compared with the limited spreading in Figure 4(c), Cases A, C, and D in Figure 6(b) reach the maximum faster, and wide spreading strongly promotes the speed at which other nodes acquire information.
However, Case B does not follow this rule; there is long-term stagnation, a huge difference from Case 2 in Figure 6(a). This can be explained as follows: when the information of the key agents is insufficient but widely spread, the normal agents repeatedly accept this small amount of information and cannot understand the event fully, which results in long-term stagnation of the group average information amount. In this case, the key agents can be said to hinder the information acquisition of the other agents to a certain extent.

Case 2 in Figure 6(a) restricts information spreading, but this does not prevent each agent in the network from obtaining different information from various sources. Therefore, after the information amount has accumulated to a certain extent, it explodes instead of being broadly constrained by the key agents' information.

The above figures reflect the change over time of the average public opinion pressure on the group, where E is the actual public opinion pressure that agents feel and F is the objective public opinion pressure agents receive. The convergence of opinions rapidly decreases the external pressure on agents. The average public opinion pressure suffered by the group is smallest in Case 3 of Figure 7(a), followed by Cases 1 and 4; it is not difficult to see that the smaller the group's information amount, the greater the public opinion pressure. This means that sufficient information of the key agents can reduce the public opinion pressure on the group to a certain extent, which is verified by Cases 3 and 4. Comparing Figures 7(a) and 7(b), the wide spreading of the key agents' information promotes the convergence of the group and makes E approach 0 faster.

Figure 7: Comparison of the public opinion pressure (a, b). The average F of different cases (c, d).
The average E of different cases (e). The F of each agent: the blue stems show the initial F, and the red stems show the final F after one round of opinion evolution in Case 1 (f). The E of each agent: the blue stems show the initial E, and the red stems show the final E after one round of opinion evolution in Case 1.

Figures 7(e) and 7(f) compare the objective public opinion pressure and the actual public opinion pressure on each agent at the initial time and after a round of opinion evolution. Agents with higher degrees appear more likely to suffer objective public opinion pressure. This is related to the uniform distribution of the neighbours' opinions, and large weights amplify the effect: even if the difference in opinion is small, the public opinion pressure increases under the influence of the weight. The sign of the objective public opinion pressure reflects the direction of change of the agent's opinion value: if the pressure is positive, the agent's opinion value moves towards 1, and conversely. In addition, the pressure felt by an agent is positively correlated with the objective pressure (compare Figures 7(a) and 7(c), and Figures 7(b) and 7(d)). After opinion evolution, the group opinion pressure is about 0, which indicates that the final group opinion value and information amount reach a steady state and agents are minimally affected by their neighbours' opinions.

## 4.3. Guiding Effectiveness of Changing Spreading Scope and Group Maximum Information Amount

As shown in Figure 8, for Strategy 1 it can be observed that as the spreading scope changes from 0 to 1, the guidance becomes more effective. As the spreading scope continues to increase, the guiding effect does not keep increasing but oscillates. In Strategy 2, the guiding effect does not change significantly as the information spreading scope of the key agents increases.
From limited spreading to a spreading scope of 1, the guiding effect becomes slightly worse. This occurs because insufficient information makes the agents in the network easily affected by the opinions of the surrounding nodes: under wide spreading with insufficient information, the group readily accepts both the key agents' opinions and those of other agents, so the spreading effect of the key agents is not dramatic. In Strategy 1, where the information amount is relatively sufficient, opinions do not change easily under limited spreading because each agent already has a certain amount of information; if the key agents' information is widely spread, more agents whose information amount has not reached the threshold can be made to change opinions. Taking into account the scale-free behavior of the BBV network, widely spread information will be spread repeatedly. In other words, key agents with large degree can affect most of the agents within a short spreading scope, and the key agents' opinions quickly feed back from those agents because of the short spreading paths. Therefore, continuously expanding the wide spreading scope can hardly improve the guiding effect of the key agents in the network. Here, we can consider the cost for key agents to spread opinion information. Obviously, there is no need to blindly expand the spreading scope of key agents; it is only necessary to spread their information out to an appropriate range. Relevant departments can thus save the cost of guiding the public to comment on or forward information. (For example, if you live-stream your products, it is better to have more people watch the stream directly than to have viewers relay product descriptions to their friends; the former is more conducive to increasing sales.)
The guiding effects of the key agents in Strategy 1 and Strategy 2 in Figure 5 are similar, but as the spreading scope expands, the guiding effect of Strategy 1 gradually becomes better than that of Strategy 2. This is again caused by the difference in the agents' information amount: an agent with insufficient information is impacted by neighbouring agents and finds it difficult to maintain its original opinion, and vice versa.

Figure 8: The distribution of the resulting average opinion under guidance from key agents. The spreading scope is the range over which a key agent can spread its opinion at a time. Strategy 1: KAO = 1, AAI = 1; Strategy 2: KAO = 1, AAI = 0.1; Strategy 3: KAO = 1, KAI = 1, NAI = 1; Strategy 4: KAO = 1, KAI = 1, NAI = 0.1.

For Strategy 3 in Figure 8, a wide spreading scope of 0, that is, limited spreading, is markedly inferior to the situations where the wide spreading scope is 1 or continues to expand. In Strategy 3, the information and opinions of the key agents are allowed to spread widely while the group's initial information amount is relatively insufficient, so the evolution has an effective impact on the public opinion network from the beginning. The reason this scenario is more effective than Strategies 1 and 2 is that the key agents have enough information to convince their surrounding neighbours. Strategy 4 is the scenario where the opinion value of the key agents is 1, their information amount is 1, and the maximum information amount of the other nodes is 0.1. However, whether spreading is limited or wide, the final guiding effect is almost the same. Comparing Figures 3(b) and 4(b), a feature is not difficult to find: the final opinion values held by all nodes in Figure 3(b) differ little, while the differences in Figure 4(b) are relatively large and a less obvious stratification phenomenon appears.
The intention of wide spreading is to affect as many agents as possible in a short period of time, but after some agents accept the key agents' opinions, the overall information amount in the network increases and the agents actually feel less public opinion pressure; that is, they become stubborn, and it is difficult to change their opinion values. Limited spreading is a slow process: key agents affect a small number of agents, and then these agents affect other agents shortly afterwards. Compared with wide spreading, this process is more stable, and the opinion values converge more easily. Additionally, in contrast to key agents with insufficient information, key agents with sufficient information have a more stable guiding effect on the public opinion network.

An increase in information amount means that the external public opinion pressure an agent feels is relatively reduced. Figure 9 compares the effectiveness of the different strategies under different group information amounts. Overall, the differences between the strategies are relatively obvious. In Strategy A, as the initial information amount of the group increases, the guiding effect of the key agents gradually deteriorates, with slight oscillation and instability. In Strategy B, as the initial group information amount increases, the guiding effect of the key agents declines steadily. In Strategy C, as the initial information amount of the group increases, the public opinion pressure felt by the agents decreases, and the guiding effect does not change significantly but oscillates at the same level. In Strategy D, the guiding effect of the key agents slowly decreases as the group information amount increases.

Figure 9: The distribution of the resulting average opinion under guidance from key agents. Strategy A: KAO = 1, $A(0)\sim U(0,\text{Info})$, limited spreading.
Strategy B: KAO = 1, KAI = 1, $N(0)\sim U(0,\text{Info})$, limited spreading. Strategy C: KAO = 1, $A(0)\sim U(0,\text{Info})$, wide spreading. Strategy D: KAO = 1, KAI = 1, $N(0)\sim U(0,\text{Info})$, wide spreading.

Strategies C and D give the guiding effectiveness of the key agents as the information amount changes under wide spreading; the corresponding results under limited spreading, Strategies A and B, are relatively inferior. In Strategies A and C, only when the information amount is extremely small is the guiding effect under wide spreading inferior to that under limited spreading. When the group information amount is quite small and the information of the key agents is also very small, that is, the maximum information amount is 0.1, Strategy A performs better than Strategy C. When the maximum information amount reaches 1, the key agents of Strategies B and C have similar guiding effects. At this point, Strategy B gives the result when the opinion value of the key agents is 1, their information amount is 1, and the maximum information amount of the other nodes is 1 with limited spreading (Case C), while the final value of Strategy C gives the result when the key agents' opinion is 1 and the maximum information amount of all nodes is 1 with wide spreading (Case A). Comparing Figures 3(a) and 3(c), the final average opinion values of the two are indeed similar; the value in Figure 3(c) is slightly higher than that in Figure 3(a), which is also in line with the situation in Figure 6(a), where Strategy C finally sits slightly higher than Strategy B. The stability of Strategy C arises because, when the key agents have the same information amount as the other nodes, the guiding effect of the key agents changes little regardless of the group information level, even if wide spreading is introduced.
The decline in Strategy A occurs because, under limited spreading, as the group's information amount grows, the opinion value of the key agents becomes less and less convincing and harder for the group to accept. Strategies B and D follow the same rule. Based on the above analysis of the different strategies under changing information amounts, different strategies can be chosen to achieve the goal of guiding opinions, depending on the understanding of the group and the cost.

## 5. Conclusion

In this paper, we construct different opinion evolution laws for different agents according to the cost function, and then study the influence of information amount and information dissemination mode on group opinion by setting different information characteristics. To summarize, the main contributions of the paper are as follows:

(1) How much individuals know about the event information affects the trend of the final public opinion. When an agent knows less about the event, he or she is more likely to be influenced by the people around and the opinion is easier to change. This suggests that relevant departments should publish a comprehensive account as soon as possible, which can guide the development of public opinion by increasing individual cognition.

(2) In general, public opinions move closer to the key agents' opinions under wide spreading than under limited spreading. Notably, expanding information diffusion does not necessarily improve the aggregation of public opinion: public opinion tends to stabilize once the range of information diffusion has been expanded to some extent. This suggests that relevant departments could limit the scope of information dissemination to achieve public opinion control while avoiding waste of public resources.
For example, when there are COVID-19 cases, the government only needs to supervise close or secondary contacts to control the epidemic effectively.

Although this work considers only the least-cost decision-making of agents regarding information amount and opinion changes, models the public opinion pressure imposed by neighbouring agents in a simple manner, and draws all conclusions from simulation results, it still reveals some special mechanisms of opinion evolution.

As a result, our research enriches the study of predicting public opinion development and can also provide a theoretical basis for government public opinion monitoring.

---

*Source: 1016692-2022-05-27.xml*
# The Public Opinion Evolution under Group Interaction in Different Information Features

**Authors:** Jing Wei; Yuguang Jia; Yaozeng Zhang; Hengmin Zhu; Weidong Huang

**Journal:** Complexity (2022)

**Category:** Mathematical Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1016692
---

## Abstract

Before expressing opinions, most people consider the standpoints of their friends nearby to avoid being isolated, which may lead to a herding effect. The words of celebrities in social networks usually attract public attention and affect the opinion evolution of the entire network, which reinforces this status quo. In this study, we find that key figures, who bear group pressure arising from information amount, play guiding roles in public opinion. We therefore build a cost function for opinion changes to study the opinion evolution rules for public figures based on the spreading scope of information and the information amount. Simulation analysis reveals that the information amount held by agents affects the convergence speed of public opinions, while enhancing the ability of key nodes may be no more effective in guiding public opinion.

---

## Body

## 1. Introduction

The expression of individuals' opinions and the spreading of information are the evolutionary forces in public opinion networks. The rapid spreading of information allows most agents in a network to quickly evaluate information based on their own judgment, resulting in the development of their respective opinion values. Opinions develop from information interaction and vary across public opinion networks as public opinion evolves. When an agent has sufficient information about an event and his or her own knowledge of it, the agent's opinion can hardly be influenced by the public environment; such an agent can be regarded as "stubborn." Undoubtedly, on the Internet, the speech of agents with many followers is also important. In public opinion networks, however, the information amount about the event held by the group, that is, the degree of their understanding of the event, leads to varying responses to the opinions proposed by these agents.
On the other hand, the spreading scope of special agents' information also affects the overall opinion evolution in public opinion networks. Additionally, for a specific event, agents' opinions divide into opposition, neutrality, and support, with slight differences in detail; agents tend to have preferences rather than fully supporting or opposing an event.

Opinion, as the carrier, is inseparable from information, and key agents, as one source of information, are bound to affect the development of public opinion. To study the influence of information changes on the opinions of different agents and the influence of the information dissemination scope on group opinions, we propose an opinion evolution formula based on a cost function and public opinion pressure. In theory, our work extends continuous opinion evolution models based on a cost function, which allows us to study the rules governing changes in opinion and information specifically. In practice, our research can provide relevant departments with measures for the supervision and management of public opinion, effectively reducing the consumption of public resources and guiding the development of public opinion.

The remainder of the paper is organized as follows: we review related literature and compare existing works with ours in Section 2. In Section 3, using the information growth function, the public opinion pressure function, and the cost function, we construct opinion dynamics models for different agents based on the BBV network. Section 4 compares simulation results under different information environments to further study the influence of information amount and information scope on public opinion evolution. Finally, conclusions and contributions are given in Section 5.

## 2. Literature Review

The development of social group opinions is embedded in the process of social interaction [1], and the spread of numerous opinions is accomplished by interactions between agents in society [2, 3]. Studies of the evolution of group opinions can reflect and explain various complex social phenomena, including the aggregation of group opinions and the spread of rumors, and can also be applied to group decision-making and sociology, as well as the development of group wisdom. Examples include studying how social media and human behavior jointly influence the spread of infectious diseases through an epidemic model [1], dividing social trust into a continuous range to study how the business reputation of firms can be improved in business networks [2], or introducing game theory, treating public opinion as a continuous interval within [0, 1], discussing the conformity and manipulation behavior of agents in realistic opinion networks, and studying agent voting choices [3–6]. Hence, the study of the spread mechanism of group opinions can clarify various political, economic, and management phenomena, including popularity, the existence of minority opinions, consistency and diversity, and the leading role of the government [7–14].

To date, various studies of the dynamics of opinion evolution have been reported. Previous opinion evolution models have focused on discrete or continuous opinions, which greatly simplify the opinions expressed by agents in a social network, usually using plain interaction rules, and thus describe the most representative agent behavior and the associated interaction characteristics. Most group opinions in discrete opinion models are classified as pro, con, and neutral, or buy and sell, or left and right, usually denoted by 1, −1, and 0. Discrete models include the Ising model [15–18], the Sznajd model [19–22], and the majority decision model [23–27].
Although discrete opinions are consistent with judgments on "right and wrong" issues, these models can hardly reflect the degree of preference in agents' opinions. In practice, agents' opinions differ somewhat from each other, while the opinion held at a specific moment can be regarded as strong and steady. Usually, the value of an agent's opinion is normalized to the continuous closed interval [0, 1], which can reflect more information about the agent and describe the gradual change of the agent's opinion, such as a satisfaction evaluation of a product, the degree of confidence in a judgment, or the degree of support for or opposition to an event. These characteristics are well described by continuous opinion evolution models, including the Deffuant model [28–31], the HK (Hegselmann-Krause) model [32–34], and their extensions.

A large number of extension studies have built on the above approach, for example, by considering that agents in social networks tend to be influenced by pressure from nearby agents. Dong et al. presented a novel DW model combined with the local-world opinion of individuals' common friends, in which opinion updates depend on the distance between individual opinions and network structure similarity; they analyzed the convergence of the model through simulation experiments [35]. Cheng and Yu proposed a modified HK model involving group pressure and showed that the pressured agents always reach a consensus in finite time [36]. Lu et al. reported that external pressure induced by public focus has negligible or weak influence [37]. Ferraioli and Ventre demonstrated that the dynamics in clique social networks always converge to consensus if the social pressure is sufficiently high [38].
Regarding the influence of agent information on the evolution of public opinion, a base agent model with a bounded confidence mode has been used, in which information is introduced and different information releasing modes are explored [39]. Lan et al. proposed a statistical model for the influence of network rumors on the information amount of public opinion networks [40]. However, these studies did not simultaneously consider the influences of different factors.

The above studies mainly focus on the natural evolution of group opinions, rarely study the effect of guidance from opinion leaders, and seldom consider the influence of the scope of opinion dissemination by opinion leaders on the evolution of group opinions. A comparison of related studies on the impact of group pressure and agent information amount is shown in Table 1.

Table 1: Comparison of related studies of public opinion on the impact of group pressure and agent information amount.

| References | Method | Opinion value | Information amount | Group pressure |
| --- | --- | --- | --- | --- |
| Cheng and Yu [36] | Simulation | Continuous | No | Yes |
| Lu et al. [37] | Empirical analysis | Discrete | No | Yes |
| Ferraioli and Ventre [38] | Mathematical proof | Continuous | No | Yes |
| Zhu and Hu [39] | Empirical analysis, simulation | Continuous | Yes | No |
| Lan et al. [40] | Simulation | Discrete | Yes | No |
| Current study | Simulation | Continuous | Yes | Yes |

## 3. Modeling

With the in-depth study of complex network topology, numerous network models have been proposed to describe abstract social networks. Typical models include small-world networks [41] and scale-free networks [42–44]. Most previous models do not take the weights between nodes into account. However, on mainstream social platforms such as Weibo or Facebook, key agents as hub nodes not only obtain more attention but also have a stronger impact on ordinary agents. Therefore, our work adopts the BBV network model [45] to describe the social network.
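As a rough illustration of how such a weighted scale-free network can be grown, the following is a minimal sketch of the BBV mechanism (strength-preferential attachment with local weight rearrangement, as in reference [45]). The function name and the parameter values `m`, `w0`, and `delta` are illustrative assumptions, not values taken from the paper.

```python
import random

def bbv_network(n, m=2, w0=1.0, delta=1.0, seed=None):
    """Grow an n-node BBV weighted scale-free network.

    Returns a dict-of-dicts weight map: w[i][j] = weight of edge (i, j).
    Each new node attaches to m existing nodes chosen proportionally to
    their strength; every new edge of weight w0 triggers a rearrangement
    of delta extra strength across the target's existing edges.
    """
    rng = random.Random(seed)
    # seed graph: m+1 fully connected nodes with weight w0
    w = {i: {} for i in range(m + 1)}
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            w[i][j] = w[j][i] = w0
    strength = {i: sum(w[i].values()) for i in w}

    for new in range(m + 1, n):
        # strength-preferential roulette choice of m distinct targets
        targets = set()
        while len(targets) < m:
            r = rng.uniform(0, sum(strength.values()))
            acc = 0.0
            for node, s in strength.items():
                acc += s
                if acc >= r:
                    targets.add(node)
                    break
        w[new] = {}
        for t in targets:
            # redistribute delta over t's existing edges, proportional to weight
            s_t = strength[t]
            for nb in list(w[t]):
                bump = delta * w[t][nb] / s_t
                w[t][nb] += bump
                w[nb][t] += bump
                strength[nb] += bump
            w[new][t] = w[t][new] = w0
            strength[t] = sum(w[t].values())
        strength[new] = sum(w[new].values())
    return w
```

With this growth rule, high-strength hubs keep attracting both new edges and extra weight, which is why the key agents in the model end up with the largest degree, weight, and strength.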
In this network model, the degree, weight, and strength of agents follow power-law distributions.

We assume that the social network can be abstracted as a BBV network $G=(\nu,\varepsilon,\delta)$, where $\nu=\{1,2,\ldots,k,k+1,\ldots,k+n\}$ is the node set, $\varepsilon$ is the edge set, and $\delta$ is the set of edge weights. Here, $k$ is the number of key agents and $n$ is the number of normal agents. The parameter $w_{ij}$ describes the weight between node $i$ and a connected node $j$, which measures the relation intensity between agents in our network model. Define $I_i\in[0,1]$ as the information amount of agent $i$; $A(0)=\{I_1(0),I_2(0),\ldots,I_k(0),I_{k+1}(0),\ldots,I_{k+n}(0)\}$ represents the information amounts of all agents at the initial time, and $N(0)=\{I_1(0),I_2(0),\ldots,I_n(0)\}$ represents the information amounts of the normal agents at the initial time. In addition, to study the influence of information amount and group pressure on public opinion, we introduce some important symbols, as shown in Table 2.

Table 2: Relevant important symbols.

| Symbol | Implication |
| --- | --- |
| $O_i$ | The opinion of agent $i$ |
| $d_i$ | The degree of agent $i$ |
| $F_i$ | The objective public opinion pressure on agent $i$ |
| $E_i$ | The actual public opinion pressure on agent $i$ |
| KAO | The key agents' opinion value |
| KAI | The information amount of key agents |
| AAI | The maximum information amount of all agents |
| NAI | The maximum information amount of normal agents |

In social networks, all agents deliver their opinions and are influenced by nearby agents. Therefore, we divide agents into key agents, neighbour agents, and other agents in our model. Key agents, such as opinion leaders, are the most important nodes in the BBV network, with the largest degree, weight, and strength. Before expressing their opinions, they consider not only their own ideas but also the public opinion environment. As agents with more attention, they are under pressure from public opinion, so they convey the information about an event through their opinions more objectively and comprehensively. Neighbour agents are the agents directly connected to the key agents.
Because they are close to influential key agents, their opinions are partly influenced by the key agents while they insist on their own views. Other agents are not directly connected to key agents, so they are not sensitive to information; in other words, the information sometimes does not reach them. The opinion evolution rule of other agents is the same as that of the neighbour agents. Finally, neighbour agents and other agents are together called normal agents.

To study the relationship between the information amount, the spreading scope, and the change of group opinion, we consider two modes of opinion spreading. First, limited spreading: opinion interaction happens only between key agents and neighbour agents, which causes the change in group opinions. Second, wide spreading: after key agents and neighbour agents finish interacting, other agents are influenced by the connected neighbour agents and change their opinions unidirectionally at the next time step. After this process finishes, those other agents spread the opinion to the next level of other agents and influence their opinion evolution at the next time step, and so on. The opinion spreading process is shown in Figure 1.

Figure 1: Information wide spreading process. (The node K represents key agents, the node Nei represents neighbour agents, the node O represents other agents, and the arrows show the direction of opinion flow.)

### 3.1. Growth Function of Information Amount

When agents first obtain information about a specific event, they master it to different degrees. Obviously, the closer an agent is to the event, the more complete the information obtained and the larger the information amount. As the spreading distance of information increases, the degrees of distortion and misinterpretation of the news increase, which means that the information and opinion deviate from the event itself.
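The wide-spreading mode described above can be sketched as a level-by-level relay from each key agent, one hop per time step. In this sketch, `scope` counts the extra hops beyond the key agent's direct neighbours (so `scope = 0` reduces to limited spreading), and the per-hop distortion amplitude `noise`, like all names here, is an illustrative assumption rather than a parameter defined in the paper.

```python
import random

def wide_spread(adj, key_agents, opinion, scope, noise=0.05, seed=None):
    """Level-by-level relay of key-agent opinions ('wide spreading').

    adj        : dict node -> iterable of neighbours
    key_agents : iterable of key-agent ids
    opinion    : dict node -> current opinion value in [0, 1]
    scope      : hops beyond direct neighbours (0 = limited spreading)
    Returns    : dict node -> the spread opinion value that reaches it.
    """
    rng = random.Random(seed)
    received = {}
    for k in key_agents:
        frontier = {k: opinion[k]}           # hop 0: the key agent itself
        visited = {k}
        for _hop in range(scope + 1):        # first pass reaches neighbours
            nxt = {}
            for src, val in frontier.items():
                for nb in adj[src]:
                    if nb in visited:
                        continue
                    # the relayed value drifts slightly at every pass
                    distorted = min(1.0, max(0.0, val + rng.uniform(-noise, noise)))
                    nxt[nb] = distorted
                    visited.add(nb)
            for nb, val in nxt.items():
                received[nb] = val           # last writer wins in this sketch
            frontier = nxt
    return received
```

The `visited` set ensures each node relays a key agent's opinion at most once, matching the unidirectional, level-by-level flow in Figure 1.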
However, through the interaction of agent opinions and information in public opinion networks, agents' information amount increases and accumulates over time. We define I_i ∈ [0, 1] as the information amount of agent i. For a central agent i and its neighbour agent j, I_i ∪ I_j ≤ 1 and I_i ∩ I_j ≤ min(I_i, I_j); that is, the maximum information amount grasped by a single agent in the network is 1, the overlapping information amount between two agents is less than 1, and the combined information amount of two agents, with the common information removed, does not exceed 1. The growth rule of an agent's information amount is defined as follows. It applies to key agents and to the neighbour agents of key agents. When information is obtained unilaterally, other surrounding neighbour nodes are not taken into account and only a single information source node is considered. Here i is the central node, j is the neighbour node, and k is the information growth coefficient.

If the affected agent is the key agent i,

$$
I_i(t+1) = I_i(t)\left(1 + \frac{k\tau}{I_i(t)}\right), \quad
\begin{cases}
\tau \in \left[0, \left|I_i(t) - \frac{\sum_{j\in\alpha_i} I_j}{d_i}\right|\right], & I_i \ge \frac{\sum_{j\in\alpha_i} I_j}{d_i},\\[4pt]
\tau = \left|I_i(t) - \frac{\sum_{j\in\alpha_i} I_j}{d_i}\right|, & I_i < \frac{\sum_{j\in\alpha_i} I_j}{d_i}.
\end{cases}
\tag{1}
$$

If the affected node is the neighbour agent j,

$$
I_j(t+1) = I_j(t)\left(1 + \frac{k\tau}{I_j(t)}\right), \quad
\begin{cases}
\tau \in \left[0, |I_j(t) - I_i|\right], & I_j \ge I_i,\\[4pt]
\tau = |I_j(t) - I_i|, & I_j < I_i.
\end{cases}
\tag{2}
$$

The parameter τ in the equations above is the information difference, which expresses the agent's judgment of the gap between the information amount held by its neighbours and by itself. In equation (1), i is the central node, and we compare the average information amount of all neighbouring nodes of i with i's own. If the information of the central node is greater than the neighbour average, the difference τ is taken randomly in a limited range, with a minimum of 0 and a maximum equal to the difference value; in this case the coefficient k is also smaller.
Conversely, if the information amount of node i is less than the neighbour average, the information difference τ equals the gap between the two, because node i believes that more information is available from outside, and the value of k is larger. Equation (2) treats the influence of the central node i on the neighbour node j in the same way.

### 3.2. Public Opinion Pressure Function

Human beings are social, and the motivation behind behaviour is influenced by the people around us. Before expressing an opinion, an agent takes into account the attitudes of familiar neighbours; under the influence of friends, agents' opinions tend toward the mainstream. The pressure an agent feels is inversely proportional to the information amount it holds; that is, the more information agents have, the less likely they are to be influenced by surrounding agents. In addition, agents pay attention to their own opinions whether they are being influenced by others or delivering opinions to the outside world.

Define the opinion pressure exerted by all neighbours of agent i as γ_i ∈ [0, +∞), based on the opinion differences and the relation strengths between agent i and its neighbours. Taking into account the information amount held by the agent itself, we define the objective opinion pressure on agent i as F_i ∈ [0, +∞). The factual opinion pressure on agent i, E_i ∈ (0, 1), is determined by F_i and the parameter a (the "stress level," which reflects how much outside pressure an agent sustains before becoming patient with its further growth). In addition, because of diminishing marginal utility in human psychology, the logistic function is suitable for describing this process; this form has been proven and is widely used [46–50].
In our work, as the pressure exerted by the surrounding nodes on node i increases, the factual opinion pressure E_i first grows quickly, then more slowly, and finally stabilizes. The three quantities are therefore related by

$$
\gamma_i = \sum_{j\in\alpha_i} w_{ij}\,(O_j - O_i), \qquad
F_i = \frac{\gamma_i}{I_i}, \qquad
E_i = \frac{1}{1 + e^{\,a - F_i}}.
\tag{3}
$$

### 3.3. Cost Function of Opinion Change and Optimal Strategy Formula

When a key agent is affected by a neighbour node, it considers the influence and the overall opinion difference based on the relation intensity with the surrounding agents, and combines the information amount it holds to change its own opinion value. When a neighbour node receives the opinion value of the central agent, only the difference in information amount, the difference in opinion value, and the relation intensity are considered, because the information is obtained unilaterally. The opinion interaction between a single agent and its surrounding neighbour agents thus has two stages: first the central agent is affected, changes its opinion, and expresses it; then the opinions of the neighbours change.

Following the cost-function-based opinion evolution rule proposed by Li and Zhu [51], we construct the cost function under external pressure from the neighbours and derive the best strategy formula describing the change of opinion. The decision cost function for node i to change its opinion after being influenced by surrounding nodes is shown in equation (4); the two terms on the right-hand side represent the self-inflicted cost of changing one's view and the cost of external pressure, respectively:

$$
J_i(O_i, O_{\alpha_i}) = \frac{1}{2} I_i \left(O_i - O_i(t)\right)^2 + \frac{1}{2} E_i \sum_{j\in\alpha_i} \left(O_i - O_j\right)^2.
\tag{4}
$$

Differentiating with respect to O_i gives

$$
\frac{\partial J_i}{\partial O_i} = I_i\left(O_i - O_i(t)\right) + E_i \sum_{j\in\alpha_i} \left(O_i - O_j\right).
\tag{5}
$$

The cost minimization condition is ∂J_i/∂O_i = 0.
Rearranging the terms,

$$
I_i O_i + E_i d_i O_i = I_i O_i(t) + E_i \sum_{j\in\alpha_i} O_j, \qquad
O_i\left(I_i + E_i d_i\right) = I_i O_i(t) + E_i \sum_{j\in\alpha_i} O_j,
$$
$$
O_i(t+1) = \frac{I_i}{I_i + E_i d_i}\, O_i(t) + \frac{E_i}{I_i + E_i d_i} \sum_{j\in\alpha_i} O_j.
\tag{6}
$$

For the neighbour agent j of agent i, the cost of changing opinion is defined analogously, with O_i known and O_j unknown. Because agent j is influenced by the central agent i alone, only the relationship between the two is considered. In equation (7), the first term on the right-hand side is the cost of agent j changing its opinion relative to the previous moment, and the second term is the cost associated with the degree of influence of agent i on agent j's opinion. The parameter w_ij is the strength of the relationship between the two, and |I_i − I_j| is the difference in their information amounts:

$$
J_j(O_j, O_i) = \frac{1}{2} I_j \left(O_j - O_j(t)\right)^2 + \frac{1}{2} w_{ij}\,|I_i - I_j|\left(O_j - O_i\right)^2.
\tag{7}
$$

Similarly, taking the derivative with respect to O_j,

$$
\frac{\partial J_j}{\partial O_j} = I_j\left(O_j - O_j(t)\right) + w_{ij}\,|I_i - I_j|\left(O_j - O_i\right).
\tag{8}
$$

The cost minimization condition is ∂J_j/∂O_j = 0. Rearranging the terms,

$$
O_j(t+1) = \frac{I_j}{w_{ij}\,|I_i - I_j| + I_j}\, O_j(t) + \frac{w_{ij}\,|I_i - I_j|}{w_{ij}\,|I_i - I_j| + I_j}\, O_i.
\tag{9}
$$

## 4. Simulation and Discussion

A BBV network with 300 nodes is established. For comparability of the simulation results, we set the maximum weight to 1. To discuss the influence of different information features on public opinion, several situations are additionally set up. We also introduce the wide spreading mechanism to explore whether the opinions of key agents and information spreading impact opinion evolution differently. Finally, we simulate the public opinion process under different situations as the information spreading scope and information amount change.

In the simulation, when the information amount of agent i is greater than or equal to that of agent j (or the neighbour average information amount), the information growth coefficient k in formulas (1) and (2) is 0.001; otherwise it is 0.01. The coefficient a in formula (3) is 5. To observe the evolution mechanism of the group opinion in combination with the scale-free character of the network, we arrange all opinion values at each time step in descending order; the agent label on the horizontal axis therefore only counts agents and does not track the opinion of any particular agent over time.
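A single evolution step for a key agent, combining the growth rule (1), the pressure function (3), the update rule (6), and the coefficient switching just described, might be sketched as follows (an illustration under our own naming; the paper publishes no code, so `step_key_agent` and its data layout are assumptions):

```python
import math
import random

A = 5.0  # stress-level coefficient a in equation (3)

def step_key_agent(O, I, weights, i, t_seed=None):
    """One evolution step for key agent i.
    O, I: dicts of opinion and information amount per node;
    weights: dict of dicts of edge weights (also defines neighbours)."""
    rng = random.Random(t_seed)
    neigh = list(weights[i])
    d = len(neigh)
    avg_info = sum(I[j] for j in neigh) / d
    gap = abs(I[i] - avg_info)
    # Equation (1): tau and k depend on whether i already knows more
    # than its neighbours on average (k = 0.001 vs 0.01).
    if I[i] >= avg_info:
        k, tau = 0.001, rng.uniform(0.0, gap)
    else:
        k, tau = 0.01, gap
    # I(t+1) = I(t)(1 + k*tau/I(t)) = I(t) + k*tau, capped at 1.
    I_next = min(1.0, I[i] + k * tau)
    # Equation (3): neighbour pressure, objective and factual pressure.
    gamma = sum(weights[i][j] * (O[j] - O[i]) for j in neigh)
    F = gamma / I[i]
    E = 1.0 / (1.0 + math.exp(A - F))
    # Equation (6): cost-minimizing opinion update.
    O_next = (I[i] * O[i] + E * sum(O[j] for j in neigh)) / (I[i] + E * d)
    return O_next, I_next
```

The updated opinion is a convex combination of the agent's previous opinion and its neighbours' opinions, so it always stays between the extremes of the current opinions, which is why the simulated group converges rather than diverging.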
Moreover, each time step iterates 60 times.

Figure 2 shows the free evolution result of the BBV network with 300 nodes without any changes, where the opinion values of all nodes are uniformly distributed in [0, 1] and the information amounts are uniformly distributed in [0, 1]. In the figure, the closer to the red part, the closer the opinion value is to 1; the closer to the blue part, the closer to 0; and the closer to the green part, the closer to 0.5. In Figure 2(a), the opinions of all agents in the network converge to 0.5 over time; after 100 time steps, the final average opinion value of 0.4981 is close to 0.5. In Figure 2(b), the maximum information amount of all nodes is set to 0.1. Compared with Figure 2(a), the opinions of all agents still incline to 0.5, but the convergence is faster. These two experiments simulate the evolution of public opinion under different information amounts grasped by agents at the beginning of an event. Apparently, in the initial period of a specific event, the less information the agents have, the more easily their opinions reach agreement. Imagine that in a public opinion network each agent has a limited mastery of information; under the same level of public opinion pressure from its neighbour nodes, an agent will easily agree with the opinions of other agents, but not vice versa. If the truth of the event is deliberately concealed and the public does not understand the event, the public will speculate extensively and take arbitrary positions on it. Even with insufficient evidence and information, the public tends to believe the opinions of surrounding agents, whether originally for or against them.
After fully understanding the opinions of both sides, agents usually fall into neutrality, and in the end the group quickly reaches agreement. In a transparent society, however, instant information publication allows each agent to grasp the full picture of the event at once. Agents then have enough information to support their own opinions and are not easily affected by others; as a result, the convergence of all opinions and the time to reach consensus are delayed.

Figure 2: Comparison of the opinion distribution of the network over time. (a) A(0) ∼ U(0, 1); (b) A(0) ∼ U(0, 0.1).

Figure 3: Comparison of the guiding effect of key agents in different situations under limited spreading; the target opinion is 1. (a) Case 1. (b) Case 2. (c) Case 3. (d) Case 4.

In the simulation, the external public opinion pressure suffered by each agent is also recorded. The key agents are exposed most evidently to public opinion pressure from neighbour agents, while normal agents are not. When a key agent's information amount is insufficient, the greater the actual public opinion pressure it feels, the more easily its opinion value changes. Over time, the opinion value of key agents tends to the average level of the group, while both the felt and the objective public opinion pressure decrease and eventually converge with the group.

### 4.1. Simulation of the Guidance Effect of Key Agents

To explore the influence of key agents on public opinion, different situations are set up according to whether the information can be widely spread and whether the information amount is sufficient, and their influence on the evolution of network opinions is discussed. For comparability of the simulation results, we set an appropriate number of iterations and 16.7% (50 nodes) key agents in the network. The set of cases to simulate is shown in Table 3.
Table 3: The set of cases to simulate.

| Scenario | Limited spreading | Wide spreading |
| --- | --- | --- |
| KAO = 1, AAI = 1, A(0) ∼ U(0, 1) | Case 1 | Case A |
| KAO = 1, AAI = 0.1, A(0) ∼ U(0, 0.1) | Case 2 | Case B |
| KAO = 1, KAI = 1, NAI = 1, N(0) ∼ U(0, 1) | Case 3 | Case C |
| KAO = 1, KAI = 1, NAI = 0.1, N(0) ∼ U(0, 0.1) | Case 4 | Case D |

Note: limited spreading means key agents cannot spread their opinion outward in one step; wide spreading means key agents can propagate their opinion to the public at once, and the wide spreading level here is 1.

#### 4.1.1. Simulation Results of Different Cases under Limited Spreading

First, we restrict the external spread of the key agents' opinions; that is, key agents exchange opinions and information only with the neighbour agents directly connected to them. In Figure 3(a), the final average opinion value over the same period is 0.56711. The public opinion of the entire network is pulled toward 1 by the opinions of these 50 key agents, but not markedly. Figure 3(b) reflects the influence of key agents on the group opinion when information in the entire network is insufficient; the final average opinion value over the same period is 0.61211. Compared with Figure 3(a), the impact of the key agents is relatively more obvious. Noteworthily, the opinions of almost all agents here finally approach the final average group opinion, which is attributable to the insufficient information amount of the group. A slight stratification phenomenon appears in Figure 3(c): because the agents have already grasped a certain information amount, the pressure on the group is relatively smaller, so it is difficult to reach consensus. In this case, the final average group opinion is greater than in Figure 3(a) but smaller than in Figure 3(b). This case describes whether normal agents absorb the opinion of key agents when the group has some cognition of the event and the key agents have sufficient information.
Apparently, compared with the situation where no agent knows the event well, the impact of the key agents is weakened. Under limited spreading of the key agents' information, the average final opinion in Figure 3(d) is 0.86014, larger than in any previous situation. When key agents have sufficient information, they guide agents who know little about the event more effectively: these normal agents are under more public opinion pressure, while the key agents are under relatively less. When key agents have sufficient information and the others know little about an event, the group opinion can easily be guided. It is worth noting that information can be objective or subjective. We do not rule out the case that key agents fabricate a logically consistent "complete story," which is extremely dangerous for a sensitive public: when the group cannot fully understand the event, a seemingly perfect and flawless "story" is easily accepted by a group that knows almost nothing. In this case, the standpoints conveyed by key agents are more likely to be recognized by the group, which clearly has a negative impact on social stability.

#### 4.1.2. Simulation Results of Different Cases under Wide Spreading

In this simulation, we allow external wide spreading of the key agents' information; that is, the neighbours of a key agent spread the opinion value to the other agents connected with them. Of course, the spread opinion value is not the key agent's original opinion but is slightly changed by each node it passes through as it is spread again and again. The degree of external spreading for all key agents is 1; that is, a single spreading link passes through at most 1 node. In Figure 4(a), a distinct opinion stratification phenomenon finally appears.
Obviously, the wide spreading of the key agents' information has a guiding effect to some extent, but some agents still do not accept the mainstream opinions. In addition, wide spreading quickly gives some agents a certain understanding of the event information, which reduces the actual public opinion pressure and makes the group opinion difficult to unify.

Figure 4: Comparison of the guiding effect of key agents in different situations under wide spreading; the target opinion is 1. (a) Case A. (b) Case B. (c) Case C. (d) Case D.

Figure 4(b) shows the results when the maximum information amount of all agents in the network is set to 0.1, differing from the parameter conditions of Figure 4(a). Unexpectedly, almost no agents in the network hold opinions that differ markedly, and almost all nodes reach consensus. However, the average opinion value of the group in this case is 0.64849, slightly lower than 0.65708. The explanation is that under insufficient information the actual pressure felt by agents is relatively larger, which makes them likely to change their opinions, while normal agents hardly trust key agents without sufficient evidence. Therefore, although all agents are affected by the key agents, the effect is slightly weaker than in Figure 3(b); at this point, wide spreading of information is even less effective than limited spreading.

In Figure 4(c), the final average opinion value of the group is higher, but the stratification phenomenon persists because some agents disagree with the mainstream opinions. When the group has a certain understanding of the information, key agents expressing opinions with sufficient information enable agents who know too little about the event to obtain more information, and the wide spreading of opinions accelerates this process.
As noted above, the information amount affects the actual public opinion pressure and determines whether opinions change easily. In Figure 4(d), the group shows the highest acceptance of the key agents' opinion. Although a small number of agents hold opinions different from the mainstream in the early stage, they quickly disappear. Under sufficient information and wide spreading, normal agents with a maximum information amount of 0.1 and randomly distributed opinion values very easily accept the opinions of key agents. On careful observation, an unstable, chaotic phase can be found at the beginning, showing that agents hesitate when facing different opinions from each side early on; this is related to the limited information the agents hold at that stage and the actual public opinion pressure.

#### 4.1.3. Effectiveness Comparison

To reduce the influence of randomness on the conclusions, we simulate each case 20 times and calculate the final average opinion value of the group, as shown in Table 4. A radar chart is also drawn for easy comparison.

Table 4: Results of key agents guiding the public opinion under different scenes.

| Scenario | Limited spreading | Wide spreading |
| --- | --- | --- |
| KAO = 1, AAI = 1, A(0) ∼ U(0, 1) | 0.5740 (Case 1) | 0.6433 (Case A) |
| KAO = 1, AAI = 0.1, A(0) ∼ U(0, 0.1) | 0.6175 (Case 2) | 0.6522 (Case B) |
| KAO = 1, KAI = 1, NAI = 1, N(0) ∼ U(0, 1) | 0.6374 (Case 3) | 0.7692 (Case C) |
| KAO = 1, KAI = 1, NAI = 0.1, N(0) ∼ U(0, 0.1) | 0.8687 (Case 4) | 0.8996 (Case D) |

Note: all values correspond to the cases in Table 3.

In general, wide spreading of the key agents' opinion value is more effective than limited spreading. In particular, when the key agents' opinion value is 1, their information amount is 1 with wide spreading, and the maximum information amount of the other agents is 0.1, the group opinion comes closest to the key agents' original opinion.
The difference between the effects of wide and limited spreading of the key agents' opinions is greater when the group's information amount is more adequate (Case 1 versus Case A, and Case 3 versus Case C). To analyze the results in Figure 5, we study the influence of the group's information amount, the information spreading scope, and the initial information amount under the different cases.

Figure 5: Comparison of the guidance effects of key agents in different cases; the radar chart shows the difference between the results of limited and wide spreading.

### 4.2. The Average Information Amount of the Network and Public Opinion Pressure under Different Circumstances

Opinion evolution is a process accompanied by changes in information amount. Here, the average information amount of the group in each case is compared. Notably, the group information amount and the opinion value are not directly related, and after the group opinion stabilizes, the information amount continues to grow. The first 20 time steps in the figure are therefore equivalent to 100 time steps in the previous cases.

Figure 6(a) clearly shows how the information amount changes over time in the different situations. The average network information amount in Cases 1, 3, and 4 almost reaches 1 during the same period, while in Case 2 it does not. In Cases 1 and 3, the information amount of the network group is relatively sufficient, so it quickly approaches the maximum. The initial growth of information amount in Case 2 is slow, and it increases rapidly only after reaching a certain level. In Case 4, the information amount of the key agents plays an important role: although the information amount of most agents in the network is insufficient and their maximum information amount is limited to 0.1, the key agents have sufficient information.
Owing to the high degree of the key agents, that is, their connection to numerous nodes, the outward spread of information from key agents is effective, so the information amount of the group can still reach the maximum in the same period.

Figure 6: Comparison of the information amount in different scenes over the long term, showing the average information amount over time for the previous 8 cases.

Figure 6(b) exhibits the results under wide spreading of information. In this case, the average information amount of the network groups in Cases A, C, and D reaches 1 in a short period, while Case B grows slightly at first and then almost stagnates. Compared with the limited spreading in Figure 6(a), Cases A, C, and D in Figure 6(b) reach the maximum faster, as wide spreading strongly promotes the speed of information acquisition by other nodes. Case B, however, does not follow this rule and stagnates for a long time, a large difference from Case 2 in Figure 6(a). This can be explained as follows: when the key agents' information is insufficient and widely spread, normal agents repeatedly receive this small information amount and cannot understand the event fully, so the group's average information amount stagnates over the long term. In this case, the key agents actually hinder information acquisition by the other agents to a certain extent.

Case 2 in Figure 6(a) restricts information spreading, but this does not prevent each agent in the network from obtaining different information from various sources.
Therefore, after the information amount has accumulated to a certain extent, it explodes rather than remaining broadly restricted by the key agents' information.

The figures above reflect the change of the group's average public opinion pressure over time, where E is the actual public opinion pressure agents feel and F is the objective public opinion pressure agents receive. Obviously, the convergence of opinions makes the external pressure on agents decrease rapidly. The average public opinion pressure suffered by the group is smallest in Case 3 of Figure 7(a), followed by Case 1 and Case 4; it is not difficult to see that the smaller the group's information amount, the greater the public opinion pressure. This means that sufficient information held by key agents can reduce the public opinion pressure on the group to a certain extent, as verified by Cases 3 and 4. Comparing Figures 7(a) and 7(b), the wide spreading of the key agents' information promotes convergence of the group and makes E approach 0 faster.

Figure 7: Comparison of the public opinion pressure. (a, b) The average F of different cases. (c, d) The average E of different cases. (e) The F of each agent; the blue stems show the initial F, and the red stems show the final F after one round of opinion evolution in Case 1. (f) The E of each agent; the blue stems show the initial E, and the red stems show the final E after one round of opinion evolution in Case 1.

Figures 7(e) and 7(f) compare the objective and actual public opinion pressure on each agent at the initial time and after one round of opinion evolution. Agents with higher degree tend to suffer more objective public opinion pressure; this is related to the uniform distribution of the neighbours' opinions, and large weights amplify it.
Even if the difference in opinion is small, the public opinion pressure increases under the influence of the weight. The sign of the objective public opinion pressure reflects the direction of change of the agent's opinion value: if the pressure is positive, the opinion value moves towards 1, and conversely if it is negative. In addition, the pressure felt by an agent is positively correlated with the objective pressure (compare Figures 7(a) and 7(c), and Figures 7(b) and 7(d)). After opinion evolution, the group opinion pressure is about 0, which indicates that the final group opinion value and information amount have reached a steady state and each agent is minimally affected by the opinions of its neighbours.

### 4.3. Guiding Effectiveness of Changing Spreading Scope and Group Maximum Information Amount

As Figure 8 shows for Strategy 1, the guidance becomes more effective as the spreading scope increases from 0 to 1; as the scope increases further, the guiding effect does not keep improving but oscillates. In Strategy 2, the guiding effect does not change significantly as the spreading scope of the key agents' information increases; from limited spreading to a spreading scope of 1, it even becomes slightly worse. This occurs because insufficient information leaves the group easily affected by the opinions of the surrounding nodes. Under wide spreading with insufficient information, the group readily accepts both the opinions of the key agents and those of other agents, so the spreading effect of the key agents is not dramatic. In Strategy 1, where the information amount is relatively sufficient, opinions are not easily changed under limited spreading because each agent already holds a certain amount of information.
If the key agents' information is widely spread, more agents whose information amount has not reached the threshold can be made to change opinions. Given the scale-free character of the BBV network, widely spread information is spread repeatedly: key agents with large degree can affect most agents within a short spreading scope, and at the same time the key agents' opinions are quickly re-affected by those agents because of the short spreading paths. Therefore, continually expanding the wide spreading scope does little to improve the guiding effect of the key agents in the network. Considering the cost for key agents to spread opinion information, there is clearly no need to blindly expand their spreading scope; it suffices to spread the key agents' information over a certain appropriate range. Relevant departments can thereby save the cost of guiding the public to comment on or forward information. (For example, when live-streaming products, having more people watch directly is better than having viewers relay product descriptions to their friends, and is more conducive to increasing sales.) The guiding effects of Strategies 1 and 2 in Figure 5 are apparently similar, but as the spreading scope expands, Strategy 1 gradually becomes generally better than Strategy 2. This again stems from the difference in the agents' information amount: an agent with insufficient information is easily swayed by its neighbour agents and finds it difficult to maintain its original opinion, and vice versa.

Figure 8 The distribution of the resulting average opinion under the guidance of key agents. The spreading scope denotes how far key agents can spread their opinion at one time.
Strategy 1: KAO = 1, AAI = 1; Strategy 2: KAO = 1, AAI = 0.1; Strategy 3: KAO = 1, KAI = 1, NAI = 1; Strategy 4: KAO = 1, KAI = 1, NAI = 0.1.

For Strategy 3 in Figure 8, the effect at a wide spreading scope of 0 (limited spreading) is markedly inferior to the effect once the scope reaches 1 and continues to expand. Strategy 3 lets both the information and the opinions of the key agents spread widely while the initial information amount is relatively insufficient, so the evolution affects the public opinion network effectively from the beginning. The reason this scenario is more effective than Strategies 1 and 2 is that the key agents have enough information to convince their surrounding neighbours. Strategy 4 is the scenario in which the key agents' opinion value is 1, their information amount is 1, and the maximum information amount of the other nodes is 0.1; here, however, the final guiding effect is almost the same whether spreading is limited or wide. Comparing Figures 3(b) and 4(b) reveals a feature: the final opinion values held by the nodes in Figure 3(b) differ little, while in Figure 4(b) the differences are relatively large and a less obvious stratification phenomenon appears. The intention of wide spreading is to affect as many agents as possible in a short time, but after some agents accept the key agents' opinions, the overall information amount in the network increases and agents actually feel less public opinion pressure; that is, they become stubborn, and it is difficult to change their opinion values. Limited spreading is a slow process: key agents affect a small number of agents, and these agents in turn affect others shortly after. Compared with wide spreading, this process is more stable and the opinion values converge more easily.
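The contrast between limited and wide spreading can be sketched as a breadth-first relay from a key agent, where `scope` counts the extra hops beyond direct neighbours and `damping` stands in for the slight change a relayed opinion undergoes at each passing node. All names and the attenuation rule are illustrative assumptions:

```python
from collections import deque

def wide_spread(adj, key_agent, opinion, scope, damping=0.9):
    """Breadth-first relay of a key agent's opinion up to `scope` extra hops.

    scope = 0 reproduces limited spreading (direct neighbours only);
    each relay multiplies the passed value by `damping`, so the opinion
    that arrives farther away is no longer the original one."""
    received = {}
    queue = deque([(key_agent, opinion, 0)])
    seen = {key_agent}
    while queue:
        node, value, depth = queue.popleft()
        if depth > scope:          # one BFS level == one extra hop of spreading
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                received[nb] = value
                queue.append((nb, value * damping, depth + 1))
    return received

# Chain 0-1-2-3 with key agent 0: scope 0 reaches only node 1,
# scope 1 also delivers a slightly attenuated value to node 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
limited = wide_spread(adj, 0, 1.0, scope=0)
wider = wide_spread(adj, 0, 1.0, scope=1)
```

Because hubs in a BBV-style network reach most nodes within very few hops, increasing `scope` beyond a small value adds few new recipients, which is consistent with the saturation of the guiding effect discussed above.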
Additionally, compared with key agents holding insufficient information, key agents with sufficient information have a more stable guiding effect on the public opinion network. An increase in information amount means that the external public opinion pressure an agent feels is relatively reduced. Figure 9 compares the effectiveness of the different strategies under different group information amounts. Overall, the differences between the strategies are fairly clear. In Strategy A, as the initial information amount of the group keeps increasing, the guiding effect of the key agents gradually deteriorates, with slight oscillation and instability. In Strategy B, the guiding effect of the key agents declines steadily as the initial group information amount increases. In Strategy C, as the group's initial information amount increases, the public opinion pressure felt by the agents is reduced, and the guiding effect does not change significantly but oscillates at the same level. In Strategy D, the influence of the key agents decreases slowly as the group information amount increases.

Figure 9 The distribution of the resulting average opinion under the guidance of key agents. Strategy A: KAO = 1, A0 ∼ U(0, Info), limited spreading. Strategy B: KAO = 1, KAI = 1, N0 ∼ U(0, Info), limited spreading. Strategy C: KAO = 1, A0 ∼ U(0, Info), wide spreading. Strategy D: KAO = 1, KAI = 1, N0 ∼ U(0, Info), wide spreading.

Strategies C and D give the guiding effectiveness of the key agents as the information amount varies under wide spreading; under limited spreading, the corresponding Strategies A and B are relatively inferior. Comparing Strategies A and C, only when the information amount is extremely small is the guiding effect under wide spreading inferior to that under limited spreading.
When the group information amount is quite small and the key agents' information is also very small, that is, the maximum information amount is 0.1, Strategy A performs better than Strategy C. When the maximum information amount reaches 1, the key agents of Strategies B and C have similar guiding effects: at that point, Strategy B is the result for key agents with opinion value 1 and information amount 1 and a maximum information amount of 1 for the other nodes under limited spreading (Case C), while the final value of Strategy C is the result for key agents with opinion value 1 and a maximum information amount of 1 for all nodes under wide spreading (Case A). Comparing Figures 3(a) and 3(c), the final average opinion values of the two are indeed similar; the value in Figure 3(c) is slightly higher than that in Figure 3(a), which is also in line with Figure 6(a), where Strategy C ends slightly above Strategy B. Strategy C remains stable because, when the key agents have the same information amount as the other nodes, the guiding effect changes little regardless of the group information level, even when wide spreading is introduced. Strategy A declines because, under limited spreading, the key agents' opinion value becomes less and less convincing and harder for the group to accept as the group's information amount grows; Strategies B and D follow the same rule. Based on the above analysis of the strategies under changing information amounts, different strategies can be chosen to guide opinions according to one's understanding of the group and the cost involved.

## 4.1. Simulation of the Guidance Effect of Key Agents

To explore the influence of key agents on public opinion, different situations are set up according to whether the information can be widely spread and whether the information amount is sufficient, and the influence of these situations on the evolution of network opinions is discussed. To keep the simulation results comparable, we set an appropriate number of iterations and designate 16.7% of the nodes (50 nodes) as key agents. The set of simulated cases is shown in Table 3.

Table 3 The set of cases to simulate.

| Scenario | Limited spreading | Wide spreading |
|---|---|---|
| KAO = 1, AAI = 1, A0 ∼ U(0, 1) | Case 1 | Case A |
| KAO = 1, AAI = 0.1, A0 ∼ U(0, 0.1) | Case 2 | Case B |
| KAO = 1, KAI = 1, NAI = 1, N0 ∼ U(0, 1) | Case 3 | Case C |
| KAO = 1, KAI = 1, NAI = 0.1, N0 ∼ U(0, 0.1) | Case 4 | Case D |

Note: limited spreading indicates that key agents cannot spread their opinion beyond direct neighbours at one time; wide spreading indicates that key agents can propagate their opinion to the public at once, and the wide spreading level here is 1.

### 4.1.1. Simulation Results of Different Cases under Limited Spreading

First, we restrict the external spread of the key agents' opinions: key agents exchange opinions and information only with the neighbour agents directly connected to them. In Figure 3(a), the final average opinion value is 0.56711 in the given period. The public opinion of the entire network is clearly affected by the opinions of these 50 key agents, with the group opinion trending towards 1, though not markedly. Figure 3(b) reflects the influence of the key agents on the group opinion when information is insufficient across the entire network. The final average opinion value of the group in the same period is 0.61211; compared with Figure 3(a), the impact of the key agents is relatively more obvious.
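The eight scenarios of Table 3 can be encoded compactly. The dictionary layout and helper below are an illustrative sketch; the abbreviations follow the table (KAO/KAI are the key agents' opinion and information amount, AAI/NAI the maximum information amount of all/normal agents):

```python
import random

# One entry per Table 3 scenario; `wide` selects wide vs limited spreading.
SCENARIOS = {
    "Case 1": dict(KAO=1.0, AAI=1.0, wide=False), "Case A": dict(KAO=1.0, AAI=1.0, wide=True),
    "Case 2": dict(KAO=1.0, AAI=0.1, wide=False), "Case B": dict(KAO=1.0, AAI=0.1, wide=True),
    "Case 3": dict(KAO=1.0, KAI=1.0, NAI=1.0, wide=False), "Case C": dict(KAO=1.0, KAI=1.0, NAI=1.0, wide=True),
    "Case 4": dict(KAO=1.0, KAI=1.0, NAI=0.1, wide=False), "Case D": dict(KAO=1.0, KAI=1.0, NAI=0.1, wide=True),
}

def init_agents(n_agents, n_key, cfg, seed=0):
    """Draw initial opinions and information amounts for one scenario.

    The first n_key agents are the (high-degree) key agents, holding
    opinion KAO and, when given, information amount KAI; everyone else
    draws an opinion from U(0, 1) and information from U(0, cap)."""
    rng = random.Random(seed)
    cap = cfg.get("AAI", cfg.get("NAI"))  # maximum information amount
    agents = []
    for i in range(n_agents):
        if i < n_key:
            agents.append({"opinion": cfg["KAO"],
                           "info": cfg.get("KAI", rng.uniform(0, cap))})
        else:
            agents.append({"opinion": rng.uniform(0, 1),
                           "info": rng.uniform(0, cap)})
    return agents

agents = init_agents(300, 50, SCENARIOS["Case 4"])  # 50 of 300 nodes = 16.7% key agents
```

In Case 4 this yields 50 fully informed key agents with opinion 1 and 250 normal agents capped at an information amount of 0.1, matching the scenario description.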
Notably, the opinions of almost all agents end close to the final average group opinion in this situation, which is attributable to the group's insufficient information amount. A slight stratification phenomenon appears in Figure 3(c). The pressure on the group is relatively smaller because the agents have already grasped a certain amount of information, so it is difficult to reach consensus; in this case, the final average group opinion is greater than in Figure 3(a) but smaller than in Figure 3(b). This case describes whether normal agents absorb the key agents' opinion when the group has some knowledge of the event and the key agents have sufficient information. Compared with the situation where no agent knows the event well, the impact of the key agents is clearly weakened. Under limited spreading of the key agents' information, the average final opinion in Figure 3(d) is 0.86014, larger than in any previous situation. When key agents have sufficient information, agents who know little about the event are guided more effectively: these normal agents are under greater public opinion pressure, while the key agents are under relatively less. Thus, when key agents have sufficient information and the others know little about an event, the group opinion can be guided easily. It is worth noting that information can be objective or subjective; we do not rule out the case in which key agents fabricate a logically coherent "complete story," which is extremely dangerous for a sensitive public. When the group is unable to fully understand the event, a seemingly perfect and flawless "story" is easily accepted by a group that knows almost nothing, and the standpoints conveyed by the key agents are then more likely to be recognized by the group, which obviously has a negative impact on social stability.

### 4.1.2. Simulation Results of Different Cases under Wide Spreading

In this simulation, we allow the key agents' information to spread widely; that is, the neighbours of a key agent spread its opinion value onward to the agents connected to them. Of course, the spread value is not the key agent's original opinion but is slightly changed by each passing node as it is relayed. The degree of external spreading for all key agents is 1; that is, a single spreading link passes through at most 1 node.

In Figure 4(a), a distinct opinion stratification phenomenon finally appears. The wide spreading of the key agents' information clearly has a guiding effect to some extent, but some agents still do not accept the mainstream opinions. In addition, wide spreading quickly gives some agents a certain understanding of the event information, which reduces the actual public opinion pressure; as a result, the group opinion becomes difficult to unify.

Figure 4 Comparison of the guiding effect of key agents in different situations under wide spreading; the target opinion is 1. (a) Case A. (b) Case B. (c) Case C. (d) Case D.

Figure 4(b) shows the results of setting the maximum information amount of all agents in the network to 0.1, the only change from the parameter conditions of Figure 4(a). Unexpectedly, there are almost no agents with large differences of opinion, and almost all nodes reach consensus. However, the average opinion value of the group in this case is 0.64849, slightly lower than 0.65708. The explanation is that, under insufficient information, the actual pressure felt by agents is relatively larger, which makes them likely to change their opinions, while normal agents hardly trust key agents without sufficient evidence.
Therefore, although all agents are affected by the key agents, the effect is slightly weaker than in Figure 3(b); here, wide spreading of information is actually less effective than limited spreading.

In Figure 4(c), the final average opinion value of the group is higher, but the stratification phenomenon persists because some agents disagree with the mainstream opinions. When the group already has some understanding of the information, key agents expressing opinions backed by sufficient information enable agents who know little about the event to obtain more information, and the wide spreading of opinions accelerates this process. As above, the information amount affects the actual public opinion pressure and determines how easily opinions are changed.

In Figure 4(d), the group shows higher acceptance of the key agents' opinion. Although a small number of agents hold opinions different from the mainstream in the early stage, they disappear quickly. With sufficient information and wide spreading, normal agents with a maximum information amount of 0.1 and randomly distributed opinion values accept the key agents' opinions extremely easily. On careful observation, an unstable, chaotic phase appears at the beginning, showing that agents hesitate when facing different opinions from each side early on; this is related to the limited information the agents hold in the early stage and the actual public opinion pressure.

### 4.1.3. Effectiveness Comparison

To reduce the influence of randomness on the conclusions, we simulate each case 20 times and calculate the final average opinion value of the group, as shown in Table 4. A radar chart is also drawn for easy comparison.

Table 4 The results of the cases in which key agents guide public opinion under different scenes.

| Scenario | Limited spreading | Wide spreading |
|---|---|---|
| KAO = 1, AAI = 1, A0 ∼ U(0, 1) | 0.5740 (Case 1) | 0.6433 (Case A) |
| KAO = 1, AAI = 0.1, A0 ∼ U(0, 0.1) | 0.6175 (Case 2) | 0.6522 (Case B) |
| KAO = 1, KAI = 1, NAI = 1, N0 ∼ U(0, 1) | 0.6374 (Case 3) | 0.7692 (Case C) |
| KAO = 1, KAI = 1, NAI = 0.1, N0 ∼ U(0, 0.1) | 0.8687 (Case 4) | 0.8996 (Case D) |

Note: all values correspond to the cases in Table 3.

In general, wide spreading of the key agents' opinion value is more effective than limited spreading. In particular, when the key agents' opinion value is 1 and their information amount is 1 under wide spreading, with a maximum information amount of 0.1 for the other agents, the group opinion ends closest to the key agents' original opinion. The difference between the effects of wide and limited spreading of the key agents' opinions is greater when the group's information amount is more adequate (Case 1(A) and Case 3(C)). To analyze the results in Figure 5, we study the influence of the group's information amount, the information spreading scope, and the initial information amount under the different cases.

Figure 5 Comparison of the guidance effects of key agents in the different cases; the radar chart shows the difference between the results of limited and wide spreading.
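The Table 4 protocol, averaging each case over 20 independent runs to damp randomness, reduces to a one-line helper. Here `run_case` stands for a full simulation run returning the final mean group opinion; the simulator itself is outside this sketch:

```python
import statistics

def average_final_opinion(run_case, n_runs=20):
    """Run one case n_runs times with different seeds and average the
    final group opinion, as done for the values reported in Table 4."""
    return statistics.mean(run_case(seed) for seed in range(n_runs))

# With a stand-in "simulator" the helper simply averages the outcomes:
avg = average_final_opinion(lambda seed: 0.85 + 0.01 * (seed % 2))
```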
## 4.2. The Average Information Amount on the Network and Public Opinion Pressure under Different Circumstances

Opinion evolution is a process accompanied by change in information amount, so here the average information amount of the group in each case is compared. Notably, the group information amount and the opinion value are not directly related, and after the group opinion stabilizes, the information amount continues to grow; therefore, the first 20 time steps in the figure are equivalent to 100 time steps in the previous cases.

Figure 6(a) clearly shows how the information amount changes over time in the different situations. The network average information amount in Cases 1, 3, and 4 almost reaches 1 during the same period, while in Case 2 it does not. In Cases 1 and 3, the information amount of the network group is relatively sufficient, so the group information amount quickly approaches the maximum. The initial growth of the information amount in Case 2 is slow, and a rapid increase is achieved only after a certain level is reached. In Case 4, the information amount of the key agents plays an important role: although the information amount of most agents in the network is insufficient and their maximum information amount is limited to 0.1, the key agents have a sufficient information amount.
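The Case 2 pattern, slow initial growth followed by a rapid rise once a certain level is reached, is characteristic of logistic-style accumulation, sketched below. The update rule and rate are illustrative assumptions, not the paper's exact formula:

```python
def info_step(info, rate=0.5, cap=1.0):
    """One accumulation step: growth is proportional both to what the agent
    already knows and to the room left below the scenario cap, so it is slow
    at first, accelerates in the middle, and saturates near the cap."""
    return min(cap, info + rate * info * (cap - info))

trace = [0.01]                 # a poorly informed agent, as in Case 2
for _ in range(30):
    trace.append(info_step(trace[-1]))
```

The first steps barely move while later steps approach the cap, matching the slow-then-explosive growth described for Case 2.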
In this case, it can be said that key agents hinder the information acquisition for other agents to a certain extent.Case 2 in Figure6(a) restricts the information spreading, but it does not prevent each agent in the network from obtaining different information in various aspects. Therefore, after the information amount has accumulated to a certain extent, the information amount explodes, instead of being widely restricted by the information of key agents.The above figures reflect the change on the average public opinion pressure from the group with time, among whereE is the actual public opinion pressure that agents felt, and F is the objective public opinion pressure agents receive. Obviously, the effect on opinions convergence makes the external pressure on agents rapidly decrease. It can be found that the average level of public opinion pressure suffered by the group is the smallest in Figure 7(a) case 3 than other cases. Followed by Case 1 and Case 4, it is not difficult to find that the smaller the amount information of group is, the greater the pressure on public opinion is. It means that sufficient information on key agents can reduce the pressure of public opinion on the group to a certain extent, which is verified by Case 3 and Case 4. Comparing Figures 7(a) and 7(b), the wide spreading on information of key agents promotes the convergence on the group and makes the E approach to 0 faster.Figure 7 Comparison of the public opinion pressure (a, b). The averageF of different cases (c, d). The average E of different cases (e). The F of each agent, the blue stems show the initial F, and the red stem shows the final F after once opinion evolution in case 1 (f). The E of each agent, the blue stems show the initial E, and the red stem shows the final E after once opinion evolution in case 1. 
(a)(b)(c)(d)(e)(f)Figures7(e) and 7(f) show the comparison of the objective public opinion pressure and the actual public opinion pressure on each agent at the initial time and a round of opinion evolution. It can be found that agents with higher degrees may be more likely to suffer from objective public opinion pressure. It is related to the uniform distribution for opinion on neighbours, and the large weight also enlarges this. Even if the difference in opinion is small, the pressure on public opinion will increase under the influence on weight. The positive or negative of the objective public opinion pressure reflects the direction of change on the agent’s opinion value. If the public opinion pressure is positive, the agent’s opinion value will change towards 1, or conversely. In addition, the pressure felt by the agent is positively correlated with the objective pressure (compare Figures 7(a) and 7(c), Figures 7(b) and 7(d)). After opinion evolution, the group opinion pressure is about 0, which indicates that the final group opinion value and information amount reach a steady state and the agent is minimally affected by the opinion on neighbour. ## 4.3. Guiding Effectiveness of Changing Spreading Scope and Group Maximum Information Amount As shown in Figure8, Strategy 1 can be observed that the spreading scope changes from 0 to 1, and the guidance becomes more effective. As the spreading scope continues to increase, the guiding effect does not continue to increase but oscillates. In Strategy 2, with the increase of information spreading scope from key agents, the guiding effect does not change significantly. From limited spreading to spreading scope of 1, the guiding effect of this process becomes slightly worse. This situation occurs because insufficient information makes the group in the network easily affected by the opinions on other nodes around. 
Under wide spreading, because the information amount is insufficient, the group readily accepts both the opinions of key agents and those of other agents, so the spreading effect of key agents is not dramatic. In Strategy 1, where the information amount is relatively sufficient, opinions are not easy to change under limited spreading because each agent already holds a certain amount of information. If information from the key agents is widely spread, more agents whose information amount has not reached the threshold can be made to change opinions. Taking into account the scale-free behavior of the BBV network, widely spread information will be spread repeatedly. In other words, key agents with large degree can affect most agents within a shorter spreading scope; at the same time, the opinions of key agents are quickly re-affected by those agents because of the short spreading paths. Therefore, continuously expanding wide spreading does little to improve the guiding effect of key agents in the network. Here, we can consider the cost for key agents to spread opinion information. Obviously, there is no need to blindly expand the spreading scope of key agents; it is only necessary to spread their information up to an appropriate range. For relevant departments, this can save the cost of guiding the public to comment on or forward information. (For example, in live-streamed product sales, having more people watch the stream directly is better than having viewers relay product descriptions to their friends, and is more conducive to increasing sales.) Apparently, the guiding effects of key agents in Strategy 1 and Strategy 2 in Figure 5 are similar at first, but as the spreading scope expands, the guiding effect of Strategy 1 gradually becomes better than that of Strategy 2. This is still caused by the difference in the information amount held by the agents.
An agent with insufficient information is impacted by its neighbour agents and finds it difficult to maintain its original opinion, and vice versa.

Figure 8: The distribution of the average opinion under guidance from key agents. The spreading scope represents the range over which key agents can spread their opinion at one time. Strategy 1: KAO = 1, AAI = 1; Strategy 2: KAO = 1, AAI = 0.1; Strategy 3: KAO = 1, KAI = 1, NAI = 1; Strategy 4: KAO = 1, KAI = 1, NAI = 0.1.

For Strategy 3 in Figure 8, a wide spreading scope of 0 means limited spreading, whose effect is rather inferior to the situations where the wide spreading scope is 1 or continues to expand. In Strategy 3, the information and opinions of key agents are allowed to spread widely when the initial information amount is relatively insufficient, so the evolution has an effective impact on the public opinion network from the beginning. This scenario is more effective than Strategies 1 and 2 because key agents have enough information to convince their surrounding neighbours. Strategy 4 is a scenario where the opinion value of key agents is 1, their information amount is 1, and the maximum information amount of other nodes is 0.1. However, whether spreading is limited or wide, the final guiding effect is almost the same. Comparing Figures 3(b) and 4(b) reveals a notable feature: the final opinion values held by the nodes in Figure 3(b) differ little, while those in Figure 4(b) differ considerably and show a less obvious stratification phenomenon.
It can be considered that the intention of wide spreading is to affect as many agents as possible in a short period, but after some agents accept the opinions of key agents, the overall information amount in the network increases and agents actually feel less public opinion pressure; that is, they become stubborn and their opinion values are difficult to change. Limited spreading is a slow process: key agents affect a small number of agents, and then these agents affect other agents in a short time. Compared with wide spreading, this process is more stable and the opinion values converge more easily. Additionally, compared with key agents holding insufficient information, key agents with sufficient information have a more stable guiding effect on the public opinion network.

An increase in the information amount means that the external public opinion pressure an agent feels is relatively reduced. Figure 9 compares the effectiveness of different strategies under different group information amounts. Overall, the differences between strategies are relatively obvious. In Strategy A, as the initial information amount of the group increases, the guiding effect of key agents gradually deteriorates with light oscillation and instability. In Strategy B, as the initial group information amount increases, the guiding effect of key agents declines steadily. In Strategy C, as the initial information amount of the group increases, the public opinion pressure felt by agents is reduced, and the guiding effect does not change significantly but oscillates at the same level. In Strategy D, the influence of key agents slowly decreases as the group information amount increases.

Figure 9: The distribution of the average opinion under guidance from key agents. Strategy A: KAO = 1, A0 ~ U(0, Info), limited spreading.
Strategy B: KAO = 1, KAI = 1, N0 ~ U(0, Info), limited spreading. Strategy C: KAO = 1, A0 ~ U(0, Info), wide spreading. Strategy D: KAO = 1, KAI = 1, N0 ~ U(0, Info), wide spreading.

Strategies C and D show the guiding effectiveness of key agents as the information amount varies under wide spreading; the corresponding results for Strategies A and B under limited spreading are relatively inferior. Comparing Strategies A and C, only when the information amount is extremely small is the guiding effect under wide spreading inferior to that under limited spreading. When the group information amount is quite small and the information of key agents is also very small, that is, the maximum information amount is 0.1, Strategy A performs better than Strategy C. When the maximum information amount reaches 1, the key agents of Strategy B and Strategy C have similar guiding effects. At this point, Strategy B corresponds to the case where the opinion value of key agents is 1, their information amount is 1, and the maximum information amount of other nodes is 1 with limited spreading (Case C), and the final value of Strategy C corresponds to the case where the opinion value of key agents is 1 and the maximum information amount of all nodes is 1 with wide spreading (Case A). Comparing Figures 3(a) and 3(c), the final average opinion values of the two are indeed similar. The value in Figure 3(c) is slightly higher than that in Figure 3(a), which is also in line with the situation in Figure 6(a), where Strategy C finally ends slightly higher than Strategy B. Strategy C is stable because, when key agents hold the same information amount as other nodes, the guiding effect of key agents changes little regardless of the group information level, even when wide spreading is introduced.
The decline in Strategy A occurs because, under limited spreading, as the information amount of the group grows, the opinion value of key agents becomes less and less convincing and harder for the group to accept. Strategies B and D follow the same rule. Based on the above analysis of the different strategies under varying information amounts, different strategies can be adopted to achieve the goal of guiding opinions, depending on the understanding of the group and the cost.

## 5. Conclusion

In this paper, based on the cost function, we construct different opinion evolution laws for different agents and then study the influence of information amount and information dissemination mode on group opinion by setting different information characteristics. The main contributions of the paper are as follows: (1) How much individuals know about the event information affects the trend of the final public opinion. When an agent knows less about the event, he or she is more likely to be influenced by others around, and the opinion changes more easily. This suggests that relevant departments should publish a comprehensive account as soon as possible, guiding the development of public opinion by increasing individual cognition. (2) In general, public opinions move towards the key agents' opinions more dramatically under wide spreading than under limited spreading. Notably, expanding information diffusion does not necessarily improve the aggregation of public opinion; public opinion tends to become relatively stable once the range of information diffusion has expanded to some extent. This suggests that relevant departments could control the scope of information dissemination to achieve public opinion control while avoiding a waste of public resources.
For example, when COVID-19 cases occur, the government only needs to supervise close or secondary contacts to control the epidemic effectively. Although this work considers only the least-cost decision-making of agents regarding information amount and opinion changes, models the public opinion pressure imposed by neighbour agents in a simple manner, and draws all conclusions from simulation results, it still reveals some special mechanisms of opinion evolution. As a result, our research enriches the study of predicting the development of public opinion and, at the same time, can provide a theoretical basis for government public opinion monitoring.

---

*Source: 1016692-2022-05-27.xml*
2022
# Multiphase, Multicomponent Simulation for Flow and Transport during Polymer Flood under Various Wettability Conditions

**Authors:** Ji Ho Lee; Kun Sang Lee

**Journal:** Journal of Applied Mathematics (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101670

---

## Abstract

Accurate assessment of polymer flood requires an understanding of the flow and transport of the fluids involved in the process under different reservoir wettabilities. Because variations in relative permeability and capillary pressure induced by different wettabilities control the distribution and flow of fluids in the reservoir, the performance of polymer flood depends on reservoir wettability. A multiphase, multicomponent reservoir simulator, which covers three-dimensional fluid flow and mass transport, is used to investigate the effects of wettability on the flow process during polymer flood. Results of polymer flood are compared with those of waterflood to evaluate how much polymer flood improves the oil recovery and water-oil ratio. When polymer flood is applied to water-wet and oil-wet reservoirs, its influence appears later in oil-wet reservoirs than in water-wet reservoirs due to the unfavorable mobility ratio. In spite of the delay, significant improvement in oil recovery is obtained for oil-wet reservoirs. With respect to water production, polymer flood leads to a substantially larger reduction for oil-wet reservoirs than for water-wet reservoirs. Moreover, applying polymer flood to oil-wet reservoirs extends the productive period beyond that of the water-wet reservoir case.

---

## Body

## 1. Introduction

After primary and secondary oil recovery, a great deal of oil remains in place in the reservoir. To recover this oil, an additional flood, an enhanced oil recovery (EOR) method, must be applied.
Polymer flood is known as one of the most widely used chemical EOR methods. Adding a low concentration of water-soluble, high-molecular-weight polymer to the injected water increases the viscosity of the displacing fluid and thus lowers the mobility ratio. A favorable mobility ratio improves the sweep efficiency, so that oil recovery is increased.

Craig defined wettability as the tendency of one fluid to spread on or adhere to a solid surface in the presence of other immiscible fluids [1]. The influence of reservoir wettability on multiphase flow in porous media, and hence on oil recovery, is well known. The wettability of a rock controls the location, flow, and distribution of fluids within reservoir rocks [2], which affects the relative permeability and capillary pressure [3, 4]. Therefore, the recovery of polymer flood in oil-wet reservoirs is quite different from that in water-wet reservoirs. Moreover, considerable hydrocarbon reserves remain in mixed-wet or oil-wet reservoirs. Consequently, the effects of wettability on oil recovery during polymer flood need to be better understood. Most polymer flood studies have been carried out without considering the wettability effect or have focused on water-wet reservoirs at field scale [5–7]. Investigations of the role of wettability in the various aspects of oil recovery during polymer flooding have been based mainly on experimental work [8–10]. Therefore, a comprehensive numerical study of the flow and transport of the aqueous and oleic phases during polymer flood under various wetting conditions at field scale is needed.

## 2. Mathematical Formulation
Simulations of polymer flood were conducted with UTCHEM, a 3D, multicomponent, multiphase, compositional model of chemical flooding processes that accounts for complex phase behavior, chemical and physical transformations, and heterogeneous porous media properties [11].

The basic mass conservation equation for components can be written as follows:

$$\frac{\partial}{\partial t}\left(\phi \tilde{C}_{\kappa}\rho_{\kappa}\right)+\nabla\cdot\left[\sum_{j=1}^{n_{p}}\rho_{\kappa}\left(C_{\kappa j}\mathbf{u}_{j}-\mathbf{D}_{\kappa j}\right)\right]=R_{\kappa},\tag{1}$$

where $\kappa$ is the component index; $j$ is the phase index over the aqueous ($w$) and oleic ($o$) phases; $\phi$ is the porosity; $\tilde{C}_{\kappa}$ is the overall concentration of component $\kappa$ (volume fraction); $\rho_{\kappa}$ is the density of component $\kappa$ [ML⁻³]; $n_p$ is the number of phases; $C_{\kappa j}$ is the concentration of component $\kappa$ in phase $j$ (volume fraction); $\mathbf{u}_j$ is the Darcy velocity of phase $j$ [LT⁻¹]; $R_{\kappa}$ is the total source/sink term for component $\kappa$ (volume of component $\kappa$ per unit volume of porous media per unit time); and $\mathbf{D}_{\kappa j}$ is the dispersion tensor.

The phase flux from Darcy's law is

$$\mathbf{u}_{j}=-\frac{\mathbf{k}\,k_{rj}}{\mu_{j}}\nabla\left(p_{j}-\gamma_{j}h\right),\tag{2}$$

where $\mathbf{k}$ is the intrinsic permeability tensor, $h$ is the vertical depth, $k_{rj}$ is the relative permeability, $\mu_j$ is the viscosity, and $\gamma_j$ is the specific weight of phase $j$.

To predict reservoir behavior under multiphase conditions, it is important to understand the relative permeability of the reservoir rock to each of the fluids flowing through it. Relative permeability is assumed to be determined solely by the phase's own saturation and the residual saturations, and it has been investigated by experimental and analytical methods [12]. Extensive studies have produced representative correlations between relative permeability, saturation, and other factors [12]. In this study, multiphase relative permeabilities are modeled with Corey-type functions [11]. A Corey-type relative permeability is expressed in terms of the end-point relative permeability at residual saturation, an exponent defining the curvature of the relative permeability curve, and the residual saturations that determine the normalized saturation.
The Corey-type relative permeability equation is given as follows:

$$k_{rj}=k_{rj}^{o}\,S_{nj}^{\,n_{j}}\quad\text{for } j=w,o,\tag{3}$$

where $k_{rj}^{o}$ is the end-point relative permeability (the relative permeability at residual saturation), $n_j$ is the relative permeability exponent of phase $j$ determining the curvature of the curve, and $S_{nj}$ is the normalized saturation of phase $j$, calculated as

$$S_{nj}=\frac{S_{j}-S_{jr}}{1-\sum_{j=1}^{n_{p}}S_{jr}},\tag{4}$$

where $S_j$ is the saturation of phase $j$ and $S_{jr}$ is the residual saturation of phase $j$.

Whether oil displaces water or water displaces oil through the flow channels of a reservoir, the flow of the immiscible phases, oil and water, involves the pressure difference between them. Brooks and Corey examined a large number of data sets on consolidated rock cores and analyzed them by plotting the logarithm of effective saturation against the logarithm of capillary pressure [13]. This study revealed a linear relationship between the two logarithms:

$$\ln S_{e}=-\lambda \ln p_{c}+\lambda \ln p_{b}\quad\text{for } p_{c}\geq p_{b},\tag{5}$$

where $S_e$ is the effective saturation calculated with the residual saturations, and $\lambda$ and $p_b$ are constants obtained from the slope and intercept: $\lambda$ characterizes the pore size distribution and $p_b$ is interpreted as the entry capillary pressure. The capillary pressure $p_c$ is the pressure difference between the nonwetting-phase and wetting-phase pressures.

Capillary pressure is a strong function of saturation, as shown by (5). Leverett derived a capillary pressure scaled by permeability and porosity for homogeneous reservoirs [11, 14]. Combining these relations, the Brooks and Corey capillary pressure-saturation function is calculated as

$$p_{c}=p_{b}\sqrt{\frac{\phi}{k}},\qquad p_{b}=C_{pc}\left(1-S_{nj}\right)^{E_{pc}}\quad\text{for } j=w,o,\tag{6}$$

where $E_{pc}$ is equivalent to $-(1/\lambda)$, $C_{pc}$ is a constant, and $k$ and $\phi$ are the permeability and the porosity of the reservoir.

## 3. Numerical Modeling
This study analyzes the effect of wettability on the flow and transport of fluids during polymer flood. Reservoir wettability is implemented in the numerical model by changing the relative permeability and capillary pressure curves simultaneously. The reservoir depth is 2,000 ft, and the initial reservoir pressure is maintained at 400 psi. The horizontal area is 360 × 360 ft² and the vertical thickness of the reservoir is 25 ft. The simulation domain consists of 10 layers, and each layer is discretized into 15 × 15 grid blocks in the horizontal direction. Grid blocks close to the production and injection wells are smaller than blocks away from the wells, so that pressures and saturations are resolved more accurately near the wells. The model assumes that the reservoir is homogeneous, with a constant porosity of 0.2 and permeabilities of 300 md in the horizontal direction and 30 md in the vertical direction. The initial oil and water saturations are assumed constant at 0.62 and 0.38, respectively. Properties of the reservoir water and oil are listed in Table 1; the injected water is assumed identical to the reservoir water. The viscosity of the polymeric solution, including salinity and mechanical effects, is modeled with the parameters listed in Table 1.

Table 1: Properties of water and oil and viscosity parameters of the polymeric solution.

| Fluid property | Water | Oil |
| --- | --- | --- |
| Viscosity (μ) | 0.73 cp | 40 cp |
| Density (ρ) | 0.43353 psi/ft | 0.385839 psi/ft |
| Compressibility (Cf) | 0 psi⁻¹ | 0 psi⁻¹ |

| Polymer viscosity parameter | Value |
| --- | --- |
| Ap1 (zero-shear viscosity) | 38.47 wt%⁻¹ |
| Ap2 (zero-shear viscosity) | 1,600 wt%⁻² |
| Ap3 (zero-shear viscosity) | 0 wt%⁻³ |
| βp for CSEP (effective salinity) | 20 |
| CSEP,min (effective salinity) | 0.01 meq/mL |
| Slope of μp⁰ versus CSEP | −0.3 |
| γ̇c (shear rate dependence) | 130 day (darcy)⁰·⁵/ft·s |
| γ̇1/2 (shear rate dependence) | 280 |
| Pα (shear rate dependence) | 2.2 |

The simulation was continued over 1,000 days.
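The fluid properties in Table 1 imply a strongly unfavorable water-oil mobility ratio at the end points, which is exactly what the polymer is meant to correct. A minimal sketch of that estimate, using the Table 1 viscosities and the water-wet end-point relative permeabilities from Table 2; the 20x polymer viscosity multiplier is purely illustrative, not a value from the paper:

```python
def mobility_ratio(krw_end, mu_w, kro_end, mu_o):
    """End-point mobility ratio M = (krw_end / mu_w) / (kro_end / mu_o).
    M > 1 is unfavorable (water outruns oil); M < 1 is favorable."""
    return (krw_end / mu_w) / (kro_end / mu_o)

# Waterflood: krw_end = 0.26, kro_end = 1 (Table 2, water-wet);
# mu_w = 0.73 cp, mu_o = 40 cp (Table 1).  M is roughly 14, unfavorable.
M_water = mobility_ratio(0.26, 0.73, 1.0, 40.0)

# Hypothetical polymer slug that thickens the water phase ~20x:
# the same end points now give M < 1, i.e. a favorable displacement.
M_polymer = mobility_ratio(0.26, 0.73 * 20, 1.0, 40.0)
```

This back-of-the-envelope number is why the paper expects delayed but ultimately large gains in the oil-wet case: the unamended waterflood starts from a very poor mobility contrast.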
Injection designs for water and polymeric solution under constant-rate conditions are assumed identical for the water-wet and oil-wet reservoir cases, so that the performance of polymer flood can be assessed quantitatively. To prevent high injection pressure due to the increased viscosity of the polymeric solution, injection of water and polymeric solution was operated in three steps, as given in Figure 1; the flow rate was constant within each step. In the first step, solution with 0.1% polymer was injected at a low rate of 800 ft³/day until 180 days, to prevent fracturing at the injection well. In the next step, the polymer concentration was reduced to 0.05% while the injection rate was increased to 1,000 ft³/day until 360 days. In the last step, from 360 to 1,000 days, only water was injected through the well until the end of the operation. Reservoir fluids were recovered from the production well, constrained at 200 psi.

Figure 1: History of injection rate and polymer concentration.

In order to analyze the effect of wettability on the flow and transport of fluids during polymer flood, comparisons were made between simulations implementing different relative permeability and capillary pressure curves. Relative permeability and capillary pressure are the most important factors distinguishing the performance of polymer flood applied to water-wet and oil-wet reservoirs. To model the wettability effect, data for the different relative permeability and capillary pressure curves were taken from several studies. Anderson derived Corey-type functions by curve-fitting the data measured by Morrow et al. [15–17]. Table 2 lists the fitted relative permeability parameters, and Figure 2(a) shows the relative permeability curves generated with them.

Figure 2: Properties of water-wet and oil-wet reservoirs: (a) relative permeability curves and (b) capillary pressure curves.
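The curves in Figure 2 can be reproduced from (3)–(6) with the Table 2 parameters. A minimal sketch, assuming the two-phase form of (4) and the scaled capillary pressure of (6) with k = 300 md and ϕ = 0.2 from Section 3; this is an illustration of the correlations, not the UTCHEM implementation:

```python
def normalized_sat(Sw, Swr, Sor):
    # Eq. (4), two-phase form: S_nw = (Sw - Swr) / (1 - Swr - Sor)
    return (Sw - Swr) / (1.0 - Swr - Sor)

def corey_kr(Sw, Swr, Sor, krw_end, kro_end, nw, no):
    # Eq. (3): k_rj = k_rj^o * S_nj^n_j; note 1 - S_nw equals the
    # normalized oil saturation, so it drives the oil curve.
    Snw = normalized_sat(Sw, Swr, Sor)
    return krw_end * Snw ** nw, kro_end * (1.0 - Snw) ** no

def brooks_corey_pc(Sw, Swr, Sor, Cpc, Epc, k=300.0, phi=0.2):
    # Eq. (6): p_c = C_pc * sqrt(phi/k) * (1 - S_nw)^E_pc
    Snw = normalized_sat(Sw, Swr, Sor)
    return Cpc * (phi / k) ** 0.5 * (1.0 - Snw) ** Epc

# Water-wet row of Table 2: Swr=0.12, Sor=0.25, krw^o=0.26, kro^o=1, nw=3, no=1.3
krw, kro = corey_kr(0.50, 0.12, 0.25, krw_end=0.26, kro_end=1.0, nw=3.0, no=1.3)
pc = brooks_corey_pc(0.50, 0.12, 0.25, Cpc=7.0, Epc=2.0)
```

Swapping in the oil-wet row (Sor = 0.28, krw^o = 0.56, kro^o = 0.8, nw = 1.4, no = 3.3, Cpc = −15, Epc = 6) reproduces the second pair of curves; the negative Cpc simply flips the sign of the capillary pressure, as expected for an oil-wet rock.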
Calculation of capillary pressure is based on the Brooks and Corey equation explained above. The capillary pressure end point, Cpc, and the capillary pressure exponent, Epc, for water-wet and oil-wet reservoirs are listed in Table 2 [17]. The capillary pressure curves calculated with these data are shown in Figure 2(b).

Table 2: Input parameters for capillary pressure and relative permeability depending on wettability.

| Relative permeability | Swr | Sor | krw° | kro° | nw | no |
| --- | --- | --- | --- | --- | --- | --- |
| Water-wet | 0.12 | 0.25 | 0.26 | 1 | 3 | 1.3 |
| Oil-wet | 0.12 | 0.28 | 0.56 | 0.8 | 1.4 | 3.3 |

| Capillary pressure | Cpc | Epc |
| --- | --- | --- |
| Water-wet | 7 | 2 |
| Oil-wet | −15 | 6 |

## 4. Results and Discussion

Based on the simulation results, comparisons were made between the performances of polymer flood and waterflood applied to water-wet and oil-wet reservoirs. The results are presented as cumulative oil recovery, water-oil ratio, and the oil saturation distribution of the fifth (middle) layer. To decide whether the application of polymer flood to oil-wet reservoirs is effective, the water cut was also analyzed.

Figures 3(a) and 3(b) present the cumulative oil recovery and water-oil ratio obtained from applying waterflood and polymer flood to water-wet reservoirs. The cumulative oil recovery by polymer flood at the end of production is 0.58, considerably higher than the 0.40 achieved by waterflood. In terms of water-oil ratio, polymer flood yields a lower ratio from 70 to 850 days, not from the initiation of injection: even though polymeric solution is injected from the start of the simulation, it takes time, 70 days in this model, for the effect of polymer flood to manifest, so no improvement is seen before then. After 850 days, polymer flood has already recovered almost all movable oil, so significant water production and a lower increment of oil recovery are obtained.
These improvements, higher oil recovery and lower water-oil ratio from 70 to 850 days, result from the increased viscosity of the polymeric solution, which leads to a lower, favorable mobility ratio. Furthermore, the productive period was calculated to quantify the benefit of polymer flood in the water-wet reservoir relative to the waterflood case. The life span of production is determined under the assumption that the producer remains viable until the water cut reaches 90%. From the results shown in Figure 5(a), polymer flood sustains the productive period of the water-wet reservoir 160 days longer than waterflood.

Figure 3: Comparison of waterflood and polymer flood for the water-wet reservoir: (a) cumulative oil recovery and (b) water-oil ratio.

To study the effectiveness of polymer flood in oil-wet reservoirs, the relative permeability and capillary pressure curves are set as in Figures 2(a) and 2(b). Figures 4(a) and 4(b) compare the cumulative oil recovery and water-oil ratio between simulations of polymer flood and waterflood for the oil-wet reservoir. Applying polymer flood to the oil-wet reservoir increases cumulative recovery to 0.36, well above the waterflood result of 0.08, and decreases the water-oil ratio significantly. According to the productive-period calculation, production by waterflood is invalid for the whole production period because the water cut exceeds 90%, whereas polymer flood extends the life span of the well by 315 days, as shown in Figure 5(b). From the analysis of the average reservoir pressure during polymer flood shown in Figure 6, the pressure profile for the oil-wet reservoir remains lower than that for the water-wet reservoir. Due to the high viscosity of the polymeric solution, pressure initially increases regardless of wettability.
After the post-flush, when only water is injected, the pressure decreases again.

Figure 4: Comparison of waterflood and polymer flood for the oil-wet reservoir: (a) cumulative oil recovery and (b) water-oil ratio.

Figure 5: Water cut of waterflood and polymer flood depending on wettability: (a) water-wet and (b) oil-wet.

Figure 6: Average reservoir pressure profiles for polymer flood depending on wettability.

Despite these effective performances of polymer flood for oil-wet reservoirs, polymer flood has not been widely applied to them. As shown in Figures 3 and 4, when either waterflood or polymer flood is applied to oil-wet reservoirs, recovery is lower and the water-oil ratio higher than in water-wet reservoirs. In agreement with previous studies, the performance of polymer flood in the oil-wet reservoir appears less effective than in the water-wet reservoir. These results might suggest that applying polymer flood to oil-wet reservoirs is not very effective and should not be recommended because of the low absolute performance. However, not only the absolute oil recovery and water-oil ratio but also the improvement over waterflood should be analyzed to assess the efficiency of polymer flood. Therefore, the analysis was repeated with respect to the improvement achieved by polymer flood. For the water-wet reservoir, polymer flood increases cumulative oil recovery by 45% over waterflood, from 0.40 to 0.58, and reduces the water-oil ratio by at least 50% relative to the waterflood case. If waterflood alone can already deliver high performance, the improvements attained by polymer flood are not especially significant. For the oil-wet reservoir, by contrast, polymer flood leads to a substantial improvement in oil recovery and an extensive reduction in water-oil ratio compared with waterflood, precisely because waterflood performs poorly.
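The improvement percentages quoted in this section follow directly from the reported recoveries. As a quick arithmetic check (recoveries of 0.40 and 0.58 for the water-wet case and 0.08 and 0.36 for the oil-wet case; the paper's 351% presumably reflects unrounded recovery values):

```python
def rel_improvement_pct(base, new):
    """Relative improvement of `new` over `base`, in percent."""
    return (new - base) / base * 100.0

water_wet = rel_improvement_pct(0.40, 0.58)  # 45%
oil_wet = rel_improvement_pct(0.08, 0.36)    # 350%, vs. 351% reported
```

The contrast between 45% and roughly 350% is the paper's central point: the relative gain from polymer flood is far larger where the baseline waterflood performs worst.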
As shown in Figure 5(b), when waterflood is applied to the oil-wet reservoir, the water cut already exceeds 90% at an early stage and rises to 99%. Because of this high water production, waterflood would have to be suspended for the oil-wet reservoir. Polymer flood, however, reduces the water-oil ratio by up to 90% compared with the waterflood case, making the oil-wet reservoir productive. Moreover, polymer flood increases oil recovery by as much as 351% at the end of production, as shown in Figure 4. Additionally, the improvement in water cut achieved by polymer flood for the different wettabilities was calculated and is shown in Figure 7. Whereas polymer flood for the water-wet reservoir yields a maximum water cut reduction of 22%, it yields a reduction of as much as 80% for the oil-wet reservoir. It also sustains a productive period 315 days longer, compared with 160 days for the water-wet reservoir, as shown in Figures 5 and 7. Therefore, the application of polymer flood brings a more significant improvement for the oil-wet reservoir than for the water-wet one.

Figure 7: Improvement of water cut depending on wettability.

Figure 8 compares the remaining oil saturation in the fifth layer after applying waterflood and polymer flood to the water-wet reservoir at 1,000 days, or 1.98 pore volumes injected. Due to the favorable mobility ratio of polymer flood, a relatively high contrast in oil saturation exists between swept and unswept regions of the reservoir. Figure 9 shows the simulated oil saturation under the same conditions as Figure 8, except that the reservoir is oil-wet. The overall oil saturation of the polymer-flooded water-wet reservoir is lower than that of the waterflooded case and is almost at the residual oil saturation. For the oil-wet reservoir, the remaining oil saturation is still higher than the residual oil saturation after polymer flood is applied.
Nevertheless, the contrast in the oil saturation distribution before and after polymer flood is clearer in the oil-wet reservoir than in the water-wet reservoir.

Figure 8: Comparison of oil saturation distribution in the fifth layer at 1,000 days for the water-wet reservoir: (a) waterflood and (b) polymer flood.

Figure 9: Comparison of oil saturation distribution in the fifth layer at 1,000 days for the oil-wet reservoir: (a) waterflood and (b) polymer flood.

## 5. Conclusions

According to the multiphase, multicomponent simulation, polymer flood makes the mobility ratio between the aqueous and oleic fluids favorable, which increases oil recovery and decreases the water-oil ratio under both water-wet and oil-wet reservoir conditions. The efficiency of polymer flood is remarkably affected by reservoir wettability. The performance of polymer flood appears better for the water-wet condition than for the oil-wet condition. Because polymer flood is essentially a modified waterflood, the polymeric solution displaces oil from the pores easily in the water-wet scheme, sweeping most of the mobile oil and leaving little oil beyond the residual oil saturation. This mechanism yields higher recovery for the water-wet reservoir than for the oil-wet one. This could mislead us into thinking that polymer flood for oil-wet reservoirs is not as effective as for water-wet reservoirs. On the contrary, applying polymer flood to oil-wet reservoirs clearly shows greater improvement in oil recovery, water-oil ratio, and productive period than applying it to water-wet reservoirs. These results demonstrate that oil-wet reservoirs are good candidates for the application of polymer flood because the technique is very effective in terms of the improvement of performance. Therefore, a reliable evaluation of polymer flood should take the wettability of the reservoir into account.

---

*Source: 101670-2013-11-20.xml*
--- ## Abstract Accurate assessment of polymer flood requires the understanding of flow and transport of fluids involved in the process under different wettability of reservoirs. Because variations in relative permeability and capillary pressure induced from different wettability control the distribution and flow of fluids in the reservoirs, the performance of polymer flood depends on reservoir wettability. A multiphase, multicomponent reservoir simulator, which covers three-dimensional fluid flow and mass transport, is used to investigate the effects of wettability on the flow process during polymer flood. Results of polymer flood are compared with those of waterflood to evaluate how much polymer flood improves the oil recovery and water-oil ratio. When polymer flood is applied to water-wet and oil-wet reservoirs, the appearance of influence is delayed for oil-wet reservoirs compared with water-wet reservoirs due to unfavorable mobility ratio. In spite of the delay, significant improvement in oil recovery is obtained for oil-wet reservoirs. With respect to water production, polymer flood leads to substantial reduction for oil-wet reservoirs compared with water-wet reservoirs. Moreover, application of polymer flood for oil-wet reservoirs extends productive period which is longer than water-wet reservoir case. --- ## Body ## 1. Introduction After primary and secondary oil recovery, there remains lots of oil in place in the reservoirs. To gain unrecovered oil in the reservoirs, additional flood, called enhanced oil recovery (EOR) method, is needed to be applied. Polymer flood has been known as one of the most widely used chemical EOR method. Increased viscosity of displacing fluid by adding low concentrations of water soluble and high molecular weight polymer into water produces the lower mobility ratio. 
A favorable mobility ratio improves the sweep efficiency, so oil recovery is increased. Craig defined wettability as the tendency of one fluid to spread on or adhere to a solid surface in the presence of other immiscible fluids [1]. The influence of reservoir wettability on multiphase flow in porous media, and hence on oil recovery, is well known. The wettability of a rock controls the location, flow, and distribution of fluids within reservoir rocks [2], which affects the relative permeability and capillary pressure [3, 4]. Therefore, the recovery of polymer flood in oil-wet reservoirs is quite different from that in water-wet reservoirs. Moreover, considerable hydrocarbon reserves remain in mixed-wet or oil-wet reservoirs. Consequently, the effects of wettability on oil recovery during polymer flood need to be better understood. Most polymer flood studies have been carried out without considering wettability effects, or have focused on water-wet reservoirs at field scale [5–7]. Investigations of the role of wettability in the various aspects of oil recovery during polymer flooding have been mainly experimental [8–10]. Therefore, there is a need for a comprehensive numerical study of the flow and transport of the aqueous and oleic phases during polymer flood under various wetting conditions at field scale. ## 2. Mathematical Formulation Simulations of polymer flood were conducted with UTCHEM, a 3D, multicomponent, multiphase, compositional model of chemical flooding processes that accounts for complex phase behavior, chemical and physical transformations, and heterogeneous porous media properties [11]. The basic mass conservation equation for a component can be written as

$$\frac{\partial}{\partial t}\left(\phi\,\tilde{C}_{\kappa}\rho_{\kappa}\right)+\nabla\cdot\left[\sum_{j=1}^{n_{p}}\rho_{\kappa}\left(C_{\kappa j}\mathbf{u}_{j}-\mathbf{D}_{\kappa j}\right)\right]=R_{\kappa},\quad(1)$$

where κ is the component index; j is the phase index, covering the aqueous (w) and oleic (o) phases; ϕ is the porosity; C̃_κ is the overall concentration of component κ (volume fraction); ρ_κ is the density of component κ [ML−3]; n_p is the number of phases; C_{κj} is the concentration of component κ in phase j (volume fraction); u_j is the Darcy velocity of phase j [LT−1]; R_κ is the total source/sink term for component κ (volume of component κ per unit volume of porous media per unit time); and D_{κj} is the dispersion tensor. The phase flux from Darcy's law is

$$\mathbf{u}_{j}=-\frac{k\,k_{rj}}{\mu_{j}}\nabla\left(p_{j}-\gamma_{j}h\right),\quad(2)$$

where k is the intrinsic permeability tensor, h is the vertical depth, k_{rj} is the relative permeability, μ_j is the viscosity, and γ_j is the specific weight of phase j. To predict reservoir behavior under multiphase conditions, it is important to understand the relative permeability of a reservoir rock to each of the fluids flowing through it. Relative permeability is assumed to be determined solely by the phase's own saturation and the residual saturations, and has been investigated by experimental and analytical methods [12]. Extensive studies have produced representative correlations between relative permeability, saturation, and other factors [12]. In this study, multiphase relative permeabilities are modeled with Corey-type functions [11]. A Corey-type relative permeability is specified by an endpoint relative permeability at residual saturation, an exponent defining the curvature of the curve, and the residual saturations that determine the normalized saturation.
The Corey-type relative permeability equation is

$$k_{rj}=k_{rj}^{o}\,S_{nj}^{\,n_{j}}\quad\text{for }j=w,o,\quad(3)$$

where k_{rj}^{o} is the endpoint relative permeability, that is, the relative permeability at residual saturation; n_j is the exponent of the relative permeability of phase j, determining the curvature of the curve; and S_{nj} is the normalized saturation of phase j, calculated as

$$S_{nj}=\frac{S_{j}-S_{jr}}{1-\sum_{j=1}^{n_{p}}S_{jr}},\quad(4)$$

where S_j is the saturation of phase j and S_{jr} is the residual saturation of phase j. Whether oil displaces water or water displaces oil through the flow channels of a reservoir, the flow of the immiscible oil and water phases is governed by the pressure difference between them. Brooks and Corey examined a large number of data on consolidated rock cores and analyzed them by plotting the logarithm of effective saturation versus the logarithm of capillary pressure [13]. From this analysis, a linear relationship between the logarithm of effective saturation and the logarithm of capillary pressure was revealed:

$$\ln S_{e}=-\lambda\ln p_{c}+\lambda\ln p_{b}\quad\text{for }p_{c}\ge p_{b},\quad(5)$$

where S_e is the effective saturation calculated with the residual saturations; λ and p_b are constants obtained from the slope and intercept, λ characterizing the pore-size distribution and p_b being the entry capillary pressure (the minimum capillary pressure at which relation (5) holds); and p_c is the capillary pressure, the pressure difference between the nonwetting and wetting phases. Capillary pressure is a strong function of saturation, as shown by (5). Leverett derived a capillary pressure scaled by permeability and porosity for homogeneous reservoirs [11, 14]. Combining these relations, the Brooks and Corey capillary pressure-saturation function is calculated as

$$p_{c}=p_{b}\sqrt{\frac{\phi}{k}}\quad\text{for }j=w,o,\qquad p_{b}=C_{pc}\left(1-S_{nj}\right)^{E_{pc}},\quad(6)$$

where E_{pc} is equivalent to −1/λ, C_{pc} is a constant, and k and ϕ are the permeability and porosity of the reservoir. ## 3. Numerical Modeling This study analyzes the effect of wettability on the flow and transport of fluids during polymer flood. Reservoir wettability is implemented in the numerical model by changing the relative permeability and capillary pressure curves simultaneously. The reservoir depth is 2,000 ft and the initial reservoir pressure is maintained at 400 psi. The horizontal area is 360×360 ft2 and the vertical thickness of the reservoir is 25 ft. The simulation domain consists of 10 layers, each discretized into 15×15 grid blocks in the horizontal direction. Grid blocks close to the production and injection wells are smaller than blocks away from the wells, to resolve pressures and saturations more accurately near the wells. The model assumes a homogeneous reservoir: the porosity is 0.2 everywhere and the permeability is 300 md in the horizontal direction and 30 md in the vertical direction. The initial oil and water saturations are assumed uniform at 0.62 and 0.38, respectively. Properties of the water and oil in the reservoir are listed in Table 1. The injection water is assumed to be identical to the reservoir water. The viscosity of the polymeric solution, including salinity and mechanical (shear) effects, is modeled with the parameters listed in Table 1.

Table 1. Properties of water and oil and viscosity of the polymeric solution.

| Property | Water | Oil |
|---|---|---|
| Viscosity (μ) | 0.73 cp | 40 cp |
| Density (ρ) | 0.43353 psi/ft | 0.385839 psi/ft |
| Compressibility (Cf) | 0 psi−1 | 0 psi−1 |

| Polymer viscosity parameter | Value |
|---|---|
| Ap1 (zero-shear viscosity) | 38.47 wt%−1 |
| Ap2 (zero-shear viscosity) | 1,600 wt%−2 |
| Ap3 (zero-shear viscosity) | 0 wt%−3 |
| βp for CSEP (effective salinity) | 20 |
| CSEP,min | 0.01 meq/mL |
| Slope of μp0 versus CSEP | −0.3 |
| γ̇c (shear-rate dependence) | 130 day (darcy)0.5/ft·s |
| γ̇1/2 | 280 |
| Pα | 2.2 |

The simulation was continued over 1,000 days.
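As an illustration, the Corey-type relative permeability of (3)-(4) and the Brooks and Corey capillary pressure of (6) can be sketched in a few lines of Python. This is a minimal sketch, not part of UTCHEM; the wettability parameters are those reported in Table 2, the viscosities are those of Table 1, and the endpoint mobility ratio M = (k_rw°/μ_w)/(k_ro°/μ_o) is an added illustrative check rather than a quantity computed in the paper.

```python
# Minimal sketch (not UTCHEM) of the Corey relative permeability, Eqs. (3)-(4),
# and the Brooks-Corey capillary pressure, Eq. (6).

def normalized_saturation(S, Sr_self, Sr_other):
    """Eq. (4): normalized saturation of a phase, clipped to [0, 1]."""
    Sn = (S - Sr_self) / (1.0 - Sr_self - Sr_other)
    return min(max(Sn, 0.0), 1.0)

def corey_kr(S, Sr_self, Sr_other, kr_end, n):
    """Eq. (3): Corey-type relative permeability."""
    return kr_end * normalized_saturation(S, Sr_self, Sr_other) ** n

def brooks_corey_pc(Sn, Cpc, Epc, phi, k):
    """Eq. (6): p_c = C_pc * (1 - S_n)^E_pc * sqrt(phi / k)."""
    return Cpc * (1.0 - Sn) ** Epc * (phi / k) ** 0.5

# Water-wet parameters from Table 2: Swr = 0.12, Sor = 0.25,
# krw0 = 0.26, kro0 = 1, nw = 3, no = 1.3.
krw = corey_kr(0.5, 0.12, 0.25, 0.26, 3.0)   # water relative permeability at Sw = 0.5
kro = corey_kr(0.5, 0.25, 0.12, 1.0, 1.3)    # oil relative permeability at So = 0.5

# Endpoint mobility ratio M = (krw0 / mu_w) / (kro0 / mu_o), using the
# Table 1 viscosities mu_w = 0.73 cp and mu_o = 40 cp (illustrative check):
M_water_wet = (0.26 / 0.73) / (1.0 / 40.0)   # ~14, unfavorable
M_oil_wet = (0.56 / 0.73) / (0.8 / 40.0)     # ~38, far more unfavorable
```

The much larger endpoint mobility ratio of the oil-wet case is consistent with the delayed polymer-flood response reported for oil-wet reservoirs in Section 4.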
Injection designs for water and polymeric solution, under constant-rate conditions, are identical for the water-wet and oil-wet reservoir cases so that the performance of polymer flood can be assessed quantitatively. To prevent high injection pressure due to the increased viscosity of the polymeric solution, injection of water and polymeric solution was operated in three steps, as given in Figure 1; the flow rate was constant within each step. In the first step, with 0.1% polymer, injection was operated at a low rate of 800 ft3/day until 180 days to prevent fractures at the injection well. In the next step, the polymer concentration was reduced to 0.05% while the injection rate was increased to 1,000 ft3/day, up to 360 days. In the last step, from 360 to 1,000 days, only water was injected until the end of operation. Reservoir fluids were recovered from the production well, constrained at 200 psi. Figure 1: History of injection rate and polymer concentration. In order to analyze the effect of wettability on the flow and transport of fluids during polymer flood, comparisons were made between simulations implementing different relative permeability and capillary pressure curves. Relative permeability and capillary pressure are the most important factors distinguishing the performance of polymer flood in water-wet and oil-wet reservoirs. To model the wettability effect, data for the different relative permeability and capillary pressure curves were taken from several studies. Anderson derived Corey-type functions by curve fitting the data measured by Morrow et al. [15–17]. Table 2 lists the fitted relative permeability parameters and Figure 2(a) shows the relative permeability curves generated with them. Figure 2: Properties of water-wet and oil-wet reservoirs: (a) relative permeability curves and (b) capillary pressure curves. Calculation of capillary pressure is based on the Brooks and Corey equation explained previously. The capillary pressure endpoint, Cpc, and exponent, Epc, for water-wet and oil-wet reservoirs are listed in Table 2 [17]. The capillary pressure curves calculated with these data are shown in Figure 2(b).

Table 2. Input parameters for capillary pressure and relative permeability depending on wettability.

| Wettability | S_wr | S_or | k_rw^o | k_ro^o | n_w | n_o |
|---|---|---|---|---|---|---|
| Water-wet | 0.12 | 0.25 | 0.26 | 1 | 3 | 1.3 |
| Oil-wet | 0.12 | 0.28 | 0.56 | 0.8 | 1.4 | 3.3 |

| Wettability | C_pc | E_pc |
|---|---|---|
| Water-wet | 7 | 2 |
| Oil-wet | −15 | 6 |

## 4. Results and Discussion Based on the simulation results, comparisons were made between the performances of polymer flood and waterflood applied to water-wet and oil-wet reservoirs. The results are presented as cumulative oil recovery, water-oil ratio, and the oil saturation distribution of the fifth (middle) layer. To decide whether polymer flood is effective for oil-wet reservoirs, water cut was also analyzed. Figures 3(a) and 3(b) present the cumulative oil recovery and water-oil ratio obtained from applying waterflood and polymer flood to the water-wet reservoir. As can be seen, the cumulative oil recovery by polymer flood is 0.58 at the end of production, considerably higher than the 0.40 obtained by waterflood. In terms of water-oil ratio, polymer flood yields a lower ratio from 70 to 850 days, not from the initiation of injection. Even though polymeric solution is injected from the start of the simulation, it takes time, 70 days in this model, for the effect of polymer flood to manifest, so no improvement is seen before then. After 850 days, polymer flood has already recovered almost all movable oil, so significant water production and a lower increment of oil recovery follow.
These improvements between 70 and 850 days, higher oil recovery and lower water-oil ratio, result from the increased viscosity of the polymeric solution, which yields a lower, favorable mobility ratio. Furthermore, the productive period is calculated to quantify the benefit of polymer flood over waterflood for the water-wet reservoir. The life span of production is determined under the assumption that a producer remains viable until its water cut reaches 90%. From the results in Figure 5(a), polymer flood for the water-wet reservoir sustains a productive period 160 days longer than the waterflood case. Figure 3: Comparison of waterflood and polymer flood for water-wet reservoir: (a) cumulative oil recovery and (b) water-oil ratio. To study the effectiveness of polymer flood in oil-wet reservoirs, the relative permeability and capillary pressure curves are set as in Figures 2(a) and 2(b). Figures 4(a) and 4(b) compare the cumulative oil recovery and water-oil ratio between simulations of polymer flood and waterflood for the oil-wet reservoir. Applying polymer flood to the oil-wet reservoir increases cumulative recovery to 0.36, well above the waterflood result of 0.08, and decreases the water-oil ratio significantly. According to the productive-period calculation, while production by waterflood is invalid for the whole production period because the water cut exceeds 90%, polymer flood extends the life span of the well by as much as 315 days, as shown in Figure 5(b). The average reservoir pressure profiles for polymer flood in Figure 6 show that the oil-wet reservoir pressure stays lower than that of the water-wet reservoir. Due to the high viscosity of the polymeric solution, pressure increases initially regardless of wettability.
Once the water post-flush begins, the pressure decreases. Figure 4: Comparison of waterflood and polymer flood for oil-wet reservoir: (a) cumulative oil recovery and (b) water-oil ratio. Figure 5: Water cut of waterflood and polymer flood depending on wettability: (a) water-wet and (b) oil-wet. Figure 6: Average reservoir pressure profiles for polymer flood depending on wettability. Despite this effective performance in oil-wet reservoirs, polymer flood has not been widely applied to them. As shown in Figures 3 and 4, when either waterflood or polymer flood is applied to oil-wet reservoirs, recovery is lower and the water-oil ratio higher than in water-wet reservoirs. In agreement with previous studies, the performance of polymer flood in oil-wet reservoirs appears less effective than in water-wet reservoirs. These results could lead to the conclusion that applying polymer flood to oil-wet reservoirs is not very effective and may not be recommended because of the low absolute performance. However, not only the absolute oil recovery and water-oil ratio but also the improvement over waterflood should be analyzed when judging the efficiency of polymer flood. Therefore, the analysis was repeated in terms of the improvement achieved by polymer flood. For water-wet reservoirs, polymer flood increases cumulative oil recovery by up to 45% over waterflood, from 0.40 to 0.58. Polymer flood also reduces the water-oil ratio by at least 50% compared with the waterflood case. Where waterflood alone already performs well, these improvements may not be so significant. For oil-wet reservoirs, in contrast, polymer flood leads to substantial improvement in oil recovery and an extensive reduction in water-oil ratio precisely because waterflood performs so poorly there.
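The relative improvements quoted here follow directly from the reported recovery factors; a small arithmetic check (the recovery values are those stated in the text, while the helper function is not from the paper):

```python
# Cumulative recovery factors at the end of production, as reported in the text.
recovery = {
    "water-wet": {"waterflood": 0.40, "polymer": 0.58},
    "oil-wet": {"waterflood": 0.08, "polymer": 0.36},
}

def relative_improvement(base, improved):
    """Fractional gain of polymer flood over waterflood."""
    return (improved - base) / base

gain = {
    wettability: relative_improvement(r["waterflood"], r["polymer"])
    for wettability, r in recovery.items()
}
# gain["water-wet"] ~ 0.45 (the ~45% quoted for water-wet reservoirs)
# gain["oil-wet"]   ~ 3.5  (the ~350% gain; the paper quotes 351%)
```

The far larger relative gain for the oil-wet case is what motivates the conclusion that oil-wet reservoirs are good candidates for polymer flood despite their lower absolute recovery.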
As shown in Figure 5(b), when waterflood is applied to oil-wet reservoirs, the water cut already exceeds 90% at an early stage and rises to 99%. Because of this high water production, waterflood should be suspended for the oil-wet reservoir. Polymer flood, however, reduces the water-oil ratio by up to 90% compared with the waterflood case, making the oil-wet reservoir productive. Moreover, polymer flood increases oil recovery by as much as 351% at the end of production, as shown in Figure 4. Additionally, the improvement in water cut achieved by polymer flood for each wettability is calculated and shown in Figure 7. While polymer flood reduces the water cut of water-wet reservoirs by at most 22%, it achieves reductions of as much as 80% for oil-wet reservoirs. It also extends the productive period by 315 days, compared with 160 days for water-wet reservoirs, as shown in Figures 5 and 7. Therefore, the application of polymer flood yields a more significant improvement for the oil-wet reservoir than for the water-wet reservoir. Figure 7: Improvement of water cut depending on wettability. Figure 8 compares the remaining oil saturation in the fifth layer when waterflood and polymer flood are applied to the water-wet reservoir, at 1,000 days or 1.98 pore volumes injected. Due to the favorable mobility ratio of polymer flood, a relatively high contrast in oil saturation between swept and unswept regions exists in the reservoir. Figure 9 shows the simulated oil saturation under the same conditions as Figure 8, except that the reservoir is oil-wet. The overall oil saturation of the polymer-flooded water-wet reservoir is lower than that of the waterflooded scheme and is close to residual oil saturation. For oil-wet reservoirs, the remaining oil saturation is still higher than residual oil saturation after polymer flood is applied.
Nevertheless, the contrast in oil saturation distribution before and after polymer flood is clearer in oil-wet reservoirs than in water-wet reservoirs. Figure 8: Comparison of oil saturation distribution in the fifth layer at 1,000 days for water-wet reservoir: (a) waterflood and (b) polymer flood. Figure 9: Comparison of oil saturation distribution in the fifth layer at 1,000 days for oil-wet reservoir: (a) waterflood and (b) polymer flood. ## 5. Conclusions According to the multiphase, multicomponent simulations, polymer flood makes the mobility ratio between the aqueous and oleic fluids favorable, which increases oil recovery and decreases the water-oil ratio under both water-wet and oil-wet conditions. The efficiency of polymer flood is remarkably affected by reservoir wettability. The absolute performance of polymer flood under water-wet conditions is better than under oil-wet conditions. Because polymer flood is a kind of modified waterflood, the polymeric solution displaces oil from the pores easily under water-wet conditions, sweeping most of the mobile oil and leaving little besides residual oil. This mechanism yields higher recovery for the water-wet reservoir than the oil-wet one, which could misleadingly suggest that polymer flood is less worthwhile for oil-wet reservoirs. In fact, applying polymer flood to oil-wet reservoirs yields clearly greater improvements in oil recovery, water-oil ratio, and productive period than in water-wet reservoirs. These results demonstrate that oil-wet reservoirs are good candidates for polymer flood, because the technique is very effective in terms of relative improvement of performance. Therefore, reliable evaluation of polymer flood should take the wettability of the reservoir into account. --- *Source: 101670-2013-11-20.xml*
2013
# Preparation and Characterization of Novel Electrospinnable PBT/POSS Hybrid Systems Starting from c-PBT **Authors:** Lorenza Gardella; Alberto Fina; Orietta Monticelli **Journal:** Journal of Nanomaterials (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/101674 --- ## Abstract Novel hybrid systems based on poly(butyleneterephthalate) (PBT) and polyhedral oligomeric silsesquioxanes (POSS) have been prepared by applying the ring-opening polymerization of cyclic poly(butyleneterephthalate) oligomers. Two types of POSS have been used: one characterized by hydroxyl functionalities (named POSS-OH) and another without specific reactive groups (named oib-POSS). It was demonstrated that POSS-OH acts as an initiator for the polymerization reaction, leading to the direct insertion of the silsesquioxane into the polymer backbone. Among the possible applications of the PBT/POSS hybrid system, the possibility to obtain nanofibers has been assessed in this work. --- ## Body ## 1. Introduction Polyhedral oligomeric silsesquioxanes (POSS) are ideal nanobuilding blocks for the synthesis of organic-inorganic hybrid materials [1–3]. Indeed, incorporation of POSS into polymer matrices results in improvement of properties, such as mechanical properties [4], thermal stability [5–7], flammability [8], gas permeability, and dielectric permittivity [9].Recently, POSS-based polymeric materials have been produced also by using electrospinning, a simple and effective technique to generate continuous fibers ranging from micrometer to nanometer size in diameter [10]. Two different approaches have been proposed for the preparation of these nanofibers. POSS can be added directly into the polymer electrospinning solution. 
By this method, several polymers, such as cellulose acetate [11], polyvinylidene fluoride [12], sulfonated poly(arylene ether sulfone) [13], poly(vinyl alcohol) [14], poly(N-isopropylacrylamide) [15], poly(butylene terephthalate) (PBT) [16], ethylene-propylene-diene rubber [17], poly(styrene-co-maleic anhydride) [18], polylactic acid [19, 20], and protein [21], were exploited to prepare electrospun POSS-based nanofibers.Nevertheless, the method, which involves the direct solubilization of POSS in the electrospinning solution, could be challenging because of the need to find a common solvent for the silsesquioxane and the polymer and because of the need to limit POSS concentration in the solution in order to obtain its nanometric distribution. Few papers report on the second method, which consists in the electrospinning of hybrids, namely, systems containing silsesquioxane molecules chemically bound to the macromolecule backbone. Kim et al. [22] studied the water resistance of the nanofiber web, obtained by electrospinning of a hybrid PVA/POSS, and incorporating silsesquioxane molecules into hydrophilic PVA backbone. Wu et al. [23] reported on the preparation of nanostructured fibrous hydrogel scaffolds of POSS-poly(ethylene glycol) hybrid thermoplastic polyurethanes incorporating Ag for antimicrobial applications whereas superhydrophobic electrospun fibers based on POSS-polymethylmethacrylate copolymers were prepared by Xue et al. [24]. Despite the limited efforts in this field, this kind of processing opens up POSS-based hybrids for interesting applications in several fields such as biomedicine and filtration. A second motivation for the present work is related to the recent developments of cyclic poly(butyleneterephthalate) (c-PBT) oligomers. Since the pioneer work of Brunelle et al. 
[25], who demonstrated that the polymerization of cyclic oligomers, with a variety of initiators, can be completed within minutes, the above monomer systems have been used in the development of composites and nanocomposites. In particular, as far as nanostructured system preparation is concerned, in situ polymerization of organically modified clay dispersed in low-viscosity c-PBT allowed PBT-based nanocomposites characterized by a high level of polymer intercalation to be obtained [26–29]. Baets et al. [30] described the influence of the presence of multiwalled carbon nanotubes (MWCNT) in c-PBT on the final polymer mechanical properties. Moreover, MWCNT were also functionalized with PBT, which was covalently attached onto the carbon nanotube surface, by a method based on in situ polymerization of c-PBT using a MWCNT-supported initiator. More recently, Fabbri et al. [31] prepared PBT/graphene composites by in situ polymerization of c-PBT in the presence of graphene, which turned out to be electrically conductive. Among the various nanostructured systems developed using c-PBT, PBT/silica nanocomposites were also prepared by in situ polymerization [32]. Indeed, the incorporation of silica nanoparticles into c-PBT was found to affect both the properties of c-PBT resins and the final features of the nanostructured polymer. On this basis, the present work reports on novel nanostructured nanofibers prepared by electrospinning PBT/POSS hybrids, which were synthesized, for the first time, from c-PBT. In particular, a hydroxyl-bearing silsesquioxane, potentially capable of taking part in the ring-opening polymerization (ROP) of c-PBT, has been dissolved in the molten monomer system and the in situ polymerization has been carried out. The so-prepared hybrid systems have been electrospun in order to obtain nanostructured nanofibers. ## 2. Experimental ### 2.1.
Materials Cyclic oligomers of poly(butylene terephthalate) (c-PBT) were kindly supplied by Cyclics Corp. Octaisobutyl POSS (referred to as oib-POSS in the following) and trans-cyclohexanediolisobutyl POSS (referred to as POSS-OH in the following) were purchased from Hybrid Plastics (USA) as crystalline powders and used as received. Chemical structures for oib-POSS (M=873.6 g/mol) and POSS-OH (M=959.7 g/mol) are reported in Figure 1. Figure 1: (a) Octaisobutyl POSS (oib-POSS) and (b) trans-cyclohexanediolisobutyl POSS (POSS-OH). Butyltin chloride dihydroxide was purchased from Sigma Aldrich and used as received. ### 2.2. POSS-Based Hybrid System Preparation Before the hybrid preparation, c-PBT was dried overnight at 80°C. c-PBT was added to the glass reactor, namely, a laboratory internal mixer provided with a mechanical stirrer (Heidolph, type RZR1), which was connected to a vacuum line and evacuated for 30 min at 80°C. Then, the reactor was purged with helium for 30 min. The above operations were repeated at least three times to ensure that the reagents did not come into contact with moisture. The reactor was placed in an oil bath at 190°C and, when the monomer was completely molten, POSS and the catalyst were added under inert atmosphere. c-PBT/POSS systems were prepared by adding octaisobutyl or trans-cyclohexanediolisobutyl POSS to the reaction mixture at concentrations from 2 to 10 wt.%, with a polymerization time of 10 minutes. Neat PBT was prepared and characterized under the same conditions as reference material. Materials are identified in the text with the format polymer/POSS type (concentration), e.g., PBT/POSS-OH(10). In order to evaluate the reaction yield after melt blending, all solid samples were broken into small pieces and purified from unreacted POSS by Soxhlet extraction with tetrahydrofuran (THF) for 48 h. The grafting yield was calculated by weighing composite samples before and after the above treatment. ### 2.3.
Electrospun Fiber Preparation Polymeric solutions were prepared by dissolving 15 wt.% of PBT or PBT/POSS in a 1:1 v/v solvent mixture of methylene chloride (MC, from Aldrich) and trifluoroacetic acid (TFA, from Aldrich). The solutions were stirred for 6 h at room temperature to reach complete PBT dissolution. Electrospun nanofibers were prepared by using a conventional electrospinning system [10]. The viscous fluid was loaded into a syringe (Model Z314544, diameter d=11.6 mm, Aldrich Fortuna Optima) placed in the horizontal direction. A syringe pump (Harvard Apparatus Model 44 Programmable Syringe Pump) was used to feed the needle with a controlled flow rate of 0.003 mL/min. The needle of the syringe (diameter d=0.45 mm) was connected to the positive electrode of a Gamma High Voltage Research Power Supply (Model ES30P-5W), which generated a constant voltage. The negative electrode was attached to the grounded collector, an aluminium sheet wrapped on a glass cylinder (height 4 cm, diameter 14.5 cm). The needle tip and the ground electrode were incorporated into a plastic hollow cylinder (height 30.5 cm, inner diameter 24 cm, and thickness 3.5 mm) chamber, internally coated with a polytetrafluoroethylene sheet (thickness 1 mm), which was equipped with an XS Instruments digital thermohygrometer (Model UR100, accuracy ±3% RH and ±0.8°C) as humidity and temperature sensor to monitor and control the ambient parameters (temperature around 21°C). A glass Brooks rotameter was used to keep the air flow (Fa) constant in the enclosed electrospinning space. The air flow was fed into the chamber at atmospheric pressure from an inlet placed behind the collector. ### 2.4. Characterization The intrinsic viscosity of the samples, dissolved at a concentration of 0.5 g/dL in a phenol/1,1,2,2-tetrachloroethane solvent mixture (w/w = 60:40), was determined with an Ubbelohde viscometer thermostated at 30±0.5°C in a water bath.
Mv was obtained from the following Mark-Houwink equation [33]:(1)η=1.166×10-4Mv0.871.1H-NMR spectra were obtained on a Varian 300 NMR, dissolving the samples in CDCl3. The 1H-NMR spectra were referenced to the residual solvent protons at ca. 7.26 ppm.Differential scanning calorimetry (DSC) was performed under a continuous nitrogen purge on a Mettler calorimeter, Model TC10A. Both calibrations of heat flow and temperature were based on a run in which one standard sample (indium) was heated through its melting point. Samples having a mass between 5 and 11 mg were used. The procedure was as follows: first heating scan at 10°C/min from 25°C up to 250°C, then cooling at 10°C/min down to 25°C, and, finally, second heating scan from 25°C to 250°C, again at 10°C/min. The first scan was meant to erase the prior uncontrolled thermal history of the samples. The degree of crystallinity was determined by considering the melting enthalpy of 142 J/g for 100% crystalline PBT [34]. Both temperature and heat flow were previously calibrated using a standard indium sample. To study the electrospun samples surface morphology, a Leica Stereoscan 440 scanning electron microscope was used. All the samples were thinly sputter-coated with carbon using a Polaron E5100 sputter coater. The fibers diameter and their distribution were measured using an image analyser, namely, ImageJ 1.41 software. ## 2.1. Materials Cyclic oligomers of poly(butylene terephthalate) (c-PBT) were kindly supplied by Cyclics Corp.Octaisobutyl POSS (referred to as oib-POSS in the following) andtrans-cyclohexanediolisobutyl POSS (referred to as POSS-OH in the following) were purchased from Hybrid Plastics (USA) as crystalline powders and used as received. Chemical structures for oib-POSS (M=873.6 g/mol) and POSS-OH (M=959.7 g/mol) are reported in Figure 1.Figure 1 (a) Octaisobutyl POSS (oib-POSS) and (b)trans-cyclohexanediolisobutyl POSS (POSS-OH). 
(a) (b)Butyltin chloride dihydroxide was purchased from Sigma Aldrich and used as received. ## 2.2. POSS-Based Hybrid System Preparation Before accomplishing the hybrid preparation, c-PBT was dried overnight at 80°C. c-PBT was added to the glass reactor, namely, a laboratory internal mixer provided with a mechanical stirrer (Heidolph, type RZR1) which was connected to a vacuum line and evacuated for 30 min at 80°C. Then, the reactor was purged with helium for 30 min. The above operations were repeated at least three times to be sure to prevent humidity contact with the reagents. The reactor was placed in an oil bath at 190°C and when the monomer was completely molten, POSS and the catalyst were added under inert atmosphere. c-PBT/POSS systems were prepared by adding to the reaction mixture, both octaisobutyl andtrans-cyclohexanediolisobutyl POSS, at various concentrations, from 2 to 10 wt.%, by using a polymerization time of 10 minutes. Neat PBT was prepared and characterized under the same conditions, as reference material. Materials are identified in the text with the format polymer/POSS type (concentration), es.: PBT/POSS-OH(10). In order to evaluate the reaction yield after melt blending, all solid samples were broken into small pieces and purified from unreacted POSS by Soxhlet extraction with tetrahydrofuran (THF) for 48 h. The grafting yield was calculated by weighing composite samples before and after the above treatment. ## 2.3. Electrospun Fiber Preparation Polymeric solutions were prepared by dissolving 15 wt.% of PBT or PBT/POSS in the solvent mixture methylene chloride (MC, from Aldrich) and trifluoroacetic acid (TFA, from Aldrich) with a 1 : 1 v/v ratio. The solutions were stirred for 6 h at room temperature to reach complete PBT dissolution. Electrospun nanofibers were prepared by using a conventional electrospinning system [10]. 
The viscous fluid was loaded into a syringe (Model Z314544, diameter d=11.6 mm, Aldrich Fortuna Optima) placed in the horizontal direction. A syringe pump (Harvard Apparatus Model 44 Programmable Syringe Pump) was used to feed the needle with a controlled flow rate of 0.003 mL/min. The needle of the syringe (diameter d=0.45 mm) was connected to the positive electrode of Gamma High Voltage Research Power Supply (Model ES30P-5W) which generated a constant voltage. The negative electrode was attached to the grounded collector, an aluminium sheet wrapped on a glass cylinder (height 4 cm, diameter 14.5 cm). The needle tip and the ground electrode were incorporated into a plastic hollow cylinder (height 30.5 cm, inner diameter 24 cm, and thickness 3.5 mm) chamber, internally coated with a polytetrafluoroethylene sheet (thickness 1 mm), which was supplied with an XS Instruments digital thermohygrometer (Model UR100, accuracy ±3% RH and ±0.8°C) as humidity and temperature sensor to monitor and control the ambient parameters (temperature around 21°C). A glass Brooks rotameter was used to keep the air flow (Fa) constant in the enclosed electrospinning space. The air flow was fed into the chamber at atmospheric pressure from an inlet placed behind the collector. ## 2.4. Characterization The intrinsic viscosity of the samples, dissolved in 0.5 g/dL concentrated mixture solvent of phenol/1,1,2,2-tetrachloroethane (w/w = 60 : 40), was determined with an Ubbelohde viscometer thermostated at30±0.5°C in a water bath. Mv was obtained from the following Mark-Houwink equation [33]:(1)η=1.166×10-4Mv0.871.1H-NMR spectra were obtained on a Varian 300 NMR, dissolving the samples in CDCl3. The 1H-NMR spectra were referenced to the residual solvent protons at ca. 7.26 ppm.Differential scanning calorimetry (DSC) was performed under a continuous nitrogen purge on a Mettler calorimeter, Model TC10A. 
Both heat-flow and temperature calibrations were based on a run in which a standard indium sample was heated through its melting point. Samples having a mass between 5 and 11 mg were used. The procedure was as follows: first heating scan at 10°C/min from 25°C up to 250°C, then cooling at 10°C/min down to 25°C, and, finally, a second heating scan from 25°C to 250°C, again at 10°C/min. The first scan was meant to erase the prior uncontrolled thermal history of the samples. The degree of crystallinity was determined by considering a melting enthalpy of 142 J/g for 100% crystalline PBT [34]. To study the surface morphology of the electrospun samples, a Leica Stereoscan 440 scanning electron microscope was used. All the samples were thinly sputter-coated with carbon using a Polaron E5100 sputter coater. The fiber diameters and their distribution were measured using an image analyser, namely, ImageJ 1.41 software.

## 3. Results and Discussion

### 3.1. Preparation and Characterization of Hybrid Systems

The preparation of PBT/POSS hybrid systems, starting from c-PBT, has been carried out by introducing into the reaction medium two different silsesquioxane molecules, a potentially reactive one (POSS-OH) and a nonreactive one (oib-POSS). Apart from their functional groups, the two POSS molecules also differ in melting temperature (Tm), the Tm of POSS-OH being ca. 140°C and that of oib-POSS ca. 250°C. As such, it should be taken into account that this feature could also lead to different behaviors of the two silsesquioxanes during the polymerization process, as POSS-OH turns out to be molten at the polymerization temperature (190°C). Table 1 shows the reaction yields, calculated by extraction with THF, which is capable of solubilizing the silsesquioxane molecules, the monomer, and the oligomers.

Table 1. Characteristics of the samples prepared.

| Sample code | POSS type | POSS conc. (wt.%) | Reaction yield (%) | Mv (×10⁴) |
|---|---|---|---|---|
| PBT | — | 0 | 99 | 2.1 |
| PBT/POSS-OH(2) | POSS-OH | 2 | 99 | 1.9 |
| PBT/POSS-OH(5) | POSS-OH | 5 | 99 | 1.7 |
| PBT/POSS-OH(10) | POSS-OH | 10 | 58 | 0.5 |
| PBT/oib-POSS(5) | oib-POSS | 5 | 94 | 2.1 |

According to Table 1, by using a catalyst concentration of 2 wt.% it was possible to reach a yield close to 100%. It is of utmost relevance that the limited polymerization time applied, namely, 10 minutes, makes the process potentially applicable also to reactive extrusion polymerization. Concentrations of 2 and 5 wt.% of POSS-OH in the reaction mixture are found not to influence the reaction yield, which evidences both complete conversion of c-PBT and reactivity of POSS-OH under these conditions. On the other hand, a significantly lower reaction yield was obtained for PBT/POSS-OH(10), showing that a high loading of POSS is detrimental to the polymerization of c-PBT. As for the system based on isobutyl-POSS, PBT/oib-POSS(5), it is likely that the yield drop (down to 94%) is due to complete extraction of the unreacted oib-POSS by the washing solvent, which evidences the efficiency of the experimental procedure for the extraction of unbound silsesquioxanes.

Molecular masses of the POSS-based hybrids were found to decrease with increasing hydroxyl-silsesquioxane concentration in the reaction mixture. In particular, in the case of PBT/POSS-OH(10), prepared by adding 10 wt.% of POSS to the molten c-PBT, the reduction of molecular mass is significant, the sample being characterized by a molecular mass of only 5000, compared to about 21000 for pristine PBT. This finding highlights the active role of the hydroxyl-silsesquioxane in the polymerization reaction. Conversely, isobutyl-POSS seems not to influence the characteristics of the system prepared, the molecular mass of PBT/oib-POSS(5) being equal to that of the neat polymer.

The structure of the synthesized systems has been investigated by 1H-NMR measurements.
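The inverse relation between POSS-OH loading and molecular mass described above follows from initiator stoichiometry: if each POSS-OH molecule starts exactly one chain, the expected number-average mass is roughly the mass of c-PBT divided by the moles of POSS-OH. The sketch below is a back-of-the-envelope check under that idealized assumption; it neglects catalyst-initiated chains and compares an Mn estimate to the measured Mv, so only order-of-magnitude agreement with Table 1 should be expected:

```python
# Idealized chain-count estimate for initiator-controlled ring-opening
# polymerization: Mn ~ m(c-PBT) / n(POSS-OH), one chain per POSS-OH molecule.
M_POSS_OH = 959.7  # g/mol, molar mass of POSS-OH (Section 2.1)

def expected_mn(poss_wt_pct: float, basis_g: float = 100.0) -> float:
    """Expected Mn (g/mol) for a given POSS-OH loading in wt.% of the mixture."""
    m_poss = basis_g * poss_wt_pct / 100.0
    moles_poss = m_poss / M_POSS_OH
    return (basis_g - m_poss) / moles_poss

# 5 wt.%  -> ~18000 g/mol (measured Mv ~ 17000: close);
# 10 wt.% -> ~8600 g/mol  (measured Mv ~ 5000: same order of magnitude).
for pct in (5, 10):
    print(f"{pct} wt.% POSS-OH: expected Mn ~ {expected_mn(pct):,.0f} g/mol")
```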
Figure 2 shows the 1H-NMR spectra of neat PBT and of PBT/POSS-OH(5), both samples having been synthesized under the same conditions.

Figure 2. 1H-NMR spectra of (a) neat PBT and (b) PBT/POSS-OH(5).

Peaks at 8.09 ppm are due to the aromatic protons (1); those at 4.41 ppm are assigned to the methylene protons (2) attached to the -O- of the ester groups, and those at 2.19 ppm to the unshielded methylene protons (3) of the PBT unit. In the case of the PBT/POSS-OH(5) sample, based on 5 wt.% of POSS-OH, the general structure of the polymer seems not to change, but a new peak appears at ca. 1 ppm. This signal is due to the POSS moiety, the same peak being present also in the spectrum of the neat silsesquioxane [35]. This finding confirms that POSS-OH is indeed chemically bound to the PBT chains, the unreacted silsesquioxane having been removed by the purification process. On the basis of the previously described results and taking into account the information gathered by 1H-NMR measurements, it is possible to infer that the polymerization proceeds via a coordination-insertion mechanism, as described in Figure 3.

Figure 3. Proposed polymerization mechanism of c-PBT and POSS-OH.

The first step of the proposed mechanism consists of the coordination of both the monomer and a silsesquioxane molecule to the Lewis-acid Sn metal center. One of the hydroxyl groups of the silsesquioxane subsequently attacks the carbonyl carbon of the monomer, followed by ring opening via acyl-oxygen cleavage, which ultimately results in the insertion of a c-PBT unit into the O-H bond of the coordinated POSS. Clearly, the direct insertion of the silsesquioxane molecule into the polymer backbone indicates that POSS-OH, acting as an initiator, allows the obtainment of a hybrid system.
As a consequence, by increasing the concentration of silsesquioxane in the reaction mixture, the number of initiated PBT chains increases, resulting in a lower average molecular mass.

The thermal properties obtained for the different formulations prepared are collected in Table 2. Neat PBT exhibits a double melting peak, which consists of a small peak at 210°C and a major one at 224°C. Indeed, multiple peaks, which are typical for polyesters, including PBT prepared from c-PBT, are attributed to melting and recrystallization processes of thinner and less perfect crystallites into thicker and more perfect crystalline structures with a subsequently higher melting temperature [36–38]. As far as the POSS-based hybrids are concerned, similar melting peaks are observed. The presence of the silsesquioxane molecule at the chain end seems not to affect the thermal properties of the material, since, for the imposed thermal history, the overall crystallinity as well as the maximum achievable crystal thickness and order is almost the same for all samples.

Table 2. Thermal properties of the samples prepared.

| Sample code | POSS type | POSS conc. (wt.%) | Tm I (°C) | Tm II (°C) | ΔHm (J/g) | Xc (%) |
|---|---|---|---|---|---|---|
| PBT | — | 0 | 224 | 210 | 58 | 41 |
| PBT/POSS-OH(2) | POSS-OH | 2 | 223 | 215 | 61 | 43 |
| PBT/POSS-OH(5) | POSS-OH | 5 | 224 | 204 | 59 | 41 |
| PBT/POSS-OH(10) | POSS-OH | 10 | 224 | 215 | 61 | 43 |
| PBT/oib-POSS(5) | oib-POSS | 5 | 223 | 212 | 62 | 44 |

The sample PBT/POSS-OH(10) deserves a particular comment, as its low molecular mass is not associated with the increase of crystallinity that would be expected. However, it is necessary to consider also the influence of the POSS, which, being directly linked to the macromolecular chain, might limit the polymer organization, as already reported for other polymer matrices [39].

### 3.2. Preparation and Characterization of Nanostructured Nanofibers

Solutions of both the PBT/POSS-OH hybrids and neat PBT, prepared from c-PBT, were electrospun by applying the same conditions (type of solvent, polymer concentration, voltage, humidity, etc.)
used in a previous work [16]. Figure 4 compares a SEM micrograph of the nanofibers prepared from the PBT solution with those obtained by electrospinning the PBT/POSS-OH(5) solution. In the same figure, histograms of the nanofiber diameter distributions are reported.

Figure 4. SEM micrographs of (a) PBT nanofibers and the relative diameter distribution; (b) PBT/POSS-OH(5) nanofibers and the relative diameter distribution.

Both mats are characterized by defect-free nanofibers, with no visible beads and with a similar average diameter, ca. 400 nm. Nevertheless, comparing the nanofiber diameter distributions, the PBT/POSS-OH nanofibers exhibit a narrower distribution than neat PBT. As the electrospinning conditions applied were the same for both materials, this phenomenon has to be related to the modification of the polymer solution properties (polymer solubility, viscosity, surface tension, etc.) as a consequence of POSS insertion into the PBT chains. Indeed, a similar effect was already assessed in the case of electrospun PVDF nanofibers prepared by adding POSS to the electrospinning solutions [12]. In that case, the phenomenon was ascribed to the silsesquioxane molecules, which, without influencing the solution viscosity or conductivity, favored the formation of uniform structures by decreasing the system surface tension. As far as our hybrid systems are concerned, taking into account the limited difference in molecular weight, which only weakly affects the solution viscosity, a similar effect of POSS on surface tension may be hypothesized.

## 4. Conclusions

Novel hybrid systems based on poly(butylene terephthalate) (PBT) and hydroxyl-bearing polyhedral oligomeric silsesquioxanes (POSS) were developed by using cyclic poly(butylene terephthalate) oligomers (c-PBT) as the monomer system. Indeed, the polymerization reaction was found to occur through a coordination-insertion mechanism in which the silsesquioxane molecules, acting as initiators, remain attached to the polymer backbone. It is of utmost interest that the catalyst used, namely, butyltin chloride dihydroxide, allowed reaching complete conversion within a polymerization time (10 minutes) which is close to the processing times achievable in reactive melt extrusion.
Complete polymerization yields were obtained even in the presence of the reactive POSS, for concentrations up to 5 wt.%, whereas the molecular mass was found to decrease with increasing POSS concentration.

Among the possible applications of the prepared PBT/POSS hybrid systems, the possibility to obtain nanofibers by electrospinning was successfully assessed, yielding defect-free fibers with an average diameter of 400 nm. Furthermore, the nanostructured nanofibers were found to be more homogeneous in diameter than those prepared starting from a neat PBT solution.

---

*Source: 101674-2015-04-23.xml*
# Preparation and Characterization of Novel Electrospinnable PBT/POSS Hybrid Systems Starting from c-PBT

**Authors:** Lorenza Gardella; Alberto Fina; Orietta Monticelli

**Journal:** Journal of Nanomaterials (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2015/101674
---

## Abstract

Novel hybrid systems based on poly(butyleneterephthalate) (PBT) and polyhedral oligomeric silsesquioxanes (POSS) have been prepared by applying the ring-opening polymerization of cyclic poly(butyleneterephthalate) oligomers. Two types of POSS have been used: one characterized by hydroxyl functionalities (named POSS-OH) and another without specific reactive groups (named oib-POSS). It was demonstrated that POSS-OH acts as an initiator for the polymerization reaction, leading to the direct insertion of the silsesquioxane into the polymer backbone. Among the possible applications of the PBT/POSS hybrid system, the possibility to obtain nanofibers has been assessed in this work.

---

## Body

## 1. Introduction

Polyhedral oligomeric silsesquioxanes (POSS) are ideal nanobuilding blocks for the synthesis of organic-inorganic hybrid materials [1–3]. Indeed, the incorporation of POSS into polymer matrices results in the improvement of properties such as mechanical properties [4], thermal stability [5–7], flammability [8], gas permeability, and dielectric permittivity [9]. Recently, POSS-based polymeric materials have also been produced by using electrospinning, a simple and effective technique to generate continuous fibers ranging from micrometer to nanometer size in diameter [10]. Two different approaches have been proposed for the preparation of these nanofibers. In the first, POSS is added directly into the polymer electrospinning solution.
By this method, several polymers, such as cellulose acetate [11], polyvinylidene fluoride [12], sulfonated poly(arylene ether sulfone) [13], poly(vinyl alcohol) [14], poly(N-isopropylacrylamide) [15], poly(butylene terephthalate) (PBT) [16], ethylene-propylene-diene rubber [17], poly(styrene-co-maleic anhydride) [18], polylactic acid [19, 20], and proteins [21], have been exploited to prepare electrospun POSS-based nanofibers. Nevertheless, this method, which involves the direct solubilization of POSS in the electrospinning solution, can be challenging because of the need to find a common solvent for the silsesquioxane and the polymer, and because the POSS concentration in the solution must be limited in order to obtain a nanometric distribution. Few papers report on the second method, which consists of the electrospinning of hybrids, namely, systems containing silsesquioxane molecules chemically bound to the macromolecular backbone. Kim et al. [22] studied the water resistance of a nanofiber web obtained by electrospinning a hybrid PVA/POSS, incorporating silsesquioxane molecules into the hydrophilic PVA backbone. Wu et al. [23] reported on the preparation of nanostructured fibrous hydrogel scaffolds of POSS-poly(ethylene glycol) hybrid thermoplastic polyurethanes incorporating Ag for antimicrobial applications, whereas superhydrophobic electrospun fibers based on POSS-polymethylmethacrylate copolymers were prepared by Xue et al. [24]. Despite the limited efforts in this field, this kind of processing opens up POSS-based hybrids to interesting applications in several fields, such as biomedicine and filtration. A second motivation for the present work is related to the recent developments of cyclic poly(butyleneterephthalate) (c-PBT) oligomers. Since the pioneering work of Brunelle et al.
[25], who demonstrated that the polymerization of cyclic oligomers, with a variety of initiators, can be completed within minutes, the above monomer systems have been used in the development of composites and nanocomposites. In particular, as far as nanostructured system preparation is concerned, in situ polymerization of organically modified clay dispersed in low viscosity c-PBT allowed obtaining PBT-based nanocomposites, characterized by a high level of polymer intercalation [26–29]. Baets et al. [30] described the influence of the presence of multiwalled carbon nanotubes (MWCNT) in c-PBT on the final polymer mechanical properties. Moreover, MWCNT were also functionalized with PBT, which was covalently attached onto the carbon nanotube surface, by a method based on in situ polymerization of c-PBT using a MWCNT-supported initiator. More recently, Fabbri et al. [31] prepared PBT/graphene composites by in situ polymerization of c-PBT in the presence of graphene, which turned out to be electrically conductive. Among the various nanostructured systems, developed by using c-PBT, also PBT/silica nanocomposites were prepared by using in situ polymerization [32]. Indeed, the incorporation of silica nanoparticles into c-PBT was found to affect both the properties of c-PBT resins and the final features of the nanostructured polymer. On this basis, the present work reports on novel nanostructured nanofibers prepared by electrospinning PBT/POSS hybrids, which were synthesized, for the first time, from c-PBT. In particular, a hydroxyl-bearing silsesquioxane, potentially capable of taking part to the ring-opening polymerization (ROP) of c-PBT, has been dissolved in the molten monomer system and the in situ polymerization has been carried out. The so-prepared hybrid systems have been electrospun in order to obtain nanostructured nanofibers. ## 2. Experimental ### 2.1. 
Materials Cyclic oligomers of poly(butylene terephthalate) (c-PBT) were kindly supplied by Cyclics Corp.Octaisobutyl POSS (referred to as oib-POSS in the following) andtrans-cyclohexanediolisobutyl POSS (referred to as POSS-OH in the following) were purchased from Hybrid Plastics (USA) as crystalline powders and used as received. Chemical structures for oib-POSS (M=873.6 g/mol) and POSS-OH (M=959.7 g/mol) are reported in Figure 1.Figure 1 (a) Octaisobutyl POSS (oib-POSS) and (b)trans-cyclohexanediolisobutyl POSS (POSS-OH). (a) (b)Butyltin chloride dihydroxide was purchased from Sigma Aldrich and used as received. ### 2.2. POSS-Based Hybrid System Preparation Before accomplishing the hybrid preparation, c-PBT was dried overnight at 80°C. c-PBT was added to the glass reactor, namely, a laboratory internal mixer provided with a mechanical stirrer (Heidolph, type RZR1) which was connected to a vacuum line and evacuated for 30 min at 80°C. Then, the reactor was purged with helium for 30 min. The above operations were repeated at least three times to be sure to prevent humidity contact with the reagents. The reactor was placed in an oil bath at 190°C and when the monomer was completely molten, POSS and the catalyst were added under inert atmosphere. c-PBT/POSS systems were prepared by adding to the reaction mixture, both octaisobutyl andtrans-cyclohexanediolisobutyl POSS, at various concentrations, from 2 to 10 wt.%, by using a polymerization time of 10 minutes. Neat PBT was prepared and characterized under the same conditions, as reference material. Materials are identified in the text with the format polymer/POSS type (concentration), es.: PBT/POSS-OH(10). In order to evaluate the reaction yield after melt blending, all solid samples were broken into small pieces and purified from unreacted POSS by Soxhlet extraction with tetrahydrofuran (THF) for 48 h. The grafting yield was calculated by weighing composite samples before and after the above treatment. ### 2.3. 
Electrospun Fiber Preparation Polymeric solutions were prepared by dissolving 15 wt.% of PBT or PBT/POSS in the solvent mixture methylene chloride (MC, from Aldrich) and trifluoroacetic acid (TFA, from Aldrich) with a 1 : 1 v/v ratio. The solutions were stirred for 6 h at room temperature to reach complete PBT dissolution. Electrospun nanofibers were prepared by using a conventional electrospinning system [10]. The viscous fluid was loaded into a syringe (Model Z314544, diameter d=11.6 mm, Aldrich Fortuna Optima) placed in the horizontal direction. A syringe pump (Harvard Apparatus Model 44 Programmable Syringe Pump) was used to feed the needle with a controlled flow rate of 0.003 mL/min. The needle of the syringe (diameter d=0.45 mm) was connected to the positive electrode of Gamma High Voltage Research Power Supply (Model ES30P-5W) which generated a constant voltage. The negative electrode was attached to the grounded collector, an aluminium sheet wrapped on a glass cylinder (height 4 cm, diameter 14.5 cm). The needle tip and the ground electrode were incorporated into a plastic hollow cylinder (height 30.5 cm, inner diameter 24 cm, and thickness 3.5 mm) chamber, internally coated with a polytetrafluoroethylene sheet (thickness 1 mm), which was supplied with an XS Instruments digital thermohygrometer (Model UR100, accuracy ±3% RH and ±0.8°C) as humidity and temperature sensor to monitor and control the ambient parameters (temperature around 21°C). A glass Brooks rotameter was used to keep the air flow (Fa) constant in the enclosed electrospinning space. The air flow was fed into the chamber at atmospheric pressure from an inlet placed behind the collector. ### 2.4. Characterization The intrinsic viscosity of the samples, dissolved in 0.5 g/dL concentrated mixture solvent of phenol/1,1,2,2-tetrachloroethane (w/w = 60 : 40), was determined with an Ubbelohde viscometer thermostated at30±0.5°C in a water bath. 
Mv was obtained from the following Mark-Houwink equation [33]:(1)η=1.166×10-4Mv0.871.1H-NMR spectra were obtained on a Varian 300 NMR, dissolving the samples in CDCl3. The 1H-NMR spectra were referenced to the residual solvent protons at ca. 7.26 ppm.Differential scanning calorimetry (DSC) was performed under a continuous nitrogen purge on a Mettler calorimeter, Model TC10A. Both calibrations of heat flow and temperature were based on a run in which one standard sample (indium) was heated through its melting point. Samples having a mass between 5 and 11 mg were used. The procedure was as follows: first heating scan at 10°C/min from 25°C up to 250°C, then cooling at 10°C/min down to 25°C, and, finally, second heating scan from 25°C to 250°C, again at 10°C/min. The first scan was meant to erase the prior uncontrolled thermal history of the samples. The degree of crystallinity was determined by considering the melting enthalpy of 142 J/g for 100% crystalline PBT [34]. Both temperature and heat flow were previously calibrated using a standard indium sample. To study the electrospun samples surface morphology, a Leica Stereoscan 440 scanning electron microscope was used. All the samples were thinly sputter-coated with carbon using a Polaron E5100 sputter coater. The fibers diameter and their distribution were measured using an image analyser, namely, ImageJ 1.41 software. ## 2.1. Materials Cyclic oligomers of poly(butylene terephthalate) (c-PBT) were kindly supplied by Cyclics Corp.Octaisobutyl POSS (referred to as oib-POSS in the following) andtrans-cyclohexanediolisobutyl POSS (referred to as POSS-OH in the following) were purchased from Hybrid Plastics (USA) as crystalline powders and used as received. Chemical structures for oib-POSS (M=873.6 g/mol) and POSS-OH (M=959.7 g/mol) are reported in Figure 1.Figure 1 (a) Octaisobutyl POSS (oib-POSS) and (b)trans-cyclohexanediolisobutyl POSS (POSS-OH). 
(a) (b)Butyltin chloride dihydroxide was purchased from Sigma Aldrich and used as received. ## 2.2. POSS-Based Hybrid System Preparation Before accomplishing the hybrid preparation, c-PBT was dried overnight at 80°C. c-PBT was added to the glass reactor, namely, a laboratory internal mixer provided with a mechanical stirrer (Heidolph, type RZR1) which was connected to a vacuum line and evacuated for 30 min at 80°C. Then, the reactor was purged with helium for 30 min. The above operations were repeated at least three times to be sure to prevent humidity contact with the reagents. The reactor was placed in an oil bath at 190°C and when the monomer was completely molten, POSS and the catalyst were added under inert atmosphere. c-PBT/POSS systems were prepared by adding to the reaction mixture, both octaisobutyl andtrans-cyclohexanediolisobutyl POSS, at various concentrations, from 2 to 10 wt.%, by using a polymerization time of 10 minutes. Neat PBT was prepared and characterized under the same conditions, as reference material. Materials are identified in the text with the format polymer/POSS type (concentration), es.: PBT/POSS-OH(10). In order to evaluate the reaction yield after melt blending, all solid samples were broken into small pieces and purified from unreacted POSS by Soxhlet extraction with tetrahydrofuran (THF) for 48 h. The grafting yield was calculated by weighing composite samples before and after the above treatment. ## 2.3. Electrospun Fiber Preparation Polymeric solutions were prepared by dissolving 15 wt.% of PBT or PBT/POSS in the solvent mixture methylene chloride (MC, from Aldrich) and trifluoroacetic acid (TFA, from Aldrich) with a 1 : 1 v/v ratio. The solutions were stirred for 6 h at room temperature to reach complete PBT dissolution. Electrospun nanofibers were prepared by using a conventional electrospinning system [10]. 
The viscous fluid was loaded into a syringe (Model Z314544, diameter d = 11.6 mm, Aldrich Fortuna Optima) placed in the horizontal direction. A syringe pump (Harvard Apparatus Model 44 Programmable Syringe Pump) was used to feed the needle at a controlled flow rate of 0.003 mL/min. The needle of the syringe (diameter d = 0.45 mm) was connected to the positive electrode of a Gamma High Voltage Research power supply (Model ES30P-5W), which generated a constant voltage. The negative electrode was attached to the grounded collector, an aluminium sheet wrapped on a glass cylinder (height 4 cm, diameter 14.5 cm). The needle tip and the ground electrode were enclosed in a plastic hollow cylinder (height 30.5 cm, inner diameter 24 cm, and thickness 3.5 mm) internally coated with a polytetrafluoroethylene sheet (thickness 1 mm), which was equipped with an XS Instruments digital thermohygrometer (Model UR100, accuracy ±3% RH and ±0.8°C) as humidity and temperature sensor to monitor and control the ambient parameters (temperature around 21°C). A glass Brooks rotameter was used to keep the air flow (Fa) constant in the enclosed electrospinning space. The air flow was fed into the chamber at atmospheric pressure from an inlet placed behind the collector. ## 2.4. Characterization The intrinsic viscosity of the samples, dissolved at a concentration of 0.5 g/dL in a phenol/1,1,2,2-tetrachloroethane solvent mixture (60 : 40 w/w), was determined with an Ubbelohde viscometer thermostated at 30 ± 0.5°C in a water bath. Mv was obtained from the following Mark-Houwink equation [33]: [η] = 1.166 × 10⁻⁴ Mv^0.871 (1). 1H-NMR spectra were obtained on a Varian 300 NMR spectrometer, dissolving the samples in CDCl3. The 1H-NMR spectra were referenced to the residual solvent protons at ca. 7.26 ppm. Differential scanning calorimetry (DSC) was performed under a continuous nitrogen purge on a Mettler calorimeter, Model TC10A.
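The Mark-Houwink relation of Eq. (1) can be inverted to estimate Mv from a measured intrinsic viscosity; the sketch below (ours, not the authors' code) uses the constants as quoted in the text.

```python
# Sketch of Eq. (1): [eta] = K * Mv**a, with K = 1.166e-4 and a = 0.871
# as quoted in the text; mv_from_eta inverts the relation.
K, A = 1.166e-4, 0.871

def eta_from_mv(mv: float) -> float:
    """Intrinsic viscosity predicted for a viscosity-average molar mass."""
    return K * mv ** A

def mv_from_eta(eta: float) -> float:
    """Viscosity-average molar mass recovered from intrinsic viscosity."""
    return (eta / K) ** (1.0 / A)

# Round-tripping recovers the input molar mass, e.g. Mv of about 21000
# reported later for neat PBT.
```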
Both heat flow and temperature were calibrated on a run in which a standard sample (indium) was heated through its melting point. Samples with a mass between 5 and 11 mg were used. The procedure was as follows: first heating scan at 10°C/min from 25°C up to 250°C, then cooling at 10°C/min down to 25°C, and, finally, a second heating scan from 25°C to 250°C, again at 10°C/min. The first scan was meant to erase the prior uncontrolled thermal history of the samples. The degree of crystallinity was determined by considering a melting enthalpy of 142 J/g for 100% crystalline PBT [34]. To study the surface morphology of the electrospun samples, a Leica Stereoscan 440 scanning electron microscope was used. All the samples were thinly sputter-coated with carbon using a Polaron E5100 sputter coater. The fiber diameters and their distribution were measured using an image analyser, namely, the ImageJ 1.41 software. ## 3. Results and Discussion ### 3.1. Preparation and Characterization of Hybrid Systems The preparation of PBT/POSS hybrid systems, starting from c-PBT, has been carried out by introducing into the reaction medium two different silsesquioxane molecules, a potentially reactive one (POSS-OH) and a nonreactive one (oib-POSS). Apart from their functional groups, the two POSS molecules also differ in melting temperature (Tm), Tm being ca. 140°C for POSS-OH and ca. 250°C for oib-POSS. As such, it should be taken into account that this feature too could lead to different behaviors of the two silsesquioxanes during the polymerization process, as POSS-OH turns out to be molten at the polymerization temperature (190°C). Table 1 shows the reaction yields, calculated by extraction with THF, which is capable of solubilizing silsesquioxane molecules, the monomer, and the oligomers.

Table 1 Characteristics of the samples prepared.

| Sample code | POSS type | POSS conc. (wt.%) | Reaction yield (%) | Mv × 10⁻⁴ |
|---|---|---|---|---|
| PBT | — | 0 | 99 | 2.1 |
| PBT(POSS-OH2) | POSS-OH | 2 | 99 | 1.9 |
| PBT(POSS-OH5) | POSS-OH | 5 | 99 | 1.7 |
| PBT(POSS-OH10) | POSS-OH | 10 | 58 | 0.5 |
| PBT(oib-POSS5) | oib-POSS | 5 | 94 | 2.1 |

According to Table 1, it emerges that, with a catalyst concentration of 2 wt.%, it was possible to reach a yield close to 100%. It is of utmost relevance that the limited polymerization time applied, namely, 10 minutes, makes the process potentially applicable also to reactive extrusion polymerization. Concentrations of 2 and 5 wt.% of POSS-OH in the reaction mixture are found not to influence the reaction yield, which evidences both complete conversion of c-PBT and reactivity of POSS-OH under these conditions. On the other hand, a significantly lower reaction yield was obtained for PBT/POSS-OH(10), showing that a high POSS loading is detrimental for the polymerization of c-PBT. As for the system based on isobutyl-POSS, PBT/oib-POSS(5), it is likely that the yield drop (down to 94%) is due to complete extraction of the unreacted oib-POSS by the washing solvent, which evidences the efficiency of the experimental procedure for the extraction of unbound silsesquioxanes.Molecular masses of the POSS-based hybrids were found to decrease with increasing hydroxyl-silsesquioxane concentration in the reaction mixture. In particular, in the case of PBT/POSS-OH(10), prepared by adding 10 wt.% of POSS to the molten c-PBT, the reduction of molecular mass is significant, the sample being characterized by a molecular mass of only 5000, compared to about 21000 for pristine PBT. This finding highlights the active role of the hydroxyl-silsesquioxane in the polymerization reaction. Conversely, isobutyl-POSS seems not to influence the characteristics of the system prepared, the molecular mass of PBT/oib-POSS(5) being equal to that of the neat polymer.The structure of the synthesized systems has been investigated by 1H-NMR measurements.
Figure 2 shows the 1H-NMR spectra of neat PBT and of PBT/POSS-OH(5), both samples having been synthesized under the same conditions.Figure 2 1H-NMR spectra of (a) PBT and (b) PBT(POSS-OH5). Peaks at 8.09 ppm are due to the aromatic protons (1); those at 4.41 ppm are assigned to the methylene protons (2) attached to the -O- of the ester groups, and those at 2.19 ppm to the unshielded methylene protons (3) of the PBT unit.In the case of the PBT/POSS-OH(5) sample, based on 5 wt.% of POSS-OH, the general structure of the polymer seems not to change, but a new peak appears at ca. 1 ppm. This signal is due to the POSS moiety, the same peak being present also in the spectrum of the neat silsesquioxane [35]. This finding confirms that POSS-OH is indeed chemically bound to the PBT chains, the unreacted silsesquioxane having been removed by the purification process. On the basis of the previously described results and taking into account the information gathered by 1H-NMR measurements, it is possible to infer that the polymerization proceeds via a coordination-insertion mechanism, as depicted in Figure 3.Figure 3 Proposed polymerization mechanism of c-PBT and POSS-OH.The first step of the proposed mechanism consists of the coordination of both the monomer and a silsesquioxane molecule to the Lewis-acid Sn metal center. One of the hydroxyl groups of the silsesquioxane subsequently attacks the carbonyl carbon of the monomer, followed by ring opening via acyl-oxygen cleavage, which ultimately results in the insertion of a c-PBT unit into the O-H bond of the coordinated POSS. Clearly, the direct insertion of the silsesquioxane molecule into the polymer backbone indicates that POSS-OH, acting as an initiator, allows the obtainment of a hybrid system.
As a consequence, increasing the concentration of silsesquioxane in the reaction mixture increases the number of initiated PBT chains, resulting in a lower average molecular mass.The thermal properties obtained for the different formulations prepared are collected in Table 2. Neat PBT exhibits a double melting peak, consisting of a small peak at 210°C and a major one at 224°C. Indeed, multiple peaks, which are typical of polyesters, including PBT prepared from c-PBT, are attributed to melting and recrystallization processes of thinner and less perfect crystallites into thicker and more perfect crystalline structures with a correspondingly higher melting temperature [36–38]. As far as the POSS-based hybrids are concerned, similar melting peaks are observed. The presence of the silsesquioxane molecule at the chain end seems not to affect the thermal properties of the material, since, for the imposed thermal history, the overall crystallinity as well as the maximum achievable crystal thickness and order is almost the same for all samples.

Table 2 Thermal properties of the samples prepared.

| Sample code | POSS type | POSS conc. (wt.%) | Tm I (°C) | Tm II (°C) | ΔHm (J/g) | Xc (%) |
|---|---|---|---|---|---|---|
| PBT | — | 0 | 224 | 210 | 58 | 41 |
| PBT(POSS-OH2) | POSS-OH | 2 | 223 | 215 | 61 | 43 |
| PBT(POSS-OH5) | POSS-OH | 5 | 224 | 204 | 59 | 41 |
| PBT(POSS-OH10) | POSS-OH | 10 | 224 | 215 | 61 | 43 |
| PBT(oib-POSS5) | oib-POSS | 5 | 223 | 212 | 62 | 44 |

The sample PBT(POSS-OH10) deserves a particular comment, as its low molecular mass is not associated with the expected increase in crystallinity. However, it is necessary to consider also the influence of the POSS itself, which, being directly linked to the macromolecular chain, might limit the polymer organization, as already reported for other polymer matrices [39]. ### 3.2. Preparation and Characterization of Nanostructured Nanofibers Solutions of both the PBT/POSS-OH hybrids and neat PBT, prepared from c-PBT, were electrospun by applying the same conditions (type of solvent, polymer concentration, voltage, humidity, etc.)
used in a previous work [16]. Figure 4 compares a SEM micrograph of the nanofibers prepared from the PBT solution with those obtained by electrospinning the PBT/POSS-OH(5) solution. In the same figure, histograms of the nanofiber diameter distributions are reported.Figure 4 SEM micrograph of (a) PBT nanofibers and the relative diameter distribution; (b) PBT(POSS-OH5) nanofibers and the relative diameter distribution. Both mats are characterized by defect-free nanofibers, with no visible beads and with a similar average diameter, ca. 400 nm. Nevertheless, comparing the nanofiber diameter distributions, the PBT/POSS-OH nanofibers exhibit a narrower dimensional distribution than neat PBT. As the electrospinning conditions applied were the same for both materials, this phenomenon has to be related to the modification of the polymer solution properties (polymer solubility, viscosity, surface tension, etc.) as a consequence of POSS insertion into the PBT chains. Indeed, a similar effect was already assessed in the case of electrospun PVDF nanofibers prepared by adding POSS to the electrospinning solutions [12]. In that case, the phenomenon was ascribed to the silsesquioxane molecules, which, without influencing the solution viscosity or conductivity, favored the formation of uniform structures by decreasing the system surface tension. As far as our hybrid systems are concerned, taking into account the limited difference in molecular weight, which only weakly affects the solution viscosity, a similar effect of POSS on surface tension may be hypothesized. ## 4. Conclusions Novel hybrid systems based on poly(butylene terephthalate) (PBT) and hydroxyl-bearing polyhedral oligomeric silsesquioxanes (POSS) were developed by using cyclic poly(butylene terephthalate) oligomers (c-PBT) as the monomer system. The polymerization reaction was found to occur through a coordination-insertion mechanism in which the silsesquioxane molecules, acting as initiators, remain attached to the polymer backbone. It is of utmost interest that the catalyst used, namely, butyltin chloride dihydroxide, allowed complete conversion to be reached with a polymerization time (10 minutes) close to the processing times attainable in melt reactive extrusion.
Complete polymerization yields were obtained even in the presence of reactive POSS, for concentrations up to 5 wt.%, whereas the molecular mass was found to decrease with increasing POSS concentration.Among the possible applications of the prepared PBT/POSS hybrid systems, the possibility of obtaining nanofibers by electrospinning was successfully assessed, yielding defect-free fibers with an average diameter of 400 nm. Furthermore, the nanostructured nanofibers were found to be more homogeneous in diameter than those prepared starting from a neat PBT solution. --- *Source: 101674-2015-04-23.xml*
2015
# Trace Lead Measurement and Online Removal of Matrix Interference in Geosamples by Ion-Exchange Coupled with Flow Injection and Hydride Generation Atomic Fluorescence Spectrometry

**Authors:** Chun-Hua Tan; Xu-Guang Huang
**Journal:** Journal of Automated Methods and Management in Chemistry (2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101679

---

## Abstract

A flow injection method has been developed for the direct determination of free available Pb(II). The method is based on the chemical sorption of Pb(II), from pH 7 solutions, on a column packed with a chelating resin. The retained complex was afterwards eluted with hydrochloric acid, followed by hydride generation with reduction by tetrahydroborate. The preconcentration system proposed in this paper allows the elimination of a great part of the saline content of the sample. The chemical variables and FI parameters were thoroughly examined. With a sampling volume of 10.5 mL, quantitative retention of Pb(II) was obtained, along with an enrichment factor of 40 and a sampling frequency of 15 h−1. The detection limit, defined as 3 times the blank standard deviation (3σ), was 0.0031 ng ml−1. The precision was characterized by an RSD value of 3.78% (at the 4 ng ml−1 level, n = 11). The developed method has been applied to the determination of trace Pb in three standard reference materials. Accuracy was assessed by comparing the results with the accepted values.

---

## Body

## 1. Introduction

There is an ongoing need to determine lead because of its extensive distribution and high toxicity; lead is among the most toxic heavy metals for human health. The main methods for lead measurement include GFAAS [1], ICP-MS [2], and HG-AFS [3]. ICP-MS and HG-AFS have lower detection limits than GFAAS. The HG-AFS method, however, uses much simpler and cheaper equipment and is therefore more practical in laboratory use.
However, despite the sensitivity and selectivity of atomic fluorescence spectrometry (AFS), preconcentration of trace lead prior to its determination is often necessary, basically because of its low concentrations or the matrix interferences in aqueous samples. To improve sensitivity and selectivity, preconcentration procedures such as liquid-liquid extraction, precipitation/coprecipitation, ion-exchange, and solid phase extraction (SPE) are generally used before the detection step. Ion-exchange and SPE techniques have become increasingly popular because of their several major advantages: high enrichment/collection factors, better removal of interfering ions, high performance and rate of the reaction process, and the possibility of combination with several detection methods. Coupling ion-exchange/SPE with flow injection (FI) on-line microcolumn separation and preconcentration has proved effective. This combination not only improves the detection limits but also reduces interference from the matrix. The approach has been proposed for the determination of trace lead in diverse samples; representative examples reported in recent years follow.Lead in drinking water was preconcentrated as 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol complexes on a minicolumn packed with Amberlite XAD-16 prior to its determination by ICP-AES using pneumatic nebulization [4]. A flow injection method using a minicolumn loaded with 8-hydroxyquinoline immobilized on controlled-pore glass was also described for the determination of trace lead along with copper, cadmium, zinc, nickel, and iron by ion chromatography [5]. On-line preconcentration and simultaneous determination of heavy metal ions in different water samples by ICP-AES were carried out using retention of diethyldithiocarbamate chelates on an octadecyl silica minicolumn [6].
A cationic resin (Chelex 100) was also used for preconcentration and elimination of interferences in the spectrophotometric determination of lead in water samples [7]. Methylthiosalicylated silica gel and chitosan were used for preconcentration of lead for ICP-AES determination [8] and for spectrophotometric detection of the lead-dithizone complex in aqueous medium [9], respectively. Lead in seawater was complexed with 8-hydroxyquinoline-5-sulfonic acid (8-HQS) and then collected on a minicolumn filled with florisil [10]. Total lead and lead isotope ratios in natural waters were determined using sorption of lead complexes with 1-phenyl-3-methyl-4-benzoylpyrazol-5-one on the inner walls of a PTFE knotted reactor in advance of on-line ICP-TOFMS detection [11]. Lead in wine and water was preconcentrated on a minicolumn filled with polyurethane foam modified with 2-(2-benzothiazolylazo)-p-cresol [12] or with Pb-Spec resin [13] for the FAAS determination of lead. For the determination of lead in environmental samples, the on-line formed lead-pyrrolidinedithiocarbamate complex was sorbed on polyurethane foam, subsequently eluted with 2-methyl-4-pentanone, and determined by FAAS [14]. A chelating resin, Muromac A-1 [15], and a new packing material, acrylic-acid-grafted PTFE fibers [16], were used for on-line preconcentration of lead in urine and in environmental and biological samples, respectively. Mai Kuramochi has reported the flow injection determination of lead in iron and steel [17], river water [18], glazed ceramic, and sea water [18, 19] using Pb-Spec resin for on-line preconcentration of lead and atomic spectroscopic detection. A nanometer-sized alumina-packed microcolumn was used for on-line preconcentration of V, Cr, Mn, Co, Ni, Cu, Zn, Cd, and Pb in environmental samples [20]. Dimitrova-Koleva et al. [21] have developed a procedure for the separation and preconcentration of traces of Ag, Cd, Co, Ni, Pb, U, and Y from natural water samples with subsequent detection by ICP-TOFMS.
PCTFE beads have also been used for on-line preconcentration of chromium(VI) and lead in water samples [22]. The detection techniques of the aforementioned reports are mostly ICP-AES and AAS; to the best of our knowledge, few studies have addressed the use of ion-exchange coupled with flow injection and HG-AFS for the determination of trace lead.The aim of this work was to develop a sensitive and selective FI-HG-AFS method for the determination of trace lead and to investigate the potential of this "more refined" on-line separation method. The matrix of the samples is removed on-line by an FI system and a microcolumn filled with D401 resin. The extent of the interference effect can be minimized by increasing the concentration of the analyte while keeping the interferent at a minimum. The analyte collected on the column was eluted with HCl, which also served to facilitate hydride generation. For the sake of improving the performance of the procedure, special attention was given to the design of the FI manifolds. The detection limit of this procedure is comparable or even superior to those obtained with detection by HG-ICPMS [10, 23, 24] or HG-ETAAS [25], while a significant improvement was achieved compared with published FI-HG-AFS procedures [26]. The proposed method, which is convenient, low cost, and sensitive, was successfully applied to the analysis of environmental samples, and its accuracy was tested by the analysis of certified reference materials. ## 2. Experimental ### 2.1. Apparatus An AF-610 atomic fluorescence spectrometer with a commercial gas-liquid separator (Beijing Raileigh Analytic Instrument Corporation) was used; the operating parameters of the AFS instrument are summarized in Table 1. A lead hollow cathode lamp was used as the radiation source.
The hydride and the hydrogen generated were separated from the liquid in the first-stage gas-liquid separator (GLS1), swept by an argon flow through the second-stage gas-liquid separator (GLS2), and finally carried into the atomizer, where the hydride was atomized in an argon-hydrogen flame.

Table 1 Operating parameters of the AFS.

| Parameter | Value |
|---|---|
| PMT voltage | 320 mV |
| Main lamp current | 80 mA |
| Lamp ancillary electrode current | 30 mA |
| Argon carrier gas flow rate | 600 ml min−1 |
| Atomization temperature | Low (200°C) |
| Observation height | 7 mm |

The flow injection analytical system applied was a JTY-1B FI multifunction solution autohandling system (Faculty of Material Science and Chemical Engineering, China University of Geosciences). Figure 1 shows the manifold for on-line ion-exchange used in this study, and the manifold program for this FI system is shown in Table 2. All the tubes used were 1 mm i.d. PTFE tubing.

Table 2 The operating program of the FI ion-exchange system.

| Step | Time Pa (s) | Time Pb (s) | Pump rate Pa (ml min−1) | Pump rate Pb (ml min−1) | Valve position | Description |
|---|---|---|---|---|---|---|
| 0 | 180 | 180 | 3.5 | 0 | 0 | Pa is active; the sample is loaded and lead is preconcentrated in columns a and b |
| 1 | 1 | 1 | 0 | 0 | 0 | Pa stops; sample loops dip into water |
| 2 | 10 | 10 | 3.5 | 0 | 0 | Water pushes the remaining sample in the loop through the columns |
| 3 | 5 | 5 | 0 | 2 | 1 | Pb is active; the remaining water in the loop is pushed out by air |
| 4 | 1 | 1 | 0 | 0 | 1 | Pa stops; sample loops dip into the eluent |
| 5 | 20 | 20 | 0 | 2 | 1 | Pa drives the eluent through columns b and a in series |
| 6 | 20 | 20 | 0 | 7 | 1 | Pb and Pc are active; the sample reacts with NaBH4 and is pushed to the AFS for determination |
| 7 | 2 | 2 | 4 | 0 | 0 | Tube is washed with water |

Diagram of the flow system used for the preconcentration and determination of lead by ion-exchange coupled with HG-AFS: (a) the sample loading process and (b) the elution process. Pa, Pb, and Pc, peristaltic pumps; V, 8-channel rotary injection valve; S, sample; a and b, microcolumns; T, T-tube; W, waste. ### 2.2. Reagents All chemicals were of analytical reagent grade, and deionized water was used throughout.
Working standard solutions of lead were prepared by appropriate stepwise dilution of a 1000 mg l−1 stock standard solution to the required μg l−1 levels just before use. A 10 g l−1 sodium tetrahydroborate solution containing 20 g l−1 potassium ferricyanide (K3Fe(CN)6) was prepared by dissolving the NaBH4 and K3Fe(CN)6 reagents in 2 g l−1 sodium hydroxide solution just before use; a 3% (v/v) HCl solution was used as the eluent. A 5% (m/v) NaOH solution and a 10% (m/v) sulfocarbamide solution were also used in this work.The ion-exchange resin minicolumns were built by packing the D401 chelating resin (about 0.08 g, 60 mesh) into a 3.0 cm × 2.0 mm i.d. Teflon tube. Plugs of glass wool were placed at both ends of the column to avoid resin loss during system operation. ### 2.3. Sample Pretreatment Three certified reference materials, GSD-8, GBW-07114, and GSD-6 (National Center for Standard Materials, Beijing, China), were used for the validation of the developed methodology. 0.5000 g of sample, precisely weighed into a 30 ml PTFE crucible, was wetted with 5 mL of hydrofluoric acid and then treated with 7 mL of concentrated hydrofluoric acid and 2 ml of perchloric acid, heating for 10–20 minutes at low temperature on a sand bath until dense white fumes appeared. After cooling, 3 ml of nitric acid was added, and the mixture was again heated until dense white fumes appeared. After cooling, 4 ml of 50% (v/v) HCl was added to the crucible to dissolve the residue; the final digests of the solid samples were adjusted to pH 7 with the 5% (m/v) NaOH solution and diluted to the mark with water in a 100 ml volumetric flask.A calibration graph was plotted for the standards of each sample. Standards and analytical blanks were treated in the same way as the samples. ### 2.4. Operating Procedure The diagram of the FI manifold and its operational sequence are represented in Figure 1 and Table 2, respectively. In step 1, the sample was pumped through the microcolumn.
The sampling volume is controlled by the sampling time. In the next step, the column was washed with deionized water to remove any salt residues coming from the matrix. In step 3, the analyte adsorbed in the column was eluted with HCl (3% v/v) solution and led into the gas-liquid separator. After the determination, the columns were washed with deionized water to return them to the condition for loading the next sample. The flow system was operated in the time-based mode, and deionized water served as the carrier stream.

## 3. Results and Discussion

In order to achieve the most efficient performance in terms of highest analytical sensitivity and lowest deviation of signals (measurement precision), some experimental parameters were investigated.
After an initial assessment to select approximate values for each parameter, optimization of the variables was carried out by the univariate method.

### 3.1. Development of the Flow Injection Ion-Exchange Manifold

The flow system in our work was optimized to achieve a better enrichment efficiency and detection limit. In previous reports, the FI ion-exchange manifold was designed to preconcentrate metal ions using a single column, which is tedious and time-consuming and makes the enrichment factor and sampling frequency hard to improve. To solve this problem, a double-column system is used in this paper: the sample was pumped through columns a and b simultaneously (Figure 1), while in the elution stage the eluent was pumped through columns a and b in series. Comparison experiments showed that the signal peak of the double-column system is twice that of the single-column system. Before the elution step, air is used to push the waste water out of the loop, which is an improvement of the manifold introduced in our work. With this amelioration, the sensitivity and reproducibility are improved effectively. Table 3 shows the fluorescence intensity of a lead solution obtained with the two eluting processes under the same HG-AFS determination conditions.

Table 3: Comparison of different eluting processes (4 ng ml⁻¹ Pb).

| Type | Fluorescence intensity | RSD (%, n = 10) |
|---|---|---|
| With the step of air pushing out water | 765 | 3.5 |
| Without the step of air pushing out water | 586 | 8.8 |

### 3.2. The AFS Parameters

The AFS parameters, including lamp current, atomizer height, negative high voltage of the photomultiplier, and carrier argon flow, were investigated in terms of sensitivity and reproducibility. The optimized parameters are summarized in Table 1.

### 3.3. The Choice of the Resin and the Medium of Hydride Generation

In this paper, D401 chelating resin is used as the ion-exchanger for lead; experiments show that the 60 mesh resin has the optimal enrichment efficiency.
HCl solution was chosen both for eluting lead and as the medium for hydride generation. With NaBH4 as the reducing reagent, its concentration significantly affects hydride generation. The results showed that the signal was enhanced as the NaBH4 concentration increased up to 1.5% (w/v), while an even higher concentration led to a deterioration of the sensitivity. Considering the sensitivity and reagent saving, a concentration of 1% (w/v) was therefore employed.

### 3.4. Optimization of the Chemical Variables

#### 3.4.1. The Effect of Sample Acidity on the Retention of Lead

A 4 ng ml⁻¹ lead standard solution was used to optimize the acidity of the sample. The sample acidity was adjusted with hydrochloric acid within the range of 0~8% (0, 0.2%, 0.4%, 0.8%, 2%, 4%, 8%, v/v). The results are shown in Figure 2, which indicates that the maximum retention of lead occurs in neutral media and that the fluorescence intensity declines markedly between 0~0.8% HCl. However, the retention does not change markedly between 0.8~8% HCl. The explanation is that the chelating agent takes its chelating form in alkaline medium, while it is preferentially protonated when the pH is low. In our subsequent experiments, pH 7 was selected.

Figure 2: Effects of sample acid concentration on the chelating reaction between Pb and the resin (4 ng ml⁻¹ Pb solution; 180 seconds sample load time; injection speed 3.5 ml min⁻¹; 10 seconds washing time).

#### 3.4.2. Eluting Acid Concentration and Speed

The lead chelate adsorbed on the column could be eluted by hydrochloric acid of different concentrations and at different eluting speeds. The results (Figure 3) show that the signal increased as the HCl concentration increased to 4%; then a small decrease was observed in the range 4–14%. The decrease of the signal at concentrations higher than 4% may be due to dilution of the analyte by the excessive generation of hydrogen, which is a byproduct of the hydride generation reaction.
Although the maximum of the signal occurred at an HCl concentration of 4%, the hydride generation of lead is more efficient in the range 1%–3% (HCl, v/v); taking into account both the eluting acidity and the hydride-generation acidity, 3% HCl was selected throughout.

Figure 3: Effects of eluent acid concentration and eluting speed (4 ng ml⁻¹ Pb solution; 180 seconds sample load time; injection speed 3.5 ml min⁻¹; 10 seconds washing time). (a) Different concentrations of HCl versus signal of Pb. (b) Eluting speed versus signal of Pb, eluting with 3% HCl.

The effect of the eluting speed was similarly examined within the range 2–7 ml min⁻¹. The results (Figure 4) indicated that the determination signal reached a maximum at 1 ml min⁻¹, and the sensitivity dropped as the eluting speed increased to 3 ml min⁻¹; afterwards the curve leveled off, and only a small change was observed within the range of 3–5 ml min⁻¹. At higher eluting speeds, a significant decrease of the signal was observed. Although a lower speed is preferable for effective eluting, it takes a longer time and sacrifices the sampling frequency; as mentioned above, flow resistance is frequently encountered at higher flow rates. As a compromise, an eluting speed of 3 ml min⁻¹ was employed for further experiments. The speed of the NaBH4 solution during hydride generation was also optimized, by adopting an identical speed for the two streams.

Figure 4: Sample injection speed (4 ng ml⁻¹ Pb solution; pH 7; washing with water for 10 seconds; 180 seconds sample load time; eluting with 3% HCl).

#### 3.4.3. Sampling Time and Speed

The sample load time and injection speed were optimized to achieve adequate enrichment of lead in the column. The results show that, as the sample load time increases, the signal increases almost linearly at a constant injection speed.
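This near-linear growth is what a simple mass balance predicts: at a fixed injection speed and quantitative retention, the amount of lead delivered to the column, and hence the signal, is proportional to the load time. A minimal sketch of that relation (the function and variable names are ours, not from the paper):

```python
def loaded_mass_ng(conc_ng_per_ml: float, speed_ml_per_min: float,
                   load_time_s: float) -> float:
    """Mass of analyte delivered to the column during loading.

    Assumes quantitative (100%) retention, so the AFS signal is
    proportional to this mass.
    """
    return conc_ng_per_ml * speed_ml_per_min * (load_time_s / 60.0)

# Doubling the load time doubles the delivered mass (linear signal growth):
m1 = loaded_mass_ng(4.0, 3.5, 90)
m2 = loaded_mass_ng(4.0, 3.5, 180)
assert abs(m2 - 2 * m1) < 1e-9
```

The same relation explains the trade-off discussed next: a longer load time raises the signal but costs sample, reagents, and throughput.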
Although the enrichment factor for lead in the column could be improved by a longer sample load time, the analytical time would also be prolonged and the consumption of reagents would increase. For these reasons, the sample load time was set at 180 seconds. For a 180-second sample loading time, the results of a sample injection speed test are shown in Figure 4. At first, the signal increased with injection speed, but the increment of signal diminished when the injection speed rose above about 3.5 ml min⁻¹. This is explained by the fact that, as the injection speed increases, the lead chelate cannot be fully retained in the column because of the shortened contact time between the chelate and the sorbent. Thus, an injection speed of 3.5 ml min⁻¹ was selected.

### 3.5. Interferences

The potential interfering effects of some foreign species frequently encountered in geosamples were tested with the present procedure. At a Pb concentration of 10 ng/ml, Fe (5 µg/ml); Cr, Ca, Zr, V (500 µg/ml); Al (5 mg/ml); As3+, Se2+ (50 µg/ml); and Cu, Ni (100 µg/ml) did not interfere with the determination (higher concentration levels were not tested). For general geosamples, the contents of the above metal ions in sample digests, or after appropriate dilution, will not exceed these tolerable concentration levels. If the contents of Fe and Cu exceed the tolerable levels, they can be masked with 10% (m/v) sulfocarbamide solution. So in most cases the present procedure can be employed directly, and no further treatment with masking reagents is needed.

### 3.6. Performance and Validation of the Procedure

Under the optimal conditions, the performance data obtained for the flow injection on-line ion-exchange preconcentration system for Pb with hydride generation atomic fluorescence spectrometry are summarized in Table 4.
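The optimized load time (180 s) and injection speed (3.5 ml min⁻¹) fix the sample volume consumed per determination; a quick consistency check in code (the helper name is ours):

```python
def loading_volume_ml(speed_ml_per_min: float, load_time_s: float) -> float:
    """Sample volume pumped through the columns during loading."""
    return speed_ml_per_min * load_time_s / 60.0

volume = loading_volume_ml(3.5, 180)   # 3.5 ml/min for 180 s -> 10.5 ml
enrichment_factor = 40                 # value reported for this system
# At quantitative retention, EF ~ loading volume / effective eluate volume,
# so the analyte behaves as if concentrated into roughly:
effective_volume = volume / enrichment_factor   # about 0.26 ml
```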
With a sample loading volume of 10.5 ml and a retention time of 180 seconds, an enrichment factor of 40 and a sampling frequency of 15 h⁻¹ were obtained. The detection limit (3σ) was 0.0031 ng·ml⁻¹, and the relative standard deviation (RSD) was 3.78% (n = 11) at the 4 ng·ml⁻¹ level.

Table 4: Performance data for the on-line ion-exchange preconcentration HG-AFS system.

| Parameter | Value |
|---|---|
| Calibration range | 0–10 µg·l⁻¹ |
| Regression equation (fluorescence intensity versus concentration, µg·l⁻¹) | Y = 102.33x + 13.52 |
| Correlation coefficient | 0.9991 |
| Sampling frequency | 15 h⁻¹ |
| Enrichment factor | 40 |
| Detection limit (3σ) | 0.0031 ng·ml⁻¹ |
| Relative standard deviation (4 ng·ml⁻¹ of Pb, n = 11) | 3.78% |
| Sample consumption | 10.5 mL |

At a similar precision level, a comparison of the detection limit of the present procedure (3.1 ng l⁻¹) with the reported one based on a hydride generation protocol with AFS detection [26] (8 ng l⁻¹) shows that the present protocol is superior to the published procedures; that is, the method of this paper is much improved with respect to HG-AFS-based procedures. The procedure was validated using the certified reference materials GSD-8, GBW-07114, and GSD-6. The obtained results are summarized in Table 5 and agree with the certified values.

Table 5: Analysis of reference materials (ω, µg·g⁻¹, n = 10).

| Sample | Recommended value | Found value | RSD (%) |
|---|---|---|---|
| GSD-8 | 20.0 | 18.2 | 4.8 |
| GBW-07114 | 4.4 | 4.1 | 5.2 |
| GSD-6 | 27.0 | 25.6 | 4.6 |

## 4. Conclusions

A method combining an on-line FI separation system with HG-AFS for the determination of sub-trace lead in geosamples has been developed. Parameters of the operating system, including the pH of the chelating reaction, the sample loading time and injection speed, the eluting acid concentration and eluting speed, and the instrumental parameters of the HG-AFS, were optimized and selected. The detection limit of this method, estimated as 3× the standard deviation of the procedural blank, was 3.1 ng l⁻¹.
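A 3σ blank-based detection limit of this kind is obtained by dividing three times the blank standard deviation by the calibration slope (102.33 intensity units per µg l⁻¹ here). The sketch below illustrates the calculation; the blank readings are invented for illustration only and merely give a value of the same order as the reported limit:

```python
import statistics

# Calibration slope: fluorescence intensity per (µg/l) of Pb
SLOPE = 102.33

# Hypothetical replicate blank fluorescence intensities (not measured data)
blank_signals = [13.4, 13.6, 13.5, 13.7, 13.3, 13.6,
                 13.5, 13.4, 13.6, 13.5, 13.7]

sigma_blank = statistics.stdev(blank_signals)  # sample std. dev. of blanks
dl_ug_per_l = 3 * sigma_blank / SLOPE          # 3-sigma criterion, in µg/l
dl_ng_per_l = dl_ug_per_l * 1000               # convert to ng/l
```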
Three standard reference materials were used to assess the accuracy of the method. The Pb concentrations measured were in good agreement with the certified values, so the method can be used to determine sub-trace lead in high-salt samples.

---
# Trace Lead Measurement and Online Removal of Matrix Interference in Geosamples by Ion-Exchange Coupled with Flow Injection and Hydride Generation Atomic Fluorescence Spectrometry

**Authors:** Chun-Hua Tan; Xu-Guang Huang

**Journal:** Journal of Automated Methods and Management in Chemistry (2009)

**Category:** Chemistry and Chemical Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2009/101679
---

## Abstract

A flow injection method has been developed for the direct determination of free available Pb(II). The method is based on the chemical sorption of Pb(II), from pH 7 solutions, on a column packed with chelating resin. The retained complex was afterwards eluted with hydrochloric acid, followed by hydride generation with reduction by tetrahydroborate. The preconcentration system proposed in this paper allows the elimination of a great part of the saline content of the sample. A thorough scrutiny was made of the chemical variables and FI parameters. With a sampling volume of 10.5 mL, quantitative retention of Pb(II) was obtained, along with an enrichment factor of 40 and a sampling frequency of 15 h⁻¹. The detection limit, defined as 3 times the blank standard deviation (3σ), was 0.0031 ng ml⁻¹. The precision was characterized by an RSD value of 3.78% (at the 4 ng·ml⁻¹ level, n = 11). The developed method has been applied to the determination of trace Pb in three standard reference materials. Accuracy was assessed by comparing the results with the accepted values.

---

## Body

## 1. Introduction

There is an ongoing need to determine lead because of its extensive distribution and high toxicity; lead is among the most toxic heavy metals for human health. The main methods for lead measurement include GFAAS [1], ICP-MS [2], and HG-AFS [3]. ICP-MS and HG-AFS have lower detection limits than GFAAS. The HG-AFS method, however, uses much simpler and cheaper equipment and therefore is more practical in laboratory use. However, despite the sensitivity and selectivity of atomic fluorescence spectrometry (AFS), there is a great need for the preconcentration of trace lead prior to its determination, basically due to its low concentrations or the matrix interferences in aqueous samples.
To improve the sensitivity and selectivity, preconcentration procedures such as liquid-liquid extraction, precipitation/coprecipitation, ion-exchange, and solid phase extraction (SPE) are generally used before the detection. Ion-exchange and SPE techniques have become increasingly popular because of several major advantages: high enrichment/collection factors, better removal of interfering ions, high performance and rate of the reaction process, and the possibility of combination with several detection methods. Ion-exchange/SPE coupled with flow injection (FI) on-line microcolumn separation and preconcentration techniques has proved to be a good approach. This combination not only improves the detection limits but also reduces interference from the matrix. The method has been proposed for the determination of trace lead in diverse samples; the following examples have been reported in recent years. Lead in drinking water was preconcentrated as 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol complexes on a minicolumn packed with Amberlite XAD-16 prior to its determination by ICP-AES using pneumatic nebulization [4]. A flow injection method using a minicolumn loaded with 8-hydroxyquinoline immobilized on controlled-pore glass was also described for the determination of trace lead along with copper, cadmium, zinc, nickel, and iron by ion chromatography [5]. On-line preconcentration and simultaneous determination of heavy metal ions in different water samples by ICP-AES were carried out using retention of diethyldithiocarbamate chelates on an octadecyl silica minicolumn [6]. A cationic resin (Chelex 100) for preconcentration and elimination of interferences was also used for the spectrophotometric determination of lead in water samples [7].
Methylthiosalicylated silica gel and chitosan were used for preconcentration of lead for ICP-AES determination [8] and for spectrophotometric detection of the lead-dithizone complex in aqueous medium [9], respectively. Lead in seawater was complexed with 8-hydroxyquinoline-5-sulfonic acid (8-HQS) and then collected on a minicolumn filled with florisil [10]. Total lead and lead isotope ratios in natural waters were determined using sorption of lead complexes with 1-phenyl-3-methyl-4-benzoylpyrazol-5-one on the inner walls of a PTFE knotted reactor in advance of on-line ICP-TOFMS detection [11]. Lead in wine and water was preconcentrated on a minicolumn filled with polyurethane foam modified with 2-(2-benzothiazolylazo)-p-cresol [12] or with Pb-Spec resin [13] for FAAS determination. For the determination of lead in environmental samples, the on-line formed lead-pyrrolidinedithiocarbamate complex was sorbed on polyurethane foam, subsequently eluted with 2-methyl-4-pentanone, and determined by FAAS [14]. A chelating resin, Muromac A-1 [15], and a new packing material, acrylic acid grafted PTFE fibers [16], were used for on-line preconcentration of lead in urine and in environmental and biological samples, respectively. Mai Kuramochi has reported the flow injection determination of lead in iron and steel [17], river water [18], glazed ceramics, and seawater [18, 19] using Pb-Spec resin for on-line preconcentration of lead and atomic spectroscopic detection. A nanometer-sized alumina packed microcolumn was used for on-line preconcentration of V, Cr, Mn, Co, Ni, Cu, Zn, Cd, and Pb in environmental samples [20]. Dimitrova-Koleva et al. [21] developed a method for the separation and preconcentration of traces of Ag, Cd, Co, Ni, Pb, U, and Y from natural water samples with subsequent detection by ICP-TOFMS, and PCTFE beads were used for on-line preconcentration of chromium(VI) and lead in water samples [22]. 
The detection techniques of the aforementioned reports are mostly ICP-AES and AAS; to the best of our knowledge, few research works have reported the use of ion-exchange coupled with flow injection and HG-AFS for the determination of trace lead. The aim of this work was to develop a sensitive and selective FI-HG-AFS method for the determination of trace lead and to investigate the potential of this “more refined” on-line separation method. The matrix of the samples is removed on-line by an FI system with a microcolumn filled with D401 resin. The extent of interference can be minimized by increasing the concentration of the analyte while keeping the interferents at a minimum. The analyte collected on the column was eluted with HCl, which was then used directly to facilitate hydride generation. To improve the performance of the procedure, special attention was given to the design of the FI manifolds. The detection limit of this procedure is comparable or even superior to those obtained with detection by HG-ICPMS [10, 23, 24] or HG-ETAAS [25], and a significant improvement was achieved compared with published FI-HG-AFS procedures [26]. The proposed method, which is convenient, low-cost, and sensitive, was successfully applied to the analysis of environmental samples, and its accuracy was verified by the analysis of certified reference materials.

## 2. Experimental

### 2.1. Apparatus

An AF-610 atomic fluorescence spectrometer with a commercial gas-liquid separator (Beijing Raileigh Analytic Instrument Corporation) was used; the operating parameters of the AFS instrument are summarized in Table 1. A lead hollow cathode lamp was used as the radiation source. 
The hydride and hydrogen generated were separated from the liquid in the first-stage gas-liquid separator (GLS1), swept by an argon flow through the second-stage gas-liquid separator (GLS2), and finally carried into the atomizer, where the hydride was atomized in an argon-hydrogen flame.

Table 1. Operating parameters of the AFS.

| Parameter | Value |
|---|---|
| PMT negative high voltage | 320 V |
| Main lamp current | 80 mA |
| Lamp ancillary electrode current | 30 mA |
| Argon carrier gas flow rate | 600 ml·min−1 |
| Atomization temperature | Low (200°C) |
| Observation height | 7 mm |

The flow injection analytical system was a JTY-1B FI multifunction solution autohandling system (Faculty of Material Science and Chemical Engineering, China University of Geosciences). Figure 1 shows the manifold for on-line ion-exchange used in this study, and the manifold program for the FI system is shown in Table 2. All tubes used were 1 mm i.d. PTFE tubing.

Table 2. The operating program of the FI ion-exchange system (Pa/Pb: values for pumps Pa and Pb).

| Step | Time (s), Pa/Pb | Pump rate (ml·min−1), Pa/Pb | Valve position | Description |
|---|---|---|---|---|
| 0 | 180/180 | 3.5/0 | 0 | Pa is active; the sample is loaded and lead is preconcentrated on columns a and b |
| 1 | 1/1 | 0/0 | 0 | Pa stops; the sample loops dip into water |
| 2 | 10/10 | 3.5/0 | 0 | Water pushes the remaining sample in the loop through the columns |
| 3 | 5/5 | 0/2 | 1 | Pb is active; the remaining water in the loop is pushed out by air |
| 4 | 1/1 | 0/0 | 1 | Pa stops; the sample loops dip into the eluent |
| 5 | 20/20 | 0/2 | 1 | Eluent is driven through columns b and a in series |
| 6 | 20/20 | 0/7 | 1 | Pb and Pc are active; the eluate reacts with NaBH4 and is pushed to the AFS for determination |
| 7 | 2/2 | 4/0 | 0 | The tube is washed with water |

Figure 1. Diagram of the flow system used for the preconcentration and determination of lead by ion-exchange coupled with HG-AFS: (a) the sample loading process and (b) the elution process. Pa, Pb, Pc: peristaltic pumps; V: 8-channel rotary injection valve; S: sample; a and b: microcolumns; T: T-tube; W: waste.

### 2.2. Reagents

All chemicals were of analytical reagent grade, and deionized water was used throughout. 
Working standard solutions of lead were prepared by appropriate stepwise dilution of a 1000 mg·l−1 stock standard solution to the required μg·l−1 levels just before use. A 10 g·l−1 sodium tetrahydroborate solution containing 20 g·l−1 potassium ferricyanide (K3Fe(CN)6) was prepared by dissolving the NaBH4 and K3Fe(CN)6 reagents in 2 g·l−1 sodium hydroxide solution just before use; 3% HCl (v/v) solution was used as the eluent. A 5% NaOH (m/v) solution and a 10% (m/v) sulfocarbamide solution were also used in this work. The ion-exchange resin minicolumns were built by packing the D401 chelating resin (about 0.08 g, 60 mesh) into a 3.0 cm × 2.0 mm i.d. Teflon tube. Plugs of glass wool were placed at both ends of the column to avoid resin loss during system operation.

### 2.3. Sample Pretreatment

Three certified reference materials, GSD-8, GBW-07114, and GSD-6 (National Center for Standard Materials, Beijing, China), were used for the validation of the developed methodology. 0.5000 g of the sample was precisely weighed into a 30 ml PTFE crucible, wetted with 5 mL of hydrofluoric acid, and then treated with 7 mL of concentrated hydrofluoric acid and 2 ml of perchloric acid, heating for 10–20 minutes at low temperature on a sand bath until dense white fumes appeared. After cooling, 3 ml of nitric acid was added, and the mixture was heated again until dense white fumes appeared. After cooling, 4 ml of 50% HCl (v/v) was added to the crucible to dissolve the residue; the final digests of the solid samples were adjusted to pH 7 with 5% NaOH (m/v) solution and diluted to the mark with water in a 100 ml volumetric flask. A calibration graph was plotted for the standards of each sample; standards and analytical blanks were treated in the same way as the samples.

### 2.4. Operating Procedure

The diagram of the FI manifold and its operational sequence are presented in Figure 1 and Table 2, respectively. In step 1, the sample was pumped through the microcolumn. 
The sampling volume is controlled by the sampling time. In the next step, the column was washed with deionized water to remove any salt residues from the matrix. In step 3, the analyte adsorbed on the column was eluted with HCl (3% v/v) solution and led into the gas-liquid separator. After the determination, the columns were washed with deionized water to restore them for loading the next sample. The flow system was operated in the time-based mode, and deionized water served as the carrier stream. 
## 3. Results and Discussion

In order to achieve the most efficient performance in terms of analytical sensitivity and measurement precision, several experimental parameters were investigated. 
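Screening of this kind is typically done one variable at a time (the univariate method): each parameter's grid is scanned while the others are held at their current best values. A minimal sketch, where `toy_signal` is a hypothetical stand-in for the measured fluorescence response (the real optimization in this work was experimental, not simulated):

```python
# One-variable-at-a-time (univariate) optimization sketch.
# measure() is a hypothetical stand-in for reading the fluorescence
# signal at a given combination of settings.

def univariate_optimize(measure, grids, start):
    """Optimize each parameter in turn, holding the others fixed."""
    best = dict(start)
    for name, values in grids.items():
        # Scan this parameter's grid with all other parameters fixed.
        signals = {v: measure({**best, name: v}) for v in values}
        best[name] = max(signals, key=signals.get)
    return best

# Toy response peaking at pH 7 and an eluent concentration of 3 (% HCl, v/v).
def toy_signal(p):
    return -((p["pH"] - 7) ** 2) - (p["eluent_pct"] - 3) ** 2

opt = univariate_optimize(
    toy_signal,
    {"pH": [1, 3, 5, 7, 9], "eluent_pct": [1, 2, 3, 4, 8]},
    start={"pH": 5, "eluent_pct": 1},
)
print(opt)  # {'pH': 7, 'eluent_pct': 3}
```

The univariate search finds each parameter's optimum only if the parameters interact weakly, which is why an initial assessment of approximate values matters.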
After an initial assessment to select approximate values for each parameter, optimization of the variables was carried out by the univariate method.

### 3.1. Development of the Flow Injection Ion-Exchange Manifold

The flow system in this work was optimized to achieve a better enrichment efficiency and detection limit. In previous reports, the FI ion-exchange manifold was designed to preconcentrate metal ions using a single column, which is tedious and time-consuming and makes the enrichment factor and sampling frequency hard to improve. To solve this problem, a double-column system is used in this paper: the sample was pumped through columns a and b simultaneously (Figure 1), while in the elution stage the eluent was pumped through columns a and b in series. Comparison experiments showed that the signal peak of the double-column system is twice that of the single-column system. Before the elution step, air is used to push the waste water out of the loop, a further improvement of the manifold in this work; with it, the sensitivity and reproducibility are improved effectively. Table 3 shows the fluorescence intensity of a lead solution obtained with the two different eluting processes by HG-AFS determination under the same conditions.

Table 3. Comparison of the two eluting processes (4 ng·ml−1 Pb).

| Eluting process | Fluorescence intensity | RSD (%, n = 10) |
|---|---|---|
| With the air push-out step | 765 | 3.5 |
| Without the air push-out step | 586 | 8.8 |

### 3.2. The AFS Parameters

The AFS parameters, including the lamp current, atomizer height, negative high voltage of the photomultiplier, and carrier argon flow, were investigated in terms of sensitivity and reproducibility. The optimized parameters are summarized in Table 1.

### 3.3. The Choice of the Resin and the Medium of Hydride Generation

In this paper, D401 chelating resin is used as the ion-exchanger for lead; experiments show that the 60 mesh resin has the optimal enrichment efficiency. 
HCl solution was chosen as the eluent for lead and as the medium for hydride generation. With NaBH4 as the reducing reagent, its concentration significantly affects hydride generation. The results showed that the signal was enhanced as the NaBH4 concentration increased up to 1.5% (w/v), while an even higher concentration led to a deterioration of the sensitivity. Considering both sensitivity and reagent consumption, a concentration of 1% (w/v) was therefore employed.

### 3.4. Optimization of the Chemical Variables

#### 3.4.1. The Effect of Sample Acidity on the Retention of Lead

A 4 ng·ml−1 lead standard solution was used to optimize the acidity of the sample. The sample acidity was adjusted with hydrochloric acid within the range 0–8% (0, 0.2%, 0.4%, 0.8%, 2%, 4%, 8%, v/v). The results, shown in Figure 2, indicate that the maximum retention of lead occurs in neutral media: the fluorescence intensity declines markedly between 0 and 0.8% HCl, while the retention changes little between 0.8% and 8% HCl. The explanation is that the chelating groups of the resin are in their chelating form in neutral and alkaline media, while they are preferentially protonated at low pH. In the subsequent experiments, pH 7 was selected.

Figure 2. Effect of sample acid concentration on the chelating reaction of Pb (4 ng·ml−1 Pb solution; 180 s sample load time; injection speed 3.5 ml·min−1; 10 s washing time).

#### 3.4.2. Eluting Acid Concentration and Speed

The lead chelate adsorbed on the column could be eluted by hydrochloric acid of different concentrations and at different eluting speeds. The results (Figure 3) show that the signal increased as the HCl concentration increased to 4%; a small decrease was then observed in the range 4–14%. The decrease of the signal at concentrations higher than 4% may be due to the dilution of the analyte by the excessive generation of hydrogen, which is a byproduct of the hydride generation reaction. 
Although the maximum signal occurred at an HCl concentration of 4%, lead hydride generation is more efficient in the range 1–3% (HCl, v/v); taking into account both the elution acidity and the hydride generation acidity, 3% HCl was selected throughout.

Figure 3. Effects of eluent acid concentration and eluting speed (4 ng·ml−1 Pb solution; 180 s sample load time; injection speed 3.5 ml·min−1; 10 s washing time): (a) HCl concentration versus Pb signal; (b) eluting speed versus Pb signal, eluting with 3% HCl.

The effect of the eluting speed was examined similarly within the range 2–7 ml·min−1. The results (Figure 3(b)) indicate that the signal reached a maximum at 1 ml·min−1, the sensitivity dropped as the eluting speed increased to 3 ml·min−1, and afterwards the curve leveled off with only a small change within the range 3–5 ml·min−1. At higher eluting speeds, a significant decrease of the signal was observed. Although a lower speed is preferable for effective elution, it takes longer and sacrifices sampling frequency; moreover, flow resistance is frequently encountered at higher flow rates. As a compromise, an eluting speed of 3 ml·min−1 was employed for further experiments. The speed of the NaBH4 solution during hydride generation was also optimized by adopting an identical speed for the two streams.

Figure 4. Sample injection speed (4 ng·ml−1 Pb solution; pH 7; washing with water for 10 s; 180 s sample load time; eluting with 3% HCl).

#### 3.4.3. Sampling Time and Speed

The sample load time and injection speed were optimized to achieve adequate enrichment of lead in the column. The results show that, at a constant injection speed, the signal increased almost linearly with increasing sample load time. 
Although the enrichment factor for lead could be improved with a longer sample load time, the analysis time would also be prolonged and the consumption of reagents increased. For these reasons, the sample load time was set at 180 seconds. For a 180-second sample loading time, the results of a sample injection speed test are shown in Figure 4. At first, the signal increased with the injection speed, but the increment of the signal diminished when the injection speed increased further (>3.5 ml·min−1). This can be explained by the shortened contact time between the chelate and the sorbent at higher injection speeds, so that the lead chelate was not fully retained on the column. Thus, an injection speed of 3.5 ml·min−1 was selected.

### 3.5. Interferences

The potential interfering effects of some foreign species frequently encountered in geosamples were tested with the present procedure. At a Pb concentration of 10 ng/ml, Fe (5 μg/ml); Cr, Ca, Zr, V (500 μg/ml); Al (5 mg/ml); As3+, Se2+ (50 μg/ml); and Cu, Ni (100 μg/ml) did not interfere with the determination (higher concentration levels were not tested). For general geosamples, the contents of the above metal ions in the sample digests, or after appropriate dilution, will not exceed these tolerance levels. If the contents of Fe and Cu do exceed the tolerance levels, they can be masked with 10% (m/v) sulfocarbamide solution. So in most cases the present procedure can be employed directly, and no further masking treatment is needed.

### 3.6. Performance and Validation of the Procedure

Under the optimal conditions, the performance data obtained for the flow injection on-line ion-exchange preconcentration system for Pb with hydride generation atomic fluorescence spectrometry are summarized in Table 4. 
With a sample loading volume of 10.5 ml and a loading time of 180 seconds, an enrichment factor of 40 and a sampling frequency of 15 h−1 were obtained. The detection limit (3σ) was 0.0031 ng·ml−1, and the relative standard deviation (RSD) was 3.78% (n = 11) at the 4 ng·ml−1 level.

Table 4. Performance data for the on-line ion-exchange preconcentration HG-AFS system.

| Parameter | Value |
|---|---|
| Calibration range | 0–10 μg·l−1 |
| Regression equation (fluorescence intensity versus concentration, μg·l−1) | Y = 102.33x + 13.52 |
| Correlation coefficient | 0.9991 |
| Sampling frequency | 15 h−1 |
| Enrichment factor | 40 |
| Detection limit (3σ) | 0.0031 ng·ml−1 |
| Relative standard deviation (4 ng·ml−1 Pb, n = 11) | 3.78% |
| Sample consumption | 10.5 mL |

At a similar precision level, a comparison of the detection limit of the present procedure (3.1 ng·l−1) with that of the reported hydride-generation procedure with AFS detection [26] (8 ng·l−1) shows that the present protocol is superior to the published one; that is, the method of this paper is much improved with respect to HG-AFS-based procedures. The procedure was validated using the certified reference materials GSD-8, GBW-07114, and GSD-6. The results, summarized in Table 5, are in good agreement with the certified values.

Table 5. Analysis of reference materials (ω, μg·g−1, n = 10).

| Sample | Recommended value | Found value | RSD (%) |
|---|---|---|---|
| GSD-8 | 20.0 | 18.2 | 4.8 |
| GBW-07114 | 4.4 | 4.1 | 5.2 |
| GSD-6 | 27.0 | 25.6 | 4.6 |
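The figures of merit above follow from standard definitions. A minimal sketch that reproduces their magnitudes: the calibration slope is taken from Table 4 and the step times and loading rate from Table 2, while the eleven blank readings are hypothetical illustration values:

```python
import statistics

# Calibration from Table 4: fluorescence Y = 102.33 * c + 13.52 (c in ug/L).
SLOPE, INTERCEPT = 102.33, 13.52

def concentration(signal):
    """Back-calculate a concentration (ug/L) from a fluorescence reading."""
    return (signal - INTERCEPT) / SLOPE

# Detection limit (3-sigma criterion): three times the standard deviation of
# replicate blank readings, converted to concentration units via the slope.
# These blank readings are hypothetical, not measured values from the paper.
blanks = [13.6, 13.4, 13.5, 13.7, 13.4, 13.5, 13.6, 13.5, 13.4, 13.6, 13.5]
dl_ug_per_l = 3 * statistics.stdev(blanks) / SLOPE

# Sample consumption: loading at 3.5 ml/min for 180 s gives 10.5 ml.
sample_volume_ml = 3.5 * 180 / 60

# Throughput: one full cycle is the sum of the step times in Table 2.
cycle_s = 180 + 1 + 10 + 5 + 1 + 20 + 20 + 2
samples_per_hour = 3600 / cycle_s

print(f"DL ~ {dl_ug_per_l:.4f} ug/L, volume {sample_volume_ml} ml, "
      f"{samples_per_hour:.0f} samples/h")
```

With blank noise of roughly 0.1 fluorescence units, the computed limit lands in the low ng·l−1 range, and the cycle time of 239 s corresponds to about 15 samples per hour, both consistent with the reported performance.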
With a sample loading volume of 10.5 ml and a retention time of 180 seconds, an enrichment factor of 40 and a sampling frequency of 15 h-1 were obtained. The detection limit (3σ) was 0.0031 ng·ml-1, and the relative standard deviation (RSD) was 3.78% (n = 11) at the 4 ng·ml-1 level.

Table 4: Performance data for the on-line ion-exchange preconcentration HG-AFS system.

| Parameter | Value |
| --- | --- |
| Calibration range | 0–10 μg·l-1 |
| Regression equation (fluorescence intensity versus concentration, μg·l-1) | Y = 102.33x + 13.52 |
| Correlation coefficient | 0.9991 |
| Sampling frequency | 15 h-1 |
| Enrichment factor | 40 |
| Detection limit (3σ) | 0.0031 ng·ml-1 |
| Relative standard deviation (4 ng·ml-1 of Pb, n = 11) | 3.78% |
| Sample consumption | 10.5 ml |

At a similar precision level, a comparison with a reported procedure [26] (detection limit 8 ng·l-1) based on the hydride generation protocol with AFS detection shows that the detection limit of the present protocol (3.1 ng·l-1) is superior to the published one. In other words, the method of this paper is much improved with respect to HG-AFS-based procedures. The procedure was validated using certified reference materials, namely GSD-8, GBW-07114, and GSD-6. The obtained results are summarized in Table 5 and show good agreement with the certified values.

Table 5: Analysis of reference materials (ω, μg·g-1, n = 10).

| Sample | Recommended value | Found value | RSD (%) |
| --- | --- | --- | --- |
| GSD-8 | 20.0 | 18.2 | 4.8 |
| GBW-07114 | 4.4 | 4.1 | 5.2 |
| GSD-6 | 27.0 | 25.6 | 4.6 |

## 4. Conclusions

A method combining an on-line FI separation system with HG-AFS for the determination of ultra-trace lead in geosamples has been developed. Parameters of the operating system, including the pH of the chelating reaction, sample loading time and injection speed, eluting acid concentration and eluting speed, and the instrumental parameters of the HG-AFS, were optimized and selected. The detection limit of this method, estimated as 3× the standard deviation of the procedural blank, was 3.1 ng·l-1.
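The figures of merit in Table 4 follow from standard definitions: the 3σ detection limit divides three times the blank standard deviation by the calibration slope, and the enrichment factor is conventionally estimated as a ratio of calibration slopes with and without preconcentration. A minimal sketch, in which only the slope 102.33 comes from Table 4 while the blank readings and the direct-calibration slope are hypothetical:

```python
# Sketch of the Table 4 figures of merit. The slope (102.33 intensity units
# per ug/l) is from the reported regression; blank readings and the
# direct-calibration slope below are hypothetical illustration values.
from statistics import stdev

def detection_limit_3sigma(blank_signals, slope):
    """3-sigma detection limit, in the concentration units of the slope."""
    return 3 * stdev(blank_signals) / slope

def enrichment_factor(slope_preconc, slope_direct):
    """Ratio of calibration slopes with and without preconcentration."""
    return slope_preconc / slope_direct

# Hypothetical procedural blank fluorescence intensities (n = 11)
blanks = [13.4, 13.6, 13.5, 13.7, 13.5, 13.6, 13.4, 13.5, 13.6, 13.5, 13.4]
dl_ug_per_l = detection_limit_3sigma(blanks, slope=102.33)
print(f"detection limit ~ {dl_ug_per_l * 1000:.1f} ng/l")
print(f"enrichment factor ~ {enrichment_factor(102.33, 2.56):.0f}")
```

With these illustrative blanks the sketch gives a detection limit of roughly 3 ng·l-1, the same order as the 3.1 ng·l-1 reported here.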
Three standard reference materials were used to assess the accuracy of the method. The measured Pb concentrations were in good agreement with the certified values; this method can therefore be used to determine ultra-trace lead in high-salt samples. --- *Source: 101679-2009-09-06.xml*
2009
# Plankton Resting Stages in the Marine Sediments of the Bay of Vlorë (Albania) **Authors:** Fernando Rubino; Salvatore Moscatello; Manuela Belmonte; Gianmarco Ingrosso; Genuario Belmonte **Journal:** International Journal of Ecology (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/101682 --- ## Abstract In the frame of the INTERREG III CISM project, sediment cores were collected at 2 stations in the Gulf of Vlorë to study the plankton resting stage assemblages. A total of 87 morphotypes, produced by Dinophyta, Ciliophora, Rotifera, and Crustacea, were identified. In 22 cases, the cyst belonged to a species absent from the plankton of the same period. The most abundant resting stages were those produced by Scrippsiella species (Dinophyta). Some calcareous cysts were identified as fossil species associated with Pleistocene to Pliocene sediments, although they were also found in surface sediments and some of them successfully germinated, thus proving their modern status. Total abundance generally decreased with sediment depth at station 40, while station 45 showed distinct maxima at 3 and 8 cm below the sediment surface. The depth of peak abundance in the sediment varied with species. This paper presents the first study of the plankton resting stages in the Bay of Vlorë. The study confirmed the utility of this type of investigation for a more correct evaluation of species diversity. In addition, the varying distribution with sediment depth suggests that this field could be of some importance in determining the history of species assemblages. --- ## Body ## 1. Introduction Resting stages produced by plankton organisms in temperate seas accumulate in the bottom sediments of confined coastal areas [1].
Their assemblages represent reservoirs of biodiversity which sustain the high resilience of plankton communities, providing recruits of propagules at each return of favourable conditions, in accordance with the so-called Supply Vertical Ecology model [2]. The existence of benthic stages in the life cycles of holoplankton provides a new key for understanding the role of life cycles in the pelagic-benthic relationship in coastal waters [3, 4]. Consequently, assessments of biodiversity at marine sites should take account of the unexpressed fraction of the plankton community contained in the bottom sediments by performing integrated sampling programs [5, 6]. Despite the proven importance of resting stage banks in coastal marine ecology, the issue of “resting versus active” plankters has commonly been considered for single taxa and only rarely from the whole-community point of view. This is probably due to the great complexity (compositional, functional, and distributional) of resting stage banks. Indeed, it has been demonstrated that at any given moment the species assemblages in bottom sediments (as resting stages) are quite different from the species detectable in the water column (as active stages) [6, 7]. However, the study of such marine “seed banks” (as understood by [3], analogous to terrestrial seed banks in forest soils) is complex on many levels. Resting stages share a common morphological plan despite belonging to organisms from different kingdoms [8]. Consequently, resting stage morphology differs sharply from that of active stages and in some cases their identification is highly problematic. However, it is also true that for some naked dinoflagellates or for thecate ones with a similar thecal plate pattern, cysts are quite different, allowing correct identification without the use of SEM or molecular techniques. 
Cyst-producing dinoflagellates differ in the length of their life cycle and/or the timing of cyst production, and the resting capacity of resting stages also varies. They are generally programmed to rest for the duration of the adverse period, but fractions of them can also rest for longer periods, allowing the population to reappear decades later ([9–11] for copepods; [12, 13] for dinoflagellates). Hairston et al. [14] reported a rest of more than 300 years for a calanoid resting egg, albeit of a freshwater species. The scarcity of literature on whole resting stage communities encouraged us to describe situations in various parts of the Mediterranean, in order to obtain a rich data set useful for building models and for experimental work. Here a detailed description of the structure of the marine “seed bank” produced by plankton in the Bay of Vlorë is reported. The present study also focuses on an Albanian bay that has not been extensively studied from the marine biodiversity point of view. The data from the benthos are compared with analyses of the phytoplankton and microzooplankton [15] in the water column to assess with more precision the biodiversity of the plankton in the Bay of Vlorë. Moscatello et al. [15] reported that the microzooplankton community of the Bay of Vlorë was composed of more than 200 taxa, of which 97 were classified as “seasonally absent.” The aim of the present paper is to determine whether these absences in the water column correspond to resting stages in the sediments.

## 2. Materials and Methods

### 2.1. Study Site

An oceanographic campaign was carried out in the Bay of Vlorë from 17th to 23rd of January 2008 aboard the oceanographic vessel “Universitatis”. This survey was conducted as part of the PIC Interreg III Italy–Albania Project for providing technical assistance for the management of an International Centre of Marine Sciences (CISM) in Albania.
The sampling period of the present study (January 2008) coincided with that of Moscatello et al. [15], who investigated active plankton in the water column in the same area. In order to investigate the presence and distribution of resting stages produced by plankton species in the area, 2 stations were chosen, representing 2 different types of environment: a deep zone (station 40, depth: 54 m), with sediments of terrigenous mud dominated by Labidoplax digitata (Holothuroidea), and a shallower site (station 45, depth: 28 m), with sediments of terrigenous mud dominated by Turritella communis (Gastropoda) (Figure 1) (for the classification of mud biocenoses in the Bay of Vlorë, see [16]).

Figure 1: Map of the study area showing the location of the two sampling stations (40, 45) in the Bay of Vlorë (Albania).

### 2.2. Sampling Procedure

Samples of bottom sediments were collected in three replicates (named 40 a, b, c and 45 a, b, c) using a Van Veen grab with upper windows that allowed the collection of undisturbed sediment cores. At each station, 2 different PVC corers (h: 30 cm, inner diameters 4 and 8 cm) were used in order to obtain 2 different sets of samples. The smaller core was processed to obtain cysts of protists; the larger core was processed to obtain resting eggs of metazoans. This differentiation was necessary because metazoan resting stages are less abundant, so a greater amount of sediment is required. Moreover, their walls are only organic, allowing the adoption of a centrifugation method coupled with filtration to obtain a “clean” sample from a relatively large quantity of sediment. In contrast, protistan cysts are more abundant and have different types of walls (calcareous, siliceous, organic), which complicates the procedure when the whole cyst bank is studied.
Thus, the most fruitful method of separating cysts from sediment is filtration through meshes of different sizes. After extraction, sediment cores were immediately subdivided into 1 cm thick layers, down to the 15th cm from the sediment surface. The thickness of 15 cm was chosen because in previous studies we noted that abundances diminished significantly at depths of more than 7–10 cm below the sediment surface [1, 12]. The outer edge of each layer was discarded to avoid contamination with material from the overlying layer during the insertion of the corer into the sediments. Once obtained, the samples were stored in the dark at 5°C until treatment in the laboratory.

### 2.3. Protistan Cysts (20–125 μm)

In the laboratory the small-core samples were treated using a sieving technique consisting of the following steps.

(i) The entire sample is homogenized and then subsampled, obtaining 3–5 mL of wet sediment which is passed through a 20 μm mesh (Endecott’s LTD steel sieves, ISO 3310-1, London, England), using natural filtered (0.45 μm) seawater (Gulf of Taranto).

(ii) The retained fraction is ultrasonified for 1 min and again passed through a series of sieves (125, 75, and 20 μm mesh sizes), obtaining a fine-grained fraction (20–75 μm) containing most of the protistan cysts and a 75–125 μm fraction with the larger ones (e.g., Lingulodinium spp.) and the zooplankton resting eggs. The material retained by the 125 μm mesh was not considered.

No chemicals were used to disaggregate sediment particles, in order to avoid the dissolution of calcareous and siliceous cyst walls. Qualitative and quantitative analyses were carried out under an inverted microscope (Zeiss Axiovert S100 equipped with a Nikon Coolpix 990 digital camera) at 200× and 320× magnification. Both full (i.e., probably viable) and empty (i.e., probably germinated) cysts were considered.
At least 1/5 of the finer fraction and all of the >75 μm fraction were analyzed. All the resting stage morphotypes were identified on the basis of published descriptions ([17–19] for dinoflagellates; [20, 21] for ciliates) and germination experiments. Identification was performed to the lowest possible taxonomic level. As a rule, modern biological names were used; the paleontological name is reported only for morphotypes whose active stage was not known. A fixed aliquot (≈5 g) of wet sediment from each sample was oven-dried at 70°C for 24 h to calculate the water content and obtain quantitative data for each taxon as cysts g−1 of dry sediment.

### 2.4. Metazoan Resting Eggs (45–200 μm)

For the analysis of the large-core samples the Onbè [22] method was used, slightly modified by using 45 and 200 μm mesh sizes to obtain a size range typical of mesozooplankton resting eggs. For each sample a fixed quantity of wet sediment was treated (≈45 cm3). Only full (i.e., probably viable) resting eggs were counted, and quantitative data for each taxon are reported as resting eggs × 100 g−1 of dry sediment.

### 2.5. Germination Experiments

To achieve germination, all putative viable cysts of protists isolated from the sediment were individually positioned in Nunclon microwells (Nalge Nunc International, Roskilde, Denmark) containing ≈1 mL of natural sterilized seawater. Cysts were incubated at 20°C, under a 12:12 h LD cycle and 100 μE m−2 sec−1 irradiance, and examined on a daily basis until germination, up to a maximum of 30 days. The incubation conditions were chosen on the basis of previous studies [5, 6, 23]; they have proved to be effective for a large number of species.

### 2.6. Data Analysis

For the 1st cm of the cores, data on resting stage abundance from the 2 sampling stations were obtained by merging the data from the 3 replicates of the 2 sets of samples (those for protistan cysts and metazoan resting eggs).
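The cysts g−1 dry sediment normalization described in Section 2.3 (raw counts scaled by the examined fraction and by the oven-dried mass) can be sketched in a few lines of Python; all the numbers below are hypothetical illustration values, not data from this study.

```python
# Sketch of the dry-weight normalization: cysts counted in a wet-sediment
# aliquot are expressed as cysts per g of dry sediment, using the water
# content measured on the oven-dried aliquot. All numbers are hypothetical.
def cysts_per_g_dry(cyst_count, wet_mass_g, water_fraction, fraction_counted=1.0):
    """Convert a raw count to cysts g^-1 dry sediment.

    cyst_count       -- cysts counted in the analysed aliquot
    wet_mass_g       -- wet mass of the subsample the count refers to
    water_fraction   -- (wet - dry) / wet mass from the oven-dried aliquot
    fraction_counted -- share of the aliquot actually examined (e.g. 1/5)
    """
    dry_mass_g = wet_mass_g * (1.0 - water_fraction)
    return (cyst_count / fraction_counted) / dry_mass_g

# e.g. 120 cysts counted in 1/5 of a 4 g wet subsample with 50% water content
print(cysts_per_g_dry(120, 4.0, 0.50, fraction_counted=0.2))  # -> 300.0
```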
For samples below the first cm, only the 45–200 μm fraction was used, in order to facilitate and accelerate the analysis. From the abundance matrices (taxa versus stations, and taxa versus station and cm, respectively) of both surface and deeper sediments, the Bray-Curtis similarity measure was calculated after fourth-root transformation in order to allow rare species to become more evident. The PRIMER “DIVERSE” function (Primer-E Ltd, Plymouth, UK) was used to calculate the taxonomic richness (S), taxon abundance (N), Margalef index (d), Shannon-Wiener diversity index (H′), and Pielou’s evenness index (J′) for each sample. The relationships between the samples collected at the 2 stations were analyzed by means of nonmetric multidimensional scaling (nMDS) with superimposed hierarchical clustering with a cutoff at 60% similarity (for surface sediments) and 70% (for the sediment core as a whole), while the SIMPER routine was used to identify relative dissimilarity and the taxa that contributed most to the differences. The statistical significance of the differences between the 2 stations was calculated by means of a 2-way crossed analysis of similarities (ANOSIM) on the Bray-Curtis similarity matrix based on the stratigraphy. All univariate and multivariate analyses were performed using PRIMER v.6 (Primer-E Ltd, Plymouth, UK).

## 3. Results

### 3.1. Total Biodiversity

Resting stages were found at all levels of the 15 cm sediment core columns from the 2 investigated sites in the Gulf of Vlorë. Merging the data from the 2 sets of samples (20–125 μm and 45–200 μm) and considering both full (probably viable) and empty (probably germinated) forms from each station, 87 different resting stage morphotypes produced by plankton were recognized (Table 1). Most of them (59, belonging to 20 genera) were dinoflagellates, 16 were ciliates (9 genera), 4 rotifers (2 genera), and 5 crustaceans (4 genera), while 3 (1 protistan cyst type and 2 resting eggs) remained unidentified. Station 40 showed higher biodiversity, with 79 morphotypes, 35 of them exclusive to the site.
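The diversity indices computed with PRIMER's DIVERSE routine (richness S, abundance N, Margalef d, Shannon-Wiener H′, Pielou J′) follow from textbook formulas and can be reproduced for any abundance vector; a minimal Python sketch, using a hypothetical per-taxon cyst-count vector rather than data from this study:

```python
# Sketch of the diversity indices described in Section 2.6, computed here
# from first principles for a hypothetical abundance vector.
import math

def diversity(abundances):
    counts = [a for a in abundances if a > 0]
    S = len(counts)                                       # taxonomic richness
    N = sum(counts)                                       # total abundance
    d = (S - 1) / math.log(N)                             # Margalef index
    H = -sum((a / N) * math.log(a / N) for a in counts)   # Shannon H' (log e)
    J = H / math.log(S)                                   # Pielou's evenness (needs S > 1)
    return S, N, d, H, J

# Hypothetical cyst abundances for 7 taxa in one sample
S, N, d, H, J = diversity([181.0, 80.5, 40.2, 40.2, 20.1, 20.1, 20.1])
print(S, round(d, 2), round(H, 2), round(J, 2))
```

Note that PRIMER reports H′ with a configurable logarithm base; natural log is assumed here.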
At station 45, 52 morphotypes were observed, 8 of them being exclusive.Table 1 List of resting stage (cyst) morphotypes recovered from sediments of Bay of Vlorë (Albania). Taxon St.40 St.45 Dinoflagellates Alexandrium minutum Halim ⚫ ⚫ ○ Alexandrium tamarense(Lebour) Balech ⚫ Alexandrium sp. 1 ⚫ Alexandriumsp. 2 ⚫ Bicarinellum tricarinelloides Versteegh ⚫ ○ ⚫ Calcicarpinum perfectum Versteegh ○ Calciodinellum albatrosianum (Kamptner) Janofske and Karwath ⚫ ○ ⚫ ○ Calciodinellum operosum (Deflandre) Montresor ⚫ ○ ○ Calciperidinium asymmetricum Versteegh ○ Cochlodinium polykrikoidesMargalef type 1 ⚫ Cochlodinium polykrikoidesMargalef type 2 ○ Diplopelta parva (Abé) Matsuoka ○ Diplopsalis lenticula Bergh ⚫ ○ Follisdinellum splendidum Versteegh ○ Gonyaulax group ⚫ ○ ⚫ ○ Gymnodinium impudicum(Fraga and Bravo) G. Hansen and Möestrup ⚫ Gymnodinium nolleri Ellegaard and Möestrup ⚫ Gymnodiniumsp. 1 ⚫ ○ ⚫ Lingulodinium polyedrum (Stein) Dodge ⚫ ○ ⚫ ○ Melodomuncula berlinensis Versteegh ⚫ ○ ⚫ Nematodinium armatum (Dogiel) Kofoid and Swezy ⚫ ○ Oblea rotunda (Lebour) Balech ex Sournia ⚫ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 1 ⚫ ○ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 2 ⚫ ⚫ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 1 ⚫ ○ ⚫ ○ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 2 ○ Polykrikos kofoidii Chatton ○ Polykrikos schwartzii Bütschli ○ Protoperidinium compressum (Abé) Balech ○ Protoperidinium conicum (Gran) Balech ⚫ ○ Protoperidinium oblongum (Aurivillius) Parke and Dodge ⚫ Protoperidinium parthenopesZingone and Montresor ⚫ Protoperidinium steidingerae Balech ⚫ Protoperidinium subinerme (Paulsen) Loeblich III ○ Protoperidinium thorianum (Paulsen) Balech ⚫ ○ ○ Protoperidiniumsp. 1 ⚫ ○ ○ Protoperidinium sp. 5 ⚫ ○ Protoperidiniumsp. 6 ⚫ Pyrophacus horologium Stein ⚫ ⚫ Scrippsiellacf. 
crystallina Lewis ○ Scrippsiella lachrymosa Lewis ⚫ ○ ⚫ ○ Scrippsiella ramonii Montresor ⚫ ○ ○ Scrippsiella trochoidea (Stein) Loeblich rough type ⚫ ⚫ Scrippsiella trochoidea (Stein) Loeblich smooth type ⚫ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich large type ⚫ ○ ⚫ Scrippsiella trochoidea (Stein) Loeblich medium type ⚫ ○ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich small type ⚫ ○ ⚫ Scrippsiellasp. 1 ⚫ ○ ⚫ ○ Scrippsiellasp. 4 ⚫ ⚫ Scrippsiellasp. 5 ⚫ ⚫ Scrippsiellasp. 6 ⚫ ⚫ Scrippsiellasp. 8 ⚫ ○ ⚫ Thoracosphaerasp. ⚫ ⚫ Dinophyta sp. 2 ⚫ ⚫ Dinophyta sp. 7 ⚫ ⚫ Dinophyta sp. 17 ⚫ Dinophyta sp. 26 ⚫ Dinophyta sp. 30 ⚫ Dinophyta sp. 33 ⚫ ⚫ Ciliates Codonella asperaKofoid and Campbell ⚫ Codonella orthoceras Heackel ⚫ Codonellopsis monacensis(Rampi) Balech ⚫ Codonellopsis schabii (Brandt) Kofoid and Campbell ⚫ ⚫ Epiplocylis undella(Ostenfeld and Schmidt) Jörgensen ⚫ ⚫ Rabdonella spiralis (Fol) Brandt ⚫ Stenosemella ventricosa(Claparède and Lachmann) Jörgensen ⚫ ⚫ Strobilidium sp. ⚫ ⚫ Strombidium cf. acutum (Leegaard) Kahl ⚫ ⚫ Strombidium conicum (Lohman) Wulff ○ ⚫ Tintinnopsis beroideaStein ⚫ Tintinnopsis butschliiKofoid and Campbell ⚫ Tintinnopsis campanulaEhrenberg ⚫ Tintinnopsis cylindricaDaday ⚫ ⚫ Tintinnopsis radix(Imhof) ⚫ Undella claparedei(Entz) Daday ⚫ Rotifers Brachionus plicatilis Müller ⚫ Synchaetasp. spiny type ⚫ ○ Synchaetasp. rough type ⚫ Synchaetasp. mucous type ⚫ Crustaceans Cladocerans Penilia avirostris Dana ⚫ ⚫ Crustaceans Copepods Acartia clausi/margalefi ⚫ Acartiasp. 1 ⚫ ○ ⚫ ○ Centropagessp. ⚫ ⚫ ○ Paracartia latisetosa (Krizcaguin) ⚫ ○ ⚫ Unidentified Cyst type 1 ⚫ Resting Egg 1 ⚫ Resting Egg 9 ⚫ ⚫: cysts observed as full (i.e., probably viable). ○: cysts observed as germinated (i.e., empty).Moreover, analysis of the empty forms found among the 20–125μm fraction led to the recognition of 11 morphotypes, all dinoflagellates.A total of 36 cyst types were identified astaxa missing from the plankton list of the same period (January 2008; [15]). 
Partly due to nomenclature problems, uncertainty of identification, and differences in the examined periods, it was possible to ascertain the contemporaneous presence of species in both the pelagic and benthic compartments only in a very few cases. Identification was frequently impossible due to the presence of previously unreported resting stage morphologies. In such cases, germination experiments allowed the cysts to be attributed at least to a high-level taxon, as with a Strombidium (Ciliophora) cyst, whose morphology is reported here for the first time (see Figure 2).

Figure 2: Photographs of a Ciliophora cyst with two opposite papulae (a); its empty shell (hatching occurs from one of the two papulae) (b); the germinated active stage, a Strombidium ciliate (c).

### 3.2. Surface Sediments

The analysis of surface sediments (the 1st cm of the cores), that is, those most affected by cyst deposition and resuspension/germination, revealed sharp differences between the 2 analysed stations. In total, 36 different resting stage morphotypes were observed in this first layer (Table 2): 23 produced by dinoflagellates, 6 by ciliates, 2 by rotifers, 4 by crustaceans, and 1 undetermined. Even considering the small amount of available data, station 40 showed higher biodiversity, in terms of both number of taxa and diversity indexes (see Table 3). Total abundances were comparable, however, with 389±127 cysts g−1 (average ± s.d.) at station 40 versus 329±123 cysts g−1 at station 45. SIMPER showed 58% dissimilarity between the assemblages of the two sites (Table 4).

Table 2: Abundance (cysts g−1 dw) of probably viable resting stages (cysts) observed in surface sediments of the two stations in the Bay of Vlorë (Albania). Values from three replicates are reported. 40a 40b 40c 45a 45b 45c Calciodinellum albatrosianum 20.1 18.3 35.1 0.0 0.0 0.0 Calciodinellum operosum 0.0 0.0 11.7 0.0 0.0 0.0 Gonyaulax group 20.1 0.0 0.0 0.0 0.0 59.6 Gymnodinium sp.
1 20.1 9.2 0.0 0.0 22.2 0.0 Lingulodinium polyedrum 40.2 0.0 0.0 0.0 0.0 0.0 Melodomuncula berlinensis 40.2 0.0 0.0 0.0 0.0 0.0 Oblea rotunda 0.0 0.0 11.7 0.0 0.0 0.0 Pentapharsodinium dalei type 1 20.1 0.0 0.0 0.0 11.1 0.0 Pentapharsodinium tyrrhenicum type 1 40.2 18.3 23.4 0.0 11.1 0.0 Protoperidiniumsp. 1 0.0 9.2 0.0 0.0 0.0 0.0 Protoperidinium sp. 5 0.0 0.0 11.7 0.0 11.1 0.0 Scrippsiella ramonii 0.0 9.2 0.0 0.0 0.0 0.0 Scrippsiella trochoidea rough type 40.2 18.3 46.8 0.0 0.0 0.0 Scrippsiella trochoidea smooth type 0.0 9.2 11.7 0.0 11.1 0.0 Scrippsiella trochoideamedium type 181.0 73.3 105.4 173.1 111.1 238.4 Scrippsiella trochoideasmall type 80.5 64.2 58.5 230.8 0.0 0.0 Scrippsiellasp. 1 20.1 0.0 11.7 0.0 11.1 0.0 Scrippsiellasp. 4 0.0 9.2 0.0 0.0 0.0 0.0 Thoracosphaerasp. 1 0.0 0.0 11.7 0.0 11.1 0.0 Dinophyta sp. 2 0.0 0.0 23.4 0.0 0.0 0.0 Dinophyta sp. 17 0.0 18.3 0.0 0.0 0.0 0.0 Dinophyta sp. 26 0.0 18.3 0.0 0.0 0.0 0.0 Dinophyta sp. 33 0.0 0.0 0.0 0.0 11.1 0.0 Codonellopsis schabii 1.0 0.0 0.9 0.6 0.3 0.5 Stenosemella ventricosa 0.1 0.0 0.0 0.0 0.0 0.0 Strobilidium sp. 0.1 0.0 0.1 0.0 0.0 0.0 Strombidium acutum 0.0 0.0 0.0 0.0 11.1 0.0 Tintinnopsis cylindrica 0.0 0.0 0.0 0.0 0.1 0.1 Undella claparedei 0.1 0.0 0.1 0.0 0.0 0.0 Brachionus plicatilis 0.2 0.0 0.1 0.3 0.0 0.1 Synchaetasp spiny type 0.3 0.2 0.0 0.2 0.0 0.1 Penilia avirostris 0.0 0.0 0.1 0.0 0.0 0.0 Acartia clausi/margalefi 1.0 0.3 0.7 1.5 0.3 0.8 Acartiasp. 1 0.1 0.0 0.1 0.0 0.0 0.0 Centropagessp. 0.3 0.2 0.0 0.2 0.1 0.2 Cyst type 1 0.0 0.0 0.0 57.7 0.0 0.0Table 3 Abundance and diversity indices calculated for resting stages in surface sediments at two stations investigated in Bay of Vlorë. Abundancecysts g−1 dw Total densitycysts g−1 dw S d H ′ J ′ Station 40 389 ± 127 1167 18 ± 2.7 2.9 ± 0.3 2.2 ± 0.1 0.7 ± 0.1 Station 45 329 ± 123 987 11 ± 4.4 1.8 ± 0.9 0.5 ± 0.2 0.5 ± 0.2 Abundance: average ± standard deviation from three replicates. 
Total density: sum of cyst abundances observed in three replicates from each station.S: number of taxa identified (average ± standard deviation). d: Margalef diversity index. H′: Shannon diversity index. J′: Pielou’s evenness index.Table 4 Results of SIMPER analysis for resting stages from surface sediments at stations 40 and 45 in Bay of Vlorë. Taxa Av. Abund Av. Sim Sim/SD Contrib% Cum.% Station 40 Average similarity: 56.81 Scrippsiella trochoidea medium type 119.92 21.61 7.45 38.05 38.05 Scrippsiella trochoidea small type 67.73 15.81 6.14 27.83 65.87 Scrippsiella trochoidea rough type 35.13 6.44 2.78 11.34 77.21 Pentapharsodinium tyrrhenicum type 1 27.33 5.18 8.96 9.13 86.34 Calciodinellum albatrosianum 24.53 4.94 7.24 8.69 95.03 Station 45 Average similarity: 40.37 Scrippsiella trochoidea medium type 174.21 40.03 5.87 99.16 99.16 Stations 40 and 45 Average dissimilarity = 58.20.The most abundant cyst morphotypes in the surface layers were calcareous cysts produced by species of the Calciodinellaceae family (Dinophyta). At station 40, five cyst morphotypes of this family accounted for 95% of total abundance, while at station 45, 99% was accounted for by just one cyst morphotype,Scrippsiella trochoidea medium type, confirming the lower evenness at this station.The nMDS ordination (Figure3, stress = 0), with the hierarchical cluster superimposed with a cutoff at 60% similarity, clearly reflects the separation between the samples from stations 40 and 45. Among these, due to its higher diversity, sample 45b is farther from samples 45a and 45c than it is from the samples of station 40.Figure 3 nMDS plot of surface sediment samples collected at stations 40 and 45 in Bay of Vlorë. Hierarchical clustering superimposed with cutoff at 60% similarity. ### 3.3. Whole Sediment Cores At both the investigated stations, a general decrease in total abundances was observed with depth along the sediment columns. 
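The Bray-Curtis measure underlying the nMDS and SIMPER analyses, applied after the fourth-root transformation described in Section 2.6, can be sketched directly; the two sample vectors below are hypothetical, not the actual station data.

```python
# Sketch of the Bray-Curtis dissimilarity used for the nMDS/SIMPER analyses,
# applied after a fourth-root transformation that down-weights dominant taxa.
# Sample vectors are hypothetical per-taxon abundances for two stations.
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity (0 = identical, 1 = no shared taxa)."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

def fourth_root(v):
    return [a ** 0.25 for a in v]

st_a = [119.9, 67.7, 35.1, 27.3, 24.5, 0.0]
st_b = [174.2, 76.9, 0.0, 3.7, 0.0, 19.2]
print(round(bray_curtis(fourth_root(st_a), fourth_root(st_b)), 2))
```

PRIMER reports Bray-Curtis *similarity* as a percentage; the dissimilarity above corresponds to 100 × (1 − similarity/100).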
At station 40, higher total abundance and diversity values than station 45 were registered (Figure4), with a sharp decline between the 6th and 7th centimetres. Beyond this depth, total abundance remained below 100 cysts 100 g−1. In terms of species, Codonellopsis schabii cysts and Synchaeta sp. and Acartia clausi/margalefiresting eggs were continuously observed along the whole sediment column at both stations. The ciliate C. schabii was by far the most abundant taxon at station 40 (43% of total abundance), with density highest in the 2nd cm (342±192 cysts 100 g−1); as with total abundance, a sharp decrease was observed between the 6th and 7th centimetres. Other important species were the copepods Centropages sp. (181±50 resting eggs 100 g−1 at 4th cm) and Acartiaspp. (67±32 resting eggs 100 g−1 at 1st cm). At station 45 the most abundant type was Acartiaspp. (86±57 resting eggs 100 g−1 at 1st cm) followed by C. schabii (70±60 cysts 100 g−1 at 4th cm) and Synchaeta sp. (41±23 resting eggs 100 g−1 at 5th cm).Resting stage abundance (average± standard deviations) and Shannon’s index (H′) values recorded for each cm layer of sediment cores collected at two investigated stations in Bay of Vlorë (Albania). (a) (b) (c) (d)In the nMDS ordination (Figure5, stress = 0.12) with superimposition of the hierarchical cluster with a cutoff at 70% similarity, all the samples from station 45 cluster together, while the samples from station 40 were widely dispersed, a sign of greater variability at this site.Figure 5 nMDS plot of samples from each cm of sediment cores collected at stations 40 and 45 in Bay of Vlorë. Hierarchical clustering superimposed with cutoff at 70% similarity.The assemblage structure of the two stations differed significantly at all layers (ANOSIMR=0.655; P=0.001), showing 59% dissimilarity (SIMPER, Table 5).Table 5 Results of SIMPER analysis for resting stages in sediment cores collected at stations 40 and 45 in Bay of Vlorë. Taxa Av. Abund Av. 
Sim Sim/SD Contrib% Cum.% Station 40 Average similarity: 44.16 Centropages sp. 1.77 8.77 0.86 19.86 19.86 Codonellopsis schabii 2.10 7.00 1.31 15.86 35.72 Acartia clausi/margalefi 1.56 6.21 1.22 14.06 49.78 Synchaeta sp. spiny type 1.47 5.31 1.10 12.02 61.81 Penilia avirostris 1.14 4.47 0.98 10.13 71.93 Brachionus plicatilis 0.96 2.90 0.81 6.56 78.49 Stenosemella ventricosa 0.78 1.47 0.55 3.32 81.81 Strobilidium sp. 0.73 1.43 0.51 3.23 85.04 Scrippsiella spp. 0.57 0.95 0.36 2.14 87.18 Gonyaulax spp. 0.54 0.71 0.34 1.60 88.79 Strombidium conicum 0.44 0.69 0.33 1.57 90.36 Station 45 Average similarity: 52.61 Acartia clausi/margalefi 1.88 12.13 2.08 23.05 23.05 Synchaeta sp. spiny type 1.73 10.91 1.59 20.74 43.79 Codonellopsis schabii 1.43 6.73 1.12 12.80 56.59 Strobilidium sp. 1.12 6.05 0.92 11.51 68.10 Centropages sp. 1.20 6.05 1.02 11.50 79.60 Brachionus plicatilis 0.84 2.73 0.63 5.19 84.79 Acartia sp.1 0.75 2.57 0.58 4.89 89.67 Lingulodinium polyedrum 0.74 2.39 0.59 4.54 94.22 Groups 40 and 45. Average dissimilarity = 58.63. ### 3.4. Germination Experiments All putatively viable (i.e., full) protistan cyst types observed were isolated and incubated under controlled conditions to obtain germination. Successful germination generally allowed us to confirm the cyst-based identification, but in some cases it enabled us to go beyond this and discriminate between cysts sharing similar morphology. For example,Alexandrium minutum and Scrippsiella sp. 1, both have a round cyst with a thin and smooth wall with mucous material attached, Protoperidinium thorianum and Protoperidinium sp. 1 cysts are both round-brown and smooth, and Gymnodinium nolleri and Scrippsiella sp. 4 both produce round-brown cysts with a red spot inside. 
The germination of all these cyst types allowed us to correctly identify these species.Cysts ascribed to the paleontologicaltaxa Bicarinellum tricarinelloides and Calciperidinium asymmetricum both germinated, thus confirming that they belong to modern taxa. The active stages obtained were tentatively identified as scrippsielloid dinoflagellates.An unknown ciliate cyst, with a papula at both extremities, produced an active stage identifiable as belonging to theStrombidium genus (Figure 2). ## 3.1. Total Biodiversity Resting stages were found at all levels of the 15 cm sediment core columns from the 2 investigated sites in the Gulf of Vlorë.Merging the data from the 2 sets of samples (20–125μm and 45–200 μm) and considering both full (probably viable) and empty (probably germinated) forms from each station, 87 different resting stage morphotypes produced by plankton were recognized (Table 1). Most of them (59, belonging to 20 genera) were dinoflagellates, 16 were ciliates (9 genera), 4 rotifers (2 genera), and 5 crustaceans (4 genera), while 3 (1 protistan cyst type and 2 resting eggs) remained unidentified. Station 40 showed higher biodiversity, with 79 morphotypes, 35 of them exclusive to the site. At station 45, 52 morphotypes were observed, 8 of them being exclusive.Table 1 List of resting stage (cyst) morphotypes recovered from sediments of Bay of Vlorë (Albania). Taxon St.40 St.45 Dinoflagellates Alexandrium minutum Halim ⚫ ⚫ ○ Alexandrium tamarense(Lebour) Balech ⚫ Alexandrium sp. 1 ⚫ Alexandriumsp. 
2 ⚫ Bicarinellum tricarinelloides Versteegh ⚫ ○ ⚫ Calcicarpinum perfectum Versteegh ○ Calciodinellum albatrosianum (Kamptner) Janofske and Karwath ⚫ ○ ⚫ ○ Calciodinellum operosum (Deflandre) Montresor ⚫ ○ ○ Calciperidinium asymmetricum Versteegh ○ Cochlodinium polykrikoides Margalef type 1 ⚫ Cochlodinium polykrikoides Margalef type 2 ○ Diplopelta parva (Abé) Matsuoka ○ Diplopsalis lenticula Bergh ⚫ ○ Follisdinellum splendidum Versteegh ○ Gonyaulax group ⚫ ○ ⚫ ○ Gymnodinium impudicum (Fraga and Bravo) G. Hansen and Möestrup ⚫ Gymnodinium nolleri Ellegaard and Möestrup ⚫ Gymnodinium sp. 1 ⚫ ○ ⚫ Lingulodinium polyedrum (Stein) Dodge ⚫ ○ ⚫ ○ Melodomuncula berlinensis Versteegh ⚫ ○ ⚫ Nematodinium armatum (Dogiel) Kofoid and Swezy ⚫ ○ Oblea rotunda (Lebour) Balech ex Sournia ⚫ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 1 ⚫ ○ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 2 ⚫ ⚫ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 1 ⚫ ○ ⚫ ○ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 2 ○ Polykrikos kofoidii Chatton ○ Polykrikos schwartzii Bütschli ○ Protoperidinium compressum (Abé) Balech ○ Protoperidinium conicum (Gran) Balech ⚫ ○ Protoperidinium oblongum (Aurivillius) Parke and Dodge ⚫ Protoperidinium parthenopes Zingone and Montresor ⚫ Protoperidinium steidingerae Balech ⚫ Protoperidinium subinerme (Paulsen) Loeblich III ○ Protoperidinium thorianum (Paulsen) Balech ⚫ ○ ○ Protoperidinium sp. 1 ⚫ ○ ○ Protoperidinium sp. 5 ⚫ ○ Protoperidinium sp. 6 ⚫ Pyrophacus horologium Stein ⚫ ⚫ Scrippsiella cf. crystallina Lewis ○ Scrippsiella lachrymosa Lewis ⚫ ○ ⚫ ○ Scrippsiella ramonii Montresor ⚫ ○ ○ Scrippsiella trochoidea (Stein) Loeblich rough type ⚫ ⚫ Scrippsiella trochoidea (Stein) Loeblich smooth type ⚫ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich large type ⚫ ○ ⚫ Scrippsiella trochoidea (Stein) Loeblich medium type ⚫ ○ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich small type ⚫ ○ ⚫ Scrippsiella sp. 1 ⚫ ○ ⚫ ○ Scrippsiella sp. 4 ⚫ ⚫ Scrippsiella sp. 5 ⚫ ⚫ Scrippsiella sp. 6 ⚫ ⚫ Scrippsiella sp. 8 ⚫ ○ ⚫ Thoracosphaera sp. ⚫ ⚫ Dinophyta sp. 2 ⚫ ⚫ Dinophyta sp. 7 ⚫ ⚫ Dinophyta sp. 17 ⚫ Dinophyta sp. 26 ⚫ Dinophyta sp. 30 ⚫ Dinophyta sp. 33 ⚫ ⚫ Ciliates Codonella aspera Kofoid and Campbell ⚫ Codonella orthoceras Heackel ⚫ Codonellopsis monacensis (Rampi) Balech ⚫ Codonellopsis schabii (Brandt) Kofoid and Campbell ⚫ ⚫ Epiplocylis undella (Ostenfeld and Schmidt) Jörgensen ⚫ ⚫ Rabdonella spiralis (Fol) Brandt ⚫ Stenosemella ventricosa (Claparède and Lachmann) Jörgensen ⚫ ⚫ Strobilidium sp. ⚫ ⚫ Strombidium cf. acutum (Leegaard) Kahl ⚫ ⚫ Strombidium conicum (Lohman) Wulff ○ ⚫ Tintinnopsis beroidea Stein ⚫ Tintinnopsis butschlii Kofoid and Campbell ⚫ Tintinnopsis campanula Ehrenberg ⚫ Tintinnopsis cylindrica Daday ⚫ ⚫ Tintinnopsis radix (Imhof) ⚫ Undella claparedei (Entz) Daday ⚫ Rotifers Brachionus plicatilis Müller ⚫ Synchaeta sp. spiny type ⚫ ○ Synchaeta sp. rough type ⚫ Synchaeta sp. mucous type ⚫ Crustaceans Cladocerans Penilia avirostris Dana ⚫ ⚫ Crustaceans Copepods Acartia clausi/margalefi ⚫ Acartia sp. 1 ⚫ ○ ⚫ ○ Centropages sp. ⚫ ⚫ ○ Paracartia latisetosa (Krizcaguin) ⚫ ○ ⚫ Unidentified Cyst type 1 ⚫ Resting Egg 1 ⚫ Resting Egg 9 ⚫

⚫: cysts observed as full (i.e., probably viable). ○: cysts observed as germinated (i.e., empty).

Moreover, analysis of the empty forms found among the 20–125 μm fraction led to the recognition of 11 morphotypes, all dinoflagellates.

A total of 36 cyst types were identified as taxa missing from the plankton list of the same period (January 2008; [15]). Partly due to nomenclature problems, uncertainty of identification, and differences in examined periods, it was possible to ascertain the contemporaneous presence of species in both pelagic and benthic compartments only in a very few cases. Identification was frequently impossible due to the presence of previously unreported resting stage morphologies.
In such cases, germination experiments allowed the cysts to be attributed at least to a high-level taxon, as with a Strombidium (Ciliophora) cyst, whose morphology is reported here for the first time (see Figure 2).

Figure 2 Photographs of a Ciliophora cyst with two opposite papulae (a); its empty shell (hatching occurs from one of the two papulae) (b); the germinated active stage, a Strombidium ciliate (c).

## 3.2. Surface Sediments

The analysis of surface sediments (the 1st cm of the cores), that is, those most affected by cyst deposition and resuspension/germination, revealed sharp differences between the 2 analysed stations. In total, 36 different resting stage morphotypes were observed in this first layer (Table 2), 23 produced by dinoflagellates, 6 by ciliates, 2 by rotifers, 4 by crustaceans, and 1 undetermined. Even considering the small amount of available data, station 40 showed higher biodiversity, in terms of both number of taxa and diversity indices (see Table 3). Total abundances were comparable, however, with 389 ± 127 cysts g−1 (average ± s.d.) at station 40 versus 329 ± 123 cysts g−1 at station 45. SIMPER showed 58% dissimilarity between the assemblages of the two sites (Table 4).

Table 2 Abundance (cysts g−1 dw) of probably viable resting stages (cysts) observed in surface sediments of two stations in Bay of Vlorë (Albania). Values from three replicates are reported.

| Taxon | 40a | 40b | 40c | 45a | 45b | 45c |
|---|---|---|---|---|---|---|
| Calciodinellum albatrosianum | 20.1 | 18.3 | 35.1 | 0.0 | 0.0 | 0.0 |
| Calciodinellum operosum | 0.0 | 0.0 | 11.7 | 0.0 | 0.0 | 0.0 |
| Gonyaulax group | 20.1 | 0.0 | 0.0 | 0.0 | 0.0 | 59.6 |
| Gymnodinium sp. 1 | 20.1 | 9.2 | 0.0 | 0.0 | 22.2 | 0.0 |
| Lingulodinium polyedrum | 40.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Melodomuncula berlinensis | 40.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Oblea rotunda | 0.0 | 0.0 | 11.7 | 0.0 | 0.0 | 0.0 |
| Pentapharsodinium dalei type 1 | 20.1 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Pentapharsodinium tyrrhenicum type 1 | 40.2 | 18.3 | 23.4 | 0.0 | 11.1 | 0.0 |
| Protoperidinium sp. 1 | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Protoperidinium sp. 5 | 0.0 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella ramonii | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Scrippsiella trochoidea rough type | 40.2 | 18.3 | 46.8 | 0.0 | 0.0 | 0.0 |
| Scrippsiella trochoidea smooth type | 0.0 | 9.2 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella trochoidea medium type | 181.0 | 73.3 | 105.4 | 173.1 | 111.1 | 238.4 |
| Scrippsiella trochoidea small type | 80.5 | 64.2 | 58.5 | 230.8 | 0.0 | 0.0 |
| Scrippsiella sp. 1 | 20.1 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella sp. 4 | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Thoracosphaera sp. 1 | 0.0 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Dinophyta sp. 2 | 0.0 | 0.0 | 23.4 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 17 | 0.0 | 18.3 | 0.0 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 26 | 0.0 | 18.3 | 0.0 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 33 | 0.0 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Codonellopsis schabii | 1.0 | 0.0 | 0.9 | 0.6 | 0.3 | 0.5 |
| Stenosemella ventricosa | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Strobilidium sp. | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Strombidium acutum | 0.0 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Tintinnopsis cylindrica | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.1 |
| Undella claparedei | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Brachionus plicatilis | 0.2 | 0.0 | 0.1 | 0.3 | 0.0 | 0.1 |
| Synchaeta sp. spiny type | 0.3 | 0.2 | 0.0 | 0.2 | 0.0 | 0.1 |
| Penilia avirostris | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Acartia clausi/margalefi | 1.0 | 0.3 | 0.7 | 1.5 | 0.3 | 0.8 |
| Acartia sp. 1 | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Centropages sp. | 0.3 | 0.2 | 0.0 | 0.2 | 0.1 | 0.2 |
| Cyst type 1 | 0.0 | 0.0 | 0.0 | 57.7 | 0.0 | 0.0 |

Table 3 Abundance and diversity indices calculated for resting stages in surface sediments at two stations investigated in Bay of Vlorë.

| | Abundance (cysts g−1 dw) | Total density (cysts g−1 dw) | S | d | H′ | J′ |
|---|---|---|---|---|---|---|
| Station 40 | 389 ± 127 | 1167 | 18 ± 2.7 | 2.9 ± 0.3 | 2.2 ± 0.1 | 0.7 ± 0.1 |
| Station 45 | 329 ± 123 | 987 | 11 ± 4.4 | 1.8 ± 0.9 | 0.5 ± 0.2 | 0.5 ± 0.2 |

Abundance: average ± standard deviation from three replicates. Total density: sum of cyst abundances observed in three replicates from each station. S: number of taxa identified (average ± standard deviation). d: Margalef diversity index. H′: Shannon diversity index. J′: Pielou's evenness index.

Table 4 Results of SIMPER analysis for resting stages from surface sediments at stations 40 and 45 in Bay of Vlorë.

| Taxa | Av. Abund | Av. Sim | Sim/SD | Contrib% | Cum.% |
|---|---|---|---|---|---|
| **Station 40 (average similarity: 56.81)** | | | | | |
| Scrippsiella trochoidea medium type | 119.92 | 21.61 | 7.45 | 38.05 | 38.05 |
| Scrippsiella trochoidea small type | 67.73 | 15.81 | 6.14 | 27.83 | 65.87 |
| Scrippsiella trochoidea rough type | 35.13 | 6.44 | 2.78 | 11.34 | 77.21 |
| Pentapharsodinium tyrrhenicum type 1 | 27.33 | 5.18 | 8.96 | 9.13 | 86.34 |
| Calciodinellum albatrosianum | 24.53 | 4.94 | 7.24 | 8.69 | 95.03 |
| **Station 45 (average similarity: 40.37)** | | | | | |
| Scrippsiella trochoidea medium type | 174.21 | 40.03 | 5.87 | 99.16 | 99.16 |

Stations 40 and 45: average dissimilarity = 58.20.

The most abundant cyst morphotypes in the surface layers were calcareous cysts produced by species of the Calciodinellaceae family (Dinophyta). At station 40, five cyst morphotypes of this family accounted for 95% of total abundance, while at station 45, 99% was accounted for by just one cyst morphotype, Scrippsiella trochoidea medium type, confirming the lower evenness at this station.

The nMDS ordination (Figure 3, stress = 0), with the hierarchical cluster superimposed with a cutoff at 60% similarity, clearly reflects the separation between the samples from stations 40 and 45. Among these, due to its higher diversity, sample 45b is farther from samples 45a and 45c than it is from the samples of station 40.

Figure 3 nMDS plot of surface sediment samples collected at stations 40 and 45 in Bay of Vlorë. Hierarchical clustering superimposed with cutoff at 60% similarity.

## 3.3. Whole Sediment Cores

At both the investigated stations, a general decrease in total abundances was observed with depth along the sediment columns. Station 40 registered higher total abundance and diversity values than station 45 (Figure 4), with a sharp decline between the 6th and 7th centimetres; beyond this depth, total abundance remained below 100 cysts 100 g−1. In terms of species, Codonellopsis schabii cysts and Synchaeta sp. and Acartia clausi/margalefi resting eggs were continuously observed along the whole sediment column at both stations. The ciliate C. schabii was by far the most abundant taxon at station 40 (43% of total abundance), with the highest density in the 2nd cm (342 ± 192 cysts 100 g−1); as with total abundance, a sharp decrease was observed between the 6th and 7th centimetres. Other important species were the copepods Centropages sp. (181 ± 50 resting eggs 100 g−1 at the 4th cm) and Acartia spp. (67 ± 32 resting eggs 100 g−1 at the 1st cm). At station 45 the most abundant type was Acartia spp. (86 ± 57 resting eggs 100 g−1 at the 1st cm), followed by C. schabii (70 ± 60 cysts 100 g−1 at the 4th cm) and Synchaeta sp. (41 ± 23 resting eggs 100 g−1 at the 5th cm).

Figure 4 Resting stage abundance (average ± standard deviation) and Shannon index (H′) values recorded for each cm layer of the sediment cores collected at the two investigated stations in Bay of Vlorë (Albania) (panels (a)–(d)).

In the nMDS ordination (Figure 5, stress = 0.12), with superimposition of the hierarchical cluster with a cutoff at 70% similarity, all the samples from station 45 cluster together, while the samples from station 40 are widely dispersed, a sign of greater variability at this site.

Figure 5 nMDS plot of samples from each cm of sediment cores collected at stations 40 and 45 in Bay of Vlorë. Hierarchical clustering superimposed with cutoff at 70% similarity.

The assemblage structure of the two stations differed significantly at all layers (ANOSIM R = 0.655; P = 0.001), showing 59% dissimilarity (SIMPER, Table 5).

Table 5 Results of SIMPER analysis for resting stages in sediment cores collected at stations 40 and 45 in Bay of Vlorë.

| Taxa | Av. Abund | Av. Sim | Sim/SD | Contrib% | Cum.% |
|---|---|---|---|---|---|
| **Station 40 (average similarity: 44.16)** | | | | | |
| Centropages sp. | 1.77 | 8.77 | 0.86 | 19.86 | 19.86 |
| Codonellopsis schabii | 2.10 | 7.00 | 1.31 | 15.86 | 35.72 |
| Acartia clausi/margalefi | 1.56 | 6.21 | 1.22 | 14.06 | 49.78 |
| Synchaeta sp. spiny type | 1.47 | 5.31 | 1.10 | 12.02 | 61.81 |
| Penilia avirostris | 1.14 | 4.47 | 0.98 | 10.13 | 71.93 |
| Brachionus plicatilis | 0.96 | 2.90 | 0.81 | 6.56 | 78.49 |
| Stenosemella ventricosa | 0.78 | 1.47 | 0.55 | 3.32 | 81.81 |
| Strobilidium sp. | 0.73 | 1.43 | 0.51 | 3.23 | 85.04 |
| Scrippsiella spp. | 0.57 | 0.95 | 0.36 | 2.14 | 87.18 |
| Gonyaulax spp. | 0.54 | 0.71 | 0.34 | 1.60 | 88.79 |
| Strombidium conicum | 0.44 | 0.69 | 0.33 | 1.57 | 90.36 |
| **Station 45 (average similarity: 52.61)** | | | | | |
| Acartia clausi/margalefi | 1.88 | 12.13 | 2.08 | 23.05 | 23.05 |
| Synchaeta sp. spiny type | 1.73 | 10.91 | 1.59 | 20.74 | 43.79 |
| Codonellopsis schabii | 1.43 | 6.73 | 1.12 | 12.80 | 56.59 |
| Strobilidium sp. | 1.12 | 6.05 | 0.92 | 11.51 | 68.10 |
| Centropages sp. | 1.20 | 6.05 | 1.02 | 11.50 | 79.60 |
| Brachionus plicatilis | 0.84 | 2.73 | 0.63 | 5.19 | 84.79 |
| Acartia sp. 1 | 0.75 | 2.57 | 0.58 | 4.89 | 89.67 |
| Lingulodinium polyedrum | 0.74 | 2.39 | 0.59 | 4.54 | 94.22 |

Groups 40 and 45: average dissimilarity = 58.63.

## 3.4. Germination Experiments

All putatively viable (i.e., full) protistan cyst types observed were isolated and incubated under controlled conditions to obtain germination. Successful germination generally allowed us to confirm the cyst-based identification, but in some cases it enabled us to go further and discriminate between cysts sharing similar morphology. For example, Alexandrium minutum and Scrippsiella sp. 1 both have a round cyst with a thin, smooth wall and attached mucous material; Protoperidinium thorianum and Protoperidinium sp. 1 cysts are both round, brown, and smooth; and Gymnodinium nolleri and Scrippsiella sp. 4 both produce round brown cysts with a red spot inside. The germination of all these cyst types allowed us to identify these species correctly.

Cysts ascribed to the paleontological taxa Bicarinellum tricarinelloides and Calciperidinium asymmetricum both germinated, confirming that they belong to modern taxa. The active stages obtained were tentatively identified as scrippsielloid dinoflagellates. An unknown ciliate cyst, with a papula at both extremities, produced an active stage identifiable as belonging to the genus Strombidium (Figure 2).

## 4. Discussion

The total number of resting stage morphotypes recognized in the present study is particularly high compared with other studies in the Mediterranean.
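The diversity indices reported in Table 3 (Shannon H′, Margalef d, Pielou J′) and the SIMPER percentages in Tables 4 and 5 rest on standard formulas; SIMPER conventionally decomposes Bray-Curtis dissimilarity into per-taxon contributions. A minimal illustrative sketch in Python (the abundance vectors below are made up, not taken from the tables, and the natural logarithm is assumed for H′; the paper does not state which software or log base was used):

```python
import math

def shannon(abund):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxa with abundance > 0."""
    n = sum(abund)
    return -sum((x / n) * math.log(x / n) for x in abund if x > 0)

def margalef(abund):
    """Margalef richness d = (S - 1) / ln(N), with S taxa and N individuals."""
    s = sum(1 for x in abund if x > 0)
    return (s - 1) / math.log(sum(abund))

def pielou(abund):
    """Pielou evenness J' = H' / ln(S) (undefined for a single taxon)."""
    s = sum(1 for x in abund if x > 0)
    return shannon(abund) / math.log(s)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors.

    0 = identical assemblages, 1 = no shared taxa. SIMPER splits the
    average of this quantity into per-taxon contributions.
    """
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den

# Hypothetical abundance vectors for two replicates (cysts g-1 dw)
rep_a = [20.1, 181.0, 80.5, 40.2]
rep_b = [18.3, 73.3, 64.2, 18.3]
print(shannon(rep_a), margalef(rep_a), pielou(rep_a))
print(bray_curtis(rep_a, rep_b))
```

Applied to the full per-replicate abundance vectors, functions of this kind reproduce the sort of summary values shown in Table 3.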
None of these studies gave a number higher than the one reported here, despite being based on a larger geographical area (the whole North Adriatic, in [24]) or a higher number of samples (157 sediment samples, in [5]). This richness could be due to our enhanced ability, with the passage of time, to identify cysts from different species, but it could also depend on the consideration of different depths in the sediments. Indeed, the other studies mentioned only reported cysts from the sediment surface, while in the present case the type list grew by more than 60% when below-surface layers were considered.

As a consequence of its richness, the reported list adds 42 morphotypes to the Albanian list and 13 alternative morphotypes to already known taxa. This fact clearly demonstrates that the description of cyst assemblages in coastal Mediterranean areas is still far from exhaustive.

The discovery of differences between the benthic species assemblage and the plankton is partially due to the use in cyst studies of a terminology derived from paleontological studies, which has yet to be standardised with reference to modern terminology. However, it is evident that the active-stage assemblage in the water column of the Bay of Vlorë [15] differs in number and quality from that of the bottom sediments reported in the present study. By way of example, and considering only the surface sediment layer (i.e., the one most affected by recent sinking and/or resuspension), 4 different species of Scrippsiella (Dinophyta) were isolated as cysts, but only 2 were reported [15] as active stages in the water column for the whole bay. Moreover, in this study 5 different cyst types were identified for the single species S. trochoidea, differing in terms of size and wall. This is evidence of great intraspecific diversity, but it could also be a sign of the presence of cryptic species, as discussed by Montresor et al. [25], differing in cyst morphology but not in that of the swimming stage.

The rotifer Synchaeta sp. was not found in the water column, but its resting eggs were easily recognizable and abundant in the sediments. While the case of S. trochoidea confirms that much remains to be discovered about the morphological variability of cysts produced by the same species (see [26] for Dinophyta or [27] for Calanoida), Synchaeta sp. is a clear case of a species not detected in the active plankton assemblage but waiting in the sediments for a favourable moment to rejoin the water column.

Also worthy of attention is the observation of a Ciliophora cyst with two papulae on opposite sides (Figure 2), which has never been reported before.

A study of plankton composition was carried out at the same site during the same scientific cruise (January 2008) as the present study [15]. In January 2008, the phytoplankton and the microzooplankton included a total of 178 categories. Considering only the main cyst producers (dinoflagellates and ciliates), examination of the water column at 16 stations gave a total of 76 taxa (48 dinoflagellates, 28 ciliates). The present analysis of sediments, from just 2 stations, gave a total of 75 taxa. This striking similarity of values was not, however, reflected in the taxonomic composition of the 2 compartments. Indeed, 36 cyst types were identified as taxa not present in the plankton list for the same period (January 2008). This number would be even higher if we considered only plankton from stations close to the two used here for the sediments.

It was not possible to correlate cyst abundance along the sediment column with age of deposition, which would require dating of the sediment layers. In any case, our results showed that the total abundance of cysts in the upper layers was up to 10 times greater than in the lower ones.
The sharp decrease in abundance below the 5th cm of depth, at least at station 40, does, however, suggest that an event occurred at a certain moment in the history of the plankton of the Bay of Vlorë, a suggestion that clearly requires further study. Indeed, due to its position, station 40 is a candidate for studies of the history of cyst production (and their arrival in the sediment). Located in a depression on the seabed, the depth of St. 40 (−54 m) probably favours the sedimentation of fine particles and the depletion of oxygen content, so the deposition and accumulation of sinking resting stages can be considered undisturbed. In addition, the observed fall in diversity from lower to upper layers could be correlated with the growth of cultural eutrophication (i.e., urban development), as proposed for Tokyo Bay and Daja Bay [28].

The situation at St. 45 (depth 28 m) is not identical: it is near the slope of a detritus cone where materials from the river Vjosa accumulate, and marine currents possibly act at a different rate from those acting on St. 40.

Incubation of encysted forms under controlled conditions to obtain germination is a useful tool for confirming identifications made by observation of the cyst. In some cases, different species produce very similar cysts, especially when the morphology is very simple, that is, spherical, without processes or wall structures. In the present study, we observed many Dinophyta cysts with the same basic morphology, that is, a round body and a smooth brown wall with no apparent signs of paratabulation, spines, or processes. Their germination allowed us to resolve this basic type into at least 6 species. Round brown cysts are typical of Protoperidinium species [29, 30], but we also recognized Diplopsalis lenticula, Gymnodinium nolleri, and Oblea rotunda, as well as 3 additional Protoperidinium species. In the same way, it was possible to distinguish between Alexandrium minutum and Scrippsiella sp. 1, whose cysts are very similar and whose distinctive features are recognizable only after germination.

Conversely, analysis of cysts may allow us to identify species whose active stages are indistinguishable, at least by optical microscope. This is the case in the present study for the Scrippsiella group, which produces active cells that are very difficult to distinguish, although their cysts differ in terms of the type of calcareous covering, colour, and the presence of spines [31, 32].

Worthy of special attention here is the recovery during the present study of Dinophyta cysts whose active stages have yet to be identified. As cysts, they are still classified with a paleontological name, in accordance with their description from Pleistocene to Pliocene sediment strata in the Mediterranean [33]. Two of these cyst types (Bicarinellum tricarinelloides and Calciperidinium asymmetricum) were successfully germinated, producing motile forms recognisable as belonging to the Calciodinellaceae family. In any case, their frequent observation in surface sediments in other Mediterranean areas [23, 34] and in sediment traps [35] is a clear sign that these species are present in the water column today and need to be better identified.

---

*Source: 101682-2013-06-20.xml*
# Plankton Resting Stages in the Marine Sediments of the Bay of Vlorë (Albania)

**Authors:** Fernando Rubino; Salvatore Moscatello; Manuela Belmonte; Gianmarco Ingrosso; Genuario Belmonte

**Journal:** International Journal of Ecology (2013)

**Category:** Earth and Environmental Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2013/101682
---

## Abstract

In the frame of the INTERREG III CISM project, sediment cores were collected at 2 stations in the Gulf of Vlorë to study the plankton resting stage assemblages. A total of 87 morphotypes, produced by Dinophyta, Ciliophora, Rotifera, and Crustacea, were identified. In 22 cases, the cyst belonged to a species absent from the plankton of the same period. The most abundant resting stages were those produced by Scrippsiella species (Dinophyta). Some calcareous cysts were identified as fossil species associated with Pleistocene to Pliocene sediments, although they were also found in surface sediments and some of them successfully germinated, thus proving their modern status. Total abundance generally decreased with sediment depth at station 40, while station 45 showed distinct maxima at 3 and 8 cm below the sediment surface. The depth of peak abundance in the sediment varied with species. This paper presents the first study of the plankton resting stages in the Bay of Vlorë. The study confirmed the utility of this type of investigation for a more correct evaluation of species diversity. In addition, the varying distribution with sediment depth suggests that this field could be of some importance in determining the history of species assemblages.

---

## Body

## 1. Introduction

Resting stages produced by plankton organisms in temperate seas accumulate in the bottom sediments of confined coastal areas [1]. Their assemblages represent reservoirs of biodiversity which sustain the high resilience of plankton communities, providing recruits of propagules at each return of favourable conditions, in accordance with the so-called Supply Vertical Ecology model [2]. The existence of benthic stages in the life cycles of holoplankton provides a new key for understanding the role of life cycles in the pelagic-benthic relationship in coastal waters [3, 4].
Consequently, assessments of biodiversity at marine sites should take account of the unexpressed fraction of the plankton community contained in the bottom sediments by performing integrated sampling programs [5, 6]. Despite the proven importance of resting stage banks in coastal marine ecology, the issue of "resting versus active" plankters has commonly been considered for single taxa and only rarely from the whole-community point of view. This is probably due to the great complexity (compositional, functional, and distributional) of resting stage banks. Indeed, it has been demonstrated that at any given moment the species assemblages in bottom sediments (as resting stages) are quite different from the species detectable in the water column (as active stages) [6, 7]. However, the study of such marine "seed banks" (as understood by [3], analogous to terrestrial seed banks in forest soils) is complex on many levels. Resting stages share a common morphological plan despite belonging to organisms from different kingdoms [8]. Consequently, resting stage morphology differs sharply from that of active stages, and in some cases their identification is highly problematic. However, it is also true that for some naked dinoflagellates, or for thecate ones with a similar thecal plate pattern, the cysts are quite different, allowing correct identification without the use of SEM or molecular techniques. Cyst-producing dinoflagellates differ in the length of their life cycle and/or the timing of cyst production, and the resting capacity of resting stages also varies. They are generally programmed to rest for the duration of the adverse period, but fractions of them can also rest for longer periods, allowing the population to reappear decades later ([9–11] for copepods; [12, 13] for dinoflagellates). Hairston et al. [14] reported a rest of more than 300 years for a calanoid resting egg, albeit of a freshwater species.

The scarcity of literature on whole resting stage communities encouraged us to describe situations in various parts of the Mediterranean, in order to obtain a rich data set useful for building models and in experimental situations. Here, a detailed description of the structure of the marine "seed bank" produced by plankton in the Bay of Vlorë is reported.

The present study also focuses on an Albanian bay that has not been extensively studied from the marine biodiversity point of view. The data from the benthos are compared with analyses of the phytoplankton and microzooplankton [15] in the water column to assess with more precision the biodiversity of the plankton in the Bay of Vlorë.

Moscatello et al. [15] reported that the microzooplankton community of the Bay of Vlorë comprised more than 200 taxa, of which 97 were classified as "seasonally absent." The aim of the present paper is to determine whether these absences in the water column correspond to resting stages in the sediments.

## 2. Materials and Methods

### 2.1. Study Site

An oceanographic campaign was carried out in the Bay of Vlorë from 17th to 23rd of January 2008 aboard the oceanographic vessel "Universitatis". This survey was conducted as part of the PIC Interreg III Italy–Albania Project, providing technical assistance for the management of an International Centre of Marine Sciences (CISM) in Albania. The sampling period of the present study (January 2008) coincided with that of Moscatello et al. [15], who investigated active plankton in the water column in the same area.

In order to investigate the presence and distribution of resting stages produced by plankton species in the area, 2 stations were chosen, representing 2 different types of environment: a deep zone (station 40, depth: 54 m), with sediments of terrigenous mud dominated by Labidoplax digitata (Holothuroidea), and a shallower site (station 45, depth: 28 m), with sediments of terrigenous mud dominated by Turritella communis (Gastropoda) (Figure 1) (for the classification of mud biocenoses in the Bay of Vlorë, see [16]).

Figure 1 Map of study area showing location of two sampling stations (40, 45) in Bay of Vlorë (Albania).

### 2.2. Sampling Procedure

Samples of bottom sediments were collected in three replicates (named 40 a, b, c and 45 a, b, c) using a Van Veen grab with upper windows that allowed the collection of undisturbed sediment cores. At each station, 2 different PVC corers (height: 30 cm; inner diameter: 4 and 8 cm) were used in order to obtain 2 different sets of samples. The smaller core was processed to obtain cysts of protists; the larger core was processed to obtain resting eggs of metazoans. This differentiation was necessary because metazoan resting stages are less abundant, so a greater amount of sediment is required. Moreover, their walls are only organic, allowing the adoption of a centrifugation method coupled with filtration to obtain a "clean" sample from a relatively large quantity of sediment. In contrast, protistan cysts are more abundant and have different types of walls (calcareous, siliceous, organic), which complicates the procedure when the whole cyst bank is studied. Thus, the most fruitful method of separating cysts from sediment is filtration through meshes of different sizes.

After extraction, sediment cores were immediately subdivided into 1 cm thick layers, down to the 15th cm from the sediment surface.
The thickness of 15 cm was chosen because in previous studies we noted that abundances diminished significantly at depths of more than 7–10 cm below the sediment surface [1, 12]. The outer edge of each layer was discarded to avoid contamination with material from the overlying layer during the insertion of the corer into the sediments. Once obtained, the samples were stored in the dark at 5°C until treatment in the laboratory.

### 2.3. Protistan Cysts (20–125 μm)

In the laboratory the small-core samples were treated using a sieving technique consisting of the following steps.

(i) The entire sample is homogenized and then subsampled, obtaining 3–5 mL of wet sediment, which is passed through a 20 μm mesh (Endecott’s LTD steel sieves, ISO 3310-1, London, England), using natural filtered (0.45 μm) seawater (Gulf of Taranto).

(ii) The retained fraction is ultrasonified for 1 min and again passed through a series of sieves (125, 75, and 20 μm mesh sizes), obtaining a fine-grained fraction (20–75 μm) containing most of the protistan cysts and a 75–125 μm fraction with the larger ones (e.g., Lingulodinium spp.) and the zooplankton resting eggs. The material retained by the 125 μm mesh was not considered.

No chemicals were used to disaggregate sediment particles, in order to avoid the dissolution of calcareous and siliceous cyst walls. Qualitative and quantitative analyses were carried out under an inverted microscope (Zeiss Axiovert S100 equipped with a Nikon Coolpix 990 digital camera) at 200× and 320× magnification. Both full (i.e., probably viable) and empty (i.e., probably germinated) cysts were considered. At least 1/5 of the finer fraction and all of the >75 μm fraction were analyzed. All the resting stage morphotypes were identified on the basis of published descriptions ([17–19] for dinoflagellates; [20, 21] for ciliates) and germination experiments. Identification was performed to the lowest possible taxonomic level. As a rule, modern biological names were used.
The paleontological name is reported only for morphotypes whose active stage was not known. A fixed aliquot (≈5 g) of wet sediment from each sample was oven-dried at 70°C for 24 h to calculate the water content and obtain quantitative data for each taxon as cysts g−1 of dry sediment.

### 2.4. Metazoan Resting Eggs (45–200 μm)

For the analysis of the large-core samples the Onbè [22] method was used, slightly modified by using 45 and 200 μm mesh sizes to obtain a size range typical of mesozooplankton resting eggs. For each sample a fixed quantity of wet sediment was treated (≈45 cm³). Only full (i.e., probably viable) resting eggs were counted, and quantitative data for each taxon are reported as resting eggs × 100 g−1 of dry sediment.

### 2.5. Germination Experiments

To achieve germination, all putatively viable cysts of protists isolated from the sediment were individually placed in Nunclon microwells (Nalge Nunc International, Roskilde, Denmark) containing ≈1 mL of natural sterilized seawater. Cysts were incubated at 20°C, under a 12:12 h LD cycle and 100 μE m−2 sec−1 irradiance, and examined on a daily basis until germination, up to a maximum of 30 days. The incubation conditions were chosen on the basis of previous studies [5, 6, 23], and have proved to be effective for a large number of species.

### 2.6. Data Analysis

For the 1st cm of the cores, data on resting stage abundance from the 2 sampling stations were obtained by merging the data from the 3 replicates of the 2 sets of samples (those for protistan cysts and those for metazoan resting eggs). For samples below the first cm, only the 45–200 μm fraction was used, in order to facilitate and accelerate the analysis.
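The dry-weight normalization described above (an ≈5 g wet aliquot oven-dried to estimate water content, with counts then expressed per gram of dry sediment) can be sketched as follows. This is an illustrative reconstruction, not code from the study, and all numbers in the example are hypothetical:

```python
# Sketch of the dry-weight normalization of cyst counts: an aliquot of wet
# sediment is oven-dried to estimate its water content, and raw counts from
# a known wet-sediment mass are then expressed as cysts per gram of dry
# sediment. Function names and example values are ours, for illustration.

def water_fraction(wet_g: float, dry_g: float) -> float:
    """Fraction of the wet aliquot's mass that is water."""
    return (wet_g - dry_g) / wet_g

def cysts_per_g_dry(count: float, sample_wet_g: float, frac_water: float) -> float:
    """Convert a raw cyst count to cysts per gram of dry sediment."""
    dry_g = sample_wet_g * (1.0 - frac_water)
    return count / dry_g

# Example: a 5 g aliquot drying to 3 g gives a water fraction of 0.4;
# 60 cysts counted in 4 g of wet sediment then correspond to
# 60 / (4 * 0.6) = 25 cysts per g of dry sediment.
fw = water_fraction(5.0, 3.0)
abundance = cysts_per_g_dry(60, 4.0, fw)
```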
From the abundance matrices of both surface and deeper sediments (taxa × stations and taxa × station × cm, respectively), the Bray-Curtis similarity measure was calculated after 4th-root transformation, in order to allow rare species to become more evident. The PRIMER “DIVERSE” function (Primer-E Ltd, Plymouth, UK) was used to calculate the taxonomic richness (S), taxon abundance (N), Margalef index (d), Shannon-Wiener diversity index (H′), and Pielou’s evenness index (J′) for each sample. The relationships between the samples collected at the 2 stations were analyzed by means of nonmetric multidimensional scaling (nMDS) with superimposed hierarchical clustering, with a cutoff at 60% similarity for surface sediments and at 70% for the sediment core as a whole, while the SIMPER routine was used to identify relative dissimilarity and the taxa that contributed most to the differences. The statistical significance of the differences between the 2 stations was assessed by means of a 2-way crossed analysis of similarities (ANOSIM) on the Bray-Curtis similarity matrix based on the stratigraphy. All univariate and multivariate analyses were performed using PRIMER v.6 (Primer-E Ltd, Plymouth, UK).

## 3. Results

### 3.1. Total Biodiversity

Resting stages were found at all levels of the 15 cm sediment core columns from the 2 investigated sites in the Bay of Vlorë. Merging the data from the 2 sets of samples (20–125 μm and 45–200 μm) and considering both full (probably viable) and empty (probably germinated) forms from each station, 87 different resting stage morphotypes produced by plankton were recognized (Table 1). Most of them (59, belonging to 20 genera) were dinoflagellates, 16 were ciliates (9 genera), 4 rotifers (2 genera), and 5 crustaceans (4 genera), while 3 (1 protistan cyst type and 2 resting eggs) remained unidentified. Station 40 showed higher biodiversity, with 79 morphotypes, 35 of them exclusive to the site.
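The core computations behind the Data Analysis section (4th-root transformation, Bray-Curtis similarity, and the DIVERSE indices H′, J′, and d) were run in PRIMER v.6. A minimal re-implementation, shown only to make the definitions concrete, might look like the following; function names are ours, and H′ is computed here with natural logarithms:

```python
import math

def fourth_root(abundances):
    """4th-root transform: down-weights dominant taxa so rare species count more."""
    return [a ** 0.25 for a in abundances]

def bray_curtis_similarity(x, y):
    """Bray-Curtis similarity (%) between two samples over the same taxon list."""
    diff = sum(abs(a - b) for a, b in zip(x, y))
    total = sum(a + b for a, b in zip(x, y))
    return 100.0 * (1.0 - diff / total)

def shannon_h(abundances):
    """Shannon-Wiener diversity H' (natural log)."""
    n = sum(abundances)
    return -sum((a / n) * math.log(a / n) for a in abundances if a > 0)

def pielou_j(abundances):
    """Pielou's evenness J' = H' / ln(S), for samples with at least 2 taxa."""
    s = sum(1 for a in abundances if a > 0)
    return shannon_h(abundances) / math.log(s)

def margalef_d(abundances):
    """Margalef richness d = (S - 1) / ln(N)."""
    s = sum(1 for a in abundances if a > 0)
    n = sum(abundances)
    return (s - 1) / math.log(n)

# Two hypothetical samples: the 4th-root transform narrows the gap caused
# by a dominant taxon, raising the Bray-Curtis similarity between them.
a, b = [16.0, 81.0, 1.0], [1.0, 16.0, 1.0]
sim_raw = bray_curtis_similarity(a, b)
sim_transformed = bray_curtis_similarity(fourth_root(a), fourth_root(b))
```

As a design note, the transform is applied before the similarity calculation precisely so that assemblage comparisons are not driven solely by the few very abundant morphotypes (here, the calcareous Scrippsiella-type cysts).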
At station 45, 52 morphotypes were observed, 8 of them being exclusive.

Table 1. List of resting stage (cyst) morphotypes recovered from sediments of the Bay of Vlorë (Albania). Taxon St.40 St.45 Dinoflagellates Alexandrium minutum Halim ⚫ ⚫ ○ Alexandrium tamarense (Lebour) Balech ⚫ Alexandrium sp. 1 ⚫ Alexandrium sp. 2 ⚫ Bicarinellum tricarinelloides Versteegh ⚫ ○ ⚫ Calcicarpinum perfectum Versteegh ○ Calciodinellum albatrosianum (Kamptner) Janofske and Karwath ⚫ ○ ⚫ ○ Calciodinellum operosum (Deflandre) Montresor ⚫ ○ ○ Calciperidinium asymmetricum Versteegh ○ Cochlodinium polykrikoides Margalef type 1 ⚫ Cochlodinium polykrikoides Margalef type 2 ○ Diplopelta parva (Abé) Matsuoka ○ Diplopsalis lenticula Bergh ⚫ ○ Follisdinellum splendidum Versteegh ○ Gonyaulax group ⚫ ○ ⚫ ○ Gymnodinium impudicum (Fraga and Bravo) G. Hansen and Möestrup ⚫ Gymnodinium nolleri Ellegaard and Möestrup ⚫ Gymnodinium sp. 1 ⚫ ○ ⚫ Lingulodinium polyedrum (Stein) Dodge ⚫ ○ ⚫ ○ Melodomuncula berlinensis Versteegh ⚫ ○ ⚫ Nematodinium armatum (Dogiel) Kofoid and Swezy ⚫ ○ Oblea rotunda (Lebour) Balech ex Sournia ⚫ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 1 ⚫ ○ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 2 ⚫ ⚫ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 1 ⚫ ○ ⚫ ○ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 2 ○ Polykrikos kofoidii Chatton ○ Polykrikos schwartzii Bütschli ○ Protoperidinium compressum (Abé) Balech ○ Protoperidinium conicum (Gran) Balech ⚫ ○ Protoperidinium oblongum (Aurivillius) Parke and Dodge ⚫ Protoperidinium parthenopes Zingone and Montresor ⚫ Protoperidinium steidingerae Balech ⚫ Protoperidinium subinerme (Paulsen) Loeblich III ○ Protoperidinium thorianum (Paulsen) Balech ⚫ ○ ○ Protoperidinium sp. 1 ⚫ ○ ○ Protoperidinium sp. 5 ⚫ ○ Protoperidinium sp. 6 ⚫ Pyrophacus horologium Stein ⚫ ⚫ Scrippsiella cf.
crystallina Lewis ○ Scrippsiella lachrymosa Lewis ⚫ ○ ⚫ ○ Scrippsiella ramonii Montresor ⚫ ○ ○ Scrippsiella trochoidea (Stein) Loeblich rough type ⚫ ⚫ Scrippsiella trochoidea (Stein) Loeblich smooth type ⚫ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich large type ⚫ ○ ⚫ Scrippsiella trochoidea (Stein) Loeblich medium type ⚫ ○ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich small type ⚫ ○ ⚫ Scrippsiella sp. 1 ⚫ ○ ⚫ ○ Scrippsiella sp. 4 ⚫ ⚫ Scrippsiella sp. 5 ⚫ ⚫ Scrippsiella sp. 6 ⚫ ⚫ Scrippsiella sp. 8 ⚫ ○ ⚫ Thoracosphaera sp. ⚫ ⚫ Dinophyta sp. 2 ⚫ ⚫ Dinophyta sp. 7 ⚫ ⚫ Dinophyta sp. 17 ⚫ Dinophyta sp. 26 ⚫ Dinophyta sp. 30 ⚫ Dinophyta sp. 33 ⚫ ⚫ Ciliates Codonella aspera Kofoid and Campbell ⚫ Codonella orthoceras Heackel ⚫ Codonellopsis monacensis (Rampi) Balech ⚫ Codonellopsis schabii (Brandt) Kofoid and Campbell ⚫ ⚫ Epiplocylis undella (Ostenfeld and Schmidt) Jörgensen ⚫ ⚫ Rabdonella spiralis (Fol) Brandt ⚫ Stenosemella ventricosa (Claparède and Lachmann) Jörgensen ⚫ ⚫ Strobilidium sp. ⚫ ⚫ Strombidium cf. acutum (Leegaard) Kahl ⚫ ⚫ Strombidium conicum (Lohman) Wulff ○ ⚫ Tintinnopsis beroidea Stein ⚫ Tintinnopsis butschlii Kofoid and Campbell ⚫ Tintinnopsis campanula Ehrenberg ⚫ Tintinnopsis cylindrica Daday ⚫ ⚫ Tintinnopsis radix (Imhof) ⚫ Undella claparedei (Entz) Daday ⚫ Rotifers Brachionus plicatilis Müller ⚫ Synchaeta sp. spiny type ⚫ ○ Synchaeta sp. rough type ⚫ Synchaeta sp. mucous type ⚫ Crustaceans Cladocerans Penilia avirostris Dana ⚫ ⚫ Crustaceans Copepods Acartia clausi/margalefi ⚫ Acartia sp. 1 ⚫ ○ ⚫ ○ Centropages sp. ⚫ ⚫ ○ Paracartia latisetosa (Krizcaguin) ⚫ ○ ⚫ Unidentified Cyst type 1 ⚫ Resting Egg 1 ⚫ Resting Egg 9 ⚫ ⚫: cysts observed as full (i.e., probably viable). ○: cysts observed as germinated (i.e., empty).

Moreover, analysis of the empty forms found among the 20–125 μm fraction led to the recognition of 11 morphotypes, all dinoflagellates. A total of 36 cyst types were identified as taxa missing from the plankton list of the same period (January 2008; [15]).
Partly due to nomenclature problems, uncertainty of identification, and differences in the examined periods, it was possible to ascertain the contemporaneous presence of species in both the pelagic and the benthic compartments only in a very few cases. Identification was frequently impossible due to the presence of previously unreported resting stage morphologies. In such cases, germination experiments allowed the cysts to be attributed at least to a high-level taxon, as with a Strombidium (Ciliophora) cyst, whose morphology is reported here for the first time (see Figure 2).

Figure 2. Photographs of a Ciliophora cyst, with two opposite papulae (a); its empty shell (hatching occurs from one of the two papulae) (b); and the germinated active stage, a Strombidium ciliate (c).

### 3.2. Surface Sediments

The analysis of surface sediments (the 1st cm of the cores), that is, those most affected by cyst deposition and resuspension/germination, revealed sharp differences between the 2 analysed stations. In total, 36 different resting stage morphotypes were observed in this first layer (Table 2): 23 produced by dinoflagellates, 6 by ciliates, 2 by rotifers, 4 by crustaceans, and 1 undetermined. Even considering the small amount of available data, station 40 showed higher biodiversity, in terms of both number of taxa and diversity indices (see Table 3). Total abundances were comparable, however, with 389 ± 127 cysts g−1 (average ± s.d.) at station 40 versus 329 ± 123 cysts g−1 at station 45. SIMPER showed 58% dissimilarity between the assemblages of the two sites (Table 4).

Table 2. Abundance (cysts g−1 dw) of probably viable resting stages (cysts) observed in surface sediments of the two stations in the Bay of Vlorë (Albania). Values from the three replicates are reported.

| Taxon | 40a | 40b | 40c | 45a | 45b | 45c |
|---|---|---|---|---|---|---|
| Calciodinellum albatrosianum | 20.1 | 18.3 | 35.1 | 0.0 | 0.0 | 0.0 |
| Calciodinellum operosum | 0.0 | 0.0 | 11.7 | 0.0 | 0.0 | 0.0 |
| Gonyaulax group | 20.1 | 0.0 | 0.0 | 0.0 | 0.0 | 59.6 |
| Gymnodinium sp. 1 | 20.1 | 9.2 | 0.0 | 0.0 | 22.2 | 0.0 |
| Lingulodinium polyedrum | 40.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Melodomuncula berlinensis | 40.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Oblea rotunda | 0.0 | 0.0 | 11.7 | 0.0 | 0.0 | 0.0 |
| Pentapharsodinium dalei type 1 | 20.1 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Pentapharsodinium tyrrhenicum type 1 | 40.2 | 18.3 | 23.4 | 0.0 | 11.1 | 0.0 |
| Protoperidinium sp. 1 | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Protoperidinium sp. 5 | 0.0 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella ramonii | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Scrippsiella trochoidea rough type | 40.2 | 18.3 | 46.8 | 0.0 | 0.0 | 0.0 |
| Scrippsiella trochoidea smooth type | 0.0 | 9.2 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella trochoidea medium type | 181.0 | 73.3 | 105.4 | 173.1 | 111.1 | 238.4 |
| Scrippsiella trochoidea small type | 80.5 | 64.2 | 58.5 | 230.8 | 0.0 | 0.0 |
| Scrippsiella sp. 1 | 20.1 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella sp. 4 | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Thoracosphaera sp. 1 | 0.0 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Dinophyta sp. 2 | 0.0 | 0.0 | 23.4 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 17 | 0.0 | 18.3 | 0.0 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 26 | 0.0 | 18.3 | 0.0 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 33 | 0.0 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Codonellopsis schabii | 1.0 | 0.0 | 0.9 | 0.6 | 0.3 | 0.5 |
| Stenosemella ventricosa | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Strobilidium sp. | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Strombidium acutum | 0.0 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Tintinnopsis cylindrica | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.1 |
| Undella claparedei | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Brachionus plicatilis | 0.2 | 0.0 | 0.1 | 0.3 | 0.0 | 0.1 |
| Synchaeta sp. spiny type | 0.3 | 0.2 | 0.0 | 0.2 | 0.0 | 0.1 |
| Penilia avirostris | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Acartia clausi/margalefi | 1.0 | 0.3 | 0.7 | 1.5 | 0.3 | 0.8 |
| Acartia sp. 1 | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Centropages sp. | 0.3 | 0.2 | 0.0 | 0.2 | 0.1 | 0.2 |
| Cyst type 1 | 0.0 | 0.0 | 0.0 | 57.7 | 0.0 | 0.0 |

Table 3. Abundance and diversity indices calculated for resting stages in surface sediments at the two stations investigated in the Bay of Vlorë.

| | Abundance (cysts g−1 dw) | Total density (cysts g−1 dw) | S | d | H′ | J′ |
|---|---|---|---|---|---|---|
| Station 40 | 389 ± 127 | 1167 | 18 ± 2.7 | 2.9 ± 0.3 | 2.2 ± 0.1 | 0.7 ± 0.1 |
| Station 45 | 329 ± 123 | 987 | 11 ± 4.4 | 1.8 ± 0.9 | 0.5 ± 0.2 | 0.5 ± 0.2 |

Abundance: average ± standard deviation from three replicates.
Total density: sum of cyst abundances observed in the three replicates from each station. S: number of taxa identified (average ± standard deviation). d: Margalef diversity index. H′: Shannon diversity index. J′: Pielou’s evenness index.

Table 4. Results of SIMPER analysis for resting stages from surface sediments at stations 40 and 45 in the Bay of Vlorë.

| Taxa | Av. Abund | Av. Sim | Sim/SD | Contrib% | Cum.% |
|---|---|---|---|---|---|
| **Station 40** (average similarity: 56.81) | | | | | |
| Scrippsiella trochoidea medium type | 119.92 | 21.61 | 7.45 | 38.05 | 38.05 |
| Scrippsiella trochoidea small type | 67.73 | 15.81 | 6.14 | 27.83 | 65.87 |
| Scrippsiella trochoidea rough type | 35.13 | 6.44 | 2.78 | 11.34 | 77.21 |
| Pentapharsodinium tyrrhenicum type 1 | 27.33 | 5.18 | 8.96 | 9.13 | 86.34 |
| Calciodinellum albatrosianum | 24.53 | 4.94 | 7.24 | 8.69 | 95.03 |
| **Station 45** (average similarity: 40.37) | | | | | |
| Scrippsiella trochoidea medium type | 174.21 | 40.03 | 5.87 | 99.16 | 99.16 |

Stations 40 and 45: average dissimilarity = 58.20.

The most abundant cyst morphotypes in the surface layers were calcareous cysts produced by species of the family Calciodinellaceae (Dinophyta). At station 40, five cyst morphotypes of this family accounted for 95% of total abundance, while at station 45, 99% was accounted for by just one cyst morphotype, Scrippsiella trochoidea medium type, confirming the lower evenness at this station. The nMDS ordination (Figure 3, stress = 0), with the hierarchical cluster superimposed with a cutoff at 60% similarity, clearly reflects the separation between the samples from stations 40 and 45. Among these, due to its higher diversity, sample 45b is farther from samples 45a and 45c than it is from the samples of station 40.

Figure 3. nMDS plot of surface sediment samples collected at stations 40 and 45 in the Bay of Vlorë. Hierarchical clustering superimposed with a cutoff at 60% similarity.

### 3.3. Whole Sediment Cores

At both the investigated stations, a general decrease in total abundances was observed with depth along the sediment columns.
At station 40, higher total abundance and diversity values than at station 45 were registered (Figure 4), with a sharp decline between the 6th and 7th centimetres. Beyond this depth, total abundance remained below 100 cysts 100 g−1. In terms of species, Codonellopsis schabii cysts and Synchaeta sp. and Acartia clausi/margalefi resting eggs were continuously observed along the whole sediment column at both stations. The ciliate C. schabii was by far the most abundant taxon at station 40 (43% of total abundance), with density highest in the 2nd cm (342 ± 192 cysts 100 g−1); as with total abundance, a sharp decrease was observed between the 6th and 7th centimetres. Other important species were the copepods Centropages sp. (181 ± 50 resting eggs 100 g−1 at the 4th cm) and Acartia spp. (67 ± 32 resting eggs 100 g−1 at the 1st cm). At station 45 the most abundant type was Acartia spp. (86 ± 57 resting eggs 100 g−1 at the 1st cm), followed by C. schabii (70 ± 60 cysts 100 g−1 at the 4th cm) and Synchaeta sp. (41 ± 23 resting eggs 100 g−1 at the 5th cm).

Figure 4. Resting stage abundance (average ± standard deviation) and Shannon index (H′) values recorded for each cm layer of the sediment cores collected at the two investigated stations in the Bay of Vlorë (Albania). Panels (a)–(d).

In the nMDS ordination (Figure 5, stress = 0.12), with superimposition of the hierarchical cluster with a cutoff at 70% similarity, all the samples from station 45 cluster together, while the samples from station 40 are widely dispersed, a sign of greater variability at this site.

Figure 5. nMDS plot of samples from each cm of the sediment cores collected at stations 40 and 45 in the Bay of Vlorë. Hierarchical clustering superimposed with a cutoff at 70% similarity.

The assemblage structure of the two stations differed significantly at all layers (ANOSIM R = 0.655; P = 0.001), showing 59% dissimilarity (SIMPER, Table 5).

Table 5. Results of SIMPER analysis for resting stages in sediment cores collected at stations 40 and 45 in the Bay of Vlorë.

| Taxa | Av. Abund | Av. Sim | Sim/SD | Contrib% | Cum.% |
|---|---|---|---|---|---|
| **Station 40** (average similarity: 44.16) | | | | | |
| Centropages sp. | 1.77 | 8.77 | 0.86 | 19.86 | 19.86 |
| Codonellopsis schabii | 2.10 | 7.00 | 1.31 | 15.86 | 35.72 |
| Acartia clausi/margalefi | 1.56 | 6.21 | 1.22 | 14.06 | 49.78 |
| Synchaeta sp. spiny type | 1.47 | 5.31 | 1.10 | 12.02 | 61.81 |
| Penilia avirostris | 1.14 | 4.47 | 0.98 | 10.13 | 71.93 |
| Brachionus plicatilis | 0.96 | 2.90 | 0.81 | 6.56 | 78.49 |
| Stenosemella ventricosa | 0.78 | 1.47 | 0.55 | 3.32 | 81.81 |
| Strobilidium sp. | 0.73 | 1.43 | 0.51 | 3.23 | 85.04 |
| Scrippsiella spp. | 0.57 | 0.95 | 0.36 | 2.14 | 87.18 |
| Gonyaulax spp. | 0.54 | 0.71 | 0.34 | 1.60 | 88.79 |
| Strombidium conicum | 0.44 | 0.69 | 0.33 | 1.57 | 90.36 |
| **Station 45** (average similarity: 52.61) | | | | | |
| Acartia clausi/margalefi | 1.88 | 12.13 | 2.08 | 23.05 | 23.05 |
| Synchaeta sp. spiny type | 1.73 | 10.91 | 1.59 | 20.74 | 43.79 |
| Codonellopsis schabii | 1.43 | 6.73 | 1.12 | 12.80 | 56.59 |
| Strobilidium sp. | 1.12 | 6.05 | 0.92 | 11.51 | 68.10 |
| Centropages sp. | 1.20 | 6.05 | 1.02 | 11.50 | 79.60 |
| Brachionus plicatilis | 0.84 | 2.73 | 0.63 | 5.19 | 84.79 |
| Acartia sp. 1 | 0.75 | 2.57 | 0.58 | 4.89 | 89.67 |
| Lingulodinium polyedrum | 0.74 | 2.39 | 0.59 | 4.54 | 94.22 |

Groups 40 and 45: average dissimilarity = 58.63.

### 3.4. Germination Experiments

All putatively viable (i.e., full) protistan cyst types observed were isolated and incubated under controlled conditions to obtain germination. Successful germination generally allowed us to confirm the cyst-based identification, but in some cases it enabled us to go further and discriminate between cysts sharing a similar morphology. For example, Alexandrium minutum and Scrippsiella sp. 1 both have a round cyst with a thin, smooth wall with mucous material attached; Protoperidinium thorianum and Protoperidinium sp. 1 cysts are both round, brown, and smooth; and Gymnodinium nolleri and Scrippsiella sp. 4 both produce round-brown cysts with a red spot inside.
The germination of all these cyst types allowed us to correctly identify these species.Cysts ascribed to the paleontologicaltaxa Bicarinellum tricarinelloides and Calciperidinium asymmetricum both germinated, thus confirming that they belong to modern taxa. The active stages obtained were tentatively identified as scrippsielloid dinoflagellates.An unknown ciliate cyst, with a papula at both extremities, produced an active stage identifiable as belonging to theStrombidium genus (Figure 2). ## 3.1. Total Biodiversity Resting stages were found at all levels of the 15 cm sediment core columns from the 2 investigated sites in the Gulf of Vlorë.Merging the data from the 2 sets of samples (20–125μm and 45–200 μm) and considering both full (probably viable) and empty (probably germinated) forms from each station, 87 different resting stage morphotypes produced by plankton were recognized (Table 1). Most of them (59, belonging to 20 genera) were dinoflagellates, 16 were ciliates (9 genera), 4 rotifers (2 genera), and 5 crustaceans (4 genera), while 3 (1 protistan cyst type and 2 resting eggs) remained unidentified. Station 40 showed higher biodiversity, with 79 morphotypes, 35 of them exclusive to the site. At station 45, 52 morphotypes were observed, 8 of them being exclusive.Table 1 List of resting stage (cyst) morphotypes recovered from sediments of Bay of Vlorë (Albania). Taxon St.40 St.45 Dinoflagellates Alexandrium minutum Halim ⚫ ⚫ ○ Alexandrium tamarense(Lebour) Balech ⚫ Alexandrium sp. 1 ⚫ Alexandriumsp. 
2 ⚫ Bicarinellum tricarinelloides Versteegh ⚫ ○ ⚫ Calcicarpinum perfectum Versteegh ○ Calciodinellum albatrosianum (Kamptner) Janofske and Karwath ⚫ ○ ⚫ ○ Calciodinellum operosum (Deflandre) Montresor ⚫ ○ ○ Calciperidinium asymmetricum Versteegh ○ Cochlodinium polykrikoidesMargalef type 1 ⚫ Cochlodinium polykrikoidesMargalef type 2 ○ Diplopelta parva (Abé) Matsuoka ○ Diplopsalis lenticula Bergh ⚫ ○ Follisdinellum splendidum Versteegh ○ Gonyaulax group ⚫ ○ ⚫ ○ Gymnodinium impudicum(Fraga and Bravo) G. Hansen and Möestrup ⚫ Gymnodinium nolleri Ellegaard and Möestrup ⚫ Gymnodiniumsp. 1 ⚫ ○ ⚫ Lingulodinium polyedrum (Stein) Dodge ⚫ ○ ⚫ ○ Melodomuncula berlinensis Versteegh ⚫ ○ ⚫ Nematodinium armatum (Dogiel) Kofoid and Swezy ⚫ ○ Oblea rotunda (Lebour) Balech ex Sournia ⚫ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 1 ⚫ ○ ⚫ Pentapharsodinium dalei Indelicato and Loeblich type 2 ⚫ ⚫ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 1 ⚫ ○ ⚫ ○ Pentapharsodinium tyrrhenicum Montresor, Zingone, and Marino type 2 ○ Polykrikos kofoidii Chatton ○ Polykrikos schwartzii Bütschli ○ Protoperidinium compressum (Abé) Balech ○ Protoperidinium conicum (Gran) Balech ⚫ ○ Protoperidinium oblongum (Aurivillius) Parke and Dodge ⚫ Protoperidinium parthenopesZingone and Montresor ⚫ Protoperidinium steidingerae Balech ⚫ Protoperidinium subinerme (Paulsen) Loeblich III ○ Protoperidinium thorianum (Paulsen) Balech ⚫ ○ ○ Protoperidiniumsp. 1 ⚫ ○ ○ Protoperidinium sp. 5 ⚫ ○ Protoperidiniumsp. 6 ⚫ Pyrophacus horologium Stein ⚫ ⚫ Scrippsiellacf. crystallina Lewis ○ Scrippsiella lachrymosa Lewis ⚫ ○ ⚫ ○ Scrippsiella ramonii Montresor ⚫ ○ ○ Scrippsiella trochoidea (Stein) Loeblich rough type ⚫ ⚫ Scrippsiella trochoidea (Stein) Loeblich smooth type ⚫ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich large type ⚫ ○ ⚫ Scrippsiella trochoidea (Stein) Loeblich medium type ⚫ ○ ⚫ ○ Scrippsiella trochoidea (Stein) Loeblich small type ⚫ ○ ⚫ Scrippsiellasp. 1 ⚫ ○ ⚫ ○ Scrippsiellasp. 
4 ⚫ ⚫ Scrippsiellasp. 5 ⚫ ⚫ Scrippsiellasp. 6 ⚫ ⚫ Scrippsiellasp. 8 ⚫ ○ ⚫ Thoracosphaerasp. ⚫ ⚫ Dinophyta sp. 2 ⚫ ⚫ Dinophyta sp. 7 ⚫ ⚫ Dinophyta sp. 17 ⚫ Dinophyta sp. 26 ⚫ Dinophyta sp. 30 ⚫ Dinophyta sp. 33 ⚫ ⚫ Ciliates Codonella asperaKofoid and Campbell ⚫ Codonella orthoceras Heackel ⚫ Codonellopsis monacensis(Rampi) Balech ⚫ Codonellopsis schabii (Brandt) Kofoid and Campbell ⚫ ⚫ Epiplocylis undella(Ostenfeld and Schmidt) Jörgensen ⚫ ⚫ Rabdonella spiralis (Fol) Brandt ⚫ Stenosemella ventricosa(Claparède and Lachmann) Jörgensen ⚫ ⚫ Strobilidium sp. ⚫ ⚫ Strombidium cf. acutum (Leegaard) Kahl ⚫ ⚫ Strombidium conicum (Lohman) Wulff ○ ⚫ Tintinnopsis beroideaStein ⚫ Tintinnopsis butschliiKofoid and Campbell ⚫ Tintinnopsis campanulaEhrenberg ⚫ Tintinnopsis cylindricaDaday ⚫ ⚫ Tintinnopsis radix(Imhof) ⚫ Undella claparedei(Entz) Daday ⚫ Rotifers Brachionus plicatilis Müller ⚫ Synchaetasp. spiny type ⚫ ○ Synchaetasp. rough type ⚫ Synchaetasp. mucous type ⚫ Crustaceans Cladocerans Penilia avirostris Dana ⚫ ⚫ Crustaceans Copepods Acartia clausi/margalefi ⚫ Acartiasp. 1 ⚫ ○ ⚫ ○ Centropagessp. ⚫ ⚫ ○ Paracartia latisetosa (Krizcaguin) ⚫ ○ ⚫ Unidentified Cyst type 1 ⚫ Resting Egg 1 ⚫ Resting Egg 9 ⚫ ⚫: cysts observed as full (i.e., probably viable). ○: cysts observed as germinated (i.e., empty).Moreover, analysis of the empty forms found among the 20–125μm fraction led to the recognition of 11 morphotypes, all dinoflagellates.A total of 36 cyst types were identified astaxa missing from the plankton list of the same period (January 2008; [15]). Partly due to nomenclature problems, uncertainty of identification, and differences in examined periods, it was possible to ascertain the contemporaneous presence of species in both pelagic and benthic compartments only in a very few cases.Identification was frequently impossible due to the presence of previously unreported resting stage morphologies. 
In such cases, germination experiments allowed the cysts to be attributed to a high-level taxon at least, as with a Strombidium (Ciliophora) cyst, whose morphology is reported here for the first time (see Figure 2).

Figure 2 Photographs of a Ciliophora cyst with two opposite papulae (a); its empty shell (hatching occurs from one of the two papulae) (b); the germinated active stage, a Strombidium ciliate (c).

## 3.2. Surface Sediments

The analysis of surface sediments (the 1st cm of the cores), that is, those most affected by cyst deposition and resuspension/germination, revealed sharp differences between the 2 analysed stations. In total, 36 different resting stage morphotypes were observed in this first layer (Table 2): 23 produced by dinoflagellates, 6 by ciliates, 2 by rotifers, 4 by crustaceans, and 1 undetermined. Even considering the small amount of available data, station 40 showed higher biodiversity, in terms of both number of taxa and diversity indexes (see Table 3). Total abundances were comparable, however, with 389 ± 127 cysts g−1 (average ± s.d.) at station 40 versus 329 ± 123 cysts g−1 at station 45. SIMPER showed 58% dissimilarity between the assemblages of the two sites (Table 4).

Table 2 Abundance (cysts g−1 dw) of probably viable resting stages (cysts) observed in surface sediments of two stations in Bay of Vlorë (Albania). Values from three replicates are reported.

| Taxon | 40a | 40b | 40c | 45a | 45b | 45c |
|---|---|---|---|---|---|---|
| Calciodinellum albatrosianum | 20.1 | 18.3 | 35.1 | 0.0 | 0.0 | 0.0 |
| Calciodinellum operosum | 0.0 | 0.0 | 11.7 | 0.0 | 0.0 | 0.0 |
| Gonyaulax group | 20.1 | 0.0 | 0.0 | 0.0 | 0.0 | 59.6 |
| Gymnodinium sp. 1 | 20.1 | 9.2 | 0.0 | 0.0 | 22.2 | 0.0 |
| Lingulodinium polyedrum | 40.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Melodomuncula berlinensis | 40.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Oblea rotunda | 0.0 | 0.0 | 11.7 | 0.0 | 0.0 | 0.0 |
| Pentapharsodinium dalei type 1 | 20.1 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Pentapharsodinium tyrrhenicum type 1 | 40.2 | 18.3 | 23.4 | 0.0 | 11.1 | 0.0 |
| Protoperidinium sp. 1 | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Protoperidinium sp. 5 | 0.0 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella ramonii | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Scrippsiella trochoidea rough type | 40.2 | 18.3 | 46.8 | 0.0 | 0.0 | 0.0 |
| Scrippsiella trochoidea smooth type | 0.0 | 9.2 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella trochoidea medium type | 181.0 | 73.3 | 105.4 | 173.1 | 111.1 | 238.4 |
| Scrippsiella trochoidea small type | 80.5 | 64.2 | 58.5 | 230.8 | 0.0 | 0.0 |
| Scrippsiella sp. 1 | 20.1 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Scrippsiella sp. 4 | 0.0 | 9.2 | 0.0 | 0.0 | 0.0 | 0.0 |
| Thoracosphaera sp. 1 | 0.0 | 0.0 | 11.7 | 0.0 | 11.1 | 0.0 |
| Dinophyta sp. 2 | 0.0 | 0.0 | 23.4 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 17 | 0.0 | 18.3 | 0.0 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 26 | 0.0 | 18.3 | 0.0 | 0.0 | 0.0 | 0.0 |
| Dinophyta sp. 33 | 0.0 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Codonellopsis schabii | 1.0 | 0.0 | 0.9 | 0.6 | 0.3 | 0.5 |
| Stenosemella ventricosa | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Strobilidium sp. | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Strombidium acutum | 0.0 | 0.0 | 0.0 | 0.0 | 11.1 | 0.0 |
| Tintinnopsis cylindrica | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.1 |
| Undella claparedei | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Brachionus plicatilis | 0.2 | 0.0 | 0.1 | 0.3 | 0.0 | 0.1 |
| Synchaeta sp. spiny type | 0.3 | 0.2 | 0.0 | 0.2 | 0.0 | 0.1 |
| Penilia avirostris | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Acartia clausi/margalefi | 1.0 | 0.3 | 0.7 | 1.5 | 0.3 | 0.8 |
| Acartia sp. 1 | 0.1 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| Centropages sp. | 0.3 | 0.2 | 0.0 | 0.2 | 0.1 | 0.2 |
| Cyst type 1 | 0.0 | 0.0 | 0.0 | 57.7 | 0.0 | 0.0 |

Table 3 Abundance and diversity indices calculated for resting stages in surface sediments at the two stations investigated in Bay of Vlorë.

| | Abundance (cysts g−1 dw) | Total density (cysts g−1 dw) | S | d | H′ | J′ |
|---|---|---|---|---|---|---|
| Station 40 | 389 ± 127 | 1167 | 18 ± 2.7 | 2.9 ± 0.3 | 2.2 ± 0.1 | 0.7 ± 0.1 |
| Station 45 | 329 ± 123 | 987 | 11 ± 4.4 | 1.8 ± 0.9 | 0.5 ± 0.2 | 0.5 ± 0.2 |

Abundance: average ± standard deviation from three replicates. Total density: sum of cyst abundances observed in three replicates from each station. S: number of taxa identified (average ± standard deviation). d: Margalef diversity index. H′: Shannon diversity index. J′: Pielou’s evenness index.

Table 4 Results of SIMPER analysis for resting stages from surface sediments at stations 40 and 45 in Bay of Vlorë.

| Taxa | Av. Abund | Av. Sim | Sim/SD | Contrib% | Cum.% |
|---|---|---|---|---|---|
| **Station 40** (average similarity: 56.81) | | | | | |
| Scrippsiella trochoidea medium type | 119.92 | 21.61 | 7.45 | 38.05 | 38.05 |
| Scrippsiella trochoidea small type | 67.73 | 15.81 | 6.14 | 27.83 | 65.87 |
| Scrippsiella trochoidea rough type | 35.13 | 6.44 | 2.78 | 11.34 | 77.21 |
| Pentapharsodinium tyrrhenicum type 1 | 27.33 | 5.18 | 8.96 | 9.13 | 86.34 |
| Calciodinellum albatrosianum | 24.53 | 4.94 | 7.24 | 8.69 | 95.03 |
| **Station 45** (average similarity: 40.37) | | | | | |
| Scrippsiella trochoidea medium type | 174.21 | 40.03 | 5.87 | 99.16 | 99.16 |

Stations 40 and 45: average dissimilarity = 58.20.

The most abundant cyst morphotypes in the surface layers were calcareous cysts produced by species of the family Calciodinellaceae (Dinophyta). At station 40, five cyst morphotypes of this family accounted for 95% of total abundance, while at station 45, 99% was accounted for by just one cyst morphotype, Scrippsiella trochoidea medium type, confirming the lower evenness at this station.

The nMDS ordination (Figure 3, stress = 0), with the hierarchical cluster superimposed with a cutoff at 60% similarity, clearly reflects the separation between the samples from stations 40 and 45. Among these, due to its higher diversity, sample 45b is farther from samples 45a and 45c than it is from the samples of station 40.

Figure 3 nMDS plot of surface sediment samples collected at stations 40 and 45 in Bay of Vlorë. Hierarchical clustering superimposed with cutoff at 60% similarity.

## 3.3. Whole Sediment Cores

At both the investigated stations, a general decrease in total abundances was observed with depth along the sediment columns. At station 40, higher total abundance and diversity values than at station 45 were registered (Figure 4), with a sharp decline between the 6th and 7th centimetres. Beyond this depth, total abundance remained below 100 cysts 100 g−1. In terms of species, Codonellopsis schabii cysts and Synchaeta sp. and Acartia clausi/margalefi resting eggs were continuously observed along the whole sediment column at both stations. The ciliate C.
schabii was by far the most abundant taxon at station 40 (43% of total abundance), with density highest in the 2nd cm (342 ± 192 cysts 100 g−1); as with total abundance, a sharp decrease was observed between the 6th and 7th centimetres. Other important species were the copepods Centropages sp. (181 ± 50 resting eggs 100 g−1 at the 4th cm) and Acartia spp. (67 ± 32 resting eggs 100 g−1 at the 1st cm). At station 45 the most abundant type was Acartia spp. (86 ± 57 resting eggs 100 g−1 at the 1st cm), followed by C. schabii (70 ± 60 cysts 100 g−1 at the 4th cm) and Synchaeta sp. (41 ± 23 resting eggs 100 g−1 at the 5th cm).

Figure 4 Resting stage abundance (average ± standard deviation) and Shannon index (H′) values recorded for each cm layer of sediment cores collected at the two investigated stations in Bay of Vlorë (Albania).

In the nMDS ordination (Figure 5, stress = 0.12), with superimposition of the hierarchical cluster with a cutoff at 70% similarity, all the samples from station 45 cluster together, while the samples from station 40 were widely dispersed, a sign of greater variability at this site.

Figure 5 nMDS plot of samples from each cm of sediment cores collected at stations 40 and 45 in Bay of Vlorë. Hierarchical clustering superimposed with cutoff at 70% similarity.

The assemblage structure of the two stations differed significantly at all layers (ANOSIM R = 0.655; P = 0.001), showing 59% dissimilarity (SIMPER, Table 5).

Table 5 Results of SIMPER analysis for resting stages in sediment cores collected at stations 40 and 45 in Bay of Vlorë.

| Taxa | Av. Abund | Av. Sim | Sim/SD | Contrib% | Cum.% |
|---|---|---|---|---|---|
| **Station 40** (average similarity: 44.16) | | | | | |
| Centropages sp. | 1.77 | 8.77 | 0.86 | 19.86 | 19.86 |
| Codonellopsis schabii | 2.10 | 7.00 | 1.31 | 15.86 | 35.72 |
| Acartia clausi/margalefi | 1.56 | 6.21 | 1.22 | 14.06 | 49.78 |
| Synchaeta sp. spiny type | 1.47 | 5.31 | 1.10 | 12.02 | 61.81 |
| Penilia avirostris | 1.14 | 4.47 | 0.98 | 10.13 | 71.93 |
| Brachionus plicatilis | 0.96 | 2.90 | 0.81 | 6.56 | 78.49 |
| Stenosemella ventricosa | 0.78 | 1.47 | 0.55 | 3.32 | 81.81 |
| Strobilidium sp. | 0.73 | 1.43 | 0.51 | 3.23 | 85.04 |
| Scrippsiella spp. | 0.57 | 0.95 | 0.36 | 2.14 | 87.18 |
| Gonyaulax spp. | 0.54 | 0.71 | 0.34 | 1.60 | 88.79 |
| Strombidium conicum | 0.44 | 0.69 | 0.33 | 1.57 | 90.36 |
| **Station 45** (average similarity: 52.61) | | | | | |
| Acartia clausi/margalefi | 1.88 | 12.13 | 2.08 | 23.05 | 23.05 |
| Synchaeta sp. spiny type | 1.73 | 10.91 | 1.59 | 20.74 | 43.79 |
| Codonellopsis schabii | 1.43 | 6.73 | 1.12 | 12.80 | 56.59 |
| Strobilidium sp. | 1.12 | 6.05 | 0.92 | 11.51 | 68.10 |
| Centropages sp. | 1.20 | 6.05 | 1.02 | 11.50 | 79.60 |
| Brachionus plicatilis | 0.84 | 2.73 | 0.63 | 5.19 | 84.79 |
| Acartia sp. 1 | 0.75 | 2.57 | 0.58 | 4.89 | 89.67 |
| Lingulodinium polyedrum | 0.74 | 2.39 | 0.59 | 4.54 | 94.22 |

Groups 40 and 45: average dissimilarity = 58.63.

## 3.4. Germination Experiments

All putatively viable (i.e., full) protistan cyst types observed were isolated and incubated under controlled conditions to obtain germination. Successful germination generally allowed us to confirm the cyst-based identification, but in some cases it enabled us to go beyond this and discriminate between cysts sharing similar morphology. For example, Alexandrium minutum and Scrippsiella sp. 1 both have a round cyst with a thin and smooth wall with mucous material attached, Protoperidinium thorianum and Protoperidinium sp. 1 cysts are both round-brown and smooth, and Gymnodinium nolleri and Scrippsiella sp. 4 both produce round-brown cysts with a red spot inside. The germination of all these cyst types allowed us to correctly identify these species.

Cysts ascribed to the paleontological taxa Bicarinellum tricarinelloides and Calciperidinium asymmetricum both germinated, thus confirming that they belong to modern taxa. The active stages obtained were tentatively identified as scrippsielloid dinoflagellates.

An unknown ciliate cyst, with a papula at both extremities, produced an active stage identifiable as belonging to the Strombidium genus (Figure 2).

## 4. Discussion

The total number of resting stage morphotypes recognized in the present study is particularly high compared with other studies in the Mediterranean.
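The diversity indices reported in Table 3 follow standard definitions: S is the number of taxa, Margalef's d = (S − 1)/ln N, Shannon's H′ = −Σ pᵢ ln pᵢ, and Pielou's J′ = H′/ln S. As a minimal illustration (not part of the original analysis; the example abundance vector is hypothetical), they can be computed from a single replicate's per-taxon counts as follows:

```python
import math

def diversity_indices(abundances: list) -> dict:
    """Compute S, Margalef d, Shannon H' and Pielou J' from per-taxon abundances."""
    counts = [a for a in abundances if a > 0]   # drop absent taxa
    n = sum(counts)                             # total abundance N
    s = len(counts)                             # number of taxa S
    h = -sum((a / n) * math.log(a / n) for a in counts)  # Shannon H' (natural log)
    return {
        "S": s,
        "d": (s - 1) / math.log(n),             # Margalef richness
        "H'": h,
        "J'": h / math.log(s) if s > 1 else 0.0  # Pielou evenness
    }

# Hypothetical replicate dominated by one taxon (cysts g-1), as at station 45:
# evenness J' comes out low, mirroring the pattern reported in Table 3.
print(diversity_indices([173.1, 57.7, 1.5, 0.6, 0.3, 0.2]))
```

A perfectly even assemblage (all taxa equally abundant) gives J′ = 1 and H′ = ln S, which is a quick sanity check on the implementation.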
None of these studies gave a number higher than the one reported here, despite being based on a larger geographical area (the whole North Adriatic, in [24]) or a higher number of samples (157 sediment samples in [5]). This richness could be due to our enhanced ability, with the passage of time, to identify cysts from different species, but it could also depend on the consideration of different depths in the sediments. Indeed, the other mentioned studies only reported cysts from the sediment surface, while in the present case the type list grew by more than 60% when below-surface layers were considered.

As a consequence of its richness, the reported list adds 42 morphotypes to the Albanian list and 13 alternative morphotypes to already known taxa. This fact clearly demonstrates that the description of cyst assemblages in coastal Mediterranean areas is still far from being exhaustive.

The discovery of differences in the benthic species assemblage with respect to the plankton is partially due to the use in cyst studies of a terminology derived from paleontological studies which has yet to be standardised with reference to modern terminology. However, it is evident that the active stages in the water column assemblage of the Bay of Vlorë [15] differ in number and quality from those of the bottom sediments reported in the present study. By way of example, and only considering the surface sediment layer (i.e., the most affected by recent sinking and/or resuspension), 4 different species of Scrippsiella (Dinophyta) were isolated as cysts, but only 2 were reported [15] as active stages in the water column for the whole bay. Moreover, in this study 5 different cyst types for the single species S. trochoidea were identified, differing in terms of size and wall. This is evidence of great intraspecific diversity, but it could also be a sign of the presence of cryptic species, as discussed by Montresor et al.
[25], differing in cyst morphology but not in that of the swimming stage.

The rotifer Synchaeta sp. was not found in the water column, but its resting eggs were easily recognizable and abundant in the sediments. While the case of S. trochoidea confirms that much remains to be discovered about the morphological variability of cysts produced by the same species (see [26] for Dinophyta or [27] for Calanoida), Synchaeta sp. is a clear case of a species not detected in the active plankton assemblage but waiting in the sediments for a favourable moment to rejoin the water column.

Also worthy of attention is the observation of a Ciliophora cyst with two papulae on opposite sides (Figure 2), which has never been reported before.

A study of plankton composition was carried out at the same site during the same scientific cruise (January 2008) as the present study [15]. In January 2008, the phytoplankton and the microzooplankton included a total of 178 categories. Considering only the main cyst producers (dinoflagellates and ciliates), examination of the water column at 16 stations gave a total of 76 taxa (48 dinoflagellates, 28 ciliates). The present analysis of sediments, from just 2 stations, gave a total of 75 taxa. This striking similarity of values was not, however, reflected in the taxonomic composition of the 2 compartments. Indeed, 36 cyst types were identified as taxa not present in the plankton list for the same period (January 2008). This number would be even higher if we considered only plankton from stations close to the two used here for the sediments.

It was not possible to correlate cyst abundance along the sediment column with age of deposition, which would require dating of the sediment layers. In any case, our results showed that the total abundance of cysts in the upper layers was up to 10 times greater than in lower ones.
The sharp decrease in abundance below the 5th cm of depth, at least at station 40, does, however, suggest that an event occurred at a certain moment in the history of the plankton in the Bay of Vlorë, a suggestion that clearly requires further study. Indeed, due to its position, station 40 is a candidate for studies of the history of cyst production (and their arrival in the sediment). Located in a depression on the seabed, the depth of St. 40 (−54 m) probably favours the sedimentation of fine particles and the depletion of oxygen content, and the deposition and accumulation of sinking resting stages can thus be considered undisturbed. In addition, the observed fall in diversity from lower to upper layers could be correlated with the growth of cultural eutrophication (i.e., urban development), as proposed for Tokyo Bay and Daya Bay [28].

The situation at St. 45 (depth 28 m) is not completely identical. It is near the slope of a detritus cone where materials from the river Vjosa accumulate, and marine currents possibly act at a different rate from those acting on St. 40.

Incubation of encysted forms under controlled conditions to obtain germination is a useful tool for confirming the identification made by observation of the cyst. In some cases, different species produce very similar cysts, especially when the morphology is very simple, that is, spherical, without processes or wall structures. In the present study, we observed many Dinophyta cysts with the same basic morphology, that is, a round body and smooth brown wall with no apparent signs of paratabulation, spines, or processes. Their germination allowed us to classify this basic type into at least 6 species. Round brown cysts are typical of Protoperidinium species [29, 30], but we also recognized Diplopsalis lenticula, Gymnodinium nolleri, and Oblea rotunda, as well as 3 additional Protoperidinium species.
In the same way, it was possible to distinguish between Alexandrium minutum and Scrippsiella sp. 1, whose cysts are very similar and whose distinctive features are recognizable only after germination.

Conversely, analysis of cysts may allow us to identify species whose active stages are indistinguishable, at least by optical microscope. This is the case in the present study for the Scrippsiella group, which produces active cells that are very difficult to distinguish, although their cysts differ in terms of the type of calcareous covering, colour, and the presence of spines [31, 32].

Worthy of special attention here is the recovery during the present study of Dinophyta cysts whose active stages have yet to be identified. As cysts, they are still classified with a paleontological name in accordance with their description from Pleistocene to Pliocene sediment strata in the Mediterranean [33]. Two of these cyst types (Bicarinellum tricarinelloides and Calciperidinium asymmetricum) were successfully germinated, producing motile forms recognisable as belonging to the family Calciodinellaceae. In any case, their frequent observation in surface sediments in other Mediterranean areas [23, 34] and in sediment traps [35] is a clear sign that these species are present in the water column today and need to be better identified.

---

*Source: 101682-2013-06-20.xml*
2013
# Vitamin D-Regulated MicroRNAs: Are They Protective Factors against Dengue Virus Infection?

**Authors:** John F. Arboleda; Silvio Urcuqui-Inchima
**Journal:** Advances in Virology (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1016840

---

## Abstract

Over the last few years, an increasing body of evidence has highlighted the critical participation of vitamin D in the regulation of proinflammatory responses and protection against many infectious pathogens, including viruses. The activity of vitamin D is associated with microRNAs, which are fine tuners of immune activation pathways and provide novel mechanisms to avoid the damage that arises from excessive inflammatory responses. Severe symptoms of an ongoing dengue virus infection and disease are strongly related to highly altered production of proinflammatory mediators, suggesting impairment in homeostatic mechanisms that control the host’s immune response. Here, we discuss the possible implications of emerging studies anticipating the biological effects of vitamin D and microRNAs during the inflammatory response, and we attempt to extrapolate these findings to dengue virus infection and to their potential use for disease management strategies.

---

## Body

## 1. Introduction

Activation of innate immune cells results in the release of proinflammatory mediators to initiate a protective local response against invading pathogens [1]. However, overactivated inflammatory activity could be detrimental since it can cause tissue damage and even death of the host. Therefore, negative feedback mechanisms are required to control the duration and intensity of the inflammatory response [1, 2]. Although little is known about the molecular mechanisms occurring during dengue virus (DENV) infection/disease, it has been suggested that the immune response initiated against the virus greatly contributes to pathogenesis.
Indeed, several symptoms of the disease are tightly related to imbalanced immune responses, particularly to high production of proinflammatory cytokines [3, 4], suggesting an impairment of homeostatic mechanisms that control inflammation. Interestingly, vitamin D has been described as an important modulator of immune responses to several pathogens and as a key factor enhancing immunoregulatory mechanisms that avoid the damage that arises from excessive inflammatory responses [5, 6], as in dengue disease [7]. Mounting evidence obtained from human populations and experimental in vitro studies has suggested that this hormone can play a key role in the immune system’s response to several viruses [8–14], thereby becoming a potential target of intervention to combat DENV infection and disease progression. Among several mechanisms, vitamin D activity has been associated with the expression of certain microRNAs (miRs) [15], which are one of the main regulatory switches operating at the translational level [16]. miRs constitute approximately 1% of the human genome and their sequences can be found within introns of other genes or can be encoded independently and transcribed in a similar fashion to mRNAs encoded by protein-coding genes [16]. A typical mature miR of 18–23 nucleotides associates with the RNA-induced silencing complex (RISC) and moves towards the target mRNA [17]. Once there, the miR binds to the complementary sequence in the 3′ untranslated region (3′UTR) of the mRNA, thereby inducing gene silencing through mRNA cleavage, translational repression, or deadenylation [16]. A single miR may directly regulate the expression of hundreds of mRNAs at once, and several miRs can also target the same mRNA, resulting in enhanced translation inhibition [18]. Targeting of specific genes involved in modulation of immune response pathways by miRs provides a finely tuned regulatory mechanism for the restoration of the host’s resting inflammation state [19–21].
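The targeting mechanism described above, a miR recognizing a complementary stretch in a 3′UTR, is often modelled computationally by scanning the UTR for the reverse complement of the miR "seed" (nucleotides 2–8 of the mature sequence). A minimal sketch of that scan (the sequences below are made up for illustration; real target prediction also weighs pairing context and conservation):

```python
# Illustrative seed-match scan: find positions in a 3'UTR complementary
# to the miR seed region (nucleotides 2-8 of the mature miR).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA string."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def seed_match_sites(mir: str, utr3: str) -> list:
    """0-based start positions in the 3'UTR where the reverse
    complement of the miR seed (positions 2-8) occurs."""
    seed = mir[1:8]                    # nucleotides 2-8 of the mature miR
    target = reverse_complement(seed)  # motif expected in the 3'UTR
    return [i for i in range(len(utr3) - len(target) + 1)
            if utr3[i:i + len(target)] == target]

# Hypothetical 22-nt mature miR and 3'UTR fragment:
mir = "UAGCUUAUCAGACUGAUGUUGA"
utr = "AAAGCUAUAAGCUACCCUAUGCUA"
print(seed_match_sites(mir, utr))  # prints [6]: one 7-mer seed match
```

Multiple hits from several miRs on the same UTR would correspond to the enhanced translational inhibition the text mentions.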
Since the association between vitamin D and miR activity may play a relevant role in ongoing DENV infections, here we provide an overview of DENV-induced inflammatory responses and the early evidence anticipating a possible participation of the vitamin D and miR interplay in regulating antiviral and inflammatory responses during DENV infection/disease.

## 2. DENV and the Immune Response

DENV is an icosahedral-enveloped virus with a positive-sense single-stranded RNA (ssRNA) genome that belongs to the family Flaviviridae, genus Flavivirus. There are four phylogenetically related but antigenically distinct viral serotypes (DENV 1–4) able to cause the full spectrum of the disease [22]. In addition, a sylvatic serotype (DENV-5), with no evidence regarding its ability to infect humans, has been recently reported [23]. DENV is transmitted by Aedes mosquitoes in tropical and subtropical areas where the disease has become a major public health threat and one of the most rapidly spreading vector-borne diseases in the world, with a 30-fold increase in incidence over the past 50 years [24, 25]. An estimated 3.6 billion people live in high-risk areas worldwide and it is estimated that over 390 million infections occur every year, of which 96 million manifest as dengue fever [26–28]. Although only a minor number of cases may progress to the severe forms of the disease, 21,000 deaths are reported annually [27]. Guidelines of the World Health Organization (WHO) recognize dengue as a clinical continuum from dengue fever (DF), a nonspecific febrile illness, to dengue with or without warning signs that can progress to dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS) [3]. These severe forms of the disease are characterized by a wide spectrum of symptoms, including the development of vascular permeability, plasma leakage, thrombocytopenia, focal or generalized hemorrhages, and tissue and/or organ damage that may lead to shock and death [29, 30].
Besides ecoepidemiology, host genetic variations, and virus virulence, the risk of severe disease is increased mainly by secondary infections with different dengue serotypes, presumably through a mechanism known as antibody-dependent enhancement (ADE), whereby nonneutralizing antibodies from previous heterotypic infections enhance virus entry via receptors for immunoglobulins or Fc receptors (FcRs) [29, 31, 32].

Skin is the first barrier for the invading DENV and the site where innate immunity exerts the first line of defense [33]. Following the bite by an infected mosquito, local tissue-resident dendritic cells (DCs) and macrophages are the main targets of the virus [34, 35]. The viral structural E protein binds to cellular receptors, such as DC-SIGN (Dendritic Cell-Specific Intercellular adhesion molecule-3-Grabbing Nonintegrin), CLEC5A (C-type lectin domain family 5, member A), and MR (mannose receptor), allowing internalization of the virus through receptor-mediated endocytosis [22, 36–38]. Once in the cytoplasm, DENV replication products, such as double-stranded RNA (dsRNA) or genomic ssRNA, are sensed by several pattern recognition receptors (PRRs) (Figure 1), including TLR3, TLR7, TLR8, and the cytosolic receptors RIG-I (Retinoic acid Inducible Gene-1) and MDA-5 (Melanoma Differentiation-Associated protein 5) [39–43]. Subsequently, this subset of PRRs triggers the activation of intracellular pathways, leading to the activation of transcription factors such as interferon regulatory factors 3 and 7 (IRF3 and IRF7) and the Nuclear Factor κB (NF-κB), and the later production of type I interferons and proinflammatory cytokines promoting an antiviral response [44, 45].
Additionally, the local activation of natural killer (NK) cells, neutrophils, and mast cells by the presence of the virus induces more proinflammatory mediators, complement activation, and the commitment of cellular and humoral immune responses to clear and control viral infection [46].

Figure 1 Potential link between vitamin D and miR controlling the DENV-induced inflammatory response and antiviral activity. (1) DENV replication products and proteins are recognized by several PRRs whose signaling pathways promote the proinflammatory response. (2) Vitamin D activity induces transcription of microRNAs and other target genes that play a critical role in the control of inflammation-related signaling pathways and antiviral activity.

### 2.1. Inflammation and Cytokine Storm

Although the immune response is critical to combat and overcome invading pathogens, it is believed that the immune response greatly contributes to progression of dengue disease [31]. The pathogenesis and progression to the severe forms of dengue are still not completely understood; however, most cases are characterized by bleeding, hemorrhage, and plasma leakage that can progress to shock or organ failure [87, 88]. These physiological events are preceded by a hyperpermeability syndrome caused mainly by an imbalance between proinflammatory and anti-inflammatory cytokines produced in response to virus infection. The predominant proinflammatory mediators, or “cytokine storm,” secreted mainly by T cells, monocytes/macrophages, and endothelial cells (Table 1), promote endothelial dysfunction by generating an endothelial “sieve” effect that leads to fluid and protein leakage. Increasing evidence suggests that endothelial integrity and vascular permeability are affected by proinflammatory cytokines through the induction of apoptosis and the modulation of tight junction molecules within endothelial cells [47, 52, 89, 90].
In addition, it has also been reported that these cytokines may often have synergistic effects and may induce expression of other cytokines, generating a positive feedback mechanism leading to further imbalanced levels of inflammatory mediators and higher permeability [4].

Table 1 Summary of the main cytokines associated with development of DHF/DSS and their biological function in relation to pathogenesis.

| Cytokine | Biological function | Refs. |
|---|---|---|
| MCP-1 | Monocyte chemoattractant protein-1 is critical to drive the extravasation of mononuclear cells into the inflamed, infected, and traumatized sites of infection. In addition, it promotes endothelial permeability, increasing the vascular leakage as a result of dengue virus infection. | [47, 48] |
| IL-1 | It induces tissue factor (TF) expression of endothelial cells (ECs) and suppresses their cell surface anticoagulant activity. It may upregulate TNF-α production and activity. IL-1β mediates platelet-induced activation of ECs, which increases chemokine release and upregulates VCAM-1, enhancing adhesion of monocytes to the endothelium. | [43, 49] |
| IL-6 | It has been described as a strong inducer of endothelial permeability resulting in vascular leakage. IL-6 potentiates the coagulation cascade and can downregulate production of TNF-α and its receptors. IL-6 may perform a synergistic role with some pyrogens such as IL-1 to induce fever. | [50, 51] |
| IL-8 | Its systemic concentrations are increased by EC damage, which in turn induces endothelial permeability. Activation of the coagulation system results in increased expression of IL-6 and IL-8 by monocytes, while the APC anticoagulation pathway downregulates the production of IL-8 by ECs. | [49, 50, 52] |
| IL-10 | It plays an immunosuppressive role that causes IFN resistance, followed by impaired immune clearance and a persistent infectious effect for acute viral infection. IL-10 also inhibits the expression of TF and inhibits fibrinolysis. IL-10 plasma levels have been associated with disease severity; however, its role in dengue pathogenesis has not been fully elucidated. | [53] |
| TNF-α | It is a potent activator of ECs; it enhances capillary permeability. TNF-α upregulates expression of TF in monocytes and ECs and downregulates expression of thrombomodulin on ECs. It also activates the fibrinolytic system and enhances expression of NO, mediating activation-induced death of T cells, and it has therefore been implicated in peripheral T-cell deletion. | [49, 51, 54] |
| TGF-β | Early in infection, low levels of TGF-β may trigger secretion of IL-1 and TNF-β. However, later in infection, the cytokine inhibits the Th1 response and enhances production of Th2 cytokines such as IL-10. TGF-β increases expression of TF on ECs and upregulates expression and release of PAI-1 (plasminogen activator inhibitor-1). | [3] |
| VEGF | VEGF is a key driver of vascular permeability. It reduces EC occludins, claudins, and the VE-cadherin content, all of which are components of EC junctions. Upon activation, VEGF stimulates expression of ICAM-1, VCAM-1, and E-selectin in ECs. | [3, 36] |

This oversustained inflammatory response may be due to an impairment of the regulatory mechanisms that control the duration and intensity of inflammation or cytokine production, especially through the regulation of PRR signaling activation [20]. Several studies have shown that alterations in proinflammatory cytokine production during DENV infection/disease can be attributed to variations in recognition and activation of TLR signaling, which contributes to progression of the disease (Figure 1) [91, 92]. It was recently reported that DENV NS1 proteins may be recognized by TLR2, TLR4, and TLR6, enhancing the production of proinflammatory cytokines and triggering the endothelial permeability that leads to vascular leakage [93, 94].
Interestingly, our group has recently shown a differential expression of TLRs in dendritic cells (DCs) of dengue patients depending on the severity of the disease [95]. Indeed, there was an increased expression of TLR3 and TLR9 in DCs of patients with DF, in contrast to a poor stimulation of both receptors in DCs of patients with DHF. Conversely, a lower expression of TLR2 in DF patients compared to DHF patients was also observed. Additionally, IFN-α production was also altered via TLR9, suggesting that DENV may affect the type I IFN response through this signaling pathway [95]. Indeed, DENV has successfully evolved to overcome host immune responses, by efficiently subverting the IFN pathway and inhibiting different steps of the immune response through the expression of viral nonstructural proteins that antagonize several molecules of this activation pathway [96, 97]. Although DENV may evade immune recognition [42], cumulative data have shown that it is sensed by both TLR3 and TLR7/8 and activates signaling pathways upregulating IFN-α/β, TNF-α, human defensin 5 (HD5), and human β defensin 2 (HβD2) [39–41]. In addition, RIG-I and MDA-5 are also activated upon DENV infection and are essential for host defense against the virus [40]. Moreover, TLR3 controls DENV2 replication through NF-κB activation, suggesting that TLR3 agonists such as Poly(I:C) (polyinosinic:polycytidylic acid) might work as immunomodulators of DENV infection [39]. Furthermore, besides DENV recognition and binding, C-type lectins such as the mannose receptor (MR) and CLEC5A may contribute to the inflammatory responses [98–100].
CLEC5A plays a critical role in the induction of NLRP3 inflammasome activation during DENV infection and enhances the release of IL-18 and IL-1β that are critical for activation of Th17 helper cells [99, 101].While innate immune activation and proinflammatory cytokine production are being investigated during the course of DENV infections [53, 92, 102], vitamin D activity has gained special attention due to its importance in the modulation of the innate response. An increasing number of reports suggest that vitamin D activity is associated with the modulation of components implicated in antiviral immune responses and in the regulation of proinflammatory cytokine production through the modulation of miR expression [6, 13, 15, 103]. Although there is little information from observational studies and clinical trials demonstrating the role of vitamin D during dengue virus infection, here we postulate a potential role of vitamin D controlling progression of dengue disease and provide evidence of some vitamin D molecular mechanisms in support of our hypothesis. ## 2.1. Inflammation and Cytokine Storm Although the immune response is critical to combat and overcome invading pathogens, it is believed that the immune response greatly contributes to progression of dengue disease [31]. The pathogenesis and progression to the severe forms of dengue are still not completely understood; however, most cases are characterized by bleeding, hemorrhage, and plasma leakage that can progress to shock or organ failure [87, 88]. These physiological events are preceded by a hyperpermeability syndrome caused mainly by an imbalance between proinflammatory and anti-inflammatory cytokines produced in response to virus infection. The predominant proinflammatory mediators or “cytokine storm,” secreted mainly by T cells, monocytes/macrophages, and endothelial cells (Table 1), promotes endothelial dysfunction by generating an endothelial “sieve” effect that leads to fluid and protein leakage. 
Increasing evidence suggests that endothelial integrity and vascular permeability are affected by proinflammatory cytokines through the induction of apoptosis and the modulation of tight junction molecules within endothelial cells [47, 52, 89, 90]. In addition, it has also been reported that these cytokines may often have synergistic effects and may induce expression of other cytokines, generating a positive feedback mechanism that leads to further imbalanced levels of inflammatory mediators and higher permeability [4].

Table 1. Summary of the main cytokines associated with development of DHF/DSS and their biological function in relation to pathogenesis.

| Cytokine | Biological function | Refs. |
| --- | --- | --- |
| MCP-1 | Monocyte chemoattractant protein-1 is critical to drive the extravasation of mononuclear cells into inflamed, infected, and traumatized sites. It also promotes endothelial permeability, increasing vascular leakage as a result of dengue virus infection. | [47, 48] |
| IL-1 | Induces tissue factor (TF) expression in endothelial cells (ECs) and suppresses their cell-surface anticoagulant activity. May upregulate TNF-α production and activity. IL-1β mediates platelet-induced activation of ECs, which increases chemokine release and upregulates VCAM-1, enhancing adhesion of monocytes to the endothelium. | [43, 49] |
| IL-6 | A strong inducer of endothelial permeability resulting in vascular leakage. Potentiates the coagulation cascade and can downregulate production of TNF-α and its receptors. May act synergistically with pyrogens such as IL-1 to induce fever. | [50, 51] |
| IL-8 | Systemic concentrations are increased by EC damage, which in turn induces endothelial permeability. Activation of the coagulation system increases expression of IL-6 and IL-8 by monocytes, while the APC anticoagulation pathway downregulates IL-8 production by ECs. | [49, 50, 52] |
| IL-10 | Plays an immunosuppressive role that causes IFN resistance, followed by impaired immune clearance and persistent infection during acute viral infection. Also inhibits TF expression and fibrinolysis. Plasma levels have been associated with disease severity, although its role in dengue pathogenesis has not been fully elucidated. | [53] |
| TNF-α | A potent activator of ECs that enhances capillary permeability. Upregulates TF expression in monocytes and ECs and downregulates thrombomodulin expression on ECs. Also activates the fibrinolytic system and enhances NO expression mediating activation-induced death of T cells; it has therefore been implicated in peripheral T-cell deletion. | [49, 51, 54] |
| TGF-β | Early in infection, low levels of TGF-β may trigger secretion of IL-1 and TNF-β; later in infection, the cytokine inhibits the Th1 response and enhances production of Th2 cytokines such as IL-10. Increases TF expression on ECs and upregulates expression and release of PAI-1 (plasminogen activator inhibitor-1). | [3] |
| VEGF | A key driver of vascular permeability. Reduces EC occludin, claudin, and VE-cadherin content, all components of EC junctions. Upon activation, stimulates expression of ICAM-1, VCAM-1, and E-selectin in ECs. | [3, 36] |

This oversustained inflammatory response may be due to an impairment of the regulatory mechanisms that control the duration and intensity of inflammation and cytokine production, especially through the regulation of PRR signaling activation [20]. Several studies have shown that alterations in proinflammatory cytokine production during DENV infection/disease can be attributed to variations in recognition and activation of TLR signaling, which contribute to progression of the disease (Figure 1) [91, 92].
It was recently reported that DENV NS1 proteins may be recognized by TLR2, TLR4, and TLR6, enhancing the production of proinflammatory cytokines and triggering the endothelial permeability that leads to vascular leakage [93, 94]. Interestingly, our group has recently shown a differential expression of TLRs in dendritic cells (DCs) of dengue patients depending on the severity of the disease [95]. Indeed, there was an increased expression of TLR3 and TLR9 in DCs of patients with DF in contrast to a poor stimulation of both receptors in DCs of patients with DHF. Conversely, a lower expression of TLR2 in DF patients compared to DHF patients was also observed. Additionally, IFN-α production was also altered via TLR9, suggesting that DENV may affect the type I IFN response through this signaling pathway [95]. Indeed, DENV has successfully evolved to overcome host immune responses by efficiently subverting the IFN pathway and inhibiting different steps of the immune response through the expression of viral nonstructural proteins that antagonize several molecules of this activation pathway [96, 97]. Although DENV may evade immune recognition [42], cumulative data have shown that it is sensed by both TLR3 and TLR7/8, activating signaling pathways that upregulate IFN-α/β, TNF-α, human defensin 5 (HD5), and human β defensin 2 (HβD2) [39–41]. In addition, RIG-I and MDA-5 are also activated upon DENV infection and are essential for host defense against the virus [40]. Moreover, TLR3 controls DENV-2 replication through NF-κB activation, suggesting that TLR3 agonists such as poly(I:C) (polyinosinic:polycytidylic acid) might work as immunomodulators of DENV infection [39]. Furthermore, beyond DENV recognition and binding, C-type lectins such as the mannose receptor (MR) and CLEC5A may contribute to the inflammatory responses [98–100].
CLEC5A plays a critical role in the induction of NLRP3 inflammasome activation during DENV infection and enhances the release of IL-18 and IL-1β that are critical for activation of Th17 helper cells [99, 101]. While innate immune activation and proinflammatory cytokine production are being investigated during the course of DENV infections [53, 92, 102], vitamin D activity has gained special attention due to its importance in the modulation of the innate response. An increasing number of reports suggest that vitamin D activity is associated with the modulation of components implicated in antiviral immune responses and in the regulation of proinflammatory cytokine production through the modulation of miR expression [6, 13, 15, 103]. Although there is little information from observational studies and clinical trials demonstrating the role of vitamin D during dengue virus infection, here we postulate a potential role of vitamin D in controlling progression of dengue disease and provide evidence of some vitamin D molecular mechanisms in support of our hypothesis.

## 3. Vitamin D: Antiviral and Anti-Inflammatory Activity

In addition to its well-known role in bone mineralization and calcium homeostasis, vitamin D is recognized as a pluripotent regulator of biological and immune functions [104]. A growing body of evidence suggests that it plays a major role during the immune system’s response to microbial infection, thereby becoming a potential intervener to control viral infections and inflammation [13, 105, 106]. The term vitamin D refers collectively to the active form 1α,25-dihydroxyvitamin D3 [1α,25(OH)2D3] and the inactive form 25-hydroxyvitamin D3 [25(OH)D3] [107]. For their transport within the serum, vitamin D compounds bind to the vitamin D binding protein (DBP), and this complex is recognized by megalin and cubilin (members of the low-density lipoprotein receptor family) that then internalize the complex by invagination [108].
Intracellular trafficking of vitamin D metabolites to specific destinations is performed by members of the HSP- (Heat Shock Protein-) 70 family [104]. In addition, vitamin D metabolites are lipophilic molecules that can easily penetrate cell membranes and translocate to the nucleus, where 1α,25(OH)2D3 binds to the vitamin D receptor (VDR), thereby inducing heterodimerization of VDR with an isoform of the retinoid X receptor (RXR) [109]. The VDR-RXR heterodimer binds to vitamin D response elements (VDRE) present in the promoters of hundreds of target genes, whose products play key roles in cellular metabolism, bone mineralization, cell growth, differentiation, and control of inflammation (Figure 1) [104, 110, 111]. Besides VDR, other related vitamin D metabolic components, such as the hydroxylase CYP27B1, the enzyme that catalyzes the synthesis of active 1α,25-dihydroxyvitamin D3 from 25-hydroxyvitamin D3, are present and induced in some cells of the immune system during immune responses [112]. Thus, an increasing number of studies have explored the relationship between vitamin D activity and the immune system, specifically the mechanisms whereby vitamin D exerts its antimicrobial and immunoregulatory activity [14, 113, 114]. Here, we highlight those modulating antiviral and inflammatory responses.

Although controversial data have been reported, increasing clinical and observational studies have provided evidence supporting the protective features of vitamin D in viral infections, especially viral respiratory infections and HIV [13, 115, 116]. The activity of vitamin D in the innate immune system begins at the forefront of the body’s defense against pathogens, the skin.
Regardless of global serum vitamin D levels, sensing of microbial pathogens via PRRs induces upregulation of CYP27B1 and, as a consequence, local conversion of 25(OH)D3 into 1,25(OH)2D3, enhancing VDR nuclear translocation and subsequent transcription of target genes to exert antimicrobial effects [113, 117–119]. This establishes a link between vitamin D status and the intracrine and paracrine modulation of cellular immune responses, in which VDR and CYP27B1 activity are of central importance [117, 118, 120]. Indeed, this link is also evidenced by studies in which pathogen susceptibility associated with vitamin D deficiency/insufficiency is reduced by adequate supplementation [121, 122]. Furthermore, some vitamin D-induced antiviral mechanisms have been shown in preliminary reports (Table 2). Peptides such as cathelicidins are strongly upregulated by 1,25(OH)2D3 because their genes contain VDR response elements. In humans, active cathelicidin is known as LL-37; it has a C-terminal cationic antimicrobial domain that can disrupt bacterial membranes and inhibit replication of herpes simplex virus, influenza virus, and retroviruses, among others [55–57]. In fact, very recent reports have suggested an association between vitamin D and the antiviral activity of LL-37 against HIV and rhinovirus [58, 59]. Likewise, HBD-2 is also induced by 1,25(OH)2D3. Interestingly, a correlation between VDR and HBD-2 was found to be associated with natural resistance to HIV infection, suggesting a potential participation of vitamin D in resistance to the virus [60, 106]. Moreover, vitamin D can also induce reactive oxygen species (ROS), which are associated with suppression of the replicative activity of some viruses, such as hepatitis C virus (HCV) [61].
Although the vitamin D-induced antiviral mechanisms are not fully elucidated and further studies are needed to understand their roles, many are plausible given the pleiotropic nature of vitamin D and the complex transcriptional modulation of hundreds of genes controlled by its activity.

Table 2. Vitamin D-induced mechanisms/mediators associated with antiviral activity.

| Mediator/mechanism | Virus | Refs. |
| --- | --- | --- |
| Cathelicidin (LL-37) | HSV, influenza virus, HIV, retroviruses | [55–59] |
| HBD-2 | HIV | [60] |
| ROS | HCV | [61] |
| IFN response | HIV, HCV | [62–64] |
| Autophagy | HIV | [65, 66] |
| miR let-7 | DENV | [67, 68] |

Several studies have reported a link between VDR polymorphisms and severe outcomes of bronchiolitis and acute lower respiratory tract infections (RTIs) with respiratory syncytial virus (RSV) [105]. Indeed, in children, vitamin D supplementation improves vitamin D status and serum concentrations and is associated with reduced RTIs [123]. Likewise, some vitamin D supplementation studies have reported a reduction in cold/influenza episodes linked to seasonal sunlight exposure and skin pigmentation [124]. In HIV infection, associations have also been reported between vitamin D levels and progression of the disease, survival times of HIV patients, CD4+ T cell counts, inflammatory responses, and the potential impact of HAART (Highly Active Anti-Retroviral Therapy) treatments [125]. Finally, similar population and ecoepidemiological reports have associated vitamin D with several viral infections, including DENV and other flaviviruses [10–13], highlighting not only inhibition of viral replication but also control of the inflammatory response and of disease progression. In addition to viral control, vitamin D-induced immune mechanisms provide potential feedback modulation of pathways that regulate immune activation, avoiding excessive elaboration of the inflammatory response and its potential risk to tissue homeostasis (Table 3) [5, 6, 126].
TLRs can both affect and be affected by VDR signaling, and likewise some antimicrobial peptides associated with TLRs have demonstrated antiviral effects [6, 13, 127]. In this sense, and given the interest in the modulatory effect of vitamin D on TLR expression and proinflammatory cytokine production, some authors have shown that vitamin D can induce hyporesponsiveness to PAMPs (Pathogen-Associated Molecular Patterns) by downregulating the expression of TLR2 and TLR4 on monocytes, which in turn has been associated with impaired production of TNF-α, suggesting a critical role of vitamin D in regulating TLR-driven inflammation [71]. Importantly, a link between the DENV NS1 protein and activation of the inflammatory response via TLR2 and TLR4, impacting the progression of the disease, has very recently been described [93, 128]. DENV NS1 antigens may activate TLR2 and TLR4, inducing high secretion of proinflammatory mediators that enhance endothelial dysfunction and permeability [46, 94, 129, 130]. Interestingly, it was reported that 1,25(OH)2D3 significantly reduces the levels of TLR2/TLR4 expression and of the proinflammatory cytokines (TNF-α, IL-6, IL-12p70, and IL-1β) produced by U937 cells after exposure to DENV [72]. The same approach used in primary human monocytes and macrophages led to similar results, consistent with data obtained in our laboratory [19]. It has been suggested that vitamin D may regulate proinflammatory cytokine levels by targeting TLR activation signaling molecules (Figure 1). Indeed, it has been reported that treatment of monocytes with 1,25(OH)2D3 regulates TLR expression via the NF-κB pathway and reduces signaling of the mitogen-activated protein kinases (MAPKs) p38 and p42/44 [19].
One of the most critical steps in NF-κB regulation is IκBα proteasomal degradation mediated by IKK (IκB kinase), which allows nuclear entry of the NF-κB heterodimer p65/p50 to transactivate gene expression; blocking this step results in decreased expression of inflammatory genes. Accordingly, a novel molecular mechanism has recently been described in which 1,25(OH)2D3 binding to VDR attenuates NF-κB activation by directly interacting with the IKKβ protein to block its activity and, consequently, the NF-κB-dependent inflammatory response [76]. Besides TLR2 and TLR4, it has been shown that vitamin D can also downregulate intracellular TLR9 expression and, subsequently, lead to less secretion of IL-6 in response to TLR9 stimulation [77]. Although intracellular downregulation of some PRRs such as TLR3, TLR7/8, and RIG-I/MDA5 might compromise the antiviral response induced by type I IFN, various reports have shown that vitamin D treatment does not affect the type I IFN-induced antiviral response against various viruses [69, 131, 132]. In fact, it has been reported that porcine rotavirus (PRV) infection induces CYP27B1-dependent generation of 1,25(OH)2D3, which leads to an increased expression of TLR3 and RIG-I that consequently enhances the type I IFN-dependent antiviral response [76].

Table 3. Vitamin D and miR targets associated with the inflammatory response.

| Target/mediator | Modulator | Refs. |
| --- | --- | --- |
| TLR2/4 | Vitamin D/miR-155, miR-146 | [20, 69, 70] |
| TNF-α | Vitamin D/miR-146 | [70, 71] |
| IL-1β | Vitamin D/miR-155 | [19, 69] |
| IL-6 | Vitamin D/let-7e | [72, 73] |
| MAPK | Vitamin D | [19] |
| NF-κB | Vitamin D/miR-155, miR-146 | [20, 70, 74, 75] |
| IKK | Vitamin D | [76] |
| SOCS1 | Vitamin D/miR-155 | [20] |
| TLR9 | Vitamin D | [77] |

### 3.1. Vitamin D and miRs: Potential Implications for Inflammation Balance

Although vitamin D may impact distinct pathways and molecules to modulate inflammatory responses, current evidence points to TLRs and TLR signaling mediators as the main targets by which vitamin D modulates inflammation (Table 3) [6, 113, 133, 134].
However, a novel regulatory vitamin D mechanism in which TLR signaling/activation and miR function are associated has recently been documented, suggesting a crucial role of vitamin D and miRs in host immune system homeostasis [15, 135, 136]. The participation of miRs as general regulatory mechanisms of initiation, propagation, and resolution of immune responses has been widely reviewed elsewhere [21, 137, 138]. Therefore, we discuss here their potential relationship with vitamin D activity in the control of inflammatory responses, attempting to extrapolate these findings to DENV infection. The ability of vitamin D to regulate miRs and their emerging relationship have been proposed by means of several experimental and clinical approaches; however, the implications of their impact on inflammatory responses have only been studied in in vitro models [15, 20, 135, 136, 139]. In patient trials with vitamin D supplementation, significant differences in miR expression profiles have been reported, suggesting that dietary vitamin D may also globally regulate miR levels [15]. Although several mechanisms may be involved in regulating such a global effect, some authors have found that chromatin states may be altered by VDR activity, determining accessibility for transcription factor binding and thereby the activation or inhibition of transcription [140, 141]. This in turn could be of relevance for canonical VDR-VDRE-mediated transcription regulation. In fact, VDR-induced regulation of miRs via VDRE has been demonstrated for some miRs, such as miR-182 and let-7a, whose pri-miRs (primary miRs) have multiple VDR/RXR binding sites, suggesting that these miRs could potentially be regulated by vitamin D metabolites [67, 142]. Moreover, a negative feedback loop between some miRs and VDR signaling has been reported. This is the case for miR-125b, whose overexpression can reduce VDR/RXR protein levels.
Since miR-125b is commonly downregulated in cancer cells, it has been proposed that such a decrease in miR-125b may result in the upregulation of VDR and in increased antitumor effects driven by vitamin D in cancer cell models [136]. Additionally, it has been reported that VDR signaling may attenuate TLR-mediated inflammation by enhancing a negative feedback inhibition mechanism (Figure 1). A recent report has shown that VDR inactivation leads to a hyperinflammatory response in LPS-stimulated mouse macrophages through overproduction of miR-155, which in turn downregulates the suppressor of cytokine signaling (SOCS) family of proteins, key components of the negative feedback loop regulating the intensity, duration, and quality of cytokine signaling [2, 143, 144]. As feedback inhibitors of inflammation, SOCS proteins are upregulated by inflammatory cytokines, and, in turn, they block cytokine signaling by targeting the JAK/STAT (Janus Kinase/Signal Transducer and Activator of Transcription) pathway [2]. Evidence suggests that SOCS inhibits the proinflammatory pathways of cytokines such as TNF-α, IL-6, and IFN-γ and can inhibit the LPS-induced inflammatory response by directly blocking TLR4 signaling through targeting of the IL-1R-associated kinases (IRAK) 1 and 4 [20, 144]. Consequently, deletion of miR-155 attenuates 1,25(OH)2D3 suppression of LPS-induced inflammation, confirming that vitamin D stimulates SOCS1 by downregulating miR-155 [20]. Taken together, these results highlight the importance of the VDR pathways in controlling the inflammatory response by modulating miR-155/SOCS1 interactions. Finally, an additional reinforcing issue that may validate the link between vitamin D activity and miRs is the fact that 1,25(OH)2D3 deficiency has been related to reduced leukotriene synthetic capacity in macrophages [145, 146].
Recently, it was reported that leukotriene B4 (LTB4) can upregulate macrophage MyD88 (Myeloid Differentiation primary response-88) expression by decreasing SOCS-1 stability, an effect associated with the expression of proinflammatory miRs, such as miR-155, miR-146b, and miR-125b, and with TLR4 activation in macrophages [147]. miR-146 has also been shown to modulate inflammatory responses mediated by TLR4/NF-κB and TNF-α [70]. Importantly, this miR has been found downregulated in patients with autoimmune disorders in which low levels of vitamin D have also been reported [148, 149]. These results suggest that vitamin D can orchestrate the miR diversity involved in TLR signaling, thereby regulating inflammatory responses and activation of immune responses.

## 4. Insights into Vitamin D and DENV Infection

Little is known about the link between DENV infection and vitamin D; however, since severe dengue is associated with imbalanced production of proinflammatory cytokines, it is very tempting to suggest that vitamin D could play an important role in modulating the inflammatory responses during ongoing DENV infections. Although only a few studies illustrate a link between vitamin D activity and DENV infection or disease, these reports have provided preliminary epidemiological evidence supporting this novel hypothesis. Initially, it was reported that heterozygosity in the VDR gene was correlated with progression of dengue. In a small Vietnamese population where dengue is endemic, the low frequency of the “t” allele of a dimorphic (T/t) site in the VDR gene was associated with dengue disease severity, suggesting a protective role of VDR activity against dengue disease progression [12]. Variations in VDR have also been associated with susceptibility to osteoporosis in humans and with reduced risk of tuberculosis and persistent hepatitis B virus infections [150–152], highlighting the importance of VDR variations in signaling and immune protection. Accordingly, a study revealed an association of the “T” allele with DHF, showing that the “T” allele codes for a longer VDR variant that is the least active form of the receptor. Since vitamin D is known to suppress TNF-α, it is possible that such inappropriate VDR signaling may contribute to higher levels of inflammation, enhancing susceptibility to severe disease [10]. Although the modulatory effect of vitamin D during DENV infection and disease has not been widely tested in human populations, initial studies have associated oral 25(OH)D3 supplementation with antiviral responses, resistance, and recovery from the disease.
Specifically, one study reported five cases of DF patients in whom supplementation ameliorated the signs and symptoms of the disease, improving the overall clinical condition and reducing the risk of disease progression [11]. Interestingly, this may be linked to other clinical approaches in which oral supplementation with vitamin D enhanced the antiviral response to HCV [63], another RNA virus of the family Flaviviridae. The potential antiviral mechanism of vitamin D against DENV has not yet been fully explored; however, certain reports support the proposal that vitamin D could exert anti-DENV effects and immunoregulatory functions on innate immune responses [10–12]. In line with this, the effect of vitamin D treatment of human monocytic cell lines on DENV infection was recently reported [72]. The authors showed that cell exposure to 1,25(OH)2D3 resulted in a significant reduction of DENV-infected cells, a variable modulation of TLR2 and TLR4, and reduced levels of secreted proinflammatory cytokines such as TNF-α, IL-6, and IL-1β after infection [72]. The molecular mechanisms by which vitamin D can elicit an antiviral and anti-inflammatory role towards DENV have not been fully described, and although we observed that monocyte-derived macrophages differentiated in the presence of 1,25(OH)2D3 are less susceptible to DENV infection and express lower levels of mannose receptor, restricting binding of DENV to target cells (manuscript in preparation), further studies are required to confirm that vitamin D treatment confers both anti-inflammatory and antiviral responses. Another interesting mechanism that could support the antiviral activity of vitamin D is the VDR-induced regulation of miRs via VDRE. This has been demonstrated for some miRs, such as let-7a (Table 2), whose pri-miR has multiple VDR/RXR binding sites that could potentially be regulated by vitamin D [67, 142].
miR let-7a belongs to a highly conserved family of miRs that contains other miRs previously reported to inhibit DENV replicative activity, such as let-7c [68]. Besides the members of the let-7 family, other miRs have also been associated with suppression of DENV infection and of the inflammatory responses against the virus, as discussed below.

### 4.1. MicroRNAs in DENV Infection

Viruses strictly depend on cellular mechanisms for their replication; therefore, there is an obligatory interaction between the virus and the host RNA silencing machinery. Although virus-derived small interfering RNAs may induce changes in cellular mRNA and miR expression profiles to promote replication, cellular miRs can also target viral sequences or induce antiviral protein expression to inhibit viral replication and translation [153]. Indeed, during DENV infection, several cellular miRs have been reported to affect the replicative activity of the virus and the permissiveness of the host cells. Although some host miRs can also enhance DENV replication [81, 154], here we highlight the miRs restricting DENV replicative activity and modulating the immune response (Table 4).

Table 4. Summary of miRs regulating the DENV-induced inflammatory response and viral replicative activity.

| miRNA | Target | Cell line | Refs. |
| --- | --- | --- | --- |
| let-7e | 3′-UTR of IL-6 | Human peripheral blood mononuclear cells | [73] |
| let-7c | HO-1 protein and the transcription factor BACH1 | Huh-7 human hepatic cell line | [68] |
| miR-252 | DENV envelope E protein | Aedes albopictus C6/36 cell line | [78] |
| miR-30e* | IκBα in DENV-permissive cells and IFN-β production | Peripheral blood mononuclear cells and U937 and HeLa cell lines | [79] |
| miR-150 | 3′-UTR of SOCS-1 | Peripheral blood mononuclear cells and monocytes | [80] |
| miR-122 | 3′-UTR of the DENV genome/mRNA | BHK-21, HepG2, and Huh-7 cell lines | [81] |
| miR-142 | 3′-UTR of the DENV genome/mRNA | Human dendritic cells and macrophages | [82] |
| miR-133a | 3′-UTR of PTB; 3′-UTR of the DENV genome/mRNA | Mouse C2C12 cells and Vero cells | [83, 84] |
| miR-548 | 5′-UTR SLA (Stem Loop A) of DENV | U937 monocytes/macrophages | [85] |
| miR-223 | Microtubule destabilizing protein stathmin 1 (STMN-1) | EA.hy926 endothelial cell line | [86] |

The expression levels of different miRs regulated during DENV infection have been screened in the hepatic cell line Huh-7. This approach identified miR let-7c as a key regulator of the viral replicative cycle that affects viral replication and the oxidative stress response through the protein Heme Oxygenase-1 (HO-1) by targeting its transcriptional repressor BACH1 (Basic Leucine Zipper Transcription Factor-1) [68]. In addition, it was recently reported that, after DENV-2 infection of the C6/36 cell line, endogenous miR-252 is highly induced and associated with a decreased level of viral RNA copies. This antiviral effect was explained by the fact that miR-252 targets the DENV-2 E protein gene sequence, downregulating its expression and therefore acting as an antiviral regulator [78]. Although DENV can escape the immune system by decreasing the production of type I IFN through DENV NS5 and NS4B activity [42, 97], DENV infection also induces upregulation of the cellular miR-30e* that suppresses DENV replication by increasing IFN-β production.
This antiviral effect of miR-30e* depends mainly on NF-κB activation through targeting of the NF-κB inhibitor IκBα in DENV-permissive cells [79]. An antiviral effect induced by type I IFN signaling is also promoted by miR-155, which has been reported to control virus-induced immune responses in models of infection with other members of the family Flaviviridae such as HCV [155–157]. In this latter model, the antiviral effect greatly depended on miR-155 targeting SOCS-1. This observation is in accordance with a study in which elevated expression of miR-150 in patients with DHF was correlated with suppression of SOCS-1 expression in monocytes [80], which in turn could be linked to the fact that vitamin D controls inflammatory responses through modulation of SOCS by downregulating miR-155 [20]. Although it has remained unclear whether endogenous miRs can interfere with viral replicative activity by targeting DENV sequences or viral mRNAs, some experimental approaches have shown the importance of miRs in restricting viral replication through this mechanism [85, 158–160]. Some artificial miRs (amiRs) have been described as targeting highly conserved regions of the DENV-2 genome and promoting efficient inhibition of virus replication [158]. Using DENV subgenomic replicons carrying the specific miR recognition element (MRE) for miR-122 in the 3′-UTR of the DENV genome/mRNA, some authors have shown that the liver-specific miR-122 suppresses translation and replication of DENV by targeting this MRE sequence [81]. Likewise, the insertion of the MRE for the hematopoietic-specific miR-142 into the DENV-2 genome restricts replication of the virus in DCs and macrophages, highlighting the importance of this hematopoietic miR in dissemination of the virus [82].
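The MRE-based targeting described above rests on Watson-Crick complementarity between a miR "seed" (nucleotides 2-8 of the mature miR) and its recognition element in the target RNA. A minimal sketch in Python, using hypothetical toy sequences rather than actual DENV or host 3′-UTR data, illustrates how such seed matches can be located:

```python
# Minimal sketch of miR seed-match scanning. Sequences below are toy
# examples for illustration, not actual DENV or host 3'-UTR data.
# The canonical "seed" is nucleotides 2-8 of the mature miR (1-based);
# a candidate target site carries the seed's reverse complement.

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA string (A-U, G-C pairing)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[nt] for nt in reversed(rna))

def seed_sites(mir: str, utr: str, seed_start: int = 1, seed_len: int = 7):
    """Return 0-based positions in `utr` that match the reverse
    complement of the miR seed (positions 2-8 of the mature miR)."""
    site = reverse_complement(mir[seed_start : seed_start + seed_len])
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i : i + len(site)] == site]

# Hypothetical 22-nt miR (let-7 family-like) and a toy 3'-UTR
# containing one seed-match site.
mir = "UGAGGUAGUAGGUUGUAUAGUU"
# seed (pos 2-8) = GAGGUAG; target site = reverse complement = CUACCUC
utr = "AAACUACCUCAAAGGG"
print(seed_sites(mir, utr))  # -> [3]
```

A raw seed scan like this only enumerates candidate MREs; genome-scale target predictors such as TargetScan additionally weigh site context and evolutionary conservation before calling a functional target.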
In addition, DENV replication is enhanced by the interaction of the viral genome 3′-UTR and the host polypyrimidine tract binding (PTB) protein that translocates from the nucleus to the cytoplasm facilitating DENV replication [36, 161, 162]. However, the PTB mRNA 3′-UTR contains MREs that can be targeted by miR-133a, providing a mechanism for the downregulation of the PTB protein expression levels [163]. Moreover, in our group, we found that miR-133a contains target sites in the 3′-UTR sequence of the 4 DENV serotypes and that overexpression of miR-133a in Vero cells was associated with decreased DENV-2 replication activity [84]. All these data suggest a possible antiviral mechanism via miR-133a targeting the PTB protein mRNA and the DENV 3′-UTR sequence. Furthermore, we also showed that miR-744 and miR-484 can downregulate DENV replication by targeting the 3′UTR of the DENV RNA genome [Betancur et al., submitted]. In addition, the cellular miR-548g-3p has been identified as displaying antiviral activity by targeting the 5′-UTR SLA (Stem Loop A) promoter of the four DENV serotypes, thus, repressing viral replication and expression of viral proteins, independently of interferon signaling [85]. Moreover, overexpression of miR-223 inhibited replication of DENV in an endothelial cell-like cell line. The authors showed that miR-223 inhibits DENV by negatively regulating the microtubule destabilizing protein stathmin 1 (STMN-1) that is crucial for reorganization of microtubules and later replication of the virus. In addition, this study identified that the transcription factors C/EBP-α and EIF2 are regulators of miR-223 expression after DENV infection [86].Although little is known regarding the variations in miR expression in DENV-infected individuals, a recent study showed the expression profile of the miRs in blood samples of DEN-infected patients. 
The authors report 12 miRs that were specifically altered upon acute dengue and 17 miRs that could potentially be associated with specific dengue-related complications [164]. In addition, another profiling study reported abundance changes in the expression of some miRs in DENV-infected peripheral blood monocytes. Importantly, let-7e was among the miRs with the most significant regulation which, besides anti-DENV activity, may be of crucial importance for the modulation of inflammatory responses. Specifically, let-7e shares matching sequences with the 3′UTR mRNA of IL-6 and CCL3, as well as of other cytokines, highlighting a key role of miRs in immune response homeostasis during DENV infection (Figure 1) [67, 73, 86]. Likewise, miR-223 that also shares antiviral activity against DENV has been shown to have an important effect on the inflammatory response by regulating IL-β and IL-6 through IKKα and MKP-5 [86, 165, 166], stressing its potential contribution in DENV pathogenesis control. Since a link between vitamin D and miR expression has been established, but no reports discuss their combined implications for DENV antiviral and inflammatory response, we hypothesized here a vitamin D and miR interplay that could modulate DENV pathogenesis, opening new horizons in the therapeutic field of dengue disease. ## 4.1. MicroRNAs in DENV Infection Viruses strictly depend on cellular mechanisms for their replication; therefore, there is an obligatory interaction between the virus and the host RNA silencing machinery. Although virus-derived small interfering RNAs may induce changes in cellular mRNA and miR expression profiles to induce replication, cellular miRs can also target viral sequences or induce antiviral protein expression to inhibit viral replication and translation [153]. Indeed, during DENV infection, several cellular miRs have been reported to have an effect on the replicative activity of the virus and the permissiveness of the host cells. 
Although some host miRs can also enhance DENV replication [81, 154], here we highlight the miRs affecting DENV replicative activity and modulating the immune response (Table 4).

Table 4: Summary of miRs regulating the DENV-induced inflammatory response and viral replicative activity.

| miRNA | Target | Cell line | Refs. |
|---|---|---|---|
| let-7e | 3′-UTR of IL-6 | Human peripheral blood mononuclear cells | [73] |
| let-7c | HO-1 protein and the transcription factor BACH1 | Huh-7 human hepatic cell line | [68] |
| miR-252 | DENV envelope E protein | *Aedes albopictus* C6/36 cell line | [78] |
| miR-30e∗ | IκBα in DENV-permissive cells and IFN-β production | Peripheral blood mononuclear cells and U937 and HeLa cell lines | [79] |
| miR-150 | 3′-UTR of SOCS-1 | Peripheral blood mononuclear cells and monocytes | [80] |
| miR-122 | 3′-UTR of the DENV genome/mRNA | BHK-21, HepG2, and Huh-7 cell lines | [81] |
| miR-142 | 3′-UTR of the DENV genome/mRNA | Human dendritic cells and macrophages | [82] |
| miR-133a | 3′-UTR of PTB; 3′-UTR of the DENV genome/mRNA | Mouse C2C12 cells and Vero cells | [83, 84] |
| miR-548 | 5′-UTR SLA (Stem Loop A) of DENV | U937 monocytes/macrophages | [85] |
| miR-223 | Microtubule destabilizing protein stathmin 1 (STMN-1) | EA.hy926 endothelial cell line | [86] |

The expression levels of different miRs regulated during DENV infection have been screened in the hepatic cell line Huh-7. This approach identified let-7c as a key regulator of the viral replicative cycle that affects viral replication and the oxidative stress immune response through the protein Heme Oxygenase-1 (HO-1) by activating its transcription factor BACH1 (Basic Leucine Zipper Transcription Factor-1) [68]. In addition, it was recently reported that, after DENV-2 infection of the C6/36 cell line, endogenous miR-252 is highly induced and associated with a decreased level of viral RNA copies. This antiviral effect was explained by the fact that miR-252 targets the DENV-2 E protein gene sequence, downregulating its expression and therefore acting as an antiviral regulator [78].
Although DENV can escape the immune system by decreasing the production of type I IFN through DENV NS5 and NS4B activity [42, 97], DENV infection also induces the upregulation of the cellular miR-30e∗, which suppresses DENV replication by increasing IFN-β production. This antiviral effect of miR-30e∗ depends mainly on NF-κB activation through targeting of the NF-κB inhibitor IκBα in DENV-permissive cells [79]. An antiviral effect mediated by type I IFN signaling is also promoted by miR-155, which has been reported to control virus-induced immune responses in models of infection with other members of the family Flaviviridae such as HCV [155–157]. In this latter model, the antiviral effect greatly depended on miR-155 targeting SOCS-1. This observation is in accordance with a study in which elevated expression of miR-150 in patients with DHF was correlated with suppression of SOCS-1 expression in monocytes [80], which in turn could be linked to the fact that vitamin D controls inflammatory responses through modulation of SOCS by downregulating miR-155 [20].

Although it has remained unclear whether endogenous miRs can interfere with viral replicative activity by targeting DENV sequences or viral mRNAs, some experimental approaches have shown the importance of miRs in restricting viral replication through this mechanism [85, 158–160]. Some artificial miRs (amiRs) have been described as targeting highly conserved regions of the DENV-2 genome and promoting efficient inhibition of virus replication [158]. Using DENV subgenomic replicons carrying the specific miR recognition element (MRE) for miR-122 in the 3′-UTR of the DENV genome/mRNA, some authors have shown that the liver-specific miR-122 suppresses translation and replication of DENV by targeting this MRE sequence [81].
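The MRE replicon experiment can be read as a simple tropism predicate: a genome carrying a given MRE is predicted to be silenced only in cells that express the matching miR. A minimal sketch of this logic follows; the cell-type/miR expression table is a hypothetical illustration, not measured data.

```python
# Toy model of MRE-based tropism restriction: an engineered genome is
# predicted to be silenced in a cell type only if that cell expresses a
# miR whose recognition element (MRE) was inserted into the genome.
# The expression table is a hypothetical illustration, not measured data.

MIR_EXPRESSION = {
    "hepatocyte": {"miR-122"},   # liver-enriched miR
    "fibroblast": set(),         # assumed not to express miR-122
}

def restricted_in(genome_mres: set[str], cell_type: str) -> bool:
    """True if the cell expresses any miR matching an inserted MRE."""
    return bool(genome_mres & MIR_EXPRESSION.get(cell_type, set()))

replicon = {"miR-122"}  # replicon carrying the miR-122 MRE in its 3'UTR
print(restricted_in(replicon, "hepatocyte"))  # -> True
print(restricted_in(replicon, "fibroblast"))  # -> False
```

The same predicate explains why inserting a hematopoietic-specific MRE restricts replication specifically in hematopoietic lineages.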
Likewise, the insertion of the MRE for the hematopoietic-specific miR-142 into the DENV-2 genome restricts replication of the virus in DCs and macrophages, highlighting the importance of this hematopoietic miR in dissemination of the virus [82]. In addition, DENV replication is enhanced by the interaction of the viral genome 3′-UTR and the host polypyrimidine tract binding (PTB) protein, which translocates from the nucleus to the cytoplasm, facilitating DENV replication [36, 161, 162]. However, the PTB mRNA 3′-UTR contains MREs that can be targeted by miR-133a, providing a mechanism for the downregulation of PTB protein expression levels [163]. Moreover, our group found that miR-133a has target sites in the 3′-UTR sequence of the four DENV serotypes and that overexpression of miR-133a in Vero cells was associated with decreased DENV-2 replication activity [84]. All these data suggest a possible antiviral mechanism via miR-133a targeting the PTB protein mRNA and the DENV 3′-UTR sequence. Furthermore, we also showed that miR-744 and miR-484 can downregulate DENV replication by targeting the 3′-UTR of the DENV RNA genome [Betancur et al., submitted]. In addition, the cellular miR-548g-3p has been identified as displaying antiviral activity by targeting the 5′-UTR SLA (Stem Loop A) promoter of the four DENV serotypes, thus repressing viral replication and expression of viral proteins independently of interferon signaling [85]. Moreover, overexpression of miR-223 inhibited replication of DENV in an endothelial cell-like cell line. The authors showed that miR-223 inhibits DENV by negatively regulating the microtubule destabilizing protein stathmin 1 (STMN-1), which is crucial for reorganization of microtubules and later replication of the virus.
In addition, this study identified the transcription factors C/EBP-α and EIF2 as regulators of miR-223 expression after DENV infection [86].

Although little is known regarding the variations in miR expression in DENV-infected individuals, a recent study reported the expression profile of miRs in blood samples of dengue-infected patients. The authors report 12 miRs that were specifically altered upon acute dengue and 17 miRs that could potentially be associated with specific dengue-related complications [164]. In addition, another profiling study reported abundance changes in the expression of some miRs in DENV-infected peripheral blood monocytes. Importantly, let-7e was among the miRs with the most significant regulation and, besides anti-DENV activity, may be of crucial importance for the modulation of inflammatory responses. Specifically, let-7e shares matching sequences with the 3′-UTR of the mRNAs of IL-6 and CCL3, as well as of other cytokines, highlighting a key role of miRs in immune response homeostasis during DENV infection (Figure 1) [67, 73, 86]. Likewise, miR-223, which also displays antiviral activity against DENV, has been shown to have an important effect on the inflammatory response by regulating IL-1β and IL-6 through IKKα and MKP-5 [86, 165, 166], stressing its potential contribution to the control of DENV pathogenesis. Since a link between vitamin D and miR expression has been established, but no reports discuss their combined implications for the DENV antiviral and inflammatory response, we hypothesize here a vitamin D and miR interplay that could modulate DENV pathogenesis, opening new horizons in the therapeutic field of dengue disease.

## 5. Concluding Remarks and Future Perspectives

Severe dengue disease symptoms and DENV infection are characterized by overproduction of proinflammatory cytokines driven mainly by activation of several PRRs [29].
Here, we hypothesize that vitamin D may contribute to avoiding DENV infection and disease progression, especially through the modulation of miRs/TLRs that enhance antiviral activity and regulate the inflammatory response. Although vitamin D's antiviral mechanism has not been fully elucidated, it may be linked to vitamin D's ability to control the permissiveness of DENV target cells and the virus-induced proinflammatory responses [72]. However, a better understanding of these mechanisms is required to provide useful clues regarding DENV pathogenesis and dengue disease treatment. Certainly, epidemiological and experimental evidence describes an overall positive vitamin D-related immune effect in which increased levels of vitamin D and variants in the VDR receptor are associated with reduced viral replication, decreased risk of infection, lower disease severity, and better outcome of dengue symptoms [9–12, 72]. Additionally, the emerging relationships between vitamin D, the TLR signaling pathway, and its regulation by miRs are beginning to gain critical importance in infectious diseases. Indeed, as discussed above, several DENV infection studies have started to illustrate these vitamin D regulatory features, which could be key mechanisms for the control of virus replication and homeostasis of the inflammatory response, making this hormone a special candidate for therapeutic strategies [127]. Although most studies have focused on the effects of vitamin D in dendritic cells and macrophages, others have described the same immunoregulatory effects on other cell populations of the immune system, such as CD8+ T cells, NK cells, and B cells [167–169], suggesting an impact not only on DENV target cells but also on cells associated with virus clearance.
All the data discussed here suggest that vitamin D could constitute a strong potential strategy to modulate the "cytokine storm" that occurs during ongoing DENV infections and the progression to severe states of the disease. Although it is important to note that such a global effect on inflammatory activity could weaken the host response to other opportunistic pathogens, it has been suggested that while vitamin D may reduce inflammatory markers during viral infections, it also exerts protective effects against coinfections with other opportunistic pathogens [14, 106]. Moreover, its clinical effectiveness has been tested by improving the overall physical condition of DENV patients and reducing the progression of the disease [11]. Although further supplementation trials are required to fully elucidate the therapeutic relevance of vitamin D, this hormone may be an excellent natural immune-regulatory agent capable of modulating the innate immune response against DENV, which will provide crucial information to understand and design strategies to treat and control progression of dengue disease. Although further experimental studies are required to improve the understanding of vitamin D in the regulation of inflammation and the antiviral response against DENV infection, the information discussed above highlights vitamin D-mediated immune regulation as an exciting research field and as a potentially efficient and low-cost therapeutic approach against DENV and possibly other viral infections.

---
# Vitamin D-Regulated MicroRNAs: Are They Protective Factors against Dengue Virus Infection?

**Authors:** John F. Arboleda; Silvio Urcuqui-Inchima

**Journal:** Advances in Virology (2016)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2016/1016840
---

## Abstract

Over the last few years, an increasing body of evidence has highlighted the critical participation of vitamin D in the regulation of proinflammatory responses and protection against many infectious pathogens, including viruses. The activity of vitamin D is associated with microRNAs, which are fine tuners of immune activation pathways and provide novel mechanisms to avoid the damage that arises from excessive inflammatory responses. Severe symptoms of an ongoing dengue virus infection and disease are strongly related to highly altered production of proinflammatory mediators, suggesting impairment in the homeostatic mechanisms that control the host's immune response. Here, we discuss the possible implications of emerging studies anticipating the biological effects of vitamin D and microRNAs during the inflammatory response, and we attempt to extrapolate these findings to dengue virus infection and to their potential use in disease management strategies.

---

## Body

## 1. Introduction

Activation of innate immune cells results in the release of proinflammatory mediators that initiate a protective local response against invading pathogens [1]. However, overactivated inflammatory activity can be detrimental, since it can cause tissue damage and even death of the host. Therefore, negative feedback mechanisms are required to control the duration and intensity of the inflammatory response [1, 2]. Although little is known about the molecular mechanisms at work during dengue virus (DENV) infection/disease, it has been suggested that the immune response initiated against the virus greatly contributes to pathogenesis. Indeed, several symptoms of the disease are tightly related to imbalanced immune responses, particularly to high production of proinflammatory cytokines [3, 4], suggesting an impairment of the homeostatic mechanisms that control inflammation.
Interestingly, vitamin D has been described as an important modulator of immune responses to several pathogens and as a key factor enhancing immunoregulatory mechanisms that avoid the damage arising from excessive inflammatory responses [5, 6], as in dengue disease [7]. Mounting evidence from human populations and experimental in vitro studies suggests that this hormone can play a key role in the immune system's response to several viruses [8–14], thereby becoming a potential target of intervention to combat DENV infection and disease progression. Among several mechanisms, vitamin D activity has been associated with the expression of certain microRNAs (miRs) [15], which are one of the main regulatory switches operating at the translational level [16]. miRs constitute approximately 1% of the human genome, and their sequences can be found within introns of other genes or can be encoded independently and transcribed in a similar fashion to mRNAs encoded by protein-coding genes [16]. A typical mature miR of 18–23 nucleotides associates with the RNA-induced silencing complex (RISC) and moves towards the target mRNA [17]. Once there, the miR binds to a complementary sequence in the 3′ untranslated region (3′UTR) of the mRNA, thereby inducing gene silencing through mRNA cleavage, translational repression, or deadenylation [16]. A single miR may directly regulate the expression of hundreds of mRNAs at once, and several miRs can also target the same mRNA, resulting in enhanced translation inhibition [18]. Targeting of specific genes involved in modulation of immune response pathways by miRs provides a finely tuned regulatory mechanism for the restoration of the host's resting inflammation state [19–21].
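As an illustration of the seed-pairing rule described above, the following sketch scans a 3′UTR for perfect matches to the reverse complement of a miR's 7-nucleotide seed (positions 2–8). The sequences are invented for illustration; real target prediction tools additionally weigh site context, pairing stability, and conservation.

```python
# Minimal sketch of miRNA 7-nt seed matching (miR positions 2-8).
# Sequences below are invented for illustration; real target prediction
# also considers site context, pairing stability, and conservation.

def revcomp(rna: str) -> str:
    """Reverse complement of an RNA sequence (A-U, G-C pairing)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(rna))

def seed_sites(mir: str, utr: str) -> list[int]:
    """0-based positions in `utr` that perfectly match the reverse
    complement of the miR seed (nucleotides 2-8, 5'->3')."""
    site = revcomp(mir[1:8])
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# A made-up 22-nt mature miR and a toy 3'UTR containing one seed site.
mir = "UGGAGUGUGACAAUGGUGUUUG"
utr = "AAAA" + revcomp(mir[1:8]) + "CCCC"
print(seed_sites(mir, utr))  # -> [4]
```

A single UTR can carry sites for several miRs, and one miR seed can match many UTRs, which mirrors the many-to-many regulation described in the text.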
Since the association between vitamin D and miR activity may play a relevant role in ongoing DENV infections, here we provide an overview of DENV-induced inflammatory responses and the early evidence anticipating a possible participation of the vitamin D and miR interplay in regulating antiviral and inflammatory responses during DENV infection/disease.

## 2. DENV and the Immune Response

DENV is an icosahedral, enveloped virus with a positive-sense single-stranded RNA (ssRNA) genome that belongs to the family Flaviviridae, genus Flavivirus. There are four phylogenetically related but antigenically distinct viral serotypes (DENV 1–4) able to cause the full spectrum of the disease [22]. In addition, a sylvatic serotype (DENV-5), with no evidence regarding its ability to infect humans, has been recently reported [23]. DENV is transmitted by Aedes mosquitoes in tropical and subtropical areas, where the disease has become a major public health threat and one of the most rapidly spreading vector-borne diseases in the world, with a 30-fold increase in incidence over the past 50 years [24, 25]. An estimated 3.6 billion people live in high-risk areas worldwide, and over 390 million infections are estimated to occur every year, of which 96 million manifest as dengue fever [26–28]. Although only a minor number of cases may progress to the severe forms of the disease, 21,000 deaths are reported annually [27]. Guidelines of the World Health Organization (WHO) recognize dengue as a clinical continuum from dengue fever (DF), a nonspecific febrile illness, to dengue with or without warning signs that can progress to dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS) [3]. These severe forms of the disease are characterized by a wide spectrum of symptoms, including the development of vascular permeability, plasma leakage, thrombocytopenia, focal or generalized hemorrhages, and tissue and/or organ damage that may lead to shock and death [29, 30].
Besides ecoepidemiology, host genetic variations, and virus virulence, the risk of severe disease is increased mainly by secondary infections with different dengue serotypes, presumably through a mechanism known as antibody-dependent immune enhancement (ADE), whereby nonneutralizing antibodies from previous heterotypic infections enhance virus entry via receptors for immunoglobulins, or Fc receptors (FcRs) [29, 31, 32].

The skin is the first barrier for the invading DENV and the site where innate immunity exerts the first line of defense [33]. Following the bite of an infected mosquito, local tissue-resident dendritic cells (DCs) and macrophages are the main targets of the virus [34, 35]. The viral structural E protein binds to cellular receptors such as DC-SIGN (Dendritic Cell-Specific Intercellular adhesion molecule-3-Grabbing Nonintegrin), CLEC5A (C-type lectin domain family 5, member A), and the MR (mannose receptor), allowing internalization of the virus through receptor-mediated endocytosis [22, 36–38]. Once in the cytoplasm, DENV replication products such as double-stranded RNA (dsRNA) or genomic ssRNA are sensed by several pattern recognition receptors (PRRs) (Figure 1), including TLR3, TLR7, TLR8, and the cytosolic receptors RIG-I (Retinoic acid Inducible Gene-1) and MDA-5 (Melanoma Differentiation-Associated protein 5) [39–43]. Subsequently, this subset of PRRs triggers the activation of intracellular pathways, leading to the activation of transcription factors such as interferon regulatory factors 3 and 7 (IRF3 and IRF7) and Nuclear Factor κB (NF-κB), and the subsequent production of type I interferons and proinflammatory cytokines promoting an antiviral response [44, 45].
Additionally, the local activation of natural killer (NK) cells, neutrophils, and mast cells by the presence of the virus induces more proinflammatory mediators, complement activation, and the commitment of cellular and humoral immune responses to clear and control viral infection [46].

Figure 1: Potential link between vitamin D and miRs controlling the DENV-induced inflammatory response and antiviral activity. (1) DENV replication products and proteins are recognized by several PRRs whose signaling pathways promote the proinflammatory response. (2) Vitamin D activity induces transcription of microRNAs and other target genes that play a critical role in the control of inflammation-related signaling pathways and antiviral activity.

### 2.1. Inflammation and Cytokine Storm

Although the immune response is critical to combat and overcome invading pathogens, it is believed that it also greatly contributes to progression of dengue disease [31]. The pathogenesis and progression to the severe forms of dengue are still not completely understood; however, most cases are characterized by bleeding, hemorrhage, and plasma leakage that can progress to shock or organ failure [87, 88]. These physiological events are preceded by a hyperpermeability syndrome caused mainly by an imbalance between proinflammatory and anti-inflammatory cytokines produced in response to virus infection. The predominant proinflammatory mediators, or "cytokine storm," secreted mainly by T cells, monocytes/macrophages, and endothelial cells (Table 1), promote endothelial dysfunction by generating an endothelial "sieve" effect that leads to fluid and protein leakage. Increasing evidence suggests that endothelial integrity and vascular permeability are affected by proinflammatory cytokines through the induction of apoptosis and the modulation of tight junction molecules within endothelial cells [47, 52, 89, 90].
In addition, it has also been reported that these cytokines may often have synergistic effects and may induce expression of other cytokines, generating a positive feedback mechanism leading to further imbalanced levels of inflammatory mediators and higher permeability [4].

Table 1: Summary of the main cytokines associated with development of DHF/DSS and their biological function in relation to pathogenesis.

| Cytokine | Biological function | Refs. |
|---|---|---|
| MCP-1 | Monocyte chemoattractant protein-1 is critical to drive the extravasation of mononuclear cells into inflamed, infected, and traumatized sites of infection. In addition, it promotes endothelial permeability, increasing the vascular leakage that results from dengue virus infection. | [47, 48] |
| IL-1 | It induces tissue factor (TF) expression in endothelial cells (ECs) and suppresses their cell-surface anticoagulant activity. It may upregulate TNF-α production and activity. IL-1β mediates platelet-induced activation of ECs, which increases chemokine release and upregulates VCAM-1, enhancing adhesion of monocytes to the endothelium. | [43, 49] |
| IL-6 | It has been described as a strong inducer of endothelial permeability resulting in vascular leakage. IL-6 potentiates the coagulation cascade and can downregulate production of TNF-α and its receptors. IL-6 may act synergistically with pyrogens such as IL-1 to induce fever. | [50, 51] |
| IL-8 | Its systemic concentrations are increased by EC damage, which in turn induces endothelial permeability. Activation of the coagulation system results in increased expression of IL-6 and IL-8 by monocytes, while the APC anticoagulation pathway downregulates the production of IL-8 by ECs. | [49, 50, 52] |
| IL-10 | It plays an immunosuppressive role that causes IFN resistance, followed by impaired immune clearance and a persistent infectious effect in acute viral infection. IL-10 also inhibits the expression of TF and inhibits fibrinolysis. IL-10 plasma levels have been associated with disease severity; however, its role in dengue pathogenesis has not been fully elucidated. | [53] |
| TNF-α | It is a potent activator of ECs and enhances capillary permeability. TNF-α upregulates expression of TF in monocytes and ECs and downregulates expression of thrombomodulin on ECs. It also activates the fibrinolytic system and enhances expression of NO, mediating activation-induced death of T cells, and it has therefore been implicated in peripheral T-cell deletion. | [49, 51, 54] |
| TGF-β | Early in infection, low levels of TGF-β may trigger secretion of IL-1 and TNF-β. Later in infection, the cytokine inhibits the Th1 response and enhances production of Th2 cytokines such as IL-10. TGF-β increases expression of TF on ECs and upregulates expression and release of PAI-1 (plasminogen activator inhibitor-1). | [3] |
| VEGF | VEGF is a key driver of vascular permeability. It reduces EC occludin, claudin, and VE-cadherin content, all of which are components of EC junctions. Upon activation, VEGF stimulates expression of ICAM-1, VCAM-1, and E-selectin in ECs. | [3, 36] |

This oversustained inflammatory response may be due to an impairment of the regulatory mechanisms that control the duration and intensity of inflammation and cytokine production, especially through the regulation of PRR signaling activation [20]. Several studies have shown that alterations in proinflammatory cytokine production during DENV infection/disease can be attributed to variations in recognition and activation of TLR signaling, which contributes to progression of the disease (Figure 1) [91, 92]. It was recently reported that DENV NS1 proteins may be recognized by TLR2, TLR4, and TLR6, enhancing the production of proinflammatory cytokines and triggering the endothelial permeability that leads to vascular leakage [93, 94].
Interestingly, our group has recently shown a differential expression of TLRs in dendritic cells (DCs) of dengue patients depending on the severity of the disease [95]. Indeed, there was an increased expression of TLR3 and TLR9 in DCs of patients with DF, in contrast to a poor stimulation of both receptors in DCs of patients with DHF. Conversely, a lower expression of TLR2 in DF patients compared to DHF patients was also observed. Additionally, IFN-α production was also altered via TLR9, suggesting that DENV may affect the type I IFN response through this signaling pathway [95]. Indeed, DENV has successfully evolved to overcome host immune responses by efficiently subverting the IFN pathway and inhibiting different steps of the immune response through the expression of viral nonstructural proteins that antagonize several molecules of this activation pathway [96, 97]. Although DENV may evade immune recognition [42], cumulative data have shown that it is sensed by both TLR3 and TLR7/8 and activates signaling pathways upregulating IFN-α/β, TNF-α, human defensin 5 (HD5), and human β-defensin 2 (HβD2) [39–41]. In addition, RIG-I and MDA-5 are also activated upon DENV infection and are essential for host defense against the virus [40]. Moreover, TLR3 controls DENV-2 replication through NF-κB activation, suggesting that TLR3 agonists such as Poly(I:C) (polyinosinic:polycytidylic acid) might work as immunomodulators of DENV infection [39]. Furthermore, besides DENV recognition and binding, C-type lectins such as the mannose receptor (MR) and CLEC5A may contribute to the inflammatory responses [98–100].
CLEC5A plays a critical role in the induction of NLRP3 inflammasome activation during DENV infection and enhances the release of IL-18 and IL-1β that are critical for activation of Th17 helper cells [99, 101].While innate immune activation and proinflammatory cytokine production are being investigated during the course of DENV infections [53, 92, 102], vitamin D activity has gained special attention due to its importance in the modulation of the innate response. An increasing number of reports suggest that vitamin D activity is associated with the modulation of components implicated in antiviral immune responses and in the regulation of proinflammatory cytokine production through the modulation of miR expression [6, 13, 15, 103]. Although there is little information from observational studies and clinical trials demonstrating the role of vitamin D during dengue virus infection, here we postulate a potential role of vitamin D controlling progression of dengue disease and provide evidence of some vitamin D molecular mechanisms in support of our hypothesis. ## 2.1. Inflammation and Cytokine Storm Although the immune response is critical to combat and overcome invading pathogens, it is believed that the immune response greatly contributes to progression of dengue disease [31]. The pathogenesis and progression to the severe forms of dengue are still not completely understood; however, most cases are characterized by bleeding, hemorrhage, and plasma leakage that can progress to shock or organ failure [87, 88]. These physiological events are preceded by a hyperpermeability syndrome caused mainly by an imbalance between proinflammatory and anti-inflammatory cytokines produced in response to virus infection. The predominant proinflammatory mediators or “cytokine storm,” secreted mainly by T cells, monocytes/macrophages, and endothelial cells (Table 1), promotes endothelial dysfunction by generating an endothelial “sieve” effect that leads to fluid and protein leakage. 
Increasing evidence suggests that endothelial integrity and vascular permeability are affected by proinflammatory cytokines through the induction of apoptosis and the modulation of tight junction molecules within endothelial cells [47, 52, 89, 90]. In addition, it has also been reported that these cytokines may often have synergistic effects and may induce expression of other cytokines, generating a positive feedback mechanism leading to further imbalanced levels of inflammatory mediators and higher permeability [4].

**Table 1.** Summary of the main cytokines associated with development of DHF/DSS and their biological function in relation to pathogenesis.

| Cytokine | Biological function | Refs. |
| --- | --- | --- |
| MCP-1 | Monocyte chemoattractant protein-1 is critical to drive the extravasation of mononuclear cells into the inflamed, infected, and traumatized sites of infection. In addition, it promotes endothelial permeability, increasing the vascular leakage that results from dengue virus infection. | [47, 48] |
| IL-1 | Induces tissue factor (TF) expression in endothelial cells (ECs) and suppresses their cell surface anticoagulant activity. May upregulate TNF-α production and activity. IL-1β mediates platelet-induced activation of ECs, which increases chemokine release and upregulates VCAM-1, enhancing adhesion of monocytes to the endothelium. | [43, 49] |
| IL-6 | Described as a strong inducer of endothelial permeability resulting in vascular leakage. Potentiates the coagulation cascade and can downregulate production of TNF-α and its receptors. May act synergistically with pyrogens such as IL-1 to induce fever. | [50, 51] |
| IL-8 | Its systemic concentrations are increased by EC damage, which in turn induces endothelial permeability. Activation of the coagulation system results in increased expression of IL-6 and IL-8 by monocytes, while the APC anticoagulation pathway downregulates the production of IL-8 by ECs. | [49, 50, 52] |
| IL-10 | Plays an immunosuppressive role that causes IFN resistance, followed by impaired immune clearance and a persistent infectious effect in acute viral infection. Also inhibits the expression of TF and inhibits fibrinolysis. IL-10 plasma levels have been associated with disease severity; however, its role in dengue pathogenesis has not been fully elucidated. | [53] |
| TNF-α | A potent activator of ECs that enhances capillary permeability. Upregulates expression of TF in monocytes and ECs and downregulates expression of thrombomodulin on ECs. Also activates the fibrinolytic system and enhances expression of NO, mediating activation-induced death of T cells; it has therefore been implicated in peripheral T-cell deletion. | [49, 51, 54] |
| TGF-β | Early in infection, low levels of TGF-β may trigger secretion of IL-1 and TNF-β. However, later in infection, the cytokine inhibits the Th1 response and enhances production of Th2 cytokines such as IL-10. Increases expression of TF on ECs and upregulates expression and release of PAI-1 (plasminogen activator inhibitor-1). | [3] |
| VEGF | A key driver of vascular permeability. Reduces EC occludins, claudins, and the VE-cadherin content, all of which are components of EC junctions. Upon activation, VEGF stimulates expression of ICAM-1, VCAM-1, and E-selectin in ECs. | [3, 36] |

This oversustained inflammatory response may be due to an impairment of the regulatory mechanisms that control the duration and intensity of inflammation or cytokine production, especially through the regulation of PRR signaling activation [20]. Several studies have shown that alterations in proinflammatory cytokine production during DENV infection/disease can be attributed to variations in recognition and activation of TLR signaling, which contributes to progression of the disease (Figure 1) [91, 92].
It was recently reported that DENV NS1 proteins may be recognized by TLR2, TLR4, and TLR6, enhancing the production of proinflammatory cytokines and triggering the endothelial permeability that leads to vascular leakage [93, 94]. Interestingly, our group has recently shown a differential expression of TLRs in dendritic cells (DCs) of dengue patients depending on the severity of the disease [95]. Indeed, there was an increased expression of TLR3 and TLR9 in DCs of patients with DF, in contrast to a poor stimulation of both receptors in DCs of patients with DHF. Conversely, a lower expression of TLR2 in DF patients compared to DHF patients was also observed. Additionally, IFN-α production was also altered via TLR9, suggesting that DENV may affect the type I IFN response through this signaling pathway [95]. Indeed, DENV has successfully evolved to overcome host immune responses by efficiently subverting the IFN pathway and inhibiting different steps of the immune response through the expression of viral nonstructural proteins that antagonize several molecules of this activation pathway [96, 97]. Although DENV may evade immune recognition [42], cumulative data have shown that it is sensed by both TLR3 and TLR7/8 and activates signaling pathways upregulating IFN-α/β, TNF-α, human defensin 5 (HD5), and human β defensin 2 (HβD2) [39–41]. In addition, RIG-I and MDA-5 are also activated upon DENV infection and are essential for host defense against the virus [40]. Moreover, TLR3 controls DENV2 replication through NF-κB activation, suggesting that TLR3 agonists such as Poly(I:C) (polyinosinic:polycytidylic acid) might work as immunomodulators of DENV infection [39]. Furthermore, besides DENV recognition and binding, C-type lectins such as the mannose receptor (MR) and CLEC5A may contribute to the inflammatory responses [98–100].
CLEC5A plays a critical role in the induction of NLRP3 inflammasome activation during DENV infection and enhances the release of IL-18 and IL-1β that are critical for activation of Th17 helper cells [99, 101]. While innate immune activation and proinflammatory cytokine production are being investigated during the course of DENV infections [53, 92, 102], vitamin D activity has gained special attention due to its importance in the modulation of the innate response. An increasing number of reports suggest that vitamin D activity is associated with the modulation of components implicated in antiviral immune responses and in the regulation of proinflammatory cytokine production through the modulation of miR expression [6, 13, 15, 103]. Although there is little information from observational studies and clinical trials demonstrating the role of vitamin D during dengue virus infection, here we postulate a potential role of vitamin D controlling progression of dengue disease and provide evidence of some vitamin D molecular mechanisms in support of our hypothesis.

## 3. Vitamin D: Antiviral and Anti-Inflammatory Activity

In addition to its well-known role in bone mineralization and calcium homeostasis, vitamin D is recognized as a pluripotent regulator of biological and immune functions [104]. A growing body of evidence suggests that it plays a major role during the immune system’s response to microbial infection, thereby becoming a potential intervener to control viral infections and inflammation [13, 105, 106]. The term vitamin D refers collectively to the active form 1α-25-dihydroxyvitamin D3 [1α-25(OH)2D3] and the inactive form 25-hydroxyvitamin D3 [25(OH)D3] [107]. For their transport within the serum, vitamin D compounds bind to the vitamin D binding protein (DBP), and this complex is recognized by megalin and cubilin (members of the low-density lipoprotein receptor family) that then internalize the complex by invagination [108].
Intracellular trafficking of vitamin D metabolites to specific destinations is performed by members of the HSP70 (Heat Shock Protein 70) family [104]. Vitamin D metabolites are also lipophilic molecules that can easily penetrate cell membranes and translocate to the nucleus, where 1α-25(OH)2D3 binds to the vitamin D receptor (VDR), thereby inducing heterodimerization of VDR with an isoform of the retinoid X receptor (RXR) [109]. The VDR-RXR heterodimer binds to vitamin D response elements (VDRE) present in the promoters of hundreds of target genes, whose products play key roles in cellular metabolism, bone mineralization, cell growth, differentiation, and control of inflammation (Figure 1) [104, 110, 111]. Besides VDR, other related vitamin D metabolic components, such as the hydroxylase CYP27B1, the enzyme that catalyzes the synthesis of active 1α-25-dihydroxyvitamin D3 from 25-hydroxyvitamin D3, are present and induced in some cells of the immune system during immune responses [112]. Thus, an increasing number of studies have explored the relationship between vitamin D activity and the immune system, specifically the mechanisms whereby vitamin D exerts its antimicrobial and immunoregulatory activity [14, 113, 114]. Here, we highlight those modulating antiviral and inflammatory responses.

Although controversial data have been reported, increasing clinical and observational studies have provided evidence supporting the protective features of vitamin D in viral infections, especially viral respiratory infections and HIV [13, 115, 116]. The activity of vitamin D in the innate immune system begins at the forefront of the body’s defense against pathogens, the skin.
Regardless of global serum vitamin D levels, sensing of microbial pathogens via PRRs induces upregulation of CYP27B1 and, as a consequence, local conversion of 25(OH)D3 into 1,25(OH)2D3, enhancing VDR nuclear translocation and subsequent transcription of target genes to exert antimicrobial effects [113, 117–119]. This establishes a linkage between vitamin D status and the intracrine and paracrine modulation of cellular immune responses, in which VDR and CYP27B1 activity are of central importance [117, 118, 120]. Indeed, this link is also evidenced by studies in which pathogen susceptibility associated with vitamin D deficiency/insufficiency is reduced by appropriate supplementation [121, 122]. Furthermore, some vitamin D-induced antiviral mechanisms have been shown by preliminary reports (Table 2). Peptides such as cathelicidins are strongly upregulated by 1,25(OH)2D3 because their genes carry VDR response elements. In humans, active cathelicidin is known as LL-37 and has a C-terminal cationic antimicrobial domain that can induce bacterial membrane disruption and inhibit replication of herpes simplex virus, influenza virus, and retroviruses, among others [55–57]. In fact, very recent reports have suggested an association between vitamin D and the LL-37 antiviral activity against HIV and rhinovirus [58, 59]. Likewise, HBD-2 is also induced by 1,25(OH)2D3. Interestingly, a correlation between VDR and HBD-2 was found to be associated with natural resistance to HIV infection, suggesting the potential participation of vitamin D-induced resistance to the virus [60, 106]. Moreover, vitamin D can also induce production of reactive oxygen species (ROS), which is associated with suppression of the replicative activity of some viruses, such as hepatitis C virus (HCV) [61].
Although the vitamin D-induced antiviral mechanisms are not fully elucidated and further studies are needed to fully understand their roles, many are plausible given the pleiotropic nature of vitamin D and the complex transcriptional modulation of hundreds of genes controlled by its activity.

**Table 2.** Vitamin D-induced mechanisms/mediators associated with antiviral activity.

| Mediator/mechanism | Virus | Refs. |
| --- | --- | --- |
| Cathelicidin (LL-37) | HSV, influenza virus, HIV, retroviruses | [55–59] |
| HBD-2 | HIV | [60] |
| ROS | HCV | [61] |
| IFN response | HIV, HCV | [62–64] |
| Autophagy | HIV | [65, 66] |
| miR let-7 | DENV | [67, 68] |

Several studies have reported a link between VDR polymorphisms and severe outcomes of bronchiolitis and acute lower respiratory tract infections (RTIs) with respiratory syncytial virus (RSV) [105]. Indeed, vitamin D supplementation in children is associated with reduced RTIs, in line with vitamin D status and serum concentrations [123]. Likewise, some vitamin D supplementation studies have reported a reduction in colds/influenza, an effect linked to seasonal sunlight exposure and skin pigmentation [124]. In HIV infection, associations have also been reported between vitamin D levels and progression of the disease, survival times of HIV patients, CD4+ T cell counts, inflammatory responses, and the potential impact of HAART (Highly Active Anti-Retroviral Therapy) treatments [125]. Finally, similar population and ecoepidemiological reports have implicated vitamin D in several viral infections, including DENV and other flaviviruses [10–13], highlighting not only inhibition of viral replication but also control of the inflammatory response and of progression of the disease.

In addition to viral control, vitamin D-induced immune mechanisms have important effects, providing potential feedback modulation in pathways that regulate immune activation and avoiding excessive elaboration of inflammatory responses and its potential risk for tissue homeostasis (Table 3) [5, 6, 126].
TLRs can both affect and be affected by VDR signaling, and likewise some antimicrobial peptides associated with TLRs have demonstrated antiviral effects [6, 13, 127]. In this sense, and due to the interest in the modulatory effect of vitamin D on TLR expression and proinflammatory cytokine production, some authors have shown that vitamin D can induce hyporesponsiveness to PAMPs (Pathogen-Associated Molecular Patterns) by downregulating the expression of TLR2 and TLR4 on monocytes, which in turn has been associated with impaired production of TNF-α, suggesting a critical role of vitamin D in regulating TLR-driven inflammation [71]. Importantly, a link between the DENV NS1 protein and activation of the inflammatory response via TLR2 and TLR4, impacting the progression of the disease, has very recently been described [93, 128]. DENV NS1 antigens may induce the activation of TLR2 and TLR4, inducing high secretion of proinflammatory mediators that enhance endothelial dysfunction and permeability [46, 94, 129, 130]. Interestingly, it was reported that 1,25(OH)2D3 significantly reduces the levels of TLR2/TLR4 expression and of proinflammatory cytokines (TNF-α, IL-6, IL-12p70, and IL-1β) produced by U937 cells after exposure to DENV [72]. The same approach used in primary human monocytes and macrophages led to similar results, consistent with data obtained in our laboratory [19]. It has been suggested that vitamin D may regulate proinflammatory cytokine levels by targeting TLR activation signaling molecules (Figure 1). Indeed, it has been reported that treatment of monocytes with 1,25(OH)2D3 regulates TLR expression via the NF-κB pathway and reduces signaling of the mitogen-activated protein kinases MAPK/p38 and p42/44 [19].
One of the most critical steps in NF-κB regulation is IKK- (IκB Kinase-) mediated proteasomal degradation of IκBα, which allows nuclear entry of the NF-κB heterodimer p65/p50 to transactivate inflammatory gene expression; blocking this step therefore reduces expression of inflammatory genes. Accordingly, a novel molecular mechanism has recently been described in which 1,25(OH)2D3 binding to VDR attenuates NF-κB activation by directly interacting with the IKKβ protein to block its activity and, consequently, the NF-κB-dependent inflammatory response [76]. Besides TLR2 and TLR4, it has been shown that vitamin D can also downregulate intracellular TLR9 expression and, subsequently, lead to less secretion of IL-6 in response to TLR9 stimulation [77]. Although intracellular downregulation of some PRRs such as TLR3, TLR7/8, and RIG-I/MDA5 may affect the potential antiviral response induced by type I IFN, various reports have shown that vitamin D treatment does not affect the type I IFN-induced antiviral response against various viruses [69, 131, 132]. In fact, it has been reported that porcine rotavirus (PRV) infection induces CYP27B1-dependent generation of 1,25(OH)2D3, which leads to an increased expression of TLR3 and RIG-I that consequently enhances the type I IFN-dependent antiviral response [76].

**Table 3.** Vitamin D and miR targets associated with the inflammatory response.

| Target/mediator | Modulator | Refs. |
| --- | --- | --- |
| TLR2/4 | Vitamin D/miR-155, miR-146 | [20, 69, 70] |
| TNF-α | Vitamin D/miR-146 | [70, 71] |
| IL-1β | Vitamin D/miR-155 | [19, 69] |
| IL-6 | Vitamin D/let-7e | [72, 73] |
| MAPK | Vitamin D | [19] |
| NF-κB | Vitamin D/miR-155, miR-146 | [20, 70, 74, 75] |
| IKK | Vitamin D | [76] |
| SOCS1 | Vitamin D/miR-155 | [20] |
| TLR9 | Vitamin D | [77] |

### 3.1. Vitamin D and miRs: Potential Implications for Inflammation Balance

Although vitamin D may impact distinct pathways and molecules to modulate inflammatory responses, current evidence suggests TLRs and TLR signaling mediators as the main targets by which vitamin D modulates inflammation (Table 3) [6, 113, 133, 134].
However, a novel regulatory vitamin D mechanism in which TLR signaling/activation and miR function are associated has recently been documented, suggesting a crucial role of vitamin D and miRs in host immune system homeostasis [15, 135, 136]. The participation of miRs as general regulatory mechanisms of initiation, propagation, and resolution of immune responses has been widely reviewed elsewhere [21, 137, 138]. Therefore, we discuss here their potential relationship with vitamin D activity in the control of inflammatory responses, attempting to extrapolate these findings to DENV infection.

The ability of vitamin D to regulate miRs and their emerging relationship have been proposed by means of several experimental and clinical approaches; however, the implications of their impact on inflammatory responses have only been studied in in vitro models [15, 20, 135, 136, 139]. In patient trials with vitamin D supplementation, significant differences in miR expression profiles have been reported, suggesting that dietary vitamin D may also globally regulate miR levels [15]. Although several mechanisms may be involved in regulating such a global effect, some authors have found that chromatin states may be altered by VDR activity, determining accessibility for transcription factor binding and thereby regulating activation or inhibition of transcription [140, 141]. This in turn could be of relevance for canonical VDR-VDRE-mediated transcription regulation. In fact, VDR-induced regulation of miRs via VDRE has been demonstrated for some miRs such as miR-182 and let-7a, whose pri-miRs (primary miRs) have multiple VDR/RXR binding sites, suggesting that these miRs could potentially be regulated by vitamin D metabolites [67, 142]. Moreover, a negative feedback loop between some miRNAs and VDR signaling has been reported. This is the case of miR-125b, whose overexpression can reduce VDR/RXR protein levels.
Since miR-125b is commonly downregulated in cancer cells, it has been proposed that such a decrease in miR-125b may result in the upregulation of VDR and in increased antitumor effects driven by vitamin D in cancer cell models [136].

Additionally, it has been reported that VDR signaling may attenuate TLR-mediated inflammation by enhancing a negative feedback inhibition mechanism (Figure 1). A recent report has shown that VDR inactivation leads to a hyperinflammatory response in LPS-stimulated mouse macrophages through overproduction of miR-155, which in turn downregulates the suppressor of cytokine signaling (SOCS) family of proteins, key components of the negative feedback loop regulating the intensity, duration, and quality of cytokine signaling [2, 143, 144]. As feedback inhibitors of inflammation, SOCS proteins are upregulated by inflammatory cytokines, and, in turn, they block cytokine signaling by targeting the JAK/STAT (Janus Kinase/Signal Transducer and Activator of Transcription) pathway [2]. Evidence suggests that SOCS inhibits the proinflammatory pathways of cytokines such as TNF-α, IL-6, and IFN-γ and can inhibit the LPS-induced inflammatory response by directly blocking TLR4 signaling through targeting of the IL-1R-associated kinases (IRAK) 1 and 4 [20, 144]. Consequently, deletion of miR-155 attenuates the 1,25(OH)2D3 suppression of LPS-induced inflammation, confirming that vitamin D stimulates SOCS1 by downregulating miR-155 [20]. Taken together, these results highlight the importance of the VDR pathways controlling the inflammatory response by modulating miR-155-SOCS1 interactions. Finally, an additional reinforcing issue that may validate the link between vitamin D activity and miRs is the fact that 1,25(OH)2D3 deficiency has been related to reduced leukotriene synthetic capacity in macrophages [145, 146].
Recently, it was reported that leukotriene B4 (LTB4) can upregulate macrophage MyD88 (Myeloid Differentiation primary response-88) expression by decreasing SOCS-1 stability, which is associated with the expression of proinflammatory miRs, such as miR-155, miR-146b, and miR-125b, and with TLR4 activation in macrophages [147]. miR-146 has also been shown to modulate inflammatory responses mediated by TLR4/NF-κB and TNF-α [70]. Importantly, this miR has been found downregulated in patients with autoimmune disorders in which low levels of vitamin D have also been reported [148, 149]. These results suggest that vitamin D can orchestrate miR diversity involved in TLR signaling, thereby regulating inflammatory responses and activation of immune responses.

## 4. Insights into Vitamin D and DENV Infection

Little is known about the link between DENV infection and vitamin D; however, since severe dengue is associated with imbalanced production of proinflammatory cytokines, it is very tempting to suggest that vitamin D could play an important role in modulating the inflammatory responses during ongoing DENV infections. Although only a few studies illustrate a link between vitamin D activity and DENV infection or disease, these reports have provided preliminary epidemiological evidence supporting this novel hypothesis. Initially, it was reported that heterozygosity in the VDR gene was correlated with progression of dengue. It was shown in a small Vietnamese population where dengue is endemic that the low frequency of a dimorphic (T/t) “t” allele in the VDR gene was associated with dengue disease severity, suggesting a protective role of VDR activity against dengue disease progression [12]. Variations in VDR have also been associated with susceptibility to osteoporosis in humans and with reduced risk of tuberculosis and persistent hepatitis B virus infections [150–152], highlighting the importance of VDR variations in signaling and immune protection. Accordingly, a study revealed the association of the “T” allele with DHF, by showing that the “T” allele codes for a longer VDR that is the least active form of the receptor. Since vitamin D is known to suppress TNF-α, it is possible that such inappropriate VDR signaling may contribute to higher levels of inflammation, enhancing susceptibility to severe disease [10]. Although the modulatory effect of vitamin D during DENV infection and disease has not been widely tested in human populations, initial studies have associated the effect of oral 25(OH)D3 supplementation with antiviral responses, resistance, and recovery from the disease.
Specifically, a study reported the case of five DF patients whose signs and symptoms of the disease were ameliorated, improving the overall clinical condition and reducing the risk of disease progression [11]. Interestingly, this may be linked to other clinical approaches in which oral supplementation with vitamin D enhanced the antiviral response to HCV [63], another RNA virus also belonging to the family Flaviviridae.

The potential antiviral mechanism of vitamin D against DENV has not yet been fully explored; however, certain reports support the proposal that vitamin D could exert anti-DENV effects and immunoregulatory functions on innate immune responses [10–12]. In line with this, the effect of vitamin D treatment of human monocytic cell lines on DENV infection was recently reported [72]. The authors showed that cell exposure to 1,25(OH)2D3 resulted in a significant reduction of DENV-infected cells, a variable modulation of TLR2 and TLR4, and reduced levels of secreted proinflammatory cytokines such as TNF-α, IL-6, and IL-1β after infection [72]. The molecular mechanisms by which vitamin D can elicit an antiviral and anti-inflammatory role towards DENV have not been fully described, and although we observed that monocyte-derived macrophages differentiated in the presence of 1,25(OH)2D3 are less susceptible to DENV infection and express lower levels of mannose receptor, restricting binding of DENV to target cells (manuscript in preparation), further studies are required to confirm that vitamin D treatment confers both anti-inflammatory and antiviral responses. Another interesting mechanism that could support the antiviral activity of vitamin D is the VDR-induced regulation of miRs via VDRE. This has been demonstrated for some miRs, such as let-7a (Table 2), whose pri-miR has multiple VDR/RXR binding sites that could potentially be regulated by vitamin D [67, 142].
miR let-7a belongs to a highly conserved family of miRs that contains other miRs previously reported to inhibit DENV replicative activity, such as let-7c [68]. Besides the members of the let-7 family, other miRs have also been associated with suppression of DENV infection and the inflammatory responses against the virus, as discussed below.

### 4.1. MicroRNAs in DENV Infection

Viruses strictly depend on cellular mechanisms for their replication; therefore, there is an obligatory interaction between the virus and the host RNA silencing machinery. Although virus-derived small interfering RNAs may induce changes in cellular mRNA and miR expression profiles to induce replication, cellular miRs can also target viral sequences or induce antiviral protein expression to inhibit viral replication and translation [153]. Indeed, during DENV infection, several cellular miRs have been reported to have an effect on the replicative activity of the virus and the permissiveness of the host cells. Although some host miRs can also enhance DENV replication [81, 154], here we highlight the miRs affecting DENV replicative activity and modulating the immune response (Table 4).

**Table 4.** Summary of miRs regulating the DENV-induced inflammatory response and viral replicative activity.

| miRNA | Target | Cell line | Refs. |
| --- | --- | --- | --- |
| let-7e | 3′-UTR of IL-6 | Human peripheral blood mononuclear cells | [73] |
| let-7c | HO-1 protein and the transcription factor BACH1 | Huh-7 human hepatic cell line | [68] |
| miR-252 | DENV envelope E protein | Aedes albopictus C6/36 cell line | [78] |
| miR-30e∗ | IκBα in DENV-permissive cells and IFN-β production | Peripheral blood mononuclear cells and U937 and HeLa cell lines | [79] |
| miR-150 | 3′-UTR of SOCS-1 | Peripheral blood mononuclear cells and monocytes | [80] |
| miR-122 | 3′-UTR of the DENV genome/mRNA | BHK-21, HepG2, and Huh-7 cell lines | [81] |
| miR-142 | 3′-UTR of the DENV genome/mRNA | Human dendritic cells and macrophages | [82] |
| miR-133a | 3′-UTR of PTB; 3′-UTR of the DENV genome/mRNA | Mouse C2C12 cells and Vero cells | [83, 84] |
| miR-548 | 5′-UTR SLA (Stem Loop A) of DENV | U937 monocytes/macrophages | [85] |
| miR-223 | Microtubule destabilizing protein stathmin 1 (STMN-1) | EA.hy926 endothelial cell line | [86] |

The expression levels of different miRs regulated during DENV infection have been screened in the hepatic cell line Huh-7. This approach identified miR let-7c as a key regulator of the viral replicative cycle that affects viral replication and the oxidative stress immune response through the protein Heme Oxygenase-1 (HO-1) and its transcription factor BACH1 (Basic Leucine Zipper Transcription Factor-1) [68]. In addition, it was recently reported that, after DENV-2 infection of the C6/36 cell line, endogenous miR-252 is highly induced and associated with a decreased level of viral RNA copies. This antiviral effect was explained by the fact that miR-252 targets the DENV-2 E protein gene sequence, downregulating its expression and therefore acting as an antiviral regulator [78]. Although DENV can escape the immune system by decreasing the production of type I IFN due to DENV NS5 and NS4B activity [42, 97], DENV infection also induces the upregulation of the cellular miR-30e∗, which suppresses DENV replication by increasing IFN-β production.
This antiviral effect of miR-30e∗ depends mainly on NF-κB activation through targeting of the NF-κB inhibitor IκBα in DENV-permissive cells [79]. This antiviral effect induced by type I IFN signaling is also promoted by miR-155, which has been reported to control virus-induced immune responses in models of infection with other members of the Flaviviridae family such as HCV [155–157]. In this latter model, the antiviral effect greatly depended on miR-155 targeting SOCS-1. This observation is in accordance with a study in which elevated expression of miR-150 in patients with DHF was correlated with suppression of SOCS-1 expression in monocytes [80], which in turn could be linked to the fact that vitamin D controls inflammatory responses through modulation of SOCS by downregulating miR-155 [20].

Although it has remained unclear whether endogenous miRs can interfere with viral replicative activity by targeting DENV sequences or viral mRNAs, some experimental approaches have shown the importance of miRs in restricting viral replication through this mechanism [85, 158–160]. Some artificial miRs (amiRs) have been described as targeting highly conserved regions of the DENV-2 genome and promoting efficient inhibition of virus replication [158]. Using DENV subgenomic replicons carrying the specific miR recognition element (MRE) for miR-122 in the 3′-UTR of the DENV genome/mRNA, some authors have shown that the liver-specific miR-122 suppresses translation and replication of DENV by targeting this MRE sequence [81]. Likewise, the insertion of the MRE for the hematopoietic-specific miR-142 into the DENV-2 genome restricts replication of the virus in DCs and macrophages, highlighting the importance of this hematopoietic miR in dissemination of the virus [82].
In addition, DENV replication is enhanced by the interaction of the viral genome 3′-UTR with the host polypyrimidine tract binding (PTB) protein, which translocates from the nucleus to the cytoplasm, facilitating DENV replication [36, 161, 162]. However, the PTB mRNA 3′-UTR contains MREs that can be targeted by miR-133a, providing a mechanism for downregulation of PTB protein expression levels [163]. Moreover, our group found that miR-133a has target sites in the 3′-UTR sequence of the four DENV serotypes and that overexpression of miR-133a in Vero cells was associated with decreased DENV-2 replication activity [84]. All these data suggest a possible antiviral mechanism via miR-133a targeting the PTB protein mRNA and the DENV 3′-UTR sequence. Furthermore, we also showed that miR-744 and miR-484 can downregulate DENV replication by targeting the 3′-UTR of the DENV RNA genome [Betancur et al., submitted]. In addition, the cellular miR-548g-3p has been identified as displaying antiviral activity by targeting the 5′-UTR SLA (Stem Loop A) promoter of the four DENV serotypes, thus repressing viral replication and expression of viral proteins independently of interferon signaling [85]. Moreover, overexpression of miR-223 inhibited replication of DENV in an endothelial cell-like cell line. The authors showed that miR-223 inhibits DENV by negatively regulating the microtubule destabilizing protein stathmin 1 (STMN-1), which is crucial for reorganization of microtubules and later replication of the virus. In addition, this study identified the transcription factors C/EBP-α and EIF2 as regulators of miR-223 expression after DENV infection [86].

Although little is known regarding the variations in miR expression in DENV-infected individuals, a recent study described the expression profile of miRs in blood samples of DENV-infected patients.
The authors reported 12 miRs that were specifically altered upon acute dengue and 17 miRs that could potentially be associated with specific dengue-related complications [164]. In addition, another profiling study reported abundance changes in the expression of some miRs in DENV-infected peripheral blood monocytes. Importantly, let-7e was among the most significantly regulated miRs and, besides anti-DENV activity, may be of crucial importance for the modulation of inflammatory responses. Specifically, let-7e shares matching sequences with the 3′-UTR of the mRNAs of IL-6 and CCL3, as well as of other cytokines, highlighting a key role of miRs in immune response homeostasis during DENV infection (Figure 1) [67, 73, 86]. Likewise, miR-223, which also displays antiviral activity against DENV, has been shown to have an important effect on the inflammatory response by regulating IL-1β and IL-6 through IKKα and MKP-5 [86, 165, 166], underscoring its potential contribution to the control of DENV pathogenesis. Since a link between vitamin D and miR expression has been established, but no reports have discussed their combined implications for the DENV antiviral and inflammatory response, we hypothesize here a vitamin D and miR interplay that could modulate DENV pathogenesis, opening new horizons in the therapeutic field of dengue disease.

## 4.1. MicroRNAs in DENV Infection

Viruses strictly depend on cellular mechanisms for their replication; therefore, there is an obligatory interaction between the virus and the host RNA silencing machinery. Although virus-derived small interfering RNAs may induce changes in cellular mRNA and miR expression profiles that favor replication, cellular miRs can also target viral sequences or induce antiviral protein expression to inhibit viral replication and translation [153]. Indeed, during DENV infection, several cellular miRs have been reported to affect the replicative activity of the virus and the permissiveness of the host cells.
## 5. Concluding Remarks and Future Perspectives

Severe dengue disease symptoms and DENV infection are characterized by overproduction of proinflammatory cytokines driven mainly by activation of several PRRs [29].
Here, we hypothesize that vitamin D may contribute to preventing DENV infection and disease progression, especially through the modulation of miRs/TLRs that enhances antiviral activity and regulates the inflammatory response. Although vitamin D’s antiviral mechanism has not been fully elucidated, it may be linked to vitamin D’s ability to control the permissiveness of DENV target cells and the virus-induced proinflammatory responses [72]. However, a better understanding of these mechanisms is required to provide useful clues regarding DENV pathogenesis and dengue disease treatment. Certainly, epidemiological and experimental evidence describes an overall positive vitamin D-related immune effect in which increased levels of vitamin D and variants of the VDR are associated with reduced viral replication, decreased risk of infection, lower disease severity, and better outcomes of dengue symptoms [9–12, 72]. Additionally, the emerging relationships between vitamin D, the TLR signaling pathway, and its regulation by miRs are beginning to gain critical importance in infectious diseases. Indeed, as discussed above, several DENV infection studies have started to illustrate these vitamin D regulatory features, which could be key mechanisms for the control of virus replication and homeostasis of the inflammatory response, making this hormone a special candidate for therapeutic strategies [127]. Although most studies have focused on the effects of vitamin D on dendritic cells and macrophages, others have described the same immunoregulatory effects on other cell populations of the immune system, such as CD8+ T cells, NK cells, and B cells [167–169], suggesting an impact not only on DENV target cells but also on cells associated with virus clearance.
All the data discussed here suggest that vitamin D could constitute a strong potential strategy to modulate the “cytokine storm” that occurs during ongoing DENV infections and the progression to severe states of the disease. Although it is important to note that such a global effect on inflammatory activity could weaken the host response to other opportunistic pathogens, it has been suggested that while vitamin D may reduce inflammatory markers during viral infections, it also exerts protective effects against coinfections with other opportunistic pathogens [14, 106]. Moreover, its clinical effectiveness has been tested, improving the overall physical condition of DENV patients and reducing the progression of the disease [11]. Although further supplementation trials are required to fully elucidate the therapeutic relevance of vitamin D, this hormone may be an excellent candidate as a natural immune-regulatory agent capable of modulating the innate immune response against DENV, which will provide crucial information for understanding and designing strategies to treat and control progression of dengue disease. Although further experimental studies are required to deepen the understanding of vitamin D in the regulation of inflammation and the antiviral response against DENV infection, the information discussed above highlights vitamin D immune regulation as an exciting research field and as an efficient, low-cost therapeutic approach against DENV and possibly other viral infections.

---

*Source: 1016840-2016-05-11.xml*
2016
# Homogenization of Parabolic Equations with an Arbitrary Number of Scales in Both Space and Time

**Authors:** Liselott Flodén; Anders Holmbom; Marianne Olsson Lindberg; Jens Persson
**Journal:** Journal of Applied Mathematics (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101685

---

## Abstract

The main contribution of this paper is the homogenization of the linear parabolic equation ∂ t u ɛ ( x , t ) - ∇ · ( a ( x / ε q 1 , … , x / ε q n , t / ε r 1 , … , t / ε r m ) ∇ u ɛ ( x , t ) ) = f ( x , t ) exhibiting an arbitrary finite number of both spatial and temporal scales. We briefly recall some fundamentals of multiscale convergence and provide a characterization of multiscale limits for gradients, in an evolution setting adapted to a quite general class of well-separated scales, which we call jointly well-separated scales (see the appendix for the proof). We proceed with a weaker version of this concept called very weak multiscale convergence and prove a compactness result with respect to this latter type for jointly well-separated scales. This is a key result for performing the homogenization of parabolic problems combining rapid spatial and temporal oscillations such as the problem above. Applying this compactness result together with the characterization of multiscale limits of sequences of gradients, we carry out the homogenization procedure, in which, together with the homogenized problem, we obtain n local problems, that is, one for each spatial microscale. To illustrate the use of the obtained result, we apply it to a case with three spatial and three temporal scales with q 1 = 1, q 2 = 2, and 0 < r 1 < r 2.

---

## Body

## 1. Introduction

In this paper, we study the homogenization of (1) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x / ε q 1 , … , x / ε q n , t / ε r 1 , … , t / ε r m ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m. Here Ω T = Ω × ( 0 , T ), where Ω is an open bounded subset of ℝ N with smooth boundary, and a is periodic with respect to the unit cube Y = ( 0,1 ) N in ℝ N in the first n variables and with respect to the unit interval S = ( 0,1 ) in the remaining m variables. The homogenization of (1) consists in studying the asymptotic behavior of the solutions u ɛ as ɛ tends to zero and finding the limit equation which admits the limit u of this sequence as its unique solution. The main contribution of this paper is the proof of a homogenization result for (1), that is, for parabolic problems with an arbitrary finite number of scales in both space and time.

Parabolic problems with rapid oscillations in one spatial and one temporal scale were investigated already in [1] using asymptotic expansions. Techniques of two-scale convergence type (see, e.g., [2–4]) for this kind of problem were first introduced in [5]. One of the main contributions of [5] is a compactness result for a more restricted class of test functions compared with usual two-scale convergence, which plays a key role in the homogenization procedure. In [6], a similar result for an arbitrary number of well-separated spatial scales is proven, and the type of convergence in question is formalized under the name of very weak multiscale convergence.

A number of recent papers address various kinds of parabolic homogenization problems applying techniques related to those introduced in [5]. Reference [7] treats a monotone parabolic problem with the same choices of scales as in [5] in the more general setting of Σ-convergence.
In [8], the case with two fast temporal scales is treated, with one of them identical to a single fast spatial scale. These results with the same choice of scales are extended to a more general class of differential operators in [9], and in [10], the two fast spatial scales are fixed to be ε₁ = ɛ, ε₂ = ɛ², while only one fast temporal scale appears. Significant progress was made in [11], where the case with an arbitrary number of temporal scales is treated and none of them has to coincide with the single fast spatial scale. A first study of parabolic problems where the numbers of fast spatial and temporal scales both exceed one is found in [12], where the fast spatial scales are ε₁ = ɛ, ε₂ = ɛ² and the rapid temporal scales are chosen as ε₁′ = ɛ², ε₂′ = ɛ⁴, and ε₃′ = ɛ⁵. Similar techniques have also recently been applied to hyperbolic problems: in [13] the two fast spatial scales are well separated and the fast temporal scale coincides with the slower of the fast spatial scales, and in [14] the set of scales is the same as in [8, 9]. Clearly, all of these previous results include strong restrictions on the choices of scales. Our aim here is to provide a unified approach with the choices of scales in the examples above as special cases. The homogenization procedure for (1) covers arbitrary numbers of spatial and temporal scales and any reasonable choice of the exponents q 1 , … , q n and r 1 , … , r m defining the fast spatial and temporal scales, respectively. The key to this is the result on very weak multiscale convergence proved in Theorem 7, which adapts the original concept in [6] to the appropriate evolution setting. Let us note that the techniques used for the proof of the special case with ε₁ = ɛ, ε₂ = ɛ² in [10] do not apply to the case with arbitrary numbers of scales studied here.

The present paper is organized as follows.
In Section 2 we briefly recall the concepts of multiscale convergence and evolution multiscale convergence and give a characterization of gradients with respect to this latter type of convergence under a certain well-separatedness assumption. In Section 3 we consider very weak multiscale convergence in the evolution setting and give the key compactness result employed in the homogenization of (1), which is carried out in Section 4. In this final section, we also illustrate how this general homogenization result can be used by applying it to the particular case governed by a ( x / ɛ , x / ɛ² , t / ε r 1 , t / ε r 2 ) where 0 < r 1 < r 2.

Notation. F ♯ ( Y ) is the space of all functions in F loc ( ℝ N ) that are Y-periodic repetitions of some function in F ( Y ). We denote Y k = Y for k = 1 , … , n, Y n = Y 1 × ⋯ × Y n, y n = y 1 , … , y n, d y n = d y 1 … d y n, S j = S for j = 1 , … , m, S m = S 1 × ⋯ × S m, s m = s 1 , … , s m, d s m = d s 1 … d s m, and 𝒴 n , m = Y n × S m. Moreover, we let ε k ( ɛ ), k = 1 , … , n, and ε j ′ ( ɛ ), j = 1 , … , m, be strictly positive functions such that ε k ( ɛ ) and ε j ′ ( ɛ ) go to zero when ɛ does. More explanations of standard notation for homogenization theory are found in [15].

## 2. Multiscale Convergence

Our approach to the homogenization procedure in Section 4 is based on the two-scale convergence method, first introduced in [2] and generalized to include several scales in [16]. Following [16], we say that a sequence { u ɛ } in L 2 ( Ω ) (n + 1)-scale converges to u 0 ∈ L 2 ( Ω × Y n ) if (2) ∫ Ω u ɛ ( x ) v ( x , x / ε 1 , … , x / ε n ) d x ⟶ ∫ Ω ∫ Y n u 0 ( x , y n ) v ( x , y n ) d y n d x for any v ∈ L 2 ( Ω ; C ♯ ( Y n ) ), and we write (3) u ɛ ( x ) ⇀ n + 1 u 0 ( x , y n ). This type of convergence can be adapted to the evolution setting; see, for example, [12]. We give the following definition of evolution multiscale convergence.

Definition 1. A sequence { u ɛ } in L 2 ( Ω T ) is said to (n + 1 , m + 1)-scale converge to u 0 ∈ L 2 ( Ω T × 𝒴 n , m ) if (4) ∫ Ω T u ɛ ( x , t ) v ( x , t , x / ε 1 , … , x / ε n , t / ε 1 ′ , … , t / ε m ′ ) d x d t ⟶ ∫ Ω T ∫ 𝒴 n , m u 0 ( x , t , y n , s m ) v ( x , t , y n , s m ) d y n d s m d x d t for any v ∈ L 2 ( Ω T ; C ♯ ( 𝒴 n , m ) ). We write (5) u ɛ ( x , t ) ⇀ n + 1 , m + 1 u 0 ( x , t , y n , s m ).

Normally, some assumptions are made on the relation between the scales. We say that the scales in a list { ε 1 , … , ε n } are separated if (6) lim ɛ → 0 ε k + 1 / ε k = 0 for k = 1 , … , n - 1 and that the scales are well-separated if there exists a positive integer l such that (7) lim ɛ → 0 ( 1 / ε k ) ( ε k + 1 / ε k ) l = 0 for k = 1 , … , n - 1. We also need the concept in the following definition.

Definition 2. Let { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } be lists of well-separated scales. Collect all elements from both lists in one common list. If, from possible duplicates (by duplicates we mean scales which tend to zero equally fast), one member of each such pair is removed and the list in order of magnitude of all the remaining elements is well-separated, the lists { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } are said to be jointly well-separated.

In the remark below, we give some further comments on the concept introduced in Definition 2.

Remark 3. Including the temporal scales alongside the spatial scales allows us to study a much richer class of homogenization problems, such as all the cases included in (1). For a more technically formulated definition and some examples, see Section 2.4 in [17]. Note that the lists { ε q 1 , … , ε q n } and { ε r 1 , … , ε r m } of spatial and temporal scales, respectively, in (1) are jointly well-separated for any choice of 0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m.

Below we provide a characterization of evolution multiscale limits for gradients, which will be used in the proof of the homogenization result in Section 4.
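For the power-law scales of Remark 3, ε k = ɛ^(q k) with 0 < q 1 < ⋯ < q n, the well-separatedness condition (7) reduces to an exponent inequality: ( 1 / ε k ) ( ε k + 1 / ε k ) l = ɛ^( l ( q k + 1 - q k ) - q k ), which tends to zero precisely when l > q k / ( q k + 1 - q k ). The following minimal sketch (our own illustration; the function name is hypothetical, not from the paper) computes the smallest integer l that verifies (7) for a whole list of exponents:

```python
import math

def min_l_for_well_separated(q):
    """Smallest positive integer l satisfying condition (7) for the
    power-law scales eps_k = eps**q[k], assuming 0 < q[0] < q[1] < ...

    For each consecutive pair, (1/eps_k) * (eps_{k+1}/eps_k)**l equals
    eps**(l*(q[k+1] - q[k]) - q[k]), which vanishes as eps -> 0
    exactly when l > q[k] / (q[k+1] - q[k])."""
    candidates = []
    for qk, qk1 in zip(q, q[1:]):
        # strict inequality, so step past the bound even when it is an integer
        candidates.append(math.floor(qk / (qk1 - qk)) + 1)
    return max(candidates)

print(min_l_for_well_separated([1, 2]))      # l = 2: eps**(2*1 - 1) -> 0
print(min_l_for_well_separated([1, 2, 5]))   # pair (1,2) needs l > 1; (2,5) needs l > 2/3
```

Since a single l must work for every consecutive pair in (7), the maximum over the pairs is returned; any list of distinct positive exponents therefore yields a finite l, which is the content of Remark 3.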
Here W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) is the space of all functions in L 2 ( 0 , T ; H 0 1 ( Ω ) ) such that the time derivative belongs to L 2 ( 0 , T ; H - 1 ( Ω ) ); see, for example, Chapter 23 in [18].

Theorem 4. Let { u ɛ } be a bounded sequence in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and suppose that the lists { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } are jointly well-separated. Then there exists a subsequence such that (8) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , (9) ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ), u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ) for j = 2 , … , n.

Proof. See Theorem 2.74 in [17] and the appendix of this paper.

## 3. Very Weak Multiscale Convergence

A first compactness result of very weak convergence type was presented in [5] for the purpose of homogenizing linear parabolic equations with fast oscillations in one spatial scale and one temporal scale. A compactness result for the case with oscillations in n well-separated spatial scales was proven in [6], where the notion of very weak convergence was introduced. It states that, for any bounded sequence { u ɛ } in H 0 1 ( Ω ) with the scales in the list { ε 1 , … , ε n } well-separated, it holds up to a subsequence that (10) ∫ Ω ( u ɛ ( x ) / ε n ) v ( x , x / ε 1 , … , x / ε n - 1 ) φ ( x / ε n ) d x ⟶ ∫ Ω ∫ Y n u n ( x , y n ) v ( x , y n - 1 ) φ ( y n ) d y n d x for any v ∈ D ( Ω ; C ♯ ∞ ( Y n - 1 ) ) and φ ∈ C ♯ ∞ ( Y n ) / ℝ, where u n is the same as in the right-hand side of (11) ∇ u ɛ ( x ) ⇀ n + 1 ∇ u ( x ) + ∑ j = 1 n ∇ y j u j ( x , y j ) , the original time-independent version of the gradient characterization in Theorem 4, found in [16].
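As a toy numerical check of (10) with n = 1 (our own construction, not from the paper), take Ω = ( 0 , 1 ) and u ɛ ( x ) = ɛ sin ( 2π x / ɛ ) with ɛ = 1 / k, so that u ɛ vanishes at both endpoints and ∇ u ɛ ( x ) = 2π cos ( 2π x / ɛ ) is bounded in L 2 ( Ω ); its corrector in (11) is u 1 ( x , y ) = sin ( 2π y ). With v ( x ) = x and the mean-zero test function φ ( y ) = sin ( 2π y ), the left-hand side of (10) becomes ∫ 0 1 x sin² ( 2π x / ɛ ) d x, and (10) predicts the limit ∫ 0 1 x d x · ∫ 0 1 sin² ( 2π y ) d y = 1/4:

```python
import numpy as np

# Toy check of the very weak convergence statement (10) for n = 1.
# Assumed example (not from the paper): u_eps(x) = eps * sin(2*pi*x/eps)
# on (0, 1) with eps = 1/k, v(x) = x, and the mean-zero phi(y) = sin(2*pi*y).
# Then (u_eps/eps) * v(x) * phi(x/eps) = x * sin(2*pi*x/eps)**2 and the
# integral in (10) should approach 1/2 * 1/2 = 1/4 as eps -> 0.

def lhs_of_10(k, n_points=2_000_000):
    eps = 1.0 / k
    # midpoint rule on a uniform grid, fine enough to resolve the scale eps
    x = (np.arange(n_points) + 0.5) / n_points
    integrand = x * np.sin(2 * np.pi * x / eps) ** 2
    return integrand.mean()  # equals the integral since |Omega| = 1

for k in (10, 100, 1000):
    print(k, lhs_of_10(k))   # approaches 0.25
```

Note that the quotient u ɛ / ε n itself merely oscillates without converging strongly; only the weighted integrals against the restricted class of test functions in (10) stabilize, which is exactly the point of the very weak mode of convergence.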
In Theorem 7 below, we present a generalized result including oscillations in time, with a view to homogenizing (1). First we define very weak evolution multiscale convergence.

Definition 5. We say that a sequence { g ɛ } in L 1 ( Ω T ) ( n + 1 , m + 1 )-scale converges very weakly to g 0 ∈ L 1 ( Ω T × 𝒴 n , m ) if (12) ∫ Ω T g ɛ ( x , t ) v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) φ ( x / ε n ) d x d t ⟶ ∫ Ω T ∫ 𝒴 n , m g 0 ( x , t , y n , s m ) v ( x , y n - 1 ) c ( t , s m ) φ ( y n ) d y n d s m d x d t for any v ∈ D ( Ω ; C ♯ ∞ ( Y n - 1 ) ), φ ∈ C ♯ ∞ ( Y n ) / ℝ and c ∈ D ( 0 , T ; C ♯ ∞ ( S m ) ). A unique limit is provided by requiring that (13) ∫ Y n g 0 ( x , t , y n , s m ) d y n = 0 . We write (14) g ɛ ( x , t ) ⇀ v w n + 1 , m + 1 g 0 ( x , t , y n , s m ).

The following proposition (see Theorem 3.3 in [16]) is needed for the proof of Theorem 7.

Proposition 6. Let v ∈ D ( Ω ; C ♯ ∞ ( Y n ) ) be a function such that (15) ∫ Y n v ( x , y n ) d y n = 0 , and assume that the scales in the list { ε 1 , … , ε n } are well-separated. Then { ε n - 1 v ( x , x / ε 1 , … , x / ε n ) } is bounded in H - 1 ( Ω ).

We are now ready to state the following theorem, which is essential for the homogenization of (1); see also Theorem 7 in [19] and Theorem 2.78 in [17].

Theorem 7. Let { u ɛ } be a bounded sequence in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and assume that the lists { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } are jointly well-separated. Then there exists a subsequence such that (16) u ɛ ( x , t ) / ε n ⇀ v w n + 1 , m + 1 u n ( x , t , y n , s m ) , where, for n = 1, u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and, for n = 2,3 , …, u n ∈ L 2 ( Ω T × 𝒴 n - 1 , m ; H ♯ 1 ( Y n ) / ℝ ) are the same as in Theorem 4.

Proof.
We want to prove that, for any v ∈ D ( Ω ; C ♯ ∞ ( Y n - 1 ) ), c ∈ D ( 0 , T ; C ♯ ∞ ( S m ) ) and φ ∈ C ♯ ∞ ( Y n ) / ℝ, (17) ∫ Ω T ( u ɛ ( x , t ) / ε n ) v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) φ ( x / ε n ) d x d t ⟶ ∫ Ω T ∫ 𝒴 n , m u n ( x , t , y n , s m ) v ( x , y n - 1 ) c ( t , s m ) φ ( y n ) d y n d s m d x d t for some suitable subsequence. First we note that any φ ∈ C ♯ ∞ ( Y n ) / ℝ can be expressed as (18) φ ( y n ) = Δ y n w ( y n ) = ∇ y n · ( ∇ y n w ( y n ) ) for some w ∈ C ♯ ∞ ( Y n ) / ℝ (see, e.g., Remark 3.2 in [7]). Furthermore, let (19) ψ ( y n ) = ∇ y n w ( y n ) and observe that (20) ∫ Y n ψ ( y n ) d y n = ∫ Y n ∇ y n w ( y n ) d y n = 0 because of the Y n-periodicity of w. By (18), the left-hand side of (17) can be expressed as (21) ∫ Ω T ( u ɛ ( x , t ) / ε n ) v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) ( ∇ y n · ψ ) ( x / ε n ) d x d t = ∫ Ω T u ɛ ( x , t ) v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) ∇ · ( ψ ( x / ε n ) ) d x d t . Integrating by parts with respect to x, we obtain (22) - ∫ Ω T ∇ u ɛ ( x , t ) · v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) ψ ( x / ε n ) + u ɛ ( x , t ) ∇ x v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) · ψ ( x / ε n ) + ∑ j = 1 n - 1 u ɛ ( x , t ) ε j - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) · ψ ( x / ε n ) d x d t . To begin with, we consider the first term. Passing to the multiscale limit using Theorem 4, we arrive, up to a subsequence, at (23) - ∫ Ω T ∫ 𝒴 n , m ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v ( x , y n - 1 ) c ( t , s m ) ψ ( y n ) d y n d s m d x d t , and due to (20) all but the last term vanish. We have (24) - ∫ Ω T ∫ 𝒴 n , m ∇ y n u n ( x , t , y n , s m ) · v ( x , y n - 1 ) c ( t , s m ) ψ ( y n ) d y n d s m d x d t .
Moreover, (8) means that the second term of (22) up to a subsequence approaches (25) - ∫ Ω T ∫ 𝒴 n , m u ( x , t ) ∇ x v ( x , y n - 1 ) c ( t , s m ) · ψ ( y n ) d y n d s m d x d t = - ∫ Ω T ∫ 𝒴 n - 1 , m u ( x , t ) ∇ x v ( x , y n - 1 ) c ( t , s m ) · ( ∫ Y n ψ ( y n ) d y n ) d y n - 1 d s m d x d t = 0 , where the last equality is a result of (20). It remains to investigate the last term of (22). We write (26) ∑ j = 1 n - 1 ∫ Ω T u ɛ ( x , t ) ε j - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) · ψ ( x / ε n ) d x d t = ∑ j = 1 n - 1 ( ε n / ε j ) ∫ Ω T u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) · ψ ( x / ε n ) d x d t . Clearly, { ε n - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) · ψ ( x / ε n ) } is bounded in H - 1 ( Ω ) for j = 1 , … , n - 1 by Proposition 6. Observing that { u ɛ } is assumed to be bounded in L 2 ( 0 , T ; H 0 1 ( Ω ) ), this means that, for any integer j ∈ [ 1 , n - 1 ], there are constants C 1 , C 2 , C 3 > 0 such that (27) ( ( ε n / ε j ) ∫ Ω T u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) · ψ ( x / ε n ) d x d t ) 2 ≤ C 1 ( ε n / ε j ) 2 ∫ 0 T ( ∫ Ω u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) · ψ ( x / ε n ) d x ) 2 d t ≤ C 1 ( ε n / ε j ) 2 ∫ 0 T ( ∥ u ɛ ( · , t ) c ( t , t / ε 1 ′ , … , t / ε m ′ ) ∥ H 0 1 ( Ω ) ∥ ε n - 1 ∇ y j v ( · , · / ε 1 , … , · / ε n - 1 ) · ψ ( · / ε n ) ∥ H - 1 ( Ω ) ) 2 d t ≤ C 2 ( ε n / ε j ) 2 ∫ 0 T ∥ u ɛ ( · , t ) ∥ H 0 1 ( Ω ) 2 d t = C 2 ( ε n / ε j ) 2 ∥ u ɛ ∥ L 2 ( 0 , T ; H 0 1 ( Ω ) ) 2 ≤ C 3 ( ε n / ε j ) 2 . Hence, all the terms in the sum (26) vanish as ɛ → 0 as a result of the separatedness of the scales. Then (24) is all that remains after passing to the limit in (22). Finally, integrating (24) by parts, we obtain (28) ∫ Ω T ∫ 𝒴 n , m u n ( x , t , y n , s m ) v ( x , y n - 1 ) c ( t , s m ) ∇ y n · ψ ( y n ) d y n d s m d x d t = ∫ Ω T ∫ 𝒴 n , m u n ( x , t , y n , s m ) v ( x , y n - 1 ) c ( t , s m ) φ ( y n ) d y n d s m d x d t , which is the right-hand side of (17).

Remark 8. The notion of very weak multiscale convergence is an alternative type of multiscale convergence. It is remarkable in the sense that it enables us to provide a compactness result of multiscale convergence type for sequences that are not bounded in any Lebesgue space. In fact, it deals with the normally forbidden situation of finding a limit for a quotient where the denominator goes to zero while the numerator does not. The price to pay for this is that we have to use a much smaller class of admissible test functions. Among the modes of multiscale convergence usually applied in homogenization, found in Definition 1 and Theorem 4, very weak multiscale convergence provides us with the missing link. As we will see in the homogenization procedure in the next section, Theorems 4 and 7 give us the cornerstones that allow us to tackle all the limit passages that appear in a unified way, by means of two distinct theorems and without ad hoc constructions. Moreover, Theorem 7 provides us with the appropriate upscaling to detect micro-oscillations in solutions of typical homogenization problems, which are usually of vanishing amplitude, while the global tendency is filtered away as a result of the choice of test functions. See [12].

## 4. Homogenization

We are now ready to give the main contribution of this paper, the homogenization of the linear parabolic problem (1).
The gradient characterization in Theorem 4 and the very weak compactness result from Theorem 7 are crucial for proving the homogenization result, which is presented in Section 4.1. An illustration of how this result can be used in practice is given in Section 4.2.

### 4.1. The General Case

We study the homogenization of the problem

(29) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω ,

where 0 < q 1 < ⋯ < q n, 0 < r 1 < ⋯ < r m, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ) and where we assume that

(A1) a ∈ C ♯ ( 𝒴 n , m ) N × N.
(A2) a ( y n , s m ) ξ · ξ ≥ α | ξ | 2 for all ( y n , s m ) ∈ ℝ n N × ℝ m, all ξ ∈ ℝ N and some α > 0.

Under these conditions, (29) admits a unique solution u ɛ ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and, for some positive constant C,

(30) ∥ u ɛ ∥ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) < C .

Given the scale exponents 0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m, we define some numbers in order to formulate the theorem below in a convenient way. We define d i (the number of temporal scales faster than the square of the spatial scale in question) and ρ i (indicating whether there is nonresonance or resonance), i = 1 , … , n, as follows.

(i) If 2 q i < r 1, then d i = m; if r j ≤ 2 q i < r j + 1 for some j = 1 , … , m - 1, then d i = m - j; and if 2 q i ≥ r m, then d i = 0.
(ii) If 2 q i = r j for some j = 1 , … , m, that is, if we have resonance, we let ρ i = 1; otherwise, ρ i = 0.

Note that, by the definition of d i, the index j in the definition of ρ i satisfies j = m - d i in the case of resonance.

Finally, we recall that the lists { ε q 1 , … , ε q n } and { ε r 1 , … , ε r m } are jointly well-separated.

Theorem 9. Let { u ɛ } be a sequence of solutions in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) to (29).
Then it holds that

(31) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ,

where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) is the unique solution to

(32) ∂ t u ( x , t ) - ∇ · ( b ( x , t ) ∇ u ( x , t ) ) = f ( x , t ) in Ω T , u ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ( x , 0 ) = u 0 ( x ) in Ω

with

(33) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 n , m a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) d y n d s m .

Here u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n, are the unique solutions to the system of local problems

(34) ρ i ∂ s m - d i u i ( x , t , y i , s m ) - ∇ y i · ∫ S m - d i + 1 ⋯ ∫ S m ∫ Y i + 1 ⋯ ∫ Y n a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) × d y n ⋯ d y i + 1 d s m ⋯ d s m - d i + 1 = 0 ,

for i = 1 , … , n, where u i is independent of s m - d i + 1 , … , s m.

Remark 10. In the case d i = 0, we naturally interpret the integration in (34) as if there is no local temporal integration involved and as if there is no independence of any local temporal variable.

Remark 11. Note that if, for example, u 1 is independent of s m, the function space that u 1 belongs to simplifies to u 1 ∈ L 2 ( Ω T × S m - 1 ; H ♯ 1 ( Y 1 ) / ℝ ), and when u 1 is also independent of s m - 1, we have u 1 ∈ L 2 ( Ω T × S m - 2 ; H ♯ 1 ( Y 1 ) / ℝ ), and so on.

Proof of Theorem 9.
Since { u ɛ } is bounded in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and the lists of scales are jointly well-separated, we can apply Theorem 4 and obtain that, up to a subsequence,

(35) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ,

where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ), u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ), and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n. To obtain the homogenized problem, we introduce the weak form

(36) ∫ Ω T - u ɛ ( x , t ) v ( x ) ∂ t c ( t ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∇ v ( x ) c ( t ) d x d t = ∫ Ω T f ( x , t ) v ( x ) c ( t ) d x d t

of (29), where v ∈ H 0 1 ( Ω ) and c ∈ D ( 0 , T ), and letting ɛ → 0, we get, using Theorem 4,

(37) ∫ Ω T - u ( x , t ) v ( x ) ∂ t c ( t ) + ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ v ( x ) c ( t ) d y n d s m d x d t = ∫ Ω T f ( x , t ) v ( x ) c ( t ) d x d t .

We proceed by deriving the system of local problems (34) and the independencies of the local temporal variables. Fix i = 1 , … , n and choose

(38) v ( x ) = ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) , p > 0 , c ( t ) = c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) , λ = 1 , … , m ,

with v 1 ∈ D ( Ω ), v j ∈ C ♯ ∞ ( Y j - 1 ) for j = 2 , … , i, v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ, c 1 ∈ D ( 0 , T ) and c l ∈ C ♯ ∞ ( S l - 1 ) for l = 2 , … , λ + 1. Here p and λ will be fixed later.
Using this choice of test functions in (36), we have

(39) ∫ Ω T - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ( ∂ t c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) + ∑ l = 2 λ + 1 ε - r l - 1 c 1 ( t ) × c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ( ε p ∇ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) + ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) × v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = ∫ Ω T f ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t ,

where, for l = 2 and l = λ + 1, the interpretation should be that the partial derivative acts on c 2 and c λ + 1, respectively, and where the j = 2 and j = i + 1 terms are defined analogously. We let ɛ → 0 and, using Theorem 4, we obtain

(40) lim ɛ → 0 ∫ Ω T - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ∑ l = 2 λ + 1 ε - r l - 1 c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 ,

and extracting a factor ε - q i in the first term, we get

(41) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) × ∑ l = 2 λ + 1 ε p + q i - r l - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 .
Suppose that p + q i - r λ ≥ 0 and p - q i ≥ 0 (which also guarantees that p > 0 as required above); then, by Theorems 7 and 4, we have left

(42) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε p + q i - r λ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε p - q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 ,

which is the point of departure for deriving the local problems and the independencies. We distinguish four different cases, where ρ i is either zero (nonresonance) or one (resonance) and d i is either zero or positive.

Case 1. Consider ρ i = 0 and d i = 0. We choose λ = m and p = q i. This means that p + q i - r λ = 2 q i - r m > 0, since d i = ρ i = 0, and p - q i = q i - q i = 0, which implies that (42) is valid. We get

(43) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 ,

where we let ɛ → 0 and obtain, by means of Theorems 7 and 4,

(44) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 .

By the variational lemma, we have

(45) ∫ Y i ⋯ ∫ Y n a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i = 0 ,

a.e. in Ω T × S m × Y 1 × ⋯ × Y i - 1 for all v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ and, by density, for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ.
This is the weak form of the local problem in this case. In what follows, Theorems 7 and 4, the variational lemma, and the density argument are used in a corresponding way.

Case 2. Consider ρ i = 1 and d i = 0. We again choose λ = m and p = q i. We then have p + q i - r λ = 2 q i - r m = 0, since d i = 0 and ρ i = 1, and p - q i = q i - q i = 0, which implies that we may again use (42). We get

(46) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0

and, passing to the limit,

(47) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 .

By the variational lemma,

(48) ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m ) v i + 1 ( y i ) ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) c m + 1 ( s m ) d y n ⋯ d y i d s m = 0

a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m + 1 ∈ C ♯ ∞ ( S m ), which is the weak form of the local problem in this second case.

Case 3. Consider ρ i = 0 and d i > 0. Let λ successively take the values m , … , m - d i + 1. Choose p = r λ - q i, which immediately yields p + q i - r λ = 0. Furthermore, p - q i = r λ - 2 q i > 0 by the restriction on λ and the definition of d i.
Thus we have from (42)

(49) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 .

We let ɛ tend to zero and obtain

(50) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 ,

and we have left

(51) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 ,

a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ). This means that u i is independent of s λ; thus, u i does not depend on s m - d i + 1 , … , s m. Next we choose p = q i and λ = m - d i. We have p + q i - r λ = 2 q i - r m - d i > 0 and p - q i = 0, and we may again use (42). We have

(52) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m - d i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 ,

where a passage to the limit yields

(53) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 ,

and finally

(54) ∫ S m - d i + 1 ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i d s m ⋯ d s m - d i + 1 = 0 ,

a.e.
for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ, which is the weak form of the local problem.

Case 4. Consider ρ i = 1 and d i > 0. Let λ successively take the values m , … , m - d i + 1. Choose p = r λ - q i, which directly implies p + q i - r λ = 0. Moreover, p - q i = r λ - 2 q i > 0 by the restriction on λ and the definitions of d i and ρ i. Hence, using (42), we obtain

(55) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 .

Passing to the limit, we get

(56) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 .

That is,

(57) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0

a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ), and hence u i is independent of s λ. Next we choose p = q i and λ = m - d i in (42). Thus we have p + q i - r λ = 2 q i - r m - d i = 0 and p - q i = 0, and we get

(58) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 .
We let ɛ go to zero, obtaining

(59) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m - d i ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 ,

and finally we arrive at

(60) ∫ S m - d i ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m - d i ) v i + 1 ( y i ) ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) c m - d i + 1 ( s m - d i ) d y n ⋯ d y i d s m ⋯ d s m - d i = 0

a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m - d i + 1 ∈ C ♯ ∞ ( S m - d i ), the weak form of the local problem.

Remark 12. The result above can be extended to any meaningful choice of jointly well-separated scales by means of the general compactness results in Theorems 4 and 7 and is hence not restricted to scales that are powers of ɛ; see, for example, [11] for the case with an arbitrary number of temporal scales but only one spatial microscale. To make the exposition clear, we have assumed linearity, but the result can be extended to monotone, not necessarily linear, problems using standard methods.

Remark 13. The well-posedness of the homogenized problem follows from G-convergence; see, for example, Sections 3 and 4 in [20]. See also Theorem 4.19 in [17] for an easily accessible description of the regularity of the G-limit b. The existence of solutions to the local problems follows from the fact that they appear as limits in appropriate convergence processes.
Concerning uniqueness, the coercivity of the elliptic part follows along the lines of the proof of Theorem 2.11 in [16], and for the local problems containing a derivative with respect to some local time scale, the general theory for linear parabolic equations applies; see, for example, Section 23 in [18]. Normally, multiscale homogenization results are formulated, as in Theorem 9, without separation of variables, and if we study slightly more general problems, for example those with monotone operators where the linearity has been relaxed, such separation is not possible. However, in Corollary 2.12 in [16], a technique similar to the separation of variables sometimes used for conventional homogenization problems is developed. There, one scale at a time is removed in an inductive process and the homogenized coefficient is computed. We believe that a similar procedure could be successful also for the type of problem studied here but would be quite technical.

### 4.2. Illustration of Theorem 9

To illustrate the use of Theorem 9, we apply it to the 3,3-scaled parabolic homogenization problem

(61) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ɛ , x ε 2 , t ε r 1 , t ε r 2 ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω ,

where 0 < r 1 < r 2, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ), and the structure conditions

(B1) a ∈ C ♯ ( 𝒴 2,2 ) N × N,
(B2) a ( y 2 , s 2 ) ξ · ξ ≥ α | ξ | 2 for all ( y 2 , s 2 ) ∈ ℝ 2 N × ℝ 2, all ξ ∈ ℝ N and some α > 0,

are satisfied. We note that the assumptions of Theorem 9 are satisfied in this case. Hence the convergence results in (31) hold and, for the homogenized matrix,

(62) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 2,2 a ( y 2 , s 2 ) ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) d y 2 d s 2 .
Furthermore, u 1 ∈ L 2 ( Ω T × S 2 ; H ♯ 1 ( Y 1 ) / ℝ ) and u 2 ∈ L 2 ( Ω T × 𝒴 1,2 ; H ♯ 1 ( Y 2 ) / ℝ ) are the unique solutions to the system of local problems

(63) ρ i ∂ s 2 - d i u i ( x , t , y i , s 2 ) - ∇ y i · ∫ S 2 - d i + 1 ⋯ ∫ S 2 ∫ Y i + 1 ⋯ ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) × d y 2 ⋯ d y i + 1 d s 2 ⋯ d s 2 - d i + 1 = 0

for i = 1,2, where u i is independent of s 2 - d i + 1 , … , s 2.

To find the local problems and the independencies explicitly, we need to identify which values of d i and ρ i to use. To find d i, we simply count the number of temporal scales faster than the square of the i th spatial scale for different choices of r 1 and r 2. Moreover, resonance ( ρ i = 1 ) occurs when the square of the i th spatial scale coincides with one of the temporal scales.

First we consider the slowest spatial scale; that is, we let i = 1. Note that 2 q 1 = 2. If 2 q 1 = 2 < r 1, then d 1 = 2; if r 1 ≤ 2 < r 2, then d 1 = 1; and if 2 ≥ r 2, then d 1 = 0. Regarding resonance, if r 1 = 2 or r 2 = 2, then ρ 1 = 1; otherwise, ρ 1 = 0. For lucidity, we present in Table 1 the values of r 1 and r 2 that give the different values of d 1 and ρ 1.

Table 1: d i and ρ i for i = 1.

| r 1 and r 2 relative to 2 q 1 = 2 | d 1 | ρ 1 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 2 | 0 | 0 |
| 0 < r 1 < r 2 = 2 | 0 | 1 |
| 0 < r 1 < 2 < r 2 | 1 | 0 |
| 2 = r 1 < r 2 | 1 | 1 |
| 2 < r 1 < r 2 | 2 | 0 |

In a similar way as above, we get Table 2 for i = 2.

Table 2: d i and ρ i for i = 2.

| r 1 and r 2 relative to 2 q 2 = 4 | d 2 | ρ 2 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 4 | 0 | 0 |
| 0 < r 1 < r 2 = 4 | 0 | 1 |
| 0 < r 1 < 4 < r 2 | 1 | 0 |
| 4 = r 1 < r 2 | 1 | 1 |
| 4 < r 1 < r 2 | 2 | 0 |

We start by sorting out the independencies of the local temporal variables. As noted, for i = 1,2, u i is independent of s 2 - d i + 1 , … , s 2, which means that if d i = 1, then u i is independent of s 2, and if d i = 2, then u i is independent of both s 1 and s 2.
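The two tables are simply the classification rule for d i and ρ i from Section 4.1 evaluated at 2 q 1 = 2 and 2 q 2 = 4, so their rows can be machine-checked. The following Python sketch is an illustrative aid, not part of the original analysis; the function name `classify` and the sample exponents are our own choices.

```python
def classify(two_q, temporal):
    """Given 2*q_i and the temporal exponents (r_1, ..., r_m), return
    (d_i, rho_i): d_i counts the temporal scales faster than the squared
    spatial scale (r_j > 2*q_i), and rho_i flags resonance (2*q_i = r_j)."""
    d = sum(1 for r in temporal if r > two_q)
    rho = 1 if two_q in temporal else 0
    return d, rho

# One representative (r1, r2) pair per row of Table 1 (2*q_1 = 2):
assert classify(2, (1.0, 1.5)) == (0, 0)   # 0 < r1 < r2 < 2
assert classify(2, (1.0, 2.0)) == (0, 1)   # 0 < r1 < r2 = 2
assert classify(2, (1.0, 3.0)) == (1, 0)   # 0 < r1 < 2 < r2
assert classify(2, (2.0, 3.0)) == (1, 1)   # 2 = r1 < r2
assert classify(2, (3.0, 5.0)) == (2, 0)   # 2 < r1 < r2

# ... and per row of Table 2 (2*q_2 = 4):
assert classify(4, (1.0, 3.0)) == (0, 0)   # 0 < r1 < r2 < 4
assert classify(4, (1.0, 4.0)) == (0, 1)   # 0 < r1 < r2 = 4
assert classify(4, (1.0, 5.0)) == (1, 0)   # 0 < r1 < 4 < r2
assert classify(4, (4.0, 5.0)) == (1, 1)   # 4 = r1 < r2
assert classify(4, (5.0, 6.0)) == (2, 0)   # 4 < r1 < r2
```

The same function applies verbatim to any number of spatial and temporal scales, since the definition of d i and ρ i in Section 4.1 is pointwise in i.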
In terms of r 1 and r 2, we have that for r 2 > 2, u 1 is independent of s 2, and for r 1 > 2 it is also independent of s 1; for r 2 > 4, u 2 is independent of s 2, and moreover, for r 1 > 4 it holds that u 2 is also independent of s 1.

To find the local problems, we examine all possible combinations of ( d 1 , ρ 1 ) and ( d 2 , ρ 2 ), of which 13 are realizable, depending on which values r 1 and r 2 may assume. Each row in the tables gives rise to a local problem via (63), so each combination gives two local problems. If a row occurs in several combinations, the same local problem reappears. If we start by choosing the first row in the second table, that is, ( d 2 , ρ 2 ) = ( 0,0 ), this can be combined with all five rows from the first table, which means that the local problem descending from ( d 2 , ρ 2 ) = ( 0,0 ) is common to these combinations. By (63), this common local problem is

(64) - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 .

If we combine ( d 2 , ρ 2 ) = ( 0,0 ) with ( d 1 , ρ 1 ) = ( 0,0 ), we have, in terms of r 1 and r 2, that 0 < r 1 < r 2 < 2. The other local problem in this case is

(65) - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 .

In combination with ( d 1 , ρ 1 ) = ( 0,1 ), that is, 0 < r 1 < r 2 = 2, we obtain instead

(66) ∂ s 2 u 1 - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 ,

and for ( d 1 , ρ 1 ) = ( 1,0 ), which means that 0 < r 1 < 2 < r 2 < 4, we have

(67) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 .
The fourth possible combination, with ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 1 = 2 < r 2 < 4, gives

(68) ∂ s 1 u 1 - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 ,

and finally, for ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < r 2 < 4, the second local problem is

(69) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 .

Next we consider ( d 2 , ρ 2 ) = ( 0,1 ) in Table 2, which corresponds to 0 < r 1 < r 2 = 4 and gives the local problem

(70) ∂ s 2 u 2 - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 .

Here we have three possible combinations, namely, with ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). We note that we have already derived the local problems corresponding to these rows. Thus, the second local problem for r 2 = 4 and 0 < r 1 < 2 is given by (67), for r 2 = 4 and r 1 = 2 by (68), and for 2 < r 1 < r 2 = 4 by (69).

We proceed by choosing ( d 2 , ρ 2 ) = ( 1,0 ) in Table 2, yielding

(71) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 .

The choice ( d 2 , ρ 2 ) = ( 1,0 ) can be combined with three different rows from Table 1, ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). In combination with ( d 1 , ρ 1 ) = ( 1,0 ), which means that r 2 > 4 and 0 < r 1 < 2, we have

(72) - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 ,

which is essentially the same as (67) but with the integration over S 2 acting directly on a ( y 2 , s 2 ), since both u 1 and u 2 are independent of s 2. For ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 2 > 4 and r 1 = 2, we have

(73) ∂ s 1 u 1 - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 ,

which is the same as (68), but where we may again integrate directly on a ( y 2 , s 2 ).
For the third possibility, ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < 4 < r 2, we get

(74) - ∇ y 1 · ∫ S 1 ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 1 = 0 ,

the same as (69) except for the position of the integration over S 2.

The next row in Table 2 to consider is ( d 2 , ρ 2 ) = ( 1,1 ), which can be combined only with ( d 1 , ρ 1 ) = ( 2,0 ). This combination corresponds to 4 = r 1 < r 2 and gives

(75) ∂ s 1 u 2 - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0

and again (74).

Finally, for the row ( d 2 , ρ 2 ) = ( 2,0 ) together with ( d 1 , ρ 1 ) = ( 2,0 ), that is, 4 < r 1 < r 2, we get

(76) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 , - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 ,

where the latter is essentially the same as (69) and (74).

Thus, having considered all possible combinations of r 1 and r 2, we have obtained 13 different cases, A–M in Figure 1, governed by two local problems each.

Figure 1: The 13 cases depicted in the r 1 r 2 -plane, in the order of appearance.

In the figure, cases B, D, F, H, J, and L (straight line segments) correspond to single resonance, whereas in case G (a single point) there is double resonance. In the remaining cases (open two-dimensional regions), there is no resonance.

Remark 14. Note that for a problem with fixed scales, finding the local problems is very straightforward. For example, if we study (61) with r 1 = 2 and r 2 = 17, we have m = 2, n = 2, d 1 = 1, ρ 1 = 1, d 2 = 1, and ρ 2 = 0. We obtain that both u 1 and u 2 are independent of s 2. Inserting d 1 = 1, ρ 1 = 1 in (34) immediately gives the problem (73), and d 2 = 1, ρ 2 = 0 results in (71). The example chosen above with variable time scale exponents reveals more of the applicability and comprehensiveness of the theorem.

Remark 15.
The problem (61) was studied already in [17, 19], but using Theorem 9, the process is considerably shortened. ## 4.1. The General Case We study the homogenization of the problem(29) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ⁡ ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < q 1 < ⋯ < q n,  0 < r 1 < ⋯ < r m, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ) and where we assume that(A1) a ∈ C ♯ ( 𝒴 n , m ) N × N. (A2) a ( y n , s m ) ξ · ξ ≥ α | ξ | 2 for all ( y n , s m ) ∈ ℝ n N × ℝ m, all ξ ∈ ℝ N and some α > 0.Under these conditions, (29) allows a unique solution u ɛ ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and for some positive constant C, (30) ∥ u ɛ ∥ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) < C .Given the scale exponents0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m, we may define some numbers in order to formulate the theorem below in a convenient way. We define d i (the number of temporal scales faster than the square of the spatial scale in question) and ρ i (indicates whether there is nonresonance or resonance), i = 1 , … , n, as follows.(i) If2 q i < r 1,then d i = m, if r j ≤ 2 q i < r j + 1 for some j = 1 , … , m - 1, then d i = m - j, and if 2 q i ≥ r m, then d i = 0. (ii) If2 q i = r j for some j = 1 , … , m, that is we have resonance, we let ρ i = 1; otherwise, ρ i = 0.Note that from the definition of d i we have in fact in the definition of ρ i that j = m - d i in the case of resonance.Finally, we recall that the lists{ ε q 1 , … , ε q n } and { ε r 1 , … , ε r m } are jointly well-separated.Theorem 9. Let{ u ɛ } be a sequence of solutions in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) to (29). 
Then it holds that (31) u ɛ ( x , t ) ⟶ u ( x , t ) i n L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) i n L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) is the unique solution to (32) ∂ t u ( x , t ) - ∇ · ( b ( x , t ) ∇ u ( x , t ) ) = f ( x , t ) i n Ω T , u ( x , t ) = 0 o n ∂ Ω × ( 0 , T ) , u ( x , 0 ) = u 0 ( x ) i n Ω with (33) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 n , m ‍ a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) ) d y n d s m . Here u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n, are the unique solutions to the system of local problems (34) ρ i ∂ s m - d i u i ( x , t , y i , s m ) - ∇ y i · ∫ S m - d i + 1 ‍ ⋯ ∫ S m ‍ ∫ Y i + 1 ‍ ⋯ ∫ Y n ‍ a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) ) × d y n ⋯ d y i + 1 d s m ⋯ d s m - d i + 1 = 0 , for i = 1 , … , n, where u i is independent of s m - d i + 1 , … , s m.Remark 10. In the cased i = 0, we naturally interpret the integration in (34) as if there is no local temporal integration involved and that there is no independence of any local temporal variable.Remark 11. Note that if,for example, u 1 is independent of s m the function space that u 1 belongs to simplifies to u 1 ∈ L 2 ( Ω T × S m - 1 ; H ♯ 1 ( Y 1 ) / ℝ ) and when u 1 is also independent of s m - 1, we have that u 1 ∈ L 2 ( Ω T × S m - 2 ; H ♯ 1 ( Y 1 ) / ℝ ) and so on.Proof of Theorem9. 
Since{ u ɛ } is bounded in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and the lists of scales are jointly well-separated, we can apply Theorem 4 and obtain that, up to a subsequence, (35) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ), u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ), and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n. To obtain the homogenized problem, we introduce the weak form(36) ∫ Ω T ‍ - u ɛ ( x , t ) v ( x ) ∂ t c ( t ) 222 + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∇ v ( x ) c ( t ) d x d t = ∫ Ω T ‍ f ( x , t ) v ( x ) c ( t ) d x d t of (29) where v ∈ H 0 1 ( Ω ) and c ∈ D ( 0 , T ), and letting ɛ → 0, we get using Theorem 4 (37) ∫ Ω T ‍ - u ( x , t ) v ( x ) ∂ t c ( t ) + ∫ 𝒴 n , m ‍ a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) ) 2222222 · ∇ v ( x ) c ( t ) d y n d s m d x d t = ∫ Ω T ‍ f ( x , t ) v ( x ) c ( t ) d x d t . We proceed by deriving the system of local problems (34) and the independencies of the local temporal variables. Fix i = 1 , … , n and choose (38) v ( x ) = ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) , p > 0 , c ( t ) = c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) , λ = 1 , … , m with v 1 ∈ D ( Ω ),  v j ∈ C ♯ ∞ ( Y j - 1 ) for j = 2 , … , i,  v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ,  c 1 ∈ D ( 0 , T ) and c l ∈ C ♯ ∞ ( S l - 1 ) for l = 2 , … , λ + 1. Here p and λ will be fixed later. 
Using this choice of test functions in (36), we have (39) ∫ Ω T ‍ - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ( ∂ t c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) + ∑ l = 2 λ + 1 ‍ ε - r l - 1 c 1 ( t ) × c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ( ε p ∇ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) + ∑ j = 2 i + 1 ‍ ε p - q j - 1 v 1 ( x ) × v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = ∫ Ω T ‍ f ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t , where, for l = 2 and l = λ + 1, the interpretation should be that the partial derivative acts on c 2 and c λ + 1, respectively, and where the j = 2 and j = i + 1 terms are defined analogously. We let ɛ → 0 and using Theorem 4, we obtain (40) lim ⁡ ɛ → 0 ∫ Ω T ‍ - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ∑ l = 2 λ + 1 ‍ ε - r l - 1 c 1 ( t ) c 2 ( t ε r 1 ) 222222222222 ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ‍ ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 , and extracting a factorε - q i in the first term, we get (41) lim ⁡ ɛ → 0 ∫ Ω T ‍ - ε - q i u ɛ ( x , t ) × ∑ l = 2 λ + 1 ‍ ε p + q i - r l - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) 22222222222 × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ‍ ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 × v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . 
Suppose that p + q i - r λ ≥ 0 and p - q i ≥ 0 (which also guarantees that p > 0 as required above); then, by Theorems 7 and 4, we have left (42) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε p + q i - r λ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε p - q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 , which is the point of departure for deriving the local problems and the independency. We distinguish four different cases where ρ i is either zero (nonresonance) or one (resonance) and d i is either zero or positive. Case 1. Consider ρ i = 0 and d i = 0. We choose λ = m and p = q i. This means that p + q i - r λ = 2 q i - r m > 0 since d i = ρ i = 0 and p - q i = q i - q i = 0. This implies that (42) is valid. We get (43) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 , where we let ɛ → 0 and obtain by means of Theorems 7 and 4 (44) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 . By the variational lemma, we have (45) ∫ Y i ⋯ ∫ Y n a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i = 0 , a.e. in Ω T × S m × Y 1 × ⋯ × Y i - 1 for all v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ and by density for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ.
This is the weak form of the local problem in this case. In what follows, Theorems 7 and 4, the variational lemma, and the density argument are used in a corresponding way. Case 2. Consider ρ i = 1 and d i = 0. We again choose λ = m and p = q i. We then have p + q i - r λ = 2 q i - r m = 0 since d i = 0 and ρ i = 1 and p - q i = q i - q i = 0, which implies that we may again use (42). We get (46) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 and, passing to the limit, (47) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 . By the variational lemma, (48) ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m ) v i + 1 ( y i ) ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) c m + 1 ( s m ) d y n ⋯ d y i d s m = 0 a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m + 1 ∈ C ♯ ∞ ( S m ), which is the weak form of the local problem in this second case. Case 3. Consider ρ i = 0 and d i > 0. Let λ be fixed and successively be m , … , m - d i + 1. Choose p = r λ - q i, which immediately yields that p + q i - r λ = 0. Furthermore, p - q i = r λ - 2 q i > 0 by the restriction of λ and the definition of d i.
Thus we have from (42) (49) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . We let ɛ tend to zero and obtain (50) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 and we have left (51) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 , a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ). This means that u i is independent of s λ; thus, u i does not depend on s m - d i + 1 , … , s m. Next we choose p = q i and λ = m - d i. We have p + q i - r λ = 2 q i - r m - d i > 0 and p - q i = 0 and we may again use (42). We have (52) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m - d i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 , where a passage to the limit yields (53) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 , and finally (54) ∫ S m - d i + 1 ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i d s m ⋯ d s m - d i + 1 = 0 , a.e.
for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ, which is the weak form of the local problem. Case 4. Consider ρ i = 1 and d i > 0. Let λ be fixed and successively be m , … , m - d i + 1. Choose p = r λ - q i, directly implying that p + q i - r λ = 0. Moreover, p - q i = r λ - 2 q i > 0 by the restriction of λ and the definition of d i and ρ i. Hence, using (42), we obtain (55) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . Passing to the limit, we get (56) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 . That is, (57) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ), and hence u i is independent of s λ. Next we choose p = q i and λ = m - d i in (42). Thus we have p + q i - r λ = 2 q i - r m - d i = 0 and p - q i = 0 and we get (58) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 .
We let ɛ go to zero, obtaining (59) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m - d i ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 and finally we arrive at (60) ∫ S m - d i ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m - d i ) v i + 1 ( y i ) ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) c m - d i + 1 ( s m - d i ) d y n ⋯ d y i d s m ⋯ d s m - d i = 0 a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m - d i + 1 ∈ C ♯ ∞ ( S m - d i ), the weak form of the local problem.

Remark 12. The result above can be extended to any meaningful choice of jointly well-separated scales by means of the general compactness results in Theorems 4 and 7 and is hence not restricted to scales that are powers of ɛ; see, for example, [11] for the case with an arbitrary number of temporal scales but only one spatial microscale. To make the exposition clear, we have assumed linearity, but the result can be extended to monotone, not necessarily linear, problems using standard methods.

Remark 13. The well-posedness of the homogenized problem follows from G-convergence; see, for example, Sections 3 and 4 in [20]. See also Theorem 4.19 in [17] for an easily accessible description of the regularity of the G-limit b. The existence of solutions to the local problems follows from the fact that they appear as limits in appropriate convergence processes.
Concerning uniqueness, the coercivity of the elliptic part follows along the lines of the proof of Theorem 2.11 in [16], and for the local problems containing a derivative with respect to some local time scale, the general theory for linear parabolic equations applies; see, for example, Section 23 in [18]. Normally, multiscale homogenization results are formulated as in Theorem 9 without separation of variables, and if we study slightly more general problems, for example, those with monotone operators where the linearity has been relaxed, such separation is not possible. However, in Corollary 2.12 in [16], a technique similar to the separation of variables sometimes used for conventional homogenization problems is developed. Here one scale at a time is removed in an inductive process and the homogenized coefficient is computed. We believe that a similar procedure could be successful also for the type of problem studied here but would be quite technical.

## 4.2. Illustration of Theorem 9

To illustrate the use of Theorem 9, we apply it to the 3,3-scaled parabolic homogenization problem (61) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ɛ , x ε 2 , t ε r 1 , t ε r 2 ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < r 1 < r 2, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ), and the structure conditions (B1) a ∈ C ♯ ( 𝒴 2,2 ) N × N and (B2) a ( y 2 , s 2 ) ξ · ξ ≥ α | ξ | 2 for all ( y 2 , s 2 ) ∈ ℝ 2 N × ℝ 2, all ξ ∈ ℝ N and some α > 0 are satisfied. We note that the assumptions of Theorem 9 are satisfied in this case. Hence the convergence results in (31) hold and, for the homogenized matrix, (62) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 2,2 a ( y 2 , s 2 ) ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) d y 2 d s 2 .
Furthermore, u 1 ∈ L 2 ( Ω T × S 2 ; H ♯ 1 ( Y 1 ) / ℝ ) and u 2 ∈ L 2 ( Ω T × 𝒴 1,2 ; H ♯ 1 ( Y 2 ) / ℝ ) are the unique solutions to the system of local problems (63) ρ i ∂ s 2 - d i u i ( x , t , y i , s 2 ) - ∇ y i · ∫ S 2 - d i + 1 ⋯ ∫ S 2 ∫ Y i + 1 ⋯ ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) d y 2 ⋯ d y i + 1 d s 2 ⋯ d s 2 - d i + 1 = 0 for i = 1,2, where u i is independent of s 2 - d i + 1 , … , s 2.

To find the local problems and the independencies explicitly, we need to identify which values of d i and ρ i to use. To find d i, we simply count the number of temporal scales faster than the square of the ith spatial scale for different choices of r 1 and r 2. Moreover, resonance (ρ i = 1) occurs when the square of the ith spatial scale coincides with one of the temporal scales. First we consider the slowest spatial scale; that is, we let i = 1. Note that 2 q 1 = 2. If 2 q 1 = 2 < r 1, then d 1 = 2; if r 1 ≤ 2 < r 2, then d 1 = 1; and if 2 ≥ r 2, then d 1 = 0. Regarding resonance, if r 1 = 2 or r 2 = 2, then ρ 1 = 1; otherwise, ρ 1 = 0. For lucidity, we present which values of r 1 and r 2 give the different values of d 1 and ρ 1 in Table 1.

Table 1: d 1 and ρ 1 for i = 1.

| r 1 and r 2 relative to 2 q 1 = 2 | d 1 | ρ 1 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 2 | 0 | 0 |
| 0 < r 1 < r 2 = 2 | 0 | 1 |
| 0 < r 1 < 2 < r 2 | 1 | 0 |
| 2 = r 1 < r 2 | 1 | 1 |
| 2 < r 1 < r 2 | 2 | 0 |

In a similar way as above, we get for i = 2 Table 2.

Table 2: d 2 and ρ 2 for i = 2.

| r 1 and r 2 relative to 2 q 2 = 4 | d 2 | ρ 2 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 4 | 0 | 0 |
| 0 < r 1 < r 2 = 4 | 0 | 1 |
| 0 < r 1 < 4 < r 2 | 1 | 0 |
| 4 = r 1 < r 2 | 1 | 1 |
| 4 < r 1 < r 2 | 2 | 0 |

We start by sorting out the independencies of the local temporal variables. As noted, for i = 1,2, u i is independent of s 2 - d i + 1 , … , s 2, which means that if d i = 1, then u i is independent of s 2 and if d i = 2, then u i is independent of both s 1 and s 2.
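The counting rule just described is purely mechanical, so it can be checked with a few lines of code. The sketch below is our illustration, not part of the paper: it encodes d i as the number of temporal exponents r l with r l > 2 q i and ρ i as the indicator of r l = 2 q i, and reproduces the rows of Tables 1 and 2 from one representative ( r 1 , r 2 ) pair per row.

```python
def classify(q_i, r):
    # d_i: number of temporal scales faster than the square of the
    # i-th spatial scale, i.e. exponents r_l with r_l > 2*q_i.
    d = sum(1 for r_l in r if r_l > 2 * q_i)
    # rho_i: resonance indicator, 1 exactly when some r_l equals 2*q_i.
    rho = int(any(r_l == 2 * q_i for r_l in r))
    return d, rho

# Rows of Table 1 (i = 1, 2*q_1 = 2) and Table 2 (i = 2, 2*q_2 = 4),
# one representative (r_1, r_2) per row:
table1 = [classify(1, r) for r in ([1, 1.5], [1, 2], [1, 3], [2, 3], [2.5, 3])]
table2 = [classify(2, r) for r in ([1, 3], [1, 4], [1, 5], [4, 5], [5, 6])]

# Representatives of the realizable regions in the r_1 r_2 plane;
# collecting the pairs ((d_1, rho_1), (d_2, rho_2)) yields 13 cases.
reps = [(1, 1.5), (1, 2), (1, 3), (2, 3), (2.5, 3), (1, 4), (2, 4),
        (3, 4), (1, 5), (2, 5), (3, 5), (4, 5), (5, 6)]
cases = {(classify(1, r), classify(2, r)) for r in reps}
```

Running this reproduces both tables row by row, and `cases` contains the distinct case combinations.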
In terms of r 1 and r 2, we have that for r 2 > 2, u 1 is independent of s 2 and for r 1 > 2 also independent of s 1; for r 2 > 4, u 2 is independent of s 2 and, moreover, for r 1 > 4 it holds that u 2 is also independent of s 1. To find the local problems, we examine all possible combinations of ( d 1 , ρ 1 ) and ( d 2 , ρ 2 ), of which 13 are realizable depending on which values r 1 and r 2 may assume. Each row in the tables gives rise to a local problem via (63). This means that each combination gives two local problems. If a row occurs in several combinations, the same local problem reappears. If we start by choosing the first row in the second table, that is, ( d 2 , ρ 2 ) = ( 0,0 ), this can be combined with all five rows from the first table, which means that the local problem descending from ( d 2 , ρ 2 ) = ( 0,0 ) is common to these combinations. By (63), this common local problem is (64) - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . If we combine ( d 2 , ρ 2 ) = ( 0,0 ) with ( d 1 , ρ 1 ) = ( 0,0 ), we have in terms of r 1 and r 2 that 0 < r 1 < r 2 < 2. The other local problem in this case is (65) - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 . In combination with ( d 1 , ρ 1 ) = ( 0,1 ), that is, 0 < r 1 < r 2 = 2, we obtain instead (66) ∂ s 2 u 1 - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , and for ( d 1 , ρ 1 ) = ( 1,0 ), which means that 0 < r 1 < 2 < r 2 < 4, we have (67) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 .
The fourth possible combination, that is, with ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 1 = 2 < r 2 < 4, gives (68) ∂ s 1 u 1 - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 and finally for ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < r 2 < 4, the second local problem is (69) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 . Next we consider ( d 2 , ρ 2 ) = ( 0,1 ) in Table 2, which corresponds to 0 < r 1 < r 2 = 4 and gives the local problem (70) ∂ s 2 u 2 - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . Here we have three possible combinations, namely, with ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). We note that we have already derived the local problems corresponding to these rows. Thus, the second local problem for r 2 = 4 and 0 < r 1 < 2 is given by (67), for r 2 = 4 and r 1 = 2 by (68), and for 2 < r 1 < r 2 = 4 by (69). We proceed by choosing ( d 2 , ρ 2 ) = ( 1,0 ) in Table 2, yielding (71) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . The choice ( d 2 , ρ 2 ) = ( 1,0 ) can be combined with three different rows from Table 1, ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). In combination with ( d 1 , ρ 1 ) = ( 1,0 ), which means that r 2 > 4 and 0 < r 1 < 2, we have (72) - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , which is essentially the same as (67) but with the integration over S 2 directly on a ( y 2 , s 2 ) since both u 1 and u 2 are independent of s 2. For ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 2 > 4 and r 1 = 2, we have (73) ∂ s 1 u 1 - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , which is the same as (68), but where we may integrate directly on a ( y 2 , s 2 ) in the same manner as above.
For the third possibility, ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < 4 < r 2, we get (74) - ∇ y 1 · ∫ S 1 ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 1 = 0 , the same as (69), except for the position of the integration over S 2. The next row in Table 2 to consider is ( d 2 , ρ 2 ) = ( 1,1 ), which can be combined only with ( d 1 , ρ 1 ) = ( 2,0 ). This combination corresponds to 4 = r 1 < r 2 and gives (75) ∂ s 1 u 2 - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 and again (74). Finally, for the row ( d 2 , ρ 2 ) = ( 2,0 ) together with ( d 1 , ρ 1 ) = ( 2,0 ), that is, 4 < r 1 < r 2, we get (76) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 , - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , where the latter is essentially the same as (69) and (74). Thus, having considered all possible combinations of r 1 and r 2, we have obtained 13 different cases, A–M in Figure 1, governed by two local problems each.

Figure 1: The 13 cases depicted in the r 1 r 2 plane in the order of appearance.

In the figure, cases B, D, F, H, J, and L (straight line segments) correspond to single resonance, whereas in case G (a single point), there is double resonance. In the remaining cases (open two-dimensional regions), there is no resonance.

Remark 14. Note that for a problem with fixed scales, finding the local problems is very straightforward. For example, if we study (61) with r 1 = 2 and r 2 = 17, we have m = 2, n = 2, d 1 = 1, ρ 1 = 1, d 2 = 1, and ρ 2 = 0. We obtain that both u 1 and u 2 are independent of s 2. Inserting d 1 = 1, ρ 1 = 1 in (34) immediately gives the problem (73) and d 2 = 1, ρ 2 = 0 results in (71). The example chosen above with variable time scale exponents reveals more of the applicability and comprehensiveness of the theorem.

Remark 15.
The problem (61) was studied already in [17, 19], but using Theorem 9, the process is considerably shortened.

---
# Homogenization of Parabolic Equations with an Arbitrary Number of Scales in Both Space and Time

**Authors:** Liselott Flodén; Anders Holmbom; Marianne Olsson Lindberg; Jens Persson

**Journal:** Journal of Applied Mathematics (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101685
---

## Abstract

The main contribution of this paper is the homogenization of the linear parabolic equation ∂ t u ε ( x , t ) - ∇ · ( a ( x / ε q 1 , . . . , x / ε q n , t / ε r 1 , . . . , t / ε r m ) ∇ u ε ( x , t ) ) = f ( x , t ) exhibiting an arbitrary finite number of both spatial and temporal scales. We briefly recall some fundamentals of multiscale convergence and provide a characterization of multiscale limits for gradients, in an evolution setting adapted to a quite general class of well-separated scales, which we name jointly well-separated scales (see the appendix for the proof). We proceed with a weaker version of this concept called very weak multiscale convergence. We prove a compactness result with respect to this latter type for jointly well-separated scales. This is a key result for performing the homogenization of parabolic problems combining rapid spatial and temporal oscillations such as the problem above. Applying this compactness result together with a characterization of multiscale limits of sequences of gradients, we carry out the homogenization procedure, where we together with the homogenized problem obtain n local problems, that is, one for each spatial microscale. To illustrate the use of the obtained result, we apply it to a case with three spatial and three temporal scales with q 1 = 1, q 2 = 2, and 0 < r 1 < r 2.

---

## Body

## 1. Introduction

In this paper, we study the homogenization of (1) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m. Here Ω T = Ω × ( 0 , T ), where Ω is an open bounded subset of ℝ N with smooth boundary and a is periodic with respect to the unit cube Y = ( 0,1 ) N in ℝ N in the first n variables and with respect to the unit interval S = ( 0,1 ) in the remaining m variables.
The homogenization of (1) consists in studying the asymptotic behavior of the solutions u ɛ as ɛ tends to zero and finding the limit equation which admits the limit u of this sequence as its unique solution. The main contribution of this paper is the proof of a homogenization result for (1), that is, for parabolic problems with an arbitrary finite number of scales in both space and time.Parabolic problems with rapid oscillations in one spatial and one temporal scale were investigated already in [1] using asymptotic expansions. Techniques of two-scale convergence type, see, for example, [2–4], for this kind of problems were first introduced in [5]. One of the main contributions in [5] is a compactness result for a more restricted class of test functions compared with usual two-scale convergence, which has a key role in the homogenization procedure. In [6], a similar result for an arbitrary number of well-separated spatial scales is proven and the type of convergence in question is formalized under the name of very weak multiscale convergence.A number of recent papers address various kinds of parabolic homogenization problems applying techniques related to those introduced in [5]. [7] treats a monotone parabolic problem with the same choices of scales as in [5] in the more general setting of Σ-convergence. In [8], the case with two fast temporal scales is treated with one of them identical to a single fast spatial scale. These results with the same choice of scales are extended to a more general class of differential operators in [9] and in [10], the two fast spatial scales are fixed to be ε 1 = ɛ, ε 2 = ε 2, while only one fast temporal scale appears. Significant progress was made in [11], where the case with an arbitrary number of temporal scales is treated and none of them has to coincide with the single fast spatial scale. 
A first study of parabolic problems where the number of fast spatial and temporal scales both exceeds one is found in [12], where the fast spatial scales are ε 1 = ɛ, ε 2 = ε 2 and the rapid temporal scales are chosen as ε 1 ′ = ε 2, ε 2 ′ = ε 4, and ε 3 ′ = ε 5. Similar techniques have also been recently applied to hyperbolic problems. In [13] the two fast spatial scales are well separated and the fast temporal scale coincides with the slower of the fast spatial scales and in [14] the set of scales is the same as in [8, 9]. Clearly all of these previous results include strong restrictions on the choices of scales. Our aim here is to provide a unified approach with the choices of scales in the examples above as special cases. The homogenization procedure for (1) covers arbitrary numbers of spatial and temporal scales and any reasonable choice of the exponents q 1 , … , q n and r 1 , … , r m defining the fast spatial and temporal scales, respectively. The key to this is the result on very weak multiscale convergence proved in Theorem 7 which adapts the original concept in [6] to the appropriate evolution setting. Let us note that techniques used for the proof of the special case with ε 1 = ɛ, ε 2 = ε 2 in [10] do not apply to the case with arbitrary numbers of scales studied here.The present paper is organized as follows. In Section2 we briefly recall the concepts of multiscale convergence and evolution multiscale convergence and give a characterization of gradients with respect to this latter type of convergence under a certain well-separatedness assumption. In Section 3 we consider very weak multiscale convergence in the evolution setting and give the key compactness result employed in the homogenization of (1), which is carried out in Section 4. In this final section, we also illustrate how this general homogenization result can be used by applying it to the particular case governed by a ( x / ɛ , x / ε 2 , t / ε r 1 , t / ε r 2 ) where 0 < r 1 < r 2.Notation. 
F ♯ ( Y ) is the space of all functions in F loc ( ℝ N ) that are Y-periodic repetitions of some function in F ( Y ). We denote Y k = Y for k = 1 , … , n, Y n = Y 1 × ⋯ × Y n, y n = y 1 , … , y n, d y n = d y 1 … d y n, S j = S for j = 1 , … , m, S m = S 1 × ⋯ × S m, s m = s 1 , … , s m, d s m = d s 1 … d s m, and 𝒴 n , m = Y n × S m. Moreover, we let ε k ( ɛ ), k = 1 , … , n, and ε j ′ ( ɛ ), j = 1 , … , m, be strictly positive functions such that ε k ( ɛ ) and ε j ′ ( ɛ ) go to zero when ɛ does. More explanations of standard notations for homogenization theory are found in [15].

## 2. Multiscale Convergence

Our approach for the homogenization procedure in Section 4 is based on the two-scale convergence method, first introduced in [2] and generalized to include several scales in [16]. Following [16], we say that a sequence { u ɛ } in L 2 ( Ω ) (n + 1)-scale converges to u 0 ∈ L 2 ( Ω × Y n ) if (2) ∫ Ω u ɛ ( x ) v ( x , x ε 1 , … , x ε n ) d x ⟶ ∫ Ω ∫ Y n u 0 ( x , y n ) v ( x , y n ) d y n d x for any v ∈ L 2 ( Ω ; C ♯ ( Y n ) ) and we write (3) u ɛ ( x ) ⇀ n + 1 u 0 ( x , y n ) . This type of convergence can be adapted to the evolution setting; see, for example, [12]. We give the following definition of evolution multiscale convergence.

Definition 1. A sequence { u ɛ } in L 2 ( Ω T ) is said to (n + 1 , m + 1)-scale converge to u 0 ∈ L 2 ( Ω T × 𝒴 n , m ) if (4) ∫ Ω T u ɛ ( x , t ) v ( x , t , x ε 1 , … , x ε n , t ε 1 ′ , … , t ε m ′ ) d x d t ⟶ ∫ Ω T ∫ 𝒴 n , m u 0 ( x , t , y n , s m ) v ( x , t , y n , s m ) d y n d s m d x d t for any v ∈ L 2 ( Ω T ; C ♯ ( 𝒴 n , m ) ). We write (5) u ɛ ( x , t ) ⇀ n + 1 , m + 1 u 0 ( x , t , y n , s m ) .

Normally, some assumptions are made on the relation between the scales.
We say that the scales in a list { ε 1 , … , ε n } are separated if (6) lim ɛ → 0 ε k + 1 ε k = 0 for k = 1 , … , n - 1 and that the scales are well-separated if there exists a positive integer l such that (7) lim ɛ → 0 1 ε k ( ε k + 1 ε k ) l = 0 for k = 1 , … , n - 1. We also need the concept in the following definition.

Definition 2. Let { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } be lists of well-separated scales. Collect all elements from both lists in one common list. If, from possible duplicates, where by duplicates we mean scales which tend to zero equally fast, one member of each such pair is removed and the list in order of magnitude of all the remaining elements is well-separated, the lists { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } are said to be jointly well-separated.

In the remark below, we give some further comments on the concept introduced in Definition 2.

Remark 3. To include also the temporal scales alongside the spatial scales allows us to study a much richer class of homogenization problems, such as all the cases included in (1). For a more technically formulated definition and some examples, see Section 2.4 in [17]. Note that the lists { ε q 1 , … , ε q n } and { ε r 1 , … , ε r m } of spatial and temporal scales, respectively, in (1) are jointly well-separated for any choice of 0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m.

Below we provide a characterization of evolution multiscale limits for gradients, which will be used in the proof of the homogenization result in Section 4. Here W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) is the space of all functions in L 2 ( 0 , T ; H 0 1 ( Ω ) ) such that the time derivative belongs to L 2 ( 0 , T ; H - 1 ( Ω ) ); see, for example, Chapter 23 in [18].

Theorem 4. Let { u ɛ } be a bounded sequence in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and suppose that the lists { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } are jointly well-separated.
Then there exists a subsequence such that (8) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , (9) ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ), u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ) for j = 2 , … , n.

Proof. See Theorem 2.74 in [17] and the appendix of this paper.

## 3. Very Weak Multiscale Convergence

A first compactness result of very weak convergence type was presented in [5] for the purpose of homogenizing linear parabolic equations with fast oscillations in one spatial scale and one temporal scale. A compactness result for the case with oscillations in n well-separated spatial scales was proven in [6], where the notion of very weak convergence was introduced. It states that, for any bounded sequence { u ɛ } in H 0 1 ( Ω ) and the scales in the list { ε 1 , … , ε n } well-separated, it holds up to a subsequence that (10) ∫ Ω u ɛ ( x ) ε n v ( x , x ε 1 , … , x ε n - 1 ) φ ( x ε n ) d x ⟶ ∫ Ω ∫ Y n u n ( x , y n ) v ( x , y n - 1 ) φ ( y n ) d y n d x for any v ∈ D ( Ω ; C ♯ ∞ ( Y n - 1 ) ) and φ ∈ C ♯ ∞ ( Y n ) / ℝ, where u n is the same as in the right-hand side of (11) ∇ u ɛ ( x ) ⇀ n + 1 ∇ u ( x ) + ∑ j = 1 n ∇ y j u j ( x , y j ) , the original time-independent version of the gradient characterization in Theorem 4 that is found in [16]. In Theorem 7 below we present a generalized result including oscillations in time with a view to homogenizing (1). First we define very weak evolution multiscale convergence.

Definition 5.
We say that a sequence { g ɛ } in L 1 ( Ω T )  ( n + 1 , m + 1 )-scale converges very weakly to g 0 ∈ L 1 ( Ω T × 𝒴 n , m ) if (12) ∫ Ω T g ɛ ( x , t ) v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) φ ( x ε n ) d x d t ⟶ ∫ Ω T ∫ 𝒴 n , m g 0 ( x , t , y n , s m ) v ( x , y n - 1 ) × c ( t , s m ) φ ( y n ) d y n d s m d x d t for any v ∈ D ( Ω ; C ♯ ∞ ( Y n - 1 ) ), φ ∈ C ♯ ∞ ( Y n ) / ℝ and c ∈ D ( 0 , T ; C ♯ ∞ ( S m ) ). A unique limit is provided by requiring that (13) ∫ Y n g 0 ( x , t , y n , s m ) d y n = 0 . We write (14) g ɛ ( x , t ) ⇀ v w n + 1 , m + 1 g 0 ( x , t , y n , s m ) .

The following proposition (see Theorem 3.3 in [16]) is needed for the proof of Theorem 7.

Proposition 6. Let v ∈ D ( Ω ; C ♯ ∞ ( Y n ) ) be a function such that (15) ∫ Y n v ( x , y n ) d y n = 0 , and assume that the scales in the list { ε 1 , … , ε n } are well-separated. Then { ε n - 1 v ( x , x / ε 1 , … , x / ε n ) } is bounded in H - 1 ( Ω ).

We are now ready to state the following theorem, which is essential for the homogenization of (1); see also Theorem 7 in [19] and Theorem 2.78 in [17].

Theorem 7. Let { u ɛ } be a bounded sequence in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and assume that the lists { ε 1 , … , ε n } and { ε 1 ′ , … , ε m ′ } are jointly well-separated. Then there exists a subsequence such that (16) u ɛ ( x , t ) ε n ⇀ v w n + 1 , m + 1 u n ( x , t , y n , s m ) , where, for n = 1, u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and, for n = 2,3 , …, u n ∈ L 2 ( Ω T × 𝒴 n - 1 , m ; H ♯ 1 ( Y n ) / ℝ ) are the same as in Theorem 4.

Proof. We want to prove that for any v ∈ D ( Ω ; C ♯ ∞ ( Y n - 1 ) ), c ∈ D ( 0 , T ; C ♯ ∞ ( S m ) ) and φ ∈ C ♯ ∞ ( Y n ) / ℝ, (17) ∫ Ω T u ɛ ( x , t ) ε n v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) φ ( x ε n ) d x d t ⟶ ∫ Ω T ∫ 𝒴 n , m u n ( x , t , y n , s m ) v ( x , y n - 1 ) × c ( t , s m ) φ ( y n ) d y n d s m d x d t for some suitable subsequence.
First we note that any φ ∈ C ♯ ∞ ( Y n ) / ℝ can be expressed as (18) φ ( y n ) = Δ y n w ( y n ) = ∇ y n · ( ∇ y n w ( y n ) ) for some w ∈ C ♯ ∞ ( Y n ) / ℝ (see, e.g., Remark 3.2 in [7]). Furthermore, let (19) ψ ( y n ) = ∇ y n w ( y n ) and observe that (20) ∫ Y n ψ ( y n ) d y n = ∫ Y n ∇ y n w ( y n ) d y n = 0 because of the Y n-periodicity of w. By (18), the left-hand side of (17) can be expressed as (21) ∫ Ω T u ɛ ( x , t ) ε n v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) ( ∇ y n · ψ ) ( x ε n ) d x d t = ∫ Ω T u ɛ ( x , t ) v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) ∇ · ( ψ ( x ε n ) ) d x d t . Integrating by parts with respect to x, we obtain (22) - ∫ Ω T ∇ u ɛ ( x , t ) · v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) ψ ( x ε n ) + u ɛ ( x , t ) ∇ x v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) + ∑ j = 1 n - 1 u ɛ ( x , t ) ε j - 1 ∇ y j v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) d x d t . To begin with, we consider the first term. Passing to the multiscale limit using Theorem 4, we arrive up to a subsequence at (23) - ∫ Ω T ∫ 𝒴 n , m ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v ( x , y n - 1 ) c ( t , s m ) ψ ( y n ) d y n d s m d x d t , and due to (20) all but the last term vanish. We have (24) - ∫ Ω T ∫ 𝒴 n , m ∇ y n u n ( x , t , y n , s m ) · v ( x , y n - 1 ) c ( t , s m ) ψ ( y n ) d y n d s m d x d t . Moreover, (8) means that the second term of (22) up to a subsequence approaches (25) - ∫ Ω T ∫ 𝒴 n , m u ( x , t ) ∇ x v ( x , y n - 1 ) c ( t , s m ) · ψ ( y n ) d y n d s m d x d t = - ∫ Ω T ∫ 𝒴 n - 1 , m u ( x , t ) ∇ x v ( x , y n - 1 ) c ( t , s m ) · ( ∫ Y n ψ ( y n ) d y n ) d y n - 1 d s m d x d t = 0 , where the last equality is a result of (20). It remains to investigate the last term of (22).
We write (26) ∑ j = 1 n - 1 ∫ Ω T u ɛ ( x , t ) ε j - 1 ∇ y j v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) d x d t = ∑ j = 1 n - 1 ε n ε j ∫ Ω T u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) d x d t . Clearly, { ε n - 1 ∇ y j v ( x , x / ε 1 , … , x / ε n - 1 ) · ψ ( x / ε n ) } is bounded in H - 1 ( Ω ) for j = 1 , … , n - 1 by Proposition 6. Observing that { u ɛ } is assumed to be bounded in L 2 ( 0 , T ; H 0 1 ( Ω ) ), this means that, for any integer j ∈ [ 1 , n - 1 ], there are constants C 1 , C 2 , C 3 > 0 such that (27) | ε n ε j ∫ Ω T u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) d x d t | 2 = ( ε n ε j ) 2 | ∫ Ω T u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) d x d t | 2 ≤ C 1 ( ε n ε j ) 2 ∫ 0 T ( ∫ Ω u ɛ ( x , t ) ε n - 1 ∇ y j v ( x , x ε 1 , … , x ε n - 1 ) × c ( t , t ε 1 ′ , … , t ε m ′ ) · ψ ( x ε n ) d x ) 2 d t ≤ C 1 ( ε n ε j ) 2 ∫ 0 T ( ∥ u ɛ ( · , t ) c ( t , t ε 1 ′ , … , t ε m ′ ) ∥ H 0 1 ( Ω ) ∥ ε n - 1 ∇ y j v ( · , · ε 1 , … , · ε n - 1 ) · ψ ( · ε n ) ∥ H - 1 ( Ω ) ) 2 d t ≤ C 2 ( ε n ε j ) 2 ∫ 0 T ∥ u ɛ ( · , t ) ∥ H 0 1 ( Ω ) 2 d t = C 2 ( ε n ε j ) 2 ∥ u ɛ ∥ L 2 ( 0 , T ; H 0 1 ( Ω ) ) 2 ≤ C 3 ( ε n ε j ) 2 . Hence, all the terms in the sum (26) vanish as ɛ → 0 as a result of the separatedness of the scales. Then (24) is all that remains after passing to the limit in (22).
Finally, integrating (24) by parts, we obtain (28) ∫ Ω T ∫ 𝒴 n , m u n ( x , t , y n , s m ) v ( x , y n - 1 ) c ( t , s m ) ∇ y n · ψ ( y n ) d y n d s m d x d t = ∫ Ω T ∫ 𝒴 n , m u n ( x , t , y n , s m ) v ( x , y n - 1 ) × c ( t , s m ) φ ( y n ) d y n d s m d x d t , which is the right-hand side of (17). Remark 8. The notion of very weak multiscale convergence is an alternative type of multiscale convergence. It is remarkable in the sense that it enables us to provide a compactness result of multiscale convergence type for sequences that are not bounded in any Lebesgue space. In fact, it deals with the normally forbidden situation of finding a limit for a quotient whose denominator goes to zero while the numerator does not. The price to pay for this is that we have to use a much smaller class of admissible test functions. Among the modes of multiscale convergence usually applied in homogenization, found in Definition 1 and Theorem 4, very weak multiscale convergence provides the missing link. As we will see in the next section, Theorems 4 and 7 are the cornerstones of a homogenization procedure that allows us to tackle all the appearing passages to the limit in a unified way, by means of two distinct theorems and without ad hoc constructions. Moreover, Theorem 7 provides an appropriate upscaling to detect micro-oscillations in solutions of typical homogenization problems; these are usually of vanishing amplitude, while the global tendency is filtered away as a result of the choice of test functions. See [12].

## 4. Homogenization

We are now ready to give the main contribution of this paper, the homogenization of the linear parabolic problem (1). The gradient characterization in Theorem 4 and the very weak compactness result from Theorem 7 are crucial for proving the homogenization result, which is presented in Section 4.1.
An illustration of how this result can be used in practice is given in Section 4.2.

### 4.1. The General Case

We study the homogenization of the problem (29) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < q 1 < ⋯ < q n, 0 < r 1 < ⋯ < r m, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ) and where we assume that (A1) a ∈ C ♯ ( 𝒴 n , m ) N × N. (A2) a ( y n , s m ) ξ · ξ ≥ α | ξ | 2 for all ( y n , s m ) ∈ ℝ n N × ℝ m, all ξ ∈ ℝ N and some α > 0. Under these conditions, (29) admits a unique solution u ɛ ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and, for some positive constant C, (30) ∥ u ɛ ∥ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) < C . Given the scale exponents 0 < q 1 < ⋯ < q n and 0 < r 1 < ⋯ < r m, we define some numbers in order to formulate the theorem below in a convenient way. We define d i (the number of temporal scales faster than the square of the spatial scale in question) and ρ i (indicating nonresonance or resonance), i = 1 , … , n, as follows. (i) If 2 q i < r 1, then d i = m; if r j ≤ 2 q i < r j + 1 for some j = 1 , … , m - 1, then d i = m - j; and if 2 q i ≥ r m, then d i = 0. (ii) If 2 q i = r j for some j = 1 , … , m, that is, if we have resonance, we let ρ i = 1; otherwise, ρ i = 0. Note that, by the definition of d i, the resonance index j in the definition of ρ i satisfies j = m - d i. Finally, we recall that the lists { ε q 1 , … , ε q n } and { ε r 1 , … , ε r m } are jointly well-separated. Theorem 9. Let { u ɛ } be a sequence of solutions in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) to (29).
Then it holds that (31) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) is the unique solution to (32) ∂ t u ( x , t ) - ∇ · ( b ( x , t ) ∇ u ( x , t ) ) = f ( x , t ) in Ω T , u ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ( x , 0 ) = u 0 ( x ) in Ω with (33) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 n , m a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) d y n d s m . Here u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n, are the unique solutions to the system of local problems (34) ρ i ∂ s m - d i u i ( x , t , y i , s m ) - ∇ y i · ∫ S m - d i + 1 ⋯ ∫ S m ∫ Y i + 1 ⋯ ∫ Y n a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) × d y n ⋯ d y i + 1 d s m ⋯ d s m - d i + 1 = 0 , for i = 1 , … , n, where u i is independent of s m - d i + 1 , … , s m. Remark 10. In the case d i = 0, we naturally interpret the integration in (34) as involving no local temporal integration and no independence of any local temporal variable. Remark 11. Note that if, for example, u 1 is independent of s m, the function space that u 1 belongs to simplifies to u 1 ∈ L 2 ( Ω T × S m - 1 ; H ♯ 1 ( Y 1 ) / ℝ ); when u 1 is also independent of s m - 1, we have u 1 ∈ L 2 ( Ω T × S m - 2 ; H ♯ 1 ( Y 1 ) / ℝ ), and so on. Proof of Theorem 9.
Since { u ɛ } is bounded in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and the lists of scales are jointly well-separated, we can apply Theorem 4 and obtain that, up to a subsequence, (35) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ), u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ), and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n. To obtain the homogenized problem, we introduce the weak form (36) ∫ Ω T - u ɛ ( x , t ) v ( x ) ∂ t c ( t ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∇ v ( x ) c ( t ) d x d t = ∫ Ω T f ( x , t ) v ( x ) c ( t ) d x d t of (29), where v ∈ H 0 1 ( Ω ) and c ∈ D ( 0 , T ), and letting ɛ → 0, we get using Theorem 4 (37) ∫ Ω T - u ( x , t ) v ( x ) ∂ t c ( t ) + ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ v ( x ) c ( t ) d y n d s m d x d t = ∫ Ω T f ( x , t ) v ( x ) c ( t ) d x d t . We proceed by deriving the system of local problems (34) and the independencies of the local temporal variables. Fix i = 1 , … , n and choose (38) v ( x ) = ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) , p > 0 , c ( t ) = c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) , λ = 1 , … , m with v 1 ∈ D ( Ω ), v j ∈ C ♯ ∞ ( Y j - 1 ) for j = 2 , … , i, v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ, c 1 ∈ D ( 0 , T ) and c l ∈ C ♯ ∞ ( S l - 1 ) for l = 2 , … , λ + 1. Here p and λ will be fixed later.
Using this choice of test functions in (36), we have (39) ∫ Ω T - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ( ∂ t c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) + ∑ l = 2 λ + 1 ε - r l - 1 c 1 ( t ) × c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ( ε p ∇ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) + ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) × v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = ∫ Ω T f ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t , where, for l = 2 and l = λ + 1, the interpretation should be that the partial derivative acts on c 2 and c λ + 1, respectively, and where the j = 2 and j = i + 1 terms are defined analogously. We let ɛ → 0 and, using Theorem 4, we obtain (40) lim ɛ → 0 ∫ Ω T - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ∑ l = 2 λ + 1 ε - r l - 1 c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 , and extracting a factor ε - q i in the first term, we get (41) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) × ∑ l = 2 λ + 1 ε p + q i - r l - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 .
Suppose that p + q i - r λ ≥ 0 and p - q i ≥ 0 (which also guarantees that p > 0 as required above); then, by Theorems 7 and 4, we are left with (42) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε p + q i - r λ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε p - q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 , which is the point of departure for deriving the local problems and the independencies. We distinguish four different cases, where ρ i is either zero (nonresonance) or one (resonance) and d i is either zero or positive. Case 1. Consider ρ i = 0 and d i = 0. We choose λ = m and p = q i. This means that p + q i - r λ = 2 q i - r m > 0 since d i = ρ i = 0, and p - q i = q i - q i = 0. This implies that (42) is valid. We get (43) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 , where we let ɛ → 0 and obtain, by means of Theorems 7 and 4, (44) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 . By the variational lemma, we have (45) ∫ Y i ⋯ ∫ Y n a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i = 0 , a.e. in Ω T × S m × Y 1 × ⋯ × Y i - 1 for all v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ and, by density, for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ.
This is the weak form of the local problem in this case. In what follows, Theorems 7 and 4, the variational lemma, and the density argument are used in a corresponding way. Case 2. Consider ρ i = 1 and d i = 0. We again choose λ = m and p = q i. We then have p + q i - r λ = 2 q i - r m = 0 since d i = 0 and ρ i = 1, and p - q i = q i - q i = 0, which implies that we may again use (42). We get (46) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 and, passing to the limit, (47) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 . By the variational lemma, (48) ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m ) v i + 1 ( y i ) ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) c m + 1 ( s m ) d y n ⋯ d y i d s m = 0 a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m + 1 ∈ C ♯ ∞ ( S m ), which is the weak form of the local problem in this second case. Case 3. Consider ρ i = 0 and d i > 0. Let λ take, successively, the values m , … , m - d i + 1. Choose p = r λ - q i, which immediately yields p + q i - r λ = 0. Furthermore, p - q i = r λ - 2 q i > 0 by the restriction on λ and the definition of d i.
Thus we have from (42) (49) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . We let ɛ tend to zero and obtain (50) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 and we are left with (51) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 , a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ). This means that u i is independent of s λ; thus, u i does not depend on s m - d i + 1 , … , s m. Next we choose p = q i and λ = m - d i. We have p + q i - r λ = 2 q i - r m - d i > 0 and p - q i = 0, and we may again use (42). We have (52) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m - d i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 , where a passage to the limit yields (53) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 , and finally (54) ∫ S m - d i + 1 ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i d s m ⋯ d s m - d i + 1 = 0 , a.e.
for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ, which is the weak form of the local problem. Case 4. Consider ρ i = 1 and d i > 0. Let λ take, successively, the values m , … , m - d i + 1. Choose p = r λ - q i, directly implying p + q i - r λ = 0. Moreover, p - q i = r λ - 2 q i > 0 by the restriction on λ and the definitions of d i and ρ i. Hence, using (42), we obtain (55) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . Passing to the limit, we get (56) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 . That is, (57) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ), and hence u i is independent of s λ. Next we choose p = q i and λ = m - d i in (42). Thus we have p + q i - r λ = 2 q i - r m - d i = 0 and p - q i = 0, and we get (58) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 .
We let ɛ go to zero, obtaining (59) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m - d i ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 and finally we arrive at (60) ∫ S m - d i ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m - d i ) v i + 1 ( y i ) ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) c m - d i + 1 ( s m - d i ) d y n ⋯ d y i d s m ⋯ d s m - d i = 0 a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m - d i + 1 ∈ C ♯ ∞ ( S m - d i ), the weak form of the local problem. Remark 12. The result above can be extended to any meaningful choice of jointly well-separated scales by means of the general compactness results in Theorems 4 and 7, and is hence not restricted to scales that are powers of ɛ; see, for example, [11] for the case with an arbitrary number of temporal scales but only one spatial microscale. To make the exposition clear, we have assumed linearity, but the result can be extended to monotone, not necessarily linear, problems using standard methods. Remark 13. The well-posedness of the homogenized problem follows from G-convergence; see, for example, Sections 3 and 4 in [20]. See also Theorem 4.19 in [17] for an easily accessible description of the regularity of the G-limit b. The existence of solutions to the local problems follows from the fact that they appear as limits in appropriate convergence processes.
Concerning uniqueness, the coercivity of the elliptic part follows along the lines of the proof of Theorem 2.11 in [16], and for the local problems containing a derivative with respect to some local time scale, the general theory for linear parabolic equations applies; see, for example, Section 23 in [18]. Normally, multiscale homogenization results are formulated as in Theorem 9, without separation of variables, and if we study slightly more general problems, for example, those with monotone operators where the linearity has been relaxed, such separation is not possible. However, in Corollary 2.12 in [16], a technique similar to the separation of variables sometimes used for conventional homogenization problems is developed. Here one scale at a time is removed in an inductive process and the homogenized coefficient is computed. We believe that a similar procedure could be successful also for the type of problem studied here, but it would be quite technical.

### 4.2. Illustration of Theorem 9

To illustrate the use of Theorem 9, we apply it to the 3,3-scaled parabolic homogenization problem (61) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ɛ , x ε 2 , t ε r 1 , t ε r 2 ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < r 1 < r 2, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ), and the structure conditions (B1) a ∈ C ♯ ( 𝒴 2,2 ) N × N, (B2) a ( y 2 , s 2 ) ξ · ξ ≥ α | ξ | 2 for all ( y 2 , s 2 ) ∈ ℝ 2 N × ℝ 2, all ξ ∈ ℝ N and some α > 0, are satisfied. We note that the assumptions of Theorem 9 are satisfied in this case. Hence the convergence results in (31) hold and, for the homogenized matrix, (62) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 2,2 a ( y 2 , s 2 ) ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) d y 2 d s 2 .
Furthermore, u 1 ∈ L 2 ( Ω T × S 2 ; H ♯ 1 ( Y 1 ) / ℝ ) and u 2 ∈ L 2 ( Ω T × 𝒴 1,2 ; H ♯ 1 ( Y 2 ) / ℝ ) are the unique solutions to the system of local problems (63) ρ i ∂ s 2 - d i u i ( x , t , y i , s 2 ) - ∇ y i · ∫ S 2 - d i + 1 ⋯ ∫ S 2 ∫ Y i + 1 ⋯ ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) × d y 2 ⋯ d y i + 1 d s 2 ⋯ d s 2 - d i + 1 = 0 for i = 1,2, where u i is independent of s 2 - d i + 1 , … , s 2. To find the local problems and the independencies explicitly, we need to identify which values of d i and ρ i to use. To find d i, we simply count the number of temporal scales faster than the square of the i th spatial scale for different choices of r 1 and r 2. Moreover, resonance (ρ i = 1) occurs when the square of the i th spatial scale coincides with one of the temporal scales. First we consider the slowest spatial scale; that is, we let i = 1. Note that 2 q 1 = 2. If 2 q 1 = 2 < r 1, then d 1 = 2; if r 1 ≤ 2 < r 2, then d 1 = 1; and if 2 ≥ r 2, then d 1 = 0. Regarding resonance, if r 1 = 2 or r 2 = 2, then ρ 1 = 1; otherwise, ρ 1 = 0. For lucidity, we present in Table 1 which values of r 1 and r 2 give the different values of d 1 and ρ 1.

Table 1: d i and ρ i for i = 1.

| r 1 and r 2 relative to 2 q 1 = 2 | d 1 | ρ 1 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 2 | 0 | 0 |
| 0 < r 1 < r 2 = 2 | 0 | 1 |
| 0 < r 1 < 2 < r 2 | 1 | 0 |
| 2 = r 1 < r 2 | 1 | 1 |
| 2 < r 1 < r 2 | 2 | 0 |

In a similar way as above, we get Table 2 for i = 2.

Table 2: d i and ρ i for i = 2.

| r 1 and r 2 relative to 2 q 2 = 4 | d 2 | ρ 2 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 4 | 0 | 0 |
| 0 < r 1 < r 2 = 4 | 0 | 1 |
| 0 < r 1 < 4 < r 2 | 1 | 0 |
| 4 = r 1 < r 2 | 1 | 1 |
| 4 < r 1 < r 2 | 2 | 0 |

We start by sorting out the independencies of the local temporal variables. As noted, for i = 1,2, u i is independent of s 2 - d i + 1 , … , s 2, which means that if d i = 1, then u i is independent of s 2, and if d i = 2, then u i is independent of both s 1 and s 2.
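The bookkeeping behind Tables 1 and 2 is purely mechanical, so it can be sketched in a few lines of code. The helper below (`resonance_data` is a hypothetical name, not from the paper) implements the definitions of d i and ρ i from Section 4.1 for given lists of spatial exponents q and temporal exponents r.

```python
# Hypothetical helper (not from the paper): computes (d_i, rho_i) for each
# spatial scale exponent q_i, following the definitions in Section 4.1.
def resonance_data(q, r):
    """Return a list of (d_i, rho_i), one pair per spatial exponent in q.

    d_i is the number of temporal exponents r_j with r_j > 2*q_i (temporal
    scales faster than the square of the i-th spatial scale), and rho_i is 1
    exactly when 2*q_i coincides with some r_j (resonance), else 0.
    """
    pairs = []
    for qi in q:
        d = sum(1 for rj in r if rj > 2 * qi)  # scales faster than the square
        rho = 1 if (2 * qi) in r else 0        # resonance check
        pairs.append((d, rho))
    return pairs

# The fixed-scale example of Remark 14: q = (1, 2), r = (2, 17)
# gives d_1 = 1, rho_1 = 1 and d_2 = 1, rho_2 = 0.
print(resonance_data([1, 2], [2, 17]))  # [(1, 1), (1, 0)]

# Sampling one (r1, r2) pair from each region/segment/point of Section 4.2
# reproduces the 13 distinct cases A-M for q = (1, 2).
samples = [(0.5, 1), (1, 2), (1, 3), (2, 3), (2.5, 3), (1, 4), (2, 4),
           (2.5, 4), (1, 5), (2, 5), (3, 5), (4, 5), (5, 6)]
cases = {tuple(resonance_data([1, 2], [r1, r2])) for r1, r2 in samples}
print(len(cases))  # 13
```

Each row of Tables 1 and 2 corresponds to one output pair; the 13-element sample set mirrors the case count obtained in the text.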
In terms of r 1 and r 2, we have that for r 2 > 2, u 1 is independent of s 2, and for r 1 > 2 it is also independent of s 1; for r 2 > 4, u 2 is independent of s 2, and moreover, for r 1 > 4 it holds that u 2 is also independent of s 1. To find the local problems, we examine all possible combinations of ( d 1 , ρ 1 ) and ( d 2 , ρ 2 ), of which 13 are realizable depending on which values r 1 and r 2 may assume. Each row in the tables gives rise to a local problem via (63), so each combination gives two local problems. If a row occurs in several combinations, the same local problem reappears. If we start by choosing the first row in the second table, that is, ( d 2 , ρ 2 ) = ( 0,0 ), this can be combined with all five rows from the first table, which means that the local problem descending from ( d 2 , ρ 2 ) = ( 0,0 ) is common to these combinations. By (63), this common local problem is (64) - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . If we combine ( d 2 , ρ 2 ) = ( 0,0 ) with ( d 1 , ρ 1 ) = ( 0,0 ), we have, in terms of r 1 and r 2, that 0 < r 1 < r 2 < 2. The other local problem in this case is (65) - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 . In combination with ( d 1 , ρ 1 ) = ( 0,1 ), that is, 0 < r 1 < r 2 = 2, we obtain instead (66) ∂ s 2 u 1 - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , and for ( d 1 , ρ 1 ) = ( 1,0 ), which means that 0 < r 1 < 2 < r 2 < 4, we have (67) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 .
The fourth possible combination, with ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 1 = 2 < r 2 < 4, gives (68) ∂ s 1 u 1 - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 , and finally, for ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < r 2 < 4, the second local problem is (69) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 . Next we consider ( d 2 , ρ 2 ) = ( 0,1 ) in Table 2, which corresponds to 0 < r 1 < r 2 = 4 and gives the local problem (70) ∂ s 2 u 2 - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . Here we have three possible combinations, namely, with ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). We note that we have already derived the local problems corresponding to these rows. Thus, the second local problem for r 2 = 4 and 0 < r 1 < 2 is given by (67), for r 2 = 4 and r 1 = 2 by (68), and for 2 < r 1 < r 2 = 4 by (69). We proceed by choosing ( d 2 , ρ 2 ) = ( 1,0 ) in Table 2, yielding (71) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . The choice ( d 2 , ρ 2 ) = ( 1,0 ) can be combined with three different rows from Table 1, ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). In combination with ( d 1 , ρ 1 ) = ( 1,0 ), which means that r 2 > 4 and 0 < r 1 < 2, we have (72) - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , which is essentially the same as (67) but with the integration over S 2 directly on a ( y 2 , s 2 ), since both u 1 and u 2 are independent of s 2. For ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 2 > 4 and r 1 = 2, we have (73) ∂ s 1 u 1 - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , which is the same as (68), but where we may integrate directly on a ( y 2 , s 2 ) in the same manner as above.
For the third possibility, ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < 4 < r 2, we get (74) - ∇ y 1 · ∫ S 1 ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 1 = 0 , the same as (69), except for the position of the integration over S 2. The next row in Table 2 to consider is ( d 2 , ρ 2 ) = ( 1,1 ), which can be combined only with ( d 1 , ρ 1 ) = ( 2,0 ). This combination corresponds to 4 = r 1 < r 2 and gives (75) ∂ s 1 u 2 - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 and again (74). Finally, for the row ( d 2 , ρ 2 ) = ( 2,0 ) together with ( d 1 , ρ 1 ) = ( 2,0 ), that is, 4 < r 1 < r 2, we get (76) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 , - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , where the latter is essentially the same as (69) and (74). Thus, having considered all possible combinations of r 1 and r 2, we have obtained 13 different cases, A–M in Figure 1, governed by two local problems each. Figure 1: The 13 cases depicted in the r 1 r 2 plane, in the order of appearance. In the figure, cases B, D, F, H, J, and L (straight line segments) correspond to single resonance, whereas in case G (a single point) there is double resonance. In the remaining cases (open two-dimensional regions), there is no resonance. Remark 14. Note that, for a problem with fixed scales, finding the local problems is very straightforward. For example, if we study (61) with r 1 = 2 and r 2 = 17, we have m = 2, n = 2, d 1 = 1, ρ 1 = 1, d 2 = 1, and ρ 2 = 0. We obtain that both u 1 and u 2 are independent of s 2. Inserting d 1 = 1, ρ 1 = 1 in (34) immediately gives the problem (73), and d 2 = 1, ρ 2 = 0 results in (71). The example chosen above, with variable time scale exponents, reveals more of the applicability and comprehensiveness of the theorem. Remark 15.
The problem (61) was studied already in [17, 19], but using Theorem 9, the process is considerably shortened.
Then it holds that (31) u ɛ ( x , t ) ⟶ u ( x , t ) i n L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) i n L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) is the unique solution to (32) ∂ t u ( x , t ) - ∇ · ( b ( x , t ) ∇ u ( x , t ) ) = f ( x , t ) i n Ω T , u ( x , t ) = 0 o n ∂ Ω × ( 0 , T ) , u ( x , 0 ) = u 0 ( x ) i n Ω with (33) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 n , m ‍ a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) ) d y n d s m . Here u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ) and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n, are the unique solutions to the system of local problems (34) ρ i ∂ s m - d i u i ( x , t , y i , s m ) - ∇ y i · ∫ S m - d i + 1 ‍ ⋯ ∫ S m ‍ ∫ Y i + 1 ‍ ⋯ ∫ Y n ‍ a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ‍ ∇ y j u j ( x , t , y j , s m ) ) × d y n ⋯ d y i + 1 d s m ⋯ d s m - d i + 1 = 0 , for i = 1 , … , n, where u i is independent of s m - d i + 1 , … , s m.Remark 10. In the cased i = 0, we naturally interpret the integration in (34) as if there is no local temporal integration involved and that there is no independence of any local temporal variable.Remark 11. Note that if,for example, u 1 is independent of s m the function space that u 1 belongs to simplifies to u 1 ∈ L 2 ( Ω T × S m - 1 ; H ♯ 1 ( Y 1 ) / ℝ ) and when u 1 is also independent of s m - 1, we have that u 1 ∈ L 2 ( Ω T × S m - 2 ; H ♯ 1 ( Y 1 ) / ℝ ) and so on.Proof of Theorem9. 
Since { u ɛ } is bounded in W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ) and the lists of scales are jointly well-separated, we can apply Theorem 4 and obtain that, up to a subsequence, (35) u ɛ ( x , t ) ⟶ u ( x , t ) in L 2 ( Ω T ) , u ɛ ( x , t ) ⇀ u ( x , t ) in L 2 ( 0 , T ; H 0 1 ( Ω ) ) , ∇ u ɛ ( x , t ) ⇀ n + 1 , m + 1 ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) , where u ∈ W 2 1 ( 0 , T ; H 0 1 ( Ω ) , L 2 ( Ω ) ), u 1 ∈ L 2 ( Ω T × S m ; H ♯ 1 ( Y 1 ) / ℝ ), and u j ∈ L 2 ( Ω T × 𝒴 j - 1 , m ; H ♯ 1 ( Y j ) / ℝ ), j = 2 , … , n. To obtain the homogenized problem, we introduce the weak form (36) ∫ Ω T - u ɛ ( x , t ) v ( x ) ∂ t c ( t ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∇ v ( x ) c ( t ) d x d t = ∫ Ω T f ( x , t ) v ( x ) c ( t ) d x d t of (29), where v ∈ H 0 1 ( Ω ) and c ∈ D ( 0 , T ), and letting ɛ → 0, we get using Theorem 4 (37) ∫ Ω T - u ( x , t ) v ( x ) ∂ t c ( t ) + ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ v ( x ) c ( t ) d y n d s m d x d t = ∫ Ω T f ( x , t ) v ( x ) c ( t ) d x d t . We proceed by deriving the system of local problems (34) and the independencies of the local temporal variables. Fix i = 1 , … , n and choose (38) v ( x ) = ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) , p > 0 , c ( t ) = c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) , λ = 1 , … , m with v 1 ∈ D ( Ω ), v j ∈ C ♯ ∞ ( Y j - 1 ) for j = 2 , … , i, v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ, c 1 ∈ D ( 0 , T ) and c l ∈ C ♯ ∞ ( S l - 1 ) for l = 2 , … , λ + 1. Here p and λ will be fixed later.
Using this choice of test functions in (36), we have (39) ∫ Ω T - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ( ∂ t c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) + ∑ l = 2 λ + 1 ε - r l - 1 c 1 ( t ) × c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ( ε p ∇ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) + ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) × v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = ∫ Ω T f ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t , where, for l = 2 and l = λ + 1, the interpretation should be that the partial derivative acts on c 2 and c λ + 1, respectively, and where the j = 2 and j = i + 1 terms are defined analogously. We let ɛ → 0 and, using Theorem 4, we obtain (40) lim ɛ → 0 ∫ Ω T - u ɛ ( x , t ) ε p v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × ∑ l = 2 λ + 1 ε - r l - 1 c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 , and extracting a factor ε - q i in the first term, we get (41) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) × ∑ l = 2 λ + 1 ε p + q i - r l - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s l - 1 c l ( t ε r l - 1 ) ⋯ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ∑ j = 2 i + 1 ε p - q j - 1 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ ∇ y j - 1 v j ( x ε q j - 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 .
Suppose that p + q i - r λ ≥ 0 and p - q i ≥ 0 (which also guarantees that p > 0 as required above); then, by Theorems 7 and 4, we are left with (42) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε p + q i - r λ v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε p - q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 , which is the point of departure for deriving the local problems and the independency. We distinguish four different cases, where ρ i is either zero (nonresonance) or one (resonance) and d i is either zero or positive.

Case 1. Consider ρ i = 0 and d i = 0. We choose λ = m and p = q i. This means that p + q i - r λ = 2 q i - r m > 0 since d i = ρ i = 0 and p - q i = q i - q i = 0. This implies that (42) is valid. We get (43) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 , where we let ɛ → 0 and obtain by means of Theorems 7 and 4 (44) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 . By the variational lemma, we have (45) ∫ Y i ⋯ ∫ Y n a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i = 0 , a.e. in Ω T × S m × Y 1 × ⋯ × Y i - 1 for all v i + 1 ∈ C ♯ ∞ ( Y i ) / ℝ and by density for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ.
This is the weak form of the local problem in this case. In what follows, Theorems 7 and 4, the variational lemma, and the density argument are used in a corresponding way.

Case 2. Consider ρ i = 1 and d i = 0. We again choose λ = m and p = q i. We then have p + q i - r λ = 2 q i - r m = 0 since d i = 0 and ρ i = 1, and p - q i = q i - q i = 0, which implies that we may again use (42). We get (46) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m c m + 1 ( t ε r m ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m + 1 ( t ε r m ) d x d t = 0 and, passing to the limit, (47) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) c 1 ( t ) × c 2 ( s 1 ) ⋯ c m + 1 ( s m ) d y n d s m d x d t = 0 . By the variational lemma, (48) ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m ) v i + 1 ( y i ) ∂ s m c m + 1 ( s m ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m ) ) · ∇ y i v i + 1 ( y i ) c m + 1 ( s m ) d y n ⋯ d y i d s m = 0 a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m + 1 ∈ C ♯ ∞ ( S m ), which is the weak form of the local problem in this second case.

Case 3. Consider ρ i = 0 and d i > 0. Let λ be fixed and successively be m , … , m - d i + 1. Choose p = r λ - q i, which immediately yields that p + q i - r λ = 0. Furthermore, p - q i = r λ - 2 q i > 0 by the restriction of λ and the definition of d i.
Thus we have from (42) (49) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . We let ɛ tend to zero and obtain (50) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 and we are left with (51) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 , a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ). This means that u i is independent of s λ; thus, u i does not depend on s m - d i + 1 , … , s m. Next we choose p = q i and λ = m - d i. We have p + q i - r λ = 2 q i - r m - d i > 0 and p - q i = 0 and we may again use (42). We have (52) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 2 q i - r m - d i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 , where a passage to the limit yields (53) ∫ Ω T ∫ 𝒴 n , m a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 , and finally (54) ∫ S m - d i + 1 ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n a ( y n , s m ) × ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) d y n ⋯ d y i d s m ⋯ d s m - d i + 1 = 0 , a.e.
for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ, which is the weak form of the local problem.

Case 4. Consider ρ i = 1 and d i > 0. Let λ be fixed and successively be m , … , m - d i + 1. Choose p = r λ - q i, directly implying that p + q i - r λ = 0. Moreover, p - q i = r λ - 2 q i > 0 by the restriction of λ and the definitions of d i and ρ i. Hence, using (42), we obtain (55) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s λ c λ + 1 ( t ε r λ ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε r λ - 2 q i v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c λ + 1 ( t ε r λ ) d x d t = 0 . Passing to the limit, we get (56) ∫ Ω T ∫ 𝒴 i , λ - u i ( x , t , y i , s λ ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s λ c λ + 1 ( s λ ) d y i d s λ d x d t = 0 . That is, (57) ∫ S λ - u i ( x , t , y i , s λ ) ∂ s λ c λ + 1 ( s λ ) d s λ = 0 a.e. for all c λ + 1 ∈ C ♯ ∞ ( S λ ), and hence u i is independent of s λ. Next we choose p = q i and λ = m - d i in (42). Thus we have p + q i - r λ = 2 q i - r m - d i = 0 and p - q i = 0 and we get (58) lim ɛ → 0 ∫ Ω T - ε - q i u ɛ ( x , t ) ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( t ε r m - d i ) + a ( x ε q 1 , … , x ε q n , t ε r 1 , … , t ε r m ) ∇ u ɛ ( x , t ) · ε 0 v 1 ( x ) v 2 ( x ε q 1 ) ⋯ v i ( x ε q i - 1 ) ∇ y i v i + 1 ( x ε q i ) × c 1 ( t ) c 2 ( t ε r 1 ) ⋯ c m - d i + 1 ( t ε r m - d i ) d x d t = 0 .
We let ɛ go to zero, obtaining (59) ∫ Ω T ∫ 𝒴 n , m - u i ( x , t , y i , s m - d i ) v 1 ( x ) v 2 ( y 1 ) ⋯ v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · v 1 ( x ) v 2 ( y 1 ) ⋯ v i ( y i - 1 ) ∇ y i v i + 1 ( y i ) × c 1 ( t ) c 2 ( s 1 ) ⋯ c m - d i + 1 ( s m - d i ) d y n d s m d x d t = 0 and finally we arrive at (60) ∫ S m - d i ⋯ ∫ S m ∫ Y i ⋯ ∫ Y n - u i ( x , t , y i , s m - d i ) v i + 1 ( y i ) ∂ s m - d i c m - d i + 1 ( s m - d i ) + a ( y n , s m ) ( ∇ u ( x , t ) + ∑ j = 1 n ∇ y j u j ( x , t , y j , s m - d i ) ) · ∇ y i v i + 1 ( y i ) c m - d i + 1 ( s m - d i ) d y n ⋯ d y i d s m ⋯ d s m - d i = 0 a.e. for all v i + 1 ∈ H ♯ 1 ( Y i ) / ℝ and c m - d i + 1 ∈ C ♯ ∞ ( S m - d i ), the weak form of the local problem.

Remark 12. The result above can be extended to any meaningful choice of jointly well-separated scales by means of the general compactness results in Theorems 4 and 7 and is hence not restricted to scales that are powers of ɛ; see, for example, [11] for the case with an arbitrary number of temporal scales but only one spatial microscale. To make the exposition clear, we have assumed linearity, but the result can be extended to monotone, not necessarily linear, problems using standard methods.

Remark 13. The wellposedness of the homogenized problem follows from G-convergence; see, for example, Sections 3 and 4 in [20]. See also Theorem 4.19 in [17] for an easily accessible description of the regularity of the G-limit b. The existence of solutions to the local problems follows from the fact that they appear as limits in appropriate convergence processes.
Concerning uniqueness, the coercivity of the elliptic part follows along the lines of the proof of Theorem 2.11 in [16], and for the local problems containing a derivative with respect to some local time scale, the general theory for linear parabolic equations applies; see, for example, Section 23 in [18]. Normally, multiscale homogenization results are formulated as in Theorem 9, without separation of variables, and if we study slightly more general problems, for example, those with monotone operators where the linearity has been relaxed, such separation is not possible. However, in Corollary 2.12 in [16], a technique similar to the separation of variables sometimes used for conventional homogenization problems is developed. There, one scale at a time is removed in an inductive process and the homogenized coefficient is computed. We believe that a similar procedure could be successful also for the type of problem studied here but would be quite technical.

## 4.2. Illustration of Theorem 9

To illustrate the use of Theorem 9, we apply it to the 3,3-scaled parabolic homogenization problem (61) ∂ t u ɛ ( x , t ) - ∇ · ( a ( x ɛ , x ε 2 , t ε r 1 , t ε r 2 ) ∇ u ɛ ( x , t ) ) = f ( x , t ) in Ω T , u ɛ ( x , t ) = 0 on ∂ Ω × ( 0 , T ) , u ɛ ( x , 0 ) = u 0 ( x ) in Ω , where 0 < r 1 < r 2, f ∈ L 2 ( Ω T ), u 0 ∈ L 2 ( Ω ), and the structure conditions (B1) a ∈ C ♯ ( 𝒴 2,2 ) N × N and (B2) a ( y 2 , s 2 ) ξ · ξ ≥ α | ξ | 2 for all ( y 2 , s 2 ) ∈ ℝ 2 N × ℝ 2, all ξ ∈ ℝ N and some α > 0, are satisfied. We note that the assumptions of Theorem 9 are satisfied in this case. Hence the convergence results in (31) hold and, for the homogenized matrix, (62) b ( x , t ) ∇ u ( x , t ) = ∫ 𝒴 2,2 a ( y 2 , s 2 ) ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) d y 2 d s 2 .
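Condition (B2) is a uniform ellipticity requirement: the quadratic form a ξ · ξ must be bounded below by α | ξ | 2 at every sample point, which constrains only the symmetric part of a. As an illustration only (the coefficient below is our own example, not the paper's a), for N = 2 the bound can be checked by sampling the smallest eigenvalue of the symmetric part:

```python
import math

def min_eig_sym2(a11, a12, a21, a22):
    # Smallest eigenvalue of the symmetric part of a 2x2 matrix; only the
    # symmetric part matters in the quadratic form a xi . xi.
    s12 = (a12 + a21) / 2
    tr = a11 + a22
    det = a11 * a22 - s12 * s12
    # Eigenvalues of a symmetric 2x2 matrix: tr/2 +- sqrt(tr^2/4 - det).
    return tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))

# Made-up periodic coefficient a(y1) = (2 + sin 2*pi*y1) * I: its eigenvalues
# lie in [1, 3], so (B2) holds with alpha = 1.
alpha = min(min_eig_sym2(2 + math.sin(2 * math.pi * y), 0.0, 0.0,
                         2 + math.sin(2 * math.pi * y))
            for y in [k / 100 for k in range(100)])
assert alpha >= 1.0 - 1e-9
```

The same sampling idea extends to coefficients depending on all of ( y 2 , s 2 ); a certified lower bound would of course need an analytic argument rather than sampling.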
Furthermore, u 1 ∈ L 2 ( Ω T × S 2 ; H ♯ 1 ( Y 1 ) / ℝ ) and u 2 ∈ L 2 ( Ω T × 𝒴 1,2 ; H ♯ 1 ( Y 2 ) / ℝ ) are the unique solutions to the system of local problems (63) ρ i ∂ s 2 - d i u i ( x , t , y i , s 2 ) - ∇ y i · ∫ S 2 - d i + 1 ⋯ ∫ S 2 ∫ Y i + 1 ⋯ ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u ( x , t ) + ∇ y 1 u 1 ( x , t , y 1 , s 2 ) + ∇ y 2 u 2 ( x , t , y 2 , s 2 ) ) × d y 2 ⋯ d y i + 1 d s 2 ⋯ d s 2 - d i + 1 = 0 for i = 1,2, where u i is independent of s 2 - d i + 1 , … , s 2.

To find the local problems and the independencies explicitly, we need to identify which values of d i and ρ i to use. To find d i, we simply count the number of temporal scales faster than the square of the ith spatial scale for different choices of r 1 and r 2. Moreover, resonance (ρ i = 1) occurs when the square of the ith spatial scale coincides with one of the temporal scales.

First we consider the slowest spatial scale; that is, we let i = 1. Note that 2 q 1 = 2. If 2 q 1 = 2 < r 1, then d 1 = 2; if r 1 ≤ 2 < r 2, then d 1 = 1; and if 2 ≥ r 2, then d 1 = 0. Regarding resonance, if r 1 = 2 or r 2 = 2, then ρ 1 = 1; otherwise, ρ 1 = 0. For lucidity, we present which values of r 1 and r 2 give the different values of d 1 and ρ 1 in Table 1.

Table 1 d i and ρ i for i = 1.

| r 1 and r 2 relative to 2 q 1 = 2 | d 1 | ρ 1 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 2 | 0 | 0 |
| 0 < r 1 < r 2 = 2 | 0 | 1 |
| 0 < r 1 < 2 < r 2 | 1 | 0 |
| 2 = r 1 < r 2 | 1 | 1 |
| 2 < r 1 < r 2 | 2 | 0 |

In a similar way, we get Table 2 for i = 2.

Table 2 d i and ρ i for i = 2.

| r 1 and r 2 relative to 2 q 2 = 4 | d 2 | ρ 2 |
| --- | --- | --- |
| 0 < r 1 < r 2 < 4 | 0 | 0 |
| 0 < r 1 < r 2 = 4 | 0 | 1 |
| 0 < r 1 < 4 < r 2 | 1 | 0 |
| 4 = r 1 < r 2 | 1 | 1 |
| 4 < r 1 < r 2 | 2 | 0 |

We start by sorting out the independencies of the local temporal variables. As noted, for i = 1,2, u i is independent of s 2 - d i + 1 , … , s 2, which means that if d i = 1, then u i is independent of s 2, and if d i = 2, then u i is independent of both s 1 and s 2.
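The rules behind Tables 1 and 2 are purely arithmetic, so they can be cross-checked mechanically: d i counts the temporal exponents r j with r j > 2 q i, and ρ i flags resonance 2 q i = r j. The sketch below (the helper name `classify` is ours, not the paper's) verifies the table rows via a representative ( r 1 , r 2 ) per region, and sweeping a grid containing the thresholds 2 and 4 recovers the realizable case combinations:

```python
def classify(qi, r):
    # d_i = number of temporal exponents r_j exceeding 2*q_i (temporal scales
    # faster than the square of the spatial scale); rho_i = 1 on resonance.
    return (sum(rj > 2 * qi for rj in r), int(2 * qi in r))

# Rows of Table 1 (i = 1, 2*q_1 = 2), one representative (r1, r2) per region:
assert classify(1, (1.0, 1.5)) == (0, 0)   # 0 < r1 < r2 < 2
assert classify(1, (1.0, 2.0)) == (0, 1)   # 0 < r1 < r2 = 2
assert classify(1, (1.0, 3.0)) == (1, 0)   # 0 < r1 < 2 < r2
assert classify(1, (2.0, 3.0)) == (1, 1)   # 2 = r1 < r2
assert classify(1, (3.0, 5.0)) == (2, 0)   # 2 < r1 < r2

# Rows of Table 2 (i = 2, 2*q_2 = 4) follow in the same way, for example:
assert classify(2, (1.0, 6.0)) == (1, 0)   # 0 < r1 < 4 < r2

# Sweeping values around the thresholds yields the realizable combinations
# of ((d_1, rho_1), (d_2, rho_2)) under the constraint 0 < r1 < r2:
vals = [1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 5.0, 6.0]
cases = {(classify(1, (r1, r2)), classify(2, (r1, r2)))
         for r1 in vals for r2 in vals if r1 < r2}
print(len(cases))  # 13 realizable combinations
```

The grid is chosen so that every region and boundary segment determined by the critical values 2 and 4 contains at least one sample point; the resulting count of 13 matches the enumeration of cases in the text.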
In terms of r 1 and r 2, we have that for r 2 > 2, u 1 is independent of s 2 and for r 1 > 2 also independent of s 1; for r 2 > 4, u 2 is independent of s 2 and, moreover, for r 1 > 4 it holds that u 2 is also independent of s 1.

To find the local problems, we examine all possible combinations of ( d 1 , ρ 1 ) and ( d 2 , ρ 2 ), of which 13 are realizable depending on which values r 1 and r 2 may assume. Each row in the tables gives rise to a local problem via (63). This means that each combination gives two local problems. If a row occurs in several combinations, the same local problem reappears. If we start by choosing the first row in the second table, that is, ( d 2 , ρ 2 ) = ( 0,0 ), this can be combined with all five rows from the first table, which means that the local problem descending from ( d 2 , ρ 2 ) = ( 0,0 ) is common to these combinations. By (63), this common local problem is (64) - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . If we combine ( d 2 , ρ 2 ) = ( 0,0 ) with ( d 1 , ρ 1 ) = ( 0,0 ), we have in terms of r 1 and r 2 that 0 < r 1 < r 2 < 2. The other local problem in this case is (65) - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 . In combination with ( d 1 , ρ 1 ) = ( 0,1 ), that is, 0 < r 1 < r 2 = 2, we obtain instead (66) ∂ s 2 u 1 - ∇ y 1 · ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , and for ( d 1 , ρ 1 ) = ( 1,0 ), which means that 0 < r 1 < 2 < r 2 < 4, we have (67) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 .
The fourth possible combination, that is, with ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 1 = 2 < r 2 < 4, gives (68) ∂ s 1 u 1 - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 and finally for ( d 1 , ρ 1 ) = ( 2,0 ), that is, 2 < r 1 < r 2 < 4, the second local problem is (69) - ∇ y 1 · ∫ S 2 ∫ Y 2 a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 2 = 0 .

Next we consider ( d 2 , ρ 2 ) = ( 0,1 ) in Table 2, which corresponds to 0 < r 1 < r 2 = 4 and gives the local problem (70) ∂ s 2 u 2 - ∇ y 2 · ( a ( y 2 , s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . Here we have three possible combinations, namely, with ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). We note that we have already derived the local problems corresponding to these rows. Thus, the second local problem for r 2 = 4 and 0 < r 1 < 2 is given by (67), for r 2 = 4 and r 1 = 2 by (68), and for 2 < r 1 < r 2 = 4 by (69).

We proceed by choosing ( d 2 , ρ 2 ) = ( 1,0 ) in Table 2, yielding (71) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 . The choice ( d 2 , ρ 2 ) = ( 1,0 ) can be combined with three different rows from Table 1, ( d 1 , ρ 1 ) = ( 1,0 ), ( 1,1 ), and ( 2,0 ). In combination with ( d 1 , ρ 1 ) = ( 1,0 ), which means that r 2 > 4 and 0 < r 1 < 2, we have (72) - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , which is essentially the same as (67) but with the integration over S 2 directly on a ( y 2 , s 2 ), since both u 1 and u 2 are independent of s 2. For ( d 1 , ρ 1 ) = ( 1,1 ), that is, r 2 > 4 and r 1 = 2, we have (73) ∂ s 1 u 1 - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , which is the same as (68), but where we may integrate directly on a ( y 2 , s 2 ) in the same manner as above.
For the third possibility, ( d 1 , ρ 1 ) = ( 2,0 ), 2 < r 1 < 4 < r 2, we get (74) - ∇ y 1 · ∫ S 1 ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 d s 1 = 0 , the same as (69), except for the position of the integration over S 2.

The next row in Table 2 to consider is ( d 2 , ρ 2 ) = ( 1,1 ), which can be combined only with ( d 1 , ρ 1 ) = ( 2,0 ). This combination corresponds to 4 = r 1 < r 2 and gives (75) ∂ s 1 u 2 - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 and again (74).

Finally, for the row ( d 2 , ρ 2 ) = ( 2,0 ) together with ( d 1 , ρ 1 ) = ( 2,0 ), that is, 4 < r 1 < r 2, we get (76) - ∇ y 2 · ( ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) ) = 0 , - ∇ y 1 · ∫ Y 2 ( ∫ S 2 a ( y 2 , s 2 ) d s 2 ) × ( ∇ u + ∇ y 1 u 1 + ∇ y 2 u 2 ) d y 2 = 0 , where the latter is essentially the same as (69) and (74).

Thus, having considered all possible combinations of r 1 and r 2, we have obtained 13 different cases, A–M in Figure 1, governed by two local problems each.

Figure 1 The 13 cases depicted in the r 1 r 2 plane in the order of appearance.

In the figure, cases B, D, F, H, J, and L (straight line segments) correspond to single resonance, whereas in case G (a single point), there is double resonance. In the remaining cases (open two-dimensional regions), there is no resonance.

Remark 14. Note that for a problem with fixed scales, finding the local problems is very straightforward. For example, if we study (61) with r 1 = 2 and r 2 = 17, we have m = 2, n = 2, d 1 = 1, ρ 1 = 1, d 2 = 1, and ρ 2 = 0. We obtain that both u 1 and u 2 are independent of s 2. Inserting d 1 = 1, ρ 1 = 1 in (34) immediately gives the problem (73), and d 2 = 1, ρ 2 = 0 results in (71). The example chosen above with variable time scale exponents reveals more of the applicability and comprehensiveness of the theorem.

Remark 15.
The problem (61) was studied already in [17, 19], but using Theorem 9, the process is considerably shortened. --- *Source: 101685-2014-02-24.xml*
2014
# Synthesis and Properties of High Strength Thin Film Composites of Poly(ethylene Oxide) and PEO-PMMA Blend with Cetylpyridinium Chloride Modified Clay

**Authors:** Mohammad Saleem Khan; Sabiha Sultana
**Journal:** International Journal of Polymer Science (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101692

---

## Abstract

Ion-conducting thin film composites of polymer electrolytes were prepared by mixing high MW poly(ethylene oxide) (PEO) and poly(methyl methacrylate) (PMMA) as the polymer matrix, cetylpyridinium chloride (CPC) modified MMT as filler, and different contents of LiClO4, using the solution casting method. The crystallinity, ionic conductivity (σ), and mechanical properties of the composite electrolytes and blend composites were evaluated by XRD, AC impedance, and UTM studies, respectively. The modification of the clay by CPC showed an enhancement in the d-spacing. The loading of clay has an effect on the crystallinity of the PEO systems. The blend composites showed better mechanical properties. Young's modulus and elongation-at-break values increased with salt and clay incorporation into pure PEO. The optimum composite composition, PEO with 3.5 wt% of salt and 3.3 wt% of CPMMT, exhibited better performance.

---

## Body

## 1. Introduction

Polymer/clay composites are hybrid materials which contain organically modified clay and a polymer matrix. These materials are extensively studied because of their enhanced mechanical, thermal, optical, and other properties. The availability of clay, its low cost, and its well-developed intercalation chemistry have further attracted researchers toward preparing materials from it. Polymer molecules are believed to intercalate into the galleries of the clay [1]. The amount of clay in these composites plays a vital role in affecting polymer crystallinity and mechanical properties.
Pure nonmodified clay is difficult to intercalate and disperse homogeneously in a polymer matrix because of its high interfacial tension with organic materials. To overcome this problem, the clay is modified to introduce hydrophobic character, making intercalation with polymers possible. The organic modifier, the nature of the polymer, and the processing conditions are the major factors that affect the structure of the resulting composite. It has been indicated that the functional groups and the chain length of the backbone of the organic modifier have a vital influence on the d-spacing and elastic modulus of polymer-clay composites and on the crystallinity of the polymers [1]. Keeping this in mind, the selection of the polymer and the organic modification of the clay are very important factors in the preparation and use of such polymer-clay composites.

It is a well-known fact that poly(ethylene oxide) (PEO) is a unique polymer, soluble in both aqueous and organic solvents. It has a polyether chain which can coordinate with alkali cations (Li+, Na+, Ca2+, etc.), resulting in the formation of polyelectrolytes for batteries, supercapacitors, and fuel cells [2–4]. PEO based polyelectrolytes have been found to show low conductivity, while blending with other polymers and incorporating salts enhances the conductivity [5]. The incorporation of clay, with its silicate layers, is also known to increase the conductivity of PEO based electrolytes. PEO/clay composites have been studied in detail from time to time [6–9]. Most of the work so far has been done on composites containing only PEO and clay. On the other hand, to our knowledge, there are few or no reports of PEO-PMMA blend clay composites [10, 11], and cetylpyridinium chloride modified montmorillonite clay (CPMMT) has not previously been used to prepare such composites. The present work aims at the synthesis and characterization of a PEO-PMMA/clay composite with LiClO4 salt using cetylpyridinium chloride modified montmorillonite clay.
The detailed X-ray diffraction, electrical, and mechanical properties have been investigated and discussed in the present work. Furthermore, thin film fabrication of these composites, which has not been reported earlier, has been carried out and is reported here. This type of thin film configuration may find application not only in solid polymer electrolytes but also in shape memory polymers for improved mechanical properties.

## 2. Experimental

### 2.1. Materials

Poly(ethylene oxide) (PEO) (MW 600,000) and poly(methyl methacrylate) (PMMA) (high molecular weight) were obtained from Acros and BDH Chemicals, respectively. The clay, montmorillonite, was purchased from Aldrich Chemicals. Research grade lithium perchlorate LiClO4 (MW 106.39) was obtained from Acros Chemicals. All these polymers and chemicals were used as such without further purification. Acetonitrile (CH3CN) was used as a solvent. It is a good solvent for the polymers (i.e., PEO and PMMA), montmorillonite, and the salt.

### 2.2. Methods

#### 2.2.1. Modification of Clay

One drawback to clay minerals for battery electrolytes is their hydrophilic nature. Cation modification is one way to avoid this issue. Researchers have explored organic cations and their ability to turn hydrophilic clays into organophilic compounds. The term organic implies that organically modified clays can be attached to organic polymers. The organic modification of clay in our system was carried out according to the procedure reported earlier in the literature [12].

#### 2.2.2. Preparation of PEO/Salt/CPMMT and PEO/PMMA/Salt/CPMMT Composite Films

PEO and PMMA were dissolved separately in acetonitrile to prepare 2% solutions. A constant volume of this 2% polymer solution was mixed with different volumes of 1 M LiClO4 and CPMMT, followed by continuous stirring for 24 h at 60°C.
These solutions were then transferred to Petri dishes of uniform diameter, kept on smooth and leveled surfaces, covered with lids and were left at room temperature for drying and converting into uniform smooth films of PEO/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) and PEO/PMMA/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) polymeric composites designated as PCS2.1, PCS3.5, PCS5, PPCS2.1, PPCS3.5, and PPCS5, respectively. The films obtained were stable and free standing. ### 2.3. Instrumentation The X-ray diffractometry (XRD) was carried out by using Cu-kα radiation at a tube voltage of 40 KV and 20 mA current. Rigaku (Japan) FX Geiger Series RAD_B system was used for X-ray diffraction measurements.The tensile properties of the samples were tested using Testometric universal testing machine M350/500 manufactured by Testometric UK. The films of pure polymers and that of selected compositions of composites with uniform thickness (measured with digital micrometer) and width were cut for analysis. The length of each sample was 50 mm. The analysis was performed at room temperature with cross-head speed of 5 mm/min. For high accuracy and precision, a sensitive load cell of 100 kg capacities with 1.0 mg load detection with a minimum 0.01 mm cross-head speed was used. A special griping system was designed for thin film griping to avoid any slippage during tensile test. Standard procedure and formulae were used for calculating various tensile properties including Young modulus (stiffness) and elongation at break. Data directly feed into computer interfaced with the UTM.The impedance measurements were carried out at room temperature (15°C) using Solartron 1260 frequency response analyzer (FRA) over the frequency range of1 - 1 × 10 7 Hz and 100 mv voltage. The impedance data were then transferred to the (Z-plot/Z-view) software package. ## 2.1. 
Materials Poly(ethylene oxide) (PEO) (MW 600,000) and poly(methyl methacrylate) (PMMA) (high molecular weight) were obtained from Acros and BDH Chemicals, respectively. The clay, montmorillonite, was purchased from Aldrich Chemicals. Research grade lithium perchlorate LiClO4 (MW106.39) was obtained from Acros Chemicals. All these polymers and chemicals were used as such without further purification. Acetonitrile (CH3CN) was used as a solvent. It is a good solvent for polymers, that is, PEO and PMMA, montmorillonite, and salts. ## 2.2. Methods ### 2.2.1. Modification of Clay One drawback to clay minerals for battery electrolytes is their hydrophilic nature. Cation modification is one way to avoid this issue. Researchers are exploring organic cation and their ability to make hydrophilic clays into organophilic compounds. The term organic implies that organically modified clays can be attached to organic polymers. The organic modification of clay in our system was carried out according to the procedure reported earlier in the literature [12]. ### 2.2.2. Preparation of PEO/Salt/CPMMT and PEO/PMMA/Salt/CPMMT Composite Films PEO and PMMA were dissolved separately in acetonitrile to prepare 2% solution. Constant volume of this 2% polymer solution was mixed with different volumes of 1 M LiClO4 and CPMMT, following continuous stirring for 24 h at 60°C. These solutions were then transferred to Petri dishes of uniform diameter, kept on smooth and leveled surfaces, covered with lids and were left at room temperature for drying and converting into uniform smooth films of PEO/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) and PEO/PMMA/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) polymeric composites designated as PCS2.1, PCS3.5, PCS5, PPCS2.1, PPCS3.5, and PPCS5, respectively. The films obtained were stable and free standing. ## 2.2.1. Modification of Clay One drawback to clay minerals for battery electrolytes is their hydrophilic nature. 
Cation modification is one way to avoid this issue. Researchers are exploring organic cation and their ability to make hydrophilic clays into organophilic compounds. The term organic implies that organically modified clays can be attached to organic polymers. The organic modification of clay in our system was carried out according to the procedure reported earlier in the literature [12]. ## 2.2.2. Preparation of PEO/Salt/CPMMT and PEO/PMMA/Salt/CPMMT Composite Films PEO and PMMA were dissolved separately in acetonitrile to prepare 2% solution. Constant volume of this 2% polymer solution was mixed with different volumes of 1 M LiClO4 and CPMMT, following continuous stirring for 24 h at 60°C. These solutions were then transferred to Petri dishes of uniform diameter, kept on smooth and leveled surfaces, covered with lids and were left at room temperature for drying and converting into uniform smooth films of PEO/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) and PEO/PMMA/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) polymeric composites designated as PCS2.1, PCS3.5, PCS5, PPCS2.1, PPCS3.5, and PPCS5, respectively. The films obtained were stable and free standing. ## 2.3. Instrumentation The X-ray diffractometry (XRD) was carried out by using Cu-kα radiation at a tube voltage of 40 KV and 20 mA current. Rigaku (Japan) FX Geiger Series RAD_B system was used for X-ray diffraction measurements.The tensile properties of the samples were tested using Testometric universal testing machine M350/500 manufactured by Testometric UK. The films of pure polymers and that of selected compositions of composites with uniform thickness (measured with digital micrometer) and width were cut for analysis. The length of each sample was 50 mm. The analysis was performed at room temperature with cross-head speed of 5 mm/min. For high accuracy and precision, a sensitive load cell of 100 kg capacities with 1.0 mg load detection with a minimum 0.01 mm cross-head speed was used. 
A special gripping system was designed for thin-film gripping, to avoid any slippage during the tensile test. Standard procedures and formulae were used for calculating the various tensile properties, including Young's modulus (stiffness) and elongation at break. Data were fed directly into a computer interfaced with the UTM.

The impedance measurements were carried out at room temperature (15°C) using a Solartron 1260 frequency response analyzer (FRA) over the frequency range of 1 to 1 × 10^7 Hz at 100 mV. The impedance data were then transferred to the Z-plot/Z-view software package.

## 3. Results and Discussion

The montmorillonite (MMT) clay structure, along with the CPC structure, is shown in Figure 1. The structure shows an octahedrally coordinated alumina sheet sandwiched between two tetrahedrally coordinated silica sheets. The spacing between the clay layers is on the nanometer scale, and between these layers water molecules and exchangeable cations such as Na+ are present. These positive ions sit mostly near the negatively charged sites of the layers, to which they are loosely attached. CPC has a bulky cationic head and a neutral hydrocarbon chain. The MMT clay was modified with cetylpyridinium chloride (CPC), whose structure is shown in Figure 2. The mechanism clearly shows that the smaller Na+ is exchanged with the bulky cationic head group of CPC, while NaCl comes out after the treatment. Due to this exchange and the insertion of the larger cation between the layers, the interlayer spacing increases (see Table 1). The modified clay, that is, CPMMT, is organophilic with a lower surface energy, which makes it more compatible with organic polymers.

Table 1. Values of d-spacing for the various systems studied.

| System | Peak position* (2θ) | d-spacing (Å) |
|---|---|---|
| MMT | 11.65 | 7.5839 |
| CPMMT | 11.30 | 7.8266 |
| PEO/CPMMT/Salt (2.1 wt%) | 22.95 | 3.8720 |
| PEO/PMMA/CPMMT/Salt (2.1 wt%) | 22.85 | 3.8090 |

*Peak with highest intensity.

Figure 1: Structure of MMT clay and CPC.

Figure 2: Mechanism of modification of MMT.

### 3.1. X-Ray Diffraction Analysis of the PEO/LiClO4 (Constant)/CPMMT Composite System

The XRD of pure PEO shows strong diffraction peaks representing a highly crystalline structure, as already published in our earlier studies [13]. XRD of LiClO4 was not performed here but, as reported in the literature, the XRD pattern of LiClO4 shows intense peaks at 2θ = 18.36°, 23.2°, 27.5°, 32.99°, and 36.58°, revealing the crystalline nature of the ionic salt [14]. Figures 3 and 4 display the XRD scans of pure and CPC-modified MMT, respectively. It is clear from the diffractograms and Table 1 that modification of the clay by CPC enhances the d-value from 7.5839 to 7.8266 Å by shifting the 2θ value from 11.65° to 11.30°. The scans also show the appearance of new peaks at 2θ = 17.75° and 55.3° and the vanishing of some peaks at 2θ = 20.8°, 42.4°, 50.15°, and 54.3°. The appearance and disappearance of peaks and the alteration of d-values clearly depict the successful modification of montmorillonite by CPC; the rest of the peaks are unaltered. The increased d-spacing promotes the dissociation of the MMT, resulting in composites with better dispersion of the clay particles [15].

Figure 3: XRD scan of pure MMT.

Figure 4: XRD scan of CPMMT.

The structure and interaction mechanism of the polymer/salt/CPC-modified clay system is given in Figure 5. The interaction of CPMMT with the polymer (PEO) shows that polymer molecules come in between the CPC layers attached to the clay. A closer look at the intercalating portion clearly shows an increase in gallery spacing, which is associated with a lowering of the surface energy. The polymer intercalates within the galleries as a result of the negative surface charge, and the cationic head groups of CPC preferentially reside on the layer surface.
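The interlayer d-spacings quoted in Table 1 follow directly from Bragg's law, nλ = 2d sin θ, applied to the listed peak positions. A minimal Python check is sketched below; the Cu-Kα wavelength λ ≈ 1.5406 Å is an assumed typical value, since the paper specifies only that Cu-Kα radiation was used:

```python
import math

CU_K_ALPHA = 1.5406  # Cu-K-alpha wavelength in angstroms (assumed typical value)

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Peak positions (2-theta, degrees) taken from Table 1
for label, two_theta in [("MMT", 11.65), ("CPMMT", 11.30),
                         ("PEO/CPMMT/Salt 2.1 wt%", 22.95)]:
    print(f"{label}: 2theta = {two_theta} deg -> d = {d_spacing(two_theta):.4f} A")
```

Running this reproduces the tabulated d-values to within about 0.01 Å; the small residue is consistent with rounding in the reported peak positions.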
The salt, that is, LiClO4, also interacts with both the polymer and the negatively charged clay layers.

Figure 5: Mechanism of the PEO-Salt/CPMMT interaction.

Composites of PEO/LiClO4/CPMMT were synthesized with 1.98, 3.3, 4.62, and 5.94 wt% of modified montmorillonite loading, keeping the salt content constant at 3.5 wt%. The diffraction patterns of the PEO/Salt/CPMMT (1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt%) composite systems are shown in Figure 6. These XRD patterns reveal that PEO has its minimum crystallinity when the CPMMT loading is 3.3 wt%; this 3.3 wt% CPMMT loading was therefore selected as the optimum condition for the synthesis of the composites. The substantial increase in the intensities of the XRD peaks with increasing CPMMT loading suggests that the dispersion is better at lower clay loadings than at higher ones. This is because the lithium cation coordinates with the flexible CH2-O- chain of PEO, forming complexes and thereby disturbing the crystallinity. When the clay is loaded into the PEO/LiClO4 electrolyte, the crystallinity initially decreases up to 3.3 wt% of clay loading and increases thereafter. In the case of undoped PEO, the crystallinity gradually decreases with increasing clay loading because of the steric hindrance caused by the huge surface area of the randomly oriented clay throughout the matrix. The different crystallization behaviors of the PEO/LiClO4/clay composite electrolyte and the PEO/clay composite are explained by the fact that the negatively charged clay layers also coordinate with the lithium cation through a strong electrostatic interaction. This interaction depends on the expansion of the silicate layers and on the clay content. Because of this interaction, the PEO-Li+ interactions decrease and the crystallinity increases. Thus two competing effects are present in the PEO/LiClO4/clay composite electrolyte: one reduces the crystallinity and the other favors it.
At low clay loadings the first factor predominates, leading to a decrease in crystallinity; beyond the optimum clay concentration the second factor predominates over the first, resulting in higher crystallinity [16, 17]. The presence of the CPMMT, however, had no effect on the location of the peaks, which indicates that perfect exfoliation of the layer structure of the organoclay in PEO does not occur [18]. The XRD patterns of the fabricated composites show that most of the peaks corresponding to pure LiClO4 have disappeared in the composite system, which reveals the dissolution of the salt in the polymer matrix. Similarly, the persistence of some of the LiClO4 peaks in the composite system confirms the complexation of the salt with the polymer matrix.

Figure 6: Combined XRD patterns of the PEO/Salt composite system with 1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt% of CPMMT.

### 3.2. X-Ray Diffraction Analysis of the PEO/LiClO4 (Variable)/CPMMT (3.3 wt%) Composite System

X-ray diffraction analysis of the PEO/LiClO4/CPMMT composite with 3.3 wt% of CPMMT clay at varying salt concentrations is shown in Figure 7. The PEO/LiClO4/clay composites first show a decrease in the crystallinity of PEO with increasing amount of salt, but when the salt concentration is increased beyond PCS2, that is, 3.5 wt%, the crystalline character of PEO starts increasing; this is attributed to the local aggregation of inorganic particles at higher salt concentrations. The same result is manifested by our mechanical and Scanning Electron Microscopy (SEM) studies.

Figure 7: Combined XRD scans of PEO/CPMMT/Salt (PCS1) (2.1 wt%, A), PEO/CPMMT/Salt (PCS2) (3.5 wt%, B), and PEO/CPMMT/Salt (PCS3) (5 wt%, C).

### 3.3. X-Ray Diffraction Analysis of the PEO/PMMA/LiClO4 (Variable)/CPMMT (3.3 wt%) Blend Composite System

In order to investigate the effect of poly(methyl methacrylate) (PMMA) addition on the crystallinity of PEO in the PEO/PMMA/LiClO4/CPMMT blend composite with variable salt concentrations and constant clay content (3.3 wt%), X-ray analysis was carried out. From the diffraction patterns given in Figure 8, it is clear that although PMMA is amorphous in nature, its addition has no significant effect on the system; the crystalline fraction of PEO increased only slightly. This is because the amount of PEO in the PEO/PMMA blend is far above the overlap weight fraction (W*), which causes PEO to crystallize, and also because PMMA interacts with CPMMT more strongly than PEO does, which affects the properties of PEO in the blend. The d-spacing between the layers of the system is found to decrease (Table 1), which also accounts for the increase in crystalline behavior. This result is consistent with our AC impedance study and is also supported by the literature [18].

Figure 8: Combined XRD scans of PEO/PMMA/CPMMT/Salt (PPCS1) (2.1 wt%, A), PEO/PMMA/CPMMT/Salt (PPCS2) (3.5 wt%, B), and PEO/PMMA/CPMMT/Salt (PPCS3) (5 wt%, C).

### 3.4. Ionic Conductivity of the PEO Composite and Blend Composite Systems

In a Nyquist impedance plot, the real part (Z′) of the impedance is plotted against the imaginary part (Z″) for data collected at frequencies ranging from 1 to 10^7 Hz. To obtain a complete picture of the system, an equivalent circuit was used [19]. The bulk resistance of the solid polymer electrolyte (SPE) was deduced from the equivalent circuit.
Figures 9(a)-9(d) show the Nyquist impedance plots for PEO/LiClO4, denoted as PS (a); PEO/LiClO4 after fitting to the equivalent circuit (b); PEO/CPMMT/LiClO4, denoted as PCS (c); and PEO/PMMA/CPMMT/LiClO4, denoted as PPCS (d), together with the fits to the equivalent circuit. These diagrams deviate from an ideal impedance spectrum, which usually exhibits a standard semicircle in the high-frequency section and a vertical line in the lower-frequency section. The deformed semicircle and the inclined line for the polymeric film/electrode system may be attributed to the irregular thickness and morphology of the polymeric film and to the roughness of the electrode surface [20, 21]. To capture this behavior, a "constant phase element" (CPE) was employed in the equivalent circuit. The high-frequency semicircle reflects the combination of R1 and CPE-1, while the spike marking the onset of a second semicircle, due to the double-layer capacitance at the interface between the solid polymer electrolyte and the electrode, is represented by CPE-2 [19]. The equivalent circuit used for fitting the data, and the table of parameters for the circuit elements evaluated by fitting the impedance data for the composite and blend systems at room temperature (15°C), are given as insets in Figure 9.

Figure 9: Typical Nyquist impedance plots for PEO/Salt (PS) (a); PEO/Salt (PS) after fitting to the equivalent circuit (b); PEO/Salt/CPMMT (PCS) (c); PEO/PMMA/Salt/CPMMT (PPCS) (d). The inset shows the circuit diagram; the extracted parameters for the circuit elements of PS, PCS, and PPCS are summarized in the table.

From the equivalent circuits, the bulk resistance values were obtained.
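The circuit described above (bulk resistance R1 in parallel with CPE-1, in series with CPE-2 for the electrode double layer) corresponds to a simple complex-impedance model, with each CPE contributing Z = 1/(Q(jω)^n). The sketch below generates Nyquist data for such a circuit; all element values are illustrative placeholders, not the fitted parameters from Figure 9:

```python
import numpy as np

def z_cpe(omega, q, n):
    """Constant phase element: Z = 1 / (Q * (j*omega)^n)."""
    return 1.0 / (q * (1j * omega) ** n)

def z_model(omega, r1, q1, n1, q2, n2):
    """Bulk resistance R1 in parallel with CPE-1, in series with CPE-2."""
    z_bulk = 1.0 / (1.0 / r1 + 1.0 / z_cpe(omega, q1, n1))
    return z_bulk + z_cpe(omega, q2, n2)

# Frequency sweep matching the measured range, 1 Hz to 1e7 Hz
f = np.logspace(0, 7, 200)
omega = 2.0 * np.pi * f
# Hypothetical element values, for illustration only
Z = z_model(omega, r1=5.0e4, q1=1.0e-9, n1=0.9, q2=1.0e-6, n2=0.7)
# Nyquist coordinates: plot Z.real against -Z.imag
print(Z.real[0], -Z.imag[0])
```

With n1 and n2 below 1, the model reproduces the depressed semicircle and inclined low-frequency spike described in the text; setting both exponents to 1 recovers the ideal resistor-capacitor response.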
The bulk resistance allows us to obtain the ionic conductivity using

(1) σ = l/(R·A),

where σ = conductivity (S/cm), R = resistance (Ω), l = thickness (cm), and A = area of the electrode (cm²).

The capacitance values were calculated according to

(2) ω_max·R·C = 1,

where ω_max corresponds to the angular frequency at the maximum of the semicircle. The capacitance values obtained for the bulk are in complete harmony with earlier reported values [22].

The value of the ionic conductivity obtained at room temperature (15°C) for pure poly(ethylene oxide) (PEO) is less than the 6.78 × 10−10 S cm−1 reported earlier in the literature by Kumar and coworkers [17] for PEO of the same molecular weight at 30°C. This difference in conductivity values arises from the different temperature and the different solvent used in our study [23]. From the table given as an inset in Figure 10, it is clear that the conductivity of PEO at laboratory temperature, that is, 15°C, increases sharply with salt incorporation. The same trend in the conductivity of PEO-based electrolytes with salt concentration has also been observed by Srivastava and by Ibrahim et al. [24, 25]. This increase is due to the larger number of charge carriers supplied by the higher concentration of LiClO4 and to the increase in the fraction of the amorphous phase. The addition of the ionic salt lowers the degradation temperature because of the growth of the amorphous fraction, and it destabilizes the polymer network; the PEO/LiClO4 electrolyte with high salt concentration was found to be less stable. CPMMT was therefore used to overcome these drawbacks, since inorganic fillers are commonly used to improve the electrochemical and mechanical properties [26]. Clay is an inorganic filler with intercalation capability, in which the clay layers maintain their registry.
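Equations (1) and (2) amount to two one-line calculations once R and ω_max are read off the equivalent-circuit fit. The sketch below uses placeholder geometry and fit values, not the paper's measurements:

```python
import math

def conductivity(thickness_cm, resistance_ohm, area_cm2):
    """Eq. (1): sigma = l / (R * A), giving S/cm."""
    return thickness_cm / (resistance_ohm * area_cm2)

def bulk_capacitance(omega_max, resistance_ohm):
    """Eq. (2): omega_max * R * C = 1, so C = 1 / (omega_max * R)."""
    return 1.0 / (omega_max * resistance_ohm)

# Placeholder values, for illustration only
R = 2.0e5      # bulk resistance from the equivalent-circuit fit (ohm)
l = 0.010      # film thickness (cm)
A = 1.0        # electrode area (cm^2)
f_max = 1.0e4  # frequency at the semicircle maximum (Hz)

sigma = conductivity(l, R, A)
C = bulk_capacitance(2.0 * math.pi * f_max, R)
print(f"sigma = {sigma:.2e} S/cm, C = {C:.2e} F")
```

Note the factor of 2π: ω_max in equation (2) is the angular frequency, so the frequency read from the Nyquist semicircle maximum must be converted before use.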
Intercalating a polymer (residing the polymer chains between silicates) in a layered clay host can produce a huge interfacial area that sustains the mechanical properties of the polymer electrolyte system and imparts salt-solvating power to dissolve the lithium salt [27]. A glance at Figure 10 and the inset table reveals that the addition of salt at constant (3.3 wt%) clay content increases the conductivity of the PEO/Salt/CPMMT (PCS) composites, while retaining dimensional stability, up to PCS2 (3.5 wt%); beyond PCS2, further addition of salt decreases the conductivity markedly. The initial increase is due to the decrease in crystallinity and the increase in the amorphous fraction of PEO available for ion conduction until equilibrium is reached at PCS2. This is consistent with our XRD results. The conductivity decreases drastically when the amount of salt increases from PCS2 to PCS3 (5 wt%), but it is still higher than that of the pristine polymer. The likely explanation for this behavior is ion association and the formation of charge multiples [25]. In order to study the effect of poly(methyl methacrylate) (PMMA) incorporation on the ionic conductivity of PEO-based solid polymer electrolytes, PMMA was blended with PEO for the solid polymer electrolyte (SPE) composites. From the values of ionic conductivity given in Figure 10 and the inset table, it is clear that the addition of PMMA to the PEO/Salt electrolyte system decreases the conductivity relative to the PCS system, although the values remain higher than those of pure PEO films. The rigid structure of PMMA, due to the entrapped silicate layers, alters the segmental dynamics of PEO, so the conductivity decreases. Jeddi and coworkers [28] have reported an overlap weight fraction for the PEO/PMMA blend of about 2.8 wt% for PEO; the overlap weight fraction is the fraction at which PEO chains start to interpenetrate and the miscibility of the blend is affected. In our system the amount of PEO is far above the overlap weight fraction.
This causes a decrease in conductivity and an increase in the agglomeration of the clay by decreasing its interaction with the PEO. The same trend has been observed in the mechanical properties of the PEO/PMMA/Salt/CPMMT (PPCS) composites. The values of ionic conductivity we reached at a laboratory temperature of 15°C are higher than those reported for PEO/PMMA/salt/Na-MMT in the literature at 25°C [22]. This increase may be caused by the better dispersion of CPMMT.

Figure 10: Bulk ionic conductivity variation for PCS and PPCS with weight % of salt for the composite systems at room temperature (15°C).

### 3.5. Elongation at Break of the PEO/LiClO4/CPMMT Composite System

Elongation at break is the strain at failure, or the percent change in length at failure, and describes the ductility of the material under an external force. The effect of salt addition on the ductility, or % elongation, is shown in Figure 11, which depicts that the ductility of the composite material increases with increasing salt concentration. This increase is attributed to the presence of CPMMT, which enhances the mobility of the PEO polymer. The highest % elongation at break is obtained for the PCS2 composite; beyond PCS2, the % elongation at break decreases. The higher uniformity in the dispersion of salt and clay within PEO is correlated with better adhesion between the components of the composite, owing to the homogeneous dispersion of CPMMT at the PCS2 composition. The decrease in ductility beyond PCS2 is due to the restriction of chain mobility in the matrix, with the filler particles acting as defect points [29]. This also shows that beyond a certain limit of salt concentration the behavior changes: at higher concentrations the polymers agglomerate and the clay is not well dispersed. The overall result is an increase in the ductility of the composite material with increasing salt concentration.
The net increase in elongation at break for the PCS system suggests filler-induced dimensional stability in the composite electrolyte films, making them better able to sustain and withstand external pressure or shock.

Figure 11: Variation of elongation at break for the PEO/Clay (PC), PEO/PMMA, and PEO/PMMA/Clay (PPC) composite and blend composite systems with varying salt content.

### 3.6. Young's Modulus of the PEO/LiClO4/CPMMT Composites

Young's modulus describes the relationship of stress to strain within the elastic region and is measured as the slope of the stress-strain curve within the elastic region of the specimen. The modulus of elasticity describes a material's stiffness: the greater the modulus, the stiffer the material. It quantifies the elasticity of the polymer and is closely associated with the primary and secondary chemical bonds. Unlike a neat polymer, where the mechanical properties are determined almost entirely by the matrix, the mechanical properties of a composite depend on the interaction between the polymer and the added fillers. From Figure 12 it is clear that Young's modulus of the composite electrolytes decreases with increasing concentration of inorganic content at constant clay level. The influence of LiClO4 on the mechanical properties of the PEO/CPMMT film resembles a plasticization effect: the interaction between PEO and CPMMT is weakened by the increasing salt content. The same behavior of Young's modulus with filler content has been reported earlier in the literature [29]. Because the mechanical properties change with the composition of the components as well as with the applied force, they are difficult to analyze. This decrease may also be explained in terms of debonding around the polymer/clay interphases and void formation.
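The slope-of-the-elastic-region procedure described above can be sketched in a few lines. The stress-strain arrays below are synthetic, for illustration only, and the 2% elastic-limit cutoff is an assumed parameter, not one stated in the paper:

```python
import numpy as np

def youngs_modulus(strain, stress, elastic_limit=0.02):
    """Least-squares slope of stress vs. strain within the elastic region."""
    strain = np.asarray(strain)
    stress = np.asarray(stress)
    mask = strain <= elastic_limit
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

def elongation_at_break(strain):
    """Percent elongation at failure: strain at the last recorded point."""
    return 100.0 * np.asarray(strain)[-1]

# Synthetic stress-strain data: linear region (slope 500 MPa), then yielding
strain = np.array([0.000, 0.005, 0.010, 0.015, 0.020, 0.10, 0.50, 1.20])
stress = np.array([0.0,   2.5,   5.0,   7.5,  10.0,  14.0, 18.0, 19.0])  # MPa

E = youngs_modulus(strain, stress)
print(f"E = {E:.0f} MPa, elongation at break = {elongation_at_break(strain):.0f}%")
```

On real UTM exports the elastic limit would be chosen from the data (for example, the largest strain over which a linear fit stays within tolerance) rather than fixed in advance.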
It can be concluded that the value of the modulus depends strongly on the distribution of the filler particles in the polymer matrix, which in turn depends on the particle/particle interaction (agglomeration), the polymer/particle interaction (adhesion and wetting), and the morphology of the filler particles [30].

Figure 12: Variation of Young's modulus for the PEO/Clay (PC), PEO/PMMA, and PEO/PMMA/Clay (PPC) composite and blend composite systems with varying salt content.

### 3.7. Elongation at Break of the PEO/PMMA/LiClO4/CPMMT Blend Composite System

To obtain a clearer idea of the change in the mechanical properties of the blend composite system, the effect of salt addition on the blend was studied first; CPMMT was then added to the same system, and the samples were analyzed by UTM. From the results given in Figure 11, it is clear that the elongation at break initially decreases with increasing salt concentration in the blend system and then starts increasing at higher salt concentrations. This decrease in failure strain is due to the addition of rigid filler, which restricts the PEO polymer molecules from flowing freely past one another, thus causing premature failure. The original elasticity of PEO is distorted by the addition of PMMA and LiClO4, which is in close agreement with the conclusion that the addition of rigid particles like PMMA into the polymer matrix increases its stiffness and toughness [31, 32]. Composites with these properties can be used as heat-resistant or product-packaging materials.

### 3.8. Young's Modulus of the PEO/PMMA/LiClO4/CPMMT Blend Composite System

Young's modulus of the PEO/PMMA blend as a function of salt content is shown in Figure 12. From this figure it is clear that Young's modulus of the blend composite shows an overall decrease with the addition of salt.
This decrease shows the weaker PEO interchain interaction and increase in the particle size of the inorganic phase because of local aggregations of particles in the presence of PMMA; these phenomena may act as flaws in it [33, 34]. The same trend has been confirmed by the SEM result as well. This means that the addition of salt to the PEO/PMMA composite suppresses the material’s stiffness and hence elasticity of the polymer. But when clay was added to the same PEO/PMMA/Salt system, an enormous increase in the value of Young modulus was observed as shown in Figure 12. This is due to the intercalation of polymer chains within the clay galleries that avoid segmental motion of the polymer chains [35]. Although there is an overall decrease in the value of Young modulus of the PEO/PMMA/LiClO4/CPMMT system with increasing salt concentration, still it is much higher than that of the virgin (neat) poly(ethylene oxide) (PEO) and PEO/Salt/CPMMT. This is in close agreement with the conclusion that the addition of rigid particles like PMMA into the polymer matrix increases its stiffness [31]. ## 3.1. X-Ray Diffraction Analysis of PEO/LiClO4 (Constant)/CPMMT Composite System XRD of pure PEO shows maximum diffraction peaks representing highly crystalline structure as already published in our earlier studies [13]. LiClO4 XRD was not done but, as reported in literature, XRD pattern of LiClO4 shows intense peaks at angle 2θ = 18.360, 23.20, 27.50, 32.990, and 36.580 revealing the crystalline nature of the ionic salt [14]. Figures 3 and 4 display the XRD scan of pure and CPC modified MMT, respectively. It is clear from the diffractogram and Table 1 that modification of clay by CPC enhances thed-value from 7.5839 to 7.82366 by shifting 2θ value from 11.65 to 11.30. It also shows the addition of new peaks at 2θ = 17.75 and 55.3 and vanishing of some peaks at 2θ = 20.8, 42.4, 50.15, and 54.3. 
The addition and disappearance of peaks and alteration ofd-values clearly depict the successful modification of montmorillonite by CPC. The rest of the peaks are not altered. The increasingd-spacing will cause the dissociation of MMT, resulting in composites with better dispersion of clay particles [15].Figure 3 XRD scan of pure MMT.Figure 4 XRD scan of CPMMT.The polymer/salt/CPC modified structure and interaction mechanism is given in Figure5. The interaction of CPMMT with polymer (PEO) shows that polymer molecules come in between the CPC layers attached to clay. An elaboration of the intercalating portion clearly shows that there is increase in gallery spacing which is associated with lowering in surface energy. Polymer intercalates within the galleries as a result of the negative surface charge and the cationic head groups of CPC preferentially reside on the layer surface. The salt, that is, LiClO4 has also interactions with both the polymer and the negatively charged clay layers.Figure 5 Mechanism of PEO-Salt/CPMMT interaction.Composites of PEO/LiClO4/CPMMT were synthesized with 1.98, 3.3, 4.62, and 5.94 wt% of modified montmorillonite loading keeping the mole fraction of salt constant at 3.5 wt%. The diffraction patterns of PEO/Salt/CPMMT (1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt%) composite systems are shown in Figure 6. It can be revealed from these XRD patterns that the PEO has the minimum crystallinity when the CPMMT loading is 3.3 wt%. This 3.3 wt% of CPMMT loading was selected as an optimum condition for the synthesis of the composites. The substantial increase in the intensities of the XRD peaks on increasing CPMMT loadings suggests that the dispersion is better at lower clay loading than at higher loadings. This is because the lithium cation coordinates with the flexible CH2-O- chain of PEO forming complexes and thereby disturbing the crystallinity. 
When the clay is loaded into the PEO/Li ClO4 electrolyte the crystallinity initially decreases up to 3.3 wt% of clay loading and increases thereafter. In case of undoped PEO, the crystallinity gradually decreases with an increase in clay loading because of the steric hindrance caused by the huge surface area of randomly oriented clay throughout the matrix. The different crystallization behaviors of PEO/LiClO4/clay composite electrolyte and PEO/Clay composite is explained by considering the fact that negatively charged clay layers also coordinate with the lithium cation due to a strong electrostatic interaction. The interaction depends on the expansion of silicate layers and clay content. Because of this interaction, PEO to Li+ interactions decrease and crystallinity increases. Thus two competing effects are present in the PEO/LiClO4/Clay composite electrolyte; one reduces the crystallinity and the other favors the crystallinity. At low clay loading the first factor predominates leading to a decrease in the crystallinity and beyond the optimum clay concentrations the second factor predominates over the first, resulting in higher crystallinity [16, 17]. The presence of the CPMMT, however, had no effect on the location of the peaks, which indicates that perfect exfoliation of the clay layer structure of the organoclay in PEO does not occur [18]. The XRD patterns of the fabricated composites show that most of the peaks corresponding to pure LiClO4 have disappeared in the composite system, which reveals the dissolution of the salt in the polymer matrix. Similarly the appearance of some of the peaks of the LiClO4 in the composite system confirms the complexation of the salt with the polymer matrix.Figure 6 Combined XRD pattern of PEO/Salt composite system with 1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt% of CPMMT. ## 3.2. 
X-Ray Diffraction Analysis of PEO/LiClO4 (Variable)/CPMMT (3.3 wt%) Composite System X-ray diffraction analysis of PEO/LiClO4/CPMMT composite with 3.3 wt% of CPMMT clay at varying concentrations of salt is shown in Figure 7, which depicts that PEO/LiClO4/clay composites first show decrease in crystallinity of PEO with the increasing amount of salt, but when the concentration of salt is increased from PCS2, that is, 3.5 wt%, the crystalline character of PEO starts increasing; this is attributed to the local aggregation of inorganic particles at higher salt concentration. The same result is manifested by our mechanical and Scanning Electron Microscopy (SEM) studies.Figure 7 Combined XRD scans of PEO/CPMMT/Salt (PCS1) (2.1 wt% A), PEO/CPMMT/Salt (PCS2) (3.5 wt% B), and PEO/CPMMT/Salt (PCS3) (5 wt% C). ## 3.3. X-Ray Diffraction Analysis of PEO/PMMA/LiClO4 (Variable)/CPMMT (3.3 wt%) Blend Composite System In order to investigate the effect of poly(methyl methacrylate) (PMMA) addition on the crystallinity of PEO in the blend composite of PEO/PMMA/LiClO4/CPMMT having variable concentrations of salt and constant clay content (of 3.3 wt%), X-ray analysis was carried out. From the diffractogram pattern given in Figure 8, it is clear that though PMMA is amorphous in nature, its addition to the composite system has no significant effect on the system. The crystalline fraction of PEO increased a little bit by its addition. This is because the amount of PEO in PEO/PMMA blend is far more than overlap weight fraction (W ∗), which causes PEO to crystallize, and also because PMMA interaction with CPMMT is more than that of PEO which affects the properties of PEO when present in blend. Thed-spacing between the layers of the system is found to be decreasing (Table 1) which also accounts for increase of crystalline behavior. 
This result is consistent with our AC impedance study and is also supported by the literature [18].Figure 8 Combined XRD scans of PEO/PMMA/CPMMT/Salt (PPCS1) (2.1 wt% A), PEO/PMMA/CPMMT/Salt (PPCS2) (3.5 wt% B), and PEO/PMMA/CPMMT/Salt (PPCS3) (5 wt% C). ## 3.4. Ionic Conductivity of PEO Composite and Blend Composite System In a Nyquist impedance plot, the real part (Z /) of the impedance was plotted against the imaginary part (Z / /) for data collected at frequencies ranging from 1 to 107 Hz. To investigate complete picture of the system, an equivalent circuit was used [19]. The bulk resistance of the solid polymer electrolyte (SPE) was consequent from the equivalent circuit. Figures 9(a), 9(b), 9(c), and 9(d) show the Nyquist impedance plots for PEO/LiClO4, denoted as PS (a), PEO/LiClO4 after fitting to equivalent circuit (b), PEO/CPMMT/LiClO4 denoted as PCS (c), PEO/PMMA/CPMMT/LiClO4 denoted as PPCS (d), and impedance plot after fitting to the equivalent circuit, respectively. These diagrams deviate from an ideal impedance spectrum that usually exhibits a standard semicircle at the high frequency section and a vertical line at a lower frequency section. The deformed semicircle and the inclined line for the polymeric film/electrode system may be attributed to the irregular thickness and morphology of the polymeric film and the roughness of the electrode surface [20, 21]. To investigate the phenomenon a “constant phase element” (CPE) was employed in the equivalent circuit. The high frequency semicircle depicts the combination of R1 and CPE-1, while the spike showing the trend for second semicircle due to double layer capacitance (at the interface of solid polymer electrolyte and electrode) is reflected by CPE-2 [19]. 
The equivalent circuit used for fitting data and table for parameters for the circuit elements evaluated by fitting the impedance data for composite and blend system at room temperature (15°C) is given in Figure 9 as inset.Figure 9 Typical Nyquist impedance plots for PEO/Salt (PS) (a), PEO/Salt (PS) after fitting to equivalent circuit (b). Inset showing that diagram of circuit and extracted parameters for the circuit elements of PS, PCS, and PPCS are summarized in the table. PEO/Salt/CPMMT (PCS) (c). PEO/PMMA/Salt/CPMMT (PPCS) (d). (a) (b) (c) (d)From equivalent circuits the bulk resistance values were obtained. The bulk resistance allows us to obtain the ionic conductivity using(1) σ = I R A ,where σ = conductivity (S/cm), R = resistance (Ω), I = thickness (cm), and A = area of the electrode (c m 2).The capacitance values were calculated according to(2) ω max ⁡ R C = 1 ,where ω m a x corresponds to the frequency at the maximum of semicircle. The capacitance values obtained for the bulk are in complete harmony with the earlier reported values [22].The value of ionic conductivity obtained at room temperature (15°C) for pure poly(ethylene oxide) (PEO) is less than 6.78 × 10−10 S cm−1 reported in the literature earlier by Kumar and coworker [17] for the same molecular weight PEO at 30°C. This difference in conductivity values is because of the temperature and changing nature of solvent used in our study [23]. From the table given as inset in Figure 10, it is clear that the conductivity of PEO at laboratory temperature, that is, 15°C, increases sharply with the salt incorporation. The same trend in conductivity of PEO based electrolytes with the salt concentration has also been observed by Srivastava and Ibrahim et al. [24, 25]. This increase is due to the increase in charge carriers caused by the addition of higher concentration of LiClO4 and the increase in the fraction of amorphous phase. 
The addition of ionic salt decreases degradation temperature because of the growth of amorphous fraction and destabilizes the polymer network. The PEO/LiClO4 electrolyte with high salt concentration was found to be less stable. Alternatively CPMMT was used to overcome these drawbacks. Inorganic fillers are usually used to improve the electrochemical and mechanical properties [26]. Clay is inorganic filler with intercalation property, where clay layers maintain their registry. Intercalating polymer (residing polymer chains between silicates) in a layered clay host can produce huge interfacial area to sustain the mechanical property of polymer electrolyte system and impart salt solvating power to dissolve the lithium salt [27]. A glance at Figure 10 and inset table reveals that the addition of salt at constant (3.3 wt%) clay content increases the conductivity of PEO/Salt/CPMMT (PCS) composites retaining dimensional stability till PCS2 (3.5 wt%); beyond PCS2, further addition of salt decreases the conductivity badly. This initial increase is due to the decrease in the crystallinity and increase in amorphous fraction of PEO for ion conduction till equilibrium is achieved at PCS2. This is consistent with our XRD results. The conductivity decreases drastically, when amount of salt increases from PCS2 to PCS3 (5 wt%), but is still higher than that of pristine polymer. The possible explanation for this behavior may be ion association and the formation of charge multipliers [25]. In order to study the effect of poly(methyl methacrylate) (PMMA) incorporation on the ionic conductivity of PEO based solid polymer electrolytes, PMMA was blended with PEO for solid polymer electrolyte (SPE) composites. From the values of ionic conductivity given in Figure 10 and inset table, it is clear that the addition of PMMA to PEO/Salt electrolyte system decreases the conductivity of PCS system but still shows higher value than pure PEO films. 
The rigid structure of PMMA, together with the entrapped silicate layers, alters the segmental dynamics of PEO, and hence the conductivity decreases. Jeddi and coworkers [28] have reported an overlap weight fraction of about 2.8 wt% PEO for the PEO/PMMA blend; the overlap weight fraction is the composition at which PEO chains begin to interpenetrate and the miscibility of the blend is affected. In our system the amount of PEO is far above the overlap weight fraction, which decreases the conductivity and increases the agglomeration of clay by weakening its interaction with PEO. The same trend has been observed in the mechanical properties of the PEO/PMMA/Salt/CPMMT (PPCS) composites. The ionic conductivity values we reached at a laboratory temperature of 15°C are higher than those reported for PEO/PMMA/Salt/Na-MMT in the literature at 25°C [22]. This improvement may be caused by the better dispersion of CPMMT.

Figure 10: Bulk ionic conductivity variation for PCS and PPCS with weight % of salt for the composite systems at room temperature (15°C).

## 3.5. Elongation at Break of PEO/LiClO4/CPMMT Composite System

Elongation at break is the strain at failure (percent change in length at failure) and describes the ductility of the material under an external force. The effect of salt addition on the ductility (% elongation) is shown in Figure 11. Figure 11 shows that the ductility of the composite material increases with increasing salt concentration, an increase attributed to the presence of CPMMT, which enhances the mobility of the PEO chains. The highest % elongation at break is obtained for the PCS2 composite; beyond PCS2, the % elongation at break decreases. The greater uniformity in the dispersion of salt and clay within PEO correlates with better adhesion between the components of the composite, owing to the homogeneous dispersion of CPMMT at the PCS2 composition.
The decrease in ductility beyond PCS2 is due to restricted chain mobility in the matrix, with the filler particles acting as defect points [29]; beyond a certain salt concentration the behavior therefore changes. Furthermore, at higher concentrations the polymer exists as agglomerates and the clay is not well dispersed. The overall result is nevertheless a net increase in the ductility of the composite material with salt incorporation. This net increase in elongation at break for the PCS system indicates filler-induced dimensional stability of the composite electrolyte films, making them better able to sustain and withstand external pressure or shock.

Figure 11: Variation of elongation at break for the PEO/Clay (PC), PEO/PMMA, and PEO/PMMA/Clay (PPC) composite and blend composite systems with varying salt content.

## 3.6. Young’s Modulus of PEO/LiClO4/CPMMT Composites

Young’s modulus describes the relationship of stress to strain within the elastic region; it is measured from the slope of the stress-strain curve within the elastic range of the specimen. The modulus of elasticity describes a material’s stiffness: the greater the modulus, the stiffer the material. It quantifies the elasticity of the polymer and is closely associated with primary and secondary chemical bonds. Unlike the neat polymer, whose mechanical properties are determined almost entirely by the matrix, the mechanical properties of the composite depend on the interaction between the polymer and the added fillers. From Figure 12 it is clear that Young’s modulus of the composite electrolytes decreases with increasing concentration of the inorganic content at constant clay level. The influence of LiClO4 on the mechanical properties of the PEO/CPMMT film resembles a plasticization effect: the interaction between PEO and CPMMT is weakened by the increasing salt content. The same behavior of Young’s modulus with filler has been reported earlier in the literature [29].
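As a minimal illustration of this slope definition, Young's modulus can be estimated by a least-squares fit over the elastic region of a stress-strain curve. The strain and stress arrays below are synthetic placeholders, not data from Figure 12.

```python
# Sketch: Young's modulus as the least-squares slope of stress vs strain
# over the initial elastic region. The data points are synthetic.
def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Elastic region only: strain (dimensionless) vs stress (MPa).
strain = [0.000, 0.001, 0.002, 0.003, 0.004]
stress = [0.0, 0.5, 1.0, 1.5, 2.0]  # perfectly linear data: slope 500 MPa
E = linear_slope(strain, stress)
print(f"Young's modulus ~ {E:.0f} MPa")
```

In practice the UTM software performs this fit automatically; the key point is that only the linear (elastic) portion of the curve enters the fit.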
Because the mechanical properties change with the composition of the components as well as with the applied force, they are difficult to analyze. The decrease may also be explained in terms of debonding at the polymer-clay interphases and void formation. It can be concluded that the value of the modulus depends strongly on the distribution of filler particles in the polymer matrix, which in turn depends on particle-particle interaction (agglomeration), polymer-particle interaction (adhesion and wetting), and the morphology of the filler particles [30].

Figure 12: Variation of Young’s modulus for the PEO/Clay (PC), PEO/PMMA, and PEO/PMMA/Clay (PPC) composite and blend composite systems with varying salt content.

## 3.7. Elongation at Break of PEO/PMMA/LiClO4/CPMMT Blend Composite System

To obtain a clearer picture of the change in mechanical properties of the blend composite system, the effect of salt addition on the blend was studied first; CPMMT was then added to the same system, and the samples were analyzed by UTM. The results in Figure 11 show that the elongation at break initially decreases with increasing salt concentration in the blend system and then starts increasing at higher salt concentration. The initial decrease in failure strain is due to the addition of rigid filler, which restricts the PEO molecules from flowing freely past one another and thus causes premature failure. The original elasticity of PEO is altered by the addition of PMMA and LiClO4, in close agreement with the conclusion that adding rigid particles such as PMMA to a polymer matrix increases its stiffness and toughness [31, 32]. Composites with these properties can be used as heat-resistant materials or in product packaging.

## 3.8. Young’s Modulus of PEO/PMMA/LiClO4/CPMMT Blend Composite System

Young’s modulus of the PEO/PMMA blend as a function of salt content is shown in Figure 12.
This figure makes clear that Young’s modulus of the blend composite shows an overall decrease with the addition of salt. The decrease indicates weaker PEO interchain interaction and an increase in the particle size of the inorganic phase because of local aggregation of particles in the presence of PMMA; these aggregates may act as flaws [33, 34]. The same trend has been confirmed by the SEM results. This means that the addition of salt to the PEO/PMMA composite suppresses the material’s stiffness and hence the elasticity of the polymer. When clay was added to the same PEO/PMMA/Salt system, however, an enormous increase in the value of Young’s modulus was observed, as shown in Figure 12. This is due to the intercalation of polymer chains within the clay galleries, which hinders segmental motion of the polymer chains [35]. Although Young’s modulus of the PEO/PMMA/LiClO4/CPMMT system shows an overall decrease with increasing salt concentration, it is still much higher than that of virgin (neat) poly(ethylene oxide) (PEO) and of PEO/Salt/CPMMT. This is in close agreement with the conclusion that the addition of rigid particles such as PMMA to a polymer matrix increases its stiffness [31].

## 4. Conclusions

In this work cetylpyridinium chloride was used to modify MMT, which was then mixed with higher molecular weight PEO/LiClO4 and PEO/PMMA/LiClO4 to produce composite materials. The experimental results showed that at constant salt content the addition of CPMMT first reduces the crystallinity of PEO, up to 3.3 wt% of clay, beyond which the crystallinity starts increasing; 3.3 wt% of clay was therefore selected as the optimum clay loading for composite fabrication. The XRD results showed that at the optimum clay loading the crystallinity of the composites increases at higher salt content, while the ionic conductivity obtained from the impedance technique correspondingly shows a declining trend at higher salt content.
The addition of 50 wt% of higher molecular weight PMMA to the PEO/Salt/CPMMT composite degraded its properties, owing to immiscibility and aggregation of the filler within the polymer matrix; the blend composites nevertheless showed better mechanical performance. Overall, the composite of PEO with 3.5 wt% of salt and 3.3 wt% of CPMMT exhibited the best performance.

---

*Source: 101692-2015-08-19.xml*
# Synthesis and Properties of High Strength Thin Film Composites of Poly(ethylene Oxide) and PEO-PMMA Blend with Cetylpyridinium Chloride Modified Clay

**Authors:** Mohammad Saleem Khan; Sabiha Sultana
**Journal:** International Journal of Polymer Science (2015)
**Category:** Chemistry and Chemical Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2015/101692
---

## Abstract

Ion-conducting thin film composites of polymer electrolytes were prepared by mixing high molecular weight poly(ethylene oxide) (PEO) and poly(methyl methacrylate) (PMMA) as the polymer matrix, cetylpyridinium chloride (CPC) modified MMT as filler, and different contents of LiClO4, using the solution-cast method. The crystallinity, ionic conductivity (σ), and mechanical properties of the composite electrolytes and blend composites were evaluated by XRD, AC impedance, and UTM studies, respectively. Modification of the clay by CPC enhanced the d-spacing, and the clay loading affects the crystallinity of the PEO systems. The blend composites showed better mechanical properties: Young’s modulus and elongation at break increased with salt and clay incorporation into pure PEO. The optimum composition, a composite of PEO with 3.5 wt% of salt and 3.3 wt% of CPMMT, exhibited the best performance.

---

## Body

## 1. Introduction

Polymer/clay composites are hybrid materials that contain organically modified clay in a polymer matrix. They are extensively studied because of their enhanced mechanical, thermal, optical, and other properties. The availability of clay, its low cost, and its well developed intercalation chemistry have attracted researchers to material preparation based on it. Polymer molecules are believed to intercalate into the galleries of the clay [1]. The amount of clay in these composites plays a vital role in determining polymer crystallinity and mechanical properties. Pure, unmodified clay is difficult to intercalate and to disperse homogeneously in a polymer matrix because of its high interfacial tension with organic materials. To overcome this problem, the clay is modified to introduce hydrophobic character, making intercalation with polymers possible. The organic modifier, the nature of the polymer, and the processing conditions are the major factors that affect the structure of the resulting composite.
It has been indicated that the functional groups and the chain length of the backbone of the organic modifier have a vital influence on the d-spacing and elastic modulus of polymer-clay composites and on the crystallinity of the polymers [1]. With this in mind, the selection of the polymer and of the organic modification of the clay is a very important factor in the preparation and use of such polymer-clay composites.

It is well known that poly(ethylene oxide) (PEO) is a unique polymer, soluble in both aqueous and organic solvents. Its polyether chain can coordinate with alkali cations (Li+, Na+, Ca2+, etc.), resulting in the formation of polyelectrolytes for batteries, supercapacitors, and fuel cells [2–4]. PEO-based polyelectrolytes show low conductivity, while blending with other polymers and incorporation of salts enhance the conductivity [5]. The incorporation of clay, with its silicate layers, is also known to increase the conductivity of PEO-based electrolytes, and the PEO/Clay composite has been studied in detail from time to time [6–9]. Most of the work so far has been done on composites containing only PEO and clay. To our knowledge there are few or no reports of PEO-PMMA blend clay composites [10, 11], and cetylpyridinium chloride modified montmorillonite clay (CPMMT) has not previously been used to prepare such composites. The present work aims at the synthesis and characterization of PEO-PMMA/Clay composites with LiClO4 salt using cetylpyridinium chloride modified montmorillonite. The detailed X-ray diffraction, electrical, and mechanical properties are investigated and discussed. Thin film fabrication of these composites, which has not been reported earlier, is also presented. This type of thin film configuration may find application not only in solid polymer electrolytes but also in shape memory polymers with improved mechanical properties.

## 2. Experimental

### 2.1. Materials

Poly(ethylene oxide) (PEO) (MW 600,000) and poly(methyl methacrylate) (PMMA) (high molecular weight) were obtained from Acros and BDH Chemicals, respectively. The clay, montmorillonite, was purchased from Aldrich Chemicals. Research grade lithium perchlorate, LiClO4 (MW 106.39), was obtained from Acros Chemicals. All polymers and chemicals were used as received without further purification. Acetonitrile (CH3CN) was used as the solvent; it is a good solvent for the polymers (PEO and PMMA), for montmorillonite, and for the salt.

### 2.2. Methods

#### 2.2.1. Modification of Clay

One drawback of clay minerals for battery electrolytes is their hydrophilic nature, and cation modification is one way to avoid this issue. Researchers are exploring organic cations and their ability to turn hydrophilic clays into organophilic compounds; the term organic implies that organically modified clays can attach to organic polymers. The organic modification of the clay in our system was carried out according to the procedure reported earlier in the literature [12].

#### 2.2.2. Preparation of PEO/Salt/CPMMT and PEO/PMMA/Salt/CPMMT Composite Films

PEO and PMMA were dissolved separately in acetonitrile to prepare 2% solutions. A constant volume of this 2% polymer solution was mixed with different volumes of 1 M LiClO4 and CPMMT, followed by continuous stirring for 24 h at 60°C. The solutions were then transferred to Petri dishes of uniform diameter, kept on smooth, leveled surfaces, covered with lids, and left at room temperature to dry into uniform, smooth films of PEO/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) and PEO/PMMA/CPMMT (3.3 wt%)/Salt (2.1, 3.5, and 5 wt%) polymeric composites, designated PCS2.1, PCS3.5, PCS5, PPCS2.1, PPCS3.5, and PPCS5, respectively. The films obtained were stable and free standing.

### 2.3. Instrumentation

X-ray diffractometry (XRD) was carried out using Cu-Kα radiation at a tube voltage of 40 kV and a current of 20 mA.
A Rigaku (Japan) FX Geiger Series RAD_B system was used for the X-ray diffraction measurements.

The tensile properties of the samples were tested using a Testometric universal testing machine M350/500 (Testometric, UK). Films of the pure polymers and of selected compositions of the composites, of uniform thickness (measured with a digital micrometer) and width, were cut for analysis; the length of each sample was 50 mm. The analysis was performed at room temperature with a cross-head speed of 5 mm/min. For high accuracy and precision, a sensitive load cell of 100 kg capacity with 1.0 mg load detection and a minimum 0.01 mm cross-head speed was used. A special gripping system was designed for thin film gripping to avoid any slippage during the tensile test. Standard procedures and formulae were used for calculating the tensile properties, including Young’s modulus (stiffness) and elongation at break. Data were fed directly into a computer interfaced with the UTM.

The impedance measurements were carried out at room temperature (15°C) using a Solartron 1260 frequency response analyzer (FRA) over the frequency range 1 to 1 × 10⁷ Hz at 100 mV. The impedance data were then transferred to the Z-plot/Z-view software package.

## 3. Results and Discussion

The montmorillonite (MMT) clay structure, together with the CPC structure, is shown in Figure 1.
The structure shows an octahedrally coordinated alumina sheet sandwiched between two tetrahedrally coordinated silica sheets. The spacing between the clay layers is of nanometer order, and water molecules and exchangeable cations such as Na+ reside between the layers. These positive ions sit mostly near the negatively charged sites of the layers, to which they are loosely bound. CPC has a bulky cationic head and a neutral hydrocarbon chain. The MMT clay was modified with cetylpyridinium chloride (CPC); the mechanism is shown in Figure 2. The mechanism clearly shows that the smaller Na+ is exchanged with the bulky cationic head group of CPC, with NaCl released after the treatment. Because of this exchange and the insertion of the larger cation between the layers, the interlayer spacing increases (see Table 1). The modified clay, CPMMT, is organophilic with a lower surface energy and is thus more compatible with organic polymers.

Table 1: Values of d-spacing for the various systems studied.

| System | Peak position* (2θ) | d-spacing (Å) |
| --- | --- | --- |
| MMT | 11.65 | 7.5839 |
| CPMMT | 11.30 | 7.8266 |
| PEO/CPMMT/Salt (2.1 wt%) | 22.95 | 3.8720 |
| PEO/PMMA/CPMMT/Salt (2.1 wt%) | 22.85 | 3.8090 |

*Peak with highest intensity.

Figure 1: Structure of MMT clay and CPC.

Figure 2: Mechanism of modification of MMT.

### 3.1. X-Ray Diffraction Analysis of the PEO/LiClO4 (Constant)/CPMMT Composite System

The XRD pattern of pure PEO shows strong diffraction peaks, representing a highly crystalline structure, as already published in our earlier studies [13]. XRD of LiClO4 was not performed here but, as reported in the literature, the XRD pattern of LiClO4 shows intense peaks at 2θ = 18.36°, 23.2°, 27.5°, 32.99°, and 36.58°, revealing the crystalline nature of the ionic salt [14]. Figures 3 and 4 display the XRD scans of pure and CPC-modified MMT, respectively. It is clear from the diffractograms and Table 1 that modification of the clay by CPC enhances the d-value from 7.5839 to 7.8266 Å, shifting the 2θ value from 11.65° to 11.30°.
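The d-spacings in Table 1 can be reproduced from the quoted peak positions with Bragg's law, d = λ/(2 sin θ), assuming the standard Cu Kα wavelength of about 1.5406 Å (the exact wavelength used is not stated in the paper):

```python
# Sketch: d-spacing from a 2-theta peak position via Bragg's law,
# assuming Cu K-alpha radiation (lambda ~ 1.5406 angstroms).
import math

CU_KALPHA_ANGSTROM = 1.5406  # assumed wavelength

def d_spacing(two_theta_deg: float, wavelength_a: float = CU_KALPHA_ANGSTROM) -> float:
    """Bragg's law: d = lambda / (2 * sin(theta)), with theta = 2theta/2."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_a / (2.0 * math.sin(theta))

# MMT:   2-theta = 11.65 deg -> d ~ 7.59 A (Table 1 lists 7.5839)
# CPMMT: 2-theta = 11.30 deg -> d ~ 7.82 A (Table 1 lists 7.8266)
print(d_spacing(11.65), d_spacing(11.30))
```

The small residual difference from the tabulated values is consistent with rounding of the peak positions and of the wavelength.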
It also shows new peaks at 2θ = 17.75° and 55.3° and the disappearance of peaks at 2θ = 20.8°, 42.4°, 50.15°, and 54.3°. The appearance and disappearance of peaks and the alteration of the d-values clearly indicate the successful modification of the montmorillonite by CPC; the remaining peaks are unaltered. The increased d-spacing promotes the dissociation of the MMT, resulting in composites with better dispersion of the clay particles [15].

Figure 3: XRD scan of pure MMT.

Figure 4: XRD scan of CPMMT.

The polymer/salt/CPC-modified clay structure and interaction mechanism are given in Figure 5. The interaction of CPMMT with the polymer (PEO) shows that polymer molecules enter between the CPC layers attached to the clay. An enlargement of the intercalating portion clearly shows an increase in gallery spacing, associated with a lowering of surface energy. The polymer intercalates within the galleries as a result of the negative surface charge, while the cationic head groups of CPC preferentially reside on the layer surface. The salt, LiClO4, also interacts with both the polymer and the negatively charged clay layers.

Figure 5: Mechanism of the PEO-Salt/CPMMT interaction.

Composites of PEO/LiClO4/CPMMT were synthesized with 1.98, 3.3, 4.62, and 5.94 wt% loadings of modified montmorillonite, keeping the salt content constant at 3.5 wt%. The diffraction patterns of the PEO/Salt/CPMMT (1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt%) composite systems are shown in Figure 6. These XRD patterns reveal that PEO has its minimum crystallinity at a CPMMT loading of 3.3 wt%, and this loading was selected as the optimum condition for the synthesis of the composites. The substantial increase in the intensities of the XRD peaks with increasing CPMMT loading suggests that dispersion is better at lower clay loadings than at higher ones.
This is because the lithium cation coordinates with the flexible CH2-O- chain of PEO, forming complexes and thereby disturbing the crystallinity. When clay is loaded into the PEO/LiClO4 electrolyte, the crystallinity initially decreases up to 3.3 wt% of clay loading and increases thereafter. In undoped PEO the crystallinity decreases gradually with increasing clay loading because of the steric hindrance caused by the huge surface area of randomly oriented clay throughout the matrix. The different crystallization behaviors of the PEO/LiClO4/Clay composite electrolyte and the PEO/Clay composite are explained by the fact that the negatively charged clay layers also coordinate with the lithium cation through a strong electrostatic interaction, which depends on the expansion of the silicate layers and the clay content. Because of this interaction, PEO-Li+ interactions decrease and the crystallinity increases. Thus two competing effects are present in the PEO/LiClO4/Clay composite electrolyte: one reduces the crystallinity and the other favors it. At low clay loading the first factor predominates, leading to a decrease in crystallinity; beyond the optimum clay concentration the second factor predominates over the first, resulting in higher crystallinity [16, 17]. The presence of the CPMMT, however, had no effect on the locations of the peaks, indicating that perfect exfoliation of the layered structure of the organoclay in PEO does not occur [18]. The XRD patterns of the fabricated composites show that most of the peaks corresponding to pure LiClO4 have disappeared in the composite system, revealing dissolution of the salt in the polymer matrix, while the appearance of some LiClO4 peaks in the composite system confirms complexation of the salt with the polymer matrix.

Figure 6: Combined XRD patterns of the PEO/Salt composite system with 1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt% of CPMMT.

### 3.2. X-Ray Diffraction Analysis of the PEO/LiClO4 (Variable)/CPMMT (3.3 wt%) Composite System

X-ray diffraction analysis of the PEO/LiClO4/CPMMT composites with 3.3 wt% of CPMMT at varying salt concentrations is shown in Figure 7. The PEO/LiClO4/Clay composites first show a decrease in the crystallinity of PEO with increasing amount of salt, but when the salt concentration is increased beyond PCS2 (3.5 wt%), the crystalline character of PEO starts increasing; this is attributed to local aggregation of inorganic particles at higher salt concentration. The same result is manifested in our mechanical and Scanning Electron Microscopy (SEM) studies.

Figure 7: Combined XRD scans of PEO/CPMMT/Salt (PCS1) (2.1 wt%, A), PEO/CPMMT/Salt (PCS2) (3.5 wt%, B), and PEO/CPMMT/Salt (PCS3) (5 wt%, C).

### 3.3. X-Ray Diffraction Analysis of the PEO/PMMA/LiClO4 (Variable)/CPMMT (3.3 wt%) Blend Composite System

In order to investigate the effect of poly(methyl methacrylate) (PMMA) addition on the crystallinity of PEO, X-ray analysis of the PEO/PMMA/LiClO4/CPMMT blend composites with variable salt concentration and constant clay content (3.3 wt%) was carried out. The diffraction patterns in Figure 8 show that, although PMMA is amorphous in nature, its addition has no significant effect on the system; the crystalline fraction of PEO increases slightly. This is because the amount of PEO in the PEO/PMMA blend is far above the overlap weight fraction (W*), which allows PEO to crystallize, and also because PMMA interacts with CPMMT more strongly than PEO does, affecting the properties of PEO in the blend. The d-spacing between the layers of this system decreases (Table 1), which also accounts for the increase in crystalline behavior.
This result is consistent with our AC impedance study and is also supported by the literature [18].

Figure 8: Combined XRD scans of PEO/PMMA/CPMMT/Salt (PPCS1) (2.1 wt%, A), PEO/PMMA/CPMMT/Salt (PPCS2) (3.5 wt%, B), and PEO/PMMA/CPMMT/Salt (PPCS3) (5 wt%, C).

### 3.4. Ionic Conductivity of the PEO Composite and Blend Composite Systems

In a Nyquist impedance plot, the real part (Z′) of the impedance is plotted against the imaginary part (Z″) for data collected at frequencies from 1 to 10⁷ Hz. To obtain a complete picture of the system, an equivalent circuit was used [19], from which the bulk resistance of the solid polymer electrolyte (SPE) was derived. Figures 9(a)–9(d) show the Nyquist impedance plots for PEO/LiClO4, denoted PS (a); PEO/LiClO4 after fitting to the equivalent circuit (b); PEO/CPMMT/LiClO4, denoted PCS (c); and PEO/PMMA/CPMMT/LiClO4, denoted PPCS (d), after fitting to the equivalent circuit. These diagrams deviate from an ideal impedance spectrum, which exhibits a standard semicircle in the high frequency section and a vertical line in the lower frequency section. The deformed semicircle and the inclined line for the polymeric film/electrode system may be attributed to the irregular thickness and morphology of the polymeric film and to the roughness of the electrode surface [20, 21]. To capture this behavior, a constant phase element (CPE) was employed in the equivalent circuit. The high frequency semicircle reflects the combination of R1 and CPE-1, while the spike, tending toward a second semicircle due to the double layer capacitance at the solid polymer electrolyte/electrode interface, is reflected by CPE-2 [19].
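A minimal numeric sketch of the circuit just described (R1 in parallel with CPE-1, in series with CPE-2) is given below; the parameter values are illustrative assumptions, not the fitted values from the inset table of Figure 9. For an ideal capacitor (n = 1), scanning the R1 || CPE-1 branch recovers the apex condition ω_max·R·C = 1.

```python
# Sketch (assumed parameters): impedance of R1 || CPE-1 in series with
# CPE-2, the equivalent circuit described in the text.
import math

def z_cpe(omega: float, q: float, n: float) -> complex:
    """Constant phase element: Z = 1 / (Q * (j*omega)^n); n = 1 is a capacitor."""
    return 1.0 / (q * (1j * omega) ** n)

def z_circuit(omega, r1, q1, n1, q2, n2):
    """R1 in parallel with CPE-1, in series with CPE-2."""
    z1 = 1.0 / (1.0 / r1 + 1.0 / z_cpe(omega, q1, n1))  # R1 || CPE-1
    return z1 + z_cpe(omega, q2, n2)                    # + series CPE-2

# For n1 = 1 the apex of the R1 || CPE-1 semicircle satisfies
# omega_max * R1 * C = 1, consistent with Eq. (2). Illustrative values:
r1, c1 = 1.0e5, 1.0e-9
freqs = [10 ** (k / 50.0) for k in range(0, 400)]  # ~1 Hz to ~1e8 Hz grid
apex_f = max(
    freqs,
    key=lambda f: -(1.0 / (1.0 / r1 + 1.0 / z_cpe(2 * math.pi * f, c1, 1.0))).imag,
)
print(apex_f, 1.0 / (2 * math.pi * r1 * c1))  # apex vs theoretical 1/(2*pi*R*C)
```

With n slightly below 1 the semicircle flattens, which is how the CPE reproduces the deformed arcs seen in Figure 9.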
The equivalent circuit used for fitting data and table for parameters for the circuit elements evaluated by fitting the impedance data for composite and blend system at room temperature (15°C) is given in Figure 9 as inset.Figure 9 Typical Nyquist impedance plots for PEO/Salt (PS) (a), PEO/Salt (PS) after fitting to equivalent circuit (b). Inset showing that diagram of circuit and extracted parameters for the circuit elements of PS, PCS, and PPCS are summarized in the table. PEO/Salt/CPMMT (PCS) (c). PEO/PMMA/Salt/CPMMT (PPCS) (d). (a) (b) (c) (d)From equivalent circuits the bulk resistance values were obtained. The bulk resistance allows us to obtain the ionic conductivity using(1) σ = I R A ,where σ = conductivity (S/cm), R = resistance (Ω), I = thickness (cm), and A = area of the electrode (c m 2).The capacitance values were calculated according to(2) ω max ⁡ R C = 1 ,where ω m a x corresponds to the frequency at the maximum of semicircle. The capacitance values obtained for the bulk are in complete harmony with the earlier reported values [22].The value of ionic conductivity obtained at room temperature (15°C) for pure poly(ethylene oxide) (PEO) is less than 6.78 × 10−10 S cm−1 reported in the literature earlier by Kumar and coworker [17] for the same molecular weight PEO at 30°C. This difference in conductivity values is because of the temperature and changing nature of solvent used in our study [23]. From the table given as inset in Figure 10, it is clear that the conductivity of PEO at laboratory temperature, that is, 15°C, increases sharply with the salt incorporation. The same trend in conductivity of PEO based electrolytes with the salt concentration has also been observed by Srivastava and Ibrahim et al. [24, 25]. This increase is due to the increase in charge carriers caused by the addition of higher concentration of LiClO4 and the increase in the fraction of amorphous phase. 
The addition of ionic salt decreases degradation temperature because of the growth of amorphous fraction and destabilizes the polymer network. The PEO/LiClO4 electrolyte with high salt concentration was found to be less stable. Alternatively CPMMT was used to overcome these drawbacks. Inorganic fillers are usually used to improve the electrochemical and mechanical properties [26]. Clay is inorganic filler with intercalation property, where clay layers maintain their registry. Intercalating polymer (residing polymer chains between silicates) in a layered clay host can produce huge interfacial area to sustain the mechanical property of polymer electrolyte system and impart salt solvating power to dissolve the lithium salt [27]. A glance at Figure 10 and inset table reveals that the addition of salt at constant (3.3 wt%) clay content increases the conductivity of PEO/Salt/CPMMT (PCS) composites retaining dimensional stability till PCS2 (3.5 wt%); beyond PCS2, further addition of salt decreases the conductivity badly. This initial increase is due to the decrease in the crystallinity and increase in amorphous fraction of PEO for ion conduction till equilibrium is achieved at PCS2. This is consistent with our XRD results. The conductivity decreases drastically, when amount of salt increases from PCS2 to PCS3 (5 wt%), but is still higher than that of pristine polymer. The possible explanation for this behavior may be ion association and the formation of charge multipliers [25]. In order to study the effect of poly(methyl methacrylate) (PMMA) incorporation on the ionic conductivity of PEO based solid polymer electrolytes, PMMA was blended with PEO for solid polymer electrolyte (SPE) composites. From the values of ionic conductivity given in Figure 10 and inset table, it is clear that the addition of PMMA to PEO/Salt electrolyte system decreases the conductivity of PCS system but still shows higher value than pure PEO films. 
The rigid structure of PMMA, together with the entrapped silicate layers, alters the segmental dynamics of PEO, so the conductivity decreases. Jeddi and coworkers [28] have reported an overlap weight fraction for the PEO/PMMA blend of about 2.8 wt% of PEO. The overlap weight fraction is the weight fraction at which PEO chains start to interpenetrate, affecting the miscibility of the blend. In our system the amount of PEO is far above the overlap weight fraction. This causes a decrease in conductivity and an increase in the agglomeration of clay by reducing its interaction with the PEO. The same trend has been observed in the mechanical properties of the PEO/PMMA/Salt/CPMMT (PPCS) composites. The values of ionic conductivity reached at a laboratory temperature of 15°C are higher than those reported for PEO/PMMA/salt/Na-MMT in the literature at 25°C [22]. This increase may be caused by the better dispersion of CPMMT. Figure 10 Bulk ionic conductivity variation for PCS and PPCS with weight % of salt for the composite system at room temperature (15°C). ### 3.5. Elongation at Break of PEO/LiClO4/CPMMT Composite System Elongation at break is the strain at failure (the percent change in length at failure) and describes the ductility of the material under an external force. The effect of salt addition on the ductility (% elongation) is shown in Figure 11. The result in Figure 11 shows that the ductility of the composite material increases with increasing salt concentration. This increase is attributed to the presence of CPMMT, which enhances the mobility of the PEO polymer. The highest % elongation at break is obtained for the PCS2 composite; beyond PCS2, the % elongation at break decreases. The higher uniformity in the dispersion of salt and clay within PEO is correlated with better adhesion between the components of the composite, owing to the homogeneous dispersion of CPMMT at the PCS2 composition.
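Elongation at break, as defined above, is simply the failure strain expressed as a percentage. A minimal sketch (the gauge lengths are illustrative, not values from this work):

```python
def elongation_at_break(initial_length_mm, length_at_failure_mm):
    """Percent elongation at break: (L_f - L_0) / L_0 * 100."""
    return 100.0 * (length_at_failure_mm - initial_length_mm) / initial_length_mm

# A specimen stretching from a 25 mm gauge length to 30 mm at failure
e_break = elongation_at_break(25.0, 30.0)  # 20.0 %
```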
The decrease in the ductility beyond PCS2 is due to the restriction of chain mobility in the matrix, with the filler particles acting as defect points [29]. This also shows that the behavior changes beyond a certain limit of salt concentration. Further, at higher concentrations the polymer exists in agglomerated form and the clay is not well dispersed. The overall result is nevertheless an increase in the ductility of the composite material with increasing salt concentration. The net increase in elongation at break for the PCS system suggests filler-induced dimensional stability in the composite electrolyte films, making them better able to withstand external pressure or shock. Figure 11 Variation of elongation at break for PEO/Clay (PC), PEO/PMMA, and PEO/PMMA/Clay (PPC) composite and blend composite system with varying content of salt. ### 3.6. Young’s Modulus of PEO/LiClO4/CPMMT Composites Young’s modulus describes the relationship of stress to strain within the elastic region. It is measured from the slope of the stress-strain curve within the elastic region of the specimen. The modulus of elasticity describes a material’s stiffness; the greater the modulus, the stiffer the material. It quantifies the elasticity of the polymer and is closely associated with primary and secondary chemical bonds. Unlike the neat polymer, where the mechanical properties are determined almost entirely by the matrix, the mechanical properties of the composite depend on the interaction between the polymer and the added fillers. From Figure 12 it is clear that Young’s modulus of the composite electrolytes decreases with increasing concentration of inorganic content at a constant clay level. The influence of LiClO4 on the mechanical properties of the PEO/CPMMT film resembles a plasticization effect. The interaction between PEO and CPMMT is weakened by the increasing content of salt. The same behavior of Young’s modulus with filler has been reported earlier in the literature [29].
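The modulus-as-slope definition in Section 3.6 can be sketched as a least-squares fit over the initial linear region of the stress-strain curve. The elastic-limit cutoff and the synthetic data below are illustrative assumptions, not data from this work:

```python
import numpy as np

def youngs_modulus(strain, stress_mpa, elastic_limit=0.01):
    """Least-squares slope of the initial (elastic) region of a stress-strain
    curve; the result carries the units of the stress array (here MPa)."""
    mask = strain <= elastic_limit
    slope, _intercept = np.polyfit(strain[mask], stress_mpa[mask], 1)
    return slope

# Synthetic elastic data obeying stress = 500 MPa * strain
strain = np.linspace(0.0, 0.01, 20)
stress = 500.0 * strain
modulus = youngs_modulus(strain, stress)  # ~500 MPa
```

Restricting the fit to the region below the elastic limit matters: including post-yield points would bias the slope low, understating the stiffness.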
Because the mechanical properties change with both the composition of the components and the applied force, they are difficult to analyze. This decrease may also be explained in terms of debonding at the polymer-clay interphases and void formation. It can be concluded that the value of the modulus depends strongly on the distribution of filler particles in the polymer matrix, which in turn depends on the particle-particle interaction (agglomeration), the polymer-particle interaction (adhesion and wetting), and the morphology of the filler particles [30]. Figure 12 Variations of Young’s modulus for PEO/Clay (PC), PEO/PMMA, and PEO/PMMA/Clay (PPC) composite and blend composite system with varying content of salt. ### 3.7. Elongation at Break of PEO/PMMA/LiClO4/CPMMT Blend Composites System In order to gain a clearer idea of the change in the mechanical properties of the blend composite system, the effect of salt addition on the mechanical properties of the blend was studied first; CPMMT was then added to the same system, and the samples were analyzed by UTM. From the results given in Figure 11, it is clear that the elongation at break first decreases with increasing salt concentration in the blend system and then starts increasing at higher salt concentrations. This decrease in failure strain is due to the rigid filler addition, which restricts the PEO molecules from flowing freely past one another, thus causing premature failure. The original elasticity of PEO is altered by the addition of PMMA and LiClO4, which is in close agreement with the conclusion that the addition of rigid particles like PMMA into the polymer matrix increases its stiffness and toughness [31, 32]. Composites with these properties can be used as heat-resistant materials or in product packaging. ### 3.8. Young’s Modulus of PEO/PMMA/LiClO4/CPMMT Blend Composites System Young’s modulus of the PEO/PMMA blend as a function of salt content is shown in Figure 12.
From this figure it is clear that Young’s modulus of the blend composite shows an overall decrease with the addition of salt. This decrease reflects weaker PEO interchain interaction and an increase in the particle size of the inorganic phase because of local aggregation of particles in the presence of PMMA; these phenomena may act as flaws in the material [33, 34]. The same trend has been confirmed by the SEM results as well. This means that the addition of salt to the PEO/PMMA composite suppresses the material’s stiffness and hence the elasticity of the polymer. However, when clay was added to the same PEO/PMMA/Salt system, an enormous increase in the value of Young’s modulus was observed, as shown in Figure 12. This is due to the intercalation of polymer chains within the clay galleries, which restricts the segmental motion of the polymer chains [35]. Although there is an overall decrease in the value of Young’s modulus of the PEO/PMMA/LiClO4/CPMMT system with increasing salt concentration, it is still much higher than that of the virgin (neat) poly(ethylene oxide) (PEO) and of PEO/Salt/CPMMT. This is in close agreement with the conclusion that the addition of rigid particles like PMMA into the polymer matrix increases its stiffness [31]. ## 3.1. X-Ray Diffraction Analysis of PEO/LiClO4 (Constant)/CPMMT Composite System The XRD pattern of pure PEO shows intense diffraction peaks, representing a highly crystalline structure, as already published in our earlier studies [13]. XRD of LiClO4 was not performed here but, as reported in the literature, the XRD pattern of LiClO4 shows intense peaks at 2θ = 18.36°, 23.2°, 27.5°, 32.99°, and 36.58°, revealing the crystalline nature of the ionic salt [14]. Figures 3 and 4 display the XRD scans of pure and CPC-modified MMT, respectively. It is clear from the diffractograms and Table 1 that modification of the clay by CPC enhances the d-value from 7.5839 Å to 7.82366 Å, shifting the 2θ value from 11.65° to 11.30°.
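The reported d-values follow from Bragg's law, nλ = 2d sin θ (n = 1). A minimal sketch assuming Cu-Kα radiation (λ ≈ 1.5406 Å, an assumption; the instrument wavelength is not stated here) reproduces the shift from about 7.59 Å at 2θ = 11.65° to about 7.82 Å at 2θ = 11.30°:

```python
import math

CU_K_ALPHA_ANGSTROM = 1.5406  # assumed Cu-K-alpha wavelength

def d_spacing(two_theta_deg, wavelength_angstrom=CU_K_ALPHA_ANGSTROM):
    """First-order Bragg's law: d = lambda / (2 * sin(theta)),
    where theta is half the measured 2-theta angle."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta_rad))

d_unmodified = d_spacing(11.65)  # ~7.59 A (pure MMT basal reflection)
d_modified = d_spacing(11.30)    # ~7.82 A (CPC-modified MMT)
```

The lower-angle shift of the basal reflection thus corresponds directly to the larger gallery spacing after CPC modification.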
It also shows the appearance of new peaks at 2θ = 17.75° and 55.3° and the disappearance of some peaks at 2θ = 20.8°, 42.4°, 50.15°, and 54.3°. The appearance and disappearance of peaks and the alteration of d-values clearly indicate the successful modification of montmorillonite by CPC. The remaining peaks are unaltered. The increased d-spacing promotes the dissociation of MMT, resulting in composites with better dispersion of the clay particles [15]. Figure 3 XRD scan of pure MMT. Figure 4 XRD scan of CPMMT. The structure and interaction mechanism of the polymer/salt/CPC-modified clay are given in Figure 5. The interaction of CPMMT with the polymer (PEO) shows that polymer molecules enter between the CPC layers attached to the clay. An enlargement of the intercalating portion clearly shows an increase in gallery spacing, which is associated with a lowering of the surface energy. The polymer intercalates within the galleries as a result of the negative surface charge, and the cationic head groups of CPC preferentially reside on the layer surface. The salt, LiClO4, also interacts with both the polymer and the negatively charged clay layers. Figure 5 Mechanism of PEO-Salt/CPMMT interaction. Composites of PEO/LiClO4/CPMMT were synthesized with 1.98, 3.3, 4.62, and 5.94 wt% of modified montmorillonite loading, keeping the salt content constant at 3.5 wt%. The diffraction patterns of the PEO/Salt/CPMMT (1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt%) composite systems are shown in Figure 6. These XRD patterns reveal that PEO has its minimum crystallinity at a CPMMT loading of 3.3 wt%. This 3.3 wt% CPMMT loading was therefore selected as the optimum condition for the synthesis of the composites. The substantial increase in the intensities of the XRD peaks at higher CPMMT loadings suggests that the dispersion is better at lower clay loading than at higher loadings.
This is because the lithium cation coordinates with the flexible -CH2-O- chain of PEO, forming complexes and thereby disturbing the crystallinity. When the clay is loaded into the PEO/LiClO4 electrolyte, the crystallinity initially decreases up to 3.3 wt% of clay loading and increases thereafter. In the case of undoped PEO, the crystallinity gradually decreases with increasing clay loading because of the steric hindrance caused by the huge surface area of the randomly oriented clay throughout the matrix. The different crystallization behaviors of the PEO/LiClO4/clay composite electrolyte and the PEO/clay composite are explained by the fact that the negatively charged clay layers also coordinate with the lithium cation through a strong electrostatic interaction. This interaction depends on the expansion of the silicate layers and the clay content. Because of this interaction, PEO-Li+ interactions decrease and the crystallinity increases. Thus, two competing effects are present in the PEO/LiClO4/clay composite electrolyte: one reduces the crystallinity and the other favors it. At low clay loading the first factor predominates, leading to a decrease in the crystallinity; beyond the optimum clay concentration the second factor predominates over the first, resulting in higher crystallinity [16, 17]. The presence of the CPMMT, however, had no effect on the location of the peaks, which indicates that perfect exfoliation of the layered structure of the organoclay in PEO does not occur [18]. The XRD patterns of the fabricated composites show that most of the peaks corresponding to pure LiClO4 have disappeared in the composite system, which reveals the dissolution of the salt in the polymer matrix. Similarly, the appearance of some of the LiClO4 peaks in the composite system confirms the complexation of the salt with the polymer matrix. Figure 6 Combined XRD pattern of PEO/Salt composite system with 1.98 (A), 3.3 (B), 4.62 (C), and 5.94 (D) wt% of CPMMT. ## 3.2.
X-Ray Diffraction Analysis of PEO/LiClO4 (Variable)/CPMMT (3.3 wt%) Composite System The X-ray diffraction analysis of the PEO/LiClO4/CPMMT composite with 3.3 wt% of CPMMT clay at varying salt concentrations is shown in Figure 7. The PEO/LiClO4/clay composites first show a decrease in the crystallinity of PEO with increasing amount of salt; however, when the salt concentration is increased beyond PCS2 (3.5 wt%), the crystalline character of PEO starts to increase. This is attributed to the local aggregation of inorganic particles at higher salt concentration. The same result is manifested in our mechanical and Scanning Electron Microscopy (SEM) studies. Figure 7 Combined XRD scans of PEO/CPMMT/Salt (PCS1) (2.1 wt% A), PEO/CPMMT/Salt (PCS2) (3.5 wt% B), and PEO/CPMMT/Salt (PCS3) (5 wt% C). ## 3.3. X-Ray Diffraction Analysis of PEO/PMMA/LiClO4 (Variable)/CPMMT (3.3 wt%) Blend Composite System In order to investigate the effect of poly(methyl methacrylate) (PMMA) addition on the crystallinity of PEO, X-ray analysis was carried out on the PEO/PMMA/LiClO4/CPMMT blend composite with variable salt concentrations and constant clay content (3.3 wt%). From the diffractogram pattern given in Figure 8, it is clear that, although PMMA is amorphous in nature, its addition has no significant effect on the system; the crystalline fraction of PEO increased only slightly upon its addition. This is because the amount of PEO in the PEO/PMMA blend is far above the overlap weight fraction (W*), which causes PEO to crystallize, and also because PMMA interacts with CPMMT more strongly than PEO does, which affects the properties of PEO in the blend. The d-spacing between the layers of the system is found to decrease (Table 1), which also accounts for the increase in crystalline behavior.
This result is consistent with our AC impedance study and is also supported by the literature [18]. Figure 8 Combined XRD scans of PEO/PMMA/CPMMT/Salt (PPCS1) (2.1 wt% A), PEO/PMMA/CPMMT/Salt (PPCS2) (3.5 wt% B), and PEO/PMMA/CPMMT/Salt (PPCS3) (5 wt% C). ## 3.4. Ionic Conductivity of PEO Composite and Blend Composite System In a Nyquist impedance plot, the real part (Z′) of the impedance is plotted against the imaginary part (Z″); data were collected at frequencies ranging from 1 to 10⁷ Hz. To obtain a complete picture of the system, an equivalent circuit was used [19]. The bulk resistance of the solid polymer electrolyte (SPE) was derived from the equivalent circuit. Figures 9(a)-9(d) show the Nyquist impedance plots for PEO/LiClO4, denoted PS (a); PEO/LiClO4 after fitting to the equivalent circuit (b); PEO/CPMMT/LiClO4, denoted PCS (c); and PEO/PMMA/CPMMT/LiClO4, denoted PPCS (d). These diagrams deviate from an ideal impedance spectrum, which usually exhibits a standard semicircle in the high-frequency section and a vertical line in the lower-frequency section. The depressed semicircle and the inclined line for the polymeric film/electrode system may be attributed to the irregular thickness and morphology of the polymeric film and the roughness of the electrode surface [20, 21]. To account for this behavior, a “constant phase element” (CPE) was employed in the equivalent circuit. The high-frequency semicircle reflects the combination of R1 and CPE-1, while the spike, marking the onset of a second semicircle due to the double-layer capacitance at the solid polymer electrolyte/electrode interface, is reflected by CPE-2 [19].
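The fitted circuit described above, R1 in parallel with CPE-1 (bulk semicircle) in series with CPE-2 (electrode double-layer spike), can be simulated to generate a model Nyquist curve. The parameter values below are illustrative stand-ins, not the fitted values from the inset table:

```python
import numpy as np

def z_cpe(omega, q, n):
    """Constant phase element: Z = 1 / (Q * (j*omega)^n).
    n = 1 is an ideal capacitor; n < 1 gives a depressed semicircle."""
    return 1.0 / (q * (1j * omega) ** n)

def z_model(omega, r1, q1, n1, q2, n2):
    """R1 || CPE-1 (bulk response) in series with CPE-2 (electrode interface)."""
    z_bulk = 1.0 / (1.0 / r1 + 1.0 / z_cpe(omega, q1, n1))
    return z_bulk + z_cpe(omega, q2, n2)

# Sweep 1 Hz to 10^7 Hz, as in the measurement; all parameters hypothetical
freq = np.logspace(0, 7, 200)
z = z_model(2.0 * np.pi * freq, r1=1e5, q1=1e-9, n1=0.9, q2=1e-6, n2=0.8)
# A Nyquist plot is Re(z) on the x-axis versus -Im(z) on the y-axis
```

At high frequency the CPE-1 branch shorts out R1 (small impedance), while at low frequency the series CPE-2 dominates and produces the inclined spike, mirroring the measured spectra.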
## 4. Conclusions In this work, cetylpyridinium chloride was used to modify MMT, which was mixed with high molecular weight PEO/LiClO4 and PEO/PMMA/LiClO4 to produce composite materials. The experimental results showed that, at constant salt content, the addition of CPMMT first reduces the crystallinity of PEO up to 3.3 wt% of clay, beyond which the crystallinity starts to increase. Thus, 3.3 wt% was selected as the optimum clay loading for composite fabrication. The XRD results showed that the crystallinity of the composites at the optimum clay loading increases with increasing salt content, and the ionic conductivity obtained from the impedance technique showed a declining trend at higher salt content.
The addition of 50 wt% of high molecular weight PMMA to the PEO/Salt/CPMMT composite affected the properties because of immiscibility and aggregation of the filler within the polymer matrix; nevertheless, the blend composites showed better mechanical performance. The composite of PEO with 3.5 wt% of salt and 3.3 wt% of CPMMT exhibited the best performance among the compositions studied. --- *Source: 101692-2015-08-19.xml*
2015
# Schiff Base Ligand Coated Gold Nanoparticles for the Chemical Sensing of Fe(III) Ions

**Authors:** Abiola Azeez Jimoh; Aasif Helal; M. Nasiruzzaman Shaikh; Md. Abdul Aziz; Zain H. Yamani; Amir Al-Ahmed; Jong-Pil Kim
**Journal:** Journal of Nanomaterials (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101694

---

## Abstract

New Schiff base-coated gold nanoparticles (AuNPs) of type AuNP@L (where L is a thiolated Schiff base ligand) have been synthesized and characterized using various spectroscopic techniques. The AuNPs and AuNP@L were imaged by transmission electron microscopy (TEM) and were confirmed to be well-dispersed, uniformly distributed, spherical nanoparticles with an average diameter of 8–10 nm. Their potential applications for chemosensing were investigated in UV-Vis and fluorescence spectroscopic studies. The AuNP@L exhibited selectivity for Fe3+ in an ethanol/water mixture (9 : 1 v/v). The absorption and emission spectral studies revealed a 1 : 1 binding mode for Fe3+, with binding constants of 8.5 × 10⁵ and 2.9 × 10⁵ M⁻¹, respectively.

---

## Body

## 1. Introduction

In recent years, gold nanoparticles (AuNPs) have attracted substantial attention for their extensive application in drug delivery [1, 2], magnetic resonance imaging (MRI) [3, 4], X-ray computed tomography (X-ray CT) [5], catalysis [6], biosensing [7, 8], and so forth, because their size, shape, and surface functionalization are easily controlled through the ligands and corresponding metal complexes. One of the typical applications of AuNPs in current research is the colorimetric detection of metal ions in the environment as well as in physiological systems, because they possess excellent optical properties, such as high extinction coefficients and distance-dependent plasmonic absorption [9, 10].
However, the challenge moving forward is to prevent aggregation of the nanoparticles in high-ionic-strength solutions, because aggregation restricts the broad and practical application of AuNPs in the detection of ionic species [11, 12]. Furthermore, the detection of Fe(III) at trace levels is relevant because iron, with its chemical versatility, is essential for the proper functioning of numerous organisms across the entire spectrum of the biological system [13]. In the human body, iron is one of the most essential trace elements; an imbalance of ferric ion (Fe(III)) in the body is associated with anemia, hemochromatosis, liver damage, diabetes, Parkinson’s disease, and cancer [14–16]. Ferric ions also play critical roles in the growth and development of living cells and catalyze numerous biochemical processes [17]. An abundance of Fe(III) beyond physiological levels, however, triggers the failure of multiple organs, such as the heart, pancreas, and liver [18, 19]. In this regard, the judicious selection and proper design of an adequate receptor are vital. Numerous studies on the development of Schiff base chemosensors for the detection of Hg(II), Zn(II), Al(III), and other ions have recently been reported in the literature [20–22]. However, the availability of chemosensors for Fe(III) with low detection limits is rather limited, and the amount of material required to produce a detectable signal is high. Conventional detection of Fe3+ relies on several standard analytical techniques, such as inductively coupled plasma atomic emission spectrometry (ICP-AES) [23], inductively coupled plasma mass spectrometry (ICP-MS) [24, 25], atomic absorption spectrometry (AAS) [26], and voltammetry [27]. However, these methods are expensive, bulky, and time consuming because they require tedious pretreatment procedures for sample preparation.
In contrast, fluorescence microscopy, based on optical fluorescence, is a simple, easy, inexpensive, and highly selective tool for studying the localization, trafficking, and expression levels of biomolecules and metal ions within living cells [28]. Most Fe3+ sensing methods are based on an organic chemosensor that either undergoes fluorescence quenching, because of the paramagnetic nature of the ferric ion [29], or operates by a “turn-on” mechanism [30]. In the search for a new chemosensor with high sensitivity and a very low detection limit for Fe(III), the combination of nanotechnology and a metal-binding unit became an obvious choice. AuNPs, which exhibit good optical properties as signaling units as well as the ability to carry high payloads on their surface, combined with ligands bearing strong coordinating elements, have enabled the development of a suite of highly efficient chemosensors. However, the literature contains few reports of their application. For example, Zhang et al. reported well-dispersed AuNPs for detecting sugars by the hypsochromic surface plasmon resonance (SPR) shift [31]. Bai et al. also reported 4-piperazinyl-1,8-naphthalimide-functionalized AuNPs for Fe(III) recognition, and their results were highly encouraging [32]. Here, we report the synthesis of a thiolated Schiff base ligand by the reaction of salicylaldehyde and 4-aminothiophenol, followed by its anchoring onto the surface of AuNPs through the replacement of citrate as the stabilizing agent. The results of the characterization of the ligand and the resulting surface-functionalized AuNP@L are described. The efficiency of AuNP@L as a chemosensor is also reported on the basis of fluorescence and UV-Vis studies. ## 2. Experimental ### 2.1. General Remarks All of the chemicals and solvents were purchased from Sigma-Aldrich. The 1H and 13C NMR spectra and chemical shifts were recorded in deuterated chloroform (CDCl3) on a JEOL 500 MHz spectrometer.
FT-IR spectra were collected on a Nicolet (Thermo Scientific) spectrometer using an iTR sample holder in the wavenumber range from 600 to 4000 cm−1. Absorption spectra were collected at room temperature using a JASCO-670 spectrophotometer, and emission spectra were acquired on a Fluorolog (Horiba) system. Diffraction data were collected on a Rigaku MiniFlex II diffractometer equipped with a Cu-Kα radiation source; the data were acquired over the 2θ range between 25 and 110°. The surface morphology of the NPs was examined by field-emission scanning electron microscopy (FESEM) on a LYRA 3 Dual Beam microscope (Tescan) operated at 30 kV. FESEM samples were prepared from either a suspension or a dry powder. Energy-dispersive X-ray spectra for the chemical and elemental analyses of the NPs were collected using an Oxford X-Max detector. TEM was performed on a Philips CM200 operated at 200 kV; for sample preparation, one drop of the aqueous AuNP@L solution was spread onto a 200-mesh carbon-coated copper grid and allowed to dry at room temperature.

### 2.2. Synthesis of Schiff Base Ligand

The thiolated bidentate Schiff base ligand was prepared (Scheme 1) according to a procedure reported in the literature [33, 34]. To an ethanolic solution of salicylaldehyde, an equimolar amount of 4-aminothiophenol was added, and the mixture was refluxed at 90°C for 5 h. The yellow precipitate was filtered, purified by recrystallization from methanol, and finally dried under vacuum to give a 91% yield.

Scheme 1

### 2.3. Synthesis of AuNP@L

AuNPs coated with citrate (AuNP@Cit) were prepared using the citrate (Cit) reduction method in deionized water (Scheme 2). HAuCl4·3H2O (0.33 g, 1 mmol) in 500 mL of water was refluxed in a 1 L round-bottom flask equipped with a condenser. The mixture was stirred vigorously under argon for 30 min.
Trisodium citrate (10 mL, 1.14 g, 3.88 mmol) solution was rapidly added, which resulted in a color change from yellow to purple. After the mixture was boiled for another 10 min, the heating mantle was removed and the mixture was allowed to cool to room temperature. AuNP@L was prepared as follows. To the freshly prepared AuNP@Cit (50 mL), the Schiff base ligand (5 mg in 0.5 mL methanol) was added in one portion, and the mixture was stirred for 5 h at room temperature. The AuNP@L precipitated upon the addition of an equal amount of acetone. The nanoparticles were collected by centrifugation and washed successively with water and acetone to remove unreacted ligand.

Scheme 2
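As a quick consistency check on the quantities above, the stated citrate mass can be converted to moles. This is a minimal sketch; the dihydrate molar mass is an assumption, since the hydration state of the trisodium citrate is not specified in the text.

```python
# Check that 1.14 g of trisodium citrate corresponds to the stated
# 3.88 mmol, and compute the citrate:Au molar ratio of the recipe.

MW_CITRATE_DIHYDRATE = 294.10  # g/mol, Na3C6H5O7·2H2O (assumed hydrate)

citrate_mmol = 1.14 / MW_CITRATE_DIHYDRATE * 1000
gold_mmol = 1.0  # HAuCl4·3H2O, as stated in the procedure

print(round(citrate_mmol, 2))              # ≈ 3.88 mmol, matching the text
print(round(citrate_mmol / gold_mmol, 1))  # citrate:Au ≈ 3.9 : 1
```

The roughly fourfold citrate excess is typical of Turkevich-style reductions, where citrate acts as both reductant and stabilizer.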
## 3. Results and Discussion

### 3.1. Synthesis and Characterization

The 2-[(4-mercaptophenyl)iminomethyl]phenol Schiff base (L) was prepared by the reaction between salicylaldehyde and 4-aminothiophenol in ethanol (1 : 1 mole/mole) under reflux conditions for 5 h (Scheme 1). The resulting yellow solid was recrystallized from methanol in 94% yield.
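The mass balance behind these condensation yields is simple to sketch. The molar masses below are standard values; the 1 mmol scale and the isolated mass are hypothetical, since the text reports only percentages.

```python
# Rough mass-balance check for the Schiff base condensation
# (salicylaldehyde + 4-aminothiophenol -> ligand L + H2O).

MW_SALICYLALDEHYDE = 122.12  # g/mol, C7H6O2
MW_AMINOTHIOPHENOL = 125.19  # g/mol, C6H7NS
MW_WATER = 18.02             # g/mol, lost on imine condensation

# Molar mass of the imine product L (C13H11NOS)
mw_ligand = MW_SALICYLALDEHYDE + MW_AMINOTHIOPHENOL - MW_WATER

def percent_yield(isolated_g, scale_mol):
    """Percent yield relative to the theoretical mass at a 1:1 ratio."""
    theoretical_g = scale_mol * mw_ligand
    return 100.0 * isolated_g / theoretical_g

print(round(mw_ligand, 2))                    # 229.29 g/mol
print(round(percent_yield(0.2087, 1e-3), 1))  # ~91% for an assumed 0.2087 g at 1 mmol
```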
The formation of the imino ligand was confirmed by 1H and 13C NMR, which showed a characteristic azomethine proton shift at δ 8.9 ppm, supported by the azomethine carbon shift at δ 160.2 ppm (see supporting information in Supplementary Material available online at http://dx.doi.org/10.1155/2015/101694). The FT-IR spectrum of the imino ligand showed peaks at 1614 cm−1 and 3448 cm−1, corresponding to the vibration modes of the C=N and –OH groups, respectively. Upon coordination of the free ligand with Fe3+, the C=N band shifted to a lower wavenumber (1609 cm−1), indicating the formation of a metal complex. A similar shift was observed for the phenolic group upon its participation in coordination with the metal center. The surface of the AuNP@Cit nanoparticles was functionalized via the one-step addition of the ligand in a minimum amount of methanol (Scheme 2). Optimization of the Au-to-ligand molar ratio was critical for the preparation of AuNP@L because excess ligand resulted in aggregation and precipitation. The formation of AuNP@Cit and AuNP@L was confirmed by spectroscopic techniques. For instance, the visible absorption band (λmax) of the citrate-coated nanoparticles at 525 nm shifted to 530 nm for the ligand-modified particles; this shift was attributed to the surface plasmon vibration of the ligand-modified particles. The binding of the thiolated imino ligand to the Au surface was further confirmed by the disappearance of the –SH stretch in the FT-IR spectrum, indicating Au–S bond formation [35]. The TEM image (Figure 1) shows uniformly distributed spherical particles with an average diameter of 8–10 nm. The peaks at 2θ = 38.2, 44.4, 64.5, 77.5, and 81.7° in the XRD pattern correspond to the (111), (200), (220), (311), and (222) planes of the AuNPs and are identical to those reported in the literature (JCPDS card number: 00-004-0784) [36].
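The indexing of these reflections can be checked numerically with Bragg's law. Assuming the Cu-Kα1 wavelength of 1.5406 Å (the instrument section names only a Cu-Kα source), every reported peak should return the fcc gold lattice constant of about 4.08 Å:

```python
# Consistency check: do the reported 2θ peaks index as fcc gold?
# Bragg's law (λ = 2 d sinθ) gives the d-spacing for each reflection,
# and for a cubic lattice a = d·sqrt(h² + k² + l²).
import math

WAVELENGTH = 1.5406  # Å, Cu-Kα1 (assumed)

peaks = {  # 2θ (degrees) -> (h, k, l), as reported in the text
    38.2: (1, 1, 1),
    44.4: (2, 0, 0),
    64.5: (2, 2, 0),
    77.5: (3, 1, 1),
    81.7: (2, 2, 2),
}

def lattice_constant(two_theta_deg, hkl):
    theta = math.radians(two_theta_deg / 2.0)
    d = WAVELENGTH / (2.0 * math.sin(theta))       # Bragg's law
    return d * math.sqrt(sum(i * i for i in hkl))  # cubic lattice

a_values = [lattice_constant(tt, hkl) for tt, hkl in peaks.items()]
print([round(a, 3) for a in a_values])  # every peak gives a ≈ 4.08 Å
```

All five reflections reproduce the fcc Au lattice constant to within about 0.005 Å, consistent with the JCPDS assignment quoted above.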
The uniform anchoring of ligands onto the surface of the nanoparticles is demonstrated by the energy-dispersive X-ray spectroscopy (EDX) elemental mapping images in Figures 2(a) and 2(b): thiols were uniformly anchored onto the Au surface. The structural composition was confirmed by EDX (Figure 2(c)); carbon, nitrogen, and sulfur were all present on the AuNP surface. A high loading of the Schiff base ligand was confirmed by thermogravimetric analysis (TGA), which showed a 21.5% weight loss over the temperature range from 0 to 800°C (ramp rate: 10°C/min), corresponding to the decomposition of the organic ligand.

Figure 1: (a) XRD pattern and (b) TEM image of the synthesized AuNP@L.

Figure 2: Elemental mapping images of AuNP@L showing (a) gold and (b) sulfur; (c) EDX spectrum of AuNP@L.

### 3.2. UV-Vis Absorption Studies

Preliminary UV-Vis absorption and fluorescence emission studies revealed that AuNP@L exhibited selectivity toward ferric ions (10 μM) in a 9 : 1 ethanol/water system. As evident in Figure 3, in the absence of ligand, the peak at 525 nm corresponds to the SPR of the AuNPs. Upon attachment of the ligand, this peak red-shifted to 530 nm. Moreover, an additional absorption band appeared at 350 nm; this band was attributed to the π–π* transition, which is likely favored by the planar orientation enforced by intramolecular hydrogen bonding in AuNP@L [37]. The addition of Fe3+ causes the plasmonic absorption peak to shift again, from 530 to 559 nm.

Figure 3: UV-Vis absorption spectra of AuNP@Cit, AuNP@L, and AuNP@L + Fe3+.

Interestingly, the presence of other metal ions did not influence the UV-Vis signature, indicating that, unlike with ferric ions, no aggregation occurred.
However, upon further addition of ferric ions to the solution containing AuNP@L, the absorption band at 350 nm was gradually but systematically quenched, whereas the band at 530 nm synchronously shifted to 559 nm, as shown in Figure 4. The 530 to 559 nm shift in the plasmonic absorption band with increasing Fe3+ concentration indicates cation-induced aggregation of the AuNPs. The shift in the peak is linear up to 1 equivalent of Fe3+ (Figure 4, inset), indicating the formation of a 1 : 1 complex with a strong affinity (binding constant: 8.5 × 105 M−1; estimated error ≤ 10%) [38].

Figure 4: Evolution of the UV-Vis spectra of AuNP@L (10 μM) upon the addition of Fe(NO3)3 in a (9 : 1) EtOH : H2O mixture. Inset: expanded view of the shift of the plasmonic absorption peak from 530 to 559 nm with increasing ferric ion concentration.

### 3.3. Photoluminescence Studies

The results of the photoluminescence studies of AuNP@L with iron (concentration: 10 μM) in 10% (v/v) water/ethanol are shown in Figure 5. The emission peak at 491 nm upon excitation at 390 nm arises from intramolecular charge transfer (ICT) between the imine and phenolic groups of the ligands in AuNP@L. The addition of ferric ions quenched the emission through chelation-enhanced quenching (CEQ), because Fe3+ is paramagnetic: in the presence of iron, electron or energy transfer between the cation and the ligand provides a very fast and efficient nonradiative decay channel for the excited state.

Figure 5: Fluorescence titration of AuNP@L (10 μM) in H2O : EtOH (1 : 9) (λex = 390 nm). Inset: mole ratio plot of the emission at 491 nm.

### 3.4. Competition with Other Metal Ions

The selectivity and tolerance of AuNP@L for Fe3+ over other cations were investigated by adding 10 equivalents of various metal ions to 10 μM of AuNP@L (Figure 6).
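A 1 : 1 binding constant of this kind is commonly extracted from titration data with a Benesi-Hildebrand double-reciprocal fit. The sketch below uses synthetic points generated from a 1 : 1 isotherm with K = 8.5 × 105 M−1 purely to illustrate the recovery; the paper does not state which fitting method was actually used.

```python
# Benesi-Hildebrand analysis for 1:1 host-guest binding:
#   1/ΔA = 1/ΔA_max + 1/(K · ΔA_max · [Fe3+])
# so a line fit of 1/ΔA vs 1/[Fe3+] gives K = intercept/slope.

K_TRUE = 8.5e5  # M^-1, value to be recovered (from the text)
DA_MAX = 0.30   # assumed limiting absorbance change

conc = [1e-6, 2e-6, 4e-6, 6e-6, 8e-6, 1e-5]  # [Fe3+] in M (synthetic)
delta_a = [DA_MAX * K_TRUE * c / (1 + K_TRUE * c) for c in conc]

# Ordinary least squares on x = 1/[Fe3+], y = 1/ΔA.
xs = [1 / c for c in conc]
ys = [1 / da for da in delta_a]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

k_fit = intercept / slope       # binding constant in M^-1
print(f"K = {k_fit:.3g} M^-1")  # recovers 8.5e5 within rounding
```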
Partial quenching occurred with Al3+, Cu2+, Hg2+, and Zn2+, as shown in Figure 6, whereas the fluorescence was quenched to the maximum extent with Fe3+, indicating that AuNP@L exhibits the highest sensitivity for ferric ion detection.

Figure 6: Metal-ion selectivity of AuNP@L; bars indicate the fluorescence intensity (excitation at 390 nm and emission at 491 nm). Nitrate salts of various metal ions (10.0 equivalents) were added to AuNP@L (10 μM) in H2O : EtOH (1 : 9).

This observation was attributed to differences in the coordinative interaction energy for the various cations, which otherwise do not differ substantially in ionic size. This energy difference can be exploited for discrimination, especially in fluorescent sensing [39]. Fe3+ exhibits a high thermodynamic affinity for the phenolic C=N and –OH binding pocket, formed by the imino nitrogen and the phenol oxygen; the strong tendency of the phenol to deprotonate during complex formation gives fast metal-to-ligand binding kinetics that are not matched by other transition-metal ions. Although a fluorescence turn-on response is generally preferable to a switch-off response, the probe AuNP@L was selective for Fe3+ over the other biologically relevant metal ions (Cu2+, Zn2+, etc.), with an estimated detection limit of 1.2 μM for Fe3+. Moreover, the absorption and emission spectral studies showed a 1 : 1 binding mode for Fe3+, with strong binding constants of 8.5 × 105 and 2.9 × 105 M−1, respectively. The obtained detection limit is comparable with literature data for the detection of Fe3+ in different systems (Table 1).

Table 1: Comparison of the detection limit of AuNP@L with similar systems.
| System | Detection limit | Medium | Reference |
| --- | --- | --- | --- |
| Carbon dots (CD) | 2.0 × 10−9 M | Ionic liquid | [40] |
| AuNP-thiourea | 8.9 × 10−4 M | Aqueous | [41] |
| MOFs | 1.0 × 10−7 M | DMF | [42] |
| AuNP@L | 1.2 × 10−6 M | Ethanol : water (9 : 1) | Present work |
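Detection limits such as those compared in Table 1 are commonly estimated with the 3σ/slope criterion. The blank readings and calibration slope below are hypothetical, chosen so that the sketch lands near the reported 1.2 μM; the paper does not give the underlying calibration data.

```python
# LOD = 3·σ_blank / m, where σ_blank is the standard deviation of
# repeated blank measurements and m is the slope of the
# intensity-vs-concentration calibration line.
import statistics

def lod_3sigma(blank_readings, slope_per_molar):
    """3σ/slope detection limit in mol/L."""
    sigma = statistics.stdev(blank_readings)  # sample standard deviation
    return 3.0 * sigma / slope_per_molar

blanks = [100.2, 99.8, 100.5, 99.6, 100.1, 99.9]  # assumed blank signals (a.u.)
slope = 8.0e5                                     # assumed sensitivity (a.u. per M)
print(f"LOD = {lod_3sigma(blanks, slope) * 1e6:.2f} uM")  # → LOD = 1.20 uM
```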
## 4. Conclusion

In summary, we have prepared a simple and sensitive nanogold-based Schiff base chemosensor that exhibits high selectivity toward ferric ions over other cations in a water/ethanol mixture. The AuNP@L was characterized by UV-visible absorption spectroscopy, photoluminescence, TGA, and TEM. The detection limit for Fe3+ ions was estimated to be 1.2 μM, without interference from other metal ions. The binding mode was 1 : 1, and the binding constants were 8.5 × 105 M−1 and 2.9 × 105 M−1, as calculated from the absorption and emission titrations, respectively. Thus, from a sensing point of view, this probe can be used in physiological systems with good selectivity and sensitivity for the detection of Fe3+.

---
**Title:** Schiff Base Ligand Coated Gold Nanoparticles for the Chemical Sensing of Fe(III) Ions

**Authors:** Abiola Azeez Jimoh; Aasif Helal; M. Nasiruzzaman Shaikh; Md. Abdul Aziz; Zain H. Yamani; Amir Al-Ahmed; Jong-Pil Kim

**Journal:** Journal of Nanomaterials (2015)

**Category:** Engineering & Technology

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2015/101694
---

## Abstract

New Schiff base-coated gold nanoparticles (AuNPs) of the type AuNP@L (where L is a thiolated Schiff base ligand) have been synthesized and characterized using various spectroscopic techniques. The AuNPs and AuNP@L were imaged by transmission electron microscopy (TEM) and were confirmed to be well-dispersed, uniformly distributed, spherical nanoparticles with an average diameter of 8–10 nm. Their potential applications for chemosensing were investigated in UV-Vis and fluorescence spectroscopic studies. The AuNP@L exhibited selectivity for Fe3+ in an ethanol/water mixture (ratio 9 : 1 v/v). The absorption and emission spectral studies revealed a 1 : 1 binding mode for Fe3+, with binding constants of 8.5 × 105 and 2.9 × 105 M−1, respectively.

---

## Body

## 1. Introduction

In recent years, gold nanoparticles (AuNPs) have attracted substantial attention for their extensive application in drug delivery [1, 2], magnetic resonance imaging (MRI) [3, 4], X-ray computed tomography (X-ray CT) [5], catalysis [6], biosensing [7, 8], and so forth because their size, shape, and surface functionalization are easily controlled through the ligands and corresponding metal complexes. One of the typical applications of AuNPs in current research is the colorimetric detection of metal ions in the environment as well as in physiological systems because they possess excellent optical properties, such as high extinction coefficients and distance-dependent plasmonic absorption [9, 10].
In the human body, iron is one of the most essential trace elements; deficiency of ferric ion (Fe(III)) in the body causes anemia, hemochromatosis, liver damage, diabetes, Parkinson’s disease, and cancer [14–16]. Ferric ions also play critical roles in the growth and development of living cells and catalyze numerous biochemical processes [17]. However, the physiological abundance of Fe(III) causes imbalance, triggering the failure of multiple organs, such as the heart, pancreas, and liver [18, 19]. In this regard, the judicious selection and proper design of an adequate receptor are vital. Numerous studies on the development of Schiff base chemosensors for the detection of Hg(II), Zn(II), Al(III), and other ions have recently been reported in the literature [20–22]. However, the availability of chemosensors for Fe(III) that have a high detection threshold is rather limited and the amount of material required to detect a signal is high.Conventional detection of Fe3+ relies on several standard analytical techniques such as inductively coupled plasma atomic emission spectrometry (ICP-AES) [23], inductively coupled plasma mass spectrometry (ICPMS) [24, 25], atomic absorption spectrometry (AAS) [26], and voltammetry [27]. However, these methods are expensive, bulky, and time consuming because they require tedious pretreatment procedures for sample preparation. However, fluorescence microscopy, which is based on optical fluorescence, is a simple, easy, inexpensive, and highly selective tool for studying the localization, trafficking, and expression levels of biomolecules and metal ions within living cells [28]. 
Most Fe3+ sensing methods are based on an organic chemosensor that either undergoes fluorescence quenching because of the paramagnetic nature of ferric ion [29] or undergoes a “turn on” mechanism [30].In the search of a new chemosensor with high sensitivity and a very low detection limit for Fe(III), the combination of nanotechnology and a metal binding unit became an obvious choice. AuNPs, which exhibit good optical properties as signaling units as well as the ability to carry higher payloads on their surface and ligands with strong coordinating elements, have enabled the development of a suite of highly efficient chemosensors. However, the literature contains few reports of their application. For example, Zhang et al. reported excellent dispersion AuNPs for detecting sugars by the hypsochromic surface plasmon resonance (SPR) shift [31]. Bai et al. also reported 4-piperazinyl-1,8-naphthalimide functionalized AuNPs for Fe(III) recognition, and their results were highly encouraging [32].Here, we report the synthesis of a thiolated Schiff base ligand by the reaction of salicylaldehyde and 4-aminothiophenol, followed by its subsequent anchoring onto the surface of AuNPs through replacing citrate as a stabilizing agent. The results of the characterization of the ligand and the resulting surface-functionalized AuNP@L are described. The efficiency of the AuNP@L as a chemosensor is also reported here on the basis of the results of fluorescence and UV-Vis studies. ## 2. Experimental ### 2.1. General Remarks All of the chemicals and solvents were purchased from Sigma-Aldrich. The1H and 13C NMR spectra and chemical shifts were recorded in deuterated chloroform (CDCl3) on a JEOL 500 MHz spectrometer. FT-IR spectra were collected on Nicolet (Thermo Scientific) spectrometer using iTR as a sample holder in the wavenumber range from 600 to 4000 cm−1. 
Absorption spectra were collected at room temperature in the 4000–400 cm−1 range using a JASCO-670 spectrophotometer, and emission spectra were acquired on a Fluorolog (Horiba) system. Diffraction data were collected on a Rigaku model MiniFlex II diffractometer equipped with a Cu-Kα radiations source. The data were acquired over the 2θ range between 25 and 110°. The surface morphology of the NPs was discerned by field-emission scanning electron microscopy (FESEM) on a microscope (LYRA 3 Dual Beam, Tescan) operated at 30 kV. FESEM samples were prepared from either a suspension or a dry powder. The energy-dispersive X-ray spectra for the chemical and elemental analyses of NPs were also collected using an X-Max detector by Oxford, Inc. TEM was performed on a Philips CM200 operated at 200 kV; for the sample preparation, one drop of the aqueous AuNP@L solution was spread onto a 200-mesh copper carbon grid and allowed to dry at room temperature. ### 2.2. Synthesis of Schiff Base Ligand The thiolated bidentate Schiff base ligand was prepared (Scheme1) according to a procedure reported in the literature [33, 34]. To an ethanolic solution of salicylaldehyde, an equimolar amount of 4-aminothiophenol was added, and the mixture was refluxed at 90°C for 5 h. The yellow precipitate was filtered, purified by recrystallization from methanol, and finally dried under vacuum to obtain a 91% yield.Scheme 1 ### 2.3. Synthesis of AuNP@L AuNPs coated with citrate (AuNP@Cit) were prepared using the citrate (Cit) reduction method in deionized water (Scheme2). HAuCl4 ·3H2O (0.33 g, 1 mmol) in 500 mL of water was refluxed in a 1 L round-bottom flask equipped with a condenser. The mixture was stirred vigorously under argon for 30 min. Trisodium citrate (10 mL, 1.14 g, 3.88 mmol) solution was rapidly added, which resulted in a color change from yellow to purple. 
After the mixture was boiled for another 10 min, the heating mantle was removed and the mixture was allowed to cool at room temperature. AuNP@L was prepared as follows. To the freshly prepared AuNP@Cit (50 mL), Schiff base ligand (5 mg in 0.5 mL methanol) was added in one portion and stirred for 5 h at room temperature. The AuNP@L precipitated upon the addition of an equal amount of acetone. The nanoparticles were collected by centrifugation and washed successively with water and acetone to remove the unreacted ligands.Scheme 2 ## 2.1. General Remarks All of the chemicals and solvents were purchased from Sigma-Aldrich. The1H and 13C NMR spectra and chemical shifts were recorded in deuterated chloroform (CDCl3) on a JEOL 500 MHz spectrometer. FT-IR spectra were collected on Nicolet (Thermo Scientific) spectrometer using iTR as a sample holder in the wavenumber range from 600 to 4000 cm−1. Absorption spectra were collected at room temperature in the 4000–400 cm−1 range using a JASCO-670 spectrophotometer, and emission spectra were acquired on a Fluorolog (Horiba) system. Diffraction data were collected on a Rigaku model MiniFlex II diffractometer equipped with a Cu-Kα radiations source. The data were acquired over the 2θ range between 25 and 110°. The surface morphology of the NPs was discerned by field-emission scanning electron microscopy (FESEM) on a microscope (LYRA 3 Dual Beam, Tescan) operated at 30 kV. FESEM samples were prepared from either a suspension or a dry powder. The energy-dispersive X-ray spectra for the chemical and elemental analyses of NPs were also collected using an X-Max detector by Oxford, Inc. TEM was performed on a Philips CM200 operated at 200 kV; for the sample preparation, one drop of the aqueous AuNP@L solution was spread onto a 200-mesh copper carbon grid and allowed to dry at room temperature. ## 2.2. 
Synthesis of Schiff Base Ligand The thiolated bidentate Schiff base ligand was prepared (Scheme1) according to a procedure reported in the literature [33, 34]. To an ethanolic solution of salicylaldehyde, an equimolar amount of 4-aminothiophenol was added, and the mixture was refluxed at 90°C for 5 h. The yellow precipitate was filtered, purified by recrystallization from methanol, and finally dried under vacuum to obtain a 91% yield.Scheme 1 ## 2.3. Synthesis of AuNP@L AuNPs coated with citrate (AuNP@Cit) were prepared using the citrate (Cit) reduction method in deionized water (Scheme2). HAuCl4 ·3H2O (0.33 g, 1 mmol) in 500 mL of water was refluxed in a 1 L round-bottom flask equipped with a condenser. The mixture was stirred vigorously under argon for 30 min. Trisodium citrate (10 mL, 1.14 g, 3.88 mmol) solution was rapidly added, which resulted in a color change from yellow to purple. After the mixture was boiled for another 10 min, the heating mantle was removed and the mixture was allowed to cool at room temperature. AuNP@L was prepared as follows. To the freshly prepared AuNP@Cit (50 mL), Schiff base ligand (5 mg in 0.5 mL methanol) was added in one portion and stirred for 5 h at room temperature. The AuNP@L precipitated upon the addition of an equal amount of acetone. The nanoparticles were collected by centrifugation and washed successively with water and acetone to remove the unreacted ligands.Scheme 2 ## 3. Results and Discussion ### 3.1. Synthesis and Characterization The 2-[(4-mercaptophenyl)imino methyl] phenol Schiff base (L) was prepared by the reaction between salicylaldehyde and 4-aminothiophenol in ethanol (1 : 1 mole/mole) under reflux conditions for 5 h (Scheme1). The resulting yellow solid was recrystallized from methanol in 94% yield. 
The formation of the imino ligand was confirmed by 1H and 13C NMR, which showed a characteristic olefinic proton shift at δ 8.9 ppm, supported by the olefinic carbon shift at δ 160.2 ppm (see supporting information in Supplementary Material available online at http://dx.doi.org/10.1155/2015/101694). The FT-IR spectrum of the imino ligand showed peaks at 1614 cm−1 and 3448 cm−1, corresponding to the vibration modes of the C=N and –OH groups, respectively. Upon coordination of the bare ligand with Fe3+, the C=N band shifted to a lower wavenumber (1609 cm−1), indicating the formation of a metal complex. A similar trend was observed for the phenolic group upon participation in coordination with the metal center. The surface of the AuNP@Cit nanoparticles was functionalized via the one-step addition of the ligand in a minimum amount of methanol solution (Scheme 2). Optimization of the Au-to-ligand molar ratio was critical for the preparation of AuNP@L because excess ligand resulted in aggregation and precipitation. The formation of AuNP@Cit and AuNP@L was confirmed by spectroscopic techniques. For instance, relative to the citrate-coated nanoparticles, the visible absorption spectra showed a shift of the absorption band (λmax) from 525 nm to 530 nm for the ligand-modified particles. This shift was attributed to the surface plasmon resonance of the ligand-modified particles. The binding of the thiolated imino ligand to the Au surface was further confirmed by the disappearance of the –SH stretches in the FT-IR spectrum, indicating Au–S bond formation [35]. The TEM image (Figure 1) shows uniformly distributed spherical particles with an average diameter of 8–10 nm. The peaks at 2θ = 38.2, 44.4, 64.5, 77.5, and 81.7° in the XRD pattern correspond to the (111), (200), (220), (311), and (222) planes of the AuNPs and are identical to those reported in the literature (JCPDS card number: 00-004-0784) [36].
The uniform anchoring of ligands onto the surface of the nanoparticles is demonstrated by the energy-dispersive X-ray spectroscopy (EDX) element mapping images in Figures 2(a) and 2(b): thiols were uniformly anchored onto the Au surface. The elemental composition was demonstrated by EDX (Figure 2(c)); carbon, nitrogen, and sulfur were observed on the AuNP surface. A high loading of the Schiff base ligand was confirmed by thermogravimetric analysis (TGA), which showed 21.5% weight loss over the temperature range from 0 to 800°C (ramp rate: 10°C/min), corresponding to the decomposition of the organic ligand.

Figure 1: (a) XRD pattern and (b) TEM image of the synthesized AuNP@L.

Figure 2: Elemental mapping images of AuNP@L showing (a) gold and (b) sulfur; (c) EDX spectrum of AuNP@L.

### 3.2. UV-Vis Absorption Studies

Preliminary results of the UV-Vis absorption and fluorescence emission studies revealed that AuNP@L exhibited selectivity toward ferric ions (10 μM) in a 9 : 1 ethanol/water system. As evident in Figure 3, in the absence of the ligand, the peak at 525 nm corresponds to the SPR of the AuNPs. Upon attachment of the ligand, this peak red-shifted to 530 nm. Moreover, an additional absorption band appeared at 350 nm; this band was attributed to the π–π* transition, which is likely favored by the planar orientation enforced by the intramolecular hydrogen bonding in AuNP@L [37].
Upon further addition of ferric ions to the solution containing AuNP@L, however, the absorption band at 350 nm was gradually but systematically quenched, whereas that at 530 nm synchronously shifted to 559 nm, as shown in Figure 4. The 530 to 559 nm shift of the plasmonic absorption band with a gradual increase in the Fe3+ concentration indicates cation-induced aggregation of the AuNPs. The shift of the peak is linear up to 1 equivalent of Fe3+ (Figure 4, inset), indicating the formation of a 1 : 1 complex with a strong affinity (binding constant: 8.5 × 10⁵ M⁻¹; estimated error ≤ 10%) [38].

Figure 4: Evolution of the UV-Vis spectra of AuNP@L (10 μM) upon the addition of Fe(NO3)3 in a 9 : 1 EtOH : H2O mixture. Inset: expanded view of the shift of the plasmonic absorption peak from 530 to 559 nm with increasing ferric ion concentration.

### 3.3. Photoluminescence Studies

The results of the photoluminescence studies of AuNP@L with iron (concentration: 10 μM) in 10% (v/v) water/ethanol are shown in Figure 5. The emission peak at 491 nm upon excitation at 390 nm results from intramolecular charge transfer (ICT) between the imine and phenolic groups of the ligands in AuNP@L. The addition of ferric ions quenched the fluorescence emission through chelation-enhanced quenching (CEQ), because Fe3+ is paramagnetic. Quenching in the presence of iron provides a very fast and efficient nonradiative decay of the excited states through electron or energy transfer between the cations and the ligands.

Figure 5: Fluorescence titration of AuNP@L (10 μM) in H2O : EtOH (1 : 9) (λex = 390 nm).
Partial quenching occurred with Al3+, Cu2+, Hg2+, and Zn2+, as shown in Figure 6, whereas the molecular fluorescence was quenched to the maximum extent with Fe3+, indicating that AuNP@L exhibits the highest sensitivity for ferric ion detection.

Figure 6: Metal-ion selectivity of AuNP@L; bars indicate the fluorescence intensity (excitation at 390 nm, emission at 491 nm). Nitrate salts of various metal ions (10.0 equivalents) were added to AuNP@L (10 μM) in H2O : EtOH (1 : 9).

This observation was attributed to the difference in the coordinative interaction energy for the various cations, which otherwise do not differ substantially in ionic size. This energy difference can thus be exploited for discriminative purposes, especially in fluorescence sensing [39]. Fe3+ exhibits a high thermodynamic affinity for the phenolic C=N/–OH binding site, a hybrid of the imino nitrogen and the phenol oxygen; this site forms because phenol has a strong tendency to deprotonate during complex formation, giving fast metal-to-ligand binding kinetics that are not accessible to the other transition-metal ions. The estimated detection limit of AuNP@L for Fe3+ is 1.2 μM. Although a fluorescence turn-on approach is generally more effective than a switch-off approach, the probe AuNP@L was selective to Fe3+ over the other biologically relevant metal ions (Cu2+, Zn2+, etc.) and has a competitive detection limit of 1.2 μM for Fe3+. Moreover, the absorption and emission spectral studies showed a 1 : 1 binding mode for Fe3+, with strong binding constants of 8.5 × 10⁵ and 2.9 × 10⁵ M⁻¹, respectively. The obtained detection limit is comparable with literature data for the detection of Fe3+ in different systems (Table 1).

Table 1: Comparison of the detection limit of AuNP@L with similar systems.
| System | Detection limit | Medium | Reference |
| --- | --- | --- | --- |
| Carbon dots (CD) | 2.0 × 10⁻⁹ M | Ionic liquid | [40] |
| AuNP-thiourea | 8.9 × 10⁻⁴ M | Aqueous | [41] |
| MOFs | 1.0 × 10⁻⁷ M | DMF | [42] |
| AuNP@L | 1.2 × 10⁻⁶ M | Ethanol : water (9 : 1) | Present work |
## 4. Conclusion

In summary, we have prepared a simple and sensitive nanogold-based Schiff base chemosensor that exhibits high selectivity toward ferric ions over other cations in a water/ethanol mixture. The AuNP@L was characterized by UV-visible absorption spectroscopy, photoluminescence, TGA, and TEM. The detection limit for Fe3+ ions was estimated to be 1.2 μM, without interference from other metal ions. The binding mode was 1 : 1, and the binding constants were 8.5 × 10⁵ M⁻¹ and 2.9 × 10⁵ M⁻¹, as calculated from the absorption and emission titrations, respectively. Thus, from a sensing point of view, this probe can be used in physiological systems with good selectivity and sensitivity for the detection of Fe3+.

---

*Source: 101694-2015-07-29.xml*
2015
# Bearing Fault Identification Method under Small Samples and Multiple Working Conditions

**Authors:** Yuhui Wu; Licai Liu; Shuqu Qian; Jianyong Tian

**Journal:** Mobile Information Systems (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1016954

---

## Abstract

To address the low bearing fault identification accuracy of deep-neural-network-based methods under small samples and multiple working conditions, a novel bearing fault identification method is proposed that combines the coordinate delay phase space reconstruction (CDPSR) method, a residual network, the meta-SGD algorithm, and AdaBoost technology. The proposed method first calculates the high-dimensional phase space coordinates of bearing vibration signals using the CDPSR method and uses these coordinates to construct a training set, then learns and updates the parameters of the classifier networks using the meta-SGD algorithm on the training set, iteratively trains multiple classifiers, and finally integrates those classifiers into a strong classifier by AdaBoost technology. The 4-way, 20-shot experiments on artificial and natural bearing faults show that the proposed method identifies fault and nonfault samples with 100% accuracy, and the fault location accuracy is over 90%. Compared with some state-of-the-art methods such as WDCNN and CNN-SVM, the proposed method improves the fault identification accuracy and stability to a certain extent. The proposed method has high fault identification accuracy under small samples and multiple working conditions, which makes it applicable in practical settings with complex working conditions where bearing fault signals are difficult to obtain.

---

## Body

## 1. Introduction

From an application perspective, bearing fault identification can be divided into two types: identification under a single working condition and identification under multiple working conditions.
Bearing fault identification of small samples under multiworking conditions refers to the problem of predicting which fault type the test bearing samples belong to when only a few fault training samples collected from complex working conditions are available. This problem is also called the N-way, K-shot multiworking-condition bearing fault identification problem, where the training data include N classes, each class has only K samples, and K is no more than 20. The movement of a faulty bearing is nonlinear. In order to fit the nonlinear features contained in rolling bearing vibration signals, many studies used neural networks to learn bearing vibration signal features and achieved good results in bearing fault identification by establishing mappings between fault features and faults. For example, literature [1] and literature [2] used convolutional neural networks (CNNs) to repeatedly learn the time-frequency features of a large number of original bearing vibration signals and realized the precise classification of bearing faults. Fault identification methods based on deep neural networks usually require a large amount of training data, and it is hard for them to exploit their nonlinear fitting ability under small samples. Literature [3] and literature [4] show that the common problem of small-sample learning is that feature extraction is ineffective and identification accuracy is poor. Literature [5] shows that the bearing fault identification accuracy based on rolling bearing vibration signal analysis is affected by the bearing conditions. Rotation, load, and faults are the key factors affecting the motion state of the bearing rotor. As the speed of the bearing continues to increase, faults such as rub-impact in the bearing further intensify the nonlinear features of the rotor, and the movement may even evolve into chaotic motion.
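The N-way, K-shot setting defined above can be made concrete with a small episode-sampling sketch (a minimal Python illustration; the helper name and the toy integer "signals" are our assumptions, not the paper's code):

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way, k_shot, n_query, seed=None):
    """Draw one N-way, K-shot episode from (sample, label) pairs: N classes,
    K labelled support samples per class, plus held-out query samples."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in dataset:
        by_class[label].append(sample)
    classes = rng.sample(sorted(by_class), n_way)   # pick the N classes
    support, query = [], []
    for cls in classes:
        picked = rng.sample(by_class[cls], k_shot + n_query)
        support += [(s, cls) for s in picked[:k_shot]]   # K shots per class
        query += [(s, cls) for s in picked[k_shot:]]     # held-out queries
    return support, query

# toy stand-in: 6 fault classes, 40 "signals" each (integers for brevity)
data = [(i, c) for c in range(6) for i in range(40)]
sup, qry = sample_episode(data, n_way=4, k_shot=20, n_query=5, seed=0)
```

A 4-way, 20-shot episode, as in the paper's experiments, thus holds 80 support samples and a small query set per episode.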
Bearing fault identification under multiworking conditions is more difficult than identification under a single working condition. To improve the identification accuracy of deep-neural-network methods for multiworking-condition bearing faults, literature [6] and literature [7] used attention, label smoothing, and other auxiliary algorithms to optimize the training processes of the neural network. Under the limited-training-data condition, the identification performance of the optimized deep CNN improved, but the number of training samples per class still exceeded 20. At present, bearing fault identification of small samples under multiworking conditions is one of the research hotspots in deep neural network applications. In recent years, some scholars have studied bearing fault identification under small samples and multiworking conditions from different viewpoints. Literature [8] used a CNN to detect early bearing faults in motors, and literature [9] used a CNN to extract features from spectrum images of bearing vibration signals. Different neural network structures have different capabilities for extracting bearing fault features, and the study of network structures to improve identification performance has received widespread attention. The training strategy of neural networks has also been a main direction of deep neural network research; MAML and SAMME are outstanding works in this area. Data preprocessing technology generates new training samples through geometric transformations and is widely applied to prevent neural networks from overfitting when learning from small samples. Literature [10] researched data preprocessing methods based on the sparse autoencoder, and literature [11] used sliding sampling to enhance bearing vibration data.
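The sliding-sampling augmentation mentioned above (as in literature [11]) can be sketched as follows; the window and stride values are illustrative assumptions, not the settings used in [11]:

```python
import numpy as np

def sliding_windows(signal, window, stride):
    """Cut overlapping fixed-length windows from a 1-D vibration record;
    the overlap (window - stride) multiplies the number of training
    samples obtainable from a single recording."""
    n = (len(signal) - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])

sig = np.sin(np.linspace(0.0, 100.0 * np.pi, 12000))  # stand-in for a measured signal
aug = sliding_windows(sig, window=2048, stride=512)   # 20 windows from one record
```

With a stride of one quarter of the window, consecutive windows share 75% of their samples, so one recording yields many partially independent training examples.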
Now, fault identification of small samples based on learning strategies and data preprocessing is one of the important research directions for bearing fault diagnosis. In this paper, fault signals collected from multiple working conditions with different speeds, different loads, and different fault degrees were used as the research objects. We first employed the coordinate delay phase space reconstruction (CDPSR) method to process the bearing vibration signals, then designed a deep bearing fault identification neural network based on a CNN and a residual network to produce multiple classifiers, and finally, through the AdaBoost method, implemented a multiple-classifier integration algorithm that trains and combines these bearing fault classifiers into a stronger one.

The rest of this paper is organized as follows. Section 2 introduces related technologies such as the CDPSR method, the residual network, the meta-SGD algorithm, and AdaBoost technology. Section 3 details our bearing fault identification method: Section 3.1 presents the data preprocessing and the construction of the new training set, Section 3.2 designs the network structure of the bearing fault classifier and its training method, and Section 3.3 develops the step that integrates multiple bearing fault classifiers. Section 4 describes the validation experiments on the artificial and natural bearing fault datasets and discusses the results. Finally, Section 5 concludes this paper and highlights the significance of the proposed method.

## 2. Related Work

### 2.1. Phase Space Reconstruction

The coordinate delay method [12] is a specific implementation of phase space reconstruction technology, which can be used to calculate the phase space coordinates of bearing vibration time-series signals.
Suppose $x(t)$ is a one-dimensional time-series signal, $0\tau, 1\tau, \ldots, (M-1)\tau$ are the delay times, and $M$ is the embedding dimension of the high-dimensional phase space. According to the coordinate delay method, the $M$-dimensional phase space coordinates can be represented as equation (1), where $i = 0, \ldots, M-1$ indexes the rows $X_i(t)$:

$$
X(t)=\begin{bmatrix} X_0(t)\\ X_1(t)\\ \vdots\\ X_{M-1}(t) \end{bmatrix}
=\begin{bmatrix} x(t) & \cdots & x(t+(M-1)\tau)\\ x(t+1) & \cdots & x(t+1+(M-1)\tau)\\ \vdots & & \vdots\\ x(t+M-1) & \cdots & x(t+M-1+(M-1)\tau) \end{bmatrix}. \tag{1}
$$

Solving the phase space coordinates of the time sequence requires $\tau$ and $M$. The delay time $\tau$ can be calculated by a variety of methods, such as correlation coefficients and methods based on mutual information; the correlation function method is simple and effective. The autocorrelation function of the time sequence, $R(\tau)=\frac{1}{N}\sum_{i=1}^{N-\tau} x_i\, x_{i+\tau}$, is a function that changes with $\tau$; the first $\tau$ at which $R(\tau)$ drops to $R(0)\,(1-e^{-1})$ is the best delay time for reconstructing the phase space [13].

When the delay time $\tau$ is known, the Cao algorithm [14] can be used to solve the embedding dimension $M$, and it can calculate the embedding dimension with a small amount of data. Denote the $i$-th $d$-dimensional reconstruction vector as $y_i(d)=\big(x_i, x_{i+\tau}, \ldots, x_{i+(d-1)\tau}\big)$, $i=1,\ldots,N-(d-1)\tau$, and define

$$
a(i,d)=\frac{\big\lVert y_i(d+1)-y_{n(i,d)}(d+1)\big\rVert}{\big\lVert y_i(d)-y_{n(i,d)}(d)\big\rVert},\qquad i=1,\ldots,N-d\tau,
$$

where $y_i(d+1)$ is the $i$-th $(d+1)$-dimensional reconstruction vector and, since $n(i,d)\in\{1,\ldots,N-d\tau\}$, $y_{n(i,d)}(d)$ is the nearest neighbor of $y_i(d)$ in the $d$-dimensional phase space. The mean of $a(i,d)$ is $E(d)=\frac{1}{N-d\tau}\sum_{i=1}^{N-d\tau} a(i,d)$, and $E_1(d)=E(d+1)/E(d)$. If $E_1(d)$ stops changing from a certain $d_0$, then $d_0+1$ is the minimum embedding dimension to be found.

### 2.2. Convolution Neural Network

A convolution neural network (CNN) [15] is made up of neurons that have learnable weights and biases; it uses convolution layers and nonlinear activation functions to abstract the original data layer by layer into the features required for specific tasks, thereby achieving a mapping from features to targets. A CNN is a sequence of layers that mainly includes convolution layers, pooling layers, and fully connected layers; we can stack these layers to form a full CNN architecture.
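As a concrete illustration of the Section 2.1 reconstruction, the autocorrelation-based choice of the delay and the embedding of equation (1) can be sketched in Python (a minimal sketch; the sine test signal and the helper names are our assumptions, not the paper's implementation):

```python
import numpy as np

def delay_by_autocorrelation(x):
    """Return the first tau at which the autocorrelation R(tau) drops
    below R(0) * (1 - 1/e), the delay-selection criterion quoted from [13]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r0 = float(np.dot(x, x)) / n
    for tau in range(1, n):
        r = float(np.dot(x[: n - tau], x[tau:])) / n
        if r < r0 * (1.0 - np.exp(-1.0)):
            return tau
    return n - 1

def delay_embed(x, m, tau):
    """Row i is (x[i], x[i+tau], ..., x[i+(m-1)*tau]): one M-dimensional
    phase-space coordinate per admissible time index, as in equation (1)."""
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (m - 1) * tau
    return np.stack([x[i : i + (m - 1) * tau + 1 : tau] for i in range(n_vec)])

# stand-in "vibration" record: a pure sine instead of a measured signal
x = np.sin(np.arange(0, 20 * np.pi, 0.05))
tau = delay_by_autocorrelation(x)
X = delay_embed(x, m=3, tau=tau)
```

In practice the embedding dimension `m` would come from the Cao criterion rather than being fixed by hand as here.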
Since CNNs have good nonlinear fitting capabilities, they are widely used in fields such as image feature extraction and voice feature analysis, and there are many excellent CNN applications in the field of bearing fault identification [16, 17].

### 2.3. Residual Network

The residual refers to the gap between the observed value and the predicted value; literature [18] applied it to neural networks and proposed the concept of the residual block, whose structure is shown in Figure 1.

Figure 1: Structure of the residual block.

Here, $x_l$ and $x_{l+1}$ represent the input and output of the residual block, $h(x)$ represents the observed value, and the residual block can be expressed as

$$
y_l = h(x_l) + F(x_l, W_l), \qquad x_{l+1} = f(y_l).
$$

If $h(\cdot)$ and $f(\cdot)$ are direct mappings, for example $h(x)=x$ and $f(y_l)=\mathrm{ReLU}(y_l)$, the gradient of the $L$-th layer can pass to any layer shallower than it [19]. Therefore, as the number of network layers increases, the residual network does not degenerate, and its ability to fit nonlinear features becomes stronger. Many studies have used residual networks to extract features of business data. For example, literature [20] used the SELU activation function to optimize a deep residual network, applied the deep residual network model to analyze information dissemination in wireless networks, and obtained a complete prediction of media information dissemination in wireless networks.

Resnet is a residual network structure stacked from residual blocks, which has performed well in image classification applications [21, 22]. Literature [23] researched the advantages of Resnet-18 by comparing Resnet-18/50, VGG-19, and Googlenet, found that Resnet-18 has the advantages of short training time and high accuracy, developed a deep learning model based on ResNet-18 to diagnose fan blade surface damage, and achieved good recognition results. Literature [24] proposed a dual attention residual network that uses a residual module from Resnet18 to detect oil spills of various shapes and scales.
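The residual block relation of Section 2.3 can be sketched numerically (a NumPy toy with a two-layer residual branch; an illustration of the skip connection, not the paper's network):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """x_{l+1} = f(h(x_l) + F(x_l, W_l)) with h the identity and f = ReLU:
    the skip connection adds the input back, so a zero residual branch
    leaves (the ReLU of) the input unchanged."""
    f_branch = relu(x @ w1) @ w2   # residual branch F(x_l, {w1, w2})
    return relu(f_branch + x)      # y_l = x_l + F(...); x_{l+1} = ReLU(y_l)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = residual_block(x, rng.standard_normal((8, 8)) * 0.1,
                     rng.standard_normal((8, 8)) * 0.1)
ident = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))  # zero branch
```

The zero-branch case makes the "no degeneration" argument tangible: a residual block can always fall back to (near) identity, so stacking more blocks cannot make the representation worse.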
Resnet-18 is an implementation of the Resnet model, and it consists of five parts: Conv1_x, Conv2_x, Conv3_x, Conv4_x, and Conv5_x, each with a convolution block and an identity block. Resnet-18 was designed for the classification of thousands of pictures; its network structure is complicated, and it takes a long time to train. Compared with that task, the task of bearing fault identification under dozens of working conditions is small. Therefore, this paper deleted the Conv4_x and Conv5_x structures of Resnet-18 and modified the size of the convolution kernel, which reduces the complexity of Resnet-18 while retaining the advantages of the residual network.

### 2.4. Meta-SGD Algorithm

The Model-Agnostic Meta-Learning (MAML) algorithm [25] proposed by Finn et al. can be used to train any model that is optimizable by gradient descent; it is an excellent algorithm in the field of meta-learning. The difference between MAML and other optimization algorithms, such as stochastic gradient descent (SGD) [26], is that the optimization in MAML is divided into two steps. First, $n$ tasks $T_i$, $i=1,\ldots,n$, are selected from the support dataset, and each task is used to compute a gradient at the current network parameters $\theta_t$, yielding $n$ updated parameter sets $\theta_1',\ldots,\theta_n'$:

$$
\theta_i' = \theta_t - \alpha \nabla_{\theta_t} L\big(\theta_t, D_{T_i}^{\mathrm{Support}}\big).
$$

Second, the update gradient is computed on the query dataset, and the new network parameters $\theta_{t+1}$ are determined according to

$$
\theta_{t+1} = \theta_t - \beta \sum_{T_i \sim T} \nabla_{\theta_t} L\big(\theta_i', D_{T_i}^{\mathrm{Query}}\big).
$$

As the $\alpha$ and $\beta$ hyperparameters are fixed in MAML, they cannot be adjusted as the network changes, which makes the training process fluctuate. The Meta-SGD algorithm [27] instead treats $\alpha$ as a learnable rate that is updated together with $\theta$, which increases training stability. Meta-SGD can learn task-agnostic features rather than simply adapting to task-specific features [28].
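The two-step inner/outer update of Section 2.4 can be illustrated on a scalar toy problem (our simplification: synthetic one-parameter regression tasks and numerical outer gradients; Meta-SGD's learnable α is kept):

```python
import numpy as np

def loss_grad(theta, data):
    """MSE and gradient for the scalar linear model y = theta * x."""
    x, y = data
    err = theta * x - y
    return float(np.mean(err ** 2)), float(np.mean(2.0 * err * x))

def meta_sgd_step(theta, alpha, beta, tasks):
    """One meta-update: adapt on each task's support set with the learned
    rate alpha (inner step), then move theta AND alpha down the gradient
    of the summed query-set loss at the adapted parameters (outer step).
    Outer gradients are taken numerically to keep the sketch short."""
    def outer(th, al):
        total = 0.0
        for support, query in tasks:
            _, g = loss_grad(th, support)
            l_q, _ = loss_grad(th - al * g, query)  # loss after adaptation
            total += l_q
        return total
    eps = 1e-5
    g_th = (outer(theta + eps, alpha) - outer(theta - eps, alpha)) / (2 * eps)
    g_al = (outer(theta, alpha + eps) - outer(theta, alpha - eps)) / (2 * eps)
    return theta - beta * g_th, alpha - beta * g_al

def make_task(slope):
    xs, xq = np.linspace(-1.0, 1.0, 10), np.linspace(-0.8, 1.2, 10)
    return (xs, slope * xs), (xq, slope * xq)   # (support, query) pair

tasks = [make_task(2.0), make_task(3.0)]  # two toy regression "tasks"
theta, alpha = 0.0, 0.1
for _ in range(300):
    theta, alpha = meta_sgd_step(theta, alpha, beta=0.05, tasks=tasks)
```

After training, one inner step with the learned α adapts θ to either task almost perfectly, which is exactly the behavior the meta-objective rewards.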
Literature [29] used the learned learning rate of Meta-SGD to predict streaming time-series data online and achieved good results.

### 2.5. Multiclass AdaBoost Method

AdaBoost [30] is a multimodel integration method, and its final output is the weighted sum of the results of the integrated classifiers. AdaBoost was originally proposed by Freund for the binary classification problem. By introducing a multiclass exponential loss into the forward stagewise additive model, AdaBoost can also be applied to multiclass problems. The sample weights are continually updated during AdaBoost training, and samples that have not been identified correctly are given greater weights. The AdaBoost training process is shown in Figure 2.

Figure 2: AdaBoost model structure.

SAMME [31] is an implementation algorithm of multiclass AdaBoost; its steps are shown in Algorithm 1.

Algorithm 1: SAMME.
Input: M classifiers, training samples, and test samples.
Output: a strong classifier and the test sample predictions.
1. Initialize the observation weights $w_i = 1/n$, $i = 1, 2, \ldots, n$.
2. For $m = 1$ to $M$:
3. &nbsp;&nbsp;Fit a classifier $f_m(x)$ to the training data using weights $w_i$.
4. &nbsp;&nbsp;Compute $\mathrm{err}_m = \sum_{i=1}^{n} w_i\, I\big(c_i \neq f_m(x_i)\big) \big/ \sum_{i=1}^{n} w_i$.
5. &nbsp;&nbsp;Compute $\alpha_m = \log\big((1-\mathrm{err}_m)/\mathrm{err}_m\big) + \log(K-1)$, where $K$ is the total number of sample classes.
6. &nbsp;&nbsp;Set $w_i \leftarrow w_i \cdot e^{\alpha_m \cdot I(c_i \neq f_m(x_i))}$, $i = 1, \ldots, n$.
7. &nbsp;&nbsp;Renormalize $w_i$.
8. End for.
9. Output $\mathrm{Output}(x) = \arg\max_k \sum_{m=1}^{M} \alpha_m \cdot I\big(f_m(x) = k\big)$.

Literature [32] hybridized AdaBoost with a linear support vector machine model and developed a diagnostic system to predict hepatitis disease; the results demonstrate that the strength of a conventional support vector machine model is improved by 6.39%. Literature [33] studied the use of the AdaBoost framework to integrate other methods and proposed a wind turbine fault feature extraction method based on the AdaBoost framework and SGMD-CS. Experiments show that AdaBoost is an effective multimodel integration framework.
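Algorithm 1 (SAMME) can be sketched end-to-end with a weighted decision stump as the weak learner (the stump and the toy three-class data are illustrative choices of ours, not from the paper):

```python
import numpy as np

def stump_trainer(X, y, w, n_classes):
    """Weighted decision stump: choose the (feature, threshold) split whose
    side-wise weighted-majority labels give the lowest weighted error."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            lab_l = int(np.argmax([w[left & (y == k)].sum() for k in range(n_classes)]))
            lab_r = int(np.argmax([w[~left & (y == k)].sum() for k in range(n_classes)]))
            e = float(np.dot(w, np.where(left, lab_l, lab_r) != y))
            if best is None or e < best[0]:
                best = (e, f, t, lab_l, lab_r)
    _, f, t, a, b = best
    return lambda Z: np.where(Z[:, f] <= t, a, b)

def samme(train_weak, X, y, X_test, n_classes, rounds):
    """SAMME (Algorithm 1): reweight the samples each round, weight each
    weak classifier by log((1-err)/err) + log(K-1), predict by weighted vote."""
    w = np.full(len(y), 1.0 / len(y))                 # step 1
    votes = np.zeros((len(X_test), n_classes))
    for _ in range(rounds):                           # steps 2-8
        predict = train_weak(X, y, w)                 # step 3
        miss = predict(X) != y
        err = min(max(float(np.dot(w, miss)), 1e-10), 1 - 1e-10)  # step 4
        alpha = np.log((1 - err) / err) + np.log(n_classes - 1)   # step 5
        w = w * np.exp(alpha * miss)                  # step 6
        w = w / w.sum()                               # step 7
        pt = predict(X_test)
        for k in range(n_classes):
            votes[:, k] += alpha * (pt == k)          # accumulate step 9 vote
    return votes.argmax(axis=1)

# three axis-aligned "fault classes" that no single stump can separate
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] < 0, 0, np.where(X[:, 1] < 0, 1, 2))
train = lambda Xs, ys, ws: stump_trainer(Xs, ys, ws, n_classes=3)
pred = samme(train, X, y, X, n_classes=3, rounds=20)
```

The `log(K-1)` term is what distinguishes SAMME from binary AdaBoost: it only requires each weak learner to beat random guessing among K classes, not 50% accuracy.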
To realize bearing fault identification, this paper studied the use of the AdaBoost framework to integrate multiple fault identification classifier models designed based on CNN and residual networks.
### 2.2. Convolutional Neural Network

A convolutional neural network (CNN) [15] is made up of neurons with learnable weights and biases; it uses convolution layers and nonlinear activation functions to abstract the raw data, layer by layer, into the features required for a specific task, thereby mapping features to targets. A CNN is a sequence of layers, mainly convolution layers, pooling layers, and fully connected layers, which can be stacked to form a complete CNN architecture. Since CNNs have good nonlinear fitting capability, they are widely used in fields such as image feature extraction and speech feature analysis, and there are many excellent CNN applications in the field of bearing fault identification [16, 17].

### 2.3. Residual Network

A residual is the gap between an observed value and a predicted value; literature [18] applied this idea to neural networks and proposed the residual block, whose structure is shown in Figure 1.

Figure 1 Structure of residual block.

Here, $x_l$ and $x_{l+1}$ denote the input and output of the residual block, $h(x_l)$ denotes the observed (shortcut) mapping, and the residual block can be expressed as

$$y_l = h(x_l) + F(x_l, w_l), \qquad x_{l+1} = f(y_l).$$

If $h(\cdot)$ and $f(\cdot)$ are direct mappings, for example $h(x_l) = x_l$ and $f(y_l) = \mathrm{ReLU}(y_l)$, the gradient of the $L$-th layer can pass to any layer shallower than it [19]. Therefore, as the number of network layers increases, the residual network does not degenerate, and its ability to fit nonlinear features becomes stronger. Many studies have since used residual networks to extract features from business data.
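The residual block equations above can be sketched in a few lines of NumPy (a minimal illustration with an identity shortcut $h(x_l) = x_l$ and $f = \mathrm{ReLU}$; the two-layer residual branch $F$ and all names are our own assumption, not the paper's exact block):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """x_{l+1} = ReLU( h(x_l) + F(x_l, w_l) ) with identity shortcut h(x_l) = x_l."""
    f = relu(x @ w1) @ w2   # residual branch F(x_l, w_l)
    return relu(x + f)      # add the shortcut, then apply f = ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # a batch of 4 feature vectors
w1 = 0.1 * rng.standard_normal((8, 8))
w2 = 0.1 * rng.standard_normal((8, 8))
out = residual_block(x, w1, w2)      # same shape as the input: (4, 8)
```

With zero residual weights the block reduces to `relu(x)`, which is why stacking such blocks does not degrade the network: the identity path always survives.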
For example, literature [20] used the SELU activation function to optimize a deep residual network, applied the model to analyze information dissemination in wireless networks, and obtained a complete prediction of media information dissemination in wireless networks.

ResNet is a residual network structure built by stacking residual blocks, and it has performed well in image classification applications [21, 22]. Literature [23] examined the advantages of ResNet-18 by comparing ResNet-18/50, VGG-19, and GoogLeNet, found that ResNet-18 offers short training time and high accuracy, developed a ResNet-18-based deep learning model to diagnose fan blade surface damage, and obtained good recognition results. Literature [24] proposed a dual-attention residual network that uses a residual module from ResNet-18 to detect oil spills of various shapes and scales. ResNet-18 is one implementation of the ResNet model; it consists of five parts, Conv1_x through Conv5_x, each containing convolution and identity blocks. ResNet-18 was designed to classify thousands of image categories, so its network structure is complicated and it takes a long time to train. Compared with that task, bearing fault identification under dozens of working conditions is small. Therefore, this paper deletes the Conv4_x and Conv5_x structures of ResNet-18 and modifies the convolution kernel sizes, reducing the complexity of ResNet-18 while retaining the advantages of the residual network.

### 2.4. Meta-SGD Algorithm

The Model-Agnostic Meta-Learning (MAML) algorithm [25] proposed by Finn et al. can be used to train any model that is optimized by gradient descent; it is an excellent algorithm in the field of meta-learning.
The difference between MAML and ordinary optimization algorithms such as stochastic gradient descent (SGD) [26] is that the optimization in MAML proceeds in two steps. First, $n$ tasks $T_i$, $i = 1, \ldots, n$, are sampled from the supporting dataset, and each task is used to compute a gradient at the current network parameters $\theta_t$, yielding $n$ adapted parameter sets $\theta_1', \ldots, \theta_n'$ with

$$\theta_i' = \theta_t - \alpha \nabla_{\theta_t} \mathcal{L}\big(\theta_t, D_{T_i}^{\mathrm{Support}}\big).$$

Second, the update gradient is computed on the query dataset, and the new network parameters $\theta_{t+1}$ are determined according to

$$\theta_{t+1} = \theta_t - \beta \sum_{T_i \sim T} \nabla_{\theta_t} \mathcal{L}\big(\theta_i', D_{T_i}^{\mathrm{Query}}\big).$$

Because the hyperparameters $\alpha$ and $\beta$ are fixed in MAML, they cannot adapt as the network changes, which makes the training process fluctuate. The Meta-SGD algorithm [27] adjusts the $\alpha$ parameter automatically to increase stability; its execution steps are given in Algorithm 2. Meta-SGD can learn task-agnostic features rather than simply adapting to task-specific features [28]. Literature [29] used Meta-SGD's learned learning rate to predict streaming time-series data online and achieved good results.

### 2.5. Multiclass AdaBoost Method

AdaBoost [30] is a multimodel ensemble method whose final output is the weighted sum of the outputs of the integrated classifiers. AdaBoost was originally proposed by Freund for the binary classification problem; by introducing a multiclass exponential loss in the forward stagewise additive model, AdaBoost can also be applied to multiclass problems. The sample weights are updated continually during training, and samples that have not been identified correctly are assigned greater weights. The AdaBoost training process is shown in Figure 2.

Figure 2 AdaBoost model structure.

SAMME [31] is an implementation of multiclass AdaBoost; its specific steps are shown in Algorithm 1.

Algorithm 1: SAMME.
Input: $M$ classifiers, training samples, and test samples.
Output: A strong classifier and the test sample prediction values.
(1) Initialize the observation weights $w_i = 1/n$, $i = 1, 2, \ldots, n$;
(2) for $m = 1$ to $M$:
(3) Fit a classifier $f_m(x)$ to the training data using the weights $w_i$;
(4) Compute $err_m = \sum_{i=1}^{n} w_i \, \mathbb{I}(c_i \neq f_m(x_i)) \,\big/\, \sum_{i=1}^{n} w_i$;
(5) Compute $\alpha_m = \log\big((1 - err_m)/err_m\big) + \log(K - 1)$, where $K$ is the total number of sample classes;
(6) Set $w_i \leftarrow w_i \cdot e^{\alpha_m \cdot \mathbb{I}(c_i \neq f_m(x_i))}$, $i = 1, \ldots, n$;
(7) Renormalize the $w_i$;
(8) end
(9) Output the strong classifier $\mathrm{Output}(x) = \arg\max_k \sum_{m=1}^{M} \alpha_m \cdot \mathbb{I}(f_m(x) = k)$.
End

Literature [32] hybridized AdaBoost with a linear support vector machine model and developed a diagnostic system to predict hepatitis disease; the results demonstrate that the strength of a conventional support vector machine model is improved by 6.39%. Literature [33] studied using the AdaBoost framework to integrate other methods and proposed a wind turbine fault feature extraction method based on the AdaBoost framework and SGMD-CS. These experiments show that AdaBoost is an effective multimodel integration framework. To realize bearing fault identification, this paper uses the AdaBoost framework to integrate multiple fault identification classifiers designed on the basis of CNNs and residual networks.

## 3. Our Method

### 3.1. Data Preprocessing and Building the New Feature Set

Min-max normalization (2) is used to preprocess the training samples ($N\_Train$ samples) and the test samples ($N\_Test$ samples). The training samples are divided into a training set and a verification set at a ratio of 5 : 1. One sample each from the training set, verification set, and test set is selected as input for calculating the best value of the time delay $\tau$ and the phase space dimension $M$ (Section 2.1). Assume that each training sample has $L$ data points.
According to (1), when the sampling frequency of the bearing vibration signal is $F$ and phase space vectors are not reused, we obtain $\frac{5}{6} N\_Train \cdot \frac{L - M + \tau F}{\tau F \cdot M}$ coordinates for the training set, $\frac{1}{6} N\_Train \cdot \frac{L - M + \tau F}{\tau F \cdot M}$ coordinates for the verification set, and $N\_Test \cdot \frac{L - M + \tau F}{\tau F \cdot M}$ coordinates for the test set:

$$\bar{x} = \frac{x - \min(x)}{\max(x) - \min(x)}. \quad (2)$$

Algorithm 2: Meta-SGD.

Input: The task distribution $p(T)$ and the learning rate $\beta$.
Output: $\theta$.
(1) Initialize $\theta$ and $\alpha$;
(2) while not done do
(3) Sample a batch of tasks $T_i \sim p(T)$;
(4) for all $T_i$ do
(5) $\mathcal{L}_{T_i}^{\mathrm{Support}}(\theta_t) \leftarrow \frac{1}{|\mathrm{Support}_{T_i}|} \sum_{(x,y) \in \mathrm{Support}_{T_i}} l\big(y, f_{\theta_t}(x)\big)$;
(6) $\theta_i' = \theta_t - \alpha \nabla_{\theta_t} \mathcal{L}_{T_i}^{\mathrm{Support}}(\theta_t)$;
(7) $\mathcal{L}_{T_i}^{\mathrm{Query}}(\theta_i') \leftarrow \frac{1}{|\mathrm{Query}_{T_i}|} \sum_{(x,y) \in \mathrm{Query}_{T_i}} l\big(y, f_{\theta_i'}(x)\big)$;
(8) end
(9) $(\theta_{t+1}, \alpha_{t+1}) \leftarrow (\theta_t, \alpha_t) - \beta \nabla_{(\theta_t, \alpha_t)} \sum_{T_i} \mathcal{L}_{T_i}^{\mathrm{Query}}(\theta_i')$;
End

The reconstructed phase space coordinates are combined with the labels of the original signals to build new training samples. Since the phase space has the same topological properties as the original bearing vibration signal system, the regularity of the bearing time-series signal in the high-dimensional space is restored in the coordinates. Therefore, any coordinate in the phase space represents a state of the original vibration signal system and contains the corresponding features. Compared with the features contained in the original signals, the features in the phase space coordinates are more salient and easier for a classifier to identify.

### 3.2. Bearing Fault Classifier and Its Training

Our bearing fault classifier is designed on the basis of CNN and the residual block. The classifier uses 7 network layers, in which each Conv_x module consists of convolution layers and ReLU activation operations. The fully connected layer uses 100 neurons, and the output layer uses 4 neurons. The first layer of our network uses a larger convolution kernel, and later layers reduce the kernel size. The detailed structure is shown in Figure 3.

Figure 3 The detailed structure of the bearing fault classifier.

Our bearing fault classifier takes the phase space coordinates of the bearing vibration signal as input.
In the classifier, the input matrix first flows through Conv1_x and then through Conv2_x, Conv3_x, the fully connected layer, and the output layer in sequence. In the Conv1_x module, the input matrix is convolved according to (3) with 64 different 7 × 7 convolution kernels. To avoid drift in the distribution of the results after the convolution operation, the batch normalization (BN) technique is used to standardize the convolution results so that they obey a normal distribution with mean 0 and variance 1. The BN result is then transformed nonlinearly by the ReLU activation function ($f(x) = \max(0, x)$). Unlike Conv1_x, the Conv2_x module first applies max pooling, which takes the maximum value within a fixed-size sliding window to reduce the density of the data features, and then applies two convolution layers with smaller kernels. After the second BN operation, Conv2_x feeds the sum of the max-pooling result (the observed value of the residual block) and the BN result into the nonlinear activation function. Conv3_x also has two convolution layers; the difference is that, to give the observed value of Conv3_x's residual block the same shape as its second convolution result, the observed value is processed with a 1 × 1 convolution. The fully connected layer and the output layer process their input as in equation (4):

$$y_{u,\mu}^{l+1} = \sum_{n=1}^{K} \sum_{m=1}^{K} x^{l}(n+u,\, m+\mu) \cdot w_i^{l}(n, m) + b^{l}, \quad (3)$$

$$y_i^{l+1} = \mathrm{ReLU}\left( \sum_{j=1}^{J} x_j^{l} \cdot w_{i,j}^{l} + b_i^{l} \right). \quad (4)$$

In (3), $x^l$ is the input matrix of the $l$-th layer, $K$ is the size of the convolution kernel, and $w_i^l$ and $b^l$ are the connection weights and bias of the $i$-th convolution kernel of the $l$-th layer (all convolution kernels of the same convolution layer share the same bias value).
In equation (4), $i$ is the serial number of the neuron in the fully connected layer, $j$ is the position number within the input, $w_{i,j}^{l}$ is the weight from the $j$-th input value to the $i$-th neuron, and $b_i^{l}$ is the bias of the $i$-th neuron of the $l$-th layer.

To prevent overfitting during training, we add the following check after step 9 of Algorithm 2 and use the modified Algorithm 2 to update the network parameters of our bearing fault classifier:

If Acc_of_Train ≥ 0.9 and Acc_of_Train − Acc_of_Validation > 0.1:
    Number_of_Overfitting++;
else:
    Number_of_Overfitting = 0;
If Number_of_Overfitting > 5:
    break;

### 3.3. The Integration Steps of Multiple Bearing Fault Classifiers

Combining the multiple weak bearing fault identification classifiers designed in Section 3.2 with the AdaBoost algorithm yields a stronger bearing fault identification classifier. Unlike SAMME, however, our method divides the training data into a support set and a query set, computes a weight for each sample of the support and query sets, and updates the classification error with these sample weights. The integration steps of our bearing fault classifiers are shown in Algorithm 3.

Algorithm 3: Our bearing fault identification algorithm.
Inputs: The number of classifiers, training and test samples composed of rolling bearing vibration signals, and the sample labels.
Outputs: A bearing fault identification classifier and the prediction values of the test samples.
(1) Set $\beta = 0.001$ for Algorithm 2;
(2) for all classifiers do:
(3) Decompose the bearing vibration signals and construct the new training, verification, and test sets following the data preprocessing steps of Section 3.1;
(4) Divide the training set into support and query sets at a ratio of 1 : 1;
(5) Initialize the sample weights of the support and query sets as $w_i^{\mathrm{support}} = 1/Num\_s$, $i = 0, \ldots, Num\_s - 1$, and $w_j^{\mathrm{query}} = 1/Num\_q$, $j = 0, \ldots, Num\_q - 1$, where $Num\_s$ and $Num\_q$ are the numbers of samples in the support and query sets;
(6) Set the target loss function to the weighted cross-entropy $l_{CE}(p, q) = -\sum_x \big[ p(x)\log q(x) + (1 - p(x))\log(1 - q(x)) \big] \cdot w(x)$, where $p$ is the predicted value, $q$ is the true value, and $w$ is the sample weight;
(7) Update the parameters of the first classifier using the support and query sets according to the modified Meta-SGD;
(8) Calculate the identification error rate on the training set according to equation (2);
(9) Calculate the weight coefficient of the classifier according to equation (3);
(10) Update the sample weights of the training samples and normalize them according to (4);
(11) Initialize the network parameters of the next classifier with those of the previous classifier;
(12) Train the next classifier on the training set updated with the new weights, again using the modified Meta-SGD;
(13) end
(14) Calculate the prediction values of the test samples according to equation (5); output the prediction values and the integrated classifier.
End

This algorithm uses the Meta-SGD learning strategy to decrease the objective loss and update the network parameters; it not only converges quickly, as is characteristic of meta-learning strategies, but also adjusts the learning rate to the learning task.
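Steps (8)–(10) of Algorithm 3 follow the SAMME update of Algorithm 1. A minimal NumPy sketch of one such boosting round (names are ours; the real method computes `y_pred` with the Meta-SGD-trained classifier):

```python
import numpy as np

def samme_round(w, y_true, y_pred, n_classes):
    """One SAMME round: weighted error err_m, classifier weight alpha_m,
    and the reweighted, renormalized sample weights."""
    miss = (y_true != y_pred).astype(float)
    err = np.sum(w * miss) / np.sum(w)
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)
    w = w * np.exp(alpha * miss)   # boost the misclassified samples
    return err, alpha, w / w.sum()

w0 = np.full(6, 1 / 6)                 # initial weights w_i = 1/n
y_true = np.array([0, 1, 2, 3, 0, 1])
y_pred = np.array([0, 1, 2, 3, 0, 2])  # one misclassified sample
err, alpha, w1 = samme_round(w0, y_true, y_pred, n_classes=4)
```

After the round, the misclassified sample carries the largest weight, so the next classifier (initialized from the previous one, step (11)) concentrates on it.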
The initialization of neural network parameters has an important impact on training. To give each network good initial parameters, the algorithm initializes the parameters of the next classifier with those of the previous classifier during the iterative training process.

The algorithm requires two hyperparameters to be fixed in advance: the learning rate $\beta$ of Algorithm 2 and the number of data points per training sample. Both strongly influence the algorithm. First, $\beta$ affects the learning speed of Meta-SGD; second, the number of data points per sample determines the shape of the classifier input and affects how well the coordinate delay method performs. If it is too large, interfering data are introduced; if it is too small, the high-dimensional phase space of the bearing vibration time series cannot be established accurately. In this paper, we set $\beta = 0.001$ and use 1024 data points per sample.

## 4. Experiment

Fault identification accuracy under different working conditions is an important indicator for evaluating rolling bearing fault identification methods. This paper verifies the effectiveness of the proposed method by computing test accuracies of bearing fault identification on artificial and natural bearing fault datasets collected under different loads, speeds, and fault conditions.

The experiments were conducted on the TensorFlow 2.7 CPU platform, programmed in Python. The hardware and software environment comprised a Core i7-4790K 4.0 GHz processor, 16 GB of memory, and the Windows Server 2018 operating system.

To eliminate the effect of chance, every experiment in this paper was performed 5 times independently, and the average of the 5 test accuracies is reported as the experimental result.
### 4.1. Experiments on the CWRU Dataset

The artificial bearing fault dataset CWRU [34] consists of bearing vibration time-series signals in the normal, inner race fault, outer race fault, and rolling element fault states. The CWRU dataset is a representative dataset in the field of bearing fault diagnosis, and many scholars have obtained positive results using it for simulation experiments [35, 36].

#### 4.1.1. Experimental Data

The bearing vibration signals used in this case were sampled from 6205-2RS JEM SKF rolling bearings at a sampling frequency of 12 kHz. The fault types of these signals are associated with 4 different bearing damage diameters, recorded as A (0), B (0.007 inches), C (0.014 inches), and D (0.021 inches). The signals were cut into segments of 1024 data points, each used as an experimental sample. The experimental samples were selected from the 16 working conditions described in Table 1 and used to carry out three groups of small-sample experiments. Each group randomly selected 10 samples per class for testing; these test samples have the same labels and working conditions as the training samples.

Table 1 Work conditions of the CWRU experiment.

| No. | Load (hp) | Motor speed (rpm) | Damage degree |
| --- | --- | --- | --- |
| 1–4 | 1 | 1772 | A, B, C, D |
| 5–8 | 2 | 1750 | A, B, C, D |
| 9–12 | 3 | 1730 | A, B, C, D |
| 13–16 | 0 | 1797 | A, B, C, D |

#### 4.1.2. Experiment and Result Analysis

According to Algorithm 3, three groups of experiments were conducted. The first group was a variable-power experiment: samples of the normal, inner race fault, outer race fault, and rolling element fault classes were randomly selected from the 8 working conditions numbered 1, 5, 9, 13, 2, 6, 10, and 14 in Table 1 to carry out 4-way, 20-shot experiments. Similarly, the second group was a variable-fault-degree experiment, in which samples were randomly selected from the four working conditions numbered 1, 2, 3, and 4.
The third group was a variable-power-and-fault-degree experiment, in which samples were randomly selected from the six working conditions numbered 1, 3, 4, 5, 10, and 14. The sample distribution of the three experiments is shown in Table 2, and the test accuracies are reported in Table 3.

Table 2 The bearing status, label, and quantity of the experimental samples.

| Experiment No. | Condition no. | Normal (label / qty) | Inner fault (label / qty) | Outer fault (label / qty) | Rolling fault (label / qty) |
| --- | --- | --- | --- | --- | --- |
| 1 | 1, 5, 9, 13 | 0 / 5 each | — | — | — |
| 1 | 2, 6, 10, 14 | — | 1 / 5 each | 2 / 5 each | 3 / 5 each |
| 2 | 1 | 0 / 20 | — | — | — |
| 2 | 2, 3 | — | 1 / 7 each | 2 / 7 each | 3 / 7 each |
| 2 | 4 | — | 1 / 6 | 2 / 6 | 3 / 6 |
| 3 | 1, 5 | 0 / 10 each | — | — | — |
| 3 | 3, 4, 10, 14 | — | 1 / 5 each | 2 / 5 each | 3 / 5 each |

Table 3 Results of the three groups of experiments.

| Experiment number | Test accuracy (4-way, 20-shot) | Standard deviation |
| --- | --- | --- |
| 1 | (92.5 + 97.0 + 95.0 + 100 + 97.5)/5 = 96.4 | 2.5 |
| 2 | (97.5 + 92.5 + 100 + 92.5 + 95.0)/5 = 95.5 | 2.9 |
| 3 | (92.5 + 100 + 95.0 + 92.0 + 95.0)/5 = 95.0 | 2.7 |

In the three groups of experiments, with 20 training samples per class (normal, inner race fault, outer race fault, and rolling element fault) and working conditions combining the three factors of fault degree, speed, and load, the test accuracies of the proposed method exceed 95% and the standard deviations of the accuracies are within 3. Figure 4 shows the convergence of the loss value and the validation accuracy for the 4-way, 20-shot experiment with variable power and fault degree.

Figure 4 Convergence process of loss value and validation accuracy metrics in one training session. (a) Convergence process of loss value. (b) Convergence process of verification accuracy.

During the training of the first classifier, the value of the target loss function continued to decrease, the validation accuracy continued to increase, and the network parameter values were continuously optimized; the test accuracy of this classifier exceeded 90%.
After the first classifier, the second, third, fourth, and fifth classifiers were fine-tuned on the training dataset with the updated sample weights. Finally, five different classifiers were obtained, which complemented each other; the test accuracy of each was above 90%, and the final test accuracy of our method reached 100%.

The above N-way, K-shot experiments show that the proposed method can, to a certain extent, filter out the influence of the three factors of speed, load, and fault degree on bearing fault identification, whether applied under constant or variable conditions. The proposed method has good accuracy and stability for small-sample bearing fault identification across different speeds, loads, and fault degrees.

### 4.2. Experiments on the XJTU-SY Dataset

To further verify the effectiveness of the proposed method, the natural bearing fault dataset XJTU-SY [37] was used in our experiments.

#### 4.2.1. Experimental Data

The sampling frequency in the XJTU-SY experiment is 25.6 kHz, and the XJTU-SY dataset also includes faults of the outer race, inner race, and rolling elements. Bearing vibration signals sampled under eight different conditions were randomly selected to form the training and test sets. Each experimental sample has 1024 data points; the sample types and labels are shown in Table 4.

Table 4 Bearing fault state and its label in the XJTU-SY dataset.

| Condition no. | Load (kN) | Rotating speed (rpm) | Normal | Inner fault | Outer fault | Rolling element fault |
| --- | --- | --- | --- | --- | --- | --- |
| 17 | 10 (h3) | 2400 | — | 1 | 2 | — |
| 18 | 11 (h2) | 2250 | 0 | 1 | 2 | 3 |
| 19 | 12 (h1) | 2100 | 0 | — | — | 3 |

#### 4.2.2. Experimental Results and Analysis

The 4-way, 20-shot experiment (experiment 4) was carried out by randomly selecting normal, inner fault, outer fault, and rolling element fault samples from the working condition numbered 18 in Table 4. The test results are shown in Table 5.

Table 5 Test results of experiment 4.
| Test accuracy | Standard deviation |
| --- | --- |
| (95 + 96.25 + 97.5 + 96.5 + 95)/5 = 96.05 | 1.2 |

The results of experiment 4 show that the identification accuracy of the proposed method for natural bearing faults is still high with a small number of training samples: the 4-way, 20-shot test accuracy exceeds 96% when the training and test samples come from the same working condition. Figure 5 shows the confusion matrices of the predicted test-sample values in the 5 runs.

Figure 5 Confusion matrix of test samples in the 4-way, 20-shot runs of experiment 4.

From the confusion matrices, the predicted values for label 0 were always consistent with the actual values, giving a prediction accuracy of 100%; the prediction accuracy of the other labels fluctuated slightly, with the errors mainly coming from predicting label 1 as label 3. In the end, the prediction results of each classifier were excellent: the prediction accuracy of label 1 was above 90% in 4 consecutive runs, that of label 2 exceeded 99%, and that of label 3 exceeded 90% in four runs. The 100% recognition rate for fault versus nonfault samples means that the proposed method can accurately distinguish faulty samples from healthy ones, and the 94.7% fault localization accuracy shows that the method also identifies natural bearing faults well under known working conditions.

Our method can be viewed as a combination of CDPSR (coordinate delay phase space reconstruction) data preprocessing, a residual network, Meta-SGD, and AdaBoost, and it is insensitive to changes in bearing load, rotational speed, and fault degree. To analyze the contribution of each part, we performed ablation experiments.
First, we analyzed the contribution of the residual network designed in Section 3.2 by comparing two combinations, CNN + CDPSR + Meta-SGD + AdaBoost and ResNet + CDPSR + Meta-SGD + AdaBoost, in which the CNN consisted of a stack of plain convolutional layers and max-pooling layers. Second, the effect of the Meta-SGD learning strategy was analyzed by comparing ResNet + CDPSR + Meta-SGD + AdaBoost with ResNet + CDPSR + SGD + AdaBoost, where the use of SGD is described in reference [23]. Then, by comparing ResNet + Meta-SGD + AdaBoost with ResNet + CDPSR + Meta-SGD + AdaBoost, we examined the influence of the reconstructed bearing vibration time-series signal on the method. Finally, the influence of AdaBoost was analyzed by comparing ResNet + CDPSR + Meta-SGD with ResNet + CDPSR + Meta-SGD + AdaBoost. The test results of the ablation experiments are shown in Figure 6.

Figure 6. The results of the ablation experiments.

From the results in Figure 6, the bearing fault recognition accuracy of the classifier using the residual network was 6% higher than that of the CNN classifier, Meta-SGD improved bearing fault recognition more than SGD did, and the CDPSR method had a positive influence on fault identification. From the variance of the test accuracies of ResNet + Meta-SGD + CDPSR and ResNet + Meta-SGD + CDPSR + AdaBoost, we can see that AdaBoost also acted as a stabilizer for the proposed method. Hence, all four parts (CDPSR, the residual network, Meta-SGD, and AdaBoost) made positive contributions: the residual network and Meta-SGD effectively improve the test accuracy, while CDPSR data preprocessing and AdaBoost significantly improve the stability of the method.

To analyze the advantages of our method, a comparison experiment was carried out under the same conditions as experiment 4.
The test accuracies of the proposed method, the WDCNN method [16], and the CNN-SVM method [17] were compared.

The WDCNN method, proposed by Zhang et al. in 2017, is built around a specific deep network (the WDCNN network) that uses wide kernels in the first convolutional layer and small convolutional kernels in the subsequent layers. The method slices the training samples with overlap to obtain large amounts of data, uses raw vibration signals as input, and trains the WDCNN network with the backpropagation algorithm detailed in reference [16].

The CNN-SVM method combines the merits of CNN and SVM: it first uses a 2D representation of the raw vibration signals as input, then trains the original CNN (with its output layer) for several epochs with stochastic gradient descent until training converges, and finally replaces the output layer with an SVM using a radial basis function (RBF) kernel. The optimal scheme used in this method is elaborated in reference [17].

The WDCNN and CNN-SVM methods are typical methods in bearing fault recognition. The fault identification results of the comparison experiment are shown in Figure 7.

Figure 7. Results of the comparison experiment.

In the comparison, the residual network designed for our method and the network used in the WDCNN method have the same number of layers and the same scale of learnable parameters. The average result of the five CNN-SVM runs was 82.5%, and the average result of WDCNN was 90.5%. The test accuracies of both WDCNN and our method were above 90%, and our result was 96%; thus the test accuracy of our method was higher than that of CNN-SVM and WDCNN. Moreover, the test accuracies of our method ranged from 95% to 97.5%, a relatively narrow spread, while the ranges of the other two methods were much wider.
Therefore, the proposed method improves the accuracy of bearing fault identification to a certain extent, and its stability is better.

## 4.1. Experiments on the CWRU Dataset

The artificial bearing fault dataset CWRU [34] consists of bearing vibration time-series signals in the normal, inner race fault, outer race fault, and rolling element fault states. The CWRU dataset is a representative dataset in the field of bearing fault diagnosis, and many scholars have obtained positive results using it in simulation experiments [35, 36].

### 4.1.1. Experimental Data

The bearing vibration signals used in this case were sampled from 6205-2RS JEM SKF rolling bearings at a sampling frequency of 12 kHz. The fault types of these signals are associated with 4 different bearing damage diameters, recorded as A (0), B (0.007 inches), C (0.014 inches), and D (0.021 inches). These signals were cut into segments of 1024 data points, each used as an experimental sample. The experimental samples are selected from the 16 working conditions described in Table 1 and are used to carry out three groups of small-sample experiments. Each group randomly selected 10 samples per class for testing; these test samples have the same labels and working conditions as the training samples.

Table 1. Work conditions of the CWRU experiment.

| No. | 1–4 | 5–8 | 9–12 | 13–16 |
|---|---|---|---|---|
| Load (hp) | 1 | 2 | 3 | 0 |
| Motor speed (rpm) | 1772 | 1750 | 1730 | 1797 |
| Damage degree | A, B, C, D | A, B, C, D | A, B, C, D | A, B, C, D |

### 4.1.2. Experiment and Result Analysis

According to Algorithm 3, three groups of experiments were conducted. The first group was a variable-power experiment: samples of normal, inner race fault, outer race fault, and rolling element fault were randomly selected from the 8 working conditions numbered 1, 5, 9, 13, 2, 6, 10, and 14 in Table 1 to carry out 4-way and 20-shot experiments.
Similar to the first group, the second group was a variable-fault-degree experiment, in which samples were randomly selected from the four operating conditions numbered 1, 2, 3, and 4. The third group was a variable-power-and-fault-degree experiment, with samples randomly selected from the six operating conditions 1, 3, 4, 5, 10, and 14. The sample distribution of the three experiments is shown in Table 2. The test accuracies were calculated, and the results are shown in Table 3.

Table 2. The bearing status, label, and quantity of the experimental samples (entries are label/quantity per condition).

| No. | Condition no. | Normal | Inner fault | Outer fault | Rolling fault |
|---|---|---|---|---|---|
| 1 | 1, 5, 9, 13 | 0/5 each | | | |
| 1 | 2, 6, 10, 14 | | 1/5 each | 2/5 each | 3/5 each |
| 2 | 1 | 0/20 | | | |
| 2 | 2, 3 | | 1/7 each | 2/7 each | 3/7 each |
| 2 | 4 | | 1/6 | 2/6 | 3/6 |
| 3 | 1, 5 | 0/10 each | | | |
| 3 | 3, 4, 10, 14 | | 1/5 each | 2/5 each | 3/5 each |

Table 3. Results of the three groups of experiments.

| Experiment number | 4-way and 20-shot test accuracy | Standard deviation |
|---|---|---|
| 1 | (92.5 + 97.0 + 95.0 + 100 + 97.5)/5 = 96.4 | 2.5 |
| 2 | (97.5 + 92.5 + 100 + 92.5 + 95.0)/5 = 95.5 | 2.9 |
| 3 | (92.5 + 100 + 95.0 + 92.0 + 95.0)/5 = 95.0 | 2.7 |

In the three groups of experiments, with only 20 training samples per class (normal, inner race fault, outer race fault, and rolling element fault) and working conditions varying in fault degree, speed, and load, the test accuracies of the proposed method are at least 95% and the standard deviations are within 3. Figure 4 shows the convergence of the loss value and the validation accuracy in the 4-way and 20-shot experiment with variable power and fault degree.

Figure 4. Convergence of loss value and validation accuracy metrics in one training session. (a) Convergence of the loss value. (b) Convergence of the validation accuracy.
During the training of the first classifier, the value of the target loss function continued to decrease, the validation accuracy continued to increase, and the network parameters were continuously optimized; the test accuracy of this classifier exceeded 90%. The remaining four classifiers were then fine-tuned on the training dataset with different sample weights, yielding five complementary classifiers, each with a test accuracy above 90%, and a final ensemble test accuracy of 100%.
## 5. Conclusions

A novel bearing fault identification method for multiple working conditions and small samples was proposed to address the problems of scarce fault data and poor identification performance. To verify its effectiveness, artificial and natural bearing fault signals were used as case studies. The results show that the proposed method achieves accurate fault identification under multiple working conditions and small samples, and its bearing fault positioning accuracy exceeds 90%. Benefiting from the reconstruction of the high-dimensional phase space of the bearing vibration time series by the coordinate delay method, the extraction of phase space features by the convolutional neural network, the propagation of gradients across layers by the residual blocks, the updating of classifier parameters by Meta-SGD, and the integration of multiple classifiers by AdaBoost, the proposed method attains strong bearing fault feature extraction and high fault identification ability. Finally, compared with other advanced methods, the proposed method also shows certain advantages. These cases demonstrate that the proposed method is effective.

The proposed method can accurately identify bearing faults under small samples and multiple working conditions without manually designed fault features. It therefore has value in application areas with complex working conditions where large numbers of bearing fault samples are difficult to obtain, such as aviation bearings.

---

*Source: 1016954-2022-08-13.xml*
# Bearing Fault Identification Method under Small Samples and Multiple Working Conditions

**Authors:** Yuhui Wu; Licai Liu; Shuqu Qian; Jianyong Tian

**Journal:** Mobile Information Systems (2022)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1016954
---

## Abstract

Aiming at the low bearing fault identification accuracy of deep-neural-network-based methods under small samples and multiple working conditions, a novel bearing fault identification method is proposed that combines the coordinate delay phase space reconstruction (CDPSR) method, a residual network, the Meta-SGD algorithm, and AdaBoost technology. The proposed method first calculates the high-dimensional phase space coordinates of bearing vibration signals using the CDPSR method and uses these coordinates to construct a training set, then learns and updates the parameters of the classifier networks with the Meta-SGD algorithm on the training set, iteratively trains multiple classifiers, and finally integrates those classifiers into a strong classifier with AdaBoost. The 4-way and 20-shot experiments on artificial and natural bearing faults show that the proposed method identifies fault and nonfault samples with 100% accuracy, and its fault location accuracy is over 90%. Compared with state-of-the-art methods such as WDCNN and CNN-SVM, the proposed method improves fault identification accuracy and stability to a certain extent. Its high identification accuracy under small samples and multiple working conditions makes it applicable in practical areas where working conditions are complex and bearing fault signals are difficult to obtain.

---

## Body

## 1. Introduction

From the application perspective, bearing fault identification can be divided into two types: single working conditions and multiple working conditions. Bearing fault identification with small samples under multiple working conditions refers to the problem of predicting the fault type of test bearing samples given only a few fault training samples collected from complex working conditions.
The problem is also called the N-way and K-shot multiworking-condition bearing fault identification problem, where the training data include N classes, each class has only K samples, and K is no more than 20. The motion of a faulty bearing is nonlinear. To fit the nonlinear features contained in rolling bearing vibration signals, many studies have used neural networks to learn vibration signal features and obtained good bearing fault identification results by establishing mappings between fault features and faults. For example, literature [1] and literature [2] used convolutional neural networks (CNNs) to repeatedly learn the time-frequency features of a large number of original bearing vibration signals and realized precise classification of bearing faults. However, fault identification methods based on deep neural networks usually require large amounts of training data, and with small samples they can hardly exploit their ability to fit nonlinear features. Literature [3] and literature [4] show that a common problem of small-sample learning is weak feature extraction and poor identification accuracy. Literature [5] shows that the accuracy of bearing fault identification based on vibration signal analysis is affected by the bearing's operating conditions. Rotation speed, load, and faults are the key factors affecting the motion state of the bearing rotor. As the bearing speed increases, faults such as rubbing further intensify the nonlinear features of the rotor, and its motion may even evolve into chaotic motion. Bearing fault identification under multiple working conditions is therefore more difficult than identification under a single working condition.
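The N-way and K-shot setting just described can be made concrete with a small task-sampling sketch. This is our own minimal illustration; the dataset layout and function names are assumptions, not the authors' code:

```python
import random

def sample_task(dataset, n_way=4, k_shot=20, n_query=10, seed=None):
    """Sample one N-way, K-shot task (illustrative sketch, not the authors' code).

    `dataset` maps a class label to a list of samples. Returns a support set
    with K samples per class and a disjoint query set for testing.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)              # pick N classes
    support, query = [], []
    for label in classes:
        pool = rng.sample(dataset[label], k_shot + n_query)   # without replacement
        support += [(x, label) for x in pool[:k_shot]]
        query += [(x, label) for x in pool[k_shot:]]
    return support, query

# 4-way and 20-shot with 10 query samples per class, as in the experiments
data = {c: list(range(100)) for c in range(4)}   # placeholder samples
support, query = sample_task(data, n_way=4, k_shot=20, n_query=10, seed=0)
```

Keeping the support and query pools disjoint mirrors the requirement that test samples never appear among the K training shots.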
To improve the identification accuracy of deep-neural-network-based methods for multiworking-condition bearing faults, literature [6] and literature [7] used attention, label smoothing, and other auxiliary algorithms to optimize the training of the neural network. With limited training data, the identification performance of the optimized deep CNN improved, but the number of training samples per class still exceeded 20. At present, bearing fault identification with small samples under multiple working conditions is one of the research hotspots in deep neural network applications.

In recent years, some scholars have studied bearing fault identification under small samples and multiple working conditions from different viewpoints. Literature [8] used a CNN to detect early motor bearing faults, and literature [9] used a CNN to extract features from spectrum images of bearing vibration signals. Different neural network structures have different capabilities for extracting bearing fault features, and the study of network structures for improving feature extraction from bearing vibration signals has received widespread attention. The training strategy of neural networks has also been a main direction of deep neural network research; MAML and SAMME are outstanding works in this area. Data preprocessing technology generates new training samples through geometric transformations and is widely applied to prevent neural networks from overfitting when learning from small samples. Literature [10] researched data preprocessing methods based on the sparse autoencoder, and literature [11] used sliding sampling to enhance bearing vibration data. Fault identification for small samples based on learning strategies and data preprocessing is now one of the important research directions in bearing fault diagnosis.
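The sliding (overlapped) sampling used in literature [11] to enlarge small vibration datasets can be sketched as follows. The 1024-point window matches the sample length used in the experiments of this paper, while the stride value is an illustrative assumption:

```python
import numpy as np

def sliding_windows(signal, window=1024, stride=256):
    """Cut a 1-D vibration signal into overlapping segments.

    With stride < window, consecutive segments overlap, so one long
    recording yields many training samples (sketch; stride is our choice).
    """
    n = (len(signal) - window) // stride + 1
    return np.stack([signal[i * stride:i * stride + window] for i in range(n)])

x = np.random.randn(12000)   # about 1 s of signal at a 12 kHz sampling rate
segs = sliding_windows(x)    # overlapped 1024-point samples
print(segs.shape)            # (43, 1024)
```

Each adjacent pair of segments shares window − stride points, which is what turns one recording into many (correlated) training samples.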
In this paper, fault signals collected from multiple working conditions with different speeds, different loads, and different fault degrees were used as the research objects. We first employed the coordinate delay phase space reconstruction (CDPSR) method to process the bearing vibration signals, then designed a deep bearing fault identification neural network based on CNN and the residual network to produce several classifiers, and finally used the AdaBoost method to train and integrate these bearing fault classifiers into a stronger one.

The rest of this paper is organized as follows. Section 2 introduces related technologies such as the CDPSR method, the residual network, the Meta-SGD algorithm, and AdaBoost. Section 3 details our bearing fault identification method: Section 3.1 presents the data preprocessing and the building of the new training set, Section 3.2 designs the network structure of the bearing fault classifier and its training method, and Section 3.3 develops the integration step of multiple bearing fault classifiers. Section 4 describes the validation experiments on the artificial and natural bearing fault datasets and discusses the results. Finally, Section 5 concludes this paper and highlights the significance of the proposed method.

## 2. Related Work

### 2.1. Phase Space Reconstruction

The coordinate delay method [12] is a specific implementation of phase space reconstruction, which can be used to calculate the phase space coordinates of bearing vibration time-series signals. Suppose x(t) is a one-dimensional time-series signal, 0τ, 1τ, …, (M−1)τ are the delay times, and M is the embedding dimension of the high-dimensional phase space.
According to the coordinate delay method, the M-dimensional phase space coordinates can be represented as equation (1), where i = 0, …, M−1:

$$X(t)=\begin{bmatrix}X_0(t)\\X_1(t)\\\vdots\\X_{M-1}(t)\end{bmatrix}=\begin{bmatrix}x(t)&x(t+\tau)&\cdots&x(t+(M-1)\tau)\\x(t+1)&x(t+1+\tau)&\cdots&x(t+1+(M-1)\tau)\\\vdots&&&\vdots\\x(t+M-1)&x(t+M-1+\tau)&\cdots&x(t+M-1+(M-1)\tau)\end{bmatrix}.\tag{1}$$

Solving the phase space coordinates of a time sequence requires τ and M. There are various methods for calculating the delay time τ, such as the autocorrelation method and methods based on mutual information. The autocorrelation method for solving τ is simple and effective. The autocorrelation function of the time sequence, R(τ) = (1/N)∑_{i=1}^{N−τ} x_i · x_{i+τ}, is a function of τ; when R(τ) drops to R(0)·(1 − e^{−1}), the corresponding lag is the best delay time τ for reconstructing the phase space [13].

When the delay time τ is known, the Cao algorithm [14] can be used to solve the embedding dimension M, and it requires only a small amount of data. Denote the i-th d-dimensional reconstruction vector as y_i(d) = (x_i, x_{i+τ}, …, x_{i+(d−1)τ}), i = 1, …, N−(d−1)τ, and define a(i, d) = ‖y_i(d+1) − y_{n(i,d)}(d+1)‖ / ‖y_i(d) − y_{n(i,d)}(d)‖, where y_i(d+1) is the i-th (d+1)-dimensional reconstruction vector and n(i, d) ∈ {1, …, N−dτ} indexes the nearest neighbor of y_i(d) in the d-dimensional phase space. The mean of a(i, d) is E(d) = (1/(N−dτ))∑_{i=1}^{N−dτ} a(i, d), and E1(d) = E(d+1)/E(d). If E1(d) stops changing from a certain d₀, then d₀ + 1 is the minimum embedding dimension.

### 2.2. Convolution Neural Network

The convolution neural network (CNN) [15] is made up of neurons with learnable weights and biases; it uses convolution layers and nonlinear activation functions to abstract the original data layer by layer into the features required for specific tasks, achieving a mapping from features to targets. A CNN is a sequence of layers, mainly including convolution layers, pooling layers, and fully connected layers, which can be stacked to form a full CNN architecture. Since CNNs have good nonlinear fitting capabilities, they are widely used in fields such as image feature extraction and voice feature analysis.
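The layer stacking just described can be sketched with a numpy-only toy pipeline (convolution → ReLU → max-pooling → fully connected). All shapes, kernel sizes, and variable names below are illustrative assumptions, not the classifier proposed later in this paper.

```python
import numpy as np

def conv2d(x, kernel, bias=0.0):
    """Valid 2-D convolution (cross-correlation, as in CNN libraries)."""
    K = kernel.shape[0]
    H, W = x.shape
    out = np.empty((H - K + 1, W - K + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            out[u, v] = np.sum(x[u:u + K, v:v + K] * kernel) + bias
    return out

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Take the maximum in non-overlapping size x size windows."""
    H, W = x.shape[0] // size, x.shape[1] // size
    return x[:H * size, :W * size].reshape(H, size, W, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))                                 # toy input "image"
feat = max_pool(relu(conv2d(x, rng.standard_normal((3, 3)))))   # conv -> ReLU -> pool
w_fc = rng.standard_normal((feat.size, 4))                      # fully connected: 4 classes
logits = feat.ravel() @ w_fc
print(logits.shape)  # (4,)
```

Stacking more conv/pool stages before the fully connected layer is exactly the "sequence of layers" composition the text describes.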
There are many excellent CNN applications in the field of bearing fault identification [16, 17].

### 2.3. Residual Network

The residual refers to the gap between an observed value and a predicted value; literature [18] applied this idea to neural networks and proposed the residual block, whose structure is shown in Figure 1.

Figure 1: Structure of the residual block.

Here, x_l and x_{l+1} represent the input and output of the residual block, h(x_l) represents the observed value, and the residual block can be expressed as y_l = h(x_l) + F(x_l, w_l), x_{l+1} = f(y_l). If h(·) and f(·) are direct mappings, for example, h(x) = x and f(y_l) = ReLU(y_l), the gradient of the L-th layer can pass to any layer shallower than it [19]. Therefore, as the number of network layers increases, the residual network does not degenerate, and its ability to fit nonlinear features becomes stronger. Many studies have used residual networks to extract features from business data. For example, literature [20] used the SELU activation function to optimize a deep residual network, applied the deep residual network model to analyze information dissemination in wireless networks, and obtained a complete prediction of media information dissemination in wireless networks.

Resnet is a residual network structure stacked from residual blocks, which has performed well in image classification applications [21, 22]. Literature [23] examined the advantages of Resnet-18 by comparing Resnet-18/50, VGG-19, and Googlenet, found that Resnet-18 offers short training time and high accuracy, developed a deep learning model based on Resnet-18 to diagnose fan blade surface damage, and obtained a good recognition effect. Literature [24] proposed a dual attention residual network that uses a residual module from Resnet-18 to detect oil spills of various shapes and scales. Resnet-18 is an implementation of the Resnet model, and it consists of five parts: Conv1_x, Conv2_x, Conv3_x, Conv4_x, and Conv5_x.
Each part contains a convolution block and an identity block. Resnet-18 was designed for the classification of thousands of pictures; its network structure is complicated, and it takes a long time to train. Compared with the classification of thousands of pictures, the task of bearing fault identification under dozens of working conditions is small. Therefore, this paper deleted the Conv4_x and Conv5_x structures of Resnet-18 and modified the size of the convolution kernels, which reduced the complexity of Resnet-18 while retaining the advantages of the residual network.

### 2.4. Meta-SGD Algorithm

The Model-Agnostic Meta-Learning (MAML) algorithm [25] proposed by Finn et al. can train any model that is optimized by gradient descent; it is an excellent algorithm in the field of meta-learning. The difference between MAML and ordinary optimization algorithms such as stochastic gradient descent (SGD) [26] is that the optimization process of MAML is divided into two steps. First, n tasks (T_i, i = 1, …, n) are sampled from the support dataset, and each task is used to compute a gradient from the current network parameters θ_t, giving n updated parameter sets θ_1′, …, θ_n′ with θ_i′ = θ_t − α∇_{θ_t} L(θ_t, D_{T_i}^{Support}). Second, the update gradient is computed on the query dataset, and the new network parameters are obtained as θ_{t+1} = θ_t − β ∑_{T_i∼T} ∇_{θ_t} L(θ_i′, D_{T_i}^{Query}).

Because the α and β hyperparameters are fixed in MAML, they cannot be adjusted as the network changes, which makes the training process fluctuate. The Meta-SGD algorithm [27] automatically adjusts the α parameter to increase stability; its execution steps are given in Algorithm 2. Meta-SGD can learn task-agnostic features rather than simply adapting to task-specific features [28]. Literature [29] used the learned learning rates of Meta-SGD to predict streaming time-series data online and achieved good results.

### 2.5.
Multiclass AdaBoost Method

AdaBoost [30] is a multimodel integration method whose final output is the weighted sum of the results of the integrated classifiers. AdaBoost was originally proposed by Freund for the binary classification problem; by introducing a multiclass exponential loss in the forward stagewise model, AdaBoost can also be applied to multiclass problems. The sample weights are constantly updated during AdaBoost training, and samples that have not been identified correctly are given greater weights. The AdaBoost training process is shown in Figure 2.

Figure 2: AdaBoost model structure.

SAMME [31] is an implementation algorithm of multiclass AdaBoost; its specific steps are shown in Algorithm 1.

Algorithm 1: SAMME.
Input: M classifiers, training samples, and test samples.
Output: A strong classifier and the test sample prediction values.
(1) Initialize the observation weights w_i = 1/n, i = 1, 2, …, n;
(2) for m = 1 to M:
(3) Fit a classifier f_m(x) to the training data using weights w_i;
(4) Compute err_m = ∑_{i=1}^{n} w_i I(c_i ≠ f_m(x_i)) / ∑_{i=1}^{n} w_i;
(5) Compute α_m = log((1 − err_m)/err_m) + log(K − 1), where K is the total number of sample classes;
(6) Set w_i ← w_i · e^{α_m · I(c_i ≠ f_m(x_i))}, i = 1, …, n;
(7) Renormalize w_i;
(8) end
(9) Output C(x) = argmax_k ∑_{m=1}^{M} α_m · I(f_m(x) = k).
End

Literature [32] hybridized AdaBoost with a linear support vector machine model and developed a diagnostic system to predict hepatitis disease; the results demonstrate that the strength of a conventional support vector machine model is improved by 6.39%. Literature [33] studied using the AdaBoost framework to integrate other methods and proposed a wind turbine fault feature extraction method based on the AdaBoost framework and SGMD-CS. Experiments show that AdaBoost is an effective multimodel integration framework.
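As a concrete illustration, the weighting and voting steps of SAMME can be sketched in a few lines of Python. The three "weak classifiers" below are fixed toy predictors (hypothetical, not fitted), so only steps (4)–(9) of the algorithm are exercised.

```python
import numpy as np

def samme_combine(classifiers, X, y, K):
    """SAMME: weight pre-fit classifiers by weighted error, then vote.
    classifiers: list of functions mapping X -> predicted integer labels."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # step 1: uniform observation weights
    alphas, preds = [], []
    for f in classifiers:                   # steps 2-8
        p = f(X)
        err = np.sum(w * (p != y)) / np.sum(w)            # step 4: weighted error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(K - 1)   # step 5: classifier weight
        w *= np.exp(alpha * (p != y))                     # step 6: boost hard samples
        w /= w.sum()                                      # step 7: renormalize
        alphas.append(alpha)
        preds.append(p)
    votes = np.zeros((n, K))                # step 9: alpha-weighted vote
    for alpha, p in zip(alphas, preds):
        votes[np.arange(n), p] += alpha
    return votes.argmax(axis=1)

# toy 3-class data and three weak classifiers, each wrong on one sample
X = np.arange(6)
y = np.array([0, 0, 1, 1, 2, 2])
clfs = [lambda X: np.array([0, 0, 1, 1, 2, 0]),
        lambda X: np.array([0, 1, 1, 1, 2, 2]),
        lambda X: np.array([2, 0, 1, 1, 2, 2])]
print(samme_combine(clfs, X, y, K=3))  # [0 0 1 1 2 2]: the vote corrects every individual error
```

Note the log(K − 1) term: it is what lets SAMME accept weak learners that are merely better than random guessing over K classes, rather than better than 1/2 as in binary AdaBoost.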
To realize bearing fault identification, this paper studies the use of the AdaBoost framework to integrate multiple fault identification classifiers designed based on CNN and residual networks.

## 3. Our Method

### 3.1. Data Preprocessing and Building New Feature Set

Use min-max normalization (2) to handle the training samples (N_Train samples) and test samples (N_Test samples). Divide the training samples into a training set and verification set at a ratio of 5 : 1. Select one sample each from the training set, verification set, and test set as an input of Algorithm 2 to calculate the best value of the time delay τ and the phase space dimension M. Assume that each training sample has L data points.
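A minimal numpy sketch of this preprocessing: min-max scaling per equation (2), delay selection by the autocorrelation 1 − 1/e criterion of Section 2.1, and extraction of delay coordinates. Treating "the phase space vector is not reused" as a non-overlapping stride is our assumption, and the signal below is synthetic.

```python
import numpy as np

def min_max(x):
    """Equation (2): scale a sample into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def delay_by_autocorr(x):
    """Smallest lag tau where R(tau) drops below R(0) * (1 - 1/e)."""
    N = len(x)
    R = np.array([np.dot(x[:N - t], x[t:]) / N for t in range(N // 2)])
    below = np.where(R < R[0] * (1 - np.e ** -1))[0]
    return int(below[0]) if below.size else 1

def embed(x, tau, M):
    """Non-overlapping M-dimensional delay coordinates from one signal."""
    span = (M - 1) * tau + 1                      # points spanned by one vector
    starts = range(0, len(x) - span + 1, span)    # non-reused: stride = span
    return np.array([[x[s + i * tau] for i in range(M)] for s in starts])

rng = np.random.default_rng(1)
signal = min_max(np.sin(0.2 * np.arange(1024)) + 0.1 * rng.standard_normal(1024))
tau = delay_by_autocorr(signal - signal.mean())   # zero-mean before autocorrelation
coords = embed(signal, tau, M=4)
print(tau, coords.shape)
```

Each row of `coords` is one phase space coordinate; pairing the rows with the label of the originating signal yields the new training samples described next.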
According to (1), when the sampling frequency of the bearing vibration signal is F and the phase space vectors are not reused, we can obtain (5/6)·N_Train·(L − M + τF)/(τF·M) coordinates for the training set, (1/6)·N_Train·(L − M + τF)/(τF·M) coordinates for the verification set, and N_Test·(L − M + τF)/(τF·M) coordinates for the test set.

$$\bar{x}=\frac{x-\min(x)}{\max(x)-\min(x)}.\tag{2}$$

Algorithm 2: Meta-SGD.
Input: The task distribution p(T) and the learning rate β.
Output: θ.
(1) Initialize θ and α;
(2) while not done do
(3) Sample a batch of tasks T_i ∼ p(T);
(4) for all T_i do
(5) L_{Support(T_i)}(θ_t) ← (1/|Support(T_i)|) ∑_{(x,y)∈Support(T_i)} l(y, f_{θ_t}(x));
(6) θ_i′ = θ_t − α∇_{θ_t} L_{Support(T_i)}(θ_t);
(7) L_{Query(T_i)}(θ_i′) ← (1/|Query(T_i)|) ∑_{(x,y)∈Query(T_i)} l(y, f_{θ_i′}(x));
(8) end
(9) (θ_{t+1}, α_{t+1}) ← (θ_t, α_t) − β∇_{(θ_t, α_t)} ∑_{T_i} L_{Query(T_i)}(θ_i′);
End

Combine the reconstructed phase space coordinates with the labels of the original signals to build new training samples. Since the phase space has the same topological properties as the original bearing vibration signal system, the regularity of the bearing time-series signal in the high-dimensional space is restored in the coordinates. Therefore, any coordinate in phase space represents a state of the original bearing vibration signal system and contains the corresponding features. Compared with the features in the original signals, the features in the phase space coordinates are more obvious and easier for the classifier to identify.

### 3.2. Bearing Fault Classifier and Its Training

Our bearing fault classifier is designed based on CNN and the residual block. The classifier uses 7 network layers, of which each Conv_x module consists of convolution layers and ReLU activation operations. The fully connected layer uses 100 neurons, and the output layer uses 4 neurons. The first layer of the network uses a larger convolution kernel, and the later layers reduce the kernel size. The detailed structure is shown in Figure 3.

Figure 3: The detailed structure of the bearing fault classifier.

Our bearing fault classifier takes the phase space coordinates of the bearing vibration signal as input.
In the classifier, the input matrix first flows through Conv1_x and then through Conv2_x, Conv3_x, the fully connected layer, and the output layer in sequence. In the Conv1_x module, the input matrix undergoes convolution operations according to (3) using 64 different convolution kernels of size 7 × 7. To avoid a shift in the result distribution after the convolution operation, the batch normalization (BN) technique is used to standardize the convolution result so that it follows a normal distribution with mean 0 and variance 1. Then, the BN result is nonlinearly transformed by the ReLU activation function (f(x) = max(0, x)). Unlike Conv1_x, the Conv2_x module first uses max pooling, which takes the maximum value in a fixed-size sliding window, to reduce the density of the data features, and then uses two convolution layers with smaller kernels to convolve the input. After the second BN operation, Conv2_x uses the sum of the max-pooling result (as the observed value of the residual block) and the BN result as the input to the nonlinear activation function. Conv3_x also has two convolution layers; the difference is that, to give the observed value of Conv3_x's residual block the same shape as the second convolution result of Conv3_x, the observed value is processed with a 1 × 1 convolution. In the fully connected layer and the output layer, the input is processed as in equation (4):

$$y^{l+1}(u,\mu)=\sum_{n=1}^{K}\sum_{m=1}^{K}x^{l}(n+u,m+\mu)\times w_i^{l}(n,m)+b^{l},\tag{3}$$

$$y_i^{l+1}=\mathrm{ReLU}\left(\sum_{j=1}^{J}x_j^{l}\times w_{i,j}^{l}+b_i^{l}\right).\tag{4}$$

In (3), x^l represents the input matrix of the l-th layer, K is the size of the convolution kernel, and w_i^l and b^l represent the connection weights and activation parameter of the i-th convolution kernel of the l-th layer (all convolution kernels of the same convolution layer share the same activation value).
In equation (4), i is the serial number of a neuron in the fully connected layer, j is the position number in the input matrix, w_{i,j}^l represents the weight from the j-th input value to the i-th neuron, and b_i^l is the bias of the i-th neuron of the l-th layer.

To prevent overfitting during training, we added the following judgment after step 9 of Algorithm 2 and used the modified Algorithm 2 to update the network parameters of our bearing fault classifier:

If Acc_of_Train ≥ 0.9 and Acc_of_Train − Acc_of_Validation > 0.1:
  Number_of_Overfitting++;
else:
  Number_of_Overfitting = 0;
If Number_of_Overfitting > 5:
  break;

### 3.3. The Integrating Step of Multiple Bearing Fault Classifiers

Combining the multiple weak bearing fault identification classifiers designed in Section 3.2 with the AdaBoost algorithm can generate a stronger bearing fault identification classifier. However, unlike SAMME, our method divides the training data into a support set and a query set, calculates the weight of each sample in the support and query sets, and updates the classification error with the sample weights. The integration steps of our bearing fault classifiers are shown in Algorithm 3.

Algorithm 3: Our bearing fault identification algorithm.
Inputs: Number of classifiers, training and test samples composed of rolling bearing vibration signals, and the sample labels.
Outputs: Bearing fault identification classifier and the prediction values of the test samples.
(1) Set β to 0.001 for Algorithm 2;
(2) For all classifiers do:
(3) Decompose the bearing vibration signals and construct the new training, verification, and test sets according to the data preprocessing step described in Section 3.1;
(4) Divide the training set into support and query sets at a ratio of 1 : 1;
(5) Initialize the sample weights of the support and query sets with w_i^{support} = 1/Num_s, i = 0, …, Num_s − 1, and w_j^{query} = 1/Num_q, j = 0, …, Num_q − 1, where Num_s and Num_q are the numbers of samples in the support set and query set;
(6) Set the target loss function to the weighted cross-entropy l_CE(p, q) = −∑ [p(x)log q(x) + (1 − p(x))log(1 − q(x))] × w(x), where p is the predicted value, q is the true value, and w is the sample weight;
(7) Update the parameters of the first classifier using the support and query sets according to the modified Meta-SGD;
(8) Calculate the identification error rate on the training set according to step (4) of Algorithm 1;
(9) Calculate the weight coefficient of the classifier according to step (5) of Algorithm 1;
(10) Update the sample weights of the training samples and renormalize them according to steps (6) and (7) of Algorithm 1;
(11) Use the network parameters of the previous classifier to initialize the network parameters of the next classifier;
(12) Train the next classifier on the training set updated with the new weights according to the modified Meta-SGD;
(13) end
(14) Calculate the prediction values of the test samples according to step (9) of Algorithm 1; output the prediction values and the integrated classifier.
End

This algorithm uses the Meta-SGD learning strategy to decrease the objective loss and update the network parameters; it not only converges quickly, a characteristic of the meta-learning strategy, but also adjusts the learning rate according to the learning task.
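The sample-weighted cross-entropy of step (6) can be written directly. The probabilities and weights below are illustrative; we follow the standard convention of taking the logarithm of the predicted probability (the formula as printed applies the logarithm to q).

```python
import numpy as np

def weighted_cross_entropy(p, q, w):
    """Step (6) of Algorithm 3: per-sample binary cross-entropy scaled by
    the AdaBoost sample weight w (all values here are illustrative)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)                  # predicted probabilities
    ce = -(q * np.log(p) + (1 - q) * np.log(1 - p))   # per-sample cross-entropy
    return np.sum(ce * w)                             # boosted samples count more

p = np.array([0.9, 0.2, 0.7])   # predicted probabilities
q = np.array([1.0, 0.0, 1.0])   # true labels
w = np.array([0.2, 0.5, 0.3])   # AdaBoost sample weights
print(weighted_cross_entropy(p, q, w))
```

Because the loss is scaled by w, the Meta-SGD updates in step (7) are pulled toward the samples that earlier classifiers misidentified, which is exactly how the boosting signal reaches the network training.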
The initialization of neural network parameters has an important impact on training. In order to make the network get good initialization parameters, the algorithm initializes the parameters of the next classifier in the parameters of the previous classifier during the iterative training process.The algorithm needs to determine 2 hyperparameters in advance. They are the learning rateβ of Algorithm 2 and the number of data points of the training sample. The two parameters have a great influence on the algorithm. First, the learning rate β will affect the learning speed of meta-SGD; second, the number of data points of the training sample determines the shape of the classifier input data, and the execution effect of the coordinate delay method algorithm will be affected by this parameter. If it is too large, it will increase the interference data, and if it is too small, the high-dimensional phase space of the bearing vibration time-series signal cannot be accurately established. In this paper, we set β with 0.001, and the number of the data points is 1024. ## 3.1. Data Preprocessing and Building New Feature Set Use min-max normalization (2) to handle the training samples including N_Train samples and test samples including N_Test samples. Divide the training samples into a training set and verification set according to the ratio of 5 : 1. Select one sample separately from the training set, verification set, and test set as an input of Algorithm 2 to calculate the best value of time delay τ and phase space dimension M. Assume that the training sample has L data points. According to (1), we can get 5/6N_TrainL−M+τF/τFM coordinates for the training set, 1/6N_TrainL−M+τF/τFM coordinates for the verification set, and N_TestL−M+τF/τFM coordinates for the test set, when the sampling frequency of the bearing vibration signal is F and the phase space vector is not reused.(2)x−=x−minxmaxx−minx.Algorithm 2: Mata-SG D. 
Input: The task distribution ΡT, learning rate β.Output: θ(1) Initializeθ and α;(2) while not done do(3) Sample batch of tasksTi∼ΡT(4) for all Tido(5) LSupportTiθt⟵1/SupportTi∑x,y∈SupportTily,fθtx(6) θi′=θt−α∇θtLSupportTiDTiSupport(7) LQueryTiθi′⟵1/QueryTi∑x,y∈QueryTily,fθi′x(8) end(9) θt+1,αt+1⟵θt,αt−β∇θt,αt∑TiLQueryΤiθi′EndCombine the reconstructed phase space coordinates with the labels of the original signals to build new training samples. Since the phase space has the same topological properties as the original bearing vibration signal system, the regularity of the bearing time-series signal in the high-dimensional space is restored in the coordinates. Therefore, any coordinate in phase space represents the state of the original bearing vibration signal system and contains corresponding features. Compared with the features included in the original signals, the features in the phase space coordinates are more obvious and easier to be identified by the classifier. ## 3.2. Bearing Fault Classifier and Its Training Our bearing fault classifier is designed based on CNN and the residual block. The classifier uses 7 network layers, of which the Conv_x network consists of convolutional layers and ReLU activation operations. The full connection layer used 100 neurons, and the output layer used 4 neurons. The first layer of our network uses a larger convolution kernel and then reduces the size of the convolution kernel. The detailed structure is shown in Figure3.Figure 3 The detailed structure of the bearing fault classifier.Our bearing fault classifier takes the phase space coordinates of the bearing vibration signal as input. In the classifier, the input matrix first flows through the Conv1_x and then through Conv2_x, Conv3_x, the fully connected layer, and the output layer in sequence. In the Conv1_x module, the input matrix performed convolution operations according to (3) using 64 different convolution kernels with a size of 7 ∗ 7. 
To avoid the deviation of the result distribution after the convolution operation, the batch normalization (BN) technique is used to standardize the convolution result, so that the convolution result obeys a normal distribution with a mean of 0 and a variance of 1. Then, the result of BN is nonlinearly transformed using the ReLU activation function (fx=max0,x). Different from the Conv1_x module, the Conv2_x module first uses the maximum pooling technique that takes the maximum value in a fixed-size sliding window to reduce the density of data features and then uses two convolution layers with smaller kernels to perform convolution operations on the input. After the second BN operation, Conv2_x uses the sum of the result of max-pooling (as the observed value of residual block) and the result of BN as the input to the nonlinear activation function. Conv3_x also has two convolution layers, the difference is that to make the observed value of the residual block of Conv3_x own the same shape as the second convolution result of Con3_x, the observed value is processed with a convolution of size 1 ∗ 1. In the fully connected layer and the output layer, the input is processed in the same way as equation (4).(3)yu,μl+1=∑n=1K∑m=1Kxln+u,m+μ×wiln,m+bl,(4)yil+1=ReLU∑j=1Jxjl×wi,jl+bil.In (3), xl represents the input matrix of the l-th layer, K is the size of the convolution kernel, and wil and bl represent the connection weight and activation parameter of the i-th convolution kernel of l-th layer (all convolution kernels of the same convolution layer shared the same activation value). 
In equation (4), i represents the serial number of neurons in the full connection layer, j means the position number of the input matrix, wi,jl represents the weight of the i-th neurons to the j-th value of the input, and bil is the bias of the i-th neurons of l-th layer.To prevent overfitting in the training process, we added the following judgment statements after step 9 of Algorithm 2 and used the modified Algorithm 2 to update the network parameters of our bearing fault classifier.If Acc_of_Train ≥ 0.9 and Acc_of_Train − Acc_of_Validation > 0.1:Number_of_ Overfitting ++;else:Number_of_ Overfitting = 0;If Number_of_ Overfitting > 5:break; ## 3.3. The Integrating Step of Multiple Bearing Fault Classifiers Combining multiple weak bearing fault identification classifiers designed in Section3.2 can generate a stronger bearing fault identification classifier using the AdaBoost algorithm; however, unlike the SAMME, our method divides the training data into a support set and a query set, calculates each sample weight of the support and query set, and updates the classification error with the sample weights. The integrating steps of our bearing fault classifiers are shown in Algorithm 3.Algorithm 3: Our bearing fault identification algorithm. 
Inputs: number of classifiers, training and test samples composed of rolling bearing vibration signals, and the sample labels.
Outputs: bearing fault identification classifier and the prediction values of the test samples.

(1) Set β to 0.001 for Algorithm 2;
(2) For all classifiers do:
(3) Decompose the bearing vibration signals and construct the new training, validation, and test sets according to the data preprocessing step described in Section 3.1;
(4) Divide the training set into support and query sets with a ratio of 1 : 1;
(5) Initialize the sample weights of the support and query sets with $w_{i}^{support}=1/Num\_s$, $i=0,\dots,Num\_s-1$, and $w_{j}^{query}=1/Num\_q$, $j=0,\dots,Num\_q-1$, where $Num\_s$ and $Num\_q$ are the numbers of samples in the support set and query set;
(6) Set the target loss function to the weighted cross-entropy $l_{CE}(p,q)=-\sum_{x}\left[p(x)\log q(x)+(1-p(x))\log(1-q(x))\right]\times w(x)$, where $p$ is the predicted value, $q$ is the true value, and $w$ is the sample weight;
(7) Update the parameters of the first classifier using the support and query sets according to the modified Meta-SGD;
(8) Calculate the identification error rate on the training set according to equation (2);
(9) Calculate the weight coefficient of the classifier according to equation (3);
(10) Update the weights of the training samples and normalize them according to equation (4);
(11) Initialize the network parameters of the next classifier with those of the previous classifier;
(12) Train the next classifier on the training set updated with the new weights according to the modified Meta-SGD;
(13) end
(14) Calculate the prediction values of the test samples according to equation (5), and output the prediction values and the integrated classifier.
End

This algorithm uses the Meta-SGD learning strategy to decrease the value of the objective loss function and update the network parameters; it not only has the fast-convergence characteristic of the meta-learning strategy but also adjusts the learning rate to the learning task.
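Steps (8)–(10) of Algorithm 3 follow the multi-class AdaBoost (SAMME) pattern. Since equations (2)–(4) of the paper are not reproduced in this excerpt, the sketch below uses the standard SAMME formulas as an assumption about their content.

```python
import numpy as np

def samme_round(weights, y_true, y_pred, n_classes=4):
    """One SAMME-style boosting round: weighted error rate (step 8),
    classifier coefficient (step 9), and normalized weight update (step 10)."""
    miss = (y_true != y_pred).astype(float)
    err = np.sum(weights * miss) / np.sum(weights)            # weighted error rate
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)   # classifier weight
    new_w = weights * np.exp(alpha * miss)                    # boost misclassified samples
    return new_w / new_w.sum(), alpha
```

The returned weights would then drive the retraining of the next classifier in step (12), and the `alpha` values weight each classifier's vote in the final prediction of step (14).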
The initialization of the neural network parameters has an important impact on training. To give the network good initial parameters, the algorithm initializes the parameters of the next classifier with the parameters of the previous classifier during the iterative training process.

The algorithm needs two hyperparameters to be determined in advance: the learning rate β of Algorithm 2 and the number of data points per training sample. Both have a strong influence on the algorithm. First, the learning rate β affects the learning speed of Meta-SGD; second, the number of data points per training sample determines the shape of the classifier's input data, and the effect of the coordinate delay method is governed by this parameter: if it is too large, it introduces interference data, and if it is too small, the high-dimensional phase space of the bearing vibration time-series signal cannot be accurately established. In this paper, we set β to 0.001 and the number of data points to 1024.

## 4. Experiment

The fault identification accuracy under different working conditions is an important indicator for evaluating a rolling bearing fault identification method. This paper verifies the effectiveness of the proposed method by computing the test accuracy of bearing fault identification on artificial and natural bearing fault datasets collected under different loads, speeds, and fault conditions.

The experiments were conducted on the TensorFlow CPU 2.7 platform, programmed in Python. The hardware and software environment included a Core i7-4790K 4.0 GHz processor, 16 GB of memory, and the Windows Server 2018 operating system. To eliminate the effect of chance, each experiment in this paper was performed 5 times independently, and the average of the 5 test accuracies was used as the experimental result.

### 4.1. Experiments on the CWRU Dataset

The artificial bearing fault dataset CWRU [34] consists of bearing vibration time-series signals in the normal, inner circle fault, outer circle fault, and rolling body fault states. The CWRU dataset is a representative dataset in the field of bearing fault diagnosis, and many scholars have obtained positive results using it in simulation experiments [35, 36].

#### 4.1.1. Experimental Data

The bearing vibration signals used in this case were sampled from 6205-2RS JEM SKF rolling bearings at a sampling frequency of 12 kHz. The fault types of these signals are associated with 4 different bearing damage diameters, recorded as A (0), B (0.007 inches), C (0.014 inches), and D (0.021 inches). The signals were cut into segments of 1024 data points each, and each segment was used as an experimental sample. The experimental samples were selected from the 16 working conditions described in Table 1 and used to carry out three groups of small-sample experiments. Each group of experiments randomly selected 10 samples per class for testing; these test samples have the same labels and working conditions as the training samples.

Table 1 Work conditions of the CWRU experiment.

| No. | Load (hp) | Motor speed (rpm) | Damage degree |
| --- | --- | --- | --- |
| 1–4 | 1 | 1772 | A, B, C, D |
| 5–8 | 2 | 1750 | A, B, C, D |
| 9–12 | 3 | 1730 | A, B, C, D |
| 13–16 | 0 | 1797 | A, B, C, D |

#### 4.1.2. Experiment and Result Analysis

According to Algorithm 3, three groups of experiments were conducted. The first group was a variable-power experiment: samples of normal, inner circle fault, outer circle fault, and rolling element fault were randomly selected from the 8 working conditions numbered 1, 5, 9, 13, 2, 6, 10, and 14 in Table 1 to carry out 4-way and 20-shot experiments. Similar to the first group, the second group was a variable-fault-degree experiment, in which samples were randomly selected under the four operating conditions numbered 1, 2, 3, and 4.
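Each experimental sample above is a 1024-point segment prepared by the coordinate delay (phase space reconstruction) step of Section 3.1. A minimal sketch of that embedding follows; the delay `tau` and embedding dimension `m` are illustrative assumptions, since the paper's actual values are not given in this excerpt.

```python
import numpy as np

def delay_embed(signal, m=4, tau=2):
    """Reconstruct the phase space of a 1-D vibration signal by the coordinate
    delay method: row i is [x_i, x_{i+tau}, ..., x_{i+(m-1)*tau}]."""
    n = len(signal) - (m - 1) * tau
    return np.stack([signal[i:i + n] for i in range(0, m * tau, tau)], axis=1)
```

Each 1024-point sample would thus become an (n, m) matrix of delay vectors, which matches the discussion above: too few points prevent an accurate high-dimensional phase space, while too many introduce interference data.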
The third group was a variable-power-and-fault-degree experiment, with samples randomly selected from the operating conditions numbered 1, 3, 4, 5, 10, and 14. The sample distribution of the three experiments is shown in Table 2. The test accuracies were calculated, and the results are shown in Table 3.

Table 2 The bearing status, label, and quantity of the experimental samples.

No.Condition no.NormalInner faultOuter faultRolling faultLabelQuantityLabelQuantityLabelQuantityLabelQuantity1105505905130521525356152535101525351415253521020217273731727374162636310105010101525351415253531525354152535

Table 3 Results of the three group experiments.

| Experiment no. | 4-way and 20-shot test accuracy (%) | Standard deviation |
| --- | --- | --- |
| 1 | (92.5 + 97.0 + 95.0 + 100 + 97.5)/5 = 96.4 | 2.5 |
| 2 | (97.5 + 92.5 + 100 + 92.5 + 95.0)/5 = 95.5 | 2.9 |
| 3 | (92.5 + 100 + 95.0 + 92.0 + 95.0)/5 = 95.0 | 2.7 |

In the three groups of experiments, with 20 training samples each of normal, inner ring fault, outer ring fault, and rolling element fault, and under a variety of working conditions composed of the three factors of bearing fault degree, speed, and load, the test accuracies of the proposed method are greater than 95% and the standard deviations of the three experiments are within 3. Figure 4 shows the convergence of the loss value and the validation accuracy for the 4-way and 20-shot experiment with variable power and fault degree.

Figure 4 Convergence process of loss value and validation accuracy metrics in one training session. (a) Convergence process of loss value. (b) Convergence process of verification accuracy.

During the training of the first classifier, the value of the target loss function decreased continuously, the test accuracy increased continuously, and the network parameters were continuously optimized. The test accuracy of this classifier exceeded 90%.
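The means and standard deviations reported in Table 3 can be reproduced directly; for example, for experiment 1 (assuming the reported standard deviation is the population value over the 5 runs):

```python
import statistics

accs = [92.5, 97.0, 95.0, 100.0, 97.5]  # five runs of experiment 1
mean = sum(accs) / len(accs)            # reported test accuracy
std = statistics.pstdev(accs)           # population standard deviation
```

Rounding to one decimal place recovers the tabulated 96.4 and 2.5.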
After the first classifier, the remaining four classifiers (the second through fifth) were fine-tuned on the training dataset with different sample weights. Finally, five different, complementary classifiers were obtained; their individual test accuracies were all above 90%, and the final test accuracy of our method reached 100%.

The above N-way and K-shot experiments show that the proposed method can, to a certain extent, filter out the influence of the three factors of speed, load, and fault degree on bearing fault identification, whether it is applied under constant or variable conditions. The proposed method has good accuracy and stability for fault identification of small-sample bearings with different speeds, loads, and fault degrees.

### 4.2. Experiments on XJTU-SY Dataset

To further verify the effectiveness of the proposed method, the natural bearing fault dataset XJTU-SY [37] was used in our experiments.

#### 4.2.1. Experimental Data

The sampling frequency in the XJTU-SY experiment is 25.6 kHz, and the XJTU-SY dataset also includes faults arising in the outer circle, inner circle, and rolling bodies. Bearing vibration signals sampled from eight different conditions were randomly selected to form the training and test sets. Every experimental sample has 1024 data points; the sample types and labels are shown in Table 4.

Table 4 Bearing fault state and its label of XJTU-SY dataset.

| Condition no. | Load (kN) | Rotating speed (rpm) | Normal | Inner fault | Outer fault | Rolling element fault |
| --- | --- | --- | --- | --- | --- | --- |
| 17 | 10 (h3) | 2400 | — | 1 | 2 | — |
| 18 | 11 (h2) | 2250 | 0 | 1 | 2 | 3 |
| 19 | 12 (h1) | 2100 | 0 | — | — | 3 |

#### 4.2.2. Experimental Results and Analysis

The 4-way and 20-shot experiment (experiment 4) was carried out by randomly selecting normal, inner fault, outer fault, and rolling element fault samples from the working condition numbered 18 in Table 4. The test results are shown in Table 5.

Table 5 Test results of experiment 4.
| Test accuracy (%) | Standard deviation |
| --- | --- |
| (95 + 96.25 + 97.5 + 96.5 + 95)/5 = 96.05 | 1.2 |

The results of experiment 4 show that the identification accuracy of the proposed method for natural bearing faults remains high with a small number of training samples: the 4-way and 20-shot test accuracy exceeds 96% when the training and test samples come from the same working condition. Figure 5 shows the confusion matrices of the predicted values of the test samples in the 5 experiments.

Figure 5 Confusion matrix of test samples in 4-way and 20-shot of experiment 4.

From the confusion matrices, the predicted values for label 0 were always consistent with the actual values, giving a prediction accuracy of 100%; the prediction accuracy of the other labels fluctuated slightly, and the errors mainly came from label 1 being wrongly predicted as label 3. In the end, the prediction results of each classifier were excellent: the prediction accuracy of label 1 was above 90% in 4 consecutive runs, that of label 2 exceeded 99%, and that of label 3 exceeded 90% four times. The 100% recognition rate for fault versus nonfault samples means the proposed method can accurately distinguish fault samples from nonfault samples, and the fault localization accuracy of 94.7% shows that the method also identifies natural bearing faults well under known working conditions.

Our method can be viewed as a combination of CDPSR data preprocessing, a residual network, Meta-SGD, and AdaBoost, and it is insensitive to changes in bearing load, rotational speed, and failure degree. To analyze each part of the proposed method, we performed ablation experiments.
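The per-label accuracies discussed above are read off the confusion matrix. A minimal sketch of that computation follows; the label vectors used in the usage check are hypothetical, not the experiment's actual predictions.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=4):
    """cm[t, p] counts samples of true label t predicted as p."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_label_accuracy(cm):
    """Diagonal over row sums: fraction of each true label predicted correctly."""
    return np.diag(cm) / cm.sum(axis=1)
```

For example, a label-1 sample predicted as label 3 (the main error mode noted above) increments `cm[1, 3]` and lowers only label 1's row accuracy.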
First, we analyzed the contribution of the residual network designed in Section 3.2 by comparing the combinations CNN + CDPSR + Meta-SGD + AdaBoost and ResNet + CDPSR + Meta-SGD + AdaBoost, where the CNN consisted of a stack of plain convolutional layers and max-pooling layers. Second, the effect of the Meta-SGD learning strategy was analyzed by comparing ResNet + CDPSR + Meta-SGD + AdaBoost with ResNet + CDPSR + SGD + AdaBoost, where the use of SGD is described in reference [23]. Then, by comparing ResNet + Meta-SGD + AdaBoost with ResNet + CDPSR + Meta-SGD + AdaBoost, we examined the influence of the reconstructed bearing vibration time-series signal on the method. Finally, the influence of AdaBoost was analyzed by comparing ResNet + CDPSR + Meta-SGD with ResNet + CDPSR + Meta-SGD + AdaBoost. The test results of the ablation experiments are shown in Figure 6.

Figure 6 The results of the ablation experiments.

From the results in Figure 6, the bearing fault recognition accuracy of the classifier using the residual network was 6% higher than that of the CNN classifier; Meta-SGD improved bearing fault recognition more than SGD; and the CDPSR method had a positive influence on fault identification. From the variance of the test accuracies of ResNet + Meta-SGD + CDPSR and ResNet + Meta-SGD + CDPSR + AdaBoost, we can see that AdaBoost also acted as a stabilizer for the proposed method. Hence, all four parts, CDPSR, the residual network, Meta-SGD, and AdaBoost, made positive contributions: the residual network and Meta-SGD effectively improve the test accuracy, while CDPSR data preprocessing and AdaBoost significantly improve the stability of the method.

To analyze the advantages of our method, a comparison experiment was carried out under the same conditions as experiment 4.
The test accuracies of the proposed method, the WDCNN method [16], and the CNN-SVM method [17] were compared.

The WDCNN method, proposed by Zhang et al. in 2017, is built around a specific deep network (the WDCNN network) that uses wide kernels in the first convolutional layer and small convolutional kernels in the subsequent layers. The method slices the training samples with overlap to obtain a large amount of data, uses the raw vibration signals as input, and trains the WDCNN network with the backpropagation algorithm detailed in reference [16].

The CNN-SVM method combines the merits of CNN and SVM: it first uses a 2D representation of the raw vibration signals as input, then trains the original CNN with its output layer for several epochs using stochastic gradient descent until the training process converges, and finally replaces the output layer with an SVM using a radial basis function (RBF) kernel. The optimal scheme used in this method is elaborated in reference [17].

The WDCNN and CNN-SVM methods are typical methods in bearing fault recognition. The fault identification results of the comparison experiment are shown in Figure 7.

Figure 7 Results of the comparison experiment.

In the comparison, the residual network designed for our method and the network used in the WDCNN method have the same number of network layers and the same scale of learnable parameters. The average result over the five CNN-SVM experiments was 82.5%, and the average for WDCNN was 90.5%. The test accuracies of both WDCNN and our method were above 90%, and our result was 96%. Thus, the test accuracy of our method was higher than that of CNN-SVM and WDCNN. Moreover, the test results of our method ranged from 95% to 97.5%, a relatively narrow range, while the ranges of the other two methods were much wider.
Therefore, the proposed method has a certain improvement in the accuracy of bearing fault identification, and its stability is better. ## 4.1. Experiments on the CWRU Dataset Artificial bearing fault data set CRWU [34] consists of bearing vibration time serial signals in the state of normal, internal circle fault, outer circle fault, and rolling bodies fault. The CWRU dataset is a representative data set in the field of bearing fault diagnosis. Many scholars got positive results when they used the CWRU to perform simulation experiments [35, 36]. ### 4.1.1. Experimental Data The bearing vibration signals used in this case were sampled from the 6205-2RSJEM SKF rolling bearings at a sampling frequency of 12 kHz. The fault types of these signals are associated with 4 different bearing damage diameters, recorded asA (0), B (0.007 inches), C (0.014 inches), and D (0.021 inches). These signals were cut into several segments. Each contained 1024 data points and was used as an experimental sample. The experiment samples are selected from the 16 kinds of work conditions described in Table 1 and are used to carry out three groups of small sample experiments. Three groups of experiments randomly selected 10 samples per class for testing, and these testing samples have the same labels and same working conditions as training samples.Table 1 Work conditions of the CWRU experiment. No.12345678910111213141516Load (hp)1230Motor speed (rpm)1772175017301797Damage degreeABCDABCDABCDABCD ### 4.1.2. Experiment and Result Analysis According to Algorithm3, three group experiments were conducted. The first group experiment was a variable power experiment. The samples of normal, inner circle fault, outer circle fault, and rolling element fault are randomly selected from 8 working conditions numbered 1, 5, 9, 13, 2, 6, 10, and 14 in Table 1 to carry out 4-way and 20-shot experiments. 
Similar to the first group experiment, the second group experiment was a variable fault degree experiment, in which samples were randomly selected under four operating conditions numbered 1, 2, 3, and 4. The third group of experiments was the variable power and fault degree experiments, and samples of five operating conditions of 1, 3, 4, 5, 10, and 14 were randomly selected. The sample distribution of the three experiments is shown in Table 2. Calculated the test accuracy and the test results are shown in Table 3.Table 2 The bearing status, label, and quantity of the experimental samples. No.Condition no.NormalInner faultOuter faultRolling faultLabelQuantityLabelQuantityLabelQuantityLabelQuantity1105505905130521525356152535101525351415253521020217273731727374162636310105010101525351415253531525354152535Table 3 Results of the three group experiments. Experiment number4-way and 20-shotTest accuracyStandard deviation1(92.5 + 97.0 + 95.0 + 100 + 97.5)/5 = 96.42.52(97.5 + 92.5 + 100 + 92.5 + 95.0)/5 = 95.52.93(92.5 + 100 + 95.0 + 92.0 + 95.0)/5 = 95.02.7In the three groups of experiments, the test accuracies of the proposed method are greater than 95% and the standard deviations of the accuracy of the three experiments are within 3 when the number of training samples of normal, inner ring fault, outer ring fault, and rolling element fault is 20 and in a variety of working conditions composed of three different factors of bearing fault degree, speed, and load. Figure4 is the convergence process of loss value and validation accuracy under the 4-way and 20-shot experiments with variable power and fault degree.Figure 4 Convergence process of loss value and validation accuracy metrics in one training session. (a) Convergence process of loss value. (b) Convergence process of verification accuracy. 
(a)(b)During the training process of the first classifier, the value of the target loss function continued to decrease, the test accuracy rate continued to increase, and the value of the network parameters was continuously optimized. The test accuracy of this classifier exceeded 90%. After the first classifier, the four classifiers were fine-tuned according to the training dataset with different sample weights in the training of the second, third, fourth, and fifth four classifiers. Finally, five different classifiers were obtained, which complemented each other, their test accuracy was all above 90%, and the final test accuracy of our method reached 100%.The aboveN-way and K-shot experiments show that the proposed method can filter the influence of the three factors of speed, load, and fault degree on the bearing fault identification to a certain extent whether it is applied in the constant or variable condition. The proposed method has good accuracy and stability for the fault identification of small sample bearings with different speeds, loads, and fault degrees. ## 4.1.1. Experimental Data The bearing vibration signals used in this case were sampled from the 6205-2RSJEM SKF rolling bearings at a sampling frequency of 12 kHz. The fault types of these signals are associated with 4 different bearing damage diameters, recorded asA (0), B (0.007 inches), C (0.014 inches), and D (0.021 inches). These signals were cut into several segments. Each contained 1024 data points and was used as an experimental sample. The experiment samples are selected from the 16 kinds of work conditions described in Table 1 and are used to carry out three groups of small sample experiments. Three groups of experiments randomly selected 10 samples per class for testing, and these testing samples have the same labels and same working conditions as training samples.Table 1 Work conditions of the CWRU experiment. 
No.12345678910111213141516Load (hp)1230Motor speed (rpm)1772175017301797Damage degreeABCDABCDABCDABCD ## 4.1.2. Experiment and Result Analysis According to Algorithm3, three group experiments were conducted. The first group experiment was a variable power experiment. The samples of normal, inner circle fault, outer circle fault, and rolling element fault are randomly selected from 8 working conditions numbered 1, 5, 9, 13, 2, 6, 10, and 14 in Table 1 to carry out 4-way and 20-shot experiments. Similar to the first group experiment, the second group experiment was a variable fault degree experiment, in which samples were randomly selected under four operating conditions numbered 1, 2, 3, and 4. The third group of experiments was the variable power and fault degree experiments, and samples of five operating conditions of 1, 3, 4, 5, 10, and 14 were randomly selected. The sample distribution of the three experiments is shown in Table 2. Calculated the test accuracy and the test results are shown in Table 3.Table 2 The bearing status, label, and quantity of the experimental samples. No.Condition no.NormalInner faultOuter faultRolling faultLabelQuantityLabelQuantityLabelQuantityLabelQuantity1105505905130521525356152535101525351415253521020217273731727374162636310105010101525351415253531525354152535Table 3 Results of the three group experiments. Experiment number4-way and 20-shotTest accuracyStandard deviation1(92.5 + 97.0 + 95.0 + 100 + 97.5)/5 = 96.42.52(97.5 + 92.5 + 100 + 92.5 + 95.0)/5 = 95.52.93(92.5 + 100 + 95.0 + 92.0 + 95.0)/5 = 95.02.7In the three groups of experiments, the test accuracies of the proposed method are greater than 95% and the standard deviations of the accuracy of the three experiments are within 3 when the number of training samples of normal, inner ring fault, outer ring fault, and rolling element fault is 20 and in a variety of working conditions composed of three different factors of bearing fault degree, speed, and load. 
Figure4 is the convergence process of loss value and validation accuracy under the 4-way and 20-shot experiments with variable power and fault degree.Figure 4 Convergence process of loss value and validation accuracy metrics in one training session. (a) Convergence process of loss value. (b) Convergence process of verification accuracy. (a)(b)During the training process of the first classifier, the value of the target loss function continued to decrease, the test accuracy rate continued to increase, and the value of the network parameters was continuously optimized. The test accuracy of this classifier exceeded 90%. After the first classifier, the four classifiers were fine-tuned according to the training dataset with different sample weights in the training of the second, third, fourth, and fifth four classifiers. Finally, five different classifiers were obtained, which complemented each other, their test accuracy was all above 90%, and the final test accuracy of our method reached 100%.The aboveN-way and K-shot experiments show that the proposed method can filter the influence of the three factors of speed, load, and fault degree on the bearing fault identification to a certain extent whether it is applied in the constant or variable condition. The proposed method has good accuracy and stability for the fault identification of small sample bearings with different speeds, loads, and fault degrees. ## 4.2. Experiments on XJTU-SY Dataset To further verify the effectiveness of the proposed method, the natural bearing fault data set, XJTU-SY [37], was used for our experiments. ### 4.2.1. Experimental Data The sampling frequency is 25.6 kHz in the XJTU-SY experiment, and the XJTU-SY dataset also includes the faults raised in the outer circle, inner circle, and rolling bodies. The bearing vibration signals sampled from eight different conditions were randomly selected to form training sets and test sets. 
Every experiment sample has 1024 data points, and its type and its labels are shown in Table4.Table 4 Bearing fault state and its label of XJTU-SY dataset. Condition no.Load (kN)Rotating speed (rpm)Bearing stateNormalInner faultOuter faultRolling element fault1710 (h3)2400121811 (h2)225001231912 (h1)210003 ### 4.2.2. Experimental Results and Analysis The 4-way and 20-shot experiment (experiment 4) is carried out by randomly selecting normal, inner fault, outer fault, and rolling element fault samples from the working condition numbered 18 in Table4. The test results are shown in Table 5.Table 5 Test results of experiment 4. Test accuracyStandard deviation(95 + 96.25 + 97.5 + 96.5 + 95)/5 = 96.051.2The results of experiment 4 show that the identification accuracy of the proposed method for natural bearing fault is still high under the small number of training samples, and the test accuracy of the 4-way and 20-shot is more than 96% when the training samples and test samples are in the same working condition. Figure5 shows the confusion matrixes of the predicted value of the test sample in the 5 experiments.Figure 5 Confusion matrix of test samples in 4-way and 20-shot of experiment 4.From the confusion matrix, the predicted values of label 0 were always consistent with the actual values, and the prediction accuracy was 100%; the prediction accuracy of other labels fluctuated slightly, and the error mainly came from the wrong prediction of label 1 as a label 3. In the end, the prediction results of each classifier were excellent. The prediction accuracies of label 1 were above 90% for 4 consecutive times, the prediction accuracies of label 2 exceeded 99%, and the prediction accuracies of label 3 exceeded 90% four times. 
The recognition rate of 100% fault samples and nonfault samples means that the proposed method can accurately distinguish between fault samples and nonfault samples, and the fault location accuracy rate of 94.7% shows that the method also has a good ability to identify natural bearing faults under known working conditions.Our method can be viewed as a combination of CDPSR data preprocessing, residual network, Meta-SGD, and AdaBoost, which is insensitive to changes in bearing load, rotational speed, and failure degree. To analyze each part in the proposed method, we performed the ablation experiments. First, we analyzed the contribution of the residual network designed in Section3.2 by comparing two different types of combinations of CNN + CDPSR + Meta-SGD + AdaBoost and ResNet + CDPSR + Meta-SGD + AdaBoost, in which CNN consisted of a stack of simple convolutional layers and max-pooling layers. Second, the effect of the Meta-SGD learning strategy method is analyzed by comparing ResNet + CDPSR + Meta-SGD + AdaBoost and ResNet + CDPSR + SGD + AdaBoost, and the use of SGD was described in reference [23]. Then, by comparing Resnet + Meta-SGD + AdaBoost and ResNet + CDPSR + Meta-SGD + AdaBoost, we discussed the influence of the reconstructed bearing vibration timing signal on the method. Finally, the influence of AdaBoost on this method was analyzed by comparing ResNet + CDPSR + Meta-SGD and ResNet + CDPSR + Meta-SGD + AdaBoost. The test results of the ablation experiment are shown in Figure 6.Figure 6 The results of the ablation experiments.From the results in Figure6, the bearing fault recognition accuracy of the classifier using the residual network was 6% higher than that of the CNN classifier, Meta-SGD had a greater improvement in bearing fault recognition than SGD, and the CDPSR method played a positive influence on the fault identification. 
From the variance of the test accuracy of ResNet + Meta-SGD + CDPSR and ResNet + Meta-SGD + CDPSR + AdaBoost, we can see that AdaBoost also played a positive role as a stabilizer for the proposed method. Hence, the four parts of CDPSR, residual network, Meta-SGD, and AdaBoost had made positive contributions to the proposed method. Residual network and Meta-SGD can effectively improve the test accuracy; CDPSR data preprocessing and Adaboost have significant impacts on the stability of the method.To analyze the advantages of our method, the comparison experiment was carried out under the same conditions as experiment 4. The test accuracy of the proposed method, the WDCNN method [16], and the CNN-SVM method [17] would be compared.The WDCNN method proposed by Zhang et al. in 2017 included a specific deep network (WDCNN network), which takes the wide kernels in the first as convolutional layers and small convolutional kernels in the preceding layers. The method slices the training samples with overlap to obtain huge amounts of data, then uses raw vibration signals as input, and trains the WDCNN network using the backpropagation algorithm detailed in reference [16].The CNN-SVM method combines both the merits of CNN and SVM, which firstly uses the 2D representation of raw vibration signals as input, then trains the original CNN with the output layer for several epochs until the training process converges using stochastic gradient descent, and finally replaces the output layer with the SVM, which included the radial basis function (RBF) kernel. The optimum scheme used in this method was elaborated in reference [17].The WDCNN and the CNN-SVM method are typical methods in the application of fault-bearing recognition. 
The fault identification results of the comparison experiment are shown in Figure7.Figure 7 Results of the comparison experiment.In the comparison, the residual network designed for our method and the network used in the WDCNN method have the same number of network layers and the same scale of learning parameters. The average result of the five CNN-SVM experiments was 82.5%, and the average result of the WDCNN was 90.5%. The test accuracies of both the WDCNN and our method were above 90%, and our result was 96%. Compared with the test accuracy of CNN-SVM and WDCNN, the test accuracy of our method was higher. Moreover, the test results of our method were 95 to 97.5, the value range was relatively narrow, while the value range of the other two methods was relatively much wider. Therefore, the proposed method has a certain improvement in the accuracy of bearing fault identification, and its stability is better. ## 4.2.1. Experimental Data The sampling frequency is 25.6 kHz in the XJTU-SY experiment, and the XJTU-SY dataset also includes the faults raised in the outer circle, inner circle, and rolling bodies. The bearing vibration signals sampled from eight different conditions were randomly selected to form training sets and test sets. Every experiment sample has 1024 data points, and its type and its labels are shown in Table4.Table 4 Bearing fault state and its label of XJTU-SY dataset. Condition no.Load (kN)Rotating speed (rpm)Bearing stateNormalInner faultOuter faultRolling element fault1710 (h3)2400121811 (h2)225001231912 (h1)210003 ## 4.2.2. Experimental Results and Analysis The 4-way and 20-shot experiment (experiment 4) is carried out by randomly selecting normal, inner fault, outer fault, and rolling element fault samples from the working condition numbered 18 in Table4. The test results are shown in Table 5.Table 5 Test results of experiment 4. 
Table 5: Test results of experiment 4.

| Test accuracy (%) | Standard deviation |
|---|---|
| (95 + 96.25 + 97.5 + 96.5 + 95)/5 = 96.05 | 1.2 |

The results of experiment 4 show that the identification accuracy of the proposed method for natural bearing faults remains high even with a small number of training samples: the 4-way, 20-shot test accuracy exceeds 96% when the training and test samples come from the same working condition. Figure 5 shows the confusion matrices of the predicted values of the test samples in the 5 experiments.

Figure 5: Confusion matrices of the test samples in the 4-way, 20-shot setting of experiment 4.

From the confusion matrices, the predicted values for label 0 were always consistent with the actual values, giving a prediction accuracy of 100%; the prediction accuracies of the other labels fluctuated slightly, with the errors mainly coming from samples of label 1 being wrongly predicted as label 3. Overall, the prediction results of each classifier were excellent: the prediction accuracy of label 1 was above 90% in 4 consecutive runs, that of label 2 exceeded 99%, and that of label 3 exceeded 90% in four runs. The 100% recognition rate for fault versus non-fault samples means that the proposed method can accurately distinguish faulty bearings from healthy ones, and the fault localization accuracy of 94.7% shows that the method also identifies natural bearing faults well under known working conditions.

Our method can be viewed as a combination of CDPSR data preprocessing, a residual network, Meta-SGD, and AdaBoost, and it is insensitive to changes in bearing load, rotational speed, and failure degree. To analyze the contribution of each part, we performed ablation experiments.
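As one concrete piece of the combination just described, the sample re-weighting at the heart of AdaBoost can be sketched in a few lines. This is a generic discrete (SAMME-style) boosting round on made-up labels, shown only to illustrate the mechanism; it is not the paper's exact configuration:

```python
import numpy as np

def adaboost_round(weights, y_true, y_pred, n_classes=4):
    # Weighted error of the current weak classifier.
    eps = np.sum(weights * (y_true != y_pred)) / np.sum(weights)
    # Classifier vote weight (SAMME multi-class form).
    alpha = np.log((1.0 - eps) / eps) + np.log(n_classes - 1)
    # Boost the weights of misclassified samples, then renormalize.
    new_w = weights * np.exp(alpha * (y_true != y_pred))
    return alpha, new_w / new_w.sum()

w = np.full(6, 1 / 6)                  # uniform initial sample weights
y = np.array([0, 1, 2, 3, 0, 1])       # true 4-way labels (illustrative)
pred = np.array([0, 1, 2, 3, 0, 3])    # one misclassified sample
alpha, w = adaboost_round(w, y, pred)
```

Here the misclassified sample's weight jumps from 1/6 to 0.75, so the next weak classifier concentrates on it; integrating several such rounds is the role the ablation study credits with stabilizing test accuracy.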
First, we analyzed the contribution of the residual network designed in Section 3.2 by comparing the combinations CNN + CDPSR + Meta-SGD + AdaBoost and ResNet + CDPSR + Meta-SGD + AdaBoost, where the CNN consisted of a stack of plain convolutional layers and max-pooling layers. Second, the effect of the Meta-SGD learning strategy was analyzed by comparing ResNet + CDPSR + Meta-SGD + AdaBoost with ResNet + CDPSR + SGD + AdaBoost; the use of SGD is described in reference [23]. Then, by comparing ResNet + Meta-SGD + AdaBoost with ResNet + CDPSR + Meta-SGD + AdaBoost, we examined the influence of reconstructing the bearing vibration time series. Finally, the influence of AdaBoost was analyzed by comparing ResNet + CDPSR + Meta-SGD with ResNet + CDPSR + Meta-SGD + AdaBoost. The test results of the ablation experiments are shown in Figure 6.

Figure 6: Results of the ablation experiments.

From the results in Figure 6, the bearing fault recognition accuracy of the classifier using the residual network was 6% higher than that of the CNN classifier, Meta-SGD improved bearing fault recognition more than SGD, and the CDPSR method had a positive influence on fault identification. From the variance of the test accuracy of ResNet + Meta-SGD + CDPSR and ResNet + Meta-SGD + CDPSR + AdaBoost, we can see that AdaBoost also acted as a stabilizer for the proposed method. Hence, all four parts (CDPSR, the residual network, Meta-SGD, and AdaBoost) made positive contributions: the residual network and Meta-SGD effectively improve the test accuracy, while CDPSR data preprocessing and AdaBoost significantly improve the stability of the method. To analyze the advantages of our method, a comparison experiment was carried out under the same conditions as experiment 4.
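The CDPSR preprocessing evaluated above rebuilds the one-dimensional vibration series in a higher-dimensional phase space via coordinate delays. A minimal sketch of that embedding follows; the embedding dimension and delay here are illustrative placeholders, not the values used in the paper:

```python
import numpy as np

def delay_embed(signal, dim, tau):
    # Map x(t) to phase-space points [x(t), x(t+tau), ..., x(t+(dim-1)*tau)].
    n = len(signal) - (dim - 1) * tau
    return np.stack([signal[i * tau : i * tau + n] for i in range(dim)], axis=1)

# One 1024-point vibration sample (synthetic here), embedded in 3-D phase space.
x = np.sin(0.05 * np.arange(1024)) + 0.01 * np.cos(0.3 * np.arange(1024))
phase_space = delay_embed(x, dim=3, tau=2)
print(phase_space.shape)  # (1020, 3)
```

Each row is one point of the reconstructed trajectory; it is this phase-space representation, rather than the raw series, from which the residual network extracts fault features.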
The test accuracies of the proposed method, the WDCNN method [16], and the CNN-SVM method [17] were compared.

The WDCNN method, proposed by Zhang et al. in 2017, is built around a specific deep network (the WDCNN network), which uses wide kernels in the first convolutional layer and small convolutional kernels in the subsequent layers. The method slices the training samples with overlap to obtain large amounts of data, uses raw vibration signals as input, and trains the WDCNN network with the backpropagation algorithm detailed in reference [16].

The CNN-SVM method combines the merits of CNN and SVM: it first uses a 2D representation of the raw vibration signals as input, then trains the original CNN (with its output layer) for several epochs using stochastic gradient descent until the training process converges, and finally replaces the output layer with an SVM using a radial basis function (RBF) kernel. The optimization scheme used in this method is elaborated in reference [17]. The WDCNN and CNN-SVM methods are typical methods in bearing fault recognition.

The fault identification results of the comparison experiment are shown in Figure 7.

Figure 7: Results of the comparison experiment.

In the comparison, the residual network designed for our method and the network used in the WDCNN method have the same number of network layers and the same number of learnable parameters. The average accuracy over the five CNN-SVM runs was 82.5%, and the average for WDCNN was 90.5%. The test accuracies of both WDCNN and our method were above 90%, with our method reaching 96%, higher than both CNN-SVM and WDCNN. Moreover, the test accuracies of our method ranged only from 95% to 97.5%, a relatively narrow band, while the ranges of the other two methods were much wider.
Therefore, the proposed method achieves a measurable improvement in bearing fault identification accuracy, with better stability.

## 5. Conclusions

A novel bearing fault identification method for multiple working conditions and small samples was proposed to address the problems of scarce fault data and poor identification performance. To verify its effectiveness, artificial and natural bearing fault signals were used as case studies. The results show that the proposed method achieves accurate fault signal identification under multiple working conditions with small samples, and its bearing fault localization accuracy exceeds 90%. Benefitting from the reconstruction of the high-dimensional phase space of the bearing vibration time series by the coordinate delay construction method, the extraction of phase-space features by the convolutional neural network, the propagation of gradients through residual blocks, the updating of classifier parameters by Meta-SGD, and the integration of multiple classifiers by AdaBoost, the proposed method attains strong bearing fault feature extraction and high fault identification ability. Compared with other advanced methods, it also shows clear advantages.

The proposed method can accurately identify bearing faults under small samples and multiple working conditions without manually designed fault features. It is therefore valuable in application areas with complex working conditions where large numbers of bearing fault samples are difficult to obtain, such as aviation bearings.

---

*Source: 1016954-2022-08-13.xml*
2022
# Influence of Rotation and Magnetic Fields on a Functionally Graded Thermoelastic Solid Subjected to a Mechanical Load

**Authors:** Ankush Gunghas; Rajesh Kumar; Sunita Deswal; Kapil Kumar Kalkal
**Journal:** Journal of Mathematics (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1016981

---

## Abstract

The current manuscript studies two-dimensional deformations in a nonhomogeneous, isotropic, rotating, magneto-thermoelastic medium in the context of Green-Naghdi model III. It is assumed that the functionally graded material has nonhomogeneous mechanical and thermal properties in the x-direction. The exact expressions for the displacement components, temperature field, and stresses are obtained in the physical domain by using the normal mode technique. These are also computed numerically for a copper-like material and presented graphically to observe the variations of the considered physical variables. Comparisons of the physical quantities are shown in figures to depict the effects of angular velocity, nonhomogeneity parameter, and magnetic field.

---

## Body

## 1. Introduction

In the classical dynamical coupled theory of thermoelasticity formulated by Biot [1], thermal signals propagate with infinite speed, which is not physically acceptable. To remove this drawback of the classical coupled dynamical theory of thermoelasticity, several theories of generalized thermoelasticity were developed. The first two generalized thermoelastic theories are the Lord-Shulman (L-S) [2] theory and the Green-Lindsay (G-L) [3] theory. In L-S theory, one thermal relaxation time parameter is introduced in the classical Fourier's law of heat conduction, whereas in the G-L theory, two thermal relaxation times are introduced in the constitutive relations for the force stress tensor and the entropy equation.
Green and Naghdi [4–6] proposed three new thermoelasticity theories that permit treatment of a much wider class of heat flow problems, labeled as G-N models I, II, and III. The nature of the constitutive equations in these three models is such that, when the respective theories are linearized, model I is the same as the classical heat conduction theory, model II predicts a finite speed of heat propagation involving no energy dissipation, and model III indicates the propagation of thermal signals with finite speed.

The theory of magneto-thermoelasticity is concerned with the effect of a magnetic field on elastic and thermoelastic deformations of a solid body and has received the attention of many researchers due to its extensive use in various fields like optics, geophysics, and acoustics. The problem of the distribution of thermal stresses and temperature in a perfectly conducting half-space, in contact with a vacuum, permeated by an initial magnetic field was studied by Ezzat [7]. A model of the two-dimensional equations of generalized magneto-thermoelasticity with one relaxation time in a perfectly conducting medium was established by Ezzat and Othman [8]. The impacts of the fractional order parameter, hydrostatic initial stress, and gravity field on plane waves in a fiber-reinforced isotropic thermoelastic medium were investigated by Othman et al. [9]. Deswal and Kalkal [10] employed the Laplace and Fourier transform technique to study the phenomenon of wave propagation in a fractional order micropolar magneto-thermoelastic half-space. In the frame of the fractional order theory of generalized thermoelasticity, Deswal et al. [11] studied magneto-thermoelastic interactions in an initially stressed, isotropic, homogeneous half-space. The effects of initial stress and magnetic field on thermoelastic interactions in an isotropic, thermally and electrically conducting half-space, whose surface is subjected to mechanical and thermal loads, were explored by Othman and Eraki [12].
Xiong and Guo [13] investigated electro-magneto-thermoelastic diffusive plane waves in a half-space with variable material properties under the fractional order thermoelastic theory.

Since large bodies like the earth, the moon, and other planets have an angular velocity, it appears more realistic to study thermoelastic problems in a rotating medium. The propagation of elastic waves in a rotating, homogeneous, and isotropic medium was investigated by Schoenberg and Censor [14]. Some results in thermoelastic rotating media are due to Roy Choudhuri and Debnath [15] and Roy Choudhuri and Mukhopadhyay [16]. The effect of rotation in a generalized thermoelastic solid under the influence of gravity with an overlying infinite thermoelastic fluid was analyzed by Ailawalia and Narah [17]. Abouelregal and Zenkour [18] used the fractional order theory of thermoelasticity to scrutinize the effect of angular velocity on a fiber-reinforced generalized thermoelastic medium whose surface is subjected to a Mode-I crack problem. Kumar et al. [19] discussed the propagation of plane waves at the free surface of a thermally conducting micropolar elastic half-space with two temperatures. Othman et al. [20] considered the dual-phase-lag model to study the influence of rotation on a two-dimensional problem of a micropolar thermoelastic isotropic medium with two temperatures. Said et al. [21] used the normal mode technique to study thermodynamical interactions in a micropolar magneto-elastic medium with rotation and two temperatures. Abouelregal and Abo-Dahab [22] investigated a two-dimensional problem in the context of the dual-phase-lag model with fiber-reinforcement and rotation using normal mode analysis.
The effect of angular velocity on Rayleigh wave propagation in fiber-reinforced, anisotropic magneto-thermo-viscoelastic media was discussed by Hussien and Bayones [23].

Over the last few decades, structural materials such as functionally graded materials have been rapidly developed and used in many engineering applications. In functionally graded materials (FGMs), the material properties vary gradually with location within the body. FGMs are usually designed to be used under high-temperature environments, so they can eliminate or control thermal stresses when sudden heating or cooling occurs. These types of materials are widely used in important structures such as body materials in the aerospace field and nuclear reactors. The thermoinelastic response of functionally graded composites was studied by Aboudi et al. [24]. Abd-Alla et al. [25] analyzed radial vibrations in a functionally graded orthotropic elastic half-space subjected to rotation and a gravity field. The electro-magneto-thermoelastic response of an infinite functionally graded cylinder was studied by Abbas and Zenkour [26] using the finite element method. The problem of generalized thermoelasticity in a thick-walled functionally graded cylinder with one relaxation time was considered by Abbas [27]; in this problem, the effects of temperature-dependent properties, volume fraction parameter, and thermal relaxation time on thermophysical quantities are estimated.

The aim of the present contribution is to consider two-dimensional disturbances in an infinite, isotropic, nonhomogeneous, rotating, magneto-thermoelastic medium in the context of G-N model III. All the mechanical and thermal properties of the FGM under consideration are assumed to vary as an exponential power of the space coordinate.
The numerical results for the physical quantities have been obtained for a copper-like material and presented graphically to estimate and highlight the effects of the different parameters considered in this problem.

## 2. Basic Equations

Following Green-Naghdi [5] and Roy Choudhuri and Debnath [15], the field equations and stress-strain-temperature relations in a rotating thermoelastic medium in the presence of body forces $F_i$ are as follows.

Constitutive law:
$$\sigma_{ij} = 2\mu e_{ij} + \delta_{ij}\left(\lambda e_{rr} - \beta\theta\right), \tag{1}$$
where
$$e_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right). \tag{2}$$

Stress equation of motion:
$$\sigma_{ji,j} + F_i = \rho\left[\frac{\partial^2 u_i}{\partial t^2} + \left(\vec{\Omega}\times\left(\vec{\Omega}\times\vec{u}\right)\right)_i + 2\left(\vec{\Omega}\times\dot{\vec{u}}\right)_i\right]. \tag{3}$$

Equation of heat conduction:
$$K^{*}\nabla^2\theta + K\nabla^2\dot{\theta} = \rho C_e\ddot{\theta} + \beta T_0\ddot{e}, \tag{4}$$

where $\lambda,\ \mu$ are Lame's elastic constants, $\beta = (3\lambda+2\mu)\alpha_t$, $\alpha_t$ is the coefficient of linear thermal expansion, $\sigma_{ij}$ are the components of stress, $e_{ij}$ are the components of strain, $\rho$ is the reference mass density, $C_e$ is the specific heat at constant strain, $K$ is the thermal conductivity, $K^{*}$ is the material constant characteristic of this theory, $\vec{u}$ is the displacement vector, $\theta = T - T_0$, $T$ is the absolute temperature, $T_0$ is the reference temperature of the medium in its natural state with $|\theta/T_0| \ll 1$, $e = e_{rr}$ is the cubical dilatation, $\vec{\Omega}$ is the rotation vector, and $\delta_{ij}$ is the Kronecker delta.

For a nonhomogeneous medium, the parameters $\lambda,\ \mu,\ \beta,\ K,\ K^{*}$, and $\rho$ are no longer constant but become space-dependent. Hence we replace them by $\lambda_0 f(\vec{x}),\ \mu_0 f(\vec{x}),\ \beta_0 f(\vec{x}),\ K_0 f(\vec{x}),\ K_0^{*} f(\vec{x})$, and $\rho_0 f(\vec{x})$, respectively, where $\lambda_0,\ \mu_0,\ \beta_0,\ K_0,\ K_0^{*}$, and $\rho_0$ are constants and $f(\vec{x})$ is a given nondimensional function of the space variable $\vec{x} = (x,y,z)$.
Using these values of the parameters, (1), (3), and (4) take the following form:
$$\sigma_{ij} = f(\vec{x})\left[2\mu_0 e_{ij} + \delta_{ij}\left(\lambda_0 e_{rr} - \beta_0\theta\right)\right], \tag{5}$$
$$\sigma_{ji,j} + F_i = \rho_0 f(\vec{x})\left[\frac{\partial^2 u_i}{\partial t^2} + \left(\vec{\Omega}\times\left(\vec{\Omega}\times\vec{u}\right)\right)_i + 2\left(\vec{\Omega}\times\dot{\vec{u}}\right)_i\right], \tag{6}$$
$$K_0^{*}\left(f(\vec{x})\,\theta_{,i}\right)_{,i} + K_0\left(f(\vec{x})\,\dot{\theta}_{,i}\right)_{,i} = f(\vec{x})\left(\rho_0 C_e\ddot{\theta} + \beta_0 T_0\ddot{e}\right), \tag{7}$$
where
$$\beta_0 = \left(3\lambda_0 + 2\mu_0\right)\alpha_t. \tag{8}$$
Here, a superposed dot denotes the derivative with respect to time and a comma denotes the derivative with respect to a space variable.

## 3. Mathematical Model

Consider a nonhomogeneous, isotropic, magneto-thermoelastic half-space under the purview of G-N model III. Rectangular Cartesian coordinates are introduced with the surface of the half-space as the plane $x=0$ and the $x$-axis pointing vertically downwards into the medium. The medium is rotating with an angular velocity $\vec{\Omega}$. Thus the displacement equation of motion in the rotating frame has two extra terms: $\vec{\Omega}\times(\vec{\Omega}\times\vec{u})$, the centripetal acceleration due to time-varying motion only, and $2\vec{\Omega}\times\dot{\vec{u}}$, the Coriolis acceleration. The present formulation is restricted to the $xy$-plane, so all field variables are independent of the space variable $z$. The displacement vector $\vec{u}$ and the angular velocity $\vec{\Omega}$ then have the components
$$\vec{u} = (u_1, u_2, 0),\qquad \vec{\Omega} = (0, 0, \Omega). \tag{9}$$

It is also assumed that the material properties are graded only in the $x$-direction, so we take $f(\vec{x}) = f(x)$. By virtue of (9), the stresses arising from (5) can be expressed as
$$\sigma_{xx} = f(x)\left[\left(\lambda_0 + 2\mu_0\right)\frac{\partial u_1}{\partial x} + \lambda_0\frac{\partial u_2}{\partial y} - \beta_0\theta\right], \tag{10}$$
$$\sigma_{yy} = f(x)\left[\left(\lambda_0 + 2\mu_0\right)\frac{\partial u_2}{\partial y} + \lambda_0\frac{\partial u_1}{\partial x} - \beta_0\theta\right], \tag{11}$$
$$\sigma_{xy} = f(x)\,\mu_0\left(\frac{\partial u_1}{\partial y} + \frac{\partial u_2}{\partial x}\right). \tag{12}$$

Due to the application of an initial magnetic field $\vec{H} = (0, 0, H_0)$, an induced magnetic field $\vec{h}$, an induced electric field $\vec{E}$, and a current density $\vec{J}$ are developed in the considered medium.
The simplified linear equations of electrodynamics for a slowly moving, nonhomogeneous, isotropic, thermally conducting elastic solid, neglecting Thomson's effect [28], are
$$\nabla\times\vec{h} = \vec{J},\qquad \nabla\times\vec{E} = -\mu_m\frac{\partial\vec{h}}{\partial t},\qquad \nabla\cdot\vec{h} = 0,\qquad \nabla\cdot\vec{E} = 0, \tag{13}$$
$$\vec{E} = -\mu_m\left(\frac{\partial\vec{u}}{\partial t}\times\vec{H}\right),\qquad \vec{h} = \nabla\times\left(\vec{u}\times\vec{H}\right), \tag{14}$$
where $\mu_m$ is the magnetic permeability of the medium.

From the above expressions, one can obtain
$$\vec{E} = \left(-\mu_m H_0\frac{\partial u_2}{\partial t},\ \mu_m H_0\frac{\partial u_1}{\partial t},\ 0\right),\quad \vec{h} = \left(0,\ 0,\ -H_0\left(\frac{\partial u_1}{\partial x} + \frac{\partial u_2}{\partial y}\right)\right),\quad \vec{J} = \left(\frac{\partial h_3}{\partial y},\ -\frac{\partial h_3}{\partial x},\ 0\right), \tag{15}$$
where
$$h_3 = -H_0\left(\frac{\partial u_1}{\partial x} + \frac{\partial u_2}{\partial y}\right). \tag{16}$$

By virtue of the above expressions and replacing $\mu_m$ by $\mu_{m0}f(x)$, the components of the Lorentz force $\vec{F} = \mu_m\left(\vec{J}\times\vec{H}\right)$ are given by
$$F_1 = f(x)\,\mu_{m0}H_0^2\frac{\partial e}{\partial x},\qquad F_2 = f(x)\,\mu_{m0}H_0^2\frac{\partial e}{\partial y},\qquad F_3 = 0. \tag{17}$$

Substituting the components of the Lorentz force into the stress equation of motion, together with the two-dimensional restriction, the field equations (6) and (7) yield
$$f(x)\left[\left(\lambda_0+2\mu_0\right)\frac{\partial^2 u_1}{\partial x^2} + \left(\lambda_0+\mu_0\right)\frac{\partial^2 u_2}{\partial x\,\partial y} + \mu_0\frac{\partial^2 u_1}{\partial y^2} - \beta_0\frac{\partial\theta}{\partial x}\right] + \frac{\partial f(x)}{\partial x}\left[\left(\lambda_0+2\mu_0\right)\frac{\partial u_1}{\partial x} + \lambda_0\frac{\partial u_2}{\partial y} - \beta_0\theta\right] + f(x)\,\mu_{m0}H_0^2\frac{\partial e}{\partial x} = f(x)\,\rho_0\left[\frac{\partial^2 u_1}{\partial t^2} - \Omega^2 u_1 - 2\Omega\frac{\partial u_2}{\partial t}\right], \tag{18}$$
$$f(x)\left[\left(\lambda_0+2\mu_0\right)\frac{\partial^2 u_2}{\partial y^2} + \left(\lambda_0+\mu_0\right)\frac{\partial^2 u_1}{\partial x\,\partial y} + \mu_0\frac{\partial^2 u_2}{\partial x^2} - \beta_0\frac{\partial\theta}{\partial y}\right] + \mu_0\frac{\partial f(x)}{\partial x}\left(\frac{\partial u_1}{\partial y} + \frac{\partial u_2}{\partial x}\right) + f(x)\,\mu_{m0}H_0^2\frac{\partial e}{\partial y} = f(x)\,\rho_0\left[\frac{\partial^2 u_2}{\partial t^2} - \Omega^2 u_2 + 2\Omega\frac{\partial u_1}{\partial t}\right], \tag{19}$$
$$K_0^{*}\left[f(x)\nabla^2\theta + \frac{\partial f(x)}{\partial x}\frac{\partial\theta}{\partial x}\right] + K_0\left[f(x)\nabla^2\dot{\theta} + \frac{\partial f(x)}{\partial x}\frac{\partial\dot{\theta}}{\partial x}\right] = f(x)\left(\rho_0 C_e\ddot{\theta} + \beta_0 T_0\ddot{e}\right), \tag{20}$$
where
$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}. \tag{21}$$

## 4. Exponential Variation of Nonhomogeneity

By assuming $f(x) = e^{-nx}$, where $n$ is a dimensionless parameter, the mechanical and thermal properties of the material vary exponentially along the $x$-direction.
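This choice is convenient because the gradient of the grading function is proportional to the function itself:

```latex
f(x) = e^{-nx}
\quad\Longrightarrow\quad
\frac{\partial f}{\partial x} = -n\,e^{-nx} = -n\,f(x),
```

so every $\partial f/\partial x$ factor in (18)–(20) contributes a term of the form $-n\,f(x)(\cdots)$, and dividing each equation through by $f(x) \neq 0$ leaves constant-coefficient equations in which the nonhomogeneity enters only through the parameter $n$.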
The governing equations can be recast in dimensionless form by introducing the following dimensionless parameters:
$$\left(x', y'\right) = \frac{w^{*}}{c_1}\left(x, y\right),\quad \left(u_1', u_2'\right) = \frac{\rho_0 w^{*} c_1}{\beta_0 T_0}\left(u_1, u_2\right),\quad t' = w^{*}t,\quad \theta' = \frac{\theta}{T_0},\quad \sigma_{ij}' = \frac{\sigma_{ij}}{\beta_0 T_0},\quad \Omega' = \frac{\Omega}{w^{*}}, \tag{22}$$
where
$$w^{*} = \frac{\rho_0 C_e c_1^2}{K_0},\qquad c_1^2 = \frac{\lambda_0 + 2\mu_0}{\rho_0}. \tag{23}$$

Now, in terms of the dimensionless parameters given in (22) (dropping the primes for convenience), (10)–(12) and (18)–(20) transform to
$$\sigma_{xx} = e^{-nx}\left(\frac{\partial u_1}{\partial x} + A_{11}\frac{\partial u_2}{\partial y} - \theta\right), \tag{24}$$
$$\sigma_{yy} = e^{-nx}\left(\frac{\partial u_2}{\partial y} + A_{11}\frac{\partial u_1}{\partial x} - \theta\right), \tag{25}$$
$$\sigma_{xy} = e^{-nx}A_{12}\left(\frac{\partial u_1}{\partial y} + \frac{\partial u_2}{\partial x}\right), \tag{26}$$
$$\left(1+A_{22}\right)\frac{\partial^2 u_1}{\partial x^2} + \left(A_{21}+A_{22}\right)\frac{\partial^2 u_2}{\partial x\,\partial y} + A_{12}\frac{\partial^2 u_1}{\partial y^2} - \frac{\partial\theta}{\partial x} - n\left(\frac{\partial u_1}{\partial x} + A_{11}\frac{\partial u_2}{\partial y} - \theta\right) = \frac{\partial^2 u_1}{\partial t^2} - \Omega^2 u_1 - 2\Omega\frac{\partial u_2}{\partial t}, \tag{27}$$
$$\left(1+A_{22}\right)\frac{\partial^2 u_2}{\partial y^2} + \left(A_{21}+A_{22}\right)\frac{\partial^2 u_1}{\partial x\,\partial y} + A_{12}\frac{\partial^2 u_2}{\partial x^2} - \frac{\partial\theta}{\partial y} - nA_{12}\left(\frac{\partial u_1}{\partial y} + \frac{\partial u_2}{\partial x}\right) = \frac{\partial^2 u_2}{\partial t^2} - \Omega^2 u_2 + 2\Omega\frac{\partial u_1}{\partial t}, \tag{28}$$
$$\nabla^2\theta - n\frac{\partial\theta}{\partial x} + A_{31}\left(\nabla^2\dot{\theta} - n\frac{\partial\dot{\theta}}{\partial x}\right) = A_{32}\frac{\partial\dot{\theta}}{\partial t} + A_{33}\frac{\partial\dot{e}}{\partial t}, \tag{29}$$
where
$$A_{11} = \frac{\lambda_0}{\rho_0 c_1^2},\quad A_{12} = \frac{\mu_0}{\rho_0 c_1^2},\quad A_{21} = A_{11}+A_{12},\quad A_{22} = \frac{\mu_{m0}H_0^2}{\rho_0 c_1^2},\quad A_{31} = \frac{K_0 w^{*}}{K_0^{*}},\quad A_{32} = \frac{\rho_0 C_e c_1^2}{K_0^{*}},\quad A_{33} = \frac{\beta_0^2 T_0}{\rho_0 K_0^{*}}. \tag{30}$$

## 5. Solution Methodology

In this section, the normal mode method is employed. It has the advantage of yielding exact solutions without any assumed constraints on the actual physical quantities appearing in the governing equations; normal mode analysis is, in effect, a search for the solution in the Fourier transform domain. It is assumed that all the functions are sufficiently smooth on the real line so that their normal mode decomposition exists.
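Under the normal mode ansatz, differentiation in $t$ and $y$ reduces to multiplication by algebraic factors:

```latex
\phi(x,y,t) = \phi^{*}(x)\,e^{\omega t + \iota m y}
\;\Longrightarrow\;
\frac{\partial\phi}{\partial t} = \omega\,\phi,\qquad
\frac{\partial\phi}{\partial y} = \iota m\,\phi,\qquad
\frac{\partial\phi}{\partial x} = \frac{d\phi^{*}}{dx}\,e^{\omega t + \iota m y},
```

so each partial differential equation in $(x, y, t)$ collapses to an ordinary differential equation in $x$ alone, with $D \equiv d/dx$.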
So, the solution for the considered physical variables can be decomposed in terms of normal modes in the following form:
$$\left(u_1, u_2, \theta, \sigma_{ij}\right)(x,y,t) = \left(u_1^{*}, u_2^{*}, \theta^{*}, \sigma_{ij}^{*}\right)(x)\,\exp\left(\omega t + \iota m y\right), \tag{31}$$
where $u_1^{*}$, $u_2^{*}$, $\theta^{*}$, and $\sigma_{ij}^{*}$ are the amplitudes of the functions, $\omega$ is the angular frequency, $\iota$ is the imaginary unit, and $m$ is the wave number in the $y$-direction.

Introducing expression (31) into (27)–(29), we get
$$\left(B_{11}D^2 - nD + B_{12}\right)u_1^{*} + \left(B_{13}D + B_{14}\right)u_2^{*} - \left(D - n\right)\theta^{*} = 0, \tag{32}$$
$$\left(B_{13}D - B_{15}\right)u_1^{*} + \left(A_{12}D^2 - B_{16}D + B_{17}\right)u_2^{*} + B_{18}\theta^{*} = 0, \tag{33}$$
$$B_{19}D\,u_1^{*} + B_{21}u_2^{*} + \left(B_{22}D^2 - B_{23}D - B_{24}\right)\theta^{*} = 0, \tag{34}$$
where $D \equiv d/dx$ and
$$B_{11} = 1+A_{22},\quad B_{12} = \Omega^2 - A_{12}m^2 - \omega^2,\quad B_{13} = \left(A_{21}+A_{22}\right)\iota m,\quad B_{14} = 2\Omega\omega - nA_{11}\iota m,$$
$$B_{15} = 2\Omega\omega + nA_{12}\iota m,\quad B_{16} = nA_{12},\quad B_{17} = \Omega^2 - \omega^2 - m^2\left(1+A_{22}\right),\quad B_{18} = -\iota m,$$
$$B_{19} = -A_{33}\omega^2,\quad B_{21} = -B_{18}B_{19},\quad B_{22} = 1+A_{31}\omega,\quad B_{23} = nB_{22},\quad B_{24} = m^2 + A_{31}m^2\omega + A_{32}\omega^2. \tag{35}$$

The condition for the existence of a nonzero solution of the system (32)–(34) provides
$$\left(D^6 + F_1 D^5 + F_2 D^4 + F_3 D^3 + F_4 D^2 + F_5 D + F_6\right)\left(u_1^{*}, u_2^{*}, \theta^{*}\right) = 0, \tag{36}$$
where
$$F_1 = \frac{E_{11}E_{18} + E_{12}E_{17} - A_{12}E_{24} - E_{14}E_{23}}{E_{11}E_{17} - A_{12}E_{23}},\qquad F_2 = \frac{E_{11}E_{19} + E_{12}E_{18} + E_{13}E_{17} - A_{12}E_{25} - E_{14}E_{24} - E_{15}E_{23}}{E_{11}E_{17} - A_{12}E_{23}},$$
$$F_3 = \frac{E_{11}E_{21} + E_{12}E_{19} + E_{13}E_{18} - A_{12}E_{26} - E_{14}E_{25} - E_{15}E_{24} - E_{16}E_{23}}{E_{11}E_{17} - A_{12}E_{23}},\qquad F_4 = \frac{E_{11}E_{22} + E_{12}E_{21} + E_{13}E_{19} - E_{14}E_{26} - E_{15}E_{25} - E_{16}E_{24}}{E_{11}E_{17} - A_{12}E_{23}},$$
$$F_5 = \frac{E_{12}E_{22} + E_{13}E_{21} - E_{15}E_{26} - E_{16}E_{25}}{E_{11}E_{17} - A_{12}E_{23}},\qquad F_6 = \frac{E_{13}E_{22} - E_{16}E_{26}}{E_{11}E_{17} - A_{12}E_{23}},$$
with
$$E_{11} = B_{11}B_{18} + B_{13},\quad E_{12} = -nB_{18} + B_{14} + nB_{13},\quad E_{13} = B_{12}B_{18} + nB_{14},\quad E_{14} = -B_{15} + nA_{12},$$
$$E_{15} = B_{13}B_{18} + B_{16} + nB_{15},\quad E_{16} = B_{14}B_{18} - nB_{16},\quad E_{17} = A_{12}B_{22},\quad E_{18} = -B_{15}B_{22} + B_{23}A_{12},$$
$$E_{19} = B_{16}B_{22} + B_{15}B_{23} - A_{12}B_{24},\quad E_{21} = B_{15}B_{24} - B_{16}B_{23},\quad E_{22} = -B_{16}B_{24} + B_{18}B_{21},\quad E_{23} = B_{22}B_{13},$$
$$E_{24} = -B_{13}B_{23} + B_{17}B_{22},\quad E_{25} = B_{17}B_{23} - B_{13}B_{24} - B_{19}B_{18},\quad E_{26} = B_{17}B_{24}. \tag{37}$$

The general solution of (36) which is bounded as $x\to\infty$ is given by
$$\left(u_1^{*}, u_2^{*}, \theta^{*}\right)(x) = \sum_{i=1}^{3}\left(1, H_{1i}, H_{2i}\right)M_i(m,\omega)\,e^{-\lambda_i x},\qquad \operatorname{Re}\lambda_i > 0, \tag{38}$$
where $M_i(m,\omega)$ $(i = 1, 2, 3)$ are parameters depending on $m$ and $\omega$, and
$$H_{1i} = \frac{E_{11}\lambda_i^2 - E_{12}\lambda_i + E_{13}}{A_{12}\lambda_i^3 - E_{14}\lambda_i^2 + E_{15}\lambda_i - E_{16}},\qquad H_{2i} = \frac{B_{21}H_{1i} - B_{19}\lambda_i}{B_{24} - B_{23}\lambda_i - B_{22}\lambda_i^2}. \tag{39}$$

In view of solution (38), the stress components (24)–(26) take the form
$$\left(\sigma_{xx}^{*}, \sigma_{xy}^{*}, \sigma_{yy}^{*}\right)(x) = \sum_{i=1}^{3}\left(H_{3i}, H_{4i}, H_{5i}\right)M_i(m,\omega)\,e^{-(\lambda_i + n)x},\qquad \operatorname{Re}\lambda_i > 0, \tag{40}$$
where
$$H_{3i} = -\lambda_i + \iota m A_{11}H_{1i} - H_{2i},\qquad H_{4i} = A_{12}\left(\iota m - \lambda_i H_{1i}\right),\qquad H_{5i} = -A_{11}\lambda_i + \iota m H_{1i} - H_{2i}. \tag{41}$$

## 6. Application: Mechanical Load on the Surface of the Half-Space

A nonhomogeneous, rotating, magneto-thermoelastic medium, occupying the half-space $x \geq 0$, has been considered. The surface $x = 0$ of the half-space is acted upon by a mechanical load as shown in Figure 1, so the boundary conditions are given by the following.

Figure 1: Mechanical load over a functionally graded rotating magneto-thermoelastic medium.

(i) Mechanical boundary conditions:

(a) The normal stress component obeys
$$\sigma_{xx}(0, y, t) = -p(y, t), \tag{42}$$
where $p(y,t)$ is a given function of $y$ and $t$.

(b) The tangential stress component vanishes at the surface $x = 0$, i.e.,
$$\sigma_{xy}(0, y, t) = 0. \tag{43}$$

(ii) Thermal boundary condition: since the plane boundary surface $x = 0$ is taken to be isothermal, the thermal boundary condition is the vanishing of the temperature $\theta$, i.e.,
$$\theta(0, y, t) = 0. \tag{44}$$

Application of the nondimensional parameters and the normal mode technique defined in (22) and (31), respectively, transforms the above boundary conditions to
$$\sigma_{xx}^{*}(x) = -p^{*},\qquad \sigma_{xy}^{*}(x) = 0,\qquad \theta^{*}(x) = 0\quad\text{at } x = 0. \tag{45}$$

Taking into account the nondimensional expressions for temperature and stresses from (38) and (40), the above boundary conditions reduce to a nonhomogeneous system of three equations, which can be written in matrix form as
$$\begin{pmatrix} H_{31} & H_{32} & H_{33} \\ H_{41} & H_{42} & H_{43} \\ H_{21} & H_{22} & H_{23} \end{pmatrix}\begin{pmatrix} M_1 \\ M_2 \\ M_3 \end{pmatrix} = \begin{pmatrix} -p^{*} \\ 0 \\ 0 \end{pmatrix}. \tag{46}$$

Solution of system (46) provides the values of $M_i$ $(i = 1, 2, 3)$ as
$$M_1 = \frac{\Delta_1}{\Delta},\qquad M_2 = \frac{\Delta_2}{\Delta},\qquad M_3 = \frac{\Delta_3}{\Delta}, \tag{47}$$
where
$$\Delta = H_{31}L_1 - H_{32}L_2 + H_{33}L_3,\qquad \Delta_1 = -p^{*}L_1,\qquad \Delta_2 = p^{*}L_2,\qquad \Delta_3 = -p^{*}L_3,$$
$$L_1 = H_{42}H_{23} - H_{22}H_{43},\qquad L_2 = H_{41}H_{23} - H_{21}H_{43},\qquad L_3 = H_{41}H_{22} - H_{21}H_{42}. \tag{48}$$

Substitution of (47) into expressions (38) and (40) provides the following expressions for the field variables:
$$\left(u_1^{*}, u_2^{*}, \theta^{*}\right)(x) = \frac{1}{\Delta}\sum_{i=1}^{3}\left(1, H_{1i}, H_{2i}\right)\Delta_i\,e^{-\lambda_i x},\qquad \operatorname{Re}\lambda_i > 0, \tag{49}$$
$$\left(\sigma_{xx}^{*}, \sigma_{xy}^{*}, \sigma_{yy}^{*}\right)(x) = \frac{1}{\Delta}\sum_{i=1}^{3}\left(H_{3i}, H_{4i}, H_{5i}\right)\Delta_i\,e^{-(\lambda_i + n)x},\qquad \operatorname{Re}\lambda_i > 0. \tag{50}$$

## 7. Notable Cases

### 7.1. Neglecting Rotational Effect

In the absence of rotation (i.e., $\Omega = 0$), we are left with the corresponding problem in a nonhomogeneous, isotropic, magneto-thermoelastic medium in the context of G-N theory III.
In this limiting case, we get the corresponding expressions of the physical quantities from (49) and (50).

### 7.2. Neglecting Nonhomogeneity Effect

By setting n=0 in (24)–(29), one can get the required expressions for the different distributions from (49) and (50). In this limiting case, our results coincide with those of Abo-Dahab et al. [29] with appropriate changes in loading and boundary conditions.

## 8. Numerical Results and Discussion

With the aim of illustrating the theoretical results obtained in the preceding sections, we now present some numerical results. The following relevant physical constants are taken from Abo-Dahab et al. [29] for a copper-like material:
$$\lambda_0 = 7.76\times10^{10}\,\text{N m}^{-2},\quad \mu_0 = 3.86\times10^{10}\,\text{N m}^{-2},\quad K_0 = 0.6\times10^{-2}\,\text{W m}^{-1}\text{K}^{-1},$$
$$C_e = 383.1\,\text{J kg}^{-1}\text{K}^{-1},\quad \rho_0 = 8954\,\text{kg m}^{-3},\quad T_0 = 293\,\text{K},\quad \alpha_t = 1.78\times10^{-5}\,\text{K}^{-1}. \tag{51}$$

Since $\omega$ is a complex quantity, we can write $\omega = \omega_0 + \iota\omega_1$, so that $e^{\omega t} = e^{\omega_0 t}\left[\cos(\omega_1 t) + \iota\sin(\omega_1 t)\right]$. For small values of time we can therefore take $\omega$ as real (i.e., $\omega = \omega_0$). The other parameters for the numerical computation are taken as $\omega = 2$, $m = 2$, $t = 0.1$, $p^{*} = 8$, and $y = 0.2$.

Figures 2–5 analyze the effect of rotation on the distribution of the field variables for three different values of angular velocity, $\Omega = 0.0$ (solid line), $\Omega = 0.5$ (dashed line), and $\Omega = 0.9$ (dot-dashed line), with $n = 1.0$ and $H_0 = 10^5$. Figure 2 explains the spatial variation of the normal displacement component for different values of $\Omega$.
The figure shows that the distribution of normal displacement follows a similar trend for all values of Ω, differing only in magnitude. Figure 3 depicts the variation of normal stress with location x for three different values of Ω: an increase in Ω results in an increase in the numerical values of normal stress, so angular velocity has an increasing effect on the profile of normal stress. Variations in the tangential stress distribution with the spatial coordinate x are displayed in Figure 4. These variations have a common starting point of zero magnitude, in good agreement with the boundary conditions. The figure shows that the tangential stress increases at first, starts decreasing near the point x=0.23, and thereafter converges to zero as x increases. Moreover, with increasing Ω, the magnitude of the tangential stress distribution increases. Figure 5 shows the effect of angular velocity on the temperature distribution. As expected, the temperature distribution has a coincident starting point of zero magnitude for all values of Ω, which agrees completely with the boundary conditions. The figure also shows that increasing values of Ω have a decreasing effect on the magnitude of the temperature variations.

Figure 2: Effect of angular velocity on the normal displacement distribution.
Figure 3: Effect of angular velocity on the normal stress distribution.
Figure 4: Effect of angular velocity on the tangential stress distribution.
Figure 5: Effect of angular velocity on the temperature distribution.

Figures 6–9 illustrate the effect of the nonhomogeneity parameter on the distribution of the field variables for three different values of the parameter, n=0.0 (solid line), n=0.5 (dashed line), and n=1.0 (dot-dashed line), with Ω=0.5 and H₀=10⁵.
In Figure 6, we show the spatial variation of normal displacement for different values of n. All the curves have distinct starting points, and the displacement distribution is strongly affected by the presence of the nonhomogeneity parameter. Figure 7 illustrates the variation of normal stress with distance x for different values of n. For all three values of the nonhomogeneity parameter, σxx starts at a value of 8.9 (in magnitude). The normal stress distribution exhibits significant sensitivity to the nonhomogeneity parameter and is compressive for the homogeneous medium. Figure 8 shows the variations of the tangential stress σxy with distance x. The tangential stress field has a coincident starting point of zero magnitude in all three cases, which signifies that the boundary conditions are satisfied. For the homogeneous medium, the behaviour of σxy is the opposite of that in the nonhomogeneous medium, and the difference in magnitudes becomes indistinct with the passage of time. Figure 9 shows that the temperature starts at zero, in complete agreement with the boundary conditions; it increases at first, starts decreasing in the neighbourhood of x=0.3, and converges to zero as x increases. The figure also shows that the presence of nonhomogeneity has a decreasing effect on the magnitude of the temperature distribution.

Figure 6: Effect of the nonhomogeneity parameter on the normal displacement distribution.
Figure 7: Effect of the nonhomogeneity parameter on the normal stress distribution.
Figure 8: Effect of the nonhomogeneity parameter on the tangential stress distribution.
Figure 9: Effect of the nonhomogeneity parameter on the temperature distribution.

The influence of the magnetic field on the various field variables is examined in Figures 10–13.
Figure 10 shows the transient effect of the applied mechanical load on the normal displacement distribution, in the medium with a magnetic field (solid line) and without a magnetic field (dashed line). The solution curves for both cases follow a similar pattern of variation, differing in magnitude, so the magnetic field has a noticeable impact on the displacement distribution. Figure 11 describes the variation of normal stress with location x. The stress σxx has a coincident initial point for both cases, and its magnitude diminishes as the distance from the boundary increases. Figure 12 represents the spatial variation of tangential stress: the values for both cases increase at first, start decreasing in the neighbourhood of x=0.2, and approach zero with increasing x. Figure 13 displays the variation of the temperature field with distance x. All the curves have a coincident starting point of zero, satisfying the boundary conditions, and the amplitude of the temperature solution curves is suppressed in the absence of the magnetic field.

Figure 10: Effect of the magnetic field on the normal displacement.
Figure 11: Effect of the magnetic field on the normal stress distribution.
Figure 12: Effect of the magnetic field on the tangential stress distribution.
Figure 13: Effect of the magnetic field on the temperature distribution.

The 3D plots of the normal displacement, normal stress, tangential stress, and temperature distributions are shown in Figures 14–17 for a wide range of x (0 ≤ x ≤ 4) and of dimensionless time t (0 ≤ t ≤ 0.4). Figure 14 describes the variation of normal displacement with distance x and with time t; the dimensionless time t plays an important role in the distribution of normal displacement.
Figure 15 represents the variation in the values of normal stress for a wide range of x. Numerical values of normal stress decrease as the distance x increases, while they increase with time t. Figure 16 has been plotted to show the profile of the tangential stress distribution. The tangential stress starts with a zero value, which is completely in agreement with the prescribed boundary conditions. Figure 17 represents the distribution of temperature with distance x and with time t. The temperature distribution behaves like an increasing function in the range 0.0≤x≤0.2, and for the rest of the domain it decreases and reaches a steady state about the point x=2.7.Figure 14 Profile of normal displacement at n=1.0, H0=10^5, and Ω=0.5.Figure 15 Profile of normal stress distribution at n=1.0, H0=10^5, and Ω=0.5.Figure 16 Profile of tangential stress distribution at n=1.0, H0=10^5, and Ω=0.5.Figure 17 Profile of temperature distribution at n=1.0, H0=10^5, and Ω=0.5. ## 9. Concluding Remarks The investigation under consideration provides a mathematical model to obtain the behaviour of normal displacement, stresses, and temperature in a nonhomogeneous, isotropic, rotating, magneto-thermoelastic medium within the framework of G-N model III, by using the normal mode technique. Theoretical and numerical results reveal that the parameters, namely, rotation, nonhomogeneity parameter, and magnetic field, have significant effects on the considered physical variables.
Analysis of the graphs permits the following concluding remarks:(i) As expected, the values of all the physical quantities converge to zero as the distance x increases, and from the distribution of all physical quantities, it can be found that wave-type heat propagates through the medium.(ii) It is apparent from the figures that the rotational speed has an increasing effect on the profiles of normal stress and tangential stress but a decreasing effect on the profile of temperature.(iii) The presence of nonhomogeneity has a decreasing effect on the magnitude of the temperature distribution, while it has a mixed effect on the remaining field variables. Therefore, while designing FGMs, the effect of nonhomogeneity should be taken into consideration.(iv) The presence of the magnetic field plays a significant role in the distribution of all physical variables.(v) The method adopted here is applicable to a wide range of problems in thermodynamics and thermoelasticity. It can be employed for boundary-layer problems which are described by linearized Navier-Stokes equations in electro-hydrodynamics.The above study is of geophysical interest and finds applications in mechanical engineering, industrial sectors, and seismology. The results presented in this paper will prove useful for scientists in material science and designers of new materials as well as for those working on the development of magneto-thermoelasticity. Electro-magneto composite materials have applications in sensors, actuators, ultrasonic imaging devices, and many other emerging components. Magneto-thermoelasticity has drawn the attention of many engineers because of its wide use in diverse areas, especially geophysics for determining the effect of the earth's magnetic field on seismic waves, the development of highly sensitive superconducting magnetometers, electrical power engineering, etc.
FGMs are broadly used in the biomedical application as medical implants, because they are designed to mimic the human organs, which are FGMs in nature. These types of materials are also used in pressure vessels and pipes. --- *Source: 1016981-2019-06-10.xml*
1016981-2019-06-10_1016981-2019-06-10.md
29,533
Influence of Rotation and Magnetic Fields on a Functionally Graded Thermoelastic Solid Subjected to a Mechanical Load
Ankush Gunghas; Rajesh Kumar; Sunita Deswal; Kapil Kumar Kalkal
Journal of Mathematics (2019)
Mathematical Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2019/1016981
1016981-2019-06-10.xml
--- ## Abstract The current manuscript is presented to study two-dimensional deformations in a nonhomogeneous, isotropic, rotating, magneto-thermoelastic medium in the context of Green-Naghdi model III. It is assumed that the functionally graded material has nonhomogeneous mechanical and thermal properties in the x-direction. The exact expressions for the displacement components, temperature field, and stresses are obtained in the physical domain by using the normal mode technique. These are also computed numerically for a copper-like material and presented graphically to observe the variations of the considered physical variables. Comparisons of the physical quantities are shown in figures to depict the effects of angular velocity, nonhomogeneity parameter, and magnetic field. --- ## Body ## 1. Introduction In the classical dynamical coupled theory of thermoelasticity formulated by Biot [1], thermal signals propagate with infinite speed, which is not physically acceptable. To remove this drawback of the classical coupled dynamical theory of thermoelasticity, several theories of generalized thermoelasticity were developed. The first two generalized thermoelastic theories are the Lord-Shulman (L-S) [2] theory and the Green-Lindsay (G-L) [3] theory. In L-S theory, one thermal relaxation time parameter is introduced in the classical Fourier's law of heat conduction, whereas in the G-L theory, two thermal relaxation times are introduced in the constitutive relations for the force stress tensor and the entropy equation. Green and Naghdi [4–6] proposed three new thermoelasticity theories that permit treatment of a much wider class of heat flow problems, labeled as G-N models I, II, and III.
The nature of the constitutive equations in these three models is such that, when the respective theories are linearized, model I is the same as the classical heat conduction theory, model II predicts the finite speed of heat propagation involving no energy dissipation, and model III indicates the propagation of thermal signals with finite speed.The theory of magneto-thermoelasticity is concerned with the effect of a magnetic field on elastic and thermoelastic deformations of a solid body and has received the attention of many researchers due to its extensive use in various fields like optics, geophysics, and acoustics. The problem of distribution of thermal stresses and temperature in a perfectly conducting half-space, in contact with a vacuum, permeated by an initial magnetic field was studied by Ezzat [7]. A model of two-dimensional equations of generalized magneto-thermoelasticity with one relaxation time in a perfectly conducting medium was established by Ezzat and Othman [8]. The impacts of fractional order parameter, hydrostatic initial stress, and gravity field on the plane waves in a fiber-reinforced isotropic thermoelastic medium were investigated by Othman et al. [9]. Deswal and Kalkal [10] employed the Laplace and Fourier transform technique to study the phenomenon of wave propagation in a fractional order micropolar magneto-thermoelastic half-space. In the frame of the fractional order theory of generalized thermoelasticity, Deswal et al. [11] studied magneto-thermoelastic interactions in an initially stressed, isotropic, homogeneous half-space. The effects of initial stress and magnetic field on thermoelastic interactions in an isotropic, thermally and electrically conducting half-space, whose surface is subjected to mechanical and thermal loads, were explored by Othman and Eraki [12].
Xiong and Guo [13] investigated the electro-magneto-thermoelastic diffusive plane waves in a half-space with variable material properties under fractional order thermoelastic theory.Since large bodies like the earth, the moon, and other planets have an angular velocity, it appears more realistic to study thermoelastic problems in a rotating medium. The propagation of elastic waves in a rotating, homogeneous, and isotropic medium was investigated by Schoenberg and Censor [14]. Some results in thermoelastic rotating media are due to Roy Choudhuri and Debnath [15] and Roy Choudhuri and Mukhopadhyay [16]. The effect of rotation in a generalized thermoelastic solid under the influence of gravity with an overlying infinite thermoelastic fluid was analyzed by Ailawalia and Narah [17]. Abouelregal and Zenkour [18] used the fractional order theory of thermoelasticity to scrutinize the effect of angular velocity on a fiber-reinforced generalized thermoelastic medium whose surface is subjected to a Mode-I crack problem. Kumar et al. [19] discussed the propagation of plane waves at the free surface of a thermally conducting micropolar elastic half-space with two temperatures. Othman et al. [20] considered the dual-phase lag model to study the influence of rotation on a two-dimensional problem of a micropolar thermoelastic isotropic medium with two temperatures. Said et al. [21] used the normal mode technique to study thermodynamical interactions in a micropolar magneto-elastic medium with rotation and two-temperature. Abouelregal and Abo-Dahab [22] investigated a two-dimensional problem in the context of the dual-phase-lag model with fiber-reinforcement and rotation using normal mode analysis.
The effect of angular velocity on Rayleigh wave propagation in fiber-reinforced, anisotropic magneto-thermo-viscoelastic media was discussed by Hussien and Bayones [23].Over the last few decades, structural materials such as functionally graded materials have been rapidly developed and used in many engineering applications. In functionally graded materials (FGMs), material properties vary gradually with location within the body. FGMs are usually designed to be used under high-temperature environments, so they can easily eliminate or control thermal stresses when sudden heating or cooling happens. These types of materials are broadly used in important structures such as body materials in the aerospace field and nuclear reactors. The thermoinelastic response of functionally graded composites was studied by Aboudi et al. [24]. Abd-Alla et al. [25] analyzed radial vibrations in a functionally graded orthotropic elastic half-space subjected to rotation and a gravity field. The electro-magneto-thermoelastic response of an infinite functionally graded cylinder was studied by Abbas and Zenkour [26], by using the finite element method. The problem of generalized thermoelasticity in a thick-walled functionally graded cylinder with one relaxation time was considered by Abbas [27]. In this problem, the effects of temperature-dependent properties, volume fraction parameter, and thermal relaxation time on thermophysical quantities are estimated.The aim of the present contribution is to consider two-dimensional disturbances in an infinite, isotropic, nonhomogeneous, rotating, magneto-thermoelastic medium in the context of G-N model III. All the mechanical and thermal properties of the FGM under consideration are supposed to vary as an exponential power of the space coordinate.
The numerical results for the physical quantities have been obtained for a copper-like material and presented graphically to estimate and highlight the effects of the different parameters considered in this problem. ## 2. Basic Equations Following Green-Naghdi [5] and Roy Choudhuri and Debnath [15], the field equations and stress-strain-temperature relations in a rotating thermoelastic medium in the presence of body forces Fi are as follows.Constitutive Law (1)σij=2μeij+δij(λerr-βθ),where(2)eij=(1/2)(ui,j+uj,i).Stress Equation of Motion (3)σji,j+Fi=ρ(∂²ui/∂t²+(Ω→×(Ω→×u→))i+2(Ω→×u→˙)i).Equation of Heat Conduction (4)K∗∇²θ+K∇²θ˙=ρCeθ¨+βT0e¨,where λ, μ are Lamé's elastic constants, β=(3λ+2μ)αt, αt is the coefficient of linear thermal expansion, σij are the components of stress, eij are the components of strain, ρ is the reference mass density, Ce is the specific heat at constant strain, K is the thermal conductivity, K∗ is the material constant characteristic of this theory, u→ is the displacement vector, θ=T-T0, T is the absolute temperature, T0 is the reference temperature of the medium in its natural state, assumed to satisfy θ/T0≪1, e=err is the cubical dilatation, Ω→ is the rotation vector, and δij is the Kronecker delta.For a nonhomogeneous medium, the parameters λ, μ, β, K, K∗, and ρ are no longer constant but become space-dependent. Hence we replace λ, μ, β, K, K∗, and ρ by λ0f(x→), μ0f(x→), β0f(x→), K0f(x→), K0∗f(x→), and ρ0f(x→), respectively, where λ0, μ0, β0, K0, K0∗, and ρ0 are constants and f(x→) is a given nondimensional function of the space variable x→=(x,y,z).
Using these values of the parameters, (1), (3), and (4) take the following form:(5)σij=f(x→)(2μ0eij+δij(λ0err-β0θ)),(6)σji,j+Fi=ρ0f(x→)(∂²ui/∂t²+(Ω→×(Ω→×u→))i+2(Ω→×u→˙)i),(7)K0∗(f(x→)θ,i),i+K0(f(x→)θ˙,i),i=f(x→)(ρ0Ceθ¨+β0T0e¨),where(8)β0=(3λ0+2μ0)αt.Here, the superposed dot denotes the derivative with respect to time and the comma denotes the derivative with respect to the space variable. ## 3. Mathematical Model Consider a nonhomogeneous, isotropic, magneto-thermoelastic half-space under the purview of G-N model III. Rectangular Cartesian coordinates are introduced having the surface of the half-space as the plane x=0, with the x-axis pointing vertically downwards into the medium. The medium is rotating with an angular velocity Ω→. Thus the displacement equation of motion in the rotating plane has two extra terms: Ω→×(Ω→×u→), which is the centripetal acceleration due to time-varying motion only, and 2Ω→×u→˙, which is the Coriolis acceleration. The present formulation is restricted to the xy-plane and thus all the field variables are independent of the space variable z. So the displacement vector u→ and angular velocity Ω→ have the components:(9)u→=(u1,u2,0),Ω→=(0,0,Ω).It is also assumed that the material properties are graded only in the x-direction, so we take f(x→) as f(x). By virtue of (9), the stresses arising from (5) can be expressed as(10)σxx=f(x)((λ0+2μ0)∂u1/∂x+λ0∂u2/∂y-β0θ),(11)σyy=f(x)((λ0+2μ0)∂u2/∂y+λ0∂u1/∂x-β0θ),(12)σxy=f(x)μ0(∂u1/∂y+∂u2/∂x).Due to the application of an initial magnetic field H→(0,0,H0), an induced magnetic field h→, an induced electric field E→, and a current density J→ are developed in the considered medium.
The simplified linear equations of electrodynamics of a slowly moving medium for a nonhomogeneous, isotropic, and thermally conducting elastic solid are given by (neglecting Thomson's effect [28]):(13)∇×h→=J→,∇×E→=-μm∂h→/∂t,∇·h→=0,∇·E→=0,(14)E→=-μm(∂u→/∂t×H→),h→=∇×(u→×H→),where μm is the magnetic permeability of the medium.From the above expressions, one can obtain(15)E→=(-μmH0∂u2/∂t,μmH0∂u1/∂t,0),h→=(0,0,-H0(∂u1/∂x+∂u2/∂y)),J→=(∂h3/∂y,-∂h3/∂x,0),where(16)h3=-H0(∂u1/∂x+∂u2/∂y).By virtue of the above expressions and replacing μm by μm0f(x), the components of the Lorentz force F→=μm(J→×H→) are given by(17)F1=f(x)μm0H0²∂e/∂x,F2=f(x)μm0H0²∂e/∂y,F3=0.Substituting the components of the Lorentz force into the stress equation of motion, together with the two-dimensional restriction, the field equations (6) and (7) yield(18)f(x)((λ0+2μ0)∂²u1/∂x²+(λ0+μ0)∂²u2/∂x∂y+μ0∂²u1/∂y²-β0∂θ/∂x)+(∂f(x)/∂x)((λ0+2μ0)∂u1/∂x+λ0∂u2/∂y-β0θ)+f(x)μm0H0²∂e/∂x=f(x)ρ0(∂²u1/∂t²-Ω²u1-2Ω∂u2/∂t),(19)f(x)((λ0+2μ0)∂²u2/∂y²+(λ0+μ0)∂²u1/∂x∂y+μ0∂²u2/∂x²-β0∂θ/∂y)+μ0(∂f(x)/∂x)(∂u1/∂y+∂u2/∂x)+f(x)μm0H0²∂e/∂y=f(x)ρ0(∂²u2/∂t²-Ω²u2+2Ω∂u1/∂t),(20)K0∗(f(x)∇²θ+(∂f(x)/∂x)∂θ/∂x)+K0(f(x)∇²θ˙+(∂f(x)/∂x)∂θ˙/∂x)=f(x)(ρ0Ceθ¨+β0T0e¨),where(21)∇²=∂²/∂x²+∂²/∂y². ## 4. Exponential Variation of Nonhomogeneity By assuming f(x)=e^(-nx), where n is a dimensionless parameter, one can conclude that the mechanical and thermal properties of the material vary exponentially along the x-direction.
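The exponential grading f(x)=e^(-nx) scales every homogeneous material constant by the same depth-dependent factor. A minimal sketch of this grading rule (the copper-like Lamé constants quoted later in (51) are used here purely as sample inputs):

```python
import math

def graded(surface_value, n, x):
    """Material property at dimensionless depth x under the grading f(x) = e^(-n*x)."""
    return surface_value * math.exp(-n * x)

# Copper-like Lame constants from (51), used only as sample inputs (N/m^2)
lam0, mu0 = 7.76e10, 3.86e10
n = 1.0  # nonhomogeneity parameter; n = 0 recovers the homogeneous medium

print(graded(lam0, n, 0.0))        # at the boundary x = 0 the medium is homogeneous
print(graded(mu0, n, 2.0) / mu0)   # every property decays by the same ratio e^(-n*x)
```

Setting n=0 makes graded() return the surface value at every depth, which is the homogeneous limit treated in Section 7.2.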
The governing equations can be recast in dimensionless form by introducing the following dimensionless parameters:(22)(x′,y′)=(w∗/c1)(x,y),(u1′,u2′)=(ρ0w∗c1/(β0T0))(u1,u2),t′=w∗t,θ′=θ/T0,σij′=σij/(β0T0),Ω′=Ω/w∗,where(23)w∗=ρ0Cec1²/K0,c1²=(λ0+2μ0)/ρ0.Now, in terms of the dimensionless parameters given in (22), (10)-(12) and (18)-(20) transform to(24)σxx=e^(-nx)(∂u1/∂x+A11∂u2/∂y-θ),(25)σyy=e^(-nx)(∂u2/∂y+A11∂u1/∂x-θ),(26)σxy=e^(-nx)A12(∂u1/∂y+∂u2/∂x),(27)(1+A22)∂²u1/∂x²+(A21+A22)∂²u2/∂x∂y+A12∂²u1/∂y²-∂θ/∂x-n(∂u1/∂x+A11∂u2/∂y-θ)=∂²u1/∂t²-Ω²u1-2Ω∂u2/∂t,(28)(1+A22)∂²u2/∂y²+(A21+A22)∂²u1/∂x∂y+A12∂²u2/∂x²-∂θ/∂y-nA12(∂u1/∂y+∂u2/∂x)=∂²u2/∂t²-Ω²u2+2Ω∂u1/∂t,(29)∇²θ-n∂θ/∂x+A31(∇²θ˙-n∂θ˙/∂x)=A32∂θ˙/∂t+A33∂e˙/∂t,where(30)A11=λ0/(ρ0c1²),A12=μ0/(ρ0c1²),A21=A11+A12,A22=μm0H0²/(ρ0c1²),A31=K0w∗/K0∗,A32=ρ0Cec1²/K0∗,A33=β0²T0/(ρ0K0∗). ## 5. Solution Methodology In this section, the normal mode method is employed, which has the advantage of yielding exact solutions without any assumed constraints on the field variables: the physical variables are decomposed in terms of normal modes, which amounts to seeking the solution in the Fourier transform domain. It is assumed that all the functions are sufficiently smooth on the real line so that the normal mode analysis of these functions exists.
So, the solution for the considered physical variables can be decomposed in terms of normal modes in the following form:(31)u1,u2,θ,σijx,y,t=u1∗,u2∗,θ∗,σij∗xexp⁡ωt+ιmy,where u1∗, u2∗, θ∗, and σij∗ are the amplitudes of the functions, ω is the angular frequency, ι is the imaginary unit, and m is the wave number in y-direction.Introducing expression (31) in (27)-(29), we get(32)B11D2-nD+B12u1∗+B13D+B14u2∗-D-nθ∗=0,(33)B13D-B15u1∗+A12D2-B16D+B17u2∗+B18θ∗=0,(34)B19Du1∗+B21u2∗+B22D2-B23D-B24θ∗=0,where(35)B11=1+A22,B12=Ω2-A12m2-ω2,B13=A21+A22ιm,B14=2Ωω-nA11ιm,B15=2Ωω+nA12ιm,B16=nA12,B17=Ω2-ω2-m21+A22,B18=-ιm,B19=-A33ω2,B21=-B18B19,B22=1+A31ω,B23=nB22,B24=m2+A31m2ω+A32ω2.The condition for the existence of a nonzero solution of the system of (32)-(34) provides us(36)D6+F1D5+F2D4+F3D3+F4D2+F5D+F6u1∗,u2∗,θ∗=0,where(37)F1=E11E18+E12E17-A12E24-E14E23E11E17-A12E23,F2=E11E19+E12E18+E13F17-E25A12-E14E24-E15E23E11E17-A12E23,F3=E11E21+E12E19+E13E18-A12E26-E14E25-E15E24-E16E23E11E17-A12E23,F4=E11E22+E12E21+E13E19-E14E26-E15E25-E16E24E11E17-A12E23,F5=E12E22+E13E21-E15E26-E16E25E11E17-A12E23,F6=E13E22-E16E26E11E17-A12E23,E11=B11B18+B13,E12=-nB18+B14+nB13,E13=B12B18+nB14,E14=-B15+nA12,E15=B13B18+B16+nB15,E16=B14B18-nB16,E17=A12B22,E18=-B15B22+B23A12,E19=B16B22+B15B23-A12B24,E21=B15B24-B16B23,E22=-B16B24+B18B21,E23=B22B13,E24=-B13B23+B17B22,E25=B17B23-B13B24-B19B18,E26=B17B24.The general solution of (36) which is bounded as x→∞ is given by(38)u1∗,u2∗,θ∗x=∑i=131,H1i,H2iMim,ωe-λix,forRe⁡λi>0,where Mi(m,ω)(i=1,2,3) are parameters, depending upon m and ω, and(39)H1i=E11λi2-E12λi+E13A12λi3-E14λi2+E15λi-E16,H2i=B21H1i-B19λiB24-B23λi-B22λi2.In view of solution (38), stress components (24)-(26) take the form(40)σxx∗,σxy∗,σyy∗x=∑i=13H3i,H4i,H5iMim,ωe-λix-nx,forRe⁡λi>0,where(41)H3i=-λi+ιmA11H1i-H2i,H4i=A12ιm-λiH1i,H5i=-A11λi+ιmH1i-H2i. ## 6. 
Application: Mechanical Load on the Surface of the Half-Space A nonhomogeneous, rotating, magneto-thermoelastic medium, occupying the half-space x≥0, has been considered. The surface x=0 of the half-space is acted upon by a mechanical load as shown in Figure 1. So, the boundary conditions are given by the following.Figure 1 Mechanical load over a functionally graded rotating magneto-thermoelastic medium.(i) Mechanical Boundary Conditions (a) The normal stress component obeys(42)σxx(0,y,t)=-p(y,t), where p(y,t) is a given function of y and t. (b) The tangential stress component vanishes at the surface x=0, i.e.,(43)σxy(0,y,t)=0.(ii) Thermal Boundary Condition Since the plane boundary surface x=0 is taken to be isothermal, the thermal boundary condition is the vanishing of the temperature θ, i.e.,(44)θ(0,y,t)=0.Application of the nondimensional parameters and the normal mode technique defined in (22) and (31), respectively, transforms the above boundary conditions to the form:(45)σxx∗(x)=-p∗,σxy∗(x)=0,θ∗(x)=0 at x=0.Taking into account the nondimensional expressions for temperature and stresses from (38) and (40), the above boundary conditions reduce to a nonhomogeneous system of three equations, which can be written in matrix form as(46)[H31 H32 H33; H41 H42 H43; H21 H22 H23][M1; M2; M3]=[-p∗; 0; 0].Solution of system (46) gives the values of Mi (i=1,2,3) as follows:(47)M1=Δ1/Δ,M2=Δ2/Δ,M3=Δ3/Δ,where(48)Δ=H31L1-H32L2+H33L3,Δ1=-p∗L1,Δ2=p∗L2,Δ3=-p∗L3,L1=H42H23-H22H43,L2=H41H23-H21H43,L3=H41H22-H21H42.Substitution of (47) into expressions (38) and (40) yields the following expressions for the field variables:(49)(u1∗,u2∗,θ∗)(x)=(1/Δ)∑i=1..3(1,H1i,H2i)Δie^(-λix),for Re λi>0,(50)(σxx∗,σxy∗,σyy∗)(x)=(1/Δ)∑i=1..3(H3i,H4i,H5i)Δie^(-(λi+n)x),for Re λi>0. ## 7. Notable Cases ### 7.1. Neglecting Rotational Effect In the absence of rotation (i.e., Ω=0), we shall be left with the relevant problem in a nonhomogeneous, isotropic, magneto-thermoelastic medium in the context of G-N theory III.
In this limiting case, we get the corresponding expressions for the physical quantities from (49) and (50). ### 7.2. Neglecting Nonhomogeneity Effect By setting n=0 in (24)-(29), one can get the required expressions for the different distributions from (49) and (50). In this limiting case, our results coincide with those of Abo-Dahab et al. [29] with appropriate changes in loading and boundary conditions. ## 8. Numerical Results and Discussion With the aim of illustrating the theoretical results obtained in the preceding sections, we now present some numerical results. The following relevant physical constants are taken from Abo-Dahab et al. [29] for a copper-like material:(51)λ0=7.76×10^10 N m^-2, μ0=3.86×10^10 N m^-2, K0=0.6×10^-2 W m^-1 K^-1, Ce=383.1 J kg^-1 K^-1, ρ0=8954 kg m^-3, T0=293 K, αt=1.78×10^-5 K^-1.Since ω is a complex quantity, we can write ω=ω0+ιω1 so that e^(ωt)=e^(ω0t)[cos(ω1t)+ιsin(ω1t)]. So for small values of time we can take ω as real (i.e., ω=ω0). The other parameters for the numerical computation are taken as ω=2, m=2, t=0.1, p∗=8, and y=0.2.Figures 2–5 analyze the effect of rotation on the distribution of the field variables by considering three different values of angular velocity, Ω=0.0 (solid line), Ω=0.5 (dashed line), and Ω=0.9 (dot-dashed line), with n=1.0 and H0=10^5. Figure 2 explains the spatial variation of the normal displacement component for different values of Ω.
The figure shows that the distribution of normal displacement follows a similar trend for all values of Ω, the dissimilarity lying only in the magnitudes. Figure 3 is plotted to depict the variation of normal stress with location x for three different values of Ω. The figure shows that an increase in the value of Ω results in an increase in the numerical values of normal stress. Therefore, angular velocity has an increasing effect on the profile of normal stress. Variations in the tangential stress distribution with spatial coordinate x are displayed in Figure 4. These variations have a common starting point of zero magnitude, which is in quite good agreement with the boundary conditions. The figure shows that the tangential stress increases in the beginning, starts decreasing near the point x=0.23, and thereafter converges to zero as x increases. Moreover, with increasing Ω, there is an increase in the magnitude of the tangential stress distribution. Figure 5 is drawn to observe the effect of angular velocity on the pattern of the temperature distribution. As expected, the temperature distribution has a coincident starting point of zero magnitude for all values of Ω, which agrees completely with the boundary conditions. It is also manifested from the figure that increasing values of Ω have a decreasing effect on the magnitude of the temperature variations.Figure 2 Effect of angular velocity on normal displacement distribution.Figure 3 Effect of angular velocity on normal stress distribution.Figure 4 Effect of angular velocity on tangential stress distribution.Figure 5 Effect of angular velocity on temperature distribution.Figures 6–9 illustrate the effect of the nonhomogeneity parameter on the distribution of the field variables by setting three different values of the nonhomogeneity parameter, n=0.0 (solid line), n=0.5 (dashed line), and n=1.0 (dot-dashed line), with Ω=0.5 and H0=10^5.
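The characteristic quantities in (23) and the elastic ratios in (30) can be checked directly from the copper-like data of (51). The sketch below is only an illustration of that arithmetic, reading (23) as c1² = (λ0+2μ0)/ρ0 and w∗ = ρ0·Ce·c1²/K0; it is not the authors' code:

```python
import math

# Copper-like material constants from (51)
lam0 = 7.76e10   # N/m^2
mu0 = 3.86e10    # N/m^2
rho0 = 8954.0    # kg/m^3
Ce = 383.1       # J/(kg K)
K0 = 0.6e-2      # W/(m K)

# Characteristic speed and frequency, reading (23) as stated in the lead-in
c1 = math.sqrt((lam0 + 2.0 * mu0) / rho0)   # longitudinal wave speed (m/s)
w_star = rho0 * Ce * c1**2 / K0             # characteristic frequency

# Elastic ratios from (30); by construction A11 + 2*A12 = 1
A11 = lam0 / (rho0 * c1**2)
A12 = mu0 / (rho0 * c1**2)

print(c1, A11, A12)
```

Since ρ0·c1² = λ0+2μ0, the identity A11 + 2·A12 = 1 is a quick consistency check on the ratios entering (24)-(30).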
# Cone Beam Computed Tomographic Evaluation and Diagnosis of Mandibular First Molar with 6 Canals **Authors:** Shiraz Pasha; Bathula Vimala Chaitanya; Kusum Valli Somisetty **Journal:** Case Reports in Dentistry (2016) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2016/1016985 --- ## Abstract Root canal treatment of a tooth with aberrant root canal morphology is very challenging, so thorough knowledge of both the external and internal anatomy of teeth is an important aspect of root canal treatment. With the advancement of technology, it is imperative to use modern diagnostic tools such as magnification devices, CBCT, microscopes, and RVG to confirm the presence of these aberrant configurations: in everyday endodontic practice, clinicians have to treat teeth with atypical configurations for root canal treatment to be successful. This case report presents the management of a mandibular first molar with six root canals, four in the mesial and two in the distal root, and also emphasizes the use and importance of Cone Beam Computed Tomography (CBCT) as a diagnostic tool in endodontics. --- ## Body ## 1. Introduction Precise study of the morphology of human teeth is required for successful treatment, with the objective of providing better oral health and restoring stomatognathic function [1]. The mandibular first molar usually has 2 roots, occasionally 3, and there are generally 2 canals in the mesial root and one or 2 in the distal root. Vertucci and Williams [2] were the first to report the presence of a middle mesial canal in the mandibular first molar, and since then many case reports have been published showing mandibular molars with aberrant root canal morphology. In a radiographic study of extracted teeth, Goel et al.
reported that mandibular first molars had three mesial canals in 13.3% of specimens, four mesial canals in 3.3%, and three distal canals in 1.7% [3]. It has been postulated that secondary dentin apposition during tooth maturation forms dentinal vertical partitions inside the root canal cavity, creating additional root canals; the third root canal is also created by this process. Such third canals are situated mainly between the two main root canals, the buccal and the lingual [4]. This case report presents the management of a mandibular first molar with six root canals, four in the mesial and two in the distal root, confirmed by CBCT. ## 2. Case Presentation A 30-year-old male patient with a nonsignificant medical history reported to our department with a chief complaint of pain in the right mandibular region. History revealed episodes of intermittent pain over the past 15 days. The pain was moderate, nonradiating, aggravated by sweets and chewing, and relieved by medication. On clinical examination, a deep carious lesion was seen with respect to tooth 46. An exaggerated response was observed during pulp testing with an electric pulp tester, and lingering pain was observed with the cold pulp test compared to the contralateral teeth. IOPAR revealed radiolucency involving enamel, dentin, and pulp with no periapical changes in relation to 46 (Figure 1). The diagnosis was acute irreversible pulpitis. Root canal treatment was decided upon and explained to the patient. After securing local anesthesia (2% lignocaine, inferior alveolar nerve block on the right side), a rubber dam was applied and endodontic treatment was initiated. After gaining proper access, four canals were located, two in the mesial and two in the distal root. It was evident under magnification that the MB and ML canals were placed well apart with an isthmus joining them; hence, the possibility of an MM canal in the isthmus should be anticipated.
On exploration with a DG-16 probe, we found 2 additional canals between MB and ML (Figure 5). IOPAR revealed one MM canal joining the MB canal and another joining the ML canal in the middle third. To confirm this, we advised a CBCT of the right mandibular molar. CBCT revealed four canals in the mesial root and two canals in the distal root (Figures 3 and 4). Access was refined and the orifices were enlarged using orifice openers. The working length was determined with the radiographic technique and an apex locator (Figure 2). Both the mesial and distal canals were enlarged up to size 25, 6% taper (Mtwo, VDW), and an intracanal medication of calcium hydroxide and chlorhexidine was placed for 1 week. At the third appointment, master cones were selected (Figure 6) and obturation was performed using the cold lateral compaction technique and AH Plus root canal sealer. Figure 7 shows the IOPAR immediately after obturation. Figure 1: Pre-op X-ray. Figure 2: IOPA showing 4 mesial canals. Figure 3: CBCT image showing 4 canals in mesial root. Figure 4: CBCT image showing 4 canals in mesial root. Figure 5: Intraoral image showing 6 canals. Figure 6: Master cone selection. Figure 7: IOPAR after obturation. ## 3. Discussion The complicated and diverse root canal system poses a challenge to successful diagnosis and treatment. The reported incidence of the MM canal is between 1 and 15% [5]. In most cases, middle/extra canals are hidden by a dentinal projection on the mesial and distal aspects of the pulp chamber walls, and this dentinal growth is usually located between the two main canals. Pomeranz et al. [6] in their study found that about 12 out of 100 molars had MM canals and classified them as fin, confluent, or independent. A similar study by de Pablo et al. [7] confirmed the presence of an MM canal in 2.6% of mandibular first molars. Gulabivala et al. [8] described a four-canal pattern, but existing as two canals, in a Burmese population.
Newer diagnostic methods such as computed tomography (CT) scanning greatly facilitate access to the internal root canal morphology. One of the most important advantages of CBCT is that the operator can examine individual slices of the tooth of interest [9]. Other diagnostic aids, such as multiple preoperative radiographs, a sharp explorer, ultrasonic tips, staining the chamber floor with 1% methylene blue dye, performing the sodium hypochlorite “champagne bubble test,” and visualizing canal bleeding points, are all important in locating additional root canal orifices [5]. The operating microscope has also revolutionized the practice of endodontics by allowing clinicians to visualize the canals more efficiently [10]. An MM canal should always be suspected when an isthmus is clinically evident: the groove between MB and ML is a potential area to be addressed, and the access should be modified for effective disinfection of the root canal system. Clinicians should also suspect the possibility of additional canals in patients who are 40 years and above. A mandibular first molar with four canals in the mesial root has been reported in the literature only thrice, which makes our case report unique and worth mentioning for understanding the complexity of the root canal system of the mandibular first molar. ## 4. Conclusion The long-term prognosis of endodontic treatment is severely compromised by failure to locate and clean extra canals, and their management is quite challenging. With good knowledge, the will to search, and magnification and modern imaging techniques, success rates can be improved. Our case report describes the successful management of a mandibular molar with 6 canals. --- *Source: 1016985-2016-01-21.xml*
# Measurement and Analysis of P2P IPTV Program Resource **Authors:** Wenxian Wang; Xingshu Chen; Haizhou Wang; Qi Zhang; Cheng Wang **Journal:** The Scientific World Journal (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101702 --- ## Abstract With the rapid development of P2P technology, P2P IPTV applications have received more and more attention, and program resource distribution is very important to such applications. In order to collect IPTV program resources, a distributed multiprotocol crawler is proposed; the crawler collected more than 13 million pieces of IPTV program information from 2009 to 2012. The distribution of IPTV programs is decentralized and loosely organized, resulting in chaotic program names that obstruct searching for and organizing programs. We therefore focus on the characteristics of program resources, including the distributions of the length of program names, the entropy of the character types, and the hierarchy depth of programs. These analyses reveal the disorderly naming conventions of P2P IPTV programs. The results can help purify and extract useful information from chaotic names for better retrieval and can accelerate the automatic sorting of programs and the establishment of an IPTV repository. To represent the popularity of programs and to predict user behavior and the popularity of hot programs over a period, we also put forward an analytical model of hot programs. --- ## Body ## 1. Introduction Peer-to-Peer (P2P) applications take advantage of resources such as storage, CPU cycles, content, or human presence available at the edge of the Internet to provide a service [1]. With the development and maturity of P2P technology, P2P applications have become more and more popular over the past ten years, including file-sharing applications, audio-based VoIP applications, and video-based IPTV applications [2–5].
They occupy a significant proportion of Internet traffic: according to a survey by CacheLogic [6] in June 2004, 60% of Internet traffic was P2P. P2P IPTV applications, such as PPTV (formerly PPLive) [2] and QQLive [4] in China, have gradually become popular and account for a great amount of P2P traffic [7]. P2P IPTV, also called P2P streaming, emerged recently as a novel framework to deliver TV programs or live events to a large number of viewers over the Internet. With the rapid large-scale popularization of broadband technology, P2P IPTV has become a disruptive IP communication technology that is greatly revolutionizing people’s lives and entertainment [8]. Several P2P IPTV applications have achieved great commercial success, including CoolStreaming [9], PPTV, PPStream, and UUSee. With low price and simple operation, P2P IPTV has become more popular in recent years and receives great attention from both industry and academia. It was reported that PPTV had more than 200 million installations and that its monthly active user base (in December 2010) was 104 million, a penetration of 43% among Chinese Internet users [10]. There are more and more P2P IPTV applications on the Internet now, and they are difficult to measure because they use proprietary protocols; source code and official documents are scarcely published. Nevertheless, the measurement of P2P IPTV applications is an important problem for the management and development of IPTV, and several researchers have tried to address this issue, but none of them focused on the program-list and its distribution in P2P IPTV. In this paper, we proposed a distributed multiprotocol crawler (DMP-Crawler) for collecting program resources in P2P IPTV networks. Moreover, we analyzed the characteristics of these IPTV programs and presented an analytical model of hot programs. The model can be used to infer which dramas are popular among IPTV users at a given time. The remainder of this paper is structured as follows.
Section 2 presents an overview of P2P IPTV. Section 3 introduces related work on P2P IPTV measurement. Section 4 describes the principle of P2P IPTV program-list distribution, the DMP-Crawler, and the methodology of measurement and analysis. Section 5 presents and discusses the results. Finally, Section 6 concludes the paper and outlines future work. ## 2. Overview of P2P IPTV ### 2.1. P2P IPTV Internet Protocol Television (IPTV) denotes the transport of live streams and recorded movies or video clips by means of advanced packet-switched Internet technologies [11]. ITU-T defined IPTV as multimedia services such as television/video/audio/text/graphics/data delivered over IP-based networks managed to support the required level of QoS/QoE, security, interactivity, and reliability [12]. Over the past decade, P2P technology has been a promising solution for the distribution of large-scale media, and a large number of P2P IPTV systems have been developed and widely deployed on the Internet. In this paper, we defined P2P IPTV as a technology that enables users to transmit and receive multimedia services, including television, video, audio, text, and graphics, through P2P overlay networks with support for QoS/QoE, security, mobility, interactivity, and reliability. Through P2P IPTV, users can enjoy IPTV services anywhere, and P2P IPTV applications are now changing the way we watch TV and movies. In 2000, Chu et al. proposed End System Multicast (ESM) [13], the first P2P IPTV application, in which an overlay tree is constructed to distribute video data and continuously optimized to minimize end-to-end latency. Overlay networks were then adopted for efficient distribution of live video, including Nice [14], SplitStream [15], Scattercast [16], and Overcast [17]. Unfortunately, they were not deployed at large scale due to their limited capabilities.
CoolStreaming was released in summer 2004 and arguably represented the first large-scale P2P video streaming experiment [9]. It was then significantly modified and commercially launched. With the success of CoolStreaming, many P2P IPTV applications emerged in 2005, including PPLive, PPStream, QQLive, and UUSee in China. From 2006, measurements of P2P IPTV were carried out by a number of academic groups, and we also carried out measurement work [18] in 2007. ### 2.2. Architecture of P2P IPTV A typical P2P IPTV application comprises five components: media collection server (MCS), media distribution server (MDS), program-list server (PLS), tracker server (TS, also called peer-list server), and end system (ES, also called client or peer). As illustrated in Figure 1, the basic workflow of a P2P IPTV application is as follows. Figure 1: Architecture of P2P IPTV. Step 1. MCS gathers video data in two ways. For a live program, MCS gets video data from a video grabber; for video on demand (VoD), MCS reads the video file directly. It then encodes the video data according to some coding method and uploads the data to MDS. Step 2. When encoding the data of a video, MCS generates the related program name, program GUID (Globally Unique Identifier), play link, category, and so forth, and registers this information with PLS. At the same time, MDS registers the program GUID with TS. Step 3. After receiving live data, MDS distributes them to the IPTV network. After receiving VoD data, MDS first stores them and distributes them when clients request them. We introduced the video distribution protocol in detail in 2012 [18]. Step 4. The local peer requests the latest program-list file from PLS and updates it immediately after launching the IPTV client. The program list consists of the program name, the program GUID, which is the most important identifier in signaling among peers, program descriptions, and so forth. Step 5.
After the local peer selects one program to watch, the peer registers itself with the tracker server and sends multiple query messages to the server to retrieve a small set of partner peers who are watching the same program. The information about partner peers includes IP address, TCP port, and UDP port. Upon receiving the initial list of partner peers, the local peer uses this seed peer list to harvest additional lists by periodically probing active peers, which maintain lists of peers. Step 6. After harvesting enough peers, the peer tries to connect to the active ones or to MDS to request video data for playback of the appointed program and launches a local media player (such as Windows Media Player or RealPlayer) to play the video. To deal with the churn of peers, the local peer needs to actively seek new peers from its existing partners to update its peer list. At the same time, it also rebroadcasts its current peer list to its partner peers. Our work is focused on Step 4, namely, the distribution of the program-list. ## 3. Related Work P2P IPTV measurement has been extensively studied. The measuring methods can be classified into two types: passive tracing and active tracing. The passive approach is performed by deploying code at suitable points in the network infrastructure and does not increase the network traffic.
It is often used to analyze and identify P2P IPTV traffic within general Internet traffic based on known behaviors (such as connection ports, features, or patterns), and also to capture IPTV traces and understand the P2P IPTV application. Du et al. [19] and Tan et al. [20] developed machine learning methodologies to identify PPLive and PPStream traffic. Agarwal et al. [21] studied program startup time and quality of service in terms of the number of consecutive lost blocks. Silverston and Fourmaux [22] studied four IPTV applications and gave a global view of the impact of P2P media streaming on network traffic. Following this research, they presented a detailed study of IPTV traffic, providing useful insights on transport-level and packet-level properties as well as the behaviors of peers in the network [23]. With abundant traces from a successful commercial P2P IPTV application, Wu et al. [24] characterized interpeer bandwidth availability in large-scale P2P streaming networks. The passive approach is potentially transparent and scalable and allows traffic from multiple domains to be compared side by side. However, it depends on access to core network infrastructure, which is not always available; thus, it is mostly used for flow control in firewall or gateway devices. In the active approach, a special crawler, acting like an ordinary client, injects test packets into the P2P IPTV network or sends packets to servers and peers. The crawler then follows the packets and measures characteristics of the IPTV network. Hei et al. [25] carried out the first active tracing of a commercial P2P IPTV application, namely, PPLive. They further developed a dedicated PPLive crawler to study the global characteristics of the PPLive system [26]. Wu et al. [27] presented Magellan to characterize the topologies of UUSee peer-to-peer streaming networks. Vu et al.
[28] mapped the PPLive network to study the impact of media streaming on P2P overlays. Most existing research surveyed P2P IPTV network-centric metrics (such as traffic characterization, TCP or UDP connections, and video traffic) or user-centric metrics (such as user arrival and departure, geographic distribution, and channel population). Our studies were primarily focused on the program-list distribution of P2P IPTV applications, because program resource distribution is very important to these applications. Our work surveyed P2P IPTV content-centric metrics, which are useful for the prediction and monitoring of programs. In this paper, a distributed multiprotocol program crawler was proposed to collect various kinds of program information. Moreover, we also analyzed the characteristics of program resources and put forward an analytical model of hot programs. ## 4. Methodology of Measurement and Characteristic Analysis In this section, we will present the basic principle of program-list distribution in P2P IPTV applications and illustrate a feasible and efficient architecture for crawling the program-list. ### 4.1. Principle of Program Resource Distribution When the program-list is downloaded and extracted by an IPTV client, users can select a program to watch, so program-list distribution is very important in P2P IPTV applications. The program-list includes program names, categories, play-links, and descriptions. The play-link is the most important identifier in signaling among peers viewing the same program. A typical example of program metadata is shown in Table 1. Table 1: Metadata of an IPTV program.
| Field | Value |
| --- | --- |
| Program name | 110205- Happy camp |
| Category | Variety show |
| Channel name | Happy camp |
| Subchannel name | — |
| IPTV application name | PPStream |
| Play-link | pps://n26aeygqeb6s4kbc2aqa.pps/110205- Happy Camp.wmv |
| Number of views | 425 |
| Date added | 2011/02/05 |

The client-server architecture, as shown in Figure 1, is usually used to distribute the program-list file in IPTV systems. When an IPTV client starts up, it requests the program-list file from program-list servers and immediately updates the local information of all programs. XML is usually used in program-list files to organize the various metadata of programs. This differs from the website-based program-list distribution of video-sharing sites such as Youku and YouTube; the program-list of IPTV is well organized for convenient browsing. With the rapid increase in the number of programs, the program-list file keeps growing. For example, PPTV had about 300 thousand programs in 2011, and its program-list file exceeded 20 MB, which places a heavy burden on program-list servers and degrades the user experience. Some IPTV applications compress the file to decrease its size, while others split it into multiple program-list files by program category. Furthermore, some IPTV applications encrypt program-list files to prevent hotlinking.

### 4.2. Architecture of DMP-Crawler

In order to obtain program information from IPTV applications, it is necessary to summarize the principle of program-list distribution of most IPTV applications and to decrypt the program-list files and parse their XML metadata. We therefore proposed an efficient distributed multiprotocol crawler (DMP-Crawler) to collect various kinds of program information from popular P2P IPTV applications. {Program name, IPTV application name} was used to uniquely identify a program.
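Since program-list files are usually XML, the metadata-extraction step, keyed by the {Program name, IPTV application name} pair, can be sketched as follows. This is a minimal illustration only: the element names and file layout below are invented, since each application uses its own (often encrypted) schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical program-list fragment; real IPTV applications use their
# own schemas, so all tag names here are illustrative.
PROGRAM_LIST_XML = """
<programs>
  <program>
    <name>110205- Happy camp</name>
    <category>Variety show</category>
    <channel>Happy camp</channel>
    <playlink>pps://n26aeygqeb6s4kbc2aqa.pps/110205- Happy Camp.wmv</playlink>
    <views>425</views>
  </program>
</programs>
"""

def extract_programs(xml_text, app_name):
    """Parse a program-list file and key each record by the composite
    identifier (program name, IPTV application name)."""
    root = ET.fromstring(xml_text)
    records = {}
    for prog in root.iter("program"):
        name = prog.findtext("name")
        records[(name, app_name)] = {
            "category": prog.findtext("category"),
            "channel": prog.findtext("channel"),
            "playlink": prog.findtext("playlink"),
            "views": int(prog.findtext("views", default="0")),
        }
    return records

records = extract_programs(PROGRAM_LIST_XML, "PPStream")
print(records[("110205- Happy camp", "PPStream")]["views"])  # 425
```

Keying on the composite identifier lets records from different applications coexist even when program names collide.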
Figure 2 presents an overview of the architecture of DMP-Crawler, which is composed of one crawler controller and a number of crawler clients.

Figure 2: Architecture of DMP-Crawler.

On the basis of the status of crawler clients and servers, the crawler controller assigns tasks to multiple independent crawler clients through a task scheduling algorithm. Each crawler client periodically reports its crawling status as well as its CPU and memory consumption to the crawler controller. A crawler client mainly includes a crawling engine, a program-list crawling module, a program-list extracting module, a classification module, and a data storage module. According to the crawling task type, the crawler client invokes the crawling engine, requests a program-list file from program-list servers, and reports its crawling status to the crawler controller. When a program-list file is downloaded, the crawler client extracts program metadata from the file, classifies the programs, and stores all information in a database for further analysis.

### 4.3. Characteristic Analysis of Program Resource

In order to understand the naming rules of IPTV programs, the characteristics of program naming were analyzed with statistical methods. The analysis covered the distribution of program-name lengths, the entropy of character types, high-frequency symbols in names, and the distribution of the hierarchy depth of program names.

Definition 1. A Program Name (PN) is a sequence of characters, PN = c_1 c_2 ⋯ c_n, where n = len(PN) is the number of characters (the length of PN), and each c_i (i = 1, 2, …, n) is a printable character in some coded character set, such as Chinese, English, Latin, or punctuation.

- If len(PN) ≤ 10, the program has a short name.
- If 10 < len(PN) ≤ 20, the program has a medium name.
- If 20 < len(PN) ≤ 30, the program has a long name.
- If len(PN) > 30, the program has a super-long name.

Let a random variable x denote the character type of a program name.
The set of values of x is denoted as

CharsType = {C, E, L, G, N, S, O},  (1)

where C, E, L, G, N, S, and O represent Chinese, English, Latin, Greek, number, symbol (including punctuation and special characters), and unidentified characters, respectively. Character types are defined by the Unicode Character Database (UCD) [29]. Let U_c denote the set of characters of program names, c_i ∈ U_c. With a mapping function f: U_c → CharsType, every character of a program name can be transformed to its corresponding character type as follows:

f(PN) = x_1 x_2 ⋯ x_n,  (2)

where x_i ∈ CharsType, i = 1, 2, …, n. Let H(CharsType) denote the information entropy of the x_i, which is used to evaluate the chaos of program naming [30]:

H(CharsType) = -∑_{i=1}^{V} p(x_i) · log2(p(x_i)),  (3)

where p(x_i) is the probability of x_i and V is the number of character types in the program name. Thus, the value of H(CharsType) lies between 0 and log2(V). In the calculation of entropy, let 0 · log2(0) = 0.

### 4.4. Analytical Model of Hot Programs

Definition 2. Hot programs are the top 100 popular programs, that is, those with the most viewers and the greatest public attention.

Definition 3. Hot degree describes how much attention a hot program receives. Its influencing factors include the number of viewers, the watching time, and the number of comments.

Let HD denote the hot degree of a program. In a single P2P IPTV application, it can be expressed as

HD = α·U + β·T + γ·C,  α + β + γ = 1,  (4)

where U, T, and C represent the number of viewers, the watching time, and the number of comments, respectively. Here the number of viewers refers to the number of online viewers at some point, and α, β, and γ are the corresponding weights. Across all P2P IPTV applications, (4) can be rewritten as

HD = ∑_{i=1}^{j} HD_i = ∑_{i=1}^{j} (α·U_i + β·T_i + γ·C_i),  α + β + γ = 1,  (5)

where j is the number of IPTV applications.
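A minimal sketch of Definition 1's length buckets and the character-type entropy of equations (1)-(3). The Unicode-range classification below is a rough illustrative stand-in for the paper's UCD-based mapping, and all function names are our own:

```python
import math
from collections import Counter

def classify_name_length(program_name):
    """Definition 1's buckets: short / medium / long / super-long."""
    n = len(program_name)
    if n <= 10:
        return "short"
    if n <= 20:
        return "medium"
    if n <= 30:
        return "long"
    return "super-long"

def char_type(ch):
    """Map a character to a coarse type code from
    CharsType = {C, E, L, G, N, S, O}. The Unicode ranges here are a
    simplified approximation, not the paper's exact UCD classifier."""
    cp = ord(ch)
    if 0x4E00 <= cp <= 0x9FFF:                      # CJK Unified Ideographs
        return "C"                                  # Chinese
    if ("a" <= ch <= "z") or ("A" <= ch <= "Z"):
        return "E"                                  # English
    if 0x00C0 <= cp <= 0x024F:                      # Latin supplements
        return "L"                                  # Latin
    if 0x0370 <= cp <= 0x03FF:
        return "G"                                  # Greek
    if ch.isdigit():
        return "N"                                  # Number
    if not ch.isalnum():
        return "S"                                  # Symbol / punctuation
    return "O"                                      # unidentified

def name_entropy(program_name):
    """H(CharsType) = -sum_i p(x_i) * log2 p(x_i), equation (3)."""
    types = [char_type(ch) for ch in program_name]
    counts = Counter(types)
    n = len(types)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(classify_name_length("110205- Happy camp"))   # "medium" (17 chars)
print(name_entropy("happycamp"))                    # 0.0 (one type only)
```

A name drawn from a single character type has zero entropy; mixing types pushes the value toward log2(V).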
## 5. Results and Discussion

### 5.1. Crawling Results of DMP-Crawler

DMP-Crawler consists of one crawler controller and ten crawler clients. It was deployed on three PC servers (Intel E5506 CPU, 4 GB memory) in Beijing, China, with 10 Mbps Ethernet network access. DMP-Crawler ran two rounds and collected about 900,000 programs every day. Repeated programs were removed according to the collected program names and IPTV application names. From February 2009 to July 2012, DMP-Crawler collected 13,107,766 distinct programs from 33 IPTV applications in China, of which only 0.3% were live programs. In particular, PPfilm has no live program. The numbers and ratios of programs of the 33 IPTV applications are shown graphically in Figure 3. The distribution of programs is highly skewed: the most popular IPTV application is PPfilm, accounting for about 31.1%, and the second is PPStream, accounting for about 19.3%. These two applications account for about half of all IPTV programs.

Figure 3: Distribution of various IPTV programs.

According to the requirements of the State Administration of Radio Film and Television (SARFT), IPTV service providers had to apply for an “Information Network Dissemination Audio-Visual Programs Permit” before August 2009. Some IPTV service providers could not acquire the permit and stopped video service in 2010.
Therefore, these IPTV applications have only hundreds or thousands of programs. We ranked the IPTV applications by their percentage of programs and plotted the cumulative distribution function (CDF) of these percentages in Figure 4.

Figure 4: CDF of the distribution of programs.

In Figure 4, 15.2% (5/33) of the IPTV applications hold 80% of the programs, and 24.2% (8/33) hold more than 90%. Some IPTV applications, like SopCast and TVUPlayer, have only a small proportion of programs because they offer no video on demand. When programs were extracted from program-list files, they were classified into one of 13 categories defined by SARFT. The percentages of all categories are shown graphically in Figure 5. The distribution is highly skewed: the most popular category is News, accounting for about 39%; the second is Drama, about 21%; the third is Animation, about 16%; and the last is Specific show, about 0.02%.

Figure 5: Distribution of programs' categories.

Figure 5 also lists the category “Others,” which contains programs that cannot be classified.

### 5.2. Characteristic Analysis of Program Resource

#### 5.2.1. Distribution of Length of Program Name

All programs were sorted by name length, and the percentages of programs were calculated for length intervals of 5. The percentages for all length intervals are shown in Table 2.

Table 2: Distribution of program-name length.
| Length range | PPStream | PPTV | QQLive | UUSee | PPfilm | All |
| --- | --- | --- | --- | --- | --- | --- |
| (0, 5] | 5.2% | 3.0% | 15.8% | 2.8% | 8.3% | 9.3% |
| (5, 10] | 44.0% | 16.9% | 59.4% | 17.9% | 50.5% | 29.9% |
| (10, 15] | 26.7% | 30.2% | 17.5% | 26.8% | 29.7% | 25.1% |
| (15, 20] | 17.6% | 18.5% | 5.2% | 46.4% | 7.3% | 23.4% |
| (20, 25] | 5.6% | 12.6% | 1.6% | 5.1% | 1.4% | 6.3% |
| (25, 30] | 0.6% | 7.4% | 0.3% | 0.8% | 0.4% | 2.5% |
| (30, +∞) | 0.2% | 11.5% | 0.2% | 0.2% | 2.5% | 3.5% |

In Table 2, about 40% of programs have short names, 48.5% have medium names, 8.8% have long names, and only 3.5% have super-long names. Among the 5 popular IPTV applications, more than 30% of PPTV programs have long or super-long names, while 75.2% of QQLive programs and 58.8% of PPfilm programs have short names. We also analyzed the quartiles of name length; the results are shown in Table 3. QQLive programs have the smallest Q1 and interquartile range, and PPTV programs have the largest Q3 and interquartile range, in accordance with the results in Table 2.

Table 3: Length quartiles of program names.

| IPTV application | Q1 | Median | Q3 | Interquartile range (Q3−Q1) |
| --- | --- | --- | --- | --- |
| PPStream | 8 | 11 | 15 | 7 |
| PPTV | 11 | 15 | 23 | 12 |
| QQLive | 6 | 7 | 10 | 4 |
| UUSee | 12 | 16 | 18 | 6 |
| PPfilm | 7 | 9 | 13 | 6 |
| All | 8 | 13 | 17 | 9 |

From Tables 2 and 3, we can infer that short and medium names are commonly used in IPTV program naming, especially for QQLive programs.

#### 5.2.2. Character Type of Program Name

All characters in program names were counted and mapped to the corresponding CharsType. The character types include C, E, L, G, N, S, and O, so the number of character types is 7. The probabilities of C, E, L, G, N, S, and O are 0.585135, 0.051970, 0.00950, 0.000029, 0.199762, 0.153493, and 0.000109, respectively, and the entropy of IPTV program naming is 1.122197 according to (3). While collecting IPTV programs, we also built a BitTorrent crawler and an eDonkey crawler to crawl program resources in the BitTorrent and eDonkey networks.
The two crawlers collected 2,329,237 BitTorrent programs and 619,810 eDonkey programs. As Table 4 shows, the character entropy of IPTV programs is less than that of BitTorrent and eDonkey programs. The chaos of IPTV program naming is small, indicating that IPTV program naming is relatively regular. This may be because popular P2P IPTV applications are operated commercially, while BitTorrent and eDonkey are public platforms whose programs are uploaded by amateurs.

Table 4: Information entropy and probability of character types.

| Character type | IPTV | BitTorrent | eDonkey |
| --- | --- | --- | --- |
| Chinese (C) | 0.585135 | 0.260022 | 0.220748 |
| English (E) | 0.051970 | 0.345382 | 0.472992 |
| Latin (L) | 0.00950 | 0.002303 | 0.002324 |
| Greek (G) | 0.000029 | 0.00001 | 0.000013 |
| Number (N) | 0.199762 | 0.12577 | 0.057365 |
| Symbol (S) | 0.153493 | 0.264594 | 0.242756 |
| Unidentified character (O) | 0.000109 | 0.00194 | 0.003801 |
| Information entropy (H) | 1.122197 | 1.356355 | 1.231227 |

#### 5.2.3. Hierarchy Depth of Program

Definition 4. The hierarchy depth of a program is the number of times the program is classified by category, channel, subchannel, and so forth. For example, the hierarchy depth of the program in Table 1 is 2.

The original intention of collecting hierarchy-depth statistics was to find the relationship between name length and hierarchy depth. Statistical results for the hierarchy depth of all programs and of popular IPTV programs are presented in Table 5.

Table 5: Distribution of hierarchy depth of programs.

| Hierarchy depth | PPStream | PPTV | QQLive | PPfilm | UUSee | All |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 19.12% | 3.20% | 15.95% | 8.34% | 3.25% | 14.72% |
| 2 | 80.87% | 44.69% | 9.34% | 43.35% | 85.64% | 50.76% |
| 3 | 0.014% | 52.11% | 17.50% | 48.31% | 11.11% | 27.42% |
| 4 | 0.00% | 0.004% | 57.22% | 0.003% | 0.00% | 7.09% |
| 5 | 0.00% | 0.00% | 0.0003% | 0.00% | 0.00% | 0.002% |

In Table 5, more than 50% of programs have a hierarchy depth of 2, and 27.42% have a depth of 3. A 2-level hierarchy makes it easy to display programs in the IPTV client, while a 3-level hierarchy is often used for movies and drama programs.
The hierarchy depth distributions of PPTV and PPfilm are similar: their 2-level and 3-level programs together account for more than 90%. The hierarchy depth distribution of QQLive is quite different from that of the other applications, with 4-level programs accounting for 57.22%; accordingly, its programs tend to use short names.

### 5.3. Analysis of Hot Programs

In the measurement of P2P IPTV, it is difficult to obtain the watching time of programs, which is managed by the IPTV operators, and we could not collaborate with major IPTV operators. In addition, only a few popular IPTV applications provide program comment functions in the IPTV client or on a website, and the number of online viewers is much larger than the number of comments. We therefore let α = 1 and β = γ = 0, so that (5) simplifies to

HD = ∑_{i=1}^{j} HD_i = ∑_{i=1}^{j} U_i.  (6)

From (6), hot programs appear in popular IPTV applications, so we consider only the top 5 IPTV applications; then

HD = U_PPStream + U_PPfilm + U_UUSee + U_PPTV + U_QQLive.  (7)

For a program, PPStream and PPfilm provide the number of viewers and UUSee provides the ratio of viewers, while PPTV and QQLive offer only a 6-level popularity. Thus, we must normalize the number of viewers according to the number of online viewers of the various IPTV applications. In June 2010, the maximum numbers of viewers of PPStream, UUSee, PPTV, and QQLive were about 20.0, 2.0, 11.0, and 6.6 million, respectively. The normalizing rules are as follows.

(1) U_PPStream and U_PPfilm can be obtained from their program-list files.

(2) UUSee presents the ratio of viewers, so U_UUSee is calculated from this ratio and the number of online viewers, which we estimate as 2 million for UUSee.

(3) PPTV and QQLive offer a 6-level program popularity, where each level represents an interval of viewer counts: levels 0, 1, 2, 3, 4, and 5 represent [0, 199], [200, 399], [400, 599], [600, 799], [800, 999], and [1000, +∞), respectively.
If the popularity level is between 0 and 4, the median of the corresponding interval is used as U. If the level is 5 and PPStream does not carry the same program, we set U_PPTV = 3000 and U_QQLive = 1800. If the level is 5 and PPStream carries the same program, U_PPTV and U_QQLive are calculated as

U_PPTV = 0.55 × U_PPStream,  U_QQLive = 0.33 × U_PPStream.  (8)

Table 6 lists the hottest programs in one week according to the analytical model of hot programs. From Table 6, we can infer that the most popular IPTV drama in that week was Let’s see the Meteor Shower together. Moreover, the model can be used to predict the popularity of hot programs over a period of time.

Table 6: Hottest programs in one week.

| Date | Program name | Category | HD |
| --- | --- | --- | --- |
| 2010-12-23 | Let’s see the Meteor Shower together—23 | Drama | 116099 |
| 2010-12-24 | Salt (Angelina Jolie) | Film | 66071 |
| 2010-12-25 | 100824-Kangxi | Variety show | 94976 |
| 2010-12-26 | Can’t Buy Me Love—02 | Drama | 94578 |
| 2010-12-27 | Let’s see the Meteor Shower together—24 | Drama | 146474 |
| 2010-12-28 | Triple Tap | Film | 111477 |
| 2010-12-29 | Triple Tap | Film | 111677 |
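The normalization rules and the simplified hot degree of equations (6)-(8) can be sketched as follows. The interval medians, level-5 defaults, and the 2-million UUSee estimate come from the rules above; the function names and the example numbers are purely illustrative:

```python
# Interval medians for PPTV/QQLive popularity levels 0..4 (rule (3)).
LEVEL_MEDIANS = {0: 99.5, 1: 299.5, 2: 499.5, 3: 699.5, 4: 899.5}

def normalize_level(level, app, u_ppstream=None):
    """Estimate a viewer count U from a 6-level popularity value,
    following rule (3) and equation (8)."""
    if 0 <= level <= 4:
        return LEVEL_MEDIANS[level]
    # Level 5: a fixed default, or a fraction of PPStream's viewer count
    # when PPStream carries the same program (equation (8)).
    if u_ppstream is None:
        return 3000.0 if app == "PPTV" else 1800.0
    factor = 0.55 if app == "PPTV" else 0.33
    return factor * u_ppstream

def hot_degree(u_ppstream, u_ppfilm, uusee_ratio, pptv_level, qqlive_level,
               uusee_online=2_000_000):
    """HD over the top 5 applications (equation (7)) with alpha = 1 and
    beta = gamma = 0, i.e. the simplification in equation (6)."""
    u_uusee = uusee_ratio * uusee_online                      # rule (2)
    u_pptv = normalize_level(pptv_level, "PPTV", u_ppstream)
    u_qqlive = normalize_level(qqlive_level, "QQLive", u_ppstream)
    return u_ppstream + u_ppfilm + u_uusee + u_pptv + u_qqlive

# A hypothetical program: 100,000 PPStream viewers, 5,000 PPfilm viewers,
# 0.1% of UUSee's online viewers, level-5 popularity on PPTV and QQLive.
print(hot_degree(100_000, 5_000, 0.001, 5, 5))
```

With these inputs, the PPTV and QQLive contributions are scaled from the PPStream count via equation (8) rather than taken from the level-5 defaults.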
In June 2010, the maximum numbers of viewers of PPStream, UUSee, PPTV, and QQLive were about 20.0, 2.0, 11.0, and 6.6 million, respectively. The normalizing rules are as follows.

(1) U_PPStream and U_PPfilm can be obtained directly from their program-list files.

(2) UUSee presents the ratio of viewers, so U_UUSee is calculated from this ratio and the number of online viewers. We estimate the online viewers of UUSee as 2 million.

(3) PPTV and QQLive offer a 6-level popularity value for programs, where each level represents an interval of the number of viewers: levels 0, 1, 2, 3, 4, and 5 represent [0, 199], [200, 399], [400, 599], [600, 799], [800, 999], and [1000, +∞), respectively. If the popularity level is between 0 and 4, the median of the corresponding interval is used as U. If the popularity level is 5 and PPStream does not carry the same program, U_PPTV = 3000 and U_QQLive = 1800. If the popularity level is 5 and PPStream carries the same program, U_PPTV and U_QQLive are calculated by

(8) U_PPTV = 0.55 × U_PPStream, U_QQLive = 0.33 × U_PPStream.

Table 6 lists the hottest programs in one week, obtained through the analytical model of hot programs. From Table 6, we can infer that the most popular IPTV drama in that week was *Let’s see the Meteor Shower together*. Moreover, the model can be used to predict the popularity of hot programs over a period of time.

Table 6: Hottest programs in one week.

| Date | Program name | Category | HD |
|---|---|---|---|
| 2010-12-23 | Let’s see the Meteor Shower together—23 | Drama | 116099 |
| 2010-12-24 | Salt (Angelina Jolie) | Film | 66071 |
| 2010-12-25 | 100824- Kangxi | Variety show | 94976 |
| 2010-12-26 | Can’t Buy Me Love—02 | Drama | 94578 |
| 2010-12-27 | Let’s see the Meteor Shower together—24 | Drama | 146474 |
| 2010-12-28 | Triple Tap | Film | 111477 |
| 2010-12-29 | Triple Tap | Film | 111677 |

## 6. Conclusions

In this paper, we have studied program information collection in P2P IPTV applications. We proposed a distributed multiprotocol crawler to harvest program information of P2P IPTV applications. To the best of our knowledge, this is the first time a detailed crawler for IPTV programs has been presented.
Characteristic analysis of programs was carried out. The results reveal the disorderly naming conventions of P2P IPTV programs and can help to purify and extract useful information from chaotic names for better retrieval. We also put forward an analytical model of hot programs to represent the popularity of programs and to predict user behavior and the popularity of hot programs within a period.

The distribution of IPTV programs is independent and loosely organized, resulting in chaotic program names, which obstructs searching for and organizing programs. In future work, we will focus on data mining of programs and the establishment of an IPTV repository.

---

*Source: 101702-2014-03-19.xml*
# Measurement and Analysis of P2P IPTV Program Resource

**Authors:** Wenxian Wang; Xingshu Chen; Haizhou Wang; Qi Zhang; Cheng Wang

**Journal:** The Scientific World Journal (2014)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi Publishing Corporation

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2014/101702
---

## Abstract

With the rapid development of P2P technology, P2P IPTV applications have received more and more attention, and program resource distribution is very important to them. In order to collect IPTV program resources, a distributed multiprotocol crawler is proposed, which collected more than 13 million pieces of IPTV program information from 2009 to 2012. The distribution of IPTV programs is independent and loosely organized, resulting in chaotic program names, which obstructs searching and organizing programs. Thus, we focus on characteristic analysis of program resources, including the distributions of the length of program names, the entropy of the character types, and the hierarchy depth of programs. These analyses reveal the disorderly naming conventions of P2P IPTV programs; the results can help to purify and extract useful information from chaotic names for better retrieval and accelerate the automatic sorting of programs and the establishment of an IPTV repository. In order to represent the popularity of programs and to predict user behavior and the popularity of hot programs over a period, we also put forward an analytical model of hot programs.

---

## Body

## 1. Introduction

Peer-to-Peer (P2P) applications take advantage of resources such as storage, CPU cycles, content, or human presence available at the edge of the Internet to provide a service [1]. With the development and maturity of P2P technology, P2P applications have become more and more popular in the recent ten years, including file-sharing applications, audio-based VoIP applications, and video-based IPTV applications [2–5]. However, they occupy a significant proportion of Internet traffic: according to a survey from CacheLogic [6] in June 2004, 60% of Internet traffic was P2P.
In addition, P2P IPTV applications, such as PPTV (formerly PPLive) [2] and QQLive [4] in China, have become popular gradually and occupy a great amount of P2P traffic [7].

P2P IPTV, also called P2P streaming, emerged recently as a novel framework to deliver TV programs or live events to a large number of viewers over the Internet. With the rapid large-scale popularization of broadband technology, P2P IPTV has become a disruptive IP communication technology, which greatly revolutionizes people’s lives and entertainment [8]. Several P2P IPTV applications have gained great commercial success, including CoolStreaming [9], PPTV, PPStream, and UUSee. With its low price and simple operation, P2P IPTV has become more popular in recent years and has received great attention from both industry and academia. It was reported that PPTV had more than 200 million installations and that its monthly active user base (in December 2010) was 104 million, a penetration of 43% among Chinese Internet users [10].

There are more and more P2P IPTV applications on the Internet now, and it is difficult to measure them because they use proprietary protocols; source code and official documents are scarcely published. However, the measurement of P2P IPTV applications is an important problem for the management and development of IPTV, and several researchers have tried to address this issue, but none of them focused on the program-list of P2P IPTV and its distribution. In this paper, we propose a distributed multiprotocol crawler (DMP-Crawler) for collecting program resources in P2P IPTV networks. Moreover, we analyze the characteristics of these IPTV programs and present an analytical model of hot programs. The model can be used to infer the most popular programs at a given time.

The remainder of this paper is structured as follows. Section 2 presents an overview of P2P IPTV. Section 3 introduces related work on P2P IPTV measurement.
Section 4 describes the principle of P2P IPTV program-list distribution, DMP-Crawler, and the methodology of measurement and analysis. Section 5 presents and discusses the results. Finally, Section 6 concludes the paper and gives the future work.

## 2. Overview of P2P IPTV

### 2.1. P2P IPTV

Internet Protocol Television (IPTV) denotes the transport of live streams and recorded movies or video clips by means of advanced packet-switched Internet technologies [11]. ITU-T defined IPTV as multimedia services such as television/video/audio/text/graphics/data delivered over IP-based networks managed to support the required level of QoS/QoE, security, interactivity, and reliability [12].

Over the past decade, P2P technology has been a promising solution for large-scale media distribution, and a large number of P2P IPTV systems have been developed and widely deployed on the Internet. In this paper, we define P2P IPTV as a technology that enables users to transmit and receive multimedia services, including television, video, audio, text, and graphics, through P2P overlay networks with support for QoS/QoE, security, mobility, interactivity, and reliability. Through P2P IPTV, users can enjoy IPTV services anywhere. Now P2P IPTV applications are changing the way we watch TV and movies.

In 2000, Chu et al. proposed End System Multicast (ESM) [13], the first P2P IPTV application, in which an overlay tree is constructed to distribute video data and continuously optimized to minimize end-to-end latency. Overlay networks were then adopted for efficient distribution of live video, including Nice [14], SplitStream [15], Scattercast [16], and Overcast [17]. Unfortunately, they were not deployed at large scale due to their limited capabilities. CoolStreaming was released in the summer of 2004 and arguably represented the first large-scale P2P video streaming experiment [9]. CoolStreaming was later significantly modified and commercially launched.
With the success of CoolStreaming, many P2P IPTV applications emerged in 2005, including PPLive, PPStream, QQLive, and UUSee in China. From 2006, related measurements of P2P IPTV were carried out by a number of academic staff, and we also carried out measurement work [18] in 2007.

### 2.2. Architecture of P2P IPTV

A typical P2P IPTV application is comprised of five components: media collection server (MCS), media distribution server (MDS), program-list server (PLS), tracker server (TS, also called peer-list server), and end system (ES, also called client or peer). As illustrated in Figure 1, the basic workflow of a P2P IPTV application is as follows.

Figure 1: Architecture of P2P IPTV.

Step 1. MCS gathers video data in two ways. Firstly, for live programs, MCS gets video data from a video grabber. Secondly, for video on demand (VoD), MCS reads the video file directly. It then encodes the video data according to some coding method and uploads the data to MDS.

Step 2. When coding the data of a video, MCS generates the related program name, program GUID (Globally Unique Identifier), play link, category, and so forth and registers the information in PLS. At the same time, MDS registers the program GUID in TS.

Step 3. After receiving live data, MDS distributes them to the IPTV network. After receiving VoD data, MDS stores them first and distributes them when clients request them. We introduced the video distribution protocol in detail in 2012 [18].

Step 4. The local peer requests the latest program-list file from PLS and updates it immediately after launching the IPTV client. The program list consists of the program name, the program GUID, which is the most important identification for signaling communication among peers, program descriptions, and so forth.

Step 5.
After the local peer selects a program to watch, the peer registers itself to the tracker server and sends multiple query messages to the server to retrieve a small set of partner peers who are watching the same program. The information on partner peers includes IP address, TCP port, and UDP port. Upon receiving the initial list of partner peers, the local peer uses this seed peer list to harvest additional lists by periodically probing active peers, which maintain lists of peers.

Step 6. After harvesting enough peers, the peer tries to connect to the active ones or to MDS to request video data for playback of the appointed program and launches a local media player (such as Windows Media Player or RealPlayer) to play the video. To deal with the churn of peers, the local peer needs to actively seek new peers from its existing partners to update its peer list. At the same time, it also rebroadcasts its current peer list to its partner peers.

Our work is focused on Step 4, namely, the distribution of the program-list.

## 3. Related Work

P2P IPTV measurement has been extensively studied. The measuring methods can be classified into two types: the passive tracing approach and the active tracing approach.

The passive approach is performed by deploying code at suitable points in the network infrastructure. The passive approach does not increase the network traffic.
It is often used to analyze and identify P2P IPTV traffic within general Internet traffic using known behaviors (such as connection ports, features, or patterns). It is also used to capture IPTV traces and characterize P2P IPTV applications. Du et al. [19] and Tan et al. [20] developed machine learning methodologies to identify PPLive and PPStream traffic. Agarwal et al. [21] studied the program startup time and the quality of service in terms of the number of consecutively lost blocks. Silverston and Fourmaux [22] studied four IPTV applications and gave a global view of the impact of P2P media streaming on network traffic. Following that research, they presented a detailed study of IPTV traffic, providing useful insights into transport-level and packet-level properties as well as the behaviors of the peers in the network [23]. With abundant traces from a successful commercial P2P IPTV application, Wu et al. [24] characterized interpeer bandwidth availability in large-scale P2P streaming networks. The passive approach is potentially transparent and scalable and allows the comparison of traffic from multiple domains side by side. However, it depends on access to core network infrastructure, which is not always available. Thus, it is often used for flow control in firewall or gateway devices.

In the active approach, a special crawler, acting like an ordinary client, is adopted to inject test packets into the P2P IPTV network or send packets to servers and peers. The crawler then follows packets and measures characteristics of the IPTV network. Hei et al. [25] carried out the first active tracing of a commercial P2P IPTV application, namely, PPLive. They further developed a dedicated PPLive crawler to study the global characteristics of the PPLive system [26]. Wu et al. [27] presented Magellan to characterize topologies of the UUSee peer-to-peer streaming network. Vu et al.
[28] mapped the PPLive network to study the impacts of media streaming on P2P overlays.

Most existing research work surveyed P2P IPTV network-centric metrics (such as traffic characterization, TCP or UDP connections, and video traffic) or user-centric metrics (such as user arrival and departure, geographic distribution, and channel population). Our studies were primarily focused on the program-list distribution of P2P IPTV applications, because program resource distribution is very important to P2P IPTV applications. Our work surveyed P2P IPTV content-centric metrics, which are useful for the prediction and monitoring of programs. In this paper, a distributed multiprotocol program crawler was proposed to collect various kinds of program information. Moreover, we also analyzed the characteristics of program resources and put forward an analytical model of hot programs.

## 4. Methodology of Measurement and Characteristic Analysis

In this section, we present the basic principle of program-list distribution in P2P IPTV applications and illustrate a feasible and efficient architecture for crawling program-lists.

### 4.1. Principle of Program Resource Distribution

When the program-list is downloaded and extracted by an IPTV client, users can select a program to watch. So program-list distribution is very important in P2P IPTV applications. The program-list includes program names, categories, play-links, and descriptions. The play-link is the most important identification for signaling communication among peers viewing the same program. A typical example of program metadata is shown in Table 1.

Table 1: Metadata of an IPTV program.
| Field | Value |
|---|---|
| Program name | 110205- Happy camp |
| Category | Variety show |
| Channel name | Happy camp |
| Subchannel name | — |
| IPTV application name | PPStream |
| Play-link | pps://n26aeygqeb6s4kbc2aqa.pps/110205- Happy Camp.wmv |
| Number of views | 425 |
| Date added | 2011/02/05 |

The client-server architecture, as shown in Figure 1, is usually used to distribute the program-list file in IPTV systems. When an IPTV client starts up, it requests the program-list file from program-list servers and immediately updates the local information of all the programs. XML is usually used in program-list files to organize the various metadata of programs. This is different from the website-based program-list distribution of video-sharing sites like Youku and YouTube; the program-list of IPTV is well organized for convenient browsing.

With the rapid increase in the number of programs, the program-list file becomes bigger and bigger. For example, PPTV had about 300 thousand programs in 2011, and the size of its program-list file was more than 20 MB, which is a heavy burden for program-list servers and leads to a bad user experience. Some IPTV applications use compression to decrease the file size, while others use multiple program-list files based on program categories. Furthermore, some IPTV applications encrypt program-list files to prevent hotlinking.

### 4.2. Architecture of DMP-Crawler

In order to obtain program information of IPTV applications, it is necessary to summarize the principle of program-list distribution of most IPTV applications and to decrypt the encryption algorithm and XML metadata of the program-list file. We then propose an efficient distributed multiprotocol crawler (DMP-Crawler) to collect various kinds of program information from popular P2P IPTV applications. {Program name, IPTV application name} is used to uniquely identify a program.
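The program-list extraction step can be illustrated with a small sketch. The XML layout below is hypothetical (real program-list files are proprietary and often encrypted); it only shows how the Table 1 metadata fields and the {program name, IPTV application name} key would be pulled out:

```python
# Minimal sketch of program-list extraction (Section 4.2). The element and
# attribute names here are hypothetical, not an actual IPTV wire format.
import xml.etree.ElementTree as ET

SAMPLE = """
<programs>
  <program name="110205- Happy camp" category="Variety show"
           channel="Happy camp" views="425"
           link="pps://n26aeygqeb6s4kbc2aqa.pps/110205- Happy Camp.wmv"/>
</programs>
"""

def extract_programs(xml_text, app_name):
    """Yield metadata dicts; {name, app} is the dedup key used for programs."""
    for node in ET.fromstring(xml_text).iter("program"):
        yield {
            "name": node.get("name"),
            "app": app_name,               # part of the unique key
            "category": node.get("category"),
            "play_link": node.get("link"),
            "views": int(node.get("views", 0)),
        }

records = list(extract_programs(SAMPLE, "PPStream"))
```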
Figure 2 presents an overview of the architecture of DMP-Crawler, which is composed of one crawler controller and a number of crawler clients.

Figure 2: Architecture of DMP-Crawler.

On the basis of the crawler clients’ and server’s status, the crawler controller assigns tasks to multiple independent crawler clients through a task scheduling algorithm. Each crawler client periodically reports its crawling status as well as its CPU and memory consumption to the crawler controller.

A crawler client mainly includes a crawling engine, a program-list crawling module, a program-list extracting module, a classification module, and a data storage module. According to the crawling task type, the crawler client invokes the crawler engine, requests the program-list file from program-list servers, and reports crawling status to the crawler controller. When a program-list file is downloaded, the crawler client extracts the metadata of programs from the file, classifies these programs, and stores all information into a database for further analysis.

### 4.3. Characteristic Analysis of Program Resource

In order to understand the naming rules of IPTV programs, the characteristics of program naming were analyzed with statistical methods. The characteristic analysis of programs includes distributions of the length of program names, the entropy of the character types, high-frequency symbols in the names, and distributions of the hierarchy depth of program names.

Definition 1. A Program Name (PN) is composed of a series of characters, PN = c1c2⋯cn, where n is the number of characters, or length, of PN and n = len(PN). ci (i = 1, 2, …, n) is a printable character in some coded character set, such as Chinese, English, Latin, or a punctuation symbol.

If len(PN) ≤ 10, the program has a short name. If 10 < len(PN) ≤ 20, the program has a medium name. If 20 < len(PN) ≤ 30, the program has a long name. If len(PN) > 30, the program has a super-long name.

Let a random variable x denote the character type of a program name.
The set of values of x is denoted as

(1) CharsType = {C, E, L, G, N, S, O},

where C, E, L, G, N, S, and O represent Chinese, English, Latin, Greek, Number, Symbol (including punctuation and special characters), and unidentified characters, respectively. Character type is defined by the Unicode Character Database (UCD) [29].

Let Uc denote the set of characters of program names, ci ∈ Uc. With a mapping function f: Uc → CharsType, every character of a program name can be transformed to the corresponding character type as follows:

(2) f(PN) = x1x2⋯xn,

where xi ∈ CharsType, i = 1, 2, …, n.

Let H(CharsType) denote the information entropy of xi, which is used to evaluate the chaos of program naming [30]:

(3) H(CharsType) = −∑_{i=1}^{V} p(xi)·log2(p(xi)),

where p(xi) is the probability of xi and V is the number of character types in program names. Thus, the value of H(CharsType) is between 0 and log2 V. In the calculation of entropy, let 0·log2 0 = 0.

### 4.4. Analytical Model of Hot Programs

Definition 2. Hot programs are the top 100 popular programs that have the most viewers and concern a large number of people.

Definition 3. Hot degree is used to describe the degree or level to which hot programs are followed by people. The influencing factors of hot degree include the number of viewers, the watching time, and the number of comments.

Let HD denote the hot degree of a program. In a P2P IPTV application, it can be expressed as

(4) HD = α·U + β·T + γ·C, α + β + γ = 1,

where U, T, and C represent the number of viewers, the watching time, and the number of comments, respectively. Here the number of viewers refers to the number of online viewers at some point in time, and α, β, and γ are their weights.

For all P2P IPTV applications, (4) can be rewritten as

(5) HD = ∑_{i=1}^{j} HD_i = ∑_{i=1}^{j} (α·Ui + β·Ti + γ·Ci), α + β + γ = 1,

where j is the number of IPTV applications.

## 5. Results and Discussion

### 5.1. Crawling Results of DMP-Crawler

DMP-Crawler consists of one crawler controller and ten crawler clients. It is deployed on three PC servers with Intel E5506 CPUs and 4 GB of memory in Beijing, China, with 10 Mbps Ethernet network access. DMP-Crawler ran two rounds and collected about 900,000 programs every day. According to the collected program names and IPTV application names, repeated programs were removed.

From February 2009 to July 2012, DMP-Crawler collected 13,107,766 distinct programs from 33 IPTV applications in China, of which only 0.3% were live programs. In particular, PPfilm has no live program.

The numbers and ratios of programs of the 33 IPTV applications are shown graphically in Figure 3. From the collected data, we can find that the distribution of programs is highly skewed. The most popular IPTV application is PPfilm, accounting for about 31.1%, and the second is PPStream, accounting for about 19.3%. These two IPTV applications account for about one half of all IPTV programs.

Figure 3: Distribution of various IPTV programs.

According to the requirements of the State Administration of Radio, Film and Television (SARFT), IPTV service providers had to apply for an "Information Network Dissemination Audio-Visual Programs Permit" before August 2009. Some IPTV service providers could not acquire the permit and stopped video service in 2010.
Therefore, these IPTV applications have only hundreds or thousands of programs.

We ranked the IPTV applications by their percentages of programs and plotted the Cumulative Distribution Function (CDF) of these percentages in Figure 4.

Figure 4: CDF of the distribution of programs.

In Figure 4, the 15.2% (5/33) most popular IPTV applications account for 80% of the programs, and the 24.2% (8/33) most popular account for more than 90%. Some IPTV applications, like SopCast and TVUPlayer, have only a small proportion of programs, because they offer no video on demand.

When programs were extracted from the program-list files, they were classified into one of 13 categories defined by SARFT. Percentages of all the categories are shown graphically in Figure 5. From the figure, we can observe that the distribution is highly skewed: the most popular category is News, accounting for about 39%; the second is Drama, about 21%; the third is Animation, about 16%; and the last is Specific show, about 0.02%.

Figure 5: Distribution of programs’ categories.

In Figure 5, we also list the category “Others”: programs that cannot be classified.

### 5.2. Characteristic Analysis of Program Resource

#### 5.2.1. Distribution of Length of Program Name

All programs were sorted by name length, and the percentages of programs were calculated over length intervals of 5. Percentages for all length intervals are shown in Table 2.

Table 2: Distribution of length of program name.
| Length Range | PPStream | PPTV | QQLive | UUSee | PPfilm | All |
| --- | --- | --- | --- | --- | --- | --- |
| (0, 5) | 5.2% | 3.0% | 15.8% | 2.8% | 8.3% | 9.3% |
| (5, 10) | 44.0% | 16.9% | 59.4% | 17.9% | 50.5% | 29.9% |
| (10, 15) | 26.7% | 30.2% | 17.5% | 26.8% | 29.7% | 25.1% |
| (15, 20) | 17.6% | 18.5% | 5.2% | 46.4% | 7.3% | 23.4% |
| (20, 25) | 5.6% | 12.6% | 1.6% | 5.1% | 1.4% | 6.3% |
| (25, 30) | 0.6% | 7.4% | 0.3% | 0.8% | 0.4% | 2.5% |
| (30, +∞) | 0.2% | 11.5% | 0.2% | 0.2% | 2.5% | 3.5% |

In Table 2, about 40% of programs have short names, 48.5% have medium names, 8.8% have long names, and only 3.5% have super-long names. Among the 5 popular IPTV applications, more than 30% of PPTV programs have long or super-long names, while 75.2% of QQLive programs and 58.8% of PPfilm programs have short names.

We also analyzed the quartiles of name length; the results are shown in Table 3. QQLive programs have the smallest Q1 and interquartile range, and PPTV programs have the largest Q3 and interquartile range, in accordance with the results in Table 2.

Table 3: Length quartiles of program names.

| IPTV Application | Q1 | Median | Q3 | Interquartile Range (Q3 − Q1) |
| --- | --- | --- | --- | --- |
| PPStream | 8 | 11 | 15 | 7 |
| PPTV | 11 | 15 | 23 | 12 |
| QQLive | 6 | 7 | 10 | 4 |
| UUSee | 12 | 16 | 18 | 6 |
| PPfilm | 7 | 9 | 13 | 6 |
| All | 8 | 13 | 17 | 9 |

From Tables 2 and 3, we can infer that short and medium names are most often used in IPTV program naming, especially in QQLive programs.

#### 5.2.2. Character Type of Program Name

All characters in program names were counted and mapped to the corresponding CharsType. The character types are C, E, L, G, N, S, and O, so the number of character types is 7. The probabilities of C, E, L, G, N, S, and O are 0.585135, 0.051970, 0.00950, 0.000029, 0.199762, 0.153493, and 0.000109, respectively. The chaos of IPTV program naming is 1.122197 according to (3). When collecting IPTV programs, we also deployed a BitTorrent crawler and an eDonkey crawler to collect program resources in the BitTorrent and eDonkey networks.
The two crawlers collected 2,329,237 BitTorrent programs and 619,810 eDonkey programs. In Table 4, the information entropy of the characters of IPTV programs is less than that of BitTorrent and eDonkey programs. The chaos of IPTV program naming is small, indicating that IPTV program naming is relatively regular. This may be interpreted as follows: popular P2P IPTV applications are operated commercially, while BitTorrent and eDonkey are public platforms whose programs are uploaded by amateurs.

Table 4: Information entropy and probability of character type.

| Character Type | IPTV | BitTorrent | eDonkey |
| --- | --- | --- | --- |
| Chinese (C) | 0.585135 | 0.260022 | 0.220748 |
| English (E) | 0.051970 | 0.345382 | 0.472992 |
| Latin (L) | 0.00950 | 0.002303 | 0.002324 |
| Greek (G) | 0.000029 | 0.00001 | 0.000013 |
| Number (N) | 0.199762 | 0.12577 | 0.057365 |
| Symbol (S) | 0.153493 | 0.264594 | 0.242756 |
| Unidentified character (O) | 0.000109 | 0.00194 | 0.003801 |
| Information entropy (H) | 1.122197 | 1.356355 | 1.231227 |

#### 5.2.3. Hierarchy Depth of Program

Definition 4. The hierarchy depth of a program is the number of times the program is classified, by category, channel, subchannel, and so forth. For example, the hierarchy depth of the program in Table 1 is 2.

The original motivation for collecting hierarchy-depth statistics was to find the relationship between name length and hierarchy depth. Statistical results for the hierarchy depth of all programs and of the popular IPTV applications are presented in Table 5.

Table 5: Distribution of hierarchy depth of programs.

| Hierarchy Depth | PPStream | PPTV | QQLive | PPfilm | UUSee | All |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 19.12% | 3.20% | 15.95% | 8.34% | 3.25% | 14.72% |
| 2 | 80.87% | 44.69% | 9.34% | 43.35% | 85.64% | 50.76% |
| 3 | 0.014% | 52.11% | 17.50% | 48.31% | 11.11% | 27.42% |
| 4 | 0.00% | 0.004% | 57.22% | 0.003% | 0.00% | 7.09% |
| 5 | 0.00% | 0.00% | 0.0003% | 0.00% | 0.00% | 0.002% |

In Table 5, more than 50% of programs have a hierarchy depth of 2, and 27.42% have a depth of 3. A 2-level hierarchy makes it easy to display programs in an IPTV client; a 3-level hierarchy is often used to display movies and drama programs.
The hierarchy depth distributions of PPTV and PPfilm are similar: their 2-level and 3-level programs together account for more than 90%. The distribution of QQLive, however, is quite different from that of the other applications: its 4-level programs account for 57.22%. Thus, its programs tend to use short names.

### 5.3. Analysis of Hot Programs

In the measurement of P2P IPTV, it is difficult to measure the watching time of programs, which is managed by the IPTV operators, and it was not possible for us to collaborate with the major IPTV operators. In addition, only a few popular IPTV applications provide program comment functions in the IPTV client or website. Moreover, the number of online viewers is much larger than the number of comments. Here we let α = 1 and β = γ = 0, so (5) simplifies to

(6) HD = ∑ HDᵢ = ∑ Uᵢ, i = 1, …, j.

From (6), we can see that hot programs appear in popular IPTV applications. Thus, we consider only the top 5 IPTV applications; then

(7) HD = U_PPStream + U_PPfilm + U_UUSee + U_PPTV + U_QQLive.

For a given program, PPStream and PPfilm provide the number of its viewers, UUSee presents the ratio of its viewers, while PPTV and QQLive offer only a 6-level popularity. Thus, we must normalize the number of viewers according to the number of online viewers of the various IPTV applications. In June 2010, the maximum numbers of viewers of PPStream, UUSee, PPTV, and QQLive were about 20.0, 2.0, 11.0, and 6.6 million, respectively. The normalizing rules are as follows.

(1) U_PPStream and U_PPfilm can be obtained directly from their program-list files.

(2) UUSee presents the ratio of viewers. U_UUSee is calculated from this ratio and the number of online viewers; we estimate the online viewers of UUSee as 2 million.

(3) PPTV and QQLive offer a 6-level popularity for programs. Each popularity level represents an interval of the number of viewers: levels 0, 1, 2, 3, 4, and 5 represent [0, 199], [200, 399], [400, 599], [600, 799], [800, 999], and [1000, +∞), respectively.
If the popularity level is between 0 and 4, the median of the corresponding interval is used as U. If the popularity level is 5 and PPStream does not carry the same program, U_PPTV = 3000 and U_QQLive = 1800. If the popularity level is 5 and PPStream carries the same program, U_PPTV and U_QQLive are calculated by

(8) U_PPTV = 0.55 × U_PPStream, U_QQLive = 0.33 × U_PPStream.

Table 6 lists the hottest programs in one week, obtained through the analytical model of hot programs. From Table 6, we can infer that the most popular IPTV drama that week was *Let’s see the Meteor Shower together*. Moreover, the model can be used to predict the popularity of hot programs over some period.

Table 6: Hottest programs in one week.

| Date | Program name | Category | HD |
| --- | --- | --- | --- |
| 2010-12-23 | Let’s see the Meteor Shower together—23 | Drama | 116099 |
| 2010-12-24 | Salt (Angelina Jolie) | Film | 66071 |
| 2010-12-25 | 100824-  Kangxi | Variety Show | 94976 |
| 2010-12-26 | Can’t Buy Me Love—02 | Drama | 94578 |
| 2010-12-27 | Let’s see the Meteor Shower together—24 | Drama | 146474 |
| 2010-12-28 | Triple Tap | Film | 111477 |
| 2010-12-29 | Triple Tap | Film | 111677 |
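The normalization rules above can be sketched in Python. This is an illustrative assumption, not published code: the function and constant names are ours, and the interval midpoints stand in for the "median of the corresponding interval":

```python
# Midpoints of the viewer-count intervals for popularity levels 0-4.
POP_LEVEL_MEDIANS = {0: 99.5, 1: 299.5, 2: 499.5, 3: 699.5, 4: 899.5}

UUSEE_ONLINE_VIEWERS = 2_000_000   # estimated online viewers of UUSee

def normalize_viewers(app, value, ppstream_viewers=None):
    """Return the normalized viewer count U for one program.

    app   -- 'PPStream', 'PPfilm', 'UUSee', 'PPTV', or 'QQLive'
    value -- raw viewer count (PPStream/PPfilm), viewer ratio (UUSee),
             or popularity level 0-5 (PPTV/QQLive)
    ppstream_viewers -- PPStream's count for the same program, if any
    """
    if app in ('PPStream', 'PPfilm'):
        return value                           # rule (1): counts given directly
    if app == 'UUSee':
        return value * UUSEE_ONLINE_VIEWERS    # rule (2): ratio x online viewers
    if app in ('PPTV', 'QQLive'):              # rule (3): 6-level popularity
        if value <= 4:
            return POP_LEVEL_MEDIANS[value]
        if ppstream_viewers is None:           # level 5, no PPStream counterpart
            return 3000 if app == 'PPTV' else 1800
        factor = 0.55 if app == 'PPTV' else 0.33   # equation (8)
        return factor * ppstream_viewers
    raise ValueError(f'unknown application: {app}')
```

Summing `normalize_viewers` over the five applications then gives the hot degree HD of equation (7).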
## 6. Conclusions

In this paper, we have studied program information collection in P2P IPTV applications. We proposed a distributed multiprotocol crawler to harvest program information from P2P IPTV applications. To the best of our knowledge, this is the first time a detailed crawler for IPTV programs has been presented.
Characteristic analysis of the programs was carried out. The results reveal the disorderly naming conventions of P2P IPTV programs and can help to purify and extract useful information from chaotic names for better retrieval. We also put forward an analytical model of hot programs to represent program popularity and to predict user behavior and the popularity of hot programs within a period. The distribution of IPTV programs is independent and scattered, resulting in chaotic program names, which obstructs searching for and organizing programs. In future work, we will focus on data mining of programs and the establishment of an IPTV repository.

---

*Source: 101702-2014-03-19.xml*
2014
# Mechanisms Mediating the Effects of γ-Tocotrienol When Used in Combination with PPARγ Agonists or Antagonists on MCF-7 and MDA-MB-231 Breast Cancer Cells

**Authors:** Abhita Malaviya; Paul W. Sylvester

**Journal:** International Journal of Breast Cancer (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101705

---

## Abstract

γ-Tocotrienol is a natural form of vitamin E that displays potent anticancer activity, and previous studies suggest that these effects involve alterations in PPARγ activity. Treatment with 0.5–6 μM γ-tocotrienol, 0.4–50 μM PPARγ agonists (rosiglitazone or troglitazone), or 0.4–25 μM PPARγ antagonists (GW9662 or T0070907) alone resulted in a dose-responsive inhibition of MCF-7 and MDA-MB-231 breast cancer cell proliferation. However, combined treatment of 1–4 μM γ-tocotrienol with PPARγ agonists reversed the growth-inhibitory effects of γ-tocotrienol, whereas combined treatment of 1–4 μM γ-tocotrienol with PPARγ antagonists synergistically inhibited MCF-7 and MDA-MB-231 cell growth. Combined treatment with γ-tocotrienol and PPARγ agonists caused an increase in the transcriptional activity of PPARγ, along with increased expression of PPARγ and RXR and decreased expression of the PPARγ coactivators CBP p/300, CBP C-20, and SRC-1, in both breast cancer cell lines. In contrast, combined treatment of γ-tocotrienol with PPARγ antagonists resulted in a decrease in the transcriptional activity of PPARγ, along with decreased expression of PPARγ and RXR, increased expression of the PPARγ coactivators, and a corresponding decrease in PI3K/Akt mitogenic signaling in these cells. These findings suggest that elevations in PPARγ are correlated with increased breast cancer growth and survival, and that treatments that decrease PPARγ expression may provide benefit in the treatment of breast cancer.

---

## Body

## 1. Introduction

Peroxisome proliferator-activated receptor γ (PPARγ) belongs to the nuclear receptor superfamily and functions as a ligand-activated transcription factor that forms a heterodimer complex with the retinoid X receptor (RXR). This complex binds to a specific DNA sequence called the peroxisome proliferator response element and initiates the recruitment of coactivator proteins such as CBP p/300, SRC-1, and CBP C-20, which further modulate gene transcription [1–3]. Studies have shown that PPARγ is overexpressed in many types of breast cancer cells [4–7]. Experimental evidence in rodents has shown that overexpression of PPARγ is associated with an increased incidence and growth of mammary tumors, whereas knockdown of PPARγ expression was found to significantly inhibit spontaneous mammary tumor development [8, 9]. Taken together, these results suggest that inhibition of PPARγ expression and/or activity may be beneficial in the treatment of breast cancer. However, other studies have shown that treatment with the PPARγ agonists rosiglitazone and troglitazone, or conversely with the PPARγ antagonists GW9662 and T0070907, significantly inhibits the growth of a wide variety of cancer cell lines [10, 11]. An explanation for these conflicting findings is not clearly evident, especially since some of the anticancer effects of these agents may be mediated through PPARγ-independent mechanisms. Interpretation of these findings is further complicated by the fact that PPARγ transcriptional activity can be modulated by phosphorylation by Akt and other kinases, which can occur through crosstalk with other mitogenic signaling pathways [12].

γ-Tocotrienol is a member of the vitamin E family of compounds that displays potent anticancer activity [13, 14]. The mechanism(s) involved in mediating the anticancer activity of γ-tocotrienol appear to involve the suppression of growth-factor-dependent mitogenic signaling, particularly the PI3K/Akt signaling pathway [15–18].
PI3K is a lipid signaling kinase that activates PDK-1, which subsequently phosphorylates and activates Akt. Activated Akt phosphorylates various proteins associated with cell proliferation and survival [19]. PDK-1 and Akt activity are terminated by phosphatases such as PTEN [20].

Recent studies have shown that tocotrienols activate specific PPARs in reporter-based assays [21], whereas other studies have shown that γ-tocotrienol increases intracellular levels of 15-lipoxygenase-2, the enzyme responsible for the conversion of arachidonic acid to the PPARγ-activating ligand 15-S-hydroxyeicosatrienoic acid, in prostate cancer cells [22]. Therefore, it was hypothesized that the anticancer effects of γ-tocotrienol may be mediated, at least in part, through a PPARγ-dependent mechanism. Studies were conducted to characterize the effects of γ-tocotrienol treatment, alone and in combination with specific PPARγ agonists and antagonists, on the growth and survival of MCF-7 and MDA-MB-231 human breast cancer cells. Additional studies evaluated treatment effects on the expression of PPARγ and PPARγ coactivators, and on PI3K/Akt mitogenic signaling, in these breast cancer cell lines. Results from these studies further characterize the anticancer mechanism of action of γ-tocotrienol, as well as of PPARγ agonists and antagonists, and provide insight into the potential benefits of these therapies in the treatment of breast cancer.

## 2. Materials and Methods

### 2.1. Reagents and Antibodies

All reagents were purchased from Sigma Chemical Company (St. Louis, MO) unless otherwise stated. Purified γ-tocotrienol (>98% purity) was generously provided as a gift by First Tech International Ltd (Hong Kong). The PPARγ agonists, rosiglitazone and troglitazone, and the PPARγ antagonists, GW9662 and T0070907, were purchased from Cayman Chemicals (San Diego, CA). Fetal bovine serum was purchased from American Type Culture Collection (Manassas, VA).
Antibodies for β-actin, PPARγ, Akt, phospho-Akt, PTEN, phospho-PTEN, PDK-1, PI3K, cleaved caspase-3, and cleaved PARP were purchased from Cell Signaling Technology (Beverly, MA). Antibodies for RXR, CBP C-20, SRC-1, and CBP p/300 were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). Goat anti-rabbit and anti-mouse secondary antibodies were purchased from PerkinElmer Biosciences (Boston, MA).

### 2.2. Cell Lines and Culture Conditions

The estrogen-receptor-negative MDA-MB-231 and estrogen-receptor-positive MCF-7 breast carcinoma cell lines were purchased from American Type Culture Collection (Manassas, VA). MDA-MB-231 and MCF-7 breast cancer cells were cultured in Dulbecco's Modified Eagle Medium (DMEM)/F12 supplemented with 10% fetal bovine serum, 10 μg/mL insulin, 100 U/mL penicillin, and 0.1 mg/mL streptomycin at 37°C in an environment of 95% air and 5% CO2 in a humidified incubator. For subculturing, cells were rinsed twice with sterile Ca2+- and Mg2+-free phosphate-buffered saline (PBS) and incubated in 0.05% trypsin containing 0.025% EDTA in PBS for 5 min at 37°C. The released cells were centrifuged, resuspended in serum-containing media, and counted using a hemocytometer.

### 2.3. Experimental Treatments

The highly lipophilic γ-tocotrienol was suspended in a solution of sterile 10% BSA as described previously [13, 14]. Briefly, an appropriate amount of γ-tocotrienol was first dissolved in 100 μL of 100% ethanol, then added to a small volume of sterile 10% BSA in water and incubated overnight at 37°C with continuous shaking. This stock solution was then used to prepare various concentrations of treatment media. Stock solutions of rosiglitazone, troglitazone, GW9662, and T0070907 were prepared in DMSO. Ethanol and/or DMSO was added to all treatment media such that the final concentration was the same in all treatment groups within any given experiment and was always less than 0.1%.

### 2.4. Growth Studies

MCF-7 and MDA-MB-231 cells were plated at densities of 5 × 10⁴ cells/well (6 replicates/group) in 24-well culture plates and 1 × 10⁴ cells/well in 96-well culture plates, respectively, and allowed to adhere overnight. The next day, cells were divided into different treatment groups; the culture media was removed, and cells were washed with sterile PBS, fed fresh media containing their respective treatments, and returned to the incubator. Cells were treated with media containing 0–50 μM rosiglitazone, troglitazone, GW9662, or T0070907, or 0–8 μM γ-tocotrienol, alone or in combination, for a 4-day culture period. Cells in each treatment group were fed fresh media every other day throughout the experimental period. For apoptosis experiments, MCF-7 and MDA-MB-231 cells were plated as described above. Cells were allowed to grow in control media for 3 days, after which they were exposed to the various treatments for a 24 h period. Treatment with 20 μM γ-tocotrienol has previously been shown to induce apoptosis in breast cancer cells [13, 14] and was used as a positive control in this study.

### 2.5. Measurement of Viable Cell Number

MCF-7 and MDA-MB-231 viable cell number was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) colorimetric assay as described previously [13, 14]. At the end of the treatment period, treatment media was removed and all cells were exposed for 3 h (96-well plates) or 4 h (24-well plates) to fresh control media containing 0.41 mg/mL MTT at 37°C. Afterwards, media was removed and the MTT crystals were dissolved in 1 mL of isopropanol for 24-well plate assays or 100 μL of DMSO for 96-well plate assays. The optical density of each sample was measured at 570 nm on a microplate reader (Spectracount; Packard Bioscience Company, Meriden, CT) zeroed against a blank prepared from cell-free medium.
The number of cells per well was calculated against a standard curve prepared by plating known cell densities, as determined by hemocytometer, in triplicate at the start of each experiment. ### 2.6. Western Blot Analysis MCF-7 and MDA-MB-231 cells were plated at a density of1×106 cells/100 mm culture dish and exposed to control or treatment media for a 4-day culture period. Afterwards, cells were washed with PBS, isolated with trypsin, and whole cell lysates were prepared in Laemmli buffer [23] as described previously [24]. The protein concentration in each sample was determined using Bio-Rad protein assay kit (Bio-Rad, Hercules, CA). Equal amounts of protein from each sample in a given experiment was loaded onto SDS-polyacrylamide minigels and electrophoresed through 5%–15% resolving gel. Proteins separated on each gel were transblotted at 30 V for 12–16 h at 4°C onto a polyvinylidene fluoride (PVDF) membrane (PerkinElmer Lifesciences, Wellesley, MA) in a Trans-Blot Cell (Bio-Rad, Hercules, CA) according to the method of Towbin et al. [25]. The membranes were then blocked with 2% BSA in 10 mM Tris HCl containing 50 mM NaCl and 0.1% Tween 20 pH 7.4 (TBST) and then incubated with specific primary antibodies against PPARγ, Akt, phospho-Akt, PTEN, phospho-PTEN, PDK-1, PI3K, RXR, CBP C-20, SRC-1, CBP p/300, cleaved capase-3, cleaved PARP or β-actin, diluted 1 : 500 to 1 : 5000 in TBST/2% BSA for 2 h. Membranes are washed 5 times with TBST followed by incubation with the respective horseradish peroxide-conjugated secondary antibodies diluted 1 : 3000 to 1 : 5000 in TBST/2% BSA for 1 h followed by rinsing with TBST. Protein bands bound to the antibody were visualized by chemiluminescence (Pierce, Rockford, IL) according to the manufacturer’s instructions and images were obtained using a Kodak Gel Logic 1500 Imaging System (Carestream Health Inc, Rochester, NY). The visualization of β-actin was performed to confirm equal sample loading in each lane. 
Images of protein bands on the film were acquired and scanning densitometric analysis was performed with Kodak molecular imaging software version 4.5 (Carestream Health Inc, Rochester, NY). All experiments were repeated at least three times and a representative western blot image from each experiment is shown in the figures. ### 2.7. Transient Transfection and Luciferase Reporter Assay MCF-7 and MDA-MB-231 cells were plated at a density of2×104 per well in 96-well plates and allowed to adhere overnight. After this cells were transfected with 32 ng of PPRE X3-TK-luc (Addgene plasmid no. 1015) [26] and 3.2 ng of renilla luciferase plasmid per well (Promega, Madison, WI) using 0.8 μL of lipofectamine 2000 transfection reagent for each well (Invitrogen, Grand Island, NY). After 6 h transfection, the media was removed; the cells were washed once and exposed to 100 μL of control or treatment media for a 4-day culture period. Afterwards, cells were lysed with 75 μL of passive lysis buffer and treated according to manufacturer’s instructions using dual-glo luciferase assay system (Promega, Madison, WI). Luciferase activity of each sample was normalized by the level of renilla activity. Data is represented as mean fold changes in treated cells as compared to control cells. ### 2.8. Statistical Analysis The level of interaction between PPARγ ligands and γ-tocotrienol was evaluated by isobologram method [27]. A straight line was formed by plotting IC50 doses of γ-tocotrienol and individual PPARγ ligands on the x-axes and y-axes, respectively as determined by non-linear regression curve fit analysis using GraphPad Prism 4 (GraphPad Software inc. La Jolla, CA). The data point in the isobologram corresponds to the actual IC50 dose of combined γ-tocotrienol and PPARγ ligands treatment. If a data point is on or near the line, this represents an additive treatment effect, whereas a data point that lies below or above the line indicates synergism or antagonism, respectively. 
### 2.1. Reagents and Antibodies All reagents were purchased from Sigma Chemical Company (St. Louis, MO) unless otherwise stated. Purified γ-tocotrienol (>98% purity) was generously provided as a gift by First Tech International Ltd (Hong Kong). The PPARγ agonists, rosiglitazone and troglitazone, and the PPARγ antagonists, GW9662 and T0070907, were purchased from Cayman Chemicals (San Diego, CA). Fetal bovine serum was purchased from American Type Culture Collection (Manassas, VA). Antibodies for β-actin, PPARγ, Akt, phospho-Akt, PTEN, phospho-PTEN, PDK-1, PI3K, cleaved caspase-3, and cleaved PARP were purchased from Cell Signaling Technology (Beverly, MA). Antibodies for RXR, CBP C-20, SRC-1, and CBP p/300 were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). Goat anti-rabbit and anti-mouse secondary antibodies were purchased from PerkinElmer Biosciences (Boston, MA). ### 2.2. Cell Lines and Culture Conditions The estrogen-receptor-negative MDA-MB-231 and the estrogen-receptor-positive MCF-7 breast carcinoma cell lines were purchased from American Type Culture Collection (Manassas, VA). MDA-MB-231 and MCF-7 breast cancer cells were cultured in Dulbecco’s Modified Eagle Medium (DMEM)/F12 supplemented with 10% fetal bovine serum, 10 μg/mL insulin, 100 U/mL penicillin, and 0.1 mg/mL streptomycin at 37°C in an environment of 95% air and 5% CO2 in a humidified incubator. For subculturing, cells were rinsed twice with sterile Ca2+- and Mg2+-free phosphate-buffered saline (PBS) and incubated in 0.05% trypsin containing 0.025% EDTA in PBS for 5 min at 37°C. The released cells were centrifuged, resuspended in serum-containing media, and counted using a hemocytometer. ### 2.3.
Experimental Treatments The highly lipophilic γ-tocotrienol was suspended in a solution of sterile 10% BSA as described previously [13, 14]. Briefly, an appropriate amount of γ-tocotrienol was first dissolved in 100 μL of 100% ethanol, then added to a small volume of sterile 10% BSA in water, and incubated overnight at 37°C with continuous shaking. This stock solution was then used to prepare various concentrations of treatment media. Stock solutions of rosiglitazone, troglitazone, GW9662, and T0070907 were prepared in DMSO. Ethanol and/or DMSO was added to all treatment media such that the final concentration was the same in all treatment groups within any given experiment and was always less than 0.1%. ### 2.4. Growth Studies MCF-7 and MDA-MB-231 cells were plated at a density of 5×10^4 cells/well (6 replicates/group) in 24-well culture plates and 1×10^4 cells/well in 96-well culture plates, respectively, and allowed to adhere overnight. The next day, cells were divided into different treatment groups; culture media was removed, and cells were washed with sterile PBS, fed fresh media containing their respective treatments, and returned to the incubator. Cells were treated with media containing 0–50 μM rosiglitazone, troglitazone, GW9662, or T0070907, or 0–8 μM γ-tocotrienol, alone or in combination, for a 4-day culture period. Cells in each treatment group were fed fresh media every other day throughout the experimental period. For apoptosis experiments, MCF-7 and MDA-MB-231 cells were plated as described above. Cells were allowed to grow in control media for 3 days, after which they were exposed to the various treatments for a 24 h period. Treatment with 20 μM γ-tocotrienol has previously been shown to induce apoptosis in breast cancer cells [13, 14] and was used as a positive control in this study. ### 2.5.
Measurement of Viable Cell Number MCF-7 and MDA-MB-231 viable cell number was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) colorimetric assay as described previously [13, 14]. At the end of the treatment period, treatment media was removed and all cells were exposed for 3 h (96-well plates) or 4 h (24-well plates) to fresh control media containing 0.41 mg/mL MTT at 37°C. Afterwards, media was removed and MTT crystals were dissolved in 1 mL of isopropanol for 24-well culture plate assays or 100 μL of DMSO for 96-well culture plate assays. The optical density of each sample was measured at 570 nm with a microplate reader (Spectracount; Packard Bioscience Company, Meriden, CT) zeroed against a blank prepared from cell-free medium. The number of cells per well was calculated against a standard curve prepared by plating known cell densities, as determined by hemocytometer, in triplicate at the start of each experiment. ### 2.6. Western Blot Analysis MCF-7 and MDA-MB-231 cells were plated at a density of 1×10^6 cells/100 mm culture dish and exposed to control or treatment media for a 4-day culture period. Afterwards, cells were washed with PBS, isolated with trypsin, and whole cell lysates were prepared in Laemmli buffer [23] as described previously [24]. The protein concentration in each sample was determined using a Bio-Rad protein assay kit (Bio-Rad, Hercules, CA). Equal amounts of protein from each sample in a given experiment were loaded onto SDS-polyacrylamide minigels and electrophoresed through a 5%–15% resolving gel. Proteins separated on each gel were transblotted at 30 V for 12–16 h at 4°C onto a polyvinylidene fluoride (PVDF) membrane (PerkinElmer Lifesciences, Wellesley, MA) in a Trans-Blot Cell (Bio-Rad, Hercules, CA) according to the method of Towbin et al. [25].
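The cell-count calibration described in Section 2.5, converting a blank-corrected A570 reading to viable cells per well via a linear standard curve, can be sketched as follows. This is a minimal illustration only: the plating densities, absorbance values, and the assumption of a linear OD-to-cell-number relationship are hypothetical, not the study's measurements.

```python
# Sketch: MTT standard curve (OD = slope * cells + intercept), fit by
# ordinary least squares and inverted to estimate viable cells per well.

def fit_standard_curve(cell_counts, optical_densities):
    """Least-squares fit of OD = slope * cells + intercept."""
    n = len(cell_counts)
    mean_x = sum(cell_counts) / n
    mean_y = sum(optical_densities) / n
    sxx = sum((x - mean_x) ** 2 for x in cell_counts)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(cell_counts, optical_densities))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def cells_from_od(od, slope, intercept):
    """Invert the standard curve to estimate viable cells per well."""
    return (od - intercept) / slope

# Hypothetical calibration points: known plated densities (hemocytometer)
# versus blank-corrected A570 means.
counts = [1e4, 2e4, 4e4, 8e4]
ods = [0.12, 0.22, 0.42, 0.82]
slope, intercept = fit_standard_curve(counts, ods)
print(round(cells_from_od(0.52, slope, intercept)))  # estimated cells/well
```

A treated well's OD is read off the inverted curve; in practice the curve is refit at the start of each experiment, as the text specifies.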
The membranes were then blocked with 2% BSA in 10 mM Tris-HCl containing 50 mM NaCl and 0.1% Tween 20, pH 7.4 (TBST), and then incubated with specific primary antibodies against PPARγ, Akt, phospho-Akt, PTEN, phospho-PTEN, PDK-1, PI3K, RXR, CBP C-20, SRC-1, CBP p/300, cleaved caspase-3, cleaved PARP, or β-actin, diluted 1:500 to 1:5000 in TBST/2% BSA, for 2 h. Membranes were washed 5 times with TBST, followed by incubation with the respective horseradish peroxidase-conjugated secondary antibodies diluted 1:3000 to 1:5000 in TBST/2% BSA for 1 h, followed by rinsing with TBST. Protein bands bound to the antibody were visualized by chemiluminescence (Pierce, Rockford, IL) according to the manufacturer’s instructions, and images were obtained using a Kodak Gel Logic 1500 Imaging System (Carestream Health Inc., Rochester, NY). The visualization of β-actin was performed to confirm equal sample loading in each lane. Images of protein bands on the film were acquired and scanning densitometric analysis was performed with Kodak molecular imaging software version 4.5 (Carestream Health Inc., Rochester, NY). All experiments were repeated at least three times and a representative Western blot image from each experiment is shown in the figures. ### 2.7. Transient Transfection and Luciferase Reporter Assay MCF-7 and MDA-MB-231 cells were plated at a density of 2×10^4 cells per well in 96-well plates and allowed to adhere overnight. After this, cells were transfected with 32 ng of PPRE X3-TK-luc (Addgene plasmid no. 1015) [26] and 3.2 ng of renilla luciferase plasmid per well (Promega, Madison, WI) using 0.8 μL of Lipofectamine 2000 transfection reagent for each well (Invitrogen, Grand Island, NY). After a 6 h transfection, the media was removed; the cells were washed once and exposed to 100 μL of control or treatment media for a 4-day culture period.
Afterwards, cells were lysed with 75 μL of passive lysis buffer and processed according to the manufacturer’s instructions using the Dual-Glo luciferase assay system (Promega, Madison, WI). Luciferase activity of each sample was normalized by the level of renilla activity. Data are represented as mean fold changes in treated cells as compared to control cells. ### 2.8. Statistical Analysis The level of interaction between PPARγ ligands and γ-tocotrienol was evaluated by the isobologram method [27]. A straight line was formed by plotting the IC50 doses of γ-tocotrienol and the individual PPARγ ligands on the x-axes and y-axes, respectively, as determined by non-linear regression curve fit analysis using GraphPad Prism 4 (GraphPad Software Inc., La Jolla, CA). The data point in the isobologram corresponds to the actual IC50 dose of combined γ-tocotrienol and PPARγ ligand treatment. If a data point is on or near the line, this represents an additive treatment effect, whereas a data point that lies below or above the line indicates synergism or antagonism, respectively. Differences among the various treatment groups in growth studies and Western blot studies were determined by analysis of variance followed by Dunnett’s multiple range test. Differences were considered statistically significant at a value of P<0.05. ## 3. Results ### 3.1. Antiproliferative Effects of γ-Tocotrienol, PPARγ Agonists (Rosiglitazone and Troglitazone), and PPARγ Antagonists (GW9662 and T0070907) Treatment with 3–6 μM γ-tocotrienol, 1.6–12 μM rosiglitazone, 6.4–25 μM troglitazone, 1.6–6.4 μM GW9662, or 6.4–25 μM T0070907 was found to significantly inhibit growth of MCF-7 cells in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(a)).
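The isobologram test described in Section 2.8 can be restated numerically: a combination point (a, b) lies on the additivity line joining the single-agent IC50 values exactly when a/IC50_A + b/IC50_B = 1, so an index below 1 falls under the line (synergism) and above 1 falls over it (antagonism). The sketch below illustrates this rule; all IC50 values and combination doses are hypothetical, not the study's fitted values.

```python
# Sketch: classifying a combined-treatment IC50 point against the
# additivity line of an isobologram.

def interaction_index(dose_a, dose_b, ic50_a, ic50_b):
    """Sum of dose fractions: <1 synergy, ~1 additive, >1 antagonism."""
    return dose_a / ic50_a + dose_b / ic50_b

def classify(index, tol=0.1):
    """Label the interaction, treating indices within tol of 1 as additive."""
    if index < 1 - tol:
        return "synergistic"
    if index > 1 + tol:
        return "antagonistic"
    return "additive"

# Hypothetical single-agent IC50s: 4 uM gamma-tocotrienol, 8 uM PPARgamma ligand.
print(classify(interaction_index(1.0, 2.0, 4.0, 8.0)))  # point below the line
print(classify(interaction_index(3.0, 7.0, 4.0, 8.0)))  # point above the line
```

The tolerance band around 1 stands in for the "on or near the line" wording in the text; a formal analysis would use confidence intervals from the nonlinear regression fits instead.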
Similarly, treatment with 4–8 μM γ-tocotrienol, 6.4–25 μM rosiglitazone, 3.2–50 μM troglitazone, 3.2–12 μM GW9662, and 12–50 μM T0070907 significantly inhibited MDA-MB-231 cell growth in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(b)). Antiproliferative effects of γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were plated at a density of 5×10^4 (6 wells per group) in 24-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. MDA-MB-231 cells were plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. Vertical bars indicate mean cell count ± SEM in each treatment group. *P<0.05 as compared with vehicle-treated controls. ### 3.2. Antagonistic Effects of the PPARγ Agonists Rosiglitazone and Troglitazone on the Antiproliferative Effects of γ-Tocotrienol Treatment with 1–6 μM γ-tocotrienol alone significantly inhibited growth of MCF-7 (Figure 2(a)) and MDA-MB-231 (Figure 2(b)) breast cancer cells after a 4-day treatment period. However, the growth inhibitory effects of 1–4 μM γ-tocotrienol on MCF-7 cells were reversed when given in combination with 3.2 μM rosiglitazone or troglitazone (Figure 2(a), Top and Bottom). A similar, but less pronounced, reversal in 3–6 μM γ-tocotrienol-induced growth inhibitory effects on MDA-MB-231 breast cancer cells was observed when used in combination with 6.4 μM rosiglitazone or troglitazone (Figure 2(b), Top and Bottom). Effects of γ-tocotrienol and the PPARγ agonists rosiglitazone and troglitazone, alone or in combination, on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
MCF-7 cells were initially plated at a density of 5×10^4 (6 wells per group) in 24-well plates and (b) MDA-MB-231 cells were initially plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates, and both were exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. *P<0.05 as compared with vehicle-treated controls and #P<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. ### 3.3. Enhancement of γ-Tocotrienol-Induced Antiproliferative Effects When Given in Combination with the PPARγ Antagonist GW9662 or T0070907 The growth inhibitory effects of 1–4 μM γ-tocotrienol were significantly enhanced when given in combination with a subeffective dose (3.2 μM) of the PPARγ antagonist GW9662 in MCF-7 breast cancer cells (Figure 3(a), Top). A slight but insignificant enhancement of the growth inhibitory effects of 1–4 μM γ-tocotrienol was observed when combined with a subeffective dose (3.2 μM) of the PPARγ antagonist T0070907 in MCF-7 breast cancer cells (Figure 3(a), Bottom). In MDA-MB-231 cells, 0.5–3 μM γ-tocotrienol was used in combination with 6.4 μM of the PPARγ antagonists GW9662 (Figure 3(b), Top) or T0070907 (Figure 3(b), Bottom), and these combinations were found to significantly enhance the growth inhibitory effects of these agents. Higher dose ranges of γ-tocotrienol in combination with these same doses of PPARγ antagonists resulted in a complete suppression of breast cancer cell growth, such that viable cell number was undetectable using the MTT assay (data not shown). Effects of γ-tocotrienol and the PPARγ antagonists GW9662 and T0070907, alone or in combination, on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
MCF-7 cells were initially plated at a density of 5×10^4 (6 wells per group) in 24-well plates and (b) MDA-MB-231 cells were initially plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates, and both were exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. *P<0.05 as compared with vehicle-treated controls and #P<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. ### 3.4. Isobologram Analysis of Combined Treatment Effects of γ-Tocotrienol with PPARγ Agonists and Antagonists Combined treatment of γ-tocotrienol with the PPARγ agonists rosiglitazone and troglitazone was found to be statistically antagonistic on MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cell growth, as evidenced by the location of the data point in the isobologram being well above the line defining an additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with the PPARγ antagonists GW9662 and T0070907 was found to be statistically synergistic in both MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cells, as evidenced by the location of the data point in the isobologram being well below the line defining an additive effect. Isobologram analysis of combined treatment of γ-tocotrienol and PPARγ ligands on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. Individual IC50 doses for γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) were calculated and then plotted on the x-axes and y-axes, respectively. The data point on the isobologram represents the actual doses of combined γ-tocotrienol and PPARγ ligands.
Combined treatment of the PPARγ agonists rosiglitazone and troglitazone with γ-tocotrienol was found to be antagonistic, as evidenced by the location of the data point in the isobologram being well above the line defining an additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with the PPARγ antagonists GW9662 and T0070907 was found to be synergistic, as evidenced by the location of the data point in the isobologram being well below the line defining an additive effect for both cell lines. ### 3.5. Effects of γ-Tocotrienol and the PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on PPARγ and RXR Levels Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced decreased expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 5(a) and 5(b)). Treatment with 3.2 μM rosiglitazone or troglitazone alone in MCF-7 cells, or 6.4 μM rosiglitazone or troglitazone alone in MDA-MB-231 cells, had little or no effect on PPARγ or RXR levels (Figures 5(a) and 5(b)). However, combined treatment with similar doses of γ-tocotrienol and rosiglitazone or troglitazone resulted in a significant increase in PPARγ and RXR expression in both MCF-7 and MDA-MB-231 breast cancer cell lines (Figures 5(a) and 5(b)). Western blot analysis of γ-tocotrienol and the PPARγ agonists (rosiglitazone and troglitazone) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or 3.2 μM troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination.
All cells were fed fresh treatment media every other day for the 4-day incubation period. Afterwards, whole cell lysates were prepared for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P<0.05 as compared with vehicle-treated controls. ### 3.6. Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PPARγ and RXR Levels Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced decreased expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 6(a) and 6(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 or T0070907 alone had only slight effects on PPARγ and RXR expression (Figures 6(a) and 6(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant reduction in PPARγ and its heterodimer partner RXR in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 6(a) and 6(b)). Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 and T0070907) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing either 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination.
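The densitometric readout used throughout these Western blot figures is a two-step ratio: each band's integrated optical density is divided by the β-actin density of its lane, and treated lanes are then expressed relative to the vehicle-treated control. A minimal sketch, with purely illustrative density values:

```python
# Sketch: beta-actin normalization of band densitometry and fold change
# versus the vehicle-treated control lane. All numbers are hypothetical.

def normalize(band_od, actin_od):
    """Integrated optical density of a band relative to its lane's beta-actin."""
    return band_od / actin_od

def fold_change(treated_norm, control_norm):
    """Normalized treated signal expressed relative to control."""
    return treated_norm / control_norm

control = normalize(1200.0, 1000.0)  # target band vs beta-actin, control lane
treated = normalize(2100.0, 1050.0)  # same proteins, combination-treated lane
print(round(fold_change(treated, control), 2))
```

Because each lane carries its own loading control, unequal protein loading cancels out of the ratio before lanes are compared.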
MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for the 4-day incubation period. Afterwards, whole cell lysates were prepared from each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P<0.05 as compared with vehicle-treated controls. ### 3.7. Effects of γ-Tocotrienol and the PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on PPRE-Mediated Reporter Activity Luciferase assay shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight effects on PPRE-mediated reporter activity as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top and Bottom). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ agonists rosiglitazone and troglitazone, or the PPARγ antagonists GW9662 and T0070907, alone caused a slight but insignificant decrease in PPRE-mediated reporter activity (Figures 7(a) and 7(b), Top and Bottom). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone caused an increase in the transcriptional activity of PPARγ in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top).
In contrast, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant decrease in PPRE-mediated reporter activity in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Bottom). Luciferase assay was performed on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. The cells were initially plated at a density of 2×10^4 cells/well in 96-well plates. Cells were then transfected by adding 32 ng of PPRE X3-TK-luc and 3.2 ng of renilla luciferase plasmid in 0.8 μL of Lipofectamine 2000 transfection reagent. Following a 6-h incubation period, MCF-7 cells were treated with control or treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM rosiglitazone, 0–3.2 μM troglitazone, 0–3.2 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. MDA-MB-231 cells were initially plated in a similar manner and treated with control or treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM rosiglitazone, 0–6.4 μM troglitazone, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for the 4-day incubation period. Afterwards, cells were lysed with 75 μL of passive lysis buffer and processed according to the manufacturer’s instructions using the Dual-Glo luciferase assay system. Results were calculated as raw luciferase units divided by raw renilla units. Vertical bars indicate PPRE-mediated reporter activity ± SEM (arbitrary units) in each treatment group. *P<0.05 as compared with vehicle-treated controls. ### 3.8. Effects of γ-Tocotrienol and the PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on Coactivator Expression Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight, insignificant effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 8(a) and 8(b)).
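The reporter-assay arithmetic described above (raw firefly luciferase units divided by raw renilla units per well, then mean fold change of treated over control wells) can be sketched as follows; the raw unit values are illustrative assumptions, not the study's readings.

```python
# Sketch: dual-luciferase normalization. Firefly signal reports PPRE-driven
# transcription; co-transfected renilla signal corrects for well-to-well
# transfection efficiency. All raw values below are hypothetical.
import statistics

def normalized(firefly, renilla):
    """Per-well firefly/renilla ratios."""
    return [f / r for f, r in zip(firefly, renilla)]

control = normalized([900.0, 1000.0, 1100.0], [100.0, 100.0, 100.0])
treated = normalized([2000.0, 2200.0, 1800.0], [100.0, 100.0, 100.0])

fold = statistics.mean(treated) / statistics.mean(control)
print(round(fold, 1))  # mean fold change of treated over control wells
```

Dividing by renilla first, then averaging, keeps a poorly transfected well from dragging down the treatment mean.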
Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ agonists rosiglitazone and troglitazone alone caused a slight decrease in CBP p/300 and SRC-1, but not CBP C-20, expression (Figures 8(a) and 8(b)). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone caused a significant decrease in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 8(a) and 8(b)). Western blot analysis of γ-tocotrienol and the PPARγ agonists (rosiglitazone and troglitazone) when used alone or in combination on the levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or 3.2 μM troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P<0.05 as compared with vehicle-treated controls. ### 3.9.
Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on Coactivator Expression Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 9(a) and 9(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 and T0070907 alone had only slight effects on CBP p/300, CBP C-20, or SRC-1 expression (Figures 9(a) and 9(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant increase in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 9(a) and 9(b)). Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 and T0070907) when used alone or in combination to determine protein levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture plate and treated with control or treatment media containing either 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis.
Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P<0.05 as compared with vehicle-treated controls. ### 3.10. Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PI3K/Akt Mitogenic Signaling Treatment with 2 μM γ-tocotrienol or 3.2 μM of the PPARγ antagonists GW9662 or T0070907 alone had little or no effect on intracellular levels of Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 in MCF-7 cells after a 4-day treatment period (Figure 10(a)). However, combined treatment with the same doses of these agents caused a significant decrease in levels of phospho-Akt, PDK-1, and PI3K, but had little or no effect on total Akt, PTEN, and phospho-PTEN levels as compared to MCF-7 cells in the vehicle-treated control groups (Figure 10(a)). Similarly, treatment with 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone had little or no effect on intracellular levels of phospho-Akt (activated), PDK-1, PI3K, Akt, PTEN, and phospho-PTEN in MDA-MB-231 breast cancer cells, as compared to vehicle-treated controls (Figure 10(b)). Combined treatment with the same doses of these agents resulted in a significant decrease in phospho-Akt, PDK-1, and PI3K levels as compared to MDA-MB-231 breast cancer cells in the vehicle-treated control group (Figure 10(b)). Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 or T0070907) alone or in combination on Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 levels in (a) MCF-7 and (b) MDA-MB-231 cells.
MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were also plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P<0.05 as compared with vehicle-treated controls. Similar studies were conducted to determine the effects of combined γ-tocotrienol treatment with the PPARγ agonists rosiglitazone and troglitazone on PI3K/Akt mitogenic signaling in MCF-7 and MDA-MB-231 breast cancer cells. However, little or no differences in the relative levels of these mitogenic proteins were observed among the different treatment groups (data not shown), apparently because cells in the various treatment groups were actively proliferating at a near maximal growth rate. ### 3.11.
Apoptotic Effects ofγ-Tocotrienol and PPARγ Antagonist GW9662 and T0070907 Given Alone or in Combination In order to determine if the growth inhibitory effects resulting from combined treatment with subeffective doses ofγ-tocotrienol and PPARγ antagonists might result from a reduction in viable cell number, studies were conducted to determine the acute effects (24-h) and chronic effects (96-h) of these treatment on the initiation of apoptosis and cell viability. Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had no effect on the expression of cleaved PARP, cleaved caspase-3 or viable cell number after a 24-h and 96-h treatment exposure (Figures 11(a) and 11(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists, GW9662 and T0070907, alone, or in combination with their respective treatment dose of γ-tocotrienol was also found to have no effect on the expression of cleaved PARP, cleaved caspase-3 or viable cell number 24-h after treatment exposure (Figures 11(a) and 11(b)). However, treatment with 20 μM γ-tocotrienol, a dose previously shown to induce apoptosis in mammary cancer cells [13, 14] and used as an apoptosis-inducing positive control in this experiments was found to induce a large increase in cleaved PARP and cleaved caspase-3 levels, and corresponding decrease in viable cell number in both MCF-7 and MDA-MB-231 breast cancer cells 24 h following treatment exposure (Figures 11(a) and 11(b)). The positive apoptosis control treatment of 20 μM γ-tocotrienol was not included in the 96 h treatment exposure experiment, because by the end of this experiment there are no viable cells remaining in this treatment group.Apoptotic effects ofγ-tocotrienol and PPARγ antagonists (GW9662 or T0070907) alone or in combination on caspase-3 and cleaved PARP levels on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. 
For Western blot studies, MCF-7 and MDA-MB-231 cells were initially plated at 1×106 cells/100 mm culture dish and maintained on control media for a 3-day culture period. Afterwards, cells were divided into the various treatment groups, media was removed, and cells were exposed to their respective treatment media for a 24-h treatment period. In addition, cells were exposed to their respective treatment media for a 96-h treatment period, where fresh media was added every other day. MCF-7 cells were exposed to treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM GW9662, or 0–3.2 μM T0070907 alone or in combination, whereas MDA-MB-231 cells exposed to treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by western blot analysis. In parallel studies, (a) MCF-7 cells were plated at a density of 5×104 (6 wells per group) in 24-well culture plates, whereas (b) MDA-MB-231 cells were plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to the same treatments as described above. After a 24-h treatment exposure, viable cell number in all treatment groups was determined using MTT assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.1. Antiproliferative Effects ofγ-Tocotrienol, PPARγ Agonists (Rosiglitazone and Troglitazone), and PPARγ Antagonists (GW9662 and T0070907) Treatment with 3–6μM γ-tocotrienol, 1.6–12 μM rosiglitazone, 6.4–25 μM troglitazone, 1.6–6.4 μM GW9662, or 6.4–25 μM T0070907 was found to significantly inhibit growth of MCF-7 cells in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(a)). 
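Dose-response data of this kind (viable cell number as a percentage of the vehicle-treated control across a dose series) are typically reduced to an IC50, the dose at which growth is inhibited by half; individual IC50 values feed the isobologram analysis in Section 3.4. A minimal sketch, assuming simple log-linear interpolation between the two tested doses that bracket 50% viability; all doses and viability values here are hypothetical illustrations, not measurements from this study.

```python
# Sketch: reducing MTT dose-response data to an IC50 by log-linear
# interpolation between the two tested doses that bracket 50% viability.
# All doses (μM) and viability values (% of vehicle control) below are
# hypothetical illustrations, not measurements from this study.
import math

def ic50(doses, viability):
    """Dose at which viability crosses 50% of control, or None if the
    tested dose range never brackets the 50% point."""
    points = list(zip(doses, viability))
    for (d1, v1), (d2, v2) in zip(points, points[1:]):
        if v1 >= 50.0 >= v2:  # crossing lies between d1 and d2
            frac = (v1 - 50.0) / (v1 - v2)
            log_d = math.log10(d1) + frac * (math.log10(d2) - math.log10(d1))
            return 10 ** log_d
    return None

doses = [1.0, 2.0, 4.0, 8.0]          # hypothetical dose series, μM
viability = [90.0, 70.0, 40.0, 15.0]  # hypothetical % of control
print(round(ic50(doses, viability), 2))  # ~3.17 μM for these numbers
```

Interpolating on the log-dose axis matches the roughly sigmoidal shape of dose-response curves better than interpolating on the linear dose axis.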
Similarly, treatment with 4–8 μM γ-tocotrienol, 6.4–25 μM rosiglitazone, 3.2–50 μM troglitazone, 3.2–12 μM GW9662, and 12–50 μM T0070907 significantly inhibited MDA-MB-231 cell growth in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(b)).

Antiproliferative effects of γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were plated at a density of 5×10^4 (6 wells per group) in 24-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. MDA-MB-231 cells were plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. Vertical bars indicate mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 3.2. Antagonistic Effects of PPARγ Agonists Rosiglitazone and Troglitazone on the Antiproliferative Effects of γ-Tocotrienol

Treatment with 1–6 μM γ-tocotrienol alone significantly inhibited growth of MCF-7 (Figure 2(a)) and MDA-MB-231 (Figure 2(b)) breast cancer cells after a 4-day treatment period. However, the growth inhibitory effects of 1–4 μM γ-tocotrienol on MCF-7 cells were reversed when given in combination with 3.2 μM rosiglitazone or troglitazone (Figure 2(a), Top and Bottom). A similar, but less pronounced, reversal in 3–6 μM γ-tocotrienol-induced growth inhibitory effects on MDA-MB-231 breast cancer cells was observed when used in combination with 6.4 μM rosiglitazone or troglitazone (Figure 2(b), Top and Bottom).

Effects of γ-tocotrienol and the PPARγ agonists rosiglitazone and troglitazone, alone or in combination, on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
MCF-7 cells were initially plated at a density of 5×10^4 (6 wells per group) in 24-well plates and (b) MDA-MB-231 cells were initially plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates; cells were exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls and P#<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. (a) (b)

## 3.3. Enhancement of γ-Tocotrienol-Induced Antiproliferative Effects When Given in Combination with PPARγ Antagonist GW9662 or T0070907

The growth inhibitory effects of 1–4 μM γ-tocotrienol were significantly enhanced when given in combination with a subeffective dose (3.2 μM) of the PPARγ antagonist, GW9662, in MCF-7 breast cancer cells (Figure 3(a), Top). A slight, but insignificant, enhancement of the growth inhibitory effects of 1–4 μM γ-tocotrienol was observed when combined with a subeffective dose (3.2 μM) of the PPARγ antagonist, T0070907, in MCF-7 breast cancer cells (Figure 3(a), Bottom). In MDA-MB-231 cells, 0.5–3 μM γ-tocotrienol was used in combination with 6.4 μM of the PPARγ antagonists, GW9662 (Figure 3(b), Top) or T0070907 (Figure 3(b), Bottom), and was found to significantly enhance the growth inhibitory effects of these agents. Higher dose ranges of γ-tocotrienol in combination with these same doses of PPARγ antagonists resulted in a complete suppression of breast cancer cell growth such that viable cell number was undetectable using the MTT assay (data not shown).

Effects of γ-tocotrienol and the PPARγ antagonists GW9662 and T0070907, alone or in combination, on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
MCF-7 cells were initially plated at a density of 5×10^4 (6 wells per group) in 24-well plates and (b) MDA-MB-231 cells were initially plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates; cells were exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using the MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls and P#<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. (a) (b)

## 3.4. Isobologram Analysis of Combined Treatment Effects of γ-Tocotrienol with PPARγ Agonists and Antagonists

Combined treatment of γ-tocotrienol with the PPARγ agonists, rosiglitazone and troglitazone, was found to be statistically antagonistic on MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cell growth, as evidenced by the location of the data point in the isobologram being well above the line defining an additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with the PPARγ antagonists, GW9662 and T0070907, was found to be statistically synergistic in both MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cells, as evidenced by the location of the data point in the isobologram being well below the line defining an additive effect.

Isobologram analysis of combined treatment of γ-tocotrienol and PPARγ ligands on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. Individual IC50 doses for γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) were calculated and then plotted on the x-axes and y-axes, respectively. The data point on the isobologram represents the actual doses of combined γ-tocotrienol and PPARγ ligands.
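Whether a combination data point falls below or above the additivity line can equivalently be expressed as a combination index (CI): each drug's dose in the combination divided by its single-agent IC50, summed. A minimal sketch of this test, with hypothetical doses and IC50 values rather than the study's measurements:

```python
# Sketch of the additivity test an isobologram encodes: the combination
# index (CI) sums each drug's combination dose as a fraction of its
# single-agent IC50. CI < 1 falls below the additivity line (synergy),
# CI ≈ 1 is additive, CI > 1 falls above it (antagonism). All doses and
# IC50 values below are hypothetical, not the study's measurements.

def combination_index(dose_a, ic50_a, dose_b, ic50_b):
    """Combination index for a two-drug dose pair."""
    return dose_a / ic50_a + dose_b / ic50_b

# Hypothetical: drug A alone has IC50 = 5 μM, drug B alone 10 μM, and
# the combination (1.5 μM A + 2.0 μM B) reaches the same effect level.
ci = combination_index(1.5, 5.0, 2.0, 10.0)
print("synergistic" if ci < 1 else "antagonistic or additive")
```

The isobologram is the graphical form of the same comparison: the additivity line connects the two single-agent IC50s on the axes, and CI is the fractional distance of the combination point from the origin relative to that line.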
Combined treatment of the PPARγ agonists rosiglitazone and troglitazone with γ-tocotrienol was found to be antagonistic, as evidenced by the location of the data point in the isobologram being well above the line defining an additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with the PPARγ antagonists GW9662 and T0070907 was found to be synergistic, as evidenced by the location of the data point in the isobologram being well below the line defining an additive effect for both cell lines. (a) (b)

## 3.5. Effects of γ-Tocotrienol and PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on PPARγ and RXR Levels

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced a decreased expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 5(a) and 5(b)). Treatment with 3.2 μM rosiglitazone or troglitazone alone in MCF-7 cells, or 6.4 μM rosiglitazone or troglitazone alone in MDA-MB-231 cells, had little or no effect on PPARγ or RXR levels (Figures 5(a) and 5(b)). However, combined treatment with similar doses of γ-tocotrienol and rosiglitazone or troglitazone resulted in a significant increase in PPARγ and RXR expression in both MCF-7 and MDA-MB-231 breast cancer cell lines (Figures 5(a) and 5(b)).

Western blot analysis of γ-tocotrienol and PPARγ agonists (rosiglitazone and troglitazone) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or 3.2 μM troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination.
All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 3.6. Effects of γ-Tocotrienol and PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PPARγ and RXR Levels

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced decreased expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 6(a) and 6(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists, GW9662 or T0070907, alone had only slight effects on PPARγ and RXR expression (Figures 6(a) and 6(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant reduction in PPARγ and its heterodimer partner, RXR, in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 6(a) and 6(b)).

Western blot analysis of γ-tocotrienol and PPARγ antagonists (GW9662 and T0070907) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing either 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination.
MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 3.7. Effects of γ-Tocotrienol and PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on PPRE Mediated Reporter Activity

Luciferase assay shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight effects on PPRE mediated reporter activity as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top and Bottom). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ agonists, rosiglitazone and troglitazone, or the PPARγ antagonists, GW9662 and T0070907, alone caused a slight, but insignificant, decrease in PPRE mediated reporter activity (Figures 7(a) and 7(b), Top and Bottom). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone caused an increase in the transcriptional activity of PPARγ in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top).
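The PPRE mediated reporter activity values come from a dual-luciferase assay: as the Figure 7 legend describes, PPRE-driven firefly luciferase counts are divided by renilla counts from the co-transfected control plasmid, correcting each well for transfection efficiency. A minimal sketch of that normalization; all counts are hypothetical:

```python
# Sketch of the dual-luciferase normalization described in the Figure 7
# legend: PPRE-driven firefly luciferase counts are divided by renilla
# counts from the co-transfected control plasmid, correcting each well
# for transfection efficiency. All counts below are hypothetical.

def reporter_activity(firefly, renilla):
    """Per-well raw luciferase units divided by raw renilla units."""
    return [f / r for f, r in zip(firefly, renilla)]

firefly = [1200.0, 1180.0, 450.0]  # hypothetical raw PPRE-firefly counts
renilla = [400.0, 393.0, 300.0]    # hypothetical renilla counts, same wells
for well, ratio in enumerate(reporter_activity(firefly, renilla), start=1):
    print(f"well {well}: {ratio:.2f}")
```

Because renilla expression is driven independently of the PPRE, the ratio cancels well-to-well differences in transfection efficiency and cell number before treatment groups are compared.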
In contrast, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant decrease in PPRE mediated reporter activity in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Bottom).

Luciferase assay was performed on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. The cells were initially plated at a density of 2×10^4 cells/well in 96-well plates. Cells were then transfected by adding 32 ng of PPRE X3-TK-luc and 3.2 ng of renilla luciferase plasmid in 0.8 μL of lipofectamine 2000 transfection reagent. Following a 6-h incubation period, MCF-7 cells were treated with control or treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM rosiglitazone, 0–3.2 μM troglitazone, 0–3.2 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. MDA-MB-231 cells were initially plated in a similar manner and treated with control or treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM rosiglitazone, 0–6.4 μM troglitazone, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, cells were lysed with 75 μL of passive lysis buffer and treated according to the manufacturer’s instructions using the dual-glo luciferase assay system. Results were calculated as raw luciferase units divided by raw renilla units. Vertical bars indicate PPRE mediated reporter activity ± SEM (arbitrary units) in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 3.8. Effects of γ-Tocotrienol and PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on Coactivator Expression

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight, but insignificant, effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 8(a) and 8(b)).
Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ agonists, rosiglitazone and troglitazone, alone caused a slight decrease in CBP p/300 and SRC-1, but not CBP C-20, expression (Figures 8(a) and 8(b)). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone caused a significant decrease in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 8(a) and 8(b)).

Western blot analysis of γ-tocotrienol and PPARγ agonists (rosiglitazone and troglitazone) when used alone or in combination on the levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or 3.2 μM troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 3.9.
Effects of γ-Tocotrienol and PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on Coactivator Expression

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 9(a) and 9(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists, GW9662 and T0070907, alone had only slight effects on CBP p/300, CBP C-20, or SRC-1 expression (Figures 9(a) and 9(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant increase in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 9(a) and 9(b)).

Western blot analysis of γ-tocotrienol and PPARγ antagonists (GW9662 and T0070907) when used alone or in combination to determine protein levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture plate and treated with control or treatment media containing either 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and cells were treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis.
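Blot quantification throughout these experiments follows the same scheme: each band's integrated optical density is normalized to the β-actin band from the same lane, then expressed relative to the vehicle-treated control lane. A minimal sketch of that arithmetic, with hypothetical density values rather than the study's measurements:

```python
# Sketch of the blot quantification scheme: each band's integrated optical
# density is normalized to the β-actin band from the same lane, then
# expressed relative to the vehicle-treated control lane. All density
# values below are hypothetical, not the study's measurements.

def normalize_to_actin(band, actin):
    """Per-lane target-band density divided by β-actin density."""
    return [b / a for b, a in zip(band, actin)]

def fold_of_control(normalized):
    """Express each lane relative to the first (vehicle control) lane."""
    control = normalized[0]
    return [x / control for x in normalized]

band = [820.0, 790.0, 310.0]   # hypothetical target-protein densities
actin = [410.0, 395.0, 400.0]  # hypothetical β-actin densities, same lanes
print(fold_of_control(normalize_to_actin(band, actin)))
```

Dividing by β-actin corrects for unequal protein loading between lanes, so that remaining differences between treatment groups reflect changes in the target protein rather than loading artifacts.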
Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 3.10. Effects of γ-Tocotrienol and PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PI3K/Akt Mitogenic Signaling

Treatment with 2 μM γ-tocotrienol or 3.2 μM of the PPARγ antagonists GW9662 or T0070907 alone had little or no effect on intracellular levels of Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 in MCF-7 cells after a 4-day treatment period (Figure 10(a)). However, combined treatment with the same doses of these agents caused a significant decrease in levels of phospho-Akt, PDK-1, and PI3K, but had little or no effect on total Akt, PTEN, and phospho-PTEN levels as compared to MCF-7 cells in the vehicle-treated control groups (Figure 10(a)). Similarly, treatment with 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone had little or no effect on intracellular levels of phospho-Akt (activated), PDK-1, PI3K, Akt, PTEN, and phospho-PTEN in MDA-MB-231 breast cancer cells, as compared to vehicle-treated controls (Figure 10(b)). Combined treatment with the same doses of these agents resulted in a significant decrease in phospho-Akt, PDK-1, and PI3K levels as compared to MDA-MB-231 breast cancer cells in the vehicle-treated control group (Figure 10(b)).

Western blot analysis of γ-tocotrienol and PPARγ antagonists (GW9662 or T0070907) alone or in combination on Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 levels in (a) MCF-7 and (b) MDA-MB-231 cells.
MCF-7 cells were initially plated at 1×10^6 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were also plated in a similar manner and cells were treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b)

Similar studies were conducted to determine the effects of combined γ-tocotrienol treatment with the PPARγ agonists rosiglitazone and troglitazone on PI3K/Akt mitogenic signaling in MCF-7 and MDA-MB-231 breast cancer cells. However, little or no differences in the relative levels of these mitogenic proteins were observed among the different treatment groups (data not shown), apparently because cells in the various treatment groups were actively proliferating at a near maximal growth rate.

## 3.11.
Apoptotic Effects of γ-Tocotrienol and PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination

In order to determine if the growth inhibitory effects resulting from combined treatment with subeffective doses of γ-tocotrienol and PPARγ antagonists might result from a reduction in viable cell number, studies were conducted to determine the acute (24-h) and chronic (96-h) effects of these treatments on the initiation of apoptosis and cell viability. Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had no effect on the expression of cleaved PARP, cleaved caspase-3, or viable cell number after a 24-h and 96-h treatment exposure (Figures 11(a) and 11(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists, GW9662 and T0070907, alone, or in combination with their respective treatment dose of γ-tocotrienol, was also found to have no effect on the expression of cleaved PARP, cleaved caspase-3, or viable cell number 24 h after treatment exposure (Figures 11(a) and 11(b)). However, treatment with 20 μM γ-tocotrienol, a dose previously shown to induce apoptosis in mammary cancer cells [13, 14] and used as an apoptosis-inducing positive control in these experiments, was found to induce a large increase in cleaved PARP and cleaved caspase-3 levels, and a corresponding decrease in viable cell number, in both MCF-7 and MDA-MB-231 breast cancer cells 24 h following treatment exposure (Figures 11(a) and 11(b)). The positive apoptosis control treatment of 20 μM γ-tocotrienol was not included in the 96-h treatment exposure experiment, because by the end of this experiment there were no viable cells remaining in this treatment group.

Apoptotic effects of γ-tocotrienol and PPARγ antagonists (GW9662 or T0070907) alone or in combination on caspase-3 and cleaved PARP levels in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
For Western blot studies, MCF-7 and MDA-MB-231 cells were initially plated at 1×10^6 cells/100 mm culture dish and maintained on control media for a 3-day culture period. Afterwards, cells were divided into the various treatment groups, media was removed, and cells were exposed to their respective treatment media for a 24-h treatment period. In addition, cells were exposed to their respective treatment media for a 96-h treatment period, where fresh media was added every other day. MCF-7 cells were exposed to treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM GW9662, or 0–3.2 μM T0070907 alone or in combination, whereas MDA-MB-231 cells were exposed to treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. In parallel studies, (a) MCF-7 cells were plated at a density of 5×10^4 (6 wells per group) in 24-well culture plates, whereas (b) MDA-MB-231 cells were plated at a density of 1×10^4 (6 wells per group) in 96-well culture plates and exposed to the same treatments as described above. After a 24-h treatment exposure, viable cell number in all treatment groups was determined using the MTT assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b)

## 4. Discussion

Results in these studies demonstrate that, when given alone, treatment with γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) each induced a significant, dose-responsive inhibition of the growth of MCF-7 and MDA-MB-231 human breast cancer cells in culture.
However, when used in combination, treatment with low doses of PPARγ agonists was found to reverse, whereas treatment with low doses of PPARγ antagonists was found to synergistically enhance, the antiproliferative effects of γ-tocotrienol. Additional studies determined that the synergistic inhibition of MCF-7 and MDA-MB-231 tumor cell growth resulting from combined low-dose treatment of γ-tocotrienol with PPARγ antagonists was associated with a reduction in PPARγ, PPRE mediated reporter activity, and RXR, an increase in PPARγ coactivator expression, and a corresponding suppression in PI3K/Akt mitogenic signaling. Conversely, the enhancement in MCF-7 and MDA-MB-231 tumor cell growth resulting from combined low-dose treatment of γ-tocotrienol with PPARγ agonists was associated with an increase in PPARγ, PPRE mediated reporter activity, and RXR, a decrease in PPARγ coactivator expression, and a corresponding restoration in EGF-dependent PI3K/Akt mitogenic signaling as compared to their vehicle-treated control group. Taken together, these findings demonstrate that combined treatment of γ-tocotrienol with PPARγ antagonists displays synergistic anticancer activity and may provide some benefit in the treatment of human breast cancer. These findings also demonstrate the importance of matching complementary anticancer agents for use in combination therapy, because a mismatch may result in an antagonistic and undesirable therapeutic response.

Previous investigations have shown that both PPARγ agonists and antagonists act as effective anticancer agents [28, 29]. The role of PPARγ agonists as anticancer agents has been well characterized in the treatment of colon, gastric, and lung cancer [3, 11], whereas PPARγ antagonists have been shown to induce potent antiproliferative effects in many hematopoietic and epithelial cancer cell lines [11, 28]. Results in the present study confirm and extend these previous findings.
Dose-response studies showed that treatment with either PPARγ agonists or antagonists significantly inhibited the growth of human MCF-7 and MDA-MB-231 breast cancer cells in culture. Furthermore, treatment-induced antiproliferative effects were found to be more pronounced in MDA-MB-231 than in MCF-7 breast cancer cells, and these results are similar to those previously reported [28].

Numerous investigations have established that γ-tocotrienol acts as a potent anticancer agent that inhibits the growth of mouse [16, 30] and human [31, 32] breast cancer cells. Furthermore, studies have also shown that combined treatment of γ-tocotrienol with other traditional chemotherapies often results in an additive or synergistic inhibition in cancer cell growth and viability [16, 30]. The rationale for using tocotrienols in combination therapy is based on the principle that resistance to a single agent can be overcome with the use of multiple agents that display complementary anticancer mechanisms of action. Initial studies showed the additive anticancer effects of mixed tocotrienols and tamoxifen on growth of the estrogen receptor positive MCF-7 and the estrogen receptor negative MDA-MB-435 cells [33], and these findings were later confirmed in other reports [34]. Recent studies have also shown synergistic anticancer effects of combined use of γ-tocotrienol with statins [35–37], tyrosine kinase inhibitors [18, 38], COX-2 inhibitors [39, 40], and cMet inhibitors [41]. These studies concluded that combination therapy is most effective when the anticancer mechanism of action of γ-tocotrienol complements the mechanism of action of the other drug, and may provide significant health benefits in the prevention and/or treatment of breast cancer in women, while at the same time avoiding the tumor resistance or toxic effects that are commonly associated with high-dose monotherapy.

The exact role of PPARγ in breast cancer cell proliferation and survival is not clearly understood.
Previous studies have suggested that PPARγ activation results in extensive accumulation of lipids and changes in mammary epithelial cell gene expression that promote a more differentiated and less malignant phenotype, and attenuate breast cancer cell growth and progression [42, 43]. Other studies have shown that γ-tocotrienol enhances the expression of multiple forms of PPARs by selectively regulating PPAR target genes [21]. The antiproliferative effects of γ-tocotrienol have previously been hypothesized to be mediated by the action of γ-tocotrienol in stimulating PPARγ activation by increasing the production of the PPARγ ligand, 15-lipoxygenase-2, in human prostate cancer cells [22]. However, findings in the present study using two distinct types of human breast cancer cell lines showed that low-dose treatment with γ-tocotrienol decreased PPARγ levels, whereas combined treatment of γ-tocotrienol with PPARγ agonists resulted in an elevation in PPARγ levels and a corresponding increase in breast cancer cell growth. These contradictory findings might be explained by differences in the cancer cell types and experimental models used to examine combination treatment effects in these different studies. Nevertheless, the present findings clearly demonstrate an antagonistic effect on breast cancer cell proliferation when treated with the combination of γ-tocotrienol and PPARγ agonists, and provide strong evidence that increased expression of PPARγ is a negative indicator for breast cancer responsiveness to anticancer therapy. This hypothesis is further evidenced by the finding that PPARγ expression is elevated in breast cancer cells as compared to normal mammary epithelial cells [9, 44], and that mice genetically predisposed to developing mammary tumors constitutively express high levels of activated PPARγ as compared to control mice [9, 44].
It is also possible that the anticancer effects of high-dose treatment with PPARγ agonists may be mediated through PPARγ-independent mechanisms. The present study also confirms and extends previous findings showing that treatment with PPARγ antagonists significantly inhibits growth of breast cancer cells. Experimental results showed that PPARγ antagonists downregulate PPARγ activation and expression, and these effects were associated with enhanced responsiveness to anticancer therapy [45, 46]. However, the present study also shows that combined treatment of γ-tocotrienol with PPARγ antagonists induced a relatively large decrease in the transcriptional activity of PPARγ. This treatment was also shown to result in decreased expression of PPARγ and RXR, and these effects were associated with a significant decrease in breast cancer cell growth. PPARγ functions as a heterodimer with its obligate heterodimer partner, RXR. Like other nuclear hormone receptors, the PPARγ-RXR heterodimer recruits cofactor complexes, either coactivators or corepressors, to modulate its transcriptional activity [45]. Upon binding of a ligand to the heterodimer complex, corepressors are displaced and the receptor then associates with a coactivator molecule. These coactivators include SRC-1, CBP C-20, and the CBP homologue p/300 [47, 48]. The suppression of PPARγ transcription induced by combined treatment with γ-tocotrienol and PPARγ antagonists also appears to decrease the recruitment of coactivator molecules to available PPARγ-RXR heterodimers for translocation into the nucleus, ultimately resulting in an elevation of free coactivator levels in the cytoplasm. Taken together, these results suggest that breast cancer cells require PPARγ activation for their survival, and that treatments designed to reduce or inhibit PPARγ levels and/or activation may provide an effective strategy in the treatment of breast cancer. PPARγ activity can be modulated by phosphorylation at multiple sites [49]. 
In addition, PPARγ ligands can reduce the activity of PI3K and its downstream target Akt [50]. Combined treatment of γ-tocotrienol with PPARγ antagonists was found to reduce PI3K, phosphorylated PDK-1 (active), and phosphorylated Akt (active) levels in MCF-7 and MDA-MB-231 breast cancer cells. Furthermore, these effects were not associated with an increase in PTEN activity, the phosphatase involved in the inactivation of PDK and Akt. These findings indicate that the antiproliferative effects of combined γ-tocotrienol and PPARγ antagonist treatment are mediated through a suppression in PI3K/Akt mitogenic signaling. These effects were found to be cytostatic in nature, and not associated with a decrease in cell viability resulting from the initiation of apoptosis. Previous findings have also shown that treatment with PPARγ antagonists can cause a decrease in PI3K/Akt mitogenic signaling [51]. ## 5. Conclusion Results from these studies demonstrate that combined low-dose treatment of γ-tocotrienol and PPARγ antagonists acts synergistically to inhibit human breast cancer cell proliferation, and this effect appears to be mediated by a large reduction in PPARγ expression and a corresponding reduction in PI3K/Akt mitogenic signaling. Although high-dose treatment with PPARγ agonists was also found to inhibit human breast cancer cell growth, it is most likely that these effects are mediated through PPARγ-independent mechanisms, because the preponderance of experimental evidence strongly suggests that elevations in PPARγ expression are an indicator of robust breast cancer cell growth and resistance to anticancer therapy, whereas a reduction in PPARγ expression is an indicator of decreased breast cancer proliferation and increased responsiveness to chemotherapeutic agents. 
These findings also show that combination anticancer therapy does not always result in an additive or synergistic anticancer response, but can result in a paradoxical/antagonistic response, as was observed with the combined treatment of γ-tocotrienol with PPARγ agonists in MCF-7 and MDA-MB-231 human breast cancer cells. Understanding the intracellular mechanism of action of anticancer agents is critical for optimizing therapeutic response. It is also clearly evident that the use of γ-tocotrienol in combination with PPARγ antagonists may have potential therapeutic value in the treatment of breast cancer in women. --- *Source: 101705-2013-01-28.xml*
# Mechanisms Mediating the Effects of γ-Tocotrienol When Used in Combination with PPARγ Agonists or Antagonists on MCF-7 and MDA-MB-231 Breast Cancer Cells

**Authors:** Abhita Malaviya; Paul W. Sylvester

**Journal:** International Journal of Breast Cancer (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101705
--- ## Abstract γ-Tocotrienol is a natural vitamin E that displays potent anticancer activity, and previous studies suggest that these effects involve alterations in PPARγ activity. Treatment with 0.5–6 μM γ-tocotrienol, 0.4–50 μM PPARγ agonists (rosiglitazone or troglitazone), or 0.4–25 μM PPARγ antagonists (GW9662 or T0070907) alone resulted in a dose-responsive inhibition of MCF-7 and MDA-MB-231 breast cancer proliferation. However, combined treatment of 1–4 μM γ-tocotrienol with PPARγ agonists reversed the growth inhibitory effects of γ-tocotrienol, whereas combined treatment of 1–4 μM γ-tocotrienol with PPARγ antagonists synergistically inhibited MCF-7 and MDA-MB-231 cell growth. Combined treatment of γ-tocotrienol and PPARγ agonists caused an increase in transcription activity of PPARγ along with increased expression of PPARγ and RXR, and a decrease in the PPARγ coactivators, CBP p/300, CBP C-20, and SRC-1, in both breast cancer cell lines. In contrast, combined treatment of γ-tocotrienol with PPARγ antagonists resulted in a decrease in transcription activity of PPARγ, along with decreased expression of PPARγ and RXR, an increase in PPARγ coactivators, and a corresponding decrease in PI3K/Akt mitogenic signaling in these cells. These findings suggest that elevations in PPARγ are correlated with increased breast cancer growth and survival, and treatment that decreases PPARγ expression may provide benefit in the treatment of breast cancer. --- ## Body ## 1. Introduction Peroxisome proliferator activated receptor γ (PPARγ) belongs to the nuclear receptor superfamily and functions as a ligand-activated transcription factor that forms a heterodimer complex with retinoid X receptor (RXR). This complex then binds to a specific DNA sequence called the peroxisome proliferator response element and initiates the recruitment of coactivator proteins such as CBP p/300, SRC-1, and CBP C-20, which further modulate gene transcription [1–3]. 
Studies have shown that PPARγ is overexpressed in many types of breast cancer cells [4–7]. Experimental evidence in rodents has shown that overexpression of PPARγ is associated with an increased incidence and growth of mammary tumors, whereas knockdown of PPARγ expression was found to significantly inhibit spontaneous mammary tumor development [8, 9]. Taken together, these results suggest that inhibition of PPARγ expression and/or activity may be beneficial in the treatment of breast cancer. However, other studies have shown that treatment with the PPARγ agonists rosiglitazone and troglitazone, or conversely with the PPARγ antagonists GW9662 and T0070907, was found to significantly inhibit the growth of a wide variety of cancer cell lines [10, 11]. An explanation for these conflicting findings is not clearly evident, especially since some of the anticancer effects of these agents may be mediated through PPARγ-independent mechanisms. Interpretation of these findings is further complicated by the fact that PPARγ transcriptional activity can be modulated by phosphorylation by Akt and other kinases, which can occur through crosstalk with other mitogenic signaling pathways [12]. γ-Tocotrienol is a member of the vitamin E family of compounds that displays potent anticancer activity [13, 14]. The mechanism(s) involved in mediating the anticancer activity of γ-tocotrienol appear to involve the suppression of growth-factor-dependent mitogenic signaling, particularly the PI3K/Akt signaling pathway [15–18]. PI3K is a lipid signaling kinase that activates PDK-1, which subsequently phosphorylates and activates Akt. Activated Akt phosphorylates various proteins associated with cell proliferation and survival [19]. 
PDK-1 and Akt activity is terminated by phosphatases such as PTEN [20]. Recent studies have shown that tocotrienols activate specific PPARs in reporter-based assays [21], whereas other studies have shown that γ-tocotrienol increases intracellular levels of 15-lipoxygenase-2, the enzyme responsible for the conversion of arachidonic acid to the PPARγ activating ligand, 15-S-hydroxyeicosatrienoic acid, in prostate cancer cells [22]. Therefore, it was hypothesized that the anticancer effects of γ-tocotrienol may be mediated, at least in part, through a PPARγ-dependent mechanism. Studies were conducted to characterize the effects of γ-tocotrienol treatment alone and in combination with specific PPARγ agonists and antagonists on the growth and survival of MCF-7 and MDA-MB-231 human breast cancer cells. Additional studies evaluated treatment effects on the expression of PPARγ and PPARγ coactivators, and PI3K/Akt mitogenic signaling in these breast cancer cell lines. Results from these studies further characterize the anticancer mechanism of action of γ-tocotrienol, as well as PPARγ agonists and antagonists, and provide insights as to the potential benefits of these therapies in the treatment of breast cancer. ## 2. Materials and Methods ### 2.1. Reagents and Antibodies All reagents were purchased from Sigma Chemical Company (St. Louis, MO) unless otherwise stated. Purified γ-tocotrienol (>98% purity) was generously provided as a gift by First Tech International Ltd (Hong Kong). The PPARγ agonists, rosiglitazone and troglitazone, and the PPARγ antagonists, GW9662 and T0070907, were purchased from Cayman Chemicals (San Diego, CA). Fetal bovine serum was purchased from American Type Culture Collection (Manassas, VA). Antibodies for β-actin, PPARγ, Akt, phospho-Akt, PTEN, phospho-PTEN, PDK-1, PI3K, cleaved caspase-3, and cleaved PARP were purchased from Cell Signaling Technology (Beverly, MA). 
Antibodies for RXR, CBP C-20, SRC-1, and CBP p/300 were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). Goat anti-rabbit and anti-mouse secondary antibodies were purchased from PerkinElmer Biosciences (Boston, MA). ### 2.2. Cell Lines and Culture Conditions The estrogen-receptor negative MDA-MB-231 and the estrogen-receptor positive MCF-7 breast carcinoma cell lines were purchased from American Type Culture Collection (Manassas, VA). MDA-MB-231 and MCF-7 breast cancer cells were cultured in Dulbecco's Modified Eagle Medium (DMEM)/F12 supplemented with 10% fetal bovine serum, 10 μg/mL insulin, 100 U/mL penicillin, and 0.1 mg/mL streptomycin at 37°C in an environment of 95% air and 5% CO2 in a humidified incubator. For subculturing, cells were rinsed twice with sterile Ca2+- and Mg2+-free phosphate-buffered saline (PBS) and incubated in 0.05% trypsin containing 0.025% EDTA in PBS for 5 min at 37°C. The released cells were centrifuged, resuspended in serum-containing media, and counted using a hemocytometer. ### 2.3. Experimental Treatments The highly lipophilic γ-tocotrienol was suspended in a solution of sterile 10% BSA as described previously [13, 14]. 
The next day, cells were divided into different treatment groups; culture media was removed, cells were washed with sterile PBS, fed fresh media containing their respective treatments, and returned to the incubator. Cells were treated with media containing 0–50 μM rosiglitazone, troglitazone, GW9662, or T0070907, or 0–8 μM γ-tocotrienol, alone or in combination, for a 4-day culture period. Cells in each treatment group were fed fresh media every other day throughout the experimental period. For apoptosis experiments, MCF-7 and MDA-MB-231 cells were plated as described above. Cells were allowed to grow in control media for 3 days, after which they were exposed to the various treatments for a 24 h period. Treatment with 20 μM γ-tocotrienol has previously been shown to induce apoptosis in breast cancer cells [13, 14] and was used as a positive control in this study. ### 2.5. Measurement of Viable Cell Number MCF-7 and MDA-MB-231 viable cell number was determined using the 3-(4,5-dimethylthiazol-2yl)-2,5-diphenyl tetrazolium bromide (MTT) colorimetric assay as described previously [13, 14]. At the end of the treatment period, treatment media was removed and all cells were exposed for 3 h (96-well plates) or 4 h (24-well plates) to fresh control media containing 0.41 mg/mL MTT at 37°C. Afterwards, media was removed and MTT crystals were dissolved in 1 mL of isopropanol for 24-well plate assays or 100 μL of DMSO for 96-well plate assays. The optical density of each sample was measured at 570 nm using a microplate reader (Spectracount; Packard Bioscience Company, Meriden, CT) zeroed against a blank prepared from cell-free medium. The number of cells per well was calculated against a standard curve prepared by plating known cell densities, as determined by hemocytometer, in triplicate at the start of each experiment. ### 2.6. 
Western Blot Analysis MCF-7 and MDA-MB-231 cells were plated at a density of 1×106 cells/100 mm culture dish and exposed to control or treatment media for a 4-day culture period. Afterwards, cells were washed with PBS, isolated with trypsin, and whole cell lysates were prepared in Laemmli buffer [23] as described previously [24]. The protein concentration in each sample was determined using a Bio-Rad protein assay kit (Bio-Rad, Hercules, CA). Equal amounts of protein from each sample in a given experiment were loaded onto SDS-polyacrylamide minigels and electrophoresed through a 5%–15% resolving gel. Proteins separated on each gel were transblotted at 30 V for 12–16 h at 4°C onto a polyvinylidene fluoride (PVDF) membrane (PerkinElmer Lifesciences, Wellesley, MA) in a Trans-Blot Cell (Bio-Rad, Hercules, CA) according to the method of Towbin et al. [25]. The membranes were then blocked with 2% BSA in 10 mM Tris HCl containing 50 mM NaCl and 0.1% Tween 20, pH 7.4 (TBST), and then incubated with specific primary antibodies against PPARγ, Akt, phospho-Akt, PTEN, phospho-PTEN, PDK-1, PI3K, RXR, CBP C-20, SRC-1, CBP p/300, cleaved caspase-3, cleaved PARP, or β-actin, diluted 1 : 500 to 1 : 5000 in TBST/2% BSA for 2 h. Membranes were washed 5 times with TBST, followed by incubation with the respective horseradish peroxidase-conjugated secondary antibodies diluted 1 : 3000 to 1 : 5000 in TBST/2% BSA for 1 h, followed by rinsing with TBST. Protein bands bound to the antibody were visualized by chemiluminescence (Pierce, Rockford, IL) according to the manufacturer's instructions and images were obtained using a Kodak Gel Logic 1500 Imaging System (Carestream Health Inc, Rochester, NY). The visualization of β-actin was performed to confirm equal sample loading in each lane. Images of protein bands on the film were acquired and scanning densitometric analysis was performed with Kodak molecular imaging software version 4.5 (Carestream Health Inc, Rochester, NY). 
All experiments were repeated at least three times and a representative western blot image from each experiment is shown in the figures. ### 2.7. Transient Transfection and Luciferase Reporter Assay MCF-7 and MDA-MB-231 cells were plated at a density of 2×104 per well in 96-well plates and allowed to adhere overnight. After this, cells were transfected with 32 ng of PPRE X3-TK-luc (Addgene plasmid no. 1015) [26] and 3.2 ng of renilla luciferase plasmid per well (Promega, Madison, WI) using 0.8 μL of lipofectamine 2000 transfection reagent for each well (Invitrogen, Grand Island, NY). After 6 h of transfection, the media was removed; the cells were washed once and exposed to 100 μL of control or treatment media for a 4-day culture period. Afterwards, cells were lysed with 75 μL of passive lysis buffer and treated according to the manufacturer's instructions using the dual-glo luciferase assay system (Promega, Madison, WI). Luciferase activity of each sample was normalized by the level of renilla activity. Data are represented as mean fold changes in treated cells as compared to control cells. ### 2.8. Statistical Analysis The level of interaction between PPARγ ligands and γ-tocotrienol was evaluated by the isobologram method [27]. A straight line was formed by plotting the IC50 doses of γ-tocotrienol and individual PPARγ ligands on the x-axes and y-axes, respectively, as determined by non-linear regression curve fit analysis using GraphPad Prism 4 (GraphPad Software Inc., La Jolla, CA). The data point in the isobologram corresponds to the actual IC50 dose of combined γ-tocotrienol and PPARγ ligand treatment. If a data point is on or near the line, this represents an additive treatment effect, whereas a data point that lies below or above the line indicates synergism or antagonism, respectively. Differences among the various treatment groups in growth studies and western blot studies were determined by analysis of variance followed by Dunnett's multiple range test. 
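The graphical isobologram test described above has a simple numerical equivalent: for a combination dose pair that produces the IC50 effect, sum each drug's combination dose divided by its single-agent IC50. A point on the additivity line gives a sum of 1; below the line (synergism) gives a sum less than 1; above it (antagonism) gives a sum greater than 1. A minimal sketch, using hypothetical IC50 values rather than data from this study:

```python
def interaction_index(d_a, d_b, ic50_a, ic50_b):
    """Interaction index for a dose pair (d_a, d_b) that together produces
    the IC50 effect, given each agent's single-agent IC50.
    < 1: data point below the additivity line (synergism)
    = 1: on the line (additive)
    > 1: above the line (antagonism)"""
    return d_a / ic50_a + d_b / ic50_b

# Hypothetical single-agent IC50s (arbitrary concentration units):
ic50_gt3, ic50_antagonist = 4.0, 10.0
# Hypothetical combination doses that together reach the IC50 effect:
idx = interaction_index(1.0, 2.0, ic50_gt3, ic50_antagonist)
print("synergistic" if idx < 1 else "antagonistic" if idx > 1 else "additive")
# → synergistic (index = 0.45, well below the additivity line)
```

This index is just the algebraic form of the plotted isobologram: the line connecting the two single-agent IC50s is the locus where the index equals 1.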
Differences were considered statistically significant at a value of P<0.05. ## 3. Results ### 3.1. Antiproliferative Effects of γ-Tocotrienol, PPARγ Agonists (Rosiglitazone and Troglitazone), and PPARγ Antagonists (GW9662 and T0070907) Treatment with 3–6 μM γ-tocotrienol, 1.6–12 μM rosiglitazone, 6.4–25 μM troglitazone, 1.6–6.4 μM GW9662, or 6.4–25 μM T0070907 was found to significantly inhibit growth of MCF-7 cells in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(a)). Similarly, treatment with 4–8 μM γ-tocotrienol, 6.4–25 μM rosiglitazone, 3.2–50 μM troglitazone, 3.2–12 μM GW9662, and 12–50 μM T0070907 significantly inhibited MDA-MB-231 cell growth in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(b)). Antiproliferative effects of γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were plated at a density of 5×104 (6 wells per group) in 24-well culture plates and exposed to treatment media for a 4-day period. Afterwards viable cell number was determined using MTT colorimetric assay. 
MDA-MB-231 cells were plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards viable cell number was determined using MTT colorimetric assay. Vertical bars indicate mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b) ### 3.2. Antagonistic Effects of PPARγ Agonists Rosiglitazone and Troglitazone on the Antiproliferative Effects of γ-Tocotrienol Treatment with 1–6 μM γ-tocotrienol alone significantly inhibited growth of MCF-7 (Figure 2(a)) and MDA-MB-231 (Figure 2(b)) breast cancer cells after a 4-day treatment period. However, the growth inhibitory effects of 1–4 μM γ-tocotrienol on MCF-7 cells were reversed when given in combination with 3.2 μM rosiglitazone or troglitazone (Figure 2(a), Top and Bottom). A similar, but less pronounced, reversal of 3–6 μM γ-tocotrienol-induced growth inhibitory effects on MDA-MB-231 breast cancer cells was observed when used in combination with 6.4 μM rosiglitazone or troglitazone (Figure 2(b), Top and Bottom). Effects of γ-tocotrienol and the PPARγ agonists rosiglitazone and troglitazone, given alone or in combination, on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at a density of 5×104 (6 wells per group) in 24-well plates and (b) MDA-MB-231 cells were initially plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls and P#<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. (a) (b) ### 3.3. 
Enhancement of γ-Tocotrienol-Induced Antiproliferative Effects When Given in Combination with PPARγ Antagonist GW9662 or T0070907 The growth inhibitory effects of 1–4 μM γ-tocotrienol were significantly enhanced when given in combination with a subeffective dose (3.2 μM) of the PPARγ antagonist, GW9662, in MCF-7 breast cancer cells (Figure 3(a), Top). A slight, but insignificant, enhancement of the growth inhibitory effects of 1–4 μM γ-tocotrienol was observed when combined with a subeffective dose (3.2 μM) of the PPARγ antagonist, T0070907, in MCF-7 breast cancer cells (Figure 3(a), Bottom). In MDA-MB-231 cells, 0.5–3 μM γ-tocotrienol was used in combination with 6.4 μM of the PPARγ antagonists, GW9662 (Figure 3(b), Top) or T0070907 (Figure 3(b), Bottom), and was found to significantly enhance the growth inhibitory effects of these agents. Higher dose ranges of γ-tocotrienol in combination with these same doses of PPARγ antagonists resulted in a complete suppression of breast cancer cell growth such that viable cell number was undetectable using the MTT assay (data not shown). Effects of γ-tocotrienol and the PPARγ antagonists GW9662 and T0070907, given alone or in combination, on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at a density of 5×104 (6 wells per group) in 24-well plates and (b) MDA-MB-231 cells were initially plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls and P#<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. (a) (b) ### 3.4. 
Isobologram Analysis of Combined Treatment Effects of γ-Tocotrienol with PPARγ Agonists and Antagonists Combined treatment of γ-tocotrienol with the PPARγ agonists rosiglitazone and troglitazone was found to be statistically antagonistic on MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cell growth, as evidenced by the location of the data point in the isobologram being well above the line defining additive effect. In contrast, the growth inhibitory effects of combined treatment of γ-tocotrienol with the PPARγ antagonists GW9662 and T0070907 were found to be statistically synergistic in both MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cells, as evidenced by the location of the data point in the isobologram being well below the line defining additive effect. Isobologram analysis of combined treatment of γ-tocotrienol and PPARγ ligands on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. Individual IC50 doses for γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) were calculated and then plotted on the x-axes and y-axes, respectively. The data point on the isobologram represents the actual doses of combined γ-tocotrienol and PPARγ ligands. Combined treatment of the PPARγ agonists rosiglitazone and troglitazone with γ-tocotrienol was found to be antagonistic, as evidenced by the location of the data point in the isobologram being well above the line defining additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with the PPARγ antagonists GW9662 and T0070907 was found to be synergistic, as evidenced by the location of the data point in the isobologram being well below the line defining additive effect for both cell lines. (a) (b) ### 3.5. 
Effects ofγ-Tocotrienol and PPARγ Agonist Rosiglitazone and Troglitazone Given Alone or in Combination on PPARγ and RXR Levels Western blot analysis shows that treatment with 2μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced a decreased expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 5(a) and 5(b)). Treatment with 3.2 μM rosiglitazone or troglitazone alone in MCF-7 cells or 6.4 μM rosiglitazone or troglitazone alone in MDA-MB-231 cells had little or no effect on PPARγ or RXR levels (Figures 5(a) and 5(b)). However, combined treatment with similar doses of γ-tocotrienol and rosiglitazone or troglitazone resulted in a significant increase in PPARγ and RXR expression in both MCF-7 and MDA-MB-231 breast cancer cell lines (Figures 5(a) and 5(b)).Western blot analysis ofγ-tocotrienol and PPARγ agonists (rosiglitazone and troglitazone) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×106 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination. All cells were fed fresh treatment media every other day for 4-day incubation period. Afterwards, whole cell lysates were prepared for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. 
Vertical bars in the graph indicate the normalized integrated optical density of the bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

### 3.6. Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PPARγ and RXR Levels

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone decreased the expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 6(a) and 6(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 or T0070907 alone had only slight effects on PPARγ and RXR expression (Figures 6(a) and 6(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant reduction in PPARγ and its heterodimer partner RXR in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 6(a) and 6(b)).

Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 and T0070907) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10⁶ cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for the 4-day incubation period. Afterwards, whole cell lysates were prepared from each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below the respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of the bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

### 3.7. Effects of γ-Tocotrienol and the PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on PPRE-Mediated Reporter Activity

Luciferase assays show that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had only slight effects on PPRE-mediated reporter activity as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top and Bottom). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ agonists rosiglitazone and troglitazone, or the PPARγ antagonists GW9662 and T0070907, alone caused a slight, but insignificant, decrease in PPRE-mediated reporter activity (Figures 7(a) and 7(b), Top and Bottom). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone increased the transcriptional activity of PPARγ in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top). In contrast, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant decrease in PPRE-mediated reporter activity in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Bottom).

Luciferase assays were performed on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. The cells were initially plated at a density of 2×10⁴ cells/well in 96-well plates. Cells were then transfected by adding 32 ng of PPRE X3-TK-luc and 3.2 ng of Renilla luciferase plasmid in 0.8 μL of Lipofectamine 2000 transfection reagent.
Following a 6-h incubation period, MCF-7 cells were treated with control or treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM rosiglitazone, 0–3.2 μM troglitazone, 0–3.2 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. MDA-MB-231 cells were initially plated in a similar manner and treated with control or treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM rosiglitazone, 0–6.4 μM troglitazone, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for the 4-day incubation period. Afterwards, cells were lysed with 75 μL of passive lysis buffer and processed according to the manufacturer's instructions using the Dual-Glo luciferase assay system. Results were calculated as raw luciferase units divided by raw Renilla units. Vertical bars indicate PPRE-mediated reporter activity ± SEM (arbitrary units) in each treatment group. *P < 0.05 as compared with vehicle-treated controls.

### 3.8. Effects of γ-Tocotrienol and the PPARγ Agonists Rosiglitazone and Troglitazone Given Alone or in Combination on Coactivator Expression

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had only slight, insignificant effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 8(a) and 8(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ agonists rosiglitazone and troglitazone alone caused a slight decrease in CBP p/300 and SRC-1, but not CBP C-20, expression (Figures 8(a) and 8(b)). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone caused a significant decrease in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 8(a) and 8(b)).

Western blot analysis of γ-tocotrienol and the PPARγ agonists (rosiglitazone and troglitazone) used alone or in combination on the levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×10⁶ cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or 3.2 μM troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below the respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of the bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

### 3.9. Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on Coactivator Expression

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had only slight effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 9(a) and 9(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 and T0070907 alone had only slight effects on CBP p/300, CBP C-20, or SRC-1 expression (Figures 9(a) and 9(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant increase in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 9(a) and 9(b)).

Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 and T0070907) used alone or in combination to determine protein levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10⁶ cells/100 mm culture plate and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis.
Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below the respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of the bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

### 3.10. Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PI3K/Akt Mitogenic Signaling

Treatment with 2 μM γ-tocotrienol or with 3.2 μM of the PPARγ antagonists GW9662 or T0070907 alone had little or no effect on intracellular levels of Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 in MCF-7 cells after a 4-day treatment period (Figure 10(a)). However, combined treatment with the same doses of these agents caused a significant decrease in phospho-Akt, PDK-1, and PI3K levels, but had little or no effect on total Akt, PTEN, and phospho-PTEN levels, as compared to MCF-7 cells in the vehicle-treated control group (Figure 10(a)). Similarly, treatment with 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone had little or no effect on intracellular levels of phospho-Akt (activated), PDK-1, PI3K, Akt, PTEN, and phospho-PTEN in MDA-MB-231 breast cancer cells as compared to vehicle-treated controls (Figure 10(b)). Combined treatment with the same doses of these agents resulted in a significant decrease in phospho-Akt, PDK-1, and PI3K levels as compared to MDA-MB-231 breast cancer cells in the vehicle-treated control group (Figure 10(b)).

Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 or T0070907) alone or in combination on Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 levels in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10⁶ cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below the respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of the bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

Similar studies were conducted to determine the effects of combined γ-tocotrienol treatment with the PPARγ agonists rosiglitazone and troglitazone on PI3K/Akt mitogenic signaling in MCF-7 and MDA-MB-231 breast cancer cells. However, little or no difference in the relative levels of these mitogenic proteins was observed among the different treatment groups (data not shown), apparently because cells in the various treatment groups were actively proliferating at a near-maximal growth rate.

### 3.11. Apoptotic Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination

To determine whether the growth inhibitory effects of combined treatment with subeffective doses of γ-tocotrienol and PPARγ antagonists might result from a reduction in viable cell number, studies were conducted to examine the acute (24-h) and chronic (96-h) effects of these treatments on the initiation of apoptosis and cell viability. Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had no effect on the expression of cleaved PARP or cleaved caspase-3, or on viable cell number, after a 24-h or 96-h treatment exposure (Figures 11(a) and 11(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 and T0070907, alone or in combination with the respective treatment dose of γ-tocotrienol, was also found to have no effect on the expression of cleaved PARP or cleaved caspase-3, or on viable cell number, 24 h after treatment exposure (Figures 11(a) and 11(b)). However, treatment with 20 μM γ-tocotrienol, a dose previously shown to induce apoptosis in mammary cancer cells [13, 14] and used as an apoptosis-inducing positive control in these experiments, induced a large increase in cleaved PARP and cleaved caspase-3 levels and a corresponding decrease in viable cell number in both MCF-7 and MDA-MB-231 breast cancer cells 24 h following treatment exposure (Figures 11(a) and 11(b)). The positive apoptosis control treatment of 20 μM γ-tocotrienol was not included in the 96-h treatment exposure experiment because, by the end of this experiment, no viable cells remained in this treatment group.

Apoptotic effects of γ-tocotrienol and the PPARγ antagonists (GW9662 or T0070907) alone or in combination on caspase-3 and cleaved PARP levels in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
For Western blot studies, MCF-7 and MDA-MB-231 cells were initially plated at 1×106 cells/100 mm culture dish and maintained on control media for a 3-day culture period. Afterwards, cells were divided into the various treatment groups, media was removed, and cells were exposed to their respective treatment media for a 24-h treatment period. In addition, cells were exposed to their respective treatment media for a 96-h treatment period, where fresh media was added every other day. MCF-7 cells were exposed to treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM GW9662, or 0–3.2 μM T0070907 alone or in combination, whereas MDA-MB-231 cells exposed to treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. Afterwards, whole cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by western blot analysis. In parallel studies, (a) MCF-7 cells were plated at a density of 5×104 (6 wells per group) in 24-well culture plates, whereas (b) MDA-MB-231 cells were plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to the same treatments as described above. After a 24-h treatment exposure, viable cell number in all treatment groups was determined using MTT assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.1. Antiproliferative Effects ofγ-Tocotrienol, PPARγ Agonists (Rosiglitazone and Troglitazone), and PPARγ Antagonists (GW9662 and T0070907) Treatment with 3–6μM γ-tocotrienol, 1.6–12 μM rosiglitazone, 6.4–25 μM troglitazone, 1.6–6.4 μM GW9662, or 6.4–25 μM T0070907 was found to significantly inhibit growth of MCF-7 cells in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(a)). 
Similarly, treatment with 4–8 μM γ-tocotrienol, 6.4–25 μM rosiglitazone, 3.2–50 μM troglitazone, 3.2–12 μM GW9662, and 12–50 μM T0070907 significantly inhibited MDA-MB-231 cell growth in a dose-responsive manner as compared to cells in the vehicle-treated control group (Figure 1(b)).Antiproliferative effects ofγ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were plated at a density of 5×104 (6 wells per group) in 24-well culture plates and exposed to treatment media for a 4-day period. Afterwards viable cell number was determined using MTT colorimetric assay. MDA-MB-231 cells were plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards viable cell number was determined using MTT colorimetric assay. Vertical bars indicate mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.2. Antagonistic Effects of PPARγ Agonist Rosiglitazone and Troglitazone on the Antiproliferative Effects of γ-Tocotrienol Treatment with 1–6μM γ-tocotrienol alone significantly inhibited growth of MCF-7 (Figure 2(a)) and MDA-MB-231 (Figure 2(b)) breast cancer cells after a 4-day treatment period. However, the growth inhibitory effects of 1–4 μM γ-tocotrienol on MCF-7 cells were reversed when given in combination with 3.2 μM rosiglitazone or troglitazone (Figure 2(a), Top and Bottom). A similar, but less pronounced, reversal in 3–6 μM γ-tocotrienol-induced growth inhibitory effects on MDA-MB-231 breast cancer cells was observed when used in combination with 6.4 μM rosiglitazone or troglitazone (Figure 2(b), Top and Bottom).Effects ofγ-tocotrienol, PPARγ agonist rosiglitazone and troglitazone treatment alone or in combination on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. 
MCF-7 cells were initially plated at a density of 5×104 (6 wells per group) in 24-well plates and (b) MDA-MB-231 were initially plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls and P#<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. (a) (b) ## 3.3. Enhancement ofγ-Tocotrienol-Induced Antiproliferative Effects When Given in Combination with PPARγ Antagonist GW9662 or T0070907 The growth inhibitory effects of 1–4μM γ-tocotrienol was significantly enhanced when given in combination with a subeffective dose (3.2 μM) of the PPARγ antagonist, GW9662, in MCF-7 breast cancer cells (Figure 3(a), Top). A slight, but insignificant enhancement of the growth inhibitory effects 1–4 μM γ-tocotrienol was observed when combined with a subeffective dose (3.2 μM) of the PPARγ antagonist, T0070907, in MCF-7 breast cancer cells (Figure 3(a), Bottom). In MDA-MB-231 cells, 0.5–3 μMγ-tocotrienol was used in combination with 6.4 μM of the PPARγ antagonists, GW9662 (Figure 3(b), Top) or T0070907 (Figure 3(b), Bottom) and was found to significantly enhanced the growth inhibitory effects of these agents. Higher dose ranges of γ-tocotrienol in combination with these same doses of PPARγ antagonists resulted in a complete suppression in breast cancer cell growth such that viable cell number was undetectable using the MTT assay (data not shown).Effects ofγ-tocotrienol, PPARγ antagonists GW9662 and T0070907 treatment alone or in combination on growth of (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. 
MCF-7 cells were initially plated at a density of 5×104 (6 wells per group) in 24-well plates and (b) MDA-MB-231 were initially plated at a density of 1×104 (6 wells per group) in 96-well culture plates and exposed to treatment media for a 4-day period. Afterwards, viable cell number was determined using MTT colorimetric assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. P*<0.05 as compared with vehicle-treated controls and P#<0.05 as compared to their corresponding control treated with γ-tocotrienol alone. (a) (b) ## 3.4. Isobologram Analysis of Combined Treatment Effects ofγ-Tocotrienol with PPARγ Agonists and Antagonists Combined treatment ofγ-tocotrienol with PPARγ agonists, rosiglitazone, and troglitazone was found to be statistically antagonistic on MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cell growth, as evidenced by the location of the data point in the isobologram being well above the line defining additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with PPARγ antagonists, GW9662, and T0070907 were found to be statistically synergistic in both MCF-7 (Figure 4(a)) and MDA-MB-231 (Figure 4(b)) breast cancer cells, as evidenced by the location of the data point in the isobologram being well below the line defining additive effect.Isobologram analysis of combined treatment ofγ-tocotrienol and PPARγ ligands on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. Individual IC50 doses for γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), and PPARγ antagonists (GW9662 and T0070907) were calculated and then plotted on the x-axes and y-axes, respectively. The data point on the isobologram represents the actual doses of combined γ-tocotrienol and PPARγ ligands. 
Combined treatment of PPARγ agonists rosiglitazone and troglitazone with γ-tocotrienol was found to be antagonistic, as evidenced by the location of the data point in the isobologram being well above the line defining additive effect. In contrast, the growth inhibitory effect of combined treatment of γ-tocotrienol with PPARγ antagonists GW9662 and T0070907 was found to be synergistic, as evidenced by the location of the data point in the isobologram being well below the line defining additive effect for both cell lines. (a) (b) ## 3.5. Effects ofγ-Tocotrienol and PPARγ Agonist Rosiglitazone and Troglitazone Given Alone or in Combination on PPARγ and RXR Levels Western blot analysis shows that treatment with 2μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced a decreased expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 5(a) and 5(b)). Treatment with 3.2 μM rosiglitazone or troglitazone alone in MCF-7 cells or 6.4 μM rosiglitazone or troglitazone alone in MDA-MB-231 cells had little or no effect on PPARγ or RXR levels (Figures 5(a) and 5(b)). However, combined treatment with similar doses of γ-tocotrienol and rosiglitazone or troglitazone resulted in a significant increase in PPARγ and RXR expression in both MCF-7 and MDA-MB-231 breast cancer cell lines (Figures 5(a) and 5(b)).Western blot analysis ofγ-tocotrienol and PPARγ agonists (rosiglitazone and troglitazone) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×106 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM rosiglitazone, or 6.4 μM troglitazone alone or in combination. 
All cells were fed fresh treatment media every other day for 4-day incubation period. Afterwards, whole cell lysates were prepared for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.6. Effects ofγ-Tocotrienol and PPARγ Antagonist GW9662 and T0070907 Given Alone or in Combination on PPARγ and RXR Levels Western blot analysis shows that treatment with 2μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced decrease expression of PPARγ and RXR as compared to the vehicle-treated controls (Figures 6(a) and 6(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists, GW9662 or T0070907 alone had only slight effects on PPARγ and RXR expression (Figures 6(a) and 6(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant reduction in PPARγ and its heterodimer partner, RXR, in both MCF-7 and MDA-MB-231 cells as compared to vehicle treated controls (Figures 6(a) and 6(b)).Western blot analysis ofγ-tocotrienol and PPARγ antagonists (GW9662 and T0070907) given alone or in combination on the levels of PPARγ and RXR after a 4-day incubation period in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×106 cells/100 mm culture dish and treated with control or treatment media containing either 2 μM γ-tocotrienol, 3.2 μM GW9662, or T0070907 alone or in combination. 
MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for 4-day incubation period. Afterwards, whole cell lysates were prepared from each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.7. Effects ofγ-Tocotrienol and PPARγ Agonist Rosiglitazone and Troglitazone Given Alone or in Combination on PPRE Mediated Reporter Activity Luciferase assay shows that the treatment with 2μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight effects in the PPRE mediated reporter activity as compared to vehicle treated controls (Figures 7(a) and 7(b), Top and Bottom). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) with the PPARγ agonists, rosiglitazone, and troglitazone, or PPARγ antagonists, GW9662 and T0070907, alone, caused a slight, but insignificant decrease in PPRE mediated reporter activity (Figures 7(a) and 7(b), Top and Bottom). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone or troglitazone caused an increase in transcription activity of PPARγ in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Top). 
In contrast, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant decrease PPRE mediated reporter activity in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 7(a) and 7(b), Bottom).Luciferase assay was performed on (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. The cells were initially plated at a density of2×104 cells/well in 96-well plates. Cells were then transfected by adding 32 ng of PPRE X3-TK-luc and 3.2 ng of renilla luciferase plasmid in 0.8 μL of lipofectamine 2000 transfection reagent. Following a 6-h incubation period, MCF-7 cells were treated with control or treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM rosiglitazone, 0–3.2 μM troglitazone, 0–3.2 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. MDA-MB-231 cells were initially plated in a similar manner and treated with control or treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM rosiglitazone, 0–6.4 μM troglitazone, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for 4-day incubation period. Afterwards, cells were lysed with 75 μL of passive lysis buffer and treated according to manufacturer’s instructions using the dual-glo luciferase assay system. Results were calculated as raw luciferase units divided by raw renilla units. Vertical bars indicate PPRE mediated reporter activity ± SEM (arbitrary units) in each treatment group. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.8. Effects ofγ-Tocotrienol and PPARγ Agonist Rosiglitazone and Troglitazone Given Alone or in Combination on Coactivator Expression Western blot analysis shows that treatment with 2μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight, but insignificant effects in the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 8(a) and 8(b)). 
Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) with the PPARγ agonists, rosiglitazone and troglitazone alone caused a slight decrease in CBP p/300 and SRC-1, but not CBP C-20, expression (Figures 8(a) and 8(b)). However, combined treatment with these same doses of γ-tocotrienol and rosiglitazone and troglitazone cause a significant decrease in CBP p/300, CBP C-20, or SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle treated controls (Figures 8(a) and 8(b)).Western blot analysis ofγ-tocotrienol and PPARγ agonists (rosiglitazone and troglitazone) when used alone or in combination on the levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells. MCF-7 cells were initially plated at 1×106 cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM rosiglitazone, or 3.2 μM troglitazone alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing either 3 μM γ-tocotrienol or 6.4 μM rosiglitazone or 6.4 μM troglitazone alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole cell lysates were from cell in each treatment group and prepared for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate and the integrated optical density of each band was normalized with corresponding β-actin, as shown in bar graphs below their respective Western blot images. Vertical bars in the graph indicate the normalized integrated optical density of bands visualized in each lane ± SEM. P*<0.05 as compared with vehicle-treated controls. (a) (b) ## 3.9. 
Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on Coactivator Expression

Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone induced only slight effects on the expression of CBP p/300, CBP C-20, or SRC-1 as compared to the vehicle-treated controls (Figures 9(a) and 9(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 and T0070907 alone had only slight effects on CBP p/300, CBP C-20, or SRC-1 expression (Figures 9(a) and 9(b)). However, combined treatment with these same doses of γ-tocotrienol and GW9662 or T0070907 caused a significant increase in CBP p/300, CBP C-20, and SRC-1 expression in both MCF-7 and MDA-MB-231 cells as compared to vehicle-treated controls (Figures 9(a) and 9(b)).

Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 and T0070907) used alone or in combination to determine protein levels of CBP p/300, CBP C-20, and SRC-1 in (a) MCF-7 and (b) MDA-MB-231 cells. MCF-7 cells were initially plated at 1×10⁶ cells/100 mm culture plate and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole-cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis.
Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graphs indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

## 3.10. Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination on PI3K/Akt Mitogenic Signaling

Treatment with 2 μM γ-tocotrienol or with 3.2 μM of the PPARγ antagonists GW9662 or T0070907 alone had little or no effect on intracellular levels of Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 in MCF-7 cells after a 4-day treatment period (Figure 10(a)). However, combined treatment with the same doses of these agents caused a significant decrease in levels of phospho-Akt, PDK-1, and PI3K, but had little or no effect on total Akt, PTEN, and phospho-PTEN levels as compared to MCF-7 cells in the vehicle-treated control groups (Figure 10(a)). Similarly, treatment with 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone had little or no effect on intracellular levels of phospho-Akt (activated), PDK-1, PI3K, Akt, PTEN, and phospho-PTEN in MDA-MB-231 breast cancer cells, as compared to vehicle-treated controls (Figure 10(b)). Combined treatment with the same doses of these agents resulted in a significant decrease in phospho-Akt, PDK-1, and PI3K levels as compared to MDA-MB-231 breast cancer cells in the vehicle-treated control group (Figure 10(b)).

Western blot analysis of γ-tocotrienol and the PPARγ antagonists (GW9662 or T0070907) alone or in combination on Akt, phospho-Akt, PTEN, phospho-PTEN, PI3K, and PDK-1 levels in (a) MCF-7 and (b) MDA-MB-231 cells.
MCF-7 cells were initially plated at 1×10⁶ cells/100 mm culture dish and treated with control or treatment media containing 2 μM γ-tocotrienol, 3.2 μM GW9662, or 3.2 μM T0070907 alone or in combination. MDA-MB-231 cells were plated in a similar manner and treated with control or treatment media containing 3 μM γ-tocotrienol, 6.4 μM GW9662, or 6.4 μM T0070907 alone or in combination. All cells were fed fresh treatment media every other day for a 4-day incubation period. Afterwards, whole-cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. Scanning densitometric analysis was performed on all blots done in triplicate, and the integrated optical density of each band was normalized to the corresponding β-actin band, as shown in the bar graphs below their respective Western blot images. Vertical bars in the graphs indicate the normalized integrated optical density of bands visualized in each lane ± SEM. *P < 0.05 as compared with vehicle-treated controls.

Similar studies were conducted to determine the effects of combined γ-tocotrienol treatment with the PPARγ agonists rosiglitazone and troglitazone on PI3K/Akt mitogenic signaling in MCF-7 and MDA-MB-231 breast cancer cells. However, little or no differences in the relative levels of these mitogenic proteins were observed among the different treatment groups (data not shown), apparently because cells in the various treatment groups were actively proliferating at a near-maximal growth rate.

## 3.11.
Apoptotic Effects of γ-Tocotrienol and the PPARγ Antagonists GW9662 and T0070907 Given Alone or in Combination

In order to determine whether the growth inhibitory effects resulting from combined treatment with subeffective doses of γ-tocotrienol and PPARγ antagonists might result from a reduction in viable cell number, studies were conducted to determine the acute (24-h) and chronic (96-h) effects of these treatments on the initiation of apoptosis and cell viability. Western blot analysis shows that treatment with 2 μM (MCF-7 cells) or 3 μM (MDA-MB-231 cells) γ-tocotrienol alone had no effect on the expression of cleaved PARP, cleaved caspase-3, or viable cell number after a 24-h or 96-h treatment exposure (Figures 11(a) and 11(b)). Treatment with 3.2 μM (MCF-7 cells) or 6.4 μM (MDA-MB-231 cells) of the PPARγ antagonists GW9662 and T0070907, alone or in combination with their respective treatment dose of γ-tocotrienol, was also found to have no effect on the expression of cleaved PARP, cleaved caspase-3, or viable cell number 24 h after treatment exposure (Figures 11(a) and 11(b)). However, treatment with 20 μM γ-tocotrienol, a dose previously shown to induce apoptosis in mammary cancer cells [13, 14] and used as an apoptosis-inducing positive control in these experiments, was found to induce a large increase in cleaved PARP and cleaved caspase-3 levels, and a corresponding decrease in viable cell number, in both MCF-7 and MDA-MB-231 breast cancer cells 24 h following treatment exposure (Figures 11(a) and 11(b)). The positive apoptosis control treatment of 20 μM γ-tocotrienol was not included in the 96-h treatment exposure experiment because, by the end of this experiment, no viable cells remained in this treatment group.

Apoptotic effects of γ-tocotrienol and the PPARγ antagonists (GW9662 or T0070907) alone or in combination on caspase-3 and cleaved PARP levels in (a) MCF-7 and (b) MDA-MB-231 human breast cancer cells.
For Western blot studies, MCF-7 and MDA-MB-231 cells were initially plated at 1×10⁶ cells/100 mm culture dish and maintained on control media for a 3-day culture period. Afterwards, cells were divided into the various treatment groups, media was removed, and cells were exposed to their respective treatment media for a 24-h treatment period. In addition, cells were exposed to their respective treatment media for a 96-h treatment period, during which fresh media was added every other day. MCF-7 cells were exposed to treatment media containing 0–2 μM γ-tocotrienol, 0–3.2 μM GW9662, or 0–3.2 μM T0070907 alone or in combination, whereas MDA-MB-231 cells were exposed to treatment media containing 0–3 μM γ-tocotrienol, 0–6.4 μM GW9662, or 0–6.4 μM T0070907 alone or in combination. Afterwards, whole-cell lysates were prepared from cells in each treatment group for subsequent separation by polyacrylamide gel electrophoresis (50 μg/lane) followed by Western blot analysis. In parallel studies, (a) MCF-7 cells were plated at a density of 5×10⁴ (6 wells per group) in 24-well culture plates, whereas (b) MDA-MB-231 cells were plated at a density of 1×10⁴ (6 wells per group) in 96-well culture plates and exposed to the same treatments as described above. After a 24-h treatment exposure, viable cell number in all treatment groups was determined using the MTT assay. Vertical bars indicate the mean cell count ± SEM in each treatment group. *P < 0.05 as compared with vehicle-treated controls.

## 4. Discussion

Results of these studies demonstrate that, when given alone, treatment with γ-tocotrienol, PPARγ agonists (rosiglitazone and troglitazone), or PPARγ antagonists (GW9662 and T0070907) all induced a significant dose-responsive inhibition in the growth of MCF-7 and MDA-MB-231 human breast cancer cells in culture.
However, when used in combination, treatment with low doses of PPARγ agonists was found to reverse, whereas treatment with low doses of PPARγ antagonists was found to synergistically enhance, the antiproliferative effects of γ-tocotrienol. Additional studies determined that the synergistic inhibition of MCF-7 and MDA-MB-231 tumor cell growth resulting from combined low-dose treatment of γ-tocotrienol with PPARγ antagonists was associated with a reduction in PPARγ, PPRE-mediated reporter activity, and RXR, an increase in PPARγ coactivator expression, and a corresponding suppression of PI3K/Akt mitogenic signaling. Conversely, the enhancement of MCF-7 and MDA-MB-231 tumor cell growth resulting from combined low-dose treatment of γ-tocotrienol with PPARγ agonists was associated with an increase in PPARγ, PPRE-mediated reporter activity, and RXR, a decrease in PPARγ coactivator expression, and a corresponding restoration of EGF-dependent PI3K/Akt mitogenic signaling as compared to the vehicle-treated control group. Taken together, these findings demonstrate that combined treatment of γ-tocotrienol with PPARγ antagonists displays synergistic anticancer activity and may provide some benefit in the treatment of human breast cancer. These findings also demonstrate the importance of matching complementary anticancer agents for use in combination therapy, because a mismatch may result in an antagonistic and undesirable therapeutic response.

Previous investigations have shown that both PPARγ agonists and antagonists act as effective anticancer agents [28, 29]. The role of PPARγ agonists as anticancer agents has been well characterized in the treatment of colon, gastric, and lung cancer [3, 11], whereas PPARγ antagonists have been shown to induce potent antiproliferative effects in many hematopoietic and epithelial cancer cell lines [11, 28]. Results of the present study confirm and extend these previous findings.
Dose-response studies showed that treatment with either PPARγ agonists or antagonists significantly inhibited the growth of human MCF-7 and MDA-MB-231 breast cancer cells in culture. Furthermore, treatment-induced antiproliferative effects were found to be more pronounced in MDA-MB-231 than in MCF-7 breast cancer cells, and these results are similar to those previously reported [28].

Numerous investigations have established that γ-tocotrienol acts as a potent anticancer agent that inhibits the growth of mouse [16, 30] and human [31, 32] breast cancer cells. Furthermore, studies have also shown that combined treatment of γ-tocotrienol with other traditional chemotherapies often results in an additive or synergistic inhibition of cancer cell growth and viability [16, 30]. The rationale for using tocotrienols in combination therapy is based on the principle that resistance to a single agent can be overcome with the use of multiple agents that display complementary anticancer mechanisms of action. Initial studies showed the additive anticancer effects of mixed tocotrienols and tamoxifen on the growth of the estrogen receptor-positive MCF-7 and the estrogen receptor-negative MDA-MB-435 cells [33], and these findings were later confirmed in other reports [34]. Recent studies have also shown synergistic anticancer effects of the combined use of γ-tocotrienol with statins [35–37], tyrosine kinase inhibitors [18, 38], COX-2 inhibitors [39, 40], and cMet inhibitors [41]. These studies concluded that combination therapy is most effective when the anticancer mechanism of action of γ-tocotrienol complements the mechanism of action of the other drug, and may provide significant health benefits in the prevention and/or treatment of breast cancer in women while avoiding the tumor resistance and toxic effects that are commonly associated with high-dose monotherapy.

The exact role of PPARγ in breast cancer cell proliferation and survival is not clearly understood.
Previous studies have suggested that PPARγ activation results in extensive accumulation of lipids and changes in mammary epithelial cell gene expression that promote a more differentiated and less malignant phenotype and attenuate breast cancer cell growth and progression [42, 43]. Other studies have shown that γ-tocotrienol enhances the expression of multiple forms of PPARs by selectively regulating PPAR target genes [21]. The antiproliferative effects of γ-tocotrienol have previously been hypothesized to be mediated by the action of γ-tocotrienol in stimulating PPARγ activation through increased production of the PPARγ ligand 15-lipoxygenase-2 in human prostate cancer cells [22]. However, findings in the present study using two distinct types of human breast cancer cell lines showed that low-dose treatment with γ-tocotrienol decreased PPARγ levels, whereas combined treatment of γ-tocotrienol with PPARγ agonists resulted in an elevation in PPARγ levels and a corresponding increase in breast cancer cell growth. These contradictory findings might be explained by differences in the cancer cell types and experimental models used to examine combination treatment effects in these different studies. Nevertheless, the present findings clearly demonstrate an antagonistic effect on breast cancer cell proliferation when cells are treated with the combination of γ-tocotrienol and PPARγ agonists, and provide strong evidence that increased expression of PPARγ is a negative indicator of breast cancer responsiveness to anticancer therapy. This hypothesis is further evidenced by the finding that PPARγ expression is elevated in breast cancer cells as compared to normal mammary epithelial cells [9, 44], and that mice genetically predisposed to developing mammary tumors constitutively express high levels of activated PPARγ as compared to control mice [9, 44].
It is also possible that the anticancer effects of high-dose treatment with PPARγ agonists may be mediated through PPARγ-independent mechanisms.

The present study also confirms and extends previous findings showing that treatment with PPARγ antagonists significantly inhibits the growth of breast cancer cells. Experimental results showed that PPARγ antagonists downregulate PPARγ activation and expression, and these effects were associated with enhanced responsiveness to anticancer therapy [45, 46]. However, the present study also shows that combined treatment of γ-tocotrienol with PPARγ antagonists induced a relatively large decrease in the transcriptional activity of PPARγ. This treatment was also shown to result in decreased expression of PPARγ and RXR, and these effects were associated with a significant decrease in breast cancer cell growth. PPARγ functions as a heterodimer with its obligate heterodimer partner, RXR. Like other nuclear hormone receptors, the PPARγ-RXR heterodimer recruits cofactor complexes, either coactivators or corepressors, to modulate its transcriptional activity [45]. Upon binding of a ligand to the heterodimer complex, corepressors are displaced and the receptor then associates with a coactivator molecule. These coactivators include SRC-1, CBP C-20, and the CBP homologue p/300 [47, 48]. The suppression of PPARγ transcription induced by combined treatment with γ-tocotrienol and PPARγ antagonists appears also to decrease the recruitment of coactivator molecules to available PPARγ-RXR heterodimers for translocation into the nucleus, ultimately resulting in an elevation of free coactivator levels in the cytoplasm. Taken together, these results suggest that breast cancer cells require PPARγ activation for their survival, and that treatments designed to reduce or inhibit PPARγ levels and/or activation may provide an effective strategy in the treatment of breast cancer.

PPARγ activity can be modulated by phosphorylation at multiple sites [49].
In addition, PPARγ ligands can reduce the activity of PI3K and its downstream target Akt [50]. Combined treatment of γ-tocotrienol with PPARγ antagonists was found to reduce PI3K, phosphorylated PDK-1 (active), and phosphorylated Akt (active) levels in MCF-7 and MDA-MB-231 breast cancer cells. Furthermore, these effects were not associated with an increase in the activity of PTEN, the phosphatase involved in the inactivation of PDK and Akt. These findings indicate that the antiproliferative effects of combined γ-tocotrienol and PPARγ antagonist treatment are mediated through a suppression of PI3K/Akt mitogenic signaling. These effects were found to be cytostatic in nature and not associated with a decrease in cell viability resulting from the initiation of apoptosis. Previous findings have also shown that treatment with PPARγ antagonists can cause a decrease in PI3K/Akt mitogenic signaling [51].

## 5. Conclusion

Results of these studies demonstrate that combined low-dose treatment with γ-tocotrienol and PPARγ antagonists acts synergistically to inhibit human breast cancer cell proliferation, and this effect appears to be mediated by a large reduction in PPARγ expression and a corresponding reduction in PI3K/Akt mitogenic signaling. Although high-dose treatment with PPARγ agonists was also found to inhibit human breast cancer cell growth, it is most likely that these effects are mediated through PPARγ-independent mechanisms, because the preponderance of experimental evidence strongly suggests that elevated PPARγ expression is an indicator of robust breast cancer cell growth and resistance to anticancer therapy, whereas reduced PPARγ expression is an indicator of decreased breast cancer proliferation and increased responsiveness to chemotherapeutic agents.
These findings also show that combination anticancer therapy does not always result in an additive or synergistic anticancer response, but can result in a paradoxical, antagonistic response, as was observed with the combined treatment of γ-tocotrienol with PPARγ agonists in MCF-7 and MDA-MB-231 human breast cancer cells. Understanding the intracellular mechanism of action of anticancer agents is critical for optimizing the therapeutic response. It is also clearly evident that the use of γ-tocotrienol in combination with PPARγ antagonists may have potential therapeutic value in the treatment of breast cancer in women.

---

*Source: 101705-2013-01-28.xml*
2013
# Acid Corrosion Inhibition and Adsorption Behaviour of Ethyl Hydroxyethyl Cellulose on Mild Steel Corrosion

**Authors:** I. O. Arukalam; I. O. Madu; N. T. Ijomah; C. M. Ewulonu; G. N. Onyeagoro

**Journal:** Journal of Materials (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101709

---

## Abstract

The corrosion inhibition of mild steel in 1.0 M H2SO4 solution by ethyl hydroxyethyl cellulose (EHEC) has been studied in relation to the concentration of the additive using weight loss measurements, EIS, polarization, and quantum chemical calculation techniques. The results indicate that EHEC inhibited the corrosion reaction in the acid medium and that inhibition efficiency increased with EHEC concentration. A further increase in inhibition efficiency is observed in the presence of iodide ions, due to a synergistic effect. Impedance results reveal that EHEC is adsorbed on the corroding metal surface. Adsorption followed a modified Langmuir isotherm, with very high negative values of the free energy of adsorption (ΔG_ads). The polarization data indicate that the inhibitor was of mixed type, with a predominant effect on the cathodic partial reaction. The frontier molecular orbitals, HOMO (the highest occupied molecular orbital) and LUMO (the lowest unoccupied molecular orbital), as well as the local reactivity of the EHEC molecule, were analyzed theoretically using density functional theory to explain the adsorption characteristics at a molecular level. The theoretical predictions showed good agreement with experimental results.

---

## Body

## 1. Introduction

Organic compounds containing polar functional groups such as nitrogen, sulphur, and/or oxygen in a conjugated system have been reported to be effective as corrosion inhibitors for steel [1–8]. Some of these organic compounds are polymeric in nature and therefore possess multiple active centres.
The study of corrosion inhibition by polymers has increased in recent times. Polymers are employed as corrosion inhibitors because their many adsorption centres help them form complexes with metal ions. The complexes formed are adsorbed on the metal surface to form a barrier film which separates the metal surface from the corrosive agents present in the aggressive solution [9–14]. The effectiveness of inhibition by the adsorbed inhibitor system is determined by the energy released on forming the metal-inhibitor bond compared to the corresponding changes when the pure acid reacts with the metal [15].

Some authors have reported on the effectiveness of polymeric corrosion inhibitors [16–20]. In their accounts, the inhibitive power of these polymers is related structurally to the cyclic rings and heteroatoms which are the major active centres of adsorption.

To support experimental studies, theoretical calculations are conducted to provide a molecular-level understanding of the observed experimental behaviour. The major driving force of the quantum chemical work here is to understand and explain the function of ethyl hydroxyethyl cellulose in molecular terms. Among quantum chemical methods for the evaluation of corrosion inhibitors, density functional theory (DFT) has shown significant promise [21–23] and appears adequate for pointing out the changes in electronic structure responsible for inhibitory action. The ground-state geometry of the inhibitor, as well as the nature of its frontier molecular orbitals, HOMO (the highest occupied molecular orbital) and LUMO (the lowest unoccupied molecular orbital), is involved in determining its inhibitory activity [24, 25].

The present study appraises the inhibitive capability of ethyl hydroxyethyl cellulose (EHEC) on mild steel corrosion in 1.0 M H2SO4 solution using weight loss measurements and quantum chemical calculation techniques.

## 2.
Materials and Methods

### 2.1. Sample Preparation

Tests were performed on mild steel specimens of the following percentage chemical composition: Si: 0.02; C: 0.05; Mn: 0.18; Cu: 0.02; Cr: 0.02; and the remainder Fe. The steel was machined into test coupons of dimensions 3 × 2 × 0.05 cm, and a small hole was drilled at one end of each coupon to enable suspension in the test solution in the beaker. The metal specimens were polished with fine emery paper, degreased, and cleaned as described elsewhere [26, 27]. EHEC, sourced from the Sigma-Aldrich chemical company, was used without further purification at concentrations of 0.5, 1.0, 1.5, 2.0, and 2.5 g/L. The blank sulphuric acid solution was prepared at a concentration of 1.0 M H2SO4. Potassium iodide (KI) from BDH Laboratory Supplies was used; 0.5 g/L KI was prepared and added to each of the solutions containing the additive.

### 2.2. Weight Loss Measurements

Weight loss experiments were conducted on test coupons under total immersion conditions in 200 mL of test solutions at ambient temperature, 28 ± 1°C. The pre-cleaned and weighed coupons were suspended in beakers containing the solutions using glass rods and hooks. All tests were made in aerated solutions and were run three times to ensure reproducibility. To determine weight loss with respect to time, the coupons were retrieved from the test solutions at 24-h intervals progressively for 120 h (5 days). At the end of the tests, the weight loss was taken to be the mean value of the difference between the initial and final weights of the coupons for the three determinations at a given time.
The corrosion rates of mild steel in 1.0 M H2SO4 solution and in the acid solution containing the additive, EHEC, were calculated from the expression

(1) R_c (mm/y) = 87,600 ΔW / (ρ A t),

where ΔW is the weight loss in grams, ρ is the density of mild steel in g/cm³, A is the surface area of the test coupon in cm², and t is the period of exposure in the test solution in hours.

### 2.3. Electrochemical Experiments

Electrochemical experiments were performed using a VERSASTAT 3 Advanced Electrochemical System operated with V3 Studio electrochemical software. A conventional three-electrode glass cell was used for the experiments. Test coupons with 1 cm² exposed surface area were used as the working electrode and a graphite rod as the counterelectrode. The reference electrode was a saturated calomel electrode (SCE), connected via a Luggin capillary. The working electrode was immersed in a test solution for 30 minutes to attain a stable open circuit potential prior to electrochemical measurements. All experiments were undertaken in 300 mL of stagnant aerated solutions at 29 ± 1°C. Each test was run in triplicate to verify the reproducibility of the systems. Electrochemical impedance spectroscopy (EIS) measurements were made at the corrosion potential (E_corr) over a frequency range of 100 kHz–10 mHz, with a signal amplitude perturbation of 5 mV. Spectra analyses were performed using Zsimpwin software. Potentiodynamic polarization studies were carried out in the potential range −250 to +250 mV at a scan rate of 0.33 mV s⁻¹. All theoretical quantum chemical calculations were performed using the density functional theory (DFT) electronic structure programs Forcite and DMol3, as contained in the Materials Studio 4.0 software.
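The gravimetric procedure of Eq. (1) reduces to a short computation. A minimal sketch, assuming a typical handbook density for mild steel (about 7.86 g/cm³) and an illustrative coupon area and set of weights that are not taken from the paper:

```python
MILD_STEEL_DENSITY = 7.86  # g/cm^3, typical handbook value (assumption)

def mean_weight_loss(initial_weights, final_weights):
    """Mean weight loss (g) over replicate coupons, as in the
    triplicate gravimetric runs described above."""
    losses = [w0 - w1 for w0, w1 in zip(initial_weights, final_weights)]
    return sum(losses) / len(losses)

def corrosion_rate_mm_per_year(delta_w_g, area_cm2, hours,
                               density_g_cm3=MILD_STEEL_DENSITY):
    """Eq. (1): R_c (mm/y) = 87600 * dW / (rho * A * t)."""
    return 87600.0 * delta_w_g / (density_g_cm3 * area_cm2 * hours)

# Illustrative 24-h run on a 3 cm x 2 cm coupon (both faces, ~12 cm^2)
dw = mean_weight_loss([10.512, 10.498, 10.505], [10.447, 10.430, 10.438])
rate = corrosion_rate_mm_per_year(dw, area_cm2=12.0, hours=24.0)
```

Repeating this for each retrieval interval (24 h to 120 h) would reproduce the kind of time series reported in Table 1.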
## 3. Results and Discussion

### 3.1. Corrosion Rates

The corrosion rates of metals and alloys in aggressive solutions can be determined using different electrochemical and nonelectrochemical techniques. The mechanism of anodic dissolution of iron in acidic solutions corresponds to [28]

(2a) Fe + OH⁻ ⇌ FeOH_ads + e⁻

(2b) FeOH_ads → FeOH⁺ + e⁻

(2c) FeOH⁺ + H⁺ ⇌ Fe²⁺ + H₂O.

As a consequence of these reactions, and given the high solubility of the corrosion products, the metal loses weight in the solution. The results of the gravimetric determination of the mild steel corrosion rate as a function of time and concentration of the additive are given in Table 1.

Table 1: Calculated values of corrosion rate of mild steel in 1.0 M H2SO4 in the absence and presence of EHEC and KI.
| System | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
|---|---|---|---|---|---|
| Blank | 25.27 | 22.36 | 20.16 | 19.03 | 17.99 |
| 0.5 g/L EHEC | 14.50 | 12.07 | 10.93 | 10.39 | 10.10 |
| 0.5 g/L EHEC + KI | 12.20 | 10.59 | 9.50 | 9.27 | 9.06 |
| 1.0 g/L EHEC | 12.92 | 10.78 | 9.74 | 9.23 | 8.97 |
| 1.0 g/L EHEC + KI | 9.39 | 7.97 | 7.25 | 7.17 | 7.19 |
| 1.5 g/L EHEC | 13.22 | 11.43 | 10.31 | 9.79 | 9.51 |
| 1.5 g/L EHEC + KI | 9.85 | 8.22 | 7.37 | 7.16 | 7.05 |
| 2.0 g/L EHEC | 11.84 | 10.08 | 9.20 | 8.79 | 8.59 |
| 2.0 g/L EHEC + KI | 10.47 | 8.37 | 7.31 | 6.81 | 6.60 |
| 2.5 g/L EHEC | 11.40 | 9.92 | 9.35 | 8.80 | 8.59 |
| 2.5 g/L EHEC + KI | 10.06 | 8.21 | 7.70 | 7.56 | 7.48 |

Corrosion rates are given in mm/y.

These results show that the corrosion rate of mild steel in 1.0 M H2SO4 decreases with time both in the systems with additive and in the blank acid solution. The effects of the addition of different concentrations of EHEC on corrosion rates in the acid solution after 5 days of exposure are shown in Table 1. EHEC is observed to reduce the corrosion rate even at the lowest studied concentration of 0.5 g/L, indicating inhibition of the corrosion reaction. This effect becomes more pronounced with increasing concentration of the inhibitor, which suggests that the inhibition process is sensitive to the concentration (amount) of the additive present.

### 3.2. Inhibition Efficiency

A quantitative evaluation of the effect of EHEC on mild steel corrosion in 1.0 M H2SO4 solution was achieved by appraisal of the inhibition efficiency (I%) given by

(3) I% = [1 − R_cinh / R_cblk] × 100,

where R_cinh and R_cblk are the corrosion rates in the inhibited and uninhibited solutions, respectively. The values obtained for the inhibition efficiency are given in Table 2.

Table 2: Calculated values of inhibition efficiency of mild steel in 1.0 M H2SO4 in the presence of EHEC and KI.
| System | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
|---|---|---|---|---|---|
| 0.5 g/L EHEC | 42.62 | 46.02 | 45.78 | 45.40 | 43.86 |
| 0.5 g/L EHEC + KI | 51.72 | 52.64 | 52.88 | 51.29 | 49.64 |
| 1.0 g/L EHEC | 48.87 | 51.79 | 51.69 | 51.50 | 50.14 |
| 1.0 g/L EHEC + KI | 62.84 | 64.36 | 64.04 | 62.32 | 60.03 |
| 1.5 g/L EHEC | 47.69 | 48.88 | 48.86 | 48.55 | 47.14 |
| 1.5 g/L EHEC + KI | 61.02 | 63.24 | 63.44 | 62.38 | 60.81 |
| 2.0 g/L EHEC | 53.15 | 54.92 | 54.37 | 53.81 | 52.25 |
| 2.0 g/L EHEC + KI | 58.57 | 62.57 | 63.74 | 64.21 | 63.31 |
| 2.5 g/L EHEC | 54.89 | 55.64 | 53.62 | 53.76 | 52.25 |
| 2.5 g/L EHEC + KI | 60.19 | 63.28 | 61.81 | 60.27 | 58.42 |

All entries are inhibition efficiencies (I%). The plots show that I% increased progressively with the concentration of the additive (Figure 1). Consistent with the observed trend of inhibition, organic inhibitors are known to decrease metal dissolution by forming a protective adsorption film that blocks the metal surface and separates it from the corrosive medium [29–32]. Consequently, in inhibited solutions the corrosion rate is indicative of the number of free corroding sites remaining after some sites have been effectively blocked by inhibitor adsorption. It has also been suggested [33, 34] that anions such as Cl⁻, I⁻, SO₄²⁻, and S²⁻ may participate in forming reaction intermediates on the corroding metal surface, which either inhibit or stimulate corrosion; the suppression or stimulation of the dissolution process is initiated by the specific adsorption of the anion on the metal surface.

Figure 1: Variation of inhibition efficiency with concentration of EHEC.

### 3.3. Effect of Halide Ion Addition

To further clarify the mode of inhibitor adsorption, experiments were conducted in the presence of iodide ions, which are strongly adsorbed on the surface of mild steel in acidic solution and facilitate the adsorption of organic cation-type inhibitors by acting as intermediate bridges between the positive end of the organic cation and the positively charged metal surface. Specific adsorption of iodide ions on the metal surface leads to recharging of the electrical double layer [35].
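The efficiencies in Table 2 can be reproduced directly from the corrosion rates of Table 1 via (3); a minimal sketch in Python, using the day-1 values from Table 1:

```python
# Inhibition efficiency via eq. (3): I% = [1 - CR_inh/CR_blank] * 100.
# Day-1 corrosion rates (mm/y) from Table 1.
cr_blank = 25.27
day1_rates = {
    "0.5 g/L EHEC": 14.50,
    "0.5 g/L EHEC + KI": 12.20,
    "1.0 g/L EHEC": 12.92,
    "1.0 g/L EHEC + KI": 9.39,
}

def inhibition_efficiency(cr_inh: float, cr_blank: float) -> float:
    """Percentage inhibition efficiency from inhibited and blank corrosion rates."""
    return (1.0 - cr_inh / cr_blank) * 100.0

for system, cr in day1_rates.items():
    print(f"{system}: I% = {inhibition_efficiency(cr, cr_blank):.2f}")
# 0.5 g/L EHEC gives 42.62, matching the day-1 entry in Table 2.
```

The same calculation applied to the remaining rows and days reproduces all of Table 2.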
The inhibitor is then drawn into the double layer by electrostatic interaction with the adsorbed I⁻ ions, forming ion pairs on the metal surface and thereby increasing the degree of surface coverage:

(4a) I⁻(sol) → I⁻(ads)
(4b) I⁻(ads) + Inh⁺(sol) → [I⁻–Inh⁺](ads)

Thus, an improvement of I% on addition of KI is an indication of the participation of protonated inhibitor species in the adsorption process (Figure 2). Table 2 illustrates the effect of adding 0.5 g/L KI to the different concentrations of EHEC on the corrosion of mild steel in 1.0 M H2SO4 solution.

Figure 2: Synergistic effect between KI and EHEC on the variation of inhibition efficiency with time.

### 3.4. Adsorption Consideration

Basic parameters describing the nature and mode of adsorption of an organic inhibitor on the corroding metal surface can be obtained from adsorption isotherms, which depend on the degree of surface coverage, θ. The observed inhibition of the corrosion of mild steel in 1.0 M H2SO4 solution indicates a high degree of surface coverage. From a theoretical perspective, the adsorption route is regarded as a substitution process between the organic inhibitor in the aqueous solution (Inh_sol) and water molecules adsorbed at the metal surface (H2O_ads), as follows [36–38]:

(5) Inh(sol) + xH2O(ads) ⇌ Inh(ads) + xH2O(sol)

where x represents the number of water molecules replaced by one molecule of adsorbed inhibitor. The adsorption bond strength depends on the composition of the metal and the corrodent, the inhibitor structure, concentration, and orientation, as well as the temperature. Since EHEC can be protonated in the presence of strong acids, both cationic and molecular species must be considered when discussing the adsorption process of EHEC.
Figure 3 shows that the plot of C/θ versus C is linear, in agreement with the Langmuir equation [39]:

(6) C/θ = n/K_ads + nC

where C is the concentration of inhibitor and K_ads is the equilibrium constant of the adsorption-desorption process.

Figure 3: Langmuir isotherm for EHEC adsorption on mild steel surface in 1.0 M H2SO4.

In general, K_ads represents the adsorption power of the inhibitor molecule on the metal surface, and the positive values confirm the adsorbability of EHEC on the metal surface. The linear plots obtained in Figure 3 suggest that EHEC adsorption from 1.0 M H2SO4 solution follows the Langmuir isotherm, although the isotherm parameters indicate some deviation from ideal Langmuir behaviour: the slope deviates from unity (see the n values in Table 3) and the intercept on the y-axis is nonzero, which can be traced to limitations in the underlying assumptions. The results in fact imply that each EHEC molecule occupies n active corrosion sites on the mild steel surface in 1.0 M H2SO4 solution.

Table 3: Adsorption parameters from the modified Langmuir isotherm.

| Day | R² | n | K_ads | ΔG_ads (kJ mol⁻¹) |
|---|---|---|---|---|
| 1 | 0.990 | 1.702 | 4.515 | −45.381 |
| 2 | 0.988 | 1.692 | 5.795 | −58.246 |
| 3 | 0.993 | 1.772 | 7.982 | −80.228 |
| 4 | 0.993 | 1.764 | 7.412 | −74.498 |
| 5 | 0.992 | 1.834 | 7.627 | −76.659 |

The free energy of adsorption (ΔG_ads), evaluated via (7) from the K_ads values obtained from the intercepts of the Langmuir plots, is also given in Table 3:

(7) ΔG_ads = −RT ln(55.5 K_ads)

where R and T are the universal gas constant and absolute temperature, respectively; the other parameter retains its previous meaning.
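The isotherm parameters in Table 3 can be reproduced approximately by a least-squares fit of (6); a sketch using the day-1 data and assuming θ = I%/100 (small differences from the tabulated values come from rounding of the reported efficiencies):

```python
# Fit C/theta = n/K_ads + n*C (modified Langmuir, eq. (6)) by ordinary least squares.
C = [0.5, 1.0, 1.5, 2.0, 2.5]               # EHEC concentration, g/L
theta = [0.4262, 0.4887, 0.4769, 0.5315, 0.5489]  # day-1 I%/100 from Table 2

y = [c / t for c, t in zip(C, theta)]        # C/theta values
mC = sum(C) / len(C)
my = sum(y) / len(y)
# Slope of the regression line is n; intercept is n/K_ads.
n = sum((c - mC) * (v - my) for c, v in zip(C, y)) / sum((c - mC) ** 2 for c in C)
intercept = my - n * mC
K_ads = n / intercept

print(f"n ~ {n:.3f}, K_ads ~ {K_ads:.3f}")
# Close to the day-1 entries of Table 3 (n = 1.702, K_ads = 4.515);
# Delta G_ads then follows from eq. (7).
```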
The large negative ΔG_ads values imply that the adsorption of EHEC on the mild steel surface is favourable from a thermodynamic point of view and indicate that the inhibitor is strongly adsorbed, covering both anodic and cathodic regions.

In addition, it is important to note that adsorption free energies of −20 kJ mol⁻¹ or less negative are associated with an electrostatic interaction between charged molecules and the charged metal surface (physical adsorption), whereas values of −40 kJ mol⁻¹ or more negative involve charge sharing or transfer from the inhibitor molecules to the metal surface to form a coordinate covalent bond (chemical adsorption) [40].

### 3.5. Impedance Measurements

Electrochemical impedance spectroscopy analyses provide insight into the kinetics of electrode processes as well as the surface characteristics of the electrochemical system of interest. Figure 4 presents the impedance spectra measured at E_corr after 30 minutes of immersion, exemplified by the Nyquist plots obtained for mild steel in 1.0 M H2SO4 solution in the absence and presence of EHEC and EHEC + KI. The observed increase in the impedance parameters in the inhibited solutions is associated with the corrosion inhibiting effect of EHEC. The Nyquist plots for all systems generally take the form of a single depressed semicircle, corresponding to one time constant, although a slight sign of low-frequency inductive behaviour can be discerned. The depression of the capacitive semicircle, with its centre below the real axis, suggests a distribution of the capacitance due to inhomogeneities of the electrode surface.

Figure 4: Nyquist impedance spectra of mild steel corrosion in 1.0 M H2SO4 in the absence and presence of EHEC and EHEC + KI.

The presence of a single time constant may be attributed to the short exposure time in the corrosive medium, which is not adequate to reveal degradation of the substrate [41].
A polarization resistance (R_p) can be extracted from the intercept of the low-frequency loop with the real axis of impedance (Z_re) in the Nyquist plots, since the inductive loop is negligible. The value of R_p is very close to that of the charge transfer resistance R_ct, which can be extracted from the diameter of the semicircle [41, 42]. The impedance spectra for the Nyquist plots were thus adequately analyzed by fitting to the equivalent circuit model R_s(Q_dl R_ct), which has previously been used to model the mild steel/acid solution interface [41, 43].

The values of the impedance parameters derived from the Nyquist plots using the selected equivalent circuit model R_s(Q_dl R_ct) are given in Table 4. The terms Q_dl and n represent, respectively, the magnitude and exponent of the constant phase element (CPE) of the double layer. The CPE, with impedance given by Z_CPE = Q⁻¹(jω)⁻ⁿ, where j is the imaginary unit and ω is the angular frequency in rad s⁻¹, is used in place of a capacitor to compensate for deviations from ideal dielectric behaviour associated with the nonhomogeneity of the electrode surface. Introduction of EHEC into the acid corrodent leads to an increase in R_ct and a reduction of Q_dl, indicating a hindering of the corrosion reaction. The decrease in Q_dl values, which normally results from a decrease in the dielectric constant and/or an increase in the double layer thickness, is due to inhibitor adsorption at the metal/electrolyte interface [44]. This implies that EHEC reduces the corrosion rate of the mild steel specimen by virtue of adsorption at the metal/electrolyte interface, a fact that has been previously established.
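The behaviour of the R_s(Q_dl R_ct) circuit can be simulated directly from its definition; a sketch in which R_ct and n are taken from the EHEC row of Table 4 while R_s and Q are invented purely for illustration:

```python
import math

def z_cell(freq_hz, Rs, Rct, Q, n):
    """Impedance of the Rs(Q_dl Rct) circuit: Rs in series with a CPE
    (Z_CPE = Q^-1 (j*w)^-n) in parallel with Rct."""
    w = 2 * math.pi * freq_hz
    y_cpe = Q * (1j * w) ** n            # CPE admittance
    return Rs + 1.0 / (y_cpe + 1.0 / Rct)

# Illustrative parameters (Rs and Q are assumptions, not fitted values).
Rs, Rct, Q, n = 2.0, 37.5, 5.1e-4, 0.876

z_hf = z_cell(1e5, Rs, Rct, Q, n)        # 100 kHz: Z approaches Rs
z_lf = z_cell(1e-2, Rs, Rct, Q, n)       # 10 mHz: Z approaches Rs + Rct
print(round(z_hf.real, 1), round(z_lf.real, 1))
```

The two limits illustrate why R_ct can be read off as the diameter of the Nyquist semicircle: the real part of Z runs from R_s at high frequency to roughly R_s + R_ct at low frequency.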
A quantitative measure of the protective effect can be obtained by comparing the values of the charge transfer resistance in the absence (R_ct) and presence (R_ct,inh) of the inhibitor as follows:

(8) I% = [1 − R_ct/R_ct,inh] × 100

where R_ct,inh and R_ct are the charge transfer resistances of the inhibited and uninhibited systems, respectively.

Table 4: Impedance and polarization parameters for mild steel in 1.0 M H2SO4 in the presence and absence of EHEC and EHEC + KI (first four data columns: impedance; last three: polarization).

| System | R_ct | n | I.E. % | C_dl (μF cm⁻²) × 10⁻³ | E_corr (mV vs. SCE) | R_p (Ω cm²) | I.E. % |
|---|---|---|---|---|---|---|---|
| Blank | 11.931 | 0.889 | – | 14.65 | −468.35 | 16.27 | – |
| EHEC | 37.502 | 0.876 | 68.19 | 5.13 | −477.87 | 58.04 | 71.97 |
| EHEC + KI | 133.37 | 0.854 | 91.05 | 2.41 | −489.10 | 177.51 | 90.83 |

The double layer capacitance values of the systems were also calculated, using the expression

(9) C_dl = 1/(2π f_max R_ct)

where f_max is the frequency of the maximum of the imaginary impedance. The obtained values of C_dl are presented in Table 4. A lower double layer capacitance suggests that less electric charge is stored, a consequence of the thicker adsorbed layer acting as a dielectric. The increase in R_ct values in the inhibited systems, which corresponds to an increase in the diameter of the Nyquist semicircle, confirms the corrosion inhibiting effect of EHEC and EHEC + KI; the effect is much more pronounced in the latter system, implying that KI synergistically enhances the corrosion inhibiting effect of EHEC.
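The I.E.% columns of Table 4 follow from (8) applied to the charge transfer resistances, and from the same ratio applied to the polarization resistances; a quick check:

```python
# Table 4 resistances as (blank, inhibited) pairs.
R_ct = {"EHEC": (11.931, 37.502), "EHEC + KI": (11.931, 133.37)}
R_p  = {"EHEC": (16.27, 58.04),  "EHEC + KI": (16.27, 177.51)}

def efficiency(r_blank: float, r_inh: float) -> float:
    """I% = [1 - R_blank/R_inh] * 100, eq. (8) for R_ct (same form for R_p)."""
    return (1.0 - r_blank / r_inh) * 100.0

for system in R_ct:
    print(system,
          f"impedance: {efficiency(*R_ct[system]):.2f}%",
          f"polarization: {efficiency(*R_p[system]):.2f}%")
# Reproduces the I.E.% columns of Table 4: 68.19/71.97 (EHEC)
# and 91.05/90.83 (EHEC + KI).
```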
In other words, lower C_dl values correspond to a reduced double layer capacitance which, according to the Helmholtz model (10), results from a decrease in the dielectric constant (ε) or an increase in the interfacial layer thickness (δ):

(10) C_dl = εε₀A/δ

where ε is the dielectric constant of the medium, ε₀ is the vacuum permittivity, A is the electrode area, and δ is the thickness of the interfacial layer.

Since adsorption of an organic inhibitor on a metal surface involves the replacement of adsorbed water molecules on the surface, the smaller dielectric constant of the organic molecule compared to water and the increased thickness of the interfacial layer due to inhibitor adsorption act simultaneously to reduce the double layer capacitance. This provides experimental evidence of the adsorption of EHEC on the mild steel surface. The significantly lower C_dl value of the EHEC + KI system supports the assertion that the iodide ion significantly enhances adsorption of EHEC at the metal/solution interface.

### 3.6. Polarization Measurements

Figure 5 shows the polarization curves for mild steel dissolution in 1.0 M H2SO4 solution in the absence and presence of EHEC and EHEC + KI. Introduction of EHEC and EHEC + KI into the acid solution shifted the corrosion potentials of both inhibited systems slightly in the negative direction and in both cases inhibited the anodic metal dissolution reaction as well as the cathodic hydrogen evolution reaction. Since E_corr is not altered significantly, the implication is that the corrosion inhibition process is under mixed control with a predominant cathodic effect. In addition, as observed in Figure 5, both the cathodic and anodic partial reactions are affected, as is evident from the decrease in the corrosion current densities. This implies that EHEC functioned as a mixed-type inhibitor in both systems, although the effect on the cathodic partial hydrogen evolution reaction is more marked.
Moreover, it has been reported [45] that when the displacement in E_corr is greater than 85 mV, the inhibitor can be regarded as a cathodic- or anodic-type inhibitor, whereas a displacement of less than 85 mV indicates a mixed-type inhibitor. In the present study, the corrosion potential (E_corr) in the presence of EHEC and EHEC + KI shifted by 9.52 and 20.75 mV, respectively, in the cathodic direction relative to the blank, which confirms that the inhibitor acts as a mixed-type inhibitor with a predominant cathodic effect.

Figure 5: Polarization curves of mild steel corrosion in 1.0 M H2SO4 in the absence and presence of EHEC and EHEC + KI.

Inhibition efficiency was calculated from the polarization data as follows:

(11) I% = [1 − R_p/R_p,inh] × 100

where R_p and R_p,inh are the polarization resistances of the uninhibited and inhibited systems, respectively. The calculated values are given in Table 4 and show that the inhibition efficiencies obtained from the impedance and polarization results are comparable. The data confirm the consistency of EHEC and EHEC + KI under the prevailing experimental conditions.

The cooperative effect between EHEC and KI in hindering the corrosion of mild steel in 1.0 M H2SO4 solution is also evident in both the Nyquist and the Tafel polarization plots. Addition of KI resulted in a significant increase in the diameter of the Nyquist semicircle, and hence in R_ct as well as I%, and a decrease in the corrosion current density of the Tafel polarization curves. The presence of iodide ions shifts E_corr further in the cathodic direction and further decreases the anodic and cathodic reaction kinetics. The mechanism of this synergistic effect has been described in detail in some reports [46]. The iodide ions are strongly chemisorbed on the corroding mild steel surface and facilitate EHEC adsorption by acting as intermediate bridges between the positively charged metal surface and the EHEC cations.
This stabilizes the adsorption of EHEC on the mild steel surface, leading to higher surface coverage. To account for the above observations, it should be recognized that the adsorption of an organic inhibitor on a corroding metal surface depends on factors such as the nature of, and surface charge on, the metal in the corrosive medium, as well as on the inhibitor structure. Consequently, more iodide ions are adsorbed on mild steel, which presents a more positively charged surface, giving rise to increased synergistic interactions with the protonated EHEC species and hence higher inhibition efficiencies.

### 3.7. Quantum Chemical Calculations

The effectiveness of inhibitors has been reported to correlate with quantum chemical parameters such as the energy of the HOMO (highest occupied molecular orbital), the energy of the LUMO (lowest unoccupied molecular orbital), and the energy gap between them, ΔE = E_LUMO − E_HOMO [47–49]. A high (less negative) E_HOMO is associated with the capacity of a molecule to donate electrons to an appropriate acceptor with an empty molecular orbital; this facilitates the adsorption process and therefore indicates good performance of a corrosion inhibitor [50]. E_LUMO, in turn, corresponds to the tendency of the molecule to accept electrons. On this basis, the calculated difference ΔE reflects the inherent electron donating ability of the molecule and measures the interaction of the inhibitor molecule with the metal surface.

According to the frontier molecular orbital theory of chemical reactivity, the transition of electrons is due to an interaction between the frontier orbitals, HOMO and LUMO, of the reacting species. The energy of the HOMO is directly related to the ionization potential and characterizes the susceptibility of the molecule toward attack by electrophiles. The energy of the LUMO is directly related to the electron affinity and characterizes the susceptibility of the molecule toward attack by nucleophiles.
The lower the value of E_LUMO, the stronger the electron accepting ability of the molecule.

The electronic structure of EHEC, the distribution of the frontier molecular orbitals, and the Fukui indices were modeled in order to establish the active sites as well as the local reactivity of the inhibiting molecule. This was achieved using the DFT electronic structure programs Forcite and DMol3, together with a Mulliken population analysis. The electronic parameters for the simulation included restricted spin polarization using the DND basis set and the Perdew-Wang (PW) local correlation density functional. The geometry optimized structure of EHEC, the HOMO and LUMO orbitals, the Fukui functions, and the total electron density are presented in Figure 6. In the EHEC molecule, the HOMO orbital is concentrated around the aromatic nucleus, which is the region of highest electron density and often the site of electrophilic attack, and represents the active centres with the greatest ability to bond to the metal surface. The LUMO orbital is concentrated around the ethoxy function and represents the site at which nucleophilic attack occurs.

Figure 6: Electronic properties of ethyl hydroxyethyl cellulose (EHEC) [C, grey; H, white; O, red]: (a) optimized structure; (b) total electron density; (c) HOMO orbital; (d) LUMO orbital; (e) Fukui function for nucleophilic attack; (f) Fukui function for electrophilic attack.

Local reactivity was analyzed by means of the Fukui indices to assess the active regions in terms of nucleophilic and electrophilic behaviour: the site for nucleophilic attack is where the value of f⁺ is maximal, while the site for electrophilic attack is controlled by the value of f⁻. The values of E_HOMO, E_LUMO, ΔE, and the Fukui functions are given in Table 5. Higher values of E_HOMO indicate a greater disposition of the molecule to donate electrons to a metal surface.
In the same way, low values of the energy gap ΔE afford good inhibition efficiency, since the energy required to remove an electron from the highest occupied orbital is then minimized [51]. These descriptors suggest that EHEC possesses good inhibiting potential, in agreement with the experimental findings.

Table 5: Calculated values of quantum chemical properties for the most stable conformation of EHEC.

| Property | EHEC |
|---|---|
| E_HOMO (eV) | −6.154 |
| E_LUMO (eV) | −2.323 |
| ΔE = E_LUMO − E_HOMO (eV) | 3.831 |
| Maximum f⁺ (Mulliken) | 0.015, O(12) |
| Maximum f⁻ (Mulliken) | 0.165, O(12) |
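For completeness, the gap entry in Table 5 is simply the difference of the two frontier orbital energies:

```python
# Frontier orbital energies of EHEC from Table 5 (eV).
E_HOMO = -6.154
E_LUMO = -2.323

delta_E = E_LUMO - E_HOMO  # energy gap, Delta E = E_LUMO - E_HOMO
print(f"Delta E = {delta_E:.3f} eV")  # 3.831 eV, as listed in Table 5
```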
The effects of addition of different concentrations of EHEC on corrosion rates in the acid solution after 5 days of exposure are shown in Table 1. EHEC is observed to reduce the corrosion rate at the studied concentration of 0.5 g/L EHEC, indicating inhibition of the corrosion reaction. This effect becomes more pronounced with increasing concentration of the inhibitor, which suggests that the inhibition process is sensitive to the concentration (amount) of the additive present. ## 3.2. Inhibition Efficiency A quantitative evaluation of the effect of EHEC on mild steel corrosion in 1.0 M H2SO4 solution was achieved from appraisal of the inhibition efficiency (I %) given by (3) I % = [ 1 - R cinh R cblk ] × 100 , where R cinh and R cblk are the corrosion rates in inhibited and uninhibited solutions, respectively. The values obtained for the inhibition efficiency are given in Table 2.Table 2 Calculated values of inhibition efficiency of mild steel in 1.0 M H2SO4 in the presence of EHEC and KI. System Inhibition efficiency (I%) Day 1 2 3 4 5 0.5 g/L EHEC 42.62 46.02 45.78 45.40 43.86 0.5 g/L EHEC + KI 51.72 52.64 52.88 51.29 49.64 1.0 g/L EHEC 48.87 51.79 51.69 51.50 50.14 1.0 g/L EHEC + KI 62.84 64.36 64.04 62.32 60.03 1.5 g/L EHEC 47.69 48.88 48.86 48.55 47.14 1.5 g/L EHEC + KI 61.02 63.24 63.44 62.38 60.81 2.0 g/L EHEC 53.15 54.92 54.37 53.81 52.25 2.0 g/L EHEC + KI 58.57 62.57 63.74 64.21 63.31 2.5 g/L EHEC 54.89 55.64 53.62 53.76 52.25 2.5 g/L EHEC + KI 60.19 63.28 61.81 60.27 58.42The plots show thatI % increased progressively with concentration of the additive (Figure 1). Following the observed trend of inhibition, organic inhibitors are known to decrease metal dissolution by forming a protective adsorption film which blocks the metal surface, separating it from the corrosive medium [29–32]. 
Consequently, in inhibited solutions, the corrosion rate is indicative of the number of free corroding sites remaining after some sites have been effectively blocked by inhibitor adsorption. It has been suggested [33, 34], however, that anions such as Cl - , I - , SO 4 2 -, and S 2 - may also participate in forming reaction intermediates on the corroding metal surface, which either inhibit or stimulate corrosion. It is important to recognize that the suppression or stimulation of the dissolution process is initiated by the specific adsorption of anion on the metal surface.Figure 1 Variation of inhibition efficiency with concentration of EHEC. ## 3.3. Effect of Halide Ion Addition To further clarify the modes of inhibitor adsorption, experiments were conducted in the presence of iodide ions, which are strongly adsorbed on the surface of mild steel in acidic solution and facilitate adsorption of organic cation-type inhibitors by acting as intermediate bridges between the positive end of the organic cation and the positively charged metal surface. Specific adsorption of iodide ions on the metal surface leads to recharging the electrical double layer [35]. The inhibitor is then drawn into the double layer by electrostatic interaction with the adsorbed I - ions, forming ion pairs on the metal surface which increases the degree of surface coverage:(4a) I - sol ⟶ I - ads (4b) I - ads + Inh + sol ⟶ [ I - - Inh + ] ads .Thus, an improvement ofI % on addition of KI is an indication of the participation of protonated inhibitor species in the adsorption process (Figure 2). Table 2 illustrates the effect of addition of 0.5 g/L KI to the different concentrations of EHEC on the corrosion of mild steel in 1.0 M H2SO4 solution.Figure 2 Synergistic effect between KI and EHEC on the variation of inhibition efficiency with time. ## 3.4. 
Adsorption Consideration Basic parameters which are descriptors of the nature and modes of adsorption of organic inhibitor on the corroding metal surface can be provided by adsorption isotherms which depend on the degree of surface coverage,θ. The observed inhibition of the corrosion of mild steel in 1.0 M H2SO4 solution indicates high degree of surface coverage. From a theoretical perspective, the adsorption route is regarded as a substitution process between the organic inhibitor in the aqueous solution ( Inh sol ) and water molecules adsorbed at the metal surface ( H 2 O ads ) as follows [36–38]: (5) Inh ( sol ) + x H 2 O ( ads ) ⟺ Inh ( ads ) + x H 2 O ( sol ) , where x represents the number of water molecules replaced by one molecule of adsorbed inhibitor. The adsorption bond strength is dependent on the composition of the metal and corrodent, inhibitor structure, concentration, and orientation, as well as temperature. Since EHEC can be protonated in the presence of strong acids, it is quite necessary to consider both cationic and molecular species when discussing the adsorption process of EHEC. Figure 3 shows the plot of C / θ versus C to be linear, which is in agreement with the Langmuir equation [39]: (6) C θ = n K ads + n C , where C is the concentration of inhibitor and K ads is the equilibrium constant for the adsorption-desorption process.Figure 3 Langmuir isotherm for EHEC adsorption on mild steel surface in 1.0 M H2SO4.In general,K ads represents the adsorption power of the inhibitor molecule on the metal surface. The positive values confirm the adsorbability of EHEC on the metal surface. The linear plots obtained in Figure 3 suggest that EHEC adsorption from 1.0 M H2SO4 solution followed the Langmuir isotherm, though the isotherm parameters indicate some deviations from ideal Langmuir behaviour. 
The slope deviates from unity (see n values in Table 3) with nonzero intercept on the y-axis, which could be traced to some limitations in the underlying assumptions. The results in fact imply that each EHEC molecule occupies n active corrosion sites on the mild steel surface in 1.0 M H2SO4 solution.Table 3 Adsorption parameters from modified Langmuir isotherm. Day R 2 n K ads Δ G ads (kJ mol−1) 1 0.990 1.702 4.515 −45.381 2 0.988 1.692 5.795 −58.246 3 0.993 1.772 7.982 −80.228 4 0.993 1.764 7.412 −74.498 5 0.992 1.834 7.627 −76.659The free energy of adsorption( Δ G ads ) obtained from (7) which is evaluated from K ads obtained from intercepts of the Langmuir plots is given in Table 3: (7) Δ G ads = - R T ln ⁡ ( 55.5 K ads ) , where R and T are the universal gas constant and absolute temperature, respectively. The other parameter retains its previous meaning. The large negative Δ G ads values implied that the adsorption of EHEC on the mild steel surface was favourable from thermodynamics point of view and indicated that the inhibitor was strongly adsorbed, covering both anodic and cathodic regions.In addition, it is important to note that adsorption free energy values of −20 kJ mol−1 or less negative are associated with an electrostatic interaction between charged molecules and charged metal surface (physical adsorption). On the other hand, adsorption free energy values of −40 Kj mol−1 or more negative values involve charge sharing or transfer from the inhibitor molecules to the metal surface to form a co-ordinate covalent bond (chemical adsorption) [40]. ## 3.5. Impedance Measurements Electrochemical impedance spectroscopy analyses provide insight into the kinetics of electrode processes as well as the surface characteristics of the electrochemical system of interest. 
Figure4 presents the impedance spectra measured at E corr after 30 minutes of immersion and exemplified the Nyquist plots obtained for mild steel in 1.0 M H2SO4 solution in the absence and presence of EHEC and EHEC + KI. The observed increase in the impedance parameters in inhibited solutions is associated with the corrosion inhibiting effect of EHEC. The Nyquist plots for all systems generally have the form of only one depressed semicircle, corresponding to one time constant, although a slight sign of low-frequency inductive behaviour can be discerned. The depression of the capacitance semicircle with centre below the real axis suggests a distribution of the capacitance due to inhomogeneities associated with the electrode surface.Figure 4 Nyquist impedance spectra of mild steel corrosion in 1.0 M H2SO4 in the absence and presence of EHEC and EHEC + KI.The presence of a single time constant may be attributed to the short exposure time in the corrosive medium which is not adequate to reveal degradation of the substrate [41]. A polarization resistance (R p) can be extracted from the intercept of the low-frequency loop at the real axis of impedance (Z re) in the Nyquist plots, since the inductive loop is negligible. The value of R p is very close to that of the charge transfer resistance R ct, which can be extracted from the diameter of the semicircle [41, 42]. The impedance spectra for the Nyquist plots were thus adequately analyzed by being fit to the equivalent circuit model R s ( Q dl R ct ), which has been previously used to model the mild steel/acid solution interface [41, 43].The values of the impedance parameters derived from the Nyquist plots using the selected equivalent circuit modelR s ( Q dl R ct ) are given in Table 4. The terms Q dl and n, respectively, represent the magnitude and exponent of the constant phase element (CPE) of the double layer. 
The CPE, with impedance given by Z CPE = Q - 1 ( j w ) - n, where j is an imaginary number and w is the angular frequency in rad/s is used in place of a capacitor to compensate for the deviations from ideal dielectric behaviour associated with the nonhomogeneity of the electrode surface. Introduction of EHEC into the acid corrodent leads to an increase in R ct and a reduction of Q dl, indicating a hindering of the corrosion reaction. The decrease in Q dl values, which normally results from a decrease in the dielectric constant and/or an increase in the double layer thickness, is due to inhibitor adsorption on the metal/electrolyte interface [44]. This implies that EHEC reduces the corrosion rate of the mild steel specimen by virtue of adsorption on the metal/electrolyte interface, a fact that has been previously established. A quantitative measure of the protective effect can be obtained by comparing the values of the charge transfer resistance in the absence (R ct) and presence of inhibitor ( R ctinh ) as follows: (8) I % = [ 1 - R ct R ctinh ] × 100 , where ( R ctinh ) and R ct are the charge transfer resistance for inhibited and uninhibited systems, respectively.Table 4 Impedance and polarization parameters for mild steel in 0.5 M H2SO4 in the presence and absence of EHEC and EHEC + KI. System Impedance data Polarization data R ct n I.E. % C dl (uF cm−2) × 10−3 E corr (mV (SCE)) R p (Ω cm2) I.E. % Blank 11.931 0.889 — 14.65 −468.35 16.27 — EHEC 37.502 0.876 68.19 5.13 −477.87 58.04 71.97 EHEC + KI 133.37 0.854 91.05 2.41 −489.10 177.51 90.83The double layer capacitance values of the systems were also examined and calculated using the expression:(9) C dl = 1 2 π f max ⁡ R ct , where f max ⁡ is the maximum frequency. The obtained values of C dl are presented in Table 4.Lower double layer capacitance suggests reduced electric charge stored, which is a consequence of increased adsorption layer that acted as a dielectric constant. 
The increase inR ct values in inhibited systems which corresponds to an increase in the diameter of the Nyquist semicircle confirms that the corrosion inhibiting effect of EHEC and EHEC + KI and is much more pronounced in the latter system, implying that KI synergistically enhanced the corrosion inhibiting effect of EHEC. In other words, lower C dl values correspond to reduced double layer capacitance, which, according to the Helmholtz model (10), results from a decrease in the dielectric constant (ε) or an increase in the interfacial layer thickness (δ): (10) C dl = ε ε o A δ , where ε is the dielectric constant of the medium, ε o is the vacuum permittivity, A is the electrode area, and δ is the thickness of the interfacial layer.Since adsorption of an organic inhibitor on a metal surface involves the replacement of adsorbed water molecules on the surface, the smaller dielectric constant of the organic molecule compared to water as well as the increased thickness of interfacial layer due to inhibitor adsorption acted simultaneously to reduce the double layer capacitance. This provides experimental evidence of adsorption of EHEC on mild steel surface. The significantly lowerC dl value of the EHEC + KI system supports the assertion that the iodide ion significantly enhances adsorption of EHEC on the metal/solution interface. ## 3.6. Polarization Measurements Figure5 shows the polarization curves of mild steel dissolution in 1.0 M H2SO4 solution in the absence and presence of EHEC and EHEC + KI. Introduction of EHEC and EHEC + KI into the acid solution was observed to shift the corrosion potentials of both inhibited systems slightly in the negative direction and in both cases inhibited the anodic metal dissolution reaction as well as the cathodic hydrogen evolution reaction. Since the E corr is not altered significantly, the implication is that the corrosion inhibition process is under mixed control with predominant cathodic effect. 
In addition, as Figure 5 shows, both the cathodic and anodic partial reactions are affected, as is evident from the decrease in the corrosion current densities. This implies that EHEC functioned as a mixed-type inhibitor in both systems, although a more marked effect on the cathodic partial hydrogen evolution reaction is discerned. Moreover, it has been reported [45] that when the displacement in E_corr is >85 mV the inhibitor can be regarded as a cathodic or anodic type, whereas if the displacement in E_corr is <85 mV the inhibitor can be seen as mixed type. In the present study, the corrosion potential (E_corr) in the presence of EHEC and EHEC + KI shifted by 9.52 and 20.75 mV, respectively, in the cathodic direction compared to the blank, confirming that the inhibitor acts as a mixed-type inhibitor with a predominant cathodic effect.

Figure 5
Polarization curves of mild steel corrosion in 1.0 M H2SO4 in the absence and presence of EHEC and EHEC + KI.

Inhibition efficiency was calculated from the polarization data as follows:

(11) I% = [1 − (R_p / R_p^inh)] × 100,

where R_p and R_p^inh are the polarization resistances for the uninhibited and inhibited systems, respectively. The calculated values are given in Table 4 and show that the inhibition efficiencies obtained from the impedance and polarization results are comparable, confirming the consistent performance of EHEC and EHEC + KI under the prevailing experimental conditions. The cooperative effect between EHEC and KI in hindering the corrosion of mild steel in 1.0 M H2SO4 solution is also evident in both the Nyquist and Tafel polarization plots. Addition of KI resulted in a significant increase in the diameter of the Nyquist semicircle, and hence an increase in R_ct as well as I%, and a decrease in the corrosion current density in the Tafel polarization curves.
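As with the impedance data, the polarization-based efficiencies in Table 4 can be reproduced from (11) with the listed R_p values; a brief illustrative check (variable names are ours):

```python
# Illustrative check of Eq. (11): I% = [1 - R_p / R_p_inh] * 100,
# using the R_p values (ohm cm^2) listed in Table 4; names are ours.
r_p_blank = 16.27
r_p_ehec = 58.04
r_p_ehec_ki = 177.51

def inhibition_efficiency_rp(r_p, r_p_inh):
    """Percentage protection from polarization resistances, Eq. (11)."""
    return (1.0 - r_p / r_p_inh) * 100.0

print(round(inhibition_efficiency_rp(r_p_blank, r_p_ehec), 2))     # 71.97, as in Table 4
print(round(inhibition_efficiency_rp(r_p_blank, r_p_ehec_ki), 2))  # 90.83, as in Table 4
```

The results (71.97% and 90.83%) agree closely with the impedance-derived values (68.19% and 91.05%), which is the comparability noted in the text.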
The presence of iodide ions shifts E_corr further in the cathodic direction and further decreases the anodic and cathodic reaction kinetics. The mechanism of this synergistic effect has been described in detail elsewhere [46]. The iodide ions are strongly chemisorbed on the corroding mild steel surface and facilitate EHEC adsorption by acting as intermediate bridges between the positively charged metal surface and EHEC cations. This stabilizes the adsorption of EHEC on the mild steel surface, leading to higher surface coverage. To account for these observations, it is necessary to recognize that adsorption of an organic inhibitor on a corroding metal surface depends on factors such as the nature of and surface charge on the metal in the corrosive medium, as well as the inhibitor structure. Consequently, more iodide ions are adsorbed on mild steel, which presents a more positive surface, giving rise to increased synergistic interactions with protonated EHEC species and hence higher inhibition efficiencies.

### 3.7. Quantum Chemical Calculations

The effectiveness of inhibitors has been reported to correlate with quantum chemical parameters such as the HOMO (highest occupied molecular orbital) energy, the LUMO (lowest unoccupied molecular orbital) energy, and the energy gap between them, ΔE = E_LUMO − E_HOMO [47–49]. A high (less negative) E_HOMO is associated with the capacity of a molecule to donate electrons to an appropriate acceptor with an empty molecular orbital, which facilitates the adsorption process and therefore indicates good performance of a corrosion inhibitor [50]. E_LUMO corresponds to the tendency for electron acceptance.
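For concreteness, the gap reported for EHEC in Table 5 follows directly from this definition; a one-line check with the orbital energies listed there:

```python
# Check of the gap definition dE = E_LUMO - E_HOMO,
# using the orbital energies (eV) reported in Table 5.
e_homo = -6.154
e_lumo = -2.323
delta_e = e_lumo - e_homo
print(round(delta_e, 3))  # 3.831 eV, as listed in Table 5
```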
On this basis, the calculated difference ΔE reflects the inherent electron donating ability of the molecule and measures the interaction of the inhibitor molecule with the metal surface. According to the frontier molecular orbital theory of chemical reactivity, transition of electrons is due to an interaction between the frontier orbitals, HOMO and LUMO, of the reacting species. The energy of the HOMO is directly related to the ionization potential and characterizes the susceptibility of the molecule toward attack by electrophiles. The energy of the LUMO is directly related to the electron affinity and characterizes the susceptibility of the molecule toward attack by nucleophiles. The lower the value of E_LUMO, the stronger the electron accepting ability of the molecule.

The electronic structure of EHEC, the distribution of the frontier molecular orbitals, and the Fukui indices were modeled in order to establish the active sites as well as the local reactivity of the inhibiting molecule. This was achieved using the DFT electronic structure programs Forcite and DMol3, together with a Mulliken population analysis. Electronic parameters for the simulation included restricted spin polarization using the DND basis set and the Perdew-Wang (PW) local correlation density functional. The geometry optimized structure of EHEC, the HOMO and LUMO orbitals, the Fukui functions, and the total electron density are presented in Figure 6. In the EHEC molecule, the HOMO orbital is localized around the aromatic nucleus, which is the region of highest electron density and often the site at which electrophiles attack, and represents the active centres with the utmost ability to bond to the metal surface. The LUMO orbital is localized around the ethoxy function and represents the site at which nucleophilic attack occurs.

Figure 6
Electronic properties of ethyl hydroxyethyl cellulose (EHEC) [C, grey; H, white; O, red].
(a) Optimized structure; (b) total electron density; (c) HOMO orbital; (d) LUMO orbital; (e) Fukui function for nucleophilic attack; (f) Fukui function for electrophilic attack.

Local reactivity was analyzed by means of the Fukui indices to assess the active regions in terms of nucleophilic and electrophilic behaviour. Thus, the site for nucleophilic attack is the place where the value of f⁺ is maximum, while the site for electrophilic attack is controlled by the value of f⁻. The values of E_HOMO, E_LUMO, ΔE, and the Fukui functions are given in Table 5. Higher values of E_HOMO indicate a greater disposition of a molecule to donate electrons to a metal surface. In the same way, a low value of the energy gap ΔE affords good inhibition efficiency, since the energy required to remove an electron from the last occupied orbital is minimized [51]. Taken together, these descriptors suggest that EHEC possesses good inhibiting potential, in agreement with the experimental findings.

Table 5
Calculated values of quantum chemical properties for the most stable conformation of EHEC.

| Property | EHEC |
|---|---|
| E_HOMO (eV) | −6.154 |
| E_LUMO (eV) | −2.323 |
| ΔE = E_LUMO − E_HOMO (eV) | 3.831 |
| Maximum f⁺ (Mulliken) | 0.015, O(12) |
| Maximum f⁻ (Mulliken) | 0.165, O(12) |

## 4. Conclusions

Ethyl hydroxyethyl cellulose was found to be an effective inhibitor of mild steel corrosion in 1.0 M H2SO4 solution, and its inhibition efficiency increased with increasing concentration. The corrosion process is inhibited by adsorption of EHEC on the mild steel surface following a modified Langmuir isotherm. The inhibiting action is attributed to general adsorption of both protonated and molecular species of the additive on the cathodic and anodic sites of the corroding mild steel surface. In addition, corrosion inhibition involves the formation of a chemisorbed film on the mild steel surface. The EIS measurements confirmed the adsorption of EHEC and EHEC + KI on the mild steel surface.
Polarization studies showed that EHEC and EHEC + KI act as mixed-type inhibitor systems with a predominant cathodic effect. The theoretical study demonstrated that the inhibition efficiency is related to the molecular structure of the inhibitor, whereby an increase in E_HOMO and a decrease in E_LUMO favoured inhibition efficiency.

---

*Source: 101709-2014-03-13.xml*
# Acid Corrosion Inhibition and Adsorption Behaviour of Ethyl Hydroxyethyl Cellulose on Mild Steel Corrosion

**Authors:** I. O. Arukalam; I. O. Madu; N. T. Ijomah; C. M. Ewulonu; G. N. Onyeagoro

**Journal:** Journal of Materials (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101709

---
The lower the values ofE LUMO, the stronger the electron accepting ability of the molecule.The electronic structure of EHEC, the distribution of frontier molecular orbital, and Fukui indices have been modeled in order to establish the active sites as well as local reactivity of the inhibiting molecules. This was achieved using the DFT electronic structure programs, Forcite and DMol3, and using a Mulliken population analysis. Electronic parameters for the simulation include restricted spin polarization using the DND basis set as the Perdew Wang (PW) local correlation density functional. The geometry optimized structures of EHEC, HOMO and LUMO orbitals, Fukui functions, and the total electron density are presented in Figure6. In the EHEC molecule, the HOMO orbital is saturated around the aromatic nucleus which is the region of highest electron density and often the site at which electrophiles attack, and represents the active centres, with the utmost ability to bond to the metal surface. The LUMO orbital is saturated around the ethoxy function and represents the site at which nucleophilic attack occurs.Electronic properties of ethyl hydroxyethyl cellulose (EHEC) [C, grey; H, white; O, red]. (a) Optimized structure (b) Total electron density (c) HOMO orbital (d) LUMO orbital (e) Fukui function for nucleophilic attack (f) Fukui function for electrophilic attackLocal reactivity was analyzed by means of the Fukui indices to assess the active regions in terms of nucleophilic and electrophilic behaviour. Thus, the site for nucleophilic attack will be the place where the value off + is maximum. In turn, the site for electrophilic attack is controlled by the value of f -. The values of E HOMO, E LUMO, Δ E, and Fukui functions are given in Table 5. Higher values of E HOMO indicate a greater disposition of a molecule to donate electrons to a metal surface. 
## 3.1. Corrosion Rates

The corrosion rates of metals and alloys in aggressive solutions can be determined using different electrochemical and nonelectrochemical techniques. The mechanism of anodic dissolution of iron in acidic solutions corresponds to [28]

(2a) Fe + OH⁻ ⇌ FeOH(ads) + e⁻
(2b) FeOH(ads) → FeOH⁺ + e⁻
(2c) FeOH⁺ + H⁺ ⇌ Fe²⁺ + H₂O.

As a consequence of these reactions, including the high solubility of the corrosion products, the metal loses weight in the solution. The results of the gravimetric determination of mild steel corrosion rate as a function of time and concentration of the additive are given in Table 1.

Table 1: Calculated values of corrosion rate (mm/y) of mild steel in 1.0 M H2SO4 in the absence and presence of EHEC and KI.

| System | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
| --- | --- | --- | --- | --- | --- |
| Blank | 25.27 | 22.36 | 20.16 | 19.03 | 17.99 |
| 0.5 g/L EHEC | 14.50 | 12.07 | 10.93 | 10.39 | 10.10 |
| 0.5 g/L EHEC + KI | 12.20 | 10.59 | 9.50 | 9.27 | 9.06 |
| 1.0 g/L EHEC | 12.92 | 10.78 | 9.74 | 9.23 | 8.97 |
| 1.0 g/L EHEC + KI | 9.39 | 7.97 | 7.25 | 7.17 | 7.19 |
| 1.5 g/L EHEC | 13.22 | 11.43 | 10.31 | 9.79 | 9.51 |
| 1.5 g/L EHEC + KI | 9.85 | 8.22 | 7.37 | 7.16 | 7.05 |
| 2.0 g/L EHEC | 11.84 | 10.08 | 9.20 | 8.79 | 8.59 |
| 2.0 g/L EHEC + KI | 10.47 | 8.37 | 7.31 | 6.81 | 6.60 |
| 2.5 g/L EHEC | 11.40 | 9.92 | 9.35 | 8.80 | 8.59 |
| 2.5 g/L EHEC + KI | 10.06 | 8.21 | 7.70 | 7.56 | 7.48 |

These results show that the corrosion rate of mild steel in 1.0 M H2SO4 decreases with time both in the systems with additive and in the blank acid solution.
The effects of addition of different concentrations of EHEC on corrosion rates in the acid solution over 5 days of exposure are shown in Table 1. EHEC is observed to reduce the corrosion rate even at the lowest studied concentration of 0.5 g/L, indicating inhibition of the corrosion reaction. This effect becomes more pronounced with increasing concentration of the inhibitor, which suggests that the inhibition process is sensitive to the concentration (amount) of the additive present.

## 3.2. Inhibition Efficiency

A quantitative evaluation of the effect of EHEC on mild steel corrosion in 1.0 M H2SO4 solution was achieved by appraisal of the inhibition efficiency (I%) given by

(3) I% = [1 − R_cinh/R_cblk] × 100,

where R_cinh and R_cblk are the corrosion rates in inhibited and uninhibited solutions, respectively. The values obtained for the inhibition efficiency are given in Table 2.

Table 2: Calculated values of inhibition efficiency (I%) of mild steel in 1.0 M H2SO4 in the presence of EHEC and KI.

| System | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
| --- | --- | --- | --- | --- | --- |
| 0.5 g/L EHEC | 42.62 | 46.02 | 45.78 | 45.40 | 43.86 |
| 0.5 g/L EHEC + KI | 51.72 | 52.64 | 52.88 | 51.29 | 49.64 |
| 1.0 g/L EHEC | 48.87 | 51.79 | 51.69 | 51.50 | 50.14 |
| 1.0 g/L EHEC + KI | 62.84 | 64.36 | 64.04 | 62.32 | 60.03 |
| 1.5 g/L EHEC | 47.69 | 48.88 | 48.86 | 48.55 | 47.14 |
| 1.5 g/L EHEC + KI | 61.02 | 63.24 | 63.44 | 62.38 | 60.81 |
| 2.0 g/L EHEC | 53.15 | 54.92 | 54.37 | 53.81 | 52.25 |
| 2.0 g/L EHEC + KI | 58.57 | 62.57 | 63.74 | 64.21 | 63.31 |
| 2.5 g/L EHEC | 54.89 | 55.64 | 53.62 | 53.76 | 52.25 |
| 2.5 g/L EHEC + KI | 60.19 | 63.28 | 61.81 | 60.27 | 58.42 |

The plots show that I% increased progressively with concentration of the additive (Figure 1). Consistent with the observed trend of inhibition, organic inhibitors are known to decrease metal dissolution by forming a protective adsorption film which blocks the metal surface, separating it from the corrosive medium [29–32].
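As a quick check, equation (3) reproduces the Table 2 entries from the Table 1 corrosion rates; a minimal sketch, using the day-1 values from Table 1:

```python
def ie_from_rates(rate_inhibited, rate_blank):
    """Equation (3): I% = [1 - R_cinh / R_cblk] x 100."""
    return (1.0 - rate_inhibited / rate_blank) * 100.0

# Day-1 corrosion rates from Table 1 (mm/y); blank = 25.27:
print(round(ie_from_rates(14.50, 25.27), 2))  # 0.5 g/L EHEC -> 42.62
print(round(ie_from_rates(12.20, 25.27), 2))  # 0.5 g/L EHEC + KI -> 51.72
```

Both values agree with the day-1 column of Table 2.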
Consequently, in inhibited solutions, the corrosion rate is indicative of the number of free corroding sites remaining after some sites have been effectively blocked by inhibitor adsorption. It has been suggested [33, 34], however, that anions such as Cl⁻, I⁻, SO₄²⁻, and S²⁻ may also participate in forming reaction intermediates on the corroding metal surface, which either inhibit or stimulate corrosion. It is important to recognize that the suppression or stimulation of the dissolution process is initiated by the specific adsorption of the anion on the metal surface.

Figure 1: Variation of inhibition efficiency with concentration of EHEC.

## 3.3. Effect of Halide Ion Addition

To further clarify the modes of inhibitor adsorption, experiments were conducted in the presence of iodide ions, which are strongly adsorbed on the surface of mild steel in acidic solution and facilitate adsorption of organic cation-type inhibitors by acting as intermediate bridges between the positive end of the organic cation and the positively charged metal surface. Specific adsorption of iodide ions on the metal surface leads to recharging of the electrical double layer [35]. The inhibitor is then drawn into the double layer by electrostatic interaction with the adsorbed I⁻ ions, forming ion pairs on the metal surface, which increases the degree of surface coverage:

(4a) I⁻(sol) → I⁻(ads)
(4b) I⁻(ads) + Inh⁺(sol) → [I⁻–Inh⁺](ads).

Thus, an improvement of I% on addition of KI is an indication of the participation of protonated inhibitor species in the adsorption process (Figure 2). Table 2 illustrates the effect of addition of 0.5 g/L KI to the different concentrations of EHEC on the corrosion of mild steel in 1.0 M H2SO4 solution.

Figure 2: Synergistic effect between KI and EHEC on the variation of inhibition efficiency with time.

## 3.4. Adsorption Consideration

Basic parameters which are descriptors of the nature and modes of adsorption of an organic inhibitor on the corroding metal surface can be provided by adsorption isotherms, which depend on the degree of surface coverage, θ. The observed inhibition of the corrosion of mild steel in 1.0 M H2SO4 solution indicates a high degree of surface coverage. From a theoretical perspective, the adsorption route is regarded as a substitution process between the organic inhibitor in the aqueous solution (Inh(sol)) and water molecules adsorbed at the metal surface (H2O(ads)) as follows [36–38]:

(5) Inh(sol) + xH2O(ads) ⇌ Inh(ads) + xH2O(sol),

where x represents the number of water molecules replaced by one molecule of adsorbed inhibitor. The adsorption bond strength is dependent on the composition of the metal and corrodent, inhibitor structure, concentration, and orientation, as well as temperature. Since EHEC can be protonated in the presence of strong acids, it is necessary to consider both cationic and molecular species when discussing the adsorption process of EHEC. Figure 3 shows the plot of C/θ versus C to be linear, which is in agreement with the modified Langmuir equation [39]:

(6) C/θ = n/K_ads + nC,

where C is the concentration of inhibitor and K_ads is the equilibrium constant for the adsorption-desorption process.

Figure 3: Langmuir isotherm for EHEC adsorption on mild steel surface in 1.0 M H2SO4.

In general, K_ads represents the adsorption power of the inhibitor molecule on the metal surface. The positive values confirm the adsorbability of EHEC on the metal surface. The linear plots obtained in Figure 3 suggest that EHEC adsorption from 1.0 M H2SO4 solution followed the Langmuir isotherm, though the isotherm parameters indicate some deviations from ideal Langmuir behaviour.
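A minimal sketch of the Langmuir analysis: fitting C/θ against C by ordinary least squares gives the slope n and, from the intercept n/K_ads, the constant K_ads; the θ values below are the day-1 efficiencies of Table 2 divided by 100, and the free-energy relation ΔG_ads = −RT ln(55.5 K_ads) used in the text is included as a helper.

```python
import math
import numpy as np

R = 8.314  # universal gas constant, J mol^-1 K^-1

def langmuir_fit(C, theta):
    """Fit C/theta = n/K_ads + n*C (modified Langmuir); returns (n, K_ads)."""
    slope, intercept = np.polyfit(C, np.asarray(C) / np.asarray(theta), 1)
    return slope, slope / intercept

def dg_ads(K_ads, T=298.0):
    """Free energy of adsorption, Delta G_ads = -RT ln(55.5 K_ads), in kJ/mol."""
    return -R * T * math.log(55.5 * K_ads) / 1000.0

# Day-1 data: C in g/L, theta = I%/100 from Table 2 (EHEC-only systems):
C = [0.5, 1.0, 1.5, 2.0, 2.5]
theta = [0.4262, 0.4887, 0.4769, 0.5315, 0.5489]
n, K = langmuir_fit(C, theta)
print(round(n, 2))  # ~1.70, matching the day-1 slope n in Table 3
```

With these rounded efficiencies the fit gives K_ads ≈ 4.3, close to the 4.515 reported for day 1. Note that the absolute value of ΔG_ads depends on the concentration scale assumed for K_ads, which the text does not specify, so the helper implements equation (7) exactly as printed without asserting the tabulated ΔG_ads magnitudes.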
The slope deviates from unity (see the n values in Table 3), with a nonzero intercept on the y-axis, which could be traced to some limitations in the underlying assumptions. The results in fact imply that each EHEC molecule occupies n active corrosion sites on the mild steel surface in 1.0 M H2SO4 solution.

Table 3: Adsorption parameters from the modified Langmuir isotherm.

| Day | R² | n | K_ads | ΔG_ads (kJ mol−1) |
| --- | --- | --- | --- | --- |
| 1 | 0.990 | 1.702 | 4.515 | −45.381 |
| 2 | 0.988 | 1.692 | 5.795 | −58.246 |
| 3 | 0.993 | 1.772 | 7.982 | −80.228 |
| 4 | 0.993 | 1.764 | 7.412 | −74.498 |
| 5 | 0.992 | 1.834 | 7.627 | −76.659 |

The free energy of adsorption (ΔG_ads), evaluated via (7) from the K_ads values obtained from the intercepts of the Langmuir plots, is given in Table 3:

(7) ΔG_ads = −RT ln(55.5 K_ads),

where R and T are the universal gas constant and absolute temperature, respectively; the other parameter retains its previous meaning. The large negative ΔG_ads values imply that the adsorption of EHEC on the mild steel surface is favourable from a thermodynamics point of view and indicate that the inhibitor is strongly adsorbed, covering both anodic and cathodic regions.

In addition, it is important to note that adsorption free energy values of −20 kJ mol−1 or less negative are associated with an electrostatic interaction between charged molecules and a charged metal surface (physical adsorption). On the other hand, adsorption free energy values of −40 kJ mol−1 or more negative involve charge sharing or transfer from the inhibitor molecules to the metal surface to form a coordinate covalent bond (chemical adsorption) [40].

## 3.5. Impedance Measurements

Electrochemical impedance spectroscopy analyses provide insight into the kinetics of electrode processes as well as the surface characteristics of the electrochemical system of interest.
Figure 4 presents the impedance spectra measured at E_corr after 30 minutes of immersion, exemplified by the Nyquist plots obtained for mild steel in 1.0 M H2SO4 solution in the absence and presence of EHEC and EHEC + KI. The observed increase in the impedance parameters in inhibited solutions is associated with the corrosion inhibiting effect of EHEC. The Nyquist plots for all systems generally have the form of a single depressed semicircle, corresponding to one time constant, although a slight sign of low-frequency inductive behaviour can be discerned. The depression of the capacitance semicircle, with its centre below the real axis, suggests a distribution of the capacitance due to inhomogeneities associated with the electrode surface.

Figure 4: Nyquist impedance spectra of mild steel corrosion in 1.0 M H2SO4 in the absence and presence of EHEC and EHEC + KI.

The presence of a single time constant may be attributed to the short exposure time in the corrosive medium, which is not adequate to reveal degradation of the substrate [41]. A polarization resistance (R_p) can be extracted from the intercept of the low-frequency loop with the real axis of impedance (Z_re) in the Nyquist plots, since the inductive loop is negligible. The value of R_p is very close to that of the charge transfer resistance R_ct, which can be extracted from the diameter of the semicircle [41, 42]. The impedance spectra for the Nyquist plots were thus adequately analyzed by fitting to the equivalent circuit model R_s(Q_dl R_ct), which has been previously used to model the mild steel/acid solution interface [41, 43].

The values of the impedance parameters derived from the Nyquist plots using the selected equivalent circuit model R_s(Q_dl R_ct) are given in Table 4. The terms Q_dl and n, respectively, represent the magnitude and exponent of the constant phase element (CPE) of the double layer.
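A minimal numerical sketch of the R_s(Q_dl R_ct) equivalent circuit, with the CPE impedance Z_CPE = Q⁻¹(jω)⁻ⁿ as defined in the text; the parameter values below are illustrative assumptions, not fitted values from the paper.

```python
def z_circuit(w, Rs, Rct, Q, n):
    """Impedance of the Rs(Q Rct) circuit at angular frequency w (rad/s).

    The CPE, Z_CPE = Q^-1 (jw)^-n, sits in parallel with the charge
    transfer resistance Rct; that pair is in series with Rs.
    """
    z_cpe = 1.0 / (Q * (1j * w) ** n)
    return Rs + 1.0 / (1.0 / Rct + 1.0 / z_cpe)

# Illustrative (hypothetical) parameters:
Rs, Rct, Q, n = 5.0, 100.0, 1e-4, 0.9

# Limiting behaviour of the depressed Nyquist semicircle:
print(abs(z_circuit(1e-6, Rs, Rct, Q, n)))  # low frequency -> ~Rs + Rct
print(abs(z_circuit(1e9, Rs, Rct, Q, n)))   # high frequency -> ~Rs
```

The two limits illustrate why R_p is read off at the low-frequency real-axis intercept and why the semicircle diameter approximates R_ct.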
The CPE, with impedance given by Z_CPE = Q⁻¹(jω)⁻ⁿ, where j is the imaginary unit and ω is the angular frequency in rad/s, is used in place of a capacitor to compensate for the deviations from ideal dielectric behaviour associated with the nonhomogeneity of the electrode surface. Introduction of EHEC into the acid corrodent leads to an increase in R_ct and a reduction of Q_dl, indicating a hindering of the corrosion reaction. The decrease in Q_dl values, which normally results from a decrease in the dielectric constant and/or an increase in the double layer thickness, is due to inhibitor adsorption at the metal/electrolyte interface [44]. This implies that EHEC reduces the corrosion rate of the mild steel specimen by virtue of adsorption at the metal/electrolyte interface, a fact that has been previously established.

A quantitative measure of the protective effect can be obtained by comparing the values of the charge transfer resistance in the absence (R_ct) and presence (R_ctinh) of inhibitor as follows:

(8) I% = [1 − R_ct/R_ctinh] × 100,

where R_ctinh and R_ct are the charge transfer resistances for the inhibited and uninhibited systems, respectively.

Table 4: Impedance (R_ct, n, I.E. %, C_dl) and polarization (E_corr, R_p, I.E. %) parameters for mild steel in 0.5 M H2SO4 in the presence and absence of EHEC and EHEC + KI.

| System | R_ct | n | I.E. % | C_dl (µF cm−2) × 10−3 | E_corr (mV (SCE)) | R_p (Ω cm2) | I.E. % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Blank | 11.931 | 0.889 | — | 14.65 | −468.35 | 16.27 | — |
| EHEC | 37.502 | 0.876 | 68.19 | 5.13 | −477.87 | 58.04 | 71.97 |
| EHEC + KI | 133.37 | 0.854 | 91.05 | 2.41 | −489.10 | 177.51 | 90.83 |

The double layer capacitance values of the systems were also calculated using the expression

(9) C_dl = 1/(2π f_max R_ct),

where f_max is the frequency at which the imaginary component of the impedance is maximal. The obtained values of C_dl are presented in Table 4. Lower double layer capacitance suggests reduced stored electric charge, a consequence of a growing adsorbed layer that acts as a dielectric.
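Equations (8) and (9) can be checked directly against the Table 4 values; a minimal sketch:

```python
import math

def ie_from_rct(r_ct_blank, r_ct_inh):
    """Equation (8): I% from charge transfer resistances."""
    return (1.0 - r_ct_blank / r_ct_inh) * 100.0

def double_layer_capacitance(f_max, r_ct):
    """Equation (9): C_dl from the apex frequency f_max and R_ct."""
    return 1.0 / (2.0 * math.pi * f_max * r_ct)

# Reproducing the impedance I.E. % column of Table 4 (blank R_ct = 11.931):
print(round(ie_from_rct(11.931, 37.502), 2))   # EHEC -> 68.19
print(round(ie_from_rct(11.931, 133.37), 2))   # EHEC + KI -> 91.05
```

The C_dl helper implements equation (9) as printed; the f_max values needed to reproduce the tabulated C_dl are not reported in the text.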
The increase in R_ct values in the inhibited systems, which corresponds to an increase in the diameter of the Nyquist semicircle, confirms the corrosion inhibiting effect of EHEC and EHEC + KI, which is much more pronounced in the latter system, implying that KI synergistically enhanced the corrosion inhibiting effect of EHEC. In other words, lower C_dl values correspond to reduced double layer capacitance, which, according to the Helmholtz model (10), results from a decrease in the dielectric constant (ε) or an increase in the interfacial layer thickness (δ):

(10) C_dl = εε₀A/δ,

where ε is the dielectric constant of the medium, ε₀ is the vacuum permittivity, A is the electrode area, and δ is the thickness of the interfacial layer. Since adsorption of an organic inhibitor on a metal surface involves the replacement of adsorbed water molecules on the surface, the smaller dielectric constant of the organic molecule compared to water, as well as the increased thickness of the interfacial layer due to inhibitor adsorption, acted simultaneously to reduce the double layer capacitance. This provides experimental evidence of adsorption of EHEC on the mild steel surface. The significantly lower C_dl value of the EHEC + KI system supports the assertion that the iodide ion significantly enhances adsorption of EHEC at the metal/solution interface.

## 3.6. Polarization Measurements

Figure 5 shows the polarization curves of mild steel dissolution in 1.0 M H2SO4 solution in the absence and presence of EHEC and EHEC + KI. Introduction of EHEC and EHEC + KI into the acid solution was observed to shift the corrosion potentials of both inhibited systems slightly in the negative direction and, in both cases, to inhibit the anodic metal dissolution reaction as well as the cathodic hydrogen evolution reaction. Since E_corr is not altered significantly, the implication is that the corrosion inhibition process is under mixed control with a predominant cathodic effect.
In addition, following the observations in Figure 5, both the cathodic and anodic partial reactions are affected, as evident in the decrease in the corrosion current densities. This implies that EHEC functioned as a mixed-type inhibitor in both systems, although a marked effect on the cathodic partial hydrogen evolution reaction is discerned. Moreover, it has been reported [45] that when the displacement in E_corr is >85 mV, the inhibitor can be regarded as a cathodic- or anodic-type inhibitor, whereas if the displacement in E_corr is <85 mV, the inhibitor can be seen as mixed type. In the present study, the corrosion potential (E_corr) in the presence of EHEC and EHEC + KI shifted by 9.52 and 20.75 mV, respectively, in the cathodic direction compared to the blank, which confirms that the inhibitor acts as a mixed-type inhibitor with a predominant cathodic effect.

Figure 5: Polarization curves of mild steel corrosion in 1.0 M H2SO4 in the absence and presence of EHEC and EHEC + KI.

Inhibition efficiency was calculated from the polarization data as follows:

(11) I% = [1 − R_p/R_pinh] × 100,

where R_p and R_pinh are the polarization resistances for the uninhibited and inhibited systems, respectively. The calculated values are given in Table 4 and show that the inhibition efficiencies obtained from the impedance and polarization results are comparable. The data confirm the consistency of EHEC and EHEC + KI under the prevailing experimental conditions.

The cooperative effect between EHEC and KI in hindering the corrosion of mild steel in 1.0 M H2SO4 solution is also evident in both the Nyquist and Tafel polarization plots. Addition of KI resulted in a significant increase in the diameter of the Nyquist semicircle, and hence an increase in R_ct as well as I%, and a decrease in the corrosion current density of the Tafel polarization curves.
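Equation (11) and the 85 mV criterion discussed above can likewise be checked against Table 4; a minimal sketch:

```python
def ie_from_rp(rp_blank, rp_inh):
    """Equation (11): I% from polarization resistances."""
    return (1.0 - rp_blank / rp_inh) * 100.0

def inhibitor_type(e_corr_shift_mv):
    """Classify an inhibitor by the 85 mV criterion cited in the text [45]."""
    return "cathodic or anodic type" if abs(e_corr_shift_mv) > 85 else "mixed type"

# Reproducing the polarization I.E. % column of Table 4 (blank R_p = 16.27):
print(round(ie_from_rp(16.27, 58.04), 2))    # EHEC -> 71.97
print(round(ie_from_rp(16.27, 177.51), 2))   # EHEC + KI -> 90.83

# The observed E_corr shifts of 9.52 and 20.75 mV are both well below 85 mV:
print(inhibitor_type(9.52))   # mixed type
print(inhibitor_type(20.75))  # mixed type
```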
The presence of iodide ions shifts E_corr further in the cathodic direction and further decreases the anodic and cathodic reaction kinetics. The mechanism of this synergistic effect has been described in detail in some reports [46]. The iodide ions are strongly chemisorbed on the corroding mild steel surface and facilitate EHEC adsorption by acting as intermediate bridges between the positively charged metal surface and EHEC cations. This stabilizes the adsorption of EHEC on the mild steel surface, leading to higher surface coverage. To account for the above observations, it is necessary to recognize that the adsorption of an organic inhibitor on a corroding metal surface depends on factors such as the nature of and surface charge on the metal in the corrosive medium, as well as the inhibitor structure. Consequently, more iodide ions are adsorbed on the positively charged mild steel surface, giving rise to increased synergistic interactions with protonated EHEC species and hence higher inhibition efficiencies.

## 3.7. Quantum Chemical Calculations

The inhibition effectiveness of inhibitors has been reported to correlate with quantum chemical parameters such as the HOMO (highest occupied molecular orbital) energy, the LUMO (lowest unoccupied molecular orbital) energy, and the energy gap between the LUMO and HOMO (ΔE = E_LUMO − E_HOMO) [47–49]. A high (less negative) E_HOMO is associated with the capacity of a molecule to donate electrons to an appropriate acceptor with an empty molecular orbital, which facilitates the adsorption process and therefore indicates good performance of a corrosion inhibitor [50]. E_LUMO corresponds to the tendency for electron acceptance.
Based on this, the calculated difference ΔE reflects the inherent electron donating ability of the molecule and measures its interaction with the metal surface. According to the frontier molecular orbital theory of chemical reactivity, transition of electrons is due to an interaction between the frontier orbitals, HOMO and LUMO, of the reacting species. The energy of the HOMO is directly related to the ionization potential and characterizes the susceptibility of the molecule toward attack by electrophiles. The energy of the LUMO is directly related to the electron affinity and characterizes the susceptibility of the molecule toward attack by nucleophiles. The lower the value of E_LUMO, the stronger the electron accepting ability of the molecule.

The electronic structure of EHEC, the distribution of the frontier molecular orbitals, and the Fukui indices have been modeled in order to establish the active sites as well as the local reactivity of the inhibiting molecule. This was achieved using the DFT electronic structure programs Forcite and DMol3 and a Mulliken population analysis. Electronic parameters for the simulation include restricted spin polarization using the DND basis set and the Perdew-Wang (PW) local correlation density functional. The geometry optimized structure of EHEC, the HOMO and LUMO orbitals, the Fukui functions, and the total electron density are presented in Figure 6. In the EHEC molecule, the HOMO orbital is localized around the aromatic nucleus, which is the region of highest electron density and often the site at which electrophiles attack; it represents the active centres with the greatest ability to bond to the metal surface. The LUMO orbital is localized around the ethoxy function and represents the site at which nucleophilic attack occurs.

Figure 6: Electronic properties of ethyl hydroxyethyl cellulose (EHEC) [C, grey; H, white; O, red].
(a) Optimized structure; (b) total electron density; (c) HOMO orbital; (d) LUMO orbital; (e) Fukui function for nucleophilic attack; (f) Fukui function for electrophilic attack.

Local reactivity was analyzed by means of the Fukui indices to assess the active regions in terms of nucleophilic and electrophilic behaviour. Thus, the site for nucleophilic attack will be the place where the value of f⁺ is maximum. In turn, the site for electrophilic attack is controlled by the value of f⁻. The values of E_HOMO, E_LUMO, ΔE, and the Fukui functions are given in Table 5. Higher values of E_HOMO indicate a greater disposition of a molecule to donate electrons to a metal surface. In the same way, a low value of the energy gap ΔE affords good inhibition efficiency, since the energy required to remove an electron from the last occupied orbital is minimized [51]. These descriptors suggest that EHEC possesses good inhibiting potential, in agreement with the experimental findings.

Table 5: Calculated values of quantum chemical properties for the most stable conformation of EHEC.

| Property | EHEC |
| --- | --- |
| E_HOMO (eV) | −6.154 |
| E_LUMO (eV) | −2.323 |
| E_LUMO − E_HOMO (eV) | 3.831 |
| Maximum f⁺ (Mulliken) | 0.015 O(12) |
| Maximum f⁻ (Mulliken) | 0.165 O(12) |

## 4. Conclusions

Ethyl hydroxyethyl cellulose was found to be an effective inhibitor of mild steel corrosion in 1.0 M H2SO4 solution, and its inhibition efficiency increased with increasing concentration. The corrosion process is inhibited by adsorption of EHEC on the mild steel surface following the modified Langmuir isotherm. The inhibiting action is attributed to general adsorption of both the protonated and molecular species of the additive on the cathodic and anodic sites of the corroding mild steel surface. In addition, corrosion inhibition is due to the formation of a chemisorbed film on the mild steel surface. The EIS measurements confirmed the adsorption of EHEC and EHEC + KI on the mild steel surface.
Polarization studies showed that EHEC and EHEC + KI were mixed-type inhibitor systems with a predominant cathodic effect. The theoretical study demonstrated that the inhibition efficiency is related to the molecular structure of the inhibitor, whereby an increase in E_HOMO and a decrease in E_LUMO favour inhibition efficiency.

---

*Source: 101709-2014-03-13.xml*
2014
# Mountain Rainfall Estimation and BIM Technology Site Safety Management Based on Internet of Things

**Authors:** Peng Liu
**Journal:** Mobile Information Systems (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1017200

---

## Abstract

Mountain rainfall estimation is a major source of information for determining the safety of a geographical (mountainous) area. It can be carried out conveniently using a modeling and simulation application, BIM (building information modeling), which helps in transforming real-time scenarios into construction and business models. This whole process can now be realized with the help of an evolving technology known as the IoT (Internet of Things). The Internet of Things is expected to reshape the whole communication architecture by the end of this decade: it will form a basis for D2D (device-to-device) communication, in which MTC (machine-type communications) take place with almost zero human involvement. To overcome the problem that traditional construction site safety management methods struggle to estimate rainfall accurately, resulting in poor safety management, a mountain rainfall estimation and BIM technology site safety management method based on the Internet of Things is proposed. Firstly, based on Internet of Things data, the extreme learning machine method is used to accurately estimate mountain rainfall. Secondly, based on the rainfall estimation results and combined with BIM technology, a construction site safety and management model is constructed. Finally, experimental verification is carried out.
The experimental results show that the method can precisely estimate rainfall in mountainous areas, and the computed safety factors are basically consistent with the actual results, indicating that the safety management effect of the system is good. This paper also reveals the complications and drawbacks of the mechanisms currently used for mountain rainfall estimation and shows how to overcome them using the new technology, i.e., the Internet of Things.

---

## Body

## 1. Introduction

Rainfall is an important parameter of mountain construction, which largely reflects the trend of disasters. Rainfall has a significant impact on agricultural production, soil and water loss, and engineering applications. Accurate prediction of rainfall in an area can help agriculture and water conservancy departments improve their ability to prevent drought and flood disasters and minimize the harm. For this, we can make use of the modeling tool BIM: after modeling the whole scenario of a geographical (mountainous) zone, one can relate the rainfall estimation and BIM technology to the Internet of Things. This paper further describes how the whole process takes place, including the estimation of rainfall and model building with BIM technology.

In recent years, with the frequent occurrence of flood disasters in China, precise and timely use of meteorological data to predict rainfall has become increasingly important [1]. The advent of the big data era has also brought new challenges to the weather forecasting industry. Meteorological data mainly comes from ground observation, meteorological satellite remote sensing, weather radar, and numerical prediction products. These four types of data account for more than 90% of the total and are directly applied to meteorological operations, weather forecasting, climate forecasting, and meteorological services [2].
For this purpose, we can draw on stream data, i.e., sets of digitally encoded, continuous signals. In general, a data stream can be regarded as unbounded over time and is widely used in many fields, such as network public opinion analysis, stock market trends, satellite positioning, real-time financial monitoring, Internet of Things monitoring, and real-time meteorological monitoring. Even so, there is still much room for development in rainfall prediction based on large-scale meteorological stream data [3]. To make the most of limited construction land, the stability of mountain construction has become the key to the safety and stability of mountain construction projects. Among the many factors affecting mountain construction stability, rainfall infiltration is the main driver of slope instability and failure; in particular, the threat of continuous heavy rainfall to slope stability cannot be ignored. For this reason, studying the impact of rainfall on the stability of mountain construction slopes has important reference value for improving mountain construction safety. The main contributions of this paper are as follows: (1) We determine the safety of individuals at a construction site using the IoT. IoT devices are installed at the construction site and capture various parameters about the environment and surrounding conditions. (2) Rainfall directly affects safety at the construction site; therefore, estimating rainfall for a particular region or a specific site is mandatory, and we devise a scheme for rainfall estimation. The rest of the paper is organized as follows. Section 2 presents the mountain rainfall estimation based on the Internet of Things and the BIM technology site safety management. Section 3 contains the experimental verification, and the conclusion is given in Section 4. ## 2.
Mountain Rainfall Estimation and BIM Technology Site Safety Management Based on Internet of Things ### 2.1. Mountain Rainfall Estimation Based on Internet of Things With the sustained improvement and rapid growth of meteorological service informatization, it is essential to apply real-time meteorological data processing to the meteorological information business. Data mining is the most widely used and efficient method of meteorological data processing. With the support of Internet of Things technology, it is of great significance to analyze the characteristics and laws hidden in meteorological data through data mining, provide meteorological researchers with knowledge and decision support based on meteorological data, and predict future meteorological changes [4]. The single hidden layer feedforward neural network (SLFN) is widely used in many fields because of its ability to approximate nonlinear mappings, but a large number of parameters must be set manually during its application, so the network may fall into local optima due to improper parameter selection. Huang et al. proposed a learning algorithm for single hidden layer feedforward networks, called the extreme learning machine (ELM), which does not need to adjust all the network parameters. The input weights and hidden layer biases are assigned randomly at the beginning of training and fixed during the training process, while the output weights are obtained as the least squares solution of a system of linear equations. Therefore, in order to enhance the accuracy of mountain rainfall estimation, the extreme learning machine method is applied to the data provided by the Internet of Things to estimate mountain rainfall and ensure construction safety [5]. The specific estimation process is as follows. The mountain rainfall training sample set contains $N$ pairs $(x_i, t_i)$, where $x_i = [x_{i1}, x_{i2}, \dots, x_{in}]^T \in \mathbb{R}^n$ and $t_i = [t_{i1}, t_{i2}, \dots, t_{im}]^T \in \mathbb{R}^m$.
For a single hidden layer feedforward neural network with $L$ hidden layer nodes and excitation function $g$, the output can be expressed as

$$\sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = o_j, \quad j = 1, 2, \dots, N, \tag{1}$$

where $w_i = [w_{i1}, w_{i2}, \dots, w_{in}]^T$ is the input weight vector connecting the $i$th hidden node to the input neurons, $\beta_i = [\beta_{i1}, \beta_{i2}, \dots, \beta_{im}]^T$ is the output weight vector connecting the $i$th hidden node to the output neurons, and $b_i$ is the bias of the $i$th hidden node. If the network with this excitation function can approximate the $N$ training samples $(x_j, t_j)$ with zero error, then

$$\sum_{j=1}^{N} \| o_j - t_j \| = 0, \tag{2}$$

i.e., there exist $\beta_i$, $w_i$, and $b_i$ such that

$$\sum_{i=1}^{L} \beta_i g(w_i \cdot x_j + b_i) = t_j, \quad j = 1, 2, \dots, N. \tag{3}$$

This can be written in matrix form as

$$H \beta = T, \tag{4}$$

where

$$H(w_1, \dots, w_L, b_1, \dots, b_L, x_1, \dots, x_N) = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}, \tag{5}$$

$H$ is the hidden layer output matrix, $T$ the expected output, and $\beta$ the output weight matrix. Training the network amounts to finding $\hat{w}_i$, $\hat{b}_i$, and $\hat{\beta}$ such that

$$\left\| H(\hat{w}_i, \hat{b}_i) \hat{\beta} - T \right\| = \min_{w_i, b_i, \beta} \left\| H(w_i, b_i) \beta - T \right\|. \tag{6}$$

The output layer weights $\beta$ are then obtained by solving the linear system by the least squares method:

$$\beta = H^{+} T, \tag{7}$$

where $H^{+}$ is the Moore–Penrose generalized inverse of the hidden layer output matrix $H$. Let the initial training set be $N_0$, and train the data samples with the ELM algorithm in batch learning mode. From formula (7), $\beta_0 = K_0^{-1} H_0^T T_0$, where $K_0 = H_0^T H_0$.
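As a minimal sketch of this procedure, the snippet below implements the batch ELM solve of formulas (1)–(7) and one step of the recursive chunk update of formula (10), using a tanh excitation and synthetic regression data in place of the paper's IoT rainfall records; the function names and data are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, L):
    """Batch ELM: random input weights/biases (fixed after init),
    output weights from the Moore-Penrose pseudo-inverse of H."""
    n = X.shape[1]
    W = rng.standard_normal((L, n))   # input weights w_i
    b = rng.standard_normal(L)        # hidden biases b_i
    H = np.tanh(X @ W.T + b)          # hidden layer output matrix, N x L
    beta = np.linalg.pinv(H) @ T      # beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta

def oselm_update(W, b, beta, K, X_new, T_new):
    """One online-sequential step: fold a new data chunk into beta
    without retraining, via K_{k+1} = K_k + H_{k+1}^T H_{k+1}."""
    H1 = np.tanh(X_new @ W.T + b)
    K = K + H1.T @ H1
    beta = beta + np.linalg.solve(K, H1.T @ (T_new - H1 @ beta))
    return beta, K

# Toy regression standing in for rainfall data: a smooth target function.
X = rng.uniform(-1, 1, size=(200, 3))
T = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = elm_train(X[:150], T[:150], L=40)
H0 = np.tanh(X[:150] @ W.T + b)
K0 = H0.T @ H0                        # K_0 = H_0^T H_0
beta, K0 = oselm_update(W, b, beta, K0, X[150:], T[150:])
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

Because the input weights stay fixed, each new IoT data chunk only requires the rank-update of `K` and one linear solve, which is what makes the online-sequential variant cheap enough for streaming rainfall data.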
Assuming that another data chunk $N_1$ arrives, the problem becomes the minimization

$$E_{\min} = \left\| \begin{bmatrix} H_0 \\ H_1 \end{bmatrix} \beta - \begin{bmatrix} T_0 \\ T_1 \end{bmatrix} \right\|. \tag{8}$$

When the $N_1$ new training samples enter the model, the updated weight $\beta_1$ is obtained as

$$\beta_1 = K_1^{-1} \begin{bmatrix} H_0 \\ H_1 \end{bmatrix}^T \begin{bmatrix} T_0 \\ T_1 \end{bmatrix} = \beta_0 + K_1^{-1} H_1^T \left( T_1 - H_1 \beta_0 \right), \tag{9}$$

where $K_1 = K_0 + H_1^T H_1$. Extending formula (9) to the general case, when the $(k+1)$th chunk of samples enters the model, the recursive formula for the output weight $\beta$ of the online sequential extreme learning machine algorithm is

$$\beta_{k+1} = \beta_k + K_{k+1}^{-1} H_{k+1}^T \left( T_{k+1} - H_{k+1} \beta_k \right). \tag{10}$$

Formula (10) is the recursion used to obtain $\beta_{k+1}$ in the online sequential optimization problem. In order to give the online sequential extreme learning machine model with multiple parallel nodes higher prediction accuracy, this paper adopts stochastic gradient descent and a weighted averaging method to dynamically adjust the error weights of the prediction outputs of the different nodes in the cluster: nodes with higher rainfall estimation accuracy are given higher weights. The final rainfall estimate $\bar{y}$ is obtained as the error-weighted average of the outputs of the nodes:

$$\bar{y}_j = \frac{\sum_{i=1}^{k} \sigma_{ji} y_{ji}}{\sum_{i=1}^{k} \sigma_{ji}}, \tag{11}$$

where $\sigma_{ji}$ is the error weight of the $i$th learning machine node, $y_{ji}$ is the output value of the $i$th learning machine node, and $j$ denotes the $j$th batch in the estimation stage. Through the above procedure, the extreme learning machine method estimates rainfall in mountainous areas from the rainfall data in the Internet of Things. ### 2.2. BIM Technology Site Safety Management It is of immense importance to strengthen the application of BIM technology in the construction safety management of mountain engineering. BIM technology supports forward-looking overall planning of the construction safety management of the whole mountain project.
At present, the mountain engineering construction industry attaches great importance to BIM technology and regards it as a key force in promoting the reform of the whole industry [6–8]. BIM (Building Information Modeling) simulates the real information of buildings digitally. The information includes the three-dimensional geometry of buildings and the materials, weight, progress, price, and other properties of building components. In the engineering design stage, BIM technology mainly realizes virtual design or intelligent construction through the 3D model; with the help of related graphics, documents, and other information, these design elements are highly logically linked. When the model changes, the related graphics and documents are updated automatically. Currently, collision detection, energy consumption analysis, and cost prediction are fully used in the design stage. The implementation of integrated project delivery management is also a leading component of BIM application. BIM is used for whole life cycle management of the project, so as to integrate virtual design, CCB, maintenance, and management [9]. Through dynamic management, effective integration, and visual operation, information on project progress, manpower, materials, equipment, cost, and site in the construction stage can be dynamically integrated.
At the same time, all parties to the project can realize project consultation, construction quality control, safety management, cost control, and progress management through the access system, so as to achieve collaborative work and information sharing. In the safety management of mountain construction quality, first obtain the image information of the mountain construction safety status, then acquire the corresponding two-dimensional array of mountain construction safety, establish the mountain construction safety model, calculate the two-dimensional data length of mountain construction safety, and obtain the dynamic changes of mountain construction safety quality, so as to realize the safety management of the mountain construction site [10–13]. The specific steps are as follows. First, the safety status image of the mountain construction site, denoted $T_{CCD}$, is determined from the gray values of the $R$, $G$, and $B$ channels of the pixels:

$$T_{CCD} = f(R, G, B), \tag{12}$$

where $f$ is the brightness coefficient of the safety status monitoring image. Next, obtain the corresponding two-dimensional array of construction site safety, establish the dynamic monitoring model of construction safety, and calculate the two-dimensional array of construction safety using

$$T = A' \times T_{CCD}, \tag{13}$$

where $A'$ is the dynamic safety parameter of the construction site. The expression of the construction safety dynamic monitoring model is then

$$s_i^* = \frac{y_i - \bar{y}}{S_y} \times x_{ij}^*. \tag{14}$$

Third, the two-dimensional data field of construction site safety is calculated to provide a basis for the dynamic change expression of construction safety; the two-dimensional data length is

$$m_1 = P_1 \otimes X_0^T t_1 s_i^*. \tag{15}$$

Finally, determine the dynamic changes of construction safety and complete the dynamic monitoring of construction safety.
The expression is

$$y = \frac{m_1 - \tilde{b}}{n} \times k m_1. \tag{16}$$

To sum up, the establishment of the construction safety dynamic monitoring model can effectively complete dynamic monitoring of construction safety, but the problem of inaccurate monitoring requires corresponding optimization [14]. In view of the sparse distribution of dynamic supervision data of construction site safety and the large supervision errors of traditional monitoring methods, a construction site safety management method based on BIM is proposed. In the process of safety supervision on the construction site, BIM is integrated with the safety status monitoring information of the site. On the basis of obtaining all the safety status monitoring information of the site, the multiquadric radial basis function method is used to interpolate the monitoring data, and the results are used to build the safety supervision model of the construction site [15]. The specific calculation steps are as follows: (1) Carry out interpolation on the safety data of the construction site to provide a basis for the trivariate interpolation function of the spatial data of safety state monitoring.
On the basis of BIM and the construction site safety monitoring information, use the multiquadric radial basis function method to interpolate over the construction site safety data length:

$$F(x, y) = \sum_{k=1}^{n} a_k h_k(x, y) + \sum_{k=1}^{m} b_k q_k(x, y), \tag{17}$$

where $(x, y)$ is a point in the basis-function field of the construction site safety monitoring data, $q_k(x, y)$ is a polynomial basis of the data field of degree less than $m$, $h_k$ is a radial basis function of the data field, and $a_k$ and $b_k$ are the coefficients of the data field. (2) Calculate the trivariate interpolation function of the spatial data of construction site safety status monitoring, so as to provide the foundation for the information model of construction site safety monitoring. The independent variables of construction safety are extended to obtain the trivariate interpolation function of the spatial monitoring data:

$$F(x, y, z) = \sum_{j=1}^{n} a_j \left[ (x - x_j)^2 + (y - y_j)^2 + (z - z_j)^2 + c^2 \right]^{1/2}, \tag{18}$$

where $c$ is an arbitrary constant. (3) Establish the information model of safety condition monitoring on the construction site, and update and refine it. By substituting the values $f_i$ at the $n$ points $(x_i, y_i, z_i)$ into the above formula, the construction site safety status monitoring information model is obtained:

$$f_i = \sum_{j=1}^{n} a_j \left[ (x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2 + c^2 \right]^{1/2}. \tag{19}$$

Let $Q_{ij}$ denote the multiquadric basis term between points $i$ and $j$ and $a_j$ the coefficients of the monitoring model. After updating and refining the safety status monitoring information model of the construction site, the above model can be rewritten as

$$f_i = \sum_{j=1}^{n} a_j Q_{ij}. \tag{20}$$

(4) Obtain the physical quantity value of the safety monitoring information node on the construction site.
Now, the model expression in the above formula is transformed into matrix form:

$$F = Q A, \tag{21}$$

where $F$ is the matrix form of the data information monitoring model of the construction site safety state (collecting the values $f_i$), $Q$ is the basis-function matrix of the model, and $A$ is the vector of construction site safety state parameters. Then, the physical quantity value of the construction site quality monitoring information node, denoted $P$, is calculated by

$$P = Q_P Q^{-1} F, \tag{22}$$

where $Q_P$ denotes the discrete points of the safety monitoring information data field on the construction site and $Q^{-1}$ is the inverse of the basis-function matrix. To sum up, in the process of safety supervision on the construction site, the multiquadric radial basis function is used to interpolate the safety monitoring data, establish the data information monitoring model for safety status monitoring, and calculate the physical values of the safety monitoring information nodes, laying the groundwork for the realization of safety management on the construction site. The physical value of the safety monitoring information node is then combined with the distance mapping method to obtain the similarity between the safety monitoring competition layer neurons and their input mode weight vectors. The dimension of the input vector is reduced and mapped to the two-dimensional plane, so as to effectively complete the safety management of the construction site. The specific computation steps are as follows: (1) Determine the neuron competition rules for the safety monitoring data information on the construction site and obtain the input mode weights.
Suppose $p'$ is the input vector of safety status monitoring on the construction site, $w_j$ the initial weight vector of neuron $j$, and $M$ the number of neurons in the output layer, and let the Euclidean distance serve as the discrimination function of the competition. Based on the physical quantity values of the construction site safety monitoring information nodes obtained above, the competition rule for the safety monitoring data information nodes is

$$\left\| p' - w_g \right\| = \min_{j} \left\| p' - w_j \right\|. \tag{23}$$

Using this formula, the neuron $g$ is determined as the competition winner. The weight of the input mode of neuron $j$ in the neighborhood of $g$ is then updated by

$$w_j(t+1) = w_j(t) + \eta(t) h_{ig}(t) \left[ p' - w_j(t) \right], \tag{24}$$

where $w_j(t+1)$ is the modified weight of neuron $j$ in the neighborhood of $g$, $w_j(t)$ its current weight, $\eta(t)$ the learning rate at iteration step $t$, and $h_{ig}(t)$ the topology factor. (2) Determine the distance between the input vector and the weight vectors to complete the similarity comparison. The $M$ neurons are fixed on the two-dimensional output plane grid with coordinates $x_j$, and the distance between $p'$ and $w_j$ is

$$d_j = \left\| p' - w_j \right\|. \tag{25}$$

The similarity between the trained output layer neuron $j$ and $p'$ is calculated by

$$R_j(p') = \frac{f(d_j) N_g(j)}{\sum_{i=1}^{M} f(d_i) N_g(i)}, \tag{26}$$

where $g$ is the best matching unit (BMU) of the construction site safety monitoring vector $p'$ and $N_g(j)$ is the neighborhood indicator: if neuron $j$ is within the neighborhood of $g$, the value of $N_g(j)$ is 1, otherwise it is 0.
$f(d_j)$ is a function of the distance $d_j$. (3) After obtaining the similarity between all output layer neurons of construction site safety monitoring and the input vector $p'$, the dimension of the input vector $p'$ is reduced and mapped to the two-dimensional plane using the following formula, completing the safety supervision of the construction site:

$$x_p = \sum_{j=1}^{M} R_j(p') x_j. \tag{27}$$

To sum up, by fusing the distance mapping method to obtain the similarity between the competitive layer neurons of the construction site safety monitoring information and their input mode weight vectors, and mapping the dimension-reduced input vector onto the two-dimensional plane, construction site safety supervision can be effectively completed.
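As a concrete illustration of the multiquadric interpolation in equations (17)–(22) above, the sketch below fits coefficients $a_j$ to scattered 3-D monitoring points and reproduces the sampled values at those points; the positions, readings, and function names are illustrative stand-ins, not the paper's dataset.

```python
import numpy as np

def multiquadric_fit(points, values, c=1.0):
    """Solve for coefficients a_j so that
    f(p) = sum_j a_j * sqrt(||p - p_j||^2 + c^2)
    reproduces the monitoring values at the sample points."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    Q = np.sqrt(d2 + c ** 2)          # Q_ij, the multiquadric basis matrix
    return np.linalg.solve(Q, values) # f_i = sum_j a_j Q_ij

def multiquadric_eval(points, a, query, c=1.0):
    """Evaluate the fitted interpolant at arbitrary query points."""
    d2 = np.sum((query[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2 + c ** 2) @ a

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(30, 3))          # (x, y, z) monitoring positions
vals = pts[:, 0] + 2 * pts[:, 1] - pts[:, 2]   # stand-in safety readings
a = multiquadric_fit(pts, vals)
recon = multiquadric_eval(pts, a, pts)
max_err = np.max(np.abs(recon - vals))         # interpolant matches the samples
```

The multiquadric interpolation matrix is nonsingular for distinct sample points, so the direct linear solve above always yields a unique set of coefficients, which is one reason the multiquadric kernel is popular for scattered monitoring data.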
f(d_j) is a function of the distance d_j.

(3) After obtaining the similarity between every output-layer neuron of the construction-site safety monitoring and the input vector p′, the dimension of p′ is reduced and mapped onto the two-dimensional plane, completing the safety supervision of the construction site:

(27) x(p′) = Σ_{j=1}^{M} R_j(p′) x_j.

To sum up, by using the distance mapping method to obtain the similarity between the competition-layer neurons of the safety monitoring information and their input-mode weight vectors, and by mapping the dimension-reduced input vector onto the two-dimensional plane, the safety supervision of the construction site can be effectively completed.

## 3. Experimental Verification

In order to verify the practical performance of the proposed method for mountain rainfall estimation based on the Internet of Things and site safety management based on BIM technology, comparative verification experiments are carried out.

### 3.1. Experimental Environment and Scheme

A mountain construction site in the province is selected as the research area; the rainfall in this area is estimated, and the safety of the construction site is managed. The construction site is shown in Figure 1.

Figure 1 Construction site.

For this site, the rainfall estimation accuracy of the method and the calculation accuracy of the site safety factor are verified, and the calculated results are compared with the actual results to fully assess the safety management performance of the method.

### 3.2. Rainfall Estimation Accuracy

Rainfall is very important for the safety management of a mountain construction site, and changes in rainfall must be watched continuously during safety management.
Therefore, the impact of rainfall should be considered when verifying the performance of the method. The comparison between the rainfall estimates of this method and the actual rainfall is shown in Figure 2.

Figure 2 Rainfall estimation accuracy results.

From the comparison of rainfall estimation accuracy in Figure 2, it can be seen that, over the one-month construction period, the rainfall estimates of this method are essentially the same as the actual rainfall, and the maximum estimation error is no more than 2 mm. This shows that the method can accurately estimate rainfall at the construction site from Internet of Things data, providing a basis for improving the safety management of the site.

### 3.3. Safety Factor of the Construction Site

The safety factor of the construction site is the basis for judging its safety situation. It is calculated from the rainfall estimation results, and the safety factors computed by this method are compared with the actual safety factors to verify the performance of the method. The safety factor results are shown in Figure 3.

Figure 3 Safety factor of the construction site.

From the comparison of safety factors in Figure 3, it can be seen that this method accurately calculates the safety factor of the construction site from the rainfall estimation results, and the computed safety factors are consistent with the actual values. Hence, the method can effectively manage the safety of the construction site.

## 4. Conclusion

In order to improve the safety of mountain construction, it is necessary to strengthen safety management at the construction site. Thus, a method for mountain rainfall estimation and BIM-based site safety management built on the Internet of Things is proposed, and its performance is verified both theoretically and experimentally. The method achieves high accuracy in rainfall estimation and in safety-factor calculation, and the results for both indexes are essentially consistent with the actual results. Accordingly, the proposed method based on the Internet of Things and BIM technology can better meet the needs of rainfall estimation and safety management.

### 4.1. Future Scope

This work can be extended further. The method based on the Internet of Things can meet the demands of different construction sites, and it can also be applied to plain areas in addition to hilly areas.

---
*Source: 1017200-2021-10-07.xml*
# Mountain Rainfall Estimation and BIM Technology Site Safety Management Based on Internet of Things

**Authors:** Peng Liu
**Journal:** Mobile Information Systems (2021)
**Category:** Computer Science
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2021/1017200
--- ## Abstract Mountain rainfall estimation is a major source of information for determining the safety of a geographical (mountainous) area. It can be done easily by using a modeling and simulation application, BIM, which is a building information modeling tool. It helps in transforming the real-time scenarios into the construction and business models. Now, this whole process can be easily realized by the help of an evolving technology known as IoT (Internet of Things). Internet of Things is supposedly going to take over the world by the end of this decade. It will reshape the whole communication architecture. IoT is actually going to be a basis for D2D (Device to Device) communication. Here, the MTC (Machine Type Communications) are going to take place which have almost zero human involvement. Now, in order to overcome the problem that the traditional construction site safety management method is difficult to accurately estimate the rainfall, resulting in poor safety management effect, a mountain rainfall estimation and BIM technology site safety management methods based on Internet of things are proposed. Firstly, based on the Internet of Things data, the limit learning machine method is used to accurately estimate the mountain rainfall. Secondly, based on the rainfall estimation results and combined with BIM technology, the construction site safety and management model is constructed. In the end, experimental verification is carried out. The experimental results show that this method can precisely estimate the rainfall in mountainous areas, and the computational results of safety factor are basically consistent with the actual results, indicating that the safety management effect of this system is good. In this paper, I reveal the complications and drawbacks associated with the ongoing mechanisms used for mountain rainfall estimations and how to overcome them by using the new technology, i.e., Internet of Things. --- ## Body ## 1. 
Introduction Rainfall is an important parameter of mountain construction, which largely reflects the trend of disasters. Rainfall has a significant impact on the agricultural production, soil, water loss, and engineering applications. Accurate prediction of rainfall in an area can help agriculture and water conservancy departments improve their ability to prevent drought and flood disasters and minimize the harm. For this, we can make a use of a modeling tool BIM. After modeling the whole scenario of a geographical (mountainous) zone, one can relate the rainfall estimation and BIM technology to the Internet of Things. This paper further throws light on how the whole process is going to take place including the estimation of rainfall and making model in a BIM technology.In recent years, with the frequent occurrence of flood disasters in China, precise and timely use of meteorological data to predict rainfall has become more and more dominant [1]. The advent of the big data era has also brought new challenges to the weather forecasting industry. Meteorological data mainly comes from ground observation, meteorological satellite remote sensing, weather radar, and numerical prediction products. These four types of data mentioned earlier account for more than 90% of the total data and are directly applied to meteorological operations, weather forecasting, climate forecasting, and meteorological services [2]. For this purpose, we might take help from stream data which is a set of digitally encoded and continuous signals. Generally, data flow can be regarded as an infinite over time and is widely used in many fields, such as network public opinion analysis, stock market trend, satellite positioning, real-time financial monitoring, Internet of Things monitoring, and real-time meteorological monitoring. 
Along with this, there is still much room for development in the field of rainfall prediction based on large-scale meteorological flow data [3].In order to solve the problem of construction land, the stability of mountain construction has become the key to the safety and stability of mountain construction projects. Among the many aforementioned factors affecting the mountain construction stability, rainfall infiltration is the main factor influencing the slope instability and failure, especially the threat of continuous heavy rainfall to slope stability cannot be ignored. On the account of this, study the impact of rainfall on mountain construction slope stability; it has an important reference value for upgrading mountain construction safety. The main contributions of this paper are as follows:(1) In this paper, we determining the safety of individual at a construction site using the IoT. The IoT devices are installed at constriction site and capture various parameters about the environment and surrounding conditions.(2) The rainfall is directly affecting the safety at construction site; therefore, the estimation of rainfall for a particular region or a specific site is mandatory. We have devised a scheme for rainfall estimation.The rest of the paper is organized as follows. Section2 contains the mountain rainfall estimation based on Internet of Things and BIM technology site safety management. Section 3 contains the experimental verification, and conclusion is mentioned in Section 4. ## 2. Mountain Rainfall Estimation and BIM Technology Site Safety Management Based on Internet of Things ### 2.1. Mountain Rainfall Estimation Based on Internet of Things With the sustained improvement and rapid growth of meteorological service informatization level, it is very essential to apply real-time meteorological data processing to meteorological information business. 
Furthermore, data mining technology is the most widely used and efficient method of meteorological data processing. With the support of Internet of Things technology, it is a motif of great significance to analyze the meteorological characteristics and laws of meteorological data hiding through data mining, provide meteorological researchers with knowledge decision-making based on meteorological data, and predict future meteorological changes [4].Single hidden layer feedforward neural network (SLFN) is widely used in many fields because of its ability to approximate nonlinear mapping, but it needs to set a large number of parameters artificially in the process of application, so the network will fall into local optimization due to improper selection of parameters. Huang Guangbin et al. proposed a learning algorithm of single hidden layer feedforward neural network, called limit learning machine (ELM), which does not need to adjust all the network specifications. The input weight and hidden layer deviation are given randomly at the beginning of training and fixed in the training process, while the output connection weight can be obtained by solving the least square solution of linear equations. Therefore, in order to enhance the accuracy of mountain rainfall estimation, based on the data provided by the Internet of things, the limit learning machine method is used to estimate mountain rainfall to ensure construction safety [5]. The specific estimation process is as follows.The mountain rainfall training sample set isN, where xi,ti meets xi=xi1,xi2,…,xinT∈Rn and ti=ti1,xti2,…,timT∈Rm. 
For a single hidden layer feedforward neural network with L hidden layer nodes, the excitation function can be expressed as(1)∑i=1Lβigiwi⋅xj+bi=oj,j=1,2,…,N,where wi=wi1,wi2,…,winT represents the input weight connecting the ith hidden layer and the input neuron, βi=βi1,βi2,…,βimT represents the output weight connecting the ith hidden layer node and the output neuron, and bi represents the deviation of the ith hidden layer node. If the single hidden layer feedforward neural network of the excitation function can approach N training samples xi,ti with zero error, then(2)∑j=1Loj−tj=0,(3)∑i=1Lβigwi⋅xj+bi=tj,j=1,2,…,N.Therefore, it can be expressed in the matrix form:(4)Hβ=T,(5)Hw1,w2,…,wL,b1,b2,…,bL,x1,x2,…,xN=gw1x1+b1⋯gwLx1+bL⋮⋮gw1xN+bL⋯gwLxN+bL,β=β1T⋮βLT,T=t1T⋮tNTN×m,where H represents the output matrix of the hidden layer, T represents the expected output, and β represents the output weight. In order to train the single hidden layer neural network, we want to get w^i, b^i, and β^i. The calculation formula is(6)Hw^i,b^iβ^i−T=minHwi,biβi−T.Then, the output layer weightβ can be obtained by solving the following linear equations by the least square method:(7)β=H+T,where H+ represents the Moore–Penrose generalized inverse of the hidden layer output matrix H.Let the initial training set beN0, and the data sample is trained by ELM algorithm in the batch learning mode. β0=K0−1H0TT0 can be obtained from formula (7), where K0=H0TH0. 
Assuming that there is another dataset N1, the problem solving becomes a minimization problem:(8)Emin=H0H1β−T0T1.When a new training data enters the system, assuming thatN1 samples enter the model, the following β1 can be obtained:(9)β1=K1−1H0H1T0T1=β0+K1−1H1TT1−H1β0.Formula (9) is extended to the general problem, that is, when k samples enter the model, the recursive formula of output weight β of online sequence limit learning machine algorithm can be obtained:(10)βk+1=βk+Kk+1−1Hk+1TTk+1−Hk+1βk.Formula (10) is a recursive formula derived in order to find βk+1 in an online sequence optimization problem.In order to make the online sequential limit learning machine model with multiple parallel nodes have higher prediction accuracy, this paper adopts the random gradient descent algorithm and weighted average method to dynamically adjust the error weight of the prediction output results of multiple different nodes in the cluster. Nodes with high accuracy of rainfall estimation are given higher weights. The final rainfall estimatey¯ can be obtained by weighted average of error weights through the output results of each node:(11)y¯j=∑i=1kσjiyji∑i=1kσji,where σji is the error j weight of the ith learning machine node, yji is the output value of the ith learning machine node, and j represents the jth batch in the estimation stage. Through the above estimation, based on the rainfall data in the Internet of Things, the limit learning machine method is used to estimate the rainfall in mountainous areas. ### 2.2. BIM Technology Site Safety Management It is of an immense importance to strengthen the application of BIM technology for construction safety management of mountain engineering. BIM technology is based on the forward-looking overall planning and planning of the construction safety management of the whole mountain project. 
At present, the mountain engineering construction industry attaches great importance to BIM technology and regards it as a key force to promote the reform of the whole mountain engineering construction industry [6–8].BIM (Building Information Modeling) simulates the real information of buildings through digital information simulation. The information includes three-dimensional geometry of buildings, materials, weight, progress, price, and other information of building components. In the engineering design stage, BIM technology mainly realizes virtual design or intelligent construction through the 3D model, and with the help of building related graphics, documents and other information these design elements are highly logical. When the model changes, the related graphics or documents will be updated automatically. Currently, collision detection, energy consumption analysis, and cost prediction are fully used in the design stage. The implementation of project integrated delivery management is also the leading component of BIM application. BIM is used for the whole life cycle management of the project, so as to realize the integration of virtual design, CCB, maintenance, and management [9]. Through dynamic management, effective integration, and visual operation, the information of project progress, manpower, materials, equipment, cost, and site in the construction stage can be dynamically integrated. 
At the same time, all parties of the project can realize project negotiation, negotiation, construction quality control, safety management, cost control, and progress management through the access system, so as to realize collaborative work and information sharing.In the safety management of mountain construction quality, first obtain the image information of mountain construction safety status, then acquire the corresponding two-dimensional array of mountain construction safety, after this establish the mountain construction safety model, calculate the two-dimensional data length of mountain construction safety, and gain the dynamic changes of mountain construction safety quality, so as to realize the safety management of mountain construction site [10–13]. The specific steps are as follows.Primarily, the safety status image of mountain construction site represented byTCCD is determined by using the gray values of R, G, and B of pixels, which is expressed as follows:(12)TCCD=fR,G,B,where f represents the brightness coefficient of the safety status monitoring image.Moreover, obtain the corresponding two-dimensional array of construction site safety, establish the dynamic monitoring model of construction safety, and calculate the two-dimensional array of construction safety by using the following formula:(13)T=A′×TCCD,where A′ represents the dynamic parameter of safety on the construction site. Then, the expression of the construction safety dynamic monitoring model is(14)si∗=yi−y¯Sy×xij∗.In the third place, the two-dimensional data field of construction site safety is calculated to provide a basis for obtaining the dynamic change expression of construction safety, so the two-dimensional data length calculation formula is(15)m1=P1⊗X0Tt1si∗.In the end, determine the dynamic changes of construction safety and complete the dynamic monitoring of construction safety. 
The expression is(16)y=m1−b˜n×km1.To sum up, it can be elaborated that the establishment of construction safety dynamic monitoring model can effectively complete the construction safety dynamic monitoring, but according to the problem of inaccurate monitoring, corresponding optimization treatment is needed [14].In view of the sparse distribution of dynamic supervision data of construction site safety and the large error of construction site safety supervision when using traditional monitoring methods to monitor and manage construction safety, a construction site safety management method based on BIM is proposed.In the process of safety supervision on the construction site, BIM is now integrated with the safety status monitoring information on the construction site. On the basis of obtaining all the safety status monitoring information on the construction site, the multiquadric radial basis function method is used to interpolate the monitoring data on the construction site, and the calculation results are used to build the safety supervision model on the construction site [15]. The specific calculation steps are as follows:(1) Carry out interpolation calculation on the safety data of the construction site to provide a basis for obtaining the interpolation three-variable function of the spatial data of the safety state monitoring of the construction site. 
On the basis of BIM and construction site safety monitoring information, use the Multiquadric radial basis function method to interpolate the construction site safety data length:(17)Fx,y=∑k=1nakhkx,y+∑k=1mbkqkx,y,wherex,y represents a point in the basis function of the construction site safety monitoring data field, qkx,y represents the polynomial basis of the construction site safety monitoring data field, which is less than m, hk represents the sparsity of the construction site safety monitoring data field, and ak and bk represent the coefficients of the construction site safety monitoring data field.(2) Calculate the interpolation three-variable function of spatial data of construction site safety status monitoring, so as to provide the foundation for establishing the information model of construction site safety monitoring. The independent variables of construction safety are extended to obtain the interpolation three variable functions of spatial data of construction safety status monitoring, which is expressed by the following formula:(18)Fx,y,z=∑j=1najx−xj2+y−yj2+z−zj2+c21/2Fx,y,wherec represents any constant.(3) Establish the information model of safety condition monitoring on the construction site, and update and refine it. By importing the values ofn points xi,yi,zi represented by fi into the above formula, the construction site safety status monitoring information model can be made, which is expressed by the following formula:(19)fi=∑j=1najxi−xj2+yi−yj2+zi−zj2+c2Fx,y,z.It is assumed thatQj represents any quadratic basis function and aj represents the coefficient of the monitoring model. After updating and enhancing the safety status monitoring information model of the construction site, the above model can be rewritten as(20)fi=∑j=1najQij.(4) Obtain the physical quantity value of the safety monitoring information node on the construction site. 
Now, the model expression in the above formula is transformed into the matrix form and expressed by the following formula:(21)F=QAfi.In the formula,F represents the expression matrix of the data information monitoring model of the safety state of the construction site, Q represents the difference basis function of the model, and A represents the safety state parameters of the construction site. Then, the physical quantity value of the construction site quality monitoring information node represented by P is calculated by the following formula:(22)P=QPQ−1F.In the formula,QP represents the discrete points of the safety monitoring information data field on the construction site and Q−1 represents the discrete basis function.To sum up, it can be illustrated that, in the process of safety supervision on the construction site, the multiquadric radial basis function is tied down to interpolate the safety monitoring data on the construction site, establish the data information monitoring model for safety status monitoring on the construction site, and calculate the physical value of the safety monitoring information node on the construction site, so as to lay a groundwork for the realization of safety management on the construction site.The physical value of the safety monitoring information node on the construction site is combined with the distance mapping method to obtain the similarity between the safety monitoring information competition layer neuron and its input mode weight vector. The dimension of the input vector is reduced and mapped to the two-dimensional plane, so as to effectively complete the safety management of the construction site. The specific computation steps are as follows:(1) Determine the neuron competition rules of safety monitoring data information on the construction site and obtain its input mode weight. 
SupposeP′ represents the input vector of safety status monitoring on the construction site, wj represents the initial value given to the weight vector, M represents the number of neurons in the output layer, and Euclidean distance is used as the discrimination function of competition; based on the physical quantity value of the construction site safety monitoring information node obtained by the above formula, the following formula is used to represent the competition rules of the construction site safety monitoring data information node:(23)P′−wg=minjp′−wj×pp′×wj×M.Using the above formula, the neuron represented byg can be determined as the competition winner. The weight of neuron j input mode in gG neighborhood is expressed by the subsequent formula:(24)wjt+1=wjt+ηthigtp′−wjtP′−wg,wherewjt+1 represents the modified weight of neuron j in g neighborhood, wjt represents the weight of neuron j in g neighborhood, ηt represents the iterative learning rate in step t, and higt represents the topology factor.(2) Determine the distance between the input vector and the initial value of the weight vector to complete the similarity comparison.M represents the number of neurons, which are fixed on the two-dimensional output plane grid, xj represents the coordinates, and distance between P and W is calculated by the following formula: represents the number of neurons, which are fixed on the two-dimensional output plane grid, X represents the coordinates, and the distance between p′ and wj is calculated by the following formula:(25)dj=M×p′−wjxj.Calculate the similarity between the trained output layer neuronsj and p′ by using the following formula:(26)Rjp′=fdjNgj∑i=1MfdjNgj,whereg represents the BMU of the construction site safety monitoring vector p′ and Ngj represents the exponential function of the construction site safety monitoring. Assuming that the neuron j is within the neighborhood g of j, the value of Ngj is 1, otherwise it is 0. 
fdj is a function of distance dj.(3) After obtaining the similarity between all output layer neurons of safety monitoring on the construction site and input vectorp′, the dimension of input vector p′ is reduced and mapped to two-dimensional plane by using the following formula to complete the safety supervision on the construction site:(27)xp=∑j=1MRjpxj.To sum up, it can be stated that, by fusing the distance mapping method to obtain the phase velocity of the construction site safety monitoring information, the competitive layer neuron, and its input mode weight vector, mapping the dimension reduction of the input vector to the two-dimensional flat key, and the construction site safety supervision can be effectively completed. ## 2.1. Mountain Rainfall Estimation Based on Internet of Things With the sustained improvement and rapid growth of meteorological service informatization level, it is very essential to apply real-time meteorological data processing to meteorological information business. Furthermore, data mining technology is the most widely used and efficient method of meteorological data processing. With the support of Internet of Things technology, it is a motif of great significance to analyze the meteorological characteristics and laws of meteorological data hiding through data mining, provide meteorological researchers with knowledge decision-making based on meteorological data, and predict future meteorological changes [4].Single hidden layer feedforward neural network (SLFN) is widely used in many fields because of its ability to approximate nonlinear mapping, but it needs to set a large number of parameters artificially in the process of application, so the network will fall into local optimization due to improper selection of parameters. Huang Guangbin et al. proposed a learning algorithm of single hidden layer feedforward neural network, called limit learning machine (ELM), which does not need to adjust all the network specifications. 
The input weight and hidden layer deviation are given randomly at the beginning of training and fixed in the training process, while the output connection weight can be obtained by solving the least square solution of linear equations. Therefore, in order to enhance the accuracy of mountain rainfall estimation, based on the data provided by the Internet of things, the limit learning machine method is used to estimate mountain rainfall to ensure construction safety [5]. The specific estimation process is as follows.The mountain rainfall training sample set isN, where xi,ti meets xi=xi1,xi2,…,xinT∈Rn and ti=ti1,xti2,…,timT∈Rm. For a single hidden layer feedforward neural network with L hidden layer nodes, the excitation function can be expressed as(1)∑i=1Lβigiwi⋅xj+bi=oj,j=1,2,…,N,where wi=wi1,wi2,…,winT represents the input weight connecting the ith hidden layer and the input neuron, βi=βi1,βi2,…,βimT represents the output weight connecting the ith hidden layer node and the output neuron, and bi represents the deviation of the ith hidden layer node. If the single hidden layer feedforward neural network of the excitation function can approach N training samples xi,ti with zero error, then(2)∑j=1Loj−tj=0,(3)∑i=1Lβigwi⋅xj+bi=tj,j=1,2,…,N.Therefore, it can be expressed in the matrix form:(4)Hβ=T,(5)Hw1,w2,…,wL,b1,b2,…,bL,x1,x2,…,xN=gw1x1+b1⋯gwLx1+bL⋮⋮gw1xN+bL⋯gwLxN+bL,β=β1T⋮βLT,T=t1T⋮tNTN×m,where H represents the output matrix of the hidden layer, T represents the expected output, and β represents the output weight. In order to train the single hidden layer neural network, we want to get w^i, b^i, and β^i. 
The calculation formula is

(6) ‖H(ŵ_i, b̂_i)β̂_i − T‖ = min_{w_i, b_i, β_i} ‖H(w_i, b_i)β_i − T‖.

The output layer weight β can then be obtained by solving the following linear system with the least-squares method:

(7) β = H⁺T,

where H⁺ represents the Moore–Penrose generalized inverse of the hidden layer output matrix H.

Let the initial training set be N_0, and let the data samples be trained by the ELM algorithm in batch learning mode. From formula (7), β_0 = K_0^{−1}H_0^T T_0, where K_0 = H_0^T H_0. Assuming that another dataset N_1 arrives, the problem becomes the minimization

(8) E_min = ‖[H_0; H_1]β − [T_0; T_1]‖.

When new training data enter the system, assuming that the N_1 samples enter the model, the updated weight β_1 can be obtained:

(9) β_1 = K_1^{−1}[H_0; H_1]^T[T_0; T_1] = β_0 + K_1^{−1}H_1^T(T_1 − H_1β_0),

where K_1 = K_0 + H_1^T H_1. Extending formula (9) to the general case, that is, when the k-th batch of samples enters the model, the recursive formula for the output weight β of the online sequential extreme learning machine algorithm is obtained:

(10) β_{k+1} = β_k + K_{k+1}^{−1}H_{k+1}^T(T_{k+1} − H_{k+1}β_k).

Formula (10) is the recursion used to find β_{k+1} in the online sequential optimization problem.

To give the online sequential extreme learning machine model with multiple parallel nodes higher prediction accuracy, this paper adopts the stochastic gradient descent algorithm and a weighted average method to dynamically adjust the error weights of the prediction outputs of the different nodes in the cluster. Nodes with high rainfall estimation accuracy are given higher weights. The final rainfall estimate ȳ is obtained by the error-weighted average of the outputs of each node:

(11) ȳ_j = ∑_{i=1}^{k} σ_{ji} y_{ji} / ∑_{i=1}^{k} σ_{ji},

where σ_{ji} is the error weight of the i-th learning machine node for the j-th batch, y_{ji} is the output value of the i-th learning machine node, and j denotes the j-th batch in the estimation stage. Through the above procedure, based on the rainfall data in the Internet of Things, the extreme learning machine method is used to estimate the rainfall in mountainous areas.

## 2.2. BIM Technology Site Safety Management

It is of immense importance to strengthen the application of BIM technology in the construction safety management of mountain engineering. BIM technology provides forward-looking overall planning for the construction safety management of the whole mountain project. At present, the mountain engineering construction industry attaches great importance to BIM technology and regards it as a key force in promoting the reform of the whole mountain engineering construction industry [6–8].

BIM (Building Information Modeling) represents the real information of buildings through digital simulation. The information includes the three-dimensional geometry of buildings as well as the materials, weight, progress, price, and other attributes of building components. In the engineering design stage, BIM technology mainly realizes virtual design or intelligent construction through the 3D model, and the design elements are kept logically consistent with the associated graphics, documents, and other information. When the model changes, the related graphics or documents are updated automatically. Currently, collision detection, energy consumption analysis, and cost prediction are fully used in the design stage. The implementation of integrated project delivery management is also a leading component of BIM application. BIM is used for whole life cycle management of the project, so as to realize the integration of virtual design, CCB, maintenance, and management [9]. Through dynamic management, effective integration, and visual operation, the information on project progress, manpower, materials, equipment, cost, and site in the construction stage can be dynamically integrated.
At the same time, all parties to the project can carry out project negotiation, construction quality control, safety management, cost control, and progress management through the access system, so as to realize collaborative work and information sharing.

In the safety management of mountain construction quality, first obtain the image information of the mountain construction safety status, then acquire the corresponding two-dimensional array of mountain construction safety, establish the mountain construction safety model, calculate the two-dimensional data length of mountain construction safety, and obtain the dynamic changes of mountain construction safety quality, so as to realize the safety management of the mountain construction site [10–13]. The specific steps are as follows.

First, the safety status image of the mountain construction site, denoted T_CCD, is determined by using the gray values of R, G, and B of the pixels:

(12) T_CCD = f(R, G, B),

where f represents the brightness coefficient of the safety status monitoring image.

Next, obtain the corresponding two-dimensional array of construction site safety, establish the dynamic monitoring model of construction safety, and calculate the two-dimensional array of construction safety by using the following formula:

(13) T = A′ × T_CCD,

where A′ represents the dynamic parameter of safety on the construction site. The expression of the construction safety dynamic monitoring model is then

(14) s_i* = (y_i − ȳ)/S_y × x_ij*.

Third, the two-dimensional data field of construction site safety is calculated to provide a basis for obtaining the dynamic change expression of construction safety, so the two-dimensional data length calculation formula is

(15) m_1 = P_1 ⊗ X_0^T t_1 s_i*.

Finally, determine the dynamic changes of construction safety and complete the dynamic monitoring of construction safety.
The expression is

(16) y = (m_1 − b̃_n) × k m_1.

To sum up, the establishment of the construction safety dynamic monitoring model can effectively complete the dynamic monitoring of construction safety, but because the monitoring can be inaccurate, corresponding optimization is needed [14].

In view of the sparse distribution of the dynamic supervision data of construction site safety and the large errors in construction site safety supervision when traditional monitoring methods are used, a construction site safety management method based on BIM is proposed.

In the process of safety supervision on the construction site, BIM is integrated with the safety status monitoring information of the site. On the basis of obtaining all the safety status monitoring information on the construction site, the multiquadric radial basis function method is used to interpolate the monitoring data, and the calculation results are used to build the safety supervision model of the construction site [15]. The specific calculation steps are as follows.

(1) Carry out interpolation calculation on the safety data of the construction site to provide a basis for obtaining the trivariate interpolation function of the spatial data of safety state monitoring on the construction site.
On the basis of BIM and the construction site safety monitoring information, use the multiquadric radial basis function method to interpolate the construction site safety data length:

(17) F(x, y) = ∑_{k=1}^{n} a_k h_k(x, y) + ∑_{k=1}^{m} b_k q_k(x, y),

where (x, y) represents a point in the basis function of the construction site safety monitoring data field, q_k(x, y) represents the polynomial basis of the data field, of degree less than m, h_k represents the sparsity of the data field, and a_k and b_k represent the coefficients of the data field.

(2) Calculate the trivariate interpolation function of the spatial data of construction site safety status monitoring, so as to provide the foundation for establishing the information model of construction site safety monitoring. The independent variables of construction safety are extended to obtain

(18) F(x, y, z) = ∑_{j=1}^{n} a_j[(x − x_j)² + (y − y_j)² + (z − z_j)² + c²]^{1/2},

where c represents an arbitrary constant.

(3) Establish the information model of safety condition monitoring on the construction site, and update and refine it. By substituting the values f_i at the n points (x_i, y_i, z_i) into the above formula, the construction site safety status monitoring information model is obtained:

(19) f_i = ∑_{j=1}^{n} a_j[(x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)² + c²]^{1/2}.

It is assumed that Q_j represents any quadric basis function and a_j represents the coefficients of the monitoring model. After updating and refining the safety status monitoring information model of the construction site, the above model can be rewritten as

(20) f_i = ∑_{j=1}^{n} a_j Q_ij.

(4) Obtain the physical quantity value of the safety monitoring information node on the construction site.
Now, the model expression in the above formula is transformed into matrix form:

(21) F = QA.

In this formula, F represents the expression matrix of the data information monitoring model of the safety state of the construction site, Q represents the interpolation basis function matrix of the model, and A represents the safety state parameters of the construction site. Then, the physical quantity value of the construction site quality monitoring information node, denoted P, is calculated by

(22) P = Q_P Q^{−1} F.

In this formula, Q_P represents the matrix of discrete points of the safety monitoring information data field on the construction site and Q^{−1} represents the inverse of the discrete basis function matrix.

To sum up, in the process of safety supervision on the construction site, the multiquadric radial basis function is used to interpolate the safety monitoring data, establish the data information monitoring model for safety status monitoring, and calculate the physical values of the safety monitoring information nodes, so as to lay the groundwork for realizing safety management on the construction site.

The physical values of the safety monitoring information nodes are then combined with the distance mapping method to obtain the similarity between the safety monitoring information competitive layer neurons and their input mode weight vectors. The dimension of the input vector is reduced and mapped to the two-dimensional plane, so as to effectively complete the safety management of the construction site. The specific computation steps are as follows.

(1) Determine the neuron competition rules of the safety monitoring data information on the construction site and obtain the input mode weights.
Suppose p′ represents the input vector of safety status monitoring on the construction site, w_j represents the initial value given to the weight vector, M represents the number of neurons in the output layer, and Euclidean distance is used as the discrimination function of the competition. Based on the physical quantity values of the construction site safety monitoring information nodes obtained above, the competition rule of the construction site safety monitoring data information nodes is

(23) ‖p′ − w_g‖ = min_j ‖p′ − w_j‖.

Using the above formula, the neuron denoted g is determined as the competition winner. The weight of the input mode of neuron j in the neighborhood of g is updated by

(24) w_j(t + 1) = w_j(t) + η(t)h_ig(t)[p′ − w_j(t)],

where w_j(t + 1) represents the modified weight of neuron j in the neighborhood of g, w_j(t) represents the weight of neuron j in the neighborhood of g, η(t) represents the iterative learning rate at step t, and h_ig(t) represents the topology factor.

(2) Determine the distance between the input vector and the initial value of the weight vector to complete the similarity comparison. M represents the number of neurons, which are fixed on the two-dimensional output plane grid, x_j represents their coordinates, and the distance between p′ and w_j is calculated by

(25) d_j = ‖p′ − w_j‖.

Calculate the similarity between the trained output layer neuron j and p′ by

(26) R_j(p′) = f(d_j)N_g(j) / ∑_{i=1}^{M} f(d_i)N_g(i),

where g represents the best matching unit (BMU) of the construction site safety monitoring vector p′ and N_g(j) represents the neighborhood indicator of the construction site safety monitoring: if neuron j is within the neighborhood of g, the value of N_g(j) is 1; otherwise it is 0.
f(d_j) is a function of the distance d_j.

(3) After obtaining the similarity between all output layer neurons of safety monitoring on the construction site and the input vector p′, the dimension of the input vector p′ is reduced and mapped to the two-dimensional plane by the following formula to complete the safety supervision of the construction site:

(27) x_p = ∑_{j=1}^{M} R_j(p′)x_j.

To sum up, by using the distance mapping method to obtain the similarity between the competitive layer neurons of the construction site safety monitoring information and their input mode weight vectors, and mapping the dimension-reduced input vector to the two-dimensional plane, the construction site safety supervision can be effectively completed.

## 3. Experimental Verification

In order to verify the practical application performance of the proposed Internet of Things-based mountain rainfall estimation and BIM-based site safety management method, comparative verification experiments are carried out.

### 3.1. Experimental Environment and Scheme

To highlight the experimental effect, a mountain construction site in the province is selected as the research area, the rainfall in this area is estimated, and the safety of the construction site is managed. The construction site drawing is shown in Figure 1.

Figure 1 Construction site.

According to the above construction site area, the rainfall estimation accuracy of this method and the calculation accuracy of the site safety factor are verified, and the calculation results of this method are compared with the actual results to fully verify its safety management performance.

### 3.2. Rainfall Estimation Accuracy

Rainfall results are very important for the safety management of a mountain construction site. In the process of safety management, attention should always be paid to changes in rainfall.
Therefore, the impact of rainfall should be considered when verifying the performance of the method. The comparison between the rainfall estimation results of this method and the actual rainfall is shown in Figure 2.

Figure 2 Rainfall estimation accuracy results.

From the comparison of rainfall estimation accuracy shown in Figure 2, it can be seen that, within the one-month construction period, the rainfall estimated by this method is basically the same as the actual rainfall, and the maximum rainfall estimation error is no more than 2 mm. This shows that the proposed method can accurately estimate the rainfall on the construction site from the Internet of Things data, so as to provide a basis for improving the safety management of the construction site.

### 3.3. Safety Factor of Construction Site

The safety factor of the construction site is the basis for judging the safety situation of the construction site. The safety factor is calculated from the rainfall estimation results, and the safety factor calculated by this method is compared with the actual safety factor to verify the performance of the method. The safety factor results of the construction site are shown in Figure 3.

Figure 3 Safety factor of construction site.

From the comparison of safety factors on the construction site shown in Figure 3, it can be seen that this method can accurately calculate the safety factor of the construction site from the rainfall estimation results, and the calculated safety factors are consistent with the actual safety factors. Hence, this method can effectively manage the safety of the construction site.
## 4. Conclusion

In order to improve the safety of mountain construction, it is necessary to strengthen the safety management of the construction site. Thus, a mountain rainfall estimation and BIM-based site safety management method based on the Internet of Things is proposed, and the performance of the method is verified both theoretically and experimentally. The method achieves high accuracy in both rainfall estimation and safety management, and the calculated results of the two indexes are basically consistent with the actual results. Accordingly, the proposed method based on the Internet of Things and BIM technology can better meet the needs of rainfall estimation and safety management.

### 4.1. Future Scope

This work can be extended further at a larger scale. The method based on the Internet of Things can meet the demands of different construction sites. It can also be applied to plain areas in addition to hilly areas.
--- *Source: 1017200-2021-10-07.xml*
2021
# Two-Step Frequency Estimation of GNSS Signal in High-Dynamic Environment

**Authors:** Chao Wu; Yafeng Li; Liyan Luo; Jian Xie; Ling Wang
**Journal:** International Journal of Antennas and Propagation (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1017206

---

## Abstract

To improve the frequency accuracy, which is affected by two parameters in high-dynamic acquisition, we propose a two-step frequency estimation method based on the mean frequency (MF) model for high-dynamic parameter estimation. The first step is based on the discrete chirp-Fourier transform (DCFT) for coarse MF estimation, where the MF accuracy and frequency search step are derived. In the second step, a maximum likelihood estimation process (MLEP) is adopted for fine MF estimation. Compared with state-of-the-art methods, it is verified that the two-step method can improve the detection probability in coarse MF estimation and improve the MF accuracy with a low computational burden under conditions with a moderate signal-to-noise ratio (SNR).

---

## Body

## 1. Introduction

In global navigation satellite system (GNSS) receiver techniques, acquisition is the most important process for estimating the code phase and carrier frequency [1]. For fast acquisition with lower computational complexity, methods [2–4] based on the fast Fourier transform (FFT) were proposed. Since the pull-in range of the tracking loop is only a few hertz, the number of FFT points should be increased [5]. Generally, coarse-to-fine acquisition methods are used to reduce the computational cost [6]. The code phase and coarse carrier frequency parameters can be obtained from the coarse acquisition, and the carrier frequency can be refined in a specific fine acquisition process with the code stripped off.

For low-dynamic acquisition, the carrier Doppler can be estimated in the fine acquisition process. Tang et al. [7] proposed an accurate estimation method for the residual Doppler.
However, this method restricts the initial Doppler search step, and more computation is required to obtain an accurate Doppler. To reduce the computational load, a method [8] was proposed based on the coarse Doppler and sampling frequency at moderate SNR. However, improving the Doppler accuracy at low SNR typically requires a long correlation process, which is computationally expensive. To reduce the computations with long integration, Mohamed and Aboelmagd [5] proposed the Schmidt method, which utilizes orthogonal searching. To further reduce the computations, article [9] proposed zero-forcing and a double FFT-based method to improve the Doppler frequency accuracy without increasing the computational load. However, because of the trade-off between the Doppler frequency resolution and the computational complexity, the maximum error of the carrier frequency estimation depends on the number of FFT points. To improve the Doppler frequency accuracy, Nguyen et al. [10] proposed a residual frequency estimation method with differential processing. Due to the differential processing, it does not perform well at low SNR.

Above all, the articles listed only focus on Doppler frequency accuracy. However, both the initial frequency and the chirping rate [11] affect the correlation peak in high-dynamic applications. Moreover, with a long integration time, the influence of these two parameters cannot be ignored. Among methods for initial frequency and chirping rate estimation, the authors of [12, 13] proposed frequency estimation methods based on the fractional Fourier transform (FRFT) for high-dynamic applications. In addition, we proposed a frequency estimation method based on the discrete chirp-Fourier transform (DCFT) [14]. However, in some high-dynamic applications, a more accurate frequency is usually desired.

To further improve the frequency accuracy for high-dynamic applications, a two-step frequency parameter estimation method is proposed in this paper.
An MF model has been derived to improve the frequency accuracy. In the first step, for coarse MF estimation, the chirping rate and initial frequency used in the MF estimation are estimated based on the DCFT. A maximum likelihood estimation process (MLEP) is proposed for the fine MF estimation in the second step. With the two-step processing, the computational burden can be reduced when the peak value is smaller than the configured threshold, and high frequency accuracy can be obtained when the signal is present. Simulation results show that, for coarse MF estimation, the proposed method has a higher detection probability than conventional methods, and, for fine MF estimation, the two-step method has higher frequency accuracy and a lower computational burden than the compared methods.

## 2. Signal Model

After correlating with a one-period local code and a coarse Doppler bin [11], the postcorrelation signal can be obtained as

(1) S(n) = A b(n) exp[j2π(f_0 nT_s + μ n²T_s²)] + W(n),

where f_0 represents the residual Doppler frequency or initial frequency, μ represents the chirping rate or Doppler rate, T_s represents the sampling interval, b(n) represents the bit sign, A represents the signal amplitude, and W(n) denotes a zero-mean additive white Gaussian noise (AWGN) process. When the received signal is not aligned with the local code or a wrong Doppler bin is detected, the signal amplitude A ≈ 0, which is called the signal-absent situation in the following analysis. Otherwise, A ≠ 0 is assumed to be a constant [11]. For the GPS L1 CA signal with a 1-ms code period, it is typical that f_0 ∈ (−250, 250) Hz and μ ∈ (−500, 500) Hz/s. It is assumed that b(n) can be obtained by some auxiliary means [13], and b(n) equals 1 in the following analysis.

## 3. Proposed Method

In this section, a process based on the DCFT is proposed for coarse MF estimation. Then, the MLEP is adopted for fine MF estimation. Finally, the two-step method that combines the two processes is proposed.
In this two-step method, signal detection and MF accuracy improvement are realized through the first and second steps, respectively.

### 3.1. Coarse Search of MF Based on DCFT

The T transform of the postcorrelation signal can be written as

(2) S_T(k_T, α_T) = A_T ∑_{n=0}^{N−1} b(n) exp[j2π(Δ_T nT_s + δ_T n²T_s²)] + w,

where k_T = 0, ±1, ±2, … represents the searching range and α_T represents the transform factor. w = w_i + jw_r, where w_i and w_r both obey the normal distribution N(0, σ_w²). When T = f, the transform is the FRFT [15], with A_f = A exp[−jπ sgn(sin α_f)/4 + jα_f/2]/|sin α_f|^{1/2} · exp[j(cot α_f/2)(kF)²], F = (2π/NT_s) csc α_f, Δ_f = f_0 − k_f/NT_s, and δ_f = μ + cot α_f/4π. When T = d, the transform is the DCFT, with A_d = A, Δ_d = f_0 − k_d/NT_s, and δ_d = μ − α_d/N²T_s², where α_d represents the chirping rate factor. Based on (2), the correlation peak of the FRFT varies nonlinearly with α_f, which may degrade the detection peak. Therefore, the DCFT is chosen in the following discussion.

When the bit signs can be obtained by assisted means, the formula above can be simplified to

(3) S_d(k_d, α_d) ≈ A_d ∑_{n=0}^{N−1} exp(jω̄nT_s) + w = A_d N sinc(ω̄NT_s/2) exp[jω̄(N−1)T_s/2] + w,

where N represents the number of coherently integrated samples and ω̄ = 2πΔ_T + 2πδ_T(N−1)T_s/2 represents the MF over the interval from 0 to (N−1)T_s. It is assumed that A_0 = A_d N. Based on the derivations above and a Taylor expansion, the peak magnitude |A_d ∑_{n=0}^{N−1} exp(jω̄nT_s)| can be approximated as

(4) |A_d ∑_{n=0}^{N−1} exp(jω̄nT_s)| = A_0 |sinc(ω̄NT_s/2)| ≈ A_0[1 − (1/6)(ω̄NT_s/2)²].

It is assumed that the unit (k_d0, α_d0) corresponds to the peak and that |A_d ∑_{n=0}^{N−1} exp(jω̄nT_s)| ≥ γA_0, where γ is set based on the criterion that one search bin contains most of the useful energy [16]. Then, we can obtain

(5) ω̄ ≤ 2√(6(1 − γ))/(NT_s).

Based on the equations above, the search step of the MF in the first step is set to 2√(6(1 − γ))/(NT_s). Due to the coupling between the initial frequency and the chirping rate, the search step of the initial frequency needs to be configured first.
Based on the influence of the residual Doppler frequency on the correlation peak value of the postcorrelation signal in low-dynamic applications [4], the search step of the initial frequency is set to 1/(2NT_s). Moreover, based on the relationship between the initial frequency search step and the MF search step in the first step, the search step of the chirping rate can be obtained.

Above all, in the coarse search, coarse estimates of both the initial frequency and the chirping rate can be obtained, and from them the coarse MF estimate is obtained.

### 3.2. MLEP for Fine MF Estimation

In the MLEP, the signal amplitude is first estimated based on ML; then, owing to the limited range of the fine MF, restricted search criteria for the fine MF estimation are adopted. Finally, combining the criteria and the ML function, the fine MF is estimated based on the estimated signal amplitude.

Based on (3), the observed peak can be written as

(6) S_d(A_0, ω̄_0) = A_0 sinc(ω̄_0NT_s/2) exp[jω̄_0(N−1)T_s/2] + w,

where ω̄_0 represents the residual MF, which ranges from −√(6(1 − γ))/(NT_s) to √(6(1 − γ))/(NT_s) based on the analysis in Section 3.1. The joint probability density function of (w_i, w_r) can be written as

(7) f(w_i, w_r) = 1/(2πσ_w²) · exp[−(w_i² + w_r²)/(2σ_w²)].

Then, based on ML estimation, the optimized objective function can be obtained as

(8) min J(A_0, ω̄_0) = min[−ln f(w_i, w_r)] ≈ min (w_i² + w_r²)/(2σ_w²),

where min J(A_0, ω̄_0) represents the minimum of the objective function J(A_0, ω̄_0). Setting ∂J(A_0, ω̄_0)/∂A_0 = 0, we can obtain

(9) A_0c = [Re(S_d) cos(ω̄_0(N−1)T_s/2) − Im(S_d) sin(ω̄_0(N−1)T_s/2)] / [sinc(ω̄_0NT_s/2) cos(ω̄_0(N−1)T_s)],

where A_0c represents the optimized value of A_0 based on ML. Then, substituting (9) into (6), we can obtain

(10) S_d(A_0c, ω̄_0) = {[Re(S_d) cos(ω̄_0(N−1)T_s/2) − Im(S_d) sin(ω̄_0(N−1)T_s/2)] / cos(ω̄_0(N−1)T_s)} · exp[jω̄_0(N−1)T_s/2] + w,

where S_d(A_0c, ω̄_0) represents the correlation peak in the presence of noise. However, when cos(ω̄_0(N−1)T_s) = 0, the points ω̄_0 = π(2k + 1)/[2(N−1)T_s] are singular points of (10), where k = 0, ±1, ±2, … represents the segmentation variable and L_k represents the length of segment k.
Consequently, the optimization is segmented at the singular points. When π(2k + 1)/[2(N−1)T_s] < ω̄_0 < π(2k + 3)/[2(N−1)T_s], the segmented optimization is based on the objective function

(11) J(A_0c, ω̄_0 + Δω) = [w_i(ω̄_0 + Δω)² + w_r(ω̄_0 + Δω)²]/(2σ_w²) = {[w_i(ω̄_0) + J_iΔω]² + [w_r(ω̄_0) + J_rΔω]²}/(2σ_w²),

where J represents the objective function. Setting ∂J(A_0c, ω̄_0 + Δω)/∂Δω = 0, Δω can be obtained as

(12) Δω = −[J_i^T w_i(ω̄_0) + J_r^T w_r(ω̄_0)] / (J_i^T J_i + J_r^T J_r),

where J_i = ∂w_i(ω̄_0)/∂ω̄_0 and J_r = ∂w_r(ω̄_0)/∂ω̄_0. Several simulations show that the number of iterations I_t can be chosen as 10. The local optimal solution ω̄_0,k for segmentation variable k can be obtained based on the range of ω̄_0. The restricted search criterion for choosing ω̄_0,k is given as

(13) k_0 = min_k J(A_0c, ω̄_0,k), subject to |sinc(ω̄_0,k NT_s/2)| > 0.1 and A_0c|_{ω̄_0,k} > 0,

where ω̄_0,k_0 is the frequency parameter estimated in the second step. Above all, the MF can be obtained from the fine MF estimation of the second step.

### 3.3. Two-Step Frequency Estimation Method

The two-step frequency parameter estimation method is shown in Figure 1. The method can be described in more detail as follows.

(a) Calculate the search steps of the initial frequency and the chirping rate based on (5).

(b) In the coarse search, the estimated values (α̃_0, f̃_0) can be obtained based on the threshold γ.

(c) In the fine search, an iterative approach based on ML is adopted. With the frequency error Δω̄_0 initialized, the amplitude can be obtained by evaluating the amplitude function (9). Moreover, based on (Δ_T, δ_T), the range of ω̄_0 can be obtained. Based on the amplitude and the mean frequency error, the values of the differential functions J_i and J_r can be obtained as

(14) J_i = Ts·[TT1·Im(S_d)·Tc + TT1·Re(S_d)·Ts]/T2c − TT1·Tc·[Re(S_d)·Tc − Im(S_d)·Ts]/T2c − 2·TT1·Ts·T2s·[Re(S_d)·Tc − Im(S_d)·Ts]/T2c²,
J_r = Tc·[TT1·Im(S_d)·Tc + TT1·Re(S_d)·Ts]/T2c + TT1·Ts·[Re(S_d)·Tc − Im(S_d)·Ts]/T2c − 2·TT1·Tc·T2s·[Re(S_d)·Tc − Im(S_d)·Ts]/T2c²,

where Tc = cos(TT1), Ts = sin(TT1), T2c = cos(2TT1), T2s = sin(2TT1), and TT1 = (N−1)T_sω̄_0/2.
The peak error functions w_i(ω̄_0) and w_r(ω̄_0) can be written as

(15) w_i(ω̄_0) = Im[S_d(k_0, α_0)] − {[Re(S_d) cos(ω̄_0(N−1)T_s/2) − Im(S_d) sin(ω̄_0(N−1)T_s/2)] / cos(ω̄_0(N−1)T_s)} sin(ω̄_0(N−1)T_s/2),
w_r(ω̄_0) = Re[S_d(k_0, α_0)] − {[Re(S_d) cos(ω̄_0(N−1)T_s/2) − Im(S_d) sin(ω̄_0(N−1)T_s/2)] / cos(ω̄_0(N−1)T_s)} cos(ω̄_0(N−1)T_s/2),

where the unit (k_0, α_0) corresponds to (α̃_0, f̃_0). Based on the differential functions, the peak error functions, and (11), the feedback error Δω can be obtained. The feedback function in Figure 1 can be written as

(16) ω̄_0 = ω̄_0 − Δω.

(d) Based on step (c), ω̄_0,k can be obtained after 10 iterations. Then, based on (13), the final MF can be estimated. Above all, the frequency accuracy can be improved.

Figure 1 The two-step method's flow diagram combining DCFT for coarse frequency estimation and MLEP for fine frequency estimation.
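To make the two-step structure concrete, the following is a minimal, self-contained numerical sketch rather than the authors' implementation: the coarse step is a brute-force grid search over initial frequency and chirping rate, standing in for the DCFT peak search, and the fine step is a simple dechirp followed by a fine frequency scan, used here as a simplified stand-in for the MLEP iteration of (11)-(13). All signal parameters, grid ranges, and step sizes are illustrative.

```python
import numpy as np

# Illustrative postcorrelation parameters (not the paper's experimental setup)
Ts = 1e-3                      # sampling interval of the postcorrelation sequence (s)
N = 500                        # number of coherently integrated samples
n = np.arange(N)
f0_true, mu_true = 17.3, 92.0  # initial frequency (Hz) and chirping rate (Hz/s)
rng = np.random.default_rng(1)
s = np.exp(2j * np.pi * (f0_true * n * Ts + mu_true * n**2 * Ts**2)) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def corr_mag(f, a):
    """Correlation magnitude against a chirp hypothesis (f in Hz, a in Hz/s)."""
    return abs(np.sum(s * np.exp(-2j * np.pi * (f * n * Ts + a * n**2 * Ts**2))))

# Step 1: coarse grid search over (initial frequency, chirping rate)
f_grid = np.arange(-50.0, 51.0, 1.0)     # 1 Hz frequency bins (illustrative)
a_grid = np.arange(-200.0, 201.0, 4.0)   # 4 Hz/s chirping-rate bins (illustrative)
f_c, a_c = max(((f, a) for f in f_grid for a in a_grid), key=lambda p: corr_mag(*p))
mf_coarse = 2 * np.pi * (f_c + a_c * (N - 1) * Ts / 2)   # coarse MF (rad/s)

# Step 2 (simplified stand-in for the MLEP): dechirp with the coarse chirping
# rate, then refine the residual frequency on a fine grid around the coarse bin.
r = s * np.exp(-2j * np.pi * a_c * n**2 * Ts**2)
fine = np.arange(f_c - 1.0, f_c + 1.0, 0.01)
f_f = fine[np.argmax([abs(np.sum(r * np.exp(-2j * np.pi * f * n * Ts))) for f in fine])]
mf_fine = 2 * np.pi * (f_f + a_c * (N - 1) * Ts / 2)     # refined MF (rad/s)

mf_true = 2 * np.pi * (f0_true + mu_true * (N - 1) * Ts / 2)
```

The mean frequency is reassembled as ω̄ = 2π[f̂ + â(N−1)T_s/2], mirroring the MF definition after (3); the fine stage reduces the coarse residual to roughly the fine-grid resolution, whereas a practical implementation would choose the coarse steps from (5) and refine with the iterative ML update of (12) and (16).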
## 4. Algorithm Performance In this section, the Cramér–Rao bound of the estimated MF of the proposed method is derived. Then, the computational burdens and detection probability of the proposed method are analyzed for performance evaluation. ### 4.1. Cramér–Rao Bound (CRB) of the Proposed Method Based on the theory in [17], the CRB of ω¯0 can be written as follows:

$$CR(\bar{\omega}_0)=\frac{1}{E\left[\partial^{2}\left((w_i^{2}+w_r^{2})/2\sigma_w^{2}\right)/\partial\bar{\omega}_0^{2}\right]}=\frac{2\sigma_w^{2}}{E\left[\partial^{2}(w_i^{2}+w_r^{2})/\partial\bar{\omega}_0^{2}\right]}\tag{17}$$

where wi=ImSd−A0sincω¯0NTs/2sinω¯0N−1/2Ts and wr=ReSd−A0sincω¯0NTs/2cosω¯0N−1/2Ts. When sincω¯0NTs/2≈1,

$$CR(\bar{\omega}_0)\approx\frac{2\sigma_w^{2}}{\left((N-1)/2\right)^{2}T_s^{2}A_0^{2}}\tag{18}$$

which is the final CRB of ω¯0.
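The bound (18) is easy to evaluate numerically; the sketch below uses a hypothetical helper name and the text's approximation A0 ≈ N to confirm that the bound falls as the integration time grows:

```python
def crb_mf(sigma_w2, n_samp, ts):
    """CRB of the mean frequency from (18): 2*sigma_w^2 / (((N-1)/2)^2 * Ts^2 * A0^2),
    with A0 approximated by N as assumed in the text."""
    a0 = float(n_samp)
    return 2.0 * sigma_w2 / ((((n_samp - 1) / 2.0) ** 2) * (ts ** 2) * (a0 ** 2))
```

With Ts = 0.001 s, doubling the number of samples N (i.e., the integration time) lowers the bound by roughly a factor of 16, since both (N−1)² and A0² grow.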
When A0≈N, Ts=0.001 s, and the integration time NTs is 200 ms, 400 ms, and 600 ms, the CRB of the MF is shown in Figure 2.Figure 2 CRB of ω¯0 under different integration times.In Figure2, CRB curves are shown for the three integration times. Under the same SNR, the longer the integration time is, the higher the frequency estimation accuracy is. This is because a longer integration time yields a larger correlation peak amplitude A0, which lowers the bound in (18). ### 4.2. Computations Analysis Based on (2) and (10), the computations of the proposed method can be obtained. Here, the BASIC method [11] is chosen as the benchmark for coarse MF estimation. This method estimates the frequency parameters based on the differential signal as follows:(19)Sdn=S∗nSn+M0=A2bnbn+M0expj2π2nμM0Ts2+f0M0Ts+μM02Ts2=A2bnbn+M0expj2π2nμM0Ts2+θ0,where θ0=f0M0Ts+μM02Ts2. Based on the Fourier transform, the frequency parameters can be estimated.In Table1, the methods are compared for coarse and fine MF estimation. Tα represents the number of chirping rate search bins and Tf represents the number of initial frequency search bins. CP,M and CP,A can be calculated as follows.

Table 1 Computational burdens comparison.

| Coarse MF estimation | Complex multiplications | Complex additions |
| --- | --- | --- |
| BASIC | Tα+Tα/2log2Tα+Tf+Tf/2log2Tf | Tαlog2Tα+Tflog2Tf |
| Proposed method (1st step) | TαTf/2log2Tf | TαTflog2Tf |

| Fine MF estimation | Complex multiplications | Complex additions |
| --- | --- | --- |
| Proposed method (2nd step) | N+CP,M | CP,A |
| Schmidt method (Table 2) [5] | 2N+1M+1+1+2M!+M−1!+N+1∑m=1Mm!/2 | 0.5N−1M+2+MN+1+M−1!+M−2!+N∑m=1Mm! |

Table 2 Computational burdens for the Schmidt method algorithm [5].

| Step | Multiplications | Additions |
| --- | --- | --- |
| R1,1=P12 | N | N−1 |
| Rk,1=PkP1¯, k=1,...,M | 1+MN | MN−1 |
| C1=yP1¯ | N+1 | N−1 |
| for k=1,…,M, for i=1,…,k: if k>i, Rk,i=PkPi¯−∑j=1i−1αjiRk,j; if k=i, Rk,k=PkPk¯−∑j=1k−1αjk2Rj,j | ∑m=1Mm!N+1 | ∑m=1Mm!N |
| for k=1,…,M: Ck=yPk¯−∑j=1k−1αjkCj | MN+M−1! | MN+M−2! |
| gk=Ck/Rk,k | M | 0 |
| θi=gi−∑k=i+1mRk,i/Ri,iθk, i=1,…,M | 2M! | M−1!+M |

M represents the number of vectors, and N represents the length of the vector.
gk represents the coefficients of the Schmidt orthogonalization.First, A0c is calculated from (9). Calculating (9) involves 12 multiplications and 1 addition. Moreover, the values of the cos() and sin() functions can be realized by a look-up table, and their computations can be ignored. Then, calculating the differential functions costs 18∗2 multiplications and 2∗2 additions. Calculating the peak error functions costs 5∗2 multiplications and 1∗2 additions. Hereafter, (12) costs 5 multiplications and 2 additions. Calculating the feedback function costs 1 addition. In addition, calculating (13) costs 19 multiplications and 3 additions.Above all, one complex multiplication equals two multiplications [4]. So, CP,M=Lk51It+19/2 and CP,A=Lk9It+3/2, where It represents the number of iterations and Lk represents the number of segmentations. Besides, Table 1 shows that the frequency accuracy of the Schmidt method also depends on the number of vectors.Since the computation comparison requires many simulation parameters to be set, the simulation is conducted in Section5. ### 4.3. Detection Performance Since the signal is detected in the first step of the proposed method, this section discusses the detection probability of that step. The theoretical simulation will be conducted in Section5.1.The detection variable Sdk0,α02 obeys a chi-square distribution. When the signal is absent or a wrong frequency bin is searched, Jd=Sdk0,α02 obeys the central chi-square distribution with variance σw2, and the probability density function can be written as follows:

$$p_0(J_d)=\frac{1}{2\sigma_w^{2}}\exp\left(-\frac{J_d}{2\sigma_w^{2}}\right)\tag{20}$$

where p0 represents the probability density function when a wrong bin is searched.
When the right frequency unit is detected, Jd=Sdk0,α02 obeys the noncentral chi-square distribution with variance σw2, and the probability density function can be written as follows:

$$p_1(J_d)=\frac{1}{2\sigma_w^{2}}\exp\left(-\frac{J_d+a^{2}}{2\sigma_w^{2}}\right)I_0\left(\frac{a\sqrt{J_d}}{\sigma_w^{2}}\right)\tag{21}$$

where p1 represents the probability density function when the right frequency bin is detected. Under an incorrect frequency hypothesis H0, the false alarm probability Pfa can be written as

$$P_{fa}=P\left(y\ge\gamma_D\mid H_0\right)=1-\left(\int_0^{\gamma_D}p_0(y)\,dy\right)^{T_fT_\alpha-1}\approx1-\left(1-\exp\left(-\frac{\gamma_D}{2\sigma_w^{2}}\right)\right)^{T_fT_\alpha-1}\tag{22}$$

where γD represents the detection threshold. Under the correct frequency hypothesis H1, the detection probability PD can be written as follows:

$$P_D=P\left(x\ge\gamma_D\mid H_1\right)=\int_{\gamma_D}^{+\infty}p_1(x)\left(\int_0^{x}p_0(y)\,dy\right)^{T_fT_\alpha-1}dx\tag{23}$$

where Tf represents the number of initial frequency search bins in the first step of the proposed method and Tα represents the number of chirping rate search bins. When the configured Pfa is small, the inner integral term is approximately 1, and the detection probability PD can be simplified into

$$P_D=P\left(J_d\ge\gamma_D\mid H_1\right)\approx Q\left(\frac{a}{\sigma_w},\frac{\sqrt{\gamma_D}}{\sigma_w}\right)\tag{24}$$

where Q·,· is the Marcum Q-function and a equals Sdk0,α0 in the absence of noise. Based on the definition of the miss detection probability [18], the miss probability of the proposed method can be written as

$$P_M=\int_0^{\gamma_D}p_1(x)\left(\int_0^{\gamma_D}p_0(y)\,dy\right)^{T_fT_\alpha-1}dx\tag{25}$$

When the set Pfa is small, the inner integral term is approximately 1, and PD+PM=1. Above all, the detection probability of the coarse MF estimation of the proposed method can be obtained. When the signal is detected, the second step of the proposed method for fine MF estimation can be performed.
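The threshold setting implied by (20)–(22) and the approximation (24) can be sketched as follows; this assumes SciPy is available, uses our own helper names, and exploits the fact that Jd/σw² follows a (noncentral) chi-square law with 2 degrees of freedom:

```python
import numpy as np
from scipy.stats import ncx2

def detection_threshold(pfa_total, sigma_w2, n_bins):
    """Invert (22): spread the total false alarm probability over the
    n_bins - 1 wrong bins, then solve exp(-gamma_D / (2*sigma_w^2)) = pfa_bin
    for the per-bin exponential density (20)."""
    pfa_bin = 1.0 - (1.0 - pfa_total) ** (1.0 / (n_bins - 1))
    return -2.0 * sigma_w2 * np.log(pfa_bin)

def detection_probability(a, sigma_w2, gamma_d):
    """Approximation (24): Jd / sigma_w^2 is noncentral chi-square with 2 degrees
    of freedom and noncentrality a^2 / sigma_w^2, so PD = P(Jd >= gamma_D) is its
    survival function (equivalently Q(a/sigma_w, sqrt(gamma_D)/sigma_w))."""
    return ncx2.sf(gamma_d / sigma_w2, df=2, nc=a ** 2 / sigma_w2)
```

For example, with σw² = 1, Tf·Tα = 1000 bins, and Pfa = 2×10⁻¹⁰, the threshold lands near γD ≈ 58σw², and PD rises sharply with the noise-free peak amplitude a.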
## 5. Simulation Results In this section, BASIC [11] and FRFT [13] are chosen as the benchmarks for the 1st step of the proposed method, and the Schmidt method [5] is chosen as the benchmark for the 2nd step. The simulation parameters are listed in Table 3, where ⌈η⌉ represents the smallest integer that is larger than η.Table 3 Simulation parameters.
| Parameter | Value |
| --- | --- |
| Initial frequency range | (−250, 250) Hz |
| Chirping rate range | (−500, 500) Hz/s |
| Initial frequency search step Δd | 1/2T0=1/2NTs |
| Sampling time Ts | 0.001 s |
| Factor γ | 0.5 |
| Number of iterations It | 10 |
| Number of segmentations Lk | ⌈4N−1Tsω¯max/π⌉ |
| Monte Carlo runs | 5000 |
| Max mean frequency ω¯max | 2πΔd+2πδdN−1/2Ts |
| False alarm probability Pfa | 2×10−10 |

### 5.1. Coarse Frequency Detection Performance Comparison Although the complex multiplications of the proposed method for coarse MF estimation are larger than those of BASIC in Figure3, when the SNR of the postcorrelation signal is larger than −10 dB, the detection probability of the proposed method is almost 100%, which is higher than those of FRFT and BASIC in Figure 4. This is because FRFT introduces an additional search factor αf and BASIC adopts a differential process, both of which may degrade the correlation peak and lead to a lower detection probability.Figure 3 Complex multiplications comparison for coarse MF estimation methods.Figure 4 Detection probabilities comparison when the integration time is 200 ms, the chirping rate is 200 Hz/s, and the initial frequency is 100 Hz. ### 5.2. Fine MF Accuracy and Complexity Comparison Based on the simulation above, the DCFT method, the two-step method, and the Schmidt method are adopted for the fine MF search. In Figure5, the complex multiplications of the Schmidt method vary greatly with the MF search step and the postcorrelation signal length.Figure 5 Complex multiplications comparison for fine MF estimation methods.In Figure6, even though an MF search step of 1 rad/s is adopted, the two-step method based on MLEP achieves higher precision than the MF search based on the Schmidt method.Figure 6 MF square error comparison. The SNR of the postcorrelation signal is 5 dB. The MF search step is 1 rad/s. The integration time is 200 ms in coarse MF estimation.
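The derived entries of Table 3 can be reproduced from the other parameters; the helper below is illustrative, and the residual bounds passed in for Δd and δd are example values, not the paper's:

```python
import math

def simulation_params(n_samp, ts, delta_freq, delta_rate):
    """Derived Table 3 quantities: the max mean frequency
    w_max = 2*pi*Delta_d + 2*pi*delta_d*(N-1)*Ts/2 and the number of
    segmentations Lk = ceil(4*(N-1)*Ts*w_max/pi)."""
    w_max = 2.0 * math.pi * delta_freq + 2.0 * math.pi * delta_rate * (n_samp - 1) * ts / 2.0
    lk = math.ceil(4.0 * (n_samp - 1) * ts * w_max / math.pi)
    return w_max, lk
```

For instance, with N = 200 samples at Ts = 0.001 s and example residual bounds of 2.5 Hz and 250 Hz/s, this yields ω̄max ≈ 172 rad/s and Lk = 44.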
## 6. Conclusion To improve frequency accuracy in high-dynamic acquisition, we propose a two-step frequency estimation method. The proposed method combines coarse frequency estimation based on DCFT with MLEP-based fine frequency estimation. In the 1st step, the search steps of the initial frequency and chirping rate are configured based on a Taylor expansion, and the coarse MF is obtained. In the 2nd step, owing to the low frequency error, the fine MF is estimated by MLEP.
Although DCFT costs more computation than BASIC in Figure3, it improves the detection probability in Figure 4. Moreover, the proposed MLEP obtains higher mean frequency accuracy and fewer complex multiplications than the conventional Schmidt method. Furthermore, in practice, the proposed two-step method can provide a theoretical basis for open-loop frequency tracking. ---
# Two-Step Frequency Estimation of GNSS Signal in High-Dynamic Environment

**Authors:** Chao Wu; Yafeng Li; Liyan Luo; Jian Xie; Ling Wang

**Journal:** International Journal of Antennas and Propagation (2022)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1017206
--- ## Abstract To improve frequency accuracy, which is affected by two parameters in high-dynamic acquisition, we propose a two-step frequency estimation method based on the mean frequency (MF) model for high-dynamic parameter estimation. The first step is based on the discrete chirp-Fourier transform (DCFT) for coarse MF estimation, where the MF accuracy and frequency search step are derived. In the second step, a maximum likelihood estimation process (MLEP) is adopted for fine MF estimation. Compared with state-of-the-art methods, it is verified that the two-step method can improve the detection probability in coarse MF estimation and improve the MF accuracy with low computational burdens under conditions with a moderate signal-to-noise ratio (SNR). --- ## Body ## 1. Introduction In global navigation satellite system (GNSS) receiver techniques, acquisition is the most important process for estimating the code phase and carrier frequency [1]. For fast acquisition with lower computational complexity, methods [2–4] based on the fast Fourier transform (FFT) were proposed. Since the pull-in range of the tracking loop is only a few hertz, the number of FFT points should be increased [5]. Generally, coarse-to-fine acquisition methods are used to reduce the computational costs [6]. The code phase and coarse carrier frequency parameters can be obtained from the coarse acquisition, and the carrier frequency can be refined in a specific fine acquisition process with the code stripped off.For low-dynamic acquisition, the carrier Doppler can be estimated in the fine acquisition process. Tang et al. [7] proposed an accurate estimation method for the residual Doppler. However, this method has a restriction on the initial Doppler search step, and more computation is required to obtain an accurate Doppler. To reduce the computational load, a method [8] was proposed based on the coarse Doppler and sampling frequency at moderate SNR.
However, to improve the Doppler accuracy at low SNR, a long-time correlation process is typically needed, which costs a lot of computation. To reduce the computations with long integration, Mohamed and Aboelmagd [5] proposed the Schmidt method, which utilizes orthogonal searching. To further reduce the computations, the authors of [9] proposed a zero-forcing and double-FFT-based method to improve the Doppler frequency accuracy without increasing the computational load. However, because of the trade-off between the Doppler frequency resolution and the computational complexity, the maximum error of the carrier frequency estimation depends on the number of FFT points. To improve the Doppler frequency accuracy, Nguyen et al. [10] proposed a residual frequency estimation method with differential processing. Due to the differential processing, it does not perform well at low SNR.Above all, the articles listed only focus on the Doppler frequency accuracy. However, both the initial frequency and the chirping rate [11] affect the correlation peak in high-dynamic applications. Moreover, with a long integration time, the influence of these two parameters cannot be ignored. Among methods for initial frequency and chirping rate estimation, the authors of [12, 13] proposed frequency estimation methods based on the fractional Fourier transform (FRFT) for high-dynamic applications. In addition, we proposed a frequency estimation method based on the discrete chirp-Fourier transform (DCFT) [14]. However, in some high-dynamic applications, a more accurate frequency estimate is usually desired.To further improve frequency accuracy for high-dynamic applications, a two-step frequency parameter estimation method is proposed in this paper. An MF model has been derived to improve the frequency accuracy. In the first step, for coarse MF estimation, the chirping rate and initial frequency for MF estimation are estimated based on the DCFT.
A maximum likelihood estimation process (MLEP) is adopted for fine MF estimation in the 2nd step. With the two-step processing, the computational burdens can be reduced when the peak value is smaller than the configured threshold, and high frequency accuracy can be obtained when the signal is present. Simulation results show that, for coarse MF estimation, the proposed method has a higher detection probability compared with conventional methods, and, for fine MF estimation, the two-step method has a higher frequency accuracy and lower computational burdens than the compared methods. ## 2. Signal Model After correlating with a one-period local code and a coarse Doppler bin [11], the postcorrelation signal can be obtained and depicted as follows:(1)Sn=Abnexpj2πf0nTs+μn2Ts2+Wn,where f0 represents the residual Doppler frequency or initial frequency, μ represents the chirping rate or Doppler rate, Ts represents the sampling interval, bn represents the bit sign, A represents the signal amplitude, and Wn denotes a zero-mean additive white Gaussian noise (AWGN) process. When the received signal is not aligned with the local code or a wrong Doppler bin is detected, the signal amplitude A≈0, which is called the signal-absent situation in the following analysis. Otherwise, A≠0 is assumed to be a constant [11]. For the GPS L1 CA signal with a 1-ms code period, it is typical that f0  = (−250, 250) Hz and μ  = (−500, 500) Hz/s. It is assumed that bn can be obtained by some auxiliary means [13], and bn equals 1 in the following analysis. ## 3. Proposed Method In this section, a process based on DCFT is proposed for coarse MF estimation. Then, MLEP is adopted for fine MF estimation. Finally, the two-step method which combines the two processes is proposed. In this two-step method, signal detection and MF accuracy improvement are realized through the first and second steps, respectively. ### 3.1.
Coarse Search of MF Based on DCFT The T transform of the postcorrelation signal can be written as follows:(2)STkT,αT=AT∑n=0N−1bnexpj2πΔTnTs+δTn2Ts2+w,where kT=0,±1,±2,... represents the searching range and αT represents the transform factor. w=wi+jwr, where wi and wr both obey the normal distribution N0,σw2. When T = f, (2) represents the FRFT [15]. Af=Aexp−jπsgnsinαf/4+jαT/2/sinαf0.5expj1/2cotαfkF2. F=2π/NTscscαf. Δf=f0−kf/NTs. δf=μ+1/4πcotαf. When T = d, (2) represents the DCFT. Ad=A. Δd=f0−kd/NTs. δd=μ−αd/N2Ts2, where αd represents the chirping rate factor. Based on (2), the correlation peak of the FRFT depends nonlinearly on αf, which may degrade the detection peak. Therefore, the DCFT is chosen in the following discussion.When the bit signs can be obtained by assisted means, the formula above can be simplified into(3)Sdkd,αd≈Ad∑n=0N−1expjω¯nTs+w=AdNsincω¯NTs2expjω¯N−12Ts+w,where N represents the integration length and ω¯=2πΔT+2πδTN−1/2Ts represents the MF from 0Ts to N−1Ts. It is assumed that A0=AdN. Based on the derivations above and a Taylor expansion, the peak can be approximated as(4)|Ad∑n=0N−1expjω¯nTs|=A0sincω¯NTs2≈A01−16ω¯NTs22,where |Ad∑n=0N−1expjω¯nTs| represents the amplitude of the signal Ad∑n=0N−1expjω¯nTs. It is assumed that the unit kd0,αd0 corresponds to the peak and |Ad∑n=0N−1expjω¯nTs|≥γA0, where γ is set based on the criterion that one search bin contains most of the useful energy [16]. Then, we can obtain(5)ω¯≤2√(6(1−γ))/NTs.Based on the equations above, the search step of the MF in the first step is set to 2√(6(1−γ))/NTs. Due to the coupling between the initial frequency and the chirping rate, the search step of the initial frequency needs to be configured first. Based on the influence of the residual Doppler frequency on the correlation peak value of the postcorrelation signal in low-dynamic applications [4], the search step of the initial frequency is set to 1/2NTs.
Moreover, based on the relationship between the initial frequency search step and the MF search step in the first step, the search step of the chirping rate can be obtained.Above all, in the coarse search, the coarse estimates of both the initial frequency and the chirping rate can be obtained. Based on them, the coarse MF estimate can be obtained. ### 3.2. MLEP for Fine MF Estimation In the MLEP, firstly, the signal amplitude is estimated based on ML; then, given the fine MF range, restricted search criteria for the fine MF estimation are adopted. Finally, combining the criteria and the ML function, the fine MF is estimated based on the estimated signal amplitude.Based on (3), the observed peak can be written as follows:(6)SdA0,ω¯0=A0sincω¯0NTs2expjω¯0N−12Ts+w,where ω¯0 represents the residual MF, which ranges from −√(6(1−γ))/NTs to √(6(1−γ))/NTs based on the analysis in Section 3.1. The joint probability density function of wi,wr can be written as(7)fwi,wr=12πσw2exp−wi2+wr22σw2.Then, based on ML estimation, the optimized objective function can be obtained as(8)minJA0,ω¯0=min−lnfwi,wr≈minwi2+wr22σw2,where minJA0,ω¯0 represents the minimum of the objective function JA0,ω¯0. Setting ∂JA0,ω¯0/∂A0=0, we can obtain(9)A0c=ReSdcosω¯0N−1/2Ts−ImSdsinω¯0N−1/2Tssincω¯0NTs/2cosω¯0N−1Ts,where A0c represents the optimized value of A0 based on ML. Then, substituting (9) into (6), we can obtain(10)SdA0c,ω¯0=ReSdcosω¯0N−1/2Ts−ImSdsinω¯0N−1/2Tscosω¯0N−1Ts·expjω¯0N−12Ts+w,where SdA0c,ω¯0 represents the correlation peak in the presence of noise. However, when cosω¯0N−1Ts=0, ω¯0=π2k+1/2N−1Ts represents the singular points of (10), where k=0,±1,±2,... is the segmentation variable and Lk represents the number of k values. Consequently, segmentation optimization is performed around the singular points.
When π2k+1/2N−1Ts<ω¯0<π2k+3/2N−1Ts, the segmentation optimization is based on the following objective function:(11)JA0c,ω¯0+Δω=wiω¯0+Δω2+wrω¯0+Δω22σw2=wiω¯0+JiΔω2+wrω¯0+JrΔω22σw2,where J represents the objective function. Setting ∂JA0c,ω¯0+Δω/∂Δω=0, Δω can be obtained as follows:(12)Δω=−JiTwiω¯0−JrTwrω¯0JiTJi+JrTJr,where Ji=∂wiω¯0/∂ω¯0 and Jr=∂wrω¯0/∂ω¯0. Several simulations show that the number of iterations It can be chosen as 10. The local optimal solution ω¯0,k for each segmentation variable k can be obtained based on the range of ω¯0. The restricted search criterion for choosing ω¯0,k is given as follows:(13)k0=minkJA0c,ω¯0,k,sincω¯0,kNTs2>0.1,A0c|ω¯0,k>0,where ω¯0,k0 is the estimated frequency parameter in the 2nd step. Above all, the MF can be obtained from the fine MF estimation of the 2nd step. ### 3.3. Two-Step Frequency Estimation Method The two-step frequency parameter estimation method is shown in Figure1. The method can be depicted in more detail as follows:(a) Calculate the search steps of the initial frequency and chirping rate based on (5).(b) In the coarse search, obtain the estimated value α˜0,f˜0 based on the threshold γ.(c) In the fine search, adopt an iterative approach based on ML:With the frequency error Δω¯0 initialized, the amplitude can be obtained from the amplitude function (9). Moreover, based on ΔT,δT, the range of ω¯0 can be obtained.Based on the amplitude and mean frequency error, the values of the differential functions Ji and Jr can be obtained as follows:(14)Ji=Ts∗TT1∗ImSd∗Tc+TT1∗ReSd∗TsT2c−TT1∗Tc∗ReSd∗Tc−ImSd∗TsT2c−2∗TT1∗Ts∗T2s∗ReSd∗Tc−ImSd∗TsT2c2,Jr=Tc∗TT1∗ImSd∗Tc+TT1∗ReSd∗TsT2c+TT1∗Ts∗ReSd∗Tc−ImSd∗TsT2c−2∗TT1∗Tc∗T2s∗ReSd∗Tc−ImSd∗TsT2c2,where Tc  =  cosTT1, Ts  =  sinTT1, T2c  =  cos2TT1, T2s  =  sin2TT1, and TT1  =  N−1/2Tsω¯0.
The peak error functions wiω¯0 and wrω¯0 can be written as follows:(15)wiω¯0=ImSdk0,α0−ReSdcosω¯0N−1/2Ts−ImSdsinω¯0N−1/2Tscosω¯0N−1Tssinω¯0N−12Ts,wrω¯0=ReSdk0,α0−ReSdcosω¯0N−1/2Ts−ImSdsinω¯0N−1/2Tscosω¯0N−1Tscosω¯0N−12Ts,where the unitk0,α0 is corresponding to α˜0,f˜0.Based on differential functions, peak error function and (11), feedback error Δω can be obtained. The feedback function in Figure 1 can be written as follows:(16)ω¯0=ω¯0−Δω.(d) Based on step (c),ω¯0,k can be obtained after 10 iterations. Then, based on (13), the final MF can be estimated. Above all, the frequency accuracy can be improved.Figure 1 The two-step method’s flow diagram combining DCFT for coarse frequency estimation and MLEP for fine frequency estimation.
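As a minimal sketch (not the authors' implementation), the fine-search loop above can be expressed in Python: the ML amplitude of (9) folded into the reconstructed peak of (10), the peak-error functions of (15), and the iterative correction of (12) applied through the feedback (16). The values of N, Ts, the true MF, and the initial guess are hypothetical, the observed peak is taken noise-free, and a numerical forward difference stands in for the closed-form Jacobian (14).

```python
import math

def peak(A0, w, N, Ts):
    """Noise-free correlation peak, cf. (6): A0*sinc(w*N*Ts/2)*exp(j*w*(N-1)*Ts/2)."""
    x = w * N * Ts / 2
    sinc = math.sin(x) / x if abs(x) > 1e-12 else 1.0
    th = w * (N - 1) / 2 * Ts
    return A0 * sinc * complex(math.cos(th), math.sin(th))

def residuals(Sd, w, N, Ts):
    """Peak-error functions wi, wr at trial MF w, cf. (9), (10), and (15)."""
    th = w * (N - 1) / 2 * Ts
    # reconstructed peak magnitude with the ML amplitude of (9) folded in;
    # note cos(w*(N-1)*Ts) = cos(2*th), whose zeros are the singular points of (10)
    amp = (Sd.real * math.cos(th) - Sd.imag * math.sin(th)) / math.cos(2 * th)
    return Sd.imag - amp * math.sin(th), Sd.real - amp * math.cos(th)

def refine_mf(Sd, w0, N, Ts, iters=10, h=1e-6):
    """Iterative ML refinement of the MF: Gauss-Newton correction, cf. (12) and (16)."""
    w = w0
    for _ in range(iters):
        wi, wr = residuals(Sd, w, N, Ts)
        wi_h, wr_h = residuals(Sd, w + h, N, Ts)
        Ji, Jr = (wi_h - wi) / h, (wr_h - wr) / h   # numerical stand-in for (14)
        g = Ji * Ji + Jr * Jr
        if g < 1e-30:
            break
        w -= (Ji * wi + Jr * wr) / g                # apply the correction of (12)
    return w

# hypothetical setup: 200 ms integration at Ts = 1 ms, true residual MF 5 rad/s,
# coarse-search guess 4 rad/s (within one coarse bin of the truth)
N, Ts = 200, 1e-3
Sd = peak(200.0, 5.0, N, Ts)
w_hat = refine_mf(Sd, 4.0, N, Ts)
```

With these values the loop converges well within the 10 iterations quoted in the text, since both residuals vanish at the true MF.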
## 4. Algorithm Performance In this section, Cramér–Rao bound of the estimated MF of the proposed method is derived. Then, computational burdens and detection probability of the proposed method are analyzed for performance evaluation. ### 4.1. Cramér–Rao Bound (CRB) of the Proposed Method Based on the theory [17], the CRB of ω¯0 can be written as follows:(17)CRω¯0=1E∂2wi2+wr2/2σw2/∂2ω¯0=2σw2E∂2wi2+wr2∂2ω¯0,where wi=ImSd−A0sincω¯0NTs/2sinω¯0N−1/2Ts, and wr=ReSd−A0sincω¯0NTs/2cosω¯0N−1/2Ts. When sincω¯0NTs/2≈1,(18)CRω¯0≈2σw2N−12Ts2A02,where we can obtain the final CRB of ω¯0.
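Reading (18) as CR(ω¯0) ≈ 2σw²/((N−1)²Ts²A0²), the bound is straightforward to evaluate. The sketch below uses a hypothetical noise level and, as in the text, A0 ≈ N; it only illustrates the qualitative trend that a longer integration time lowers the bound.

```python
import math

def crb_mf(sigma_w, A0, N, Ts):
    """CRB of the residual MF, reading (18) as 2*sigma_w^2 / ((N-1)^2 * Ts^2 * A0^2)."""
    return 2 * sigma_w ** 2 / ((N - 1) ** 2 * Ts ** 2 * A0 ** 2)

Ts = 0.001        # 1 ms sampling time, as in the simulations
sigma_w = 10.0    # hypothetical noise standard deviation of the correlation peak
# integration times of 200/400/600 ms correspond to N = 200/400/600 samples
bounds = {n: crb_mf(sigma_w, A0=n, N=n, Ts=Ts) for n in (200, 400, 600)}
rmse_floor = {n: math.sqrt(v) for n, v in bounds.items()}   # rad/s
```

The bound scales as 1/((N−1)²A0²), so doubling the integration time (with A0 ≈ N) reduces the MF error floor by roughly a factor of four in standard deviation.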
When A0≈N, Ts=0.001 s, and the integration time NTs = 200 ms, 400 ms, and 600 ms, the CRB of MF is shown in Figure 2.Figure 2 CRB of ω¯0 under different integration times.In Figure 2, CRB curves are shown with the integration time being 100, 200, and 300 ms, respectively. Under the same SNR, the longer the integration time is, the higher the frequency estimation accuracy is. This is because a longer integration time yields a larger integration peak value, which in turn lowers the bound. ### 4.2. Computations Analysis Based on (2) and (10), the computations of the proposed method can be obtained. Here, BASIC [11] is chosen as the benchmark for the coarse MF estimation. The method estimates the frequency parameters based on the differential signal as follows:(19)Sdn=S∗nSn+M0=A2bnbn+M0expj2π2nμM0Ts2+f0M0Ts+μM02Ts2=A2bnbn+M0expj2π2nμM0Ts2+θ0,where θ0=f0M0Ts+μM02Ts2. Based on the Fourier transform, the frequency parameters can be estimated.In Table 1, methods are chosen for coarse or fine MF estimation. Tα represents the number of chirping rate search bins and Tf represents the number of initial frequency search bins. CP,M and CP,A can be calculated as follows.

Table 1 Computational burdens comparison.

| Coarse MF estimation | Complex multiplications | Complex additions |
| --- | --- | --- |
| BASIC | Tα+Tα/2log2Tα+Tf+Tf/2log2Tf | Tαlog2Tα+Tflog2Tf |
| Proposed method (1st step) | TαTf/2log2Tf | TαTflog2Tf |

| Fine MF estimation | Complex multiplications | Complex additions |
| --- | --- | --- |
| Proposed method (2nd step) | N+CP,M | CP,A |
| Schmidt method (appendix Table 2) [5] | 2N+1M+1+1+2M!+M−1!+N+1∑m=1Mm!/2 | 0.5N−1M+2+MN+1+M−1!+M−2!+N∑m=1Mm! |

Table 2 Computational burdens for the Schmidt method algorithm [5].

| Operation | Multiplications | Additions |
| --- | --- | --- |
| R1,1=P12 | N | N−1 |
| Rk,1=PkP1¯, k=1,...,M | 1+MN | MN−1 |
| C1=yP1¯ | N+1 | N−1 |
| for k=1,…,M, i=1,…,k: (k>i) Rk,i=PkPi¯−∑j=1i−1αjiRk.j; (k=i) Rk,k=PkPk¯−∑j=1k−1αjk2Rj.j | ∑m=1Mm!N+1 | ∑m=1Mm!N |
| for k=1,…,M: Ck=yPk¯−∑j=1k−1αjkCj | MN+M−1! | MN+M−2! |
| gk=Ck/Rk,k | M | 0 |
| θi=gi−∑k=i+1mRk,i/Ri,iθk, i=1,…,M | 2M! | M−1!+M |

M represents the number of vectors, and N represents the length of the vector.
gk represents coefficients of Schmidt orthogonalization.Firstly, the amplitude A0c is calculated from (9). Calculating (9) involves 12 multiplications and 1 addition. Moreover, the value of the cos() or sin() function can be realized by a look-up table, and its computations can be ignored. Then, calculating the differential functions costs 18∗2 multiplications and 2∗2 additions. Calculating the peak error functions costs 5∗2 multiplications and 1∗2 additions. Hereafter, (12) costs 5 multiplications and 2 additions. Calculating the feedback function costs 1 addition. In addition, calculating (13) costs 19 multiplications and 3 additions.Above all, one complex multiplication equals two multiplications [4]. So, CP,M=Lk(51It+19)/2 and CP,A=Lk(9It+3)/2, where It represents the number of iterations and Lk represents the number of segmentations. Besides, Table 1 shows that the computational burden of the Schmidt method is also dependent on the number of vectors.Since the computation comparison requires many simulation parameters to be set, the simulation will be conducted in Section 5. ### 4.3. Detection Performance Since the signal is detected in the first step of the proposed method, this section discusses the detection probability of the first step of the proposed method. The theoretical simulation will be conducted in Section 5.1.The detection variable Sdk0,α02 obeys the chi-square distribution. When the signal is absent or a wrong frequency bin is searched, Jd=Sdk0,α02 obeys the central chi-square distribution with the variance σw2, and the probability density function can be written as follows:(20)p0Jd=12σw2exp−Jd2σw2,where p0 represents the probability density function when a wrong bin is searched.
When the right frequency unit is detected, Jd=Sdk0,α02 obeys the noncentral chi-square distribution with the variance σw2, and the probability density function can be written as follows:(21)p1Jd=12σw2exp−Jd+a22σw2I0a2Jdσw4,where p1 represents the probability density function when the right frequency bin is detected. Under an incorrect frequency hypothesis H0, the false alarm probability Pfa can be written as:(22)Pfa=Py≥γD|H0=1−∫0γDp0ydyTfTα−1≈1−1−exp−γD2σw2TfTα−1,where γD represents the detection threshold. Under the correct frequency hypothesis H1, the detection probability PD can be written as follows:(23)PD=Px≥γD|H1=∫γD+∞p1x∫0xp0ydyTfTα−1dx,where Tf represents the number of initial frequency search bins in the first step of the proposed method and Tα represents the number of chirping rate search bins. When the configured Pfa is small, ∫0Jdp0JddJdTfTα−1≈1, and the detection probability PD can be simplified into(24)PD=PJd≥γD/H1≈Qaσw,γDσw,where a equals Sdk0,α0 in the absence of noise. Based on the definition of miss detection probability [18], the miss probability of the proposed method can be written as:(25)PM=∫0γDp1x∫0γDp0ydyTfTα−1dx.When the set Pfa is small, ∫0γDp0JddJdTfTα−1≈1, PD+PM=1. Above all, the detection probability of coarse MF estimation of the proposed method can be obtained. When the signal is detected, the second step of the proposed method for fine MF estimation can be performed.
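Equation (22) can be inverted to set the threshold γD from a prescribed false-alarm rate over the TfTα−1 noise-only cells; PD then follows from the Marcum Q-function in (24). A minimal sketch of the threshold setting, with a hypothetical search-grid size (Tf, Tα here are illustrative values, not the paper's):

```python
import math

def detection_threshold(pfa, sigma_w2, n_cells):
    """Invert (22): find gamma_D such that 1 - (1 - exp(-gamma_D/(2*sigma_w^2)))**n_cells = pfa."""
    return -2 * sigma_w2 * math.log(1 - (1 - pfa) ** (1 / n_cells))

def false_alarm_prob(gamma_d, sigma_w2, n_cells):
    """Forward evaluation of (22) over n_cells = Tf*T_alpha - 1 noise-only cells."""
    return 1 - (1 - math.exp(-gamma_d / (2 * sigma_w2))) ** n_cells

# hypothetical grid: Tf = 250 initial-frequency bins, T_alpha = 40 chirping-rate bins
n_cells = 250 * 40 - 1
gamma_d = detection_threshold(2e-10, sigma_w2=1.0, n_cells=n_cells)  # Pfa value from Table 3
```

Raising the number of search cells raises the threshold needed for the same overall Pfa, which is the mechanism behind the detection comparison in Section 5.1.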
## 5. Simulation Results In this section, BASIC [11] and FRFT [13] are chosen as the benchmark for the 1st step of the proposed method, and Schmidt [5] is chosen as the benchmark for the 2nd step. The simulation parameters are listed in Table 3 where ⌈η⌉ represents the smallest integer that is larger than η.Table 3 Simulation parameters.
| Parameter | Value |
| --- | --- |
| Initial frequency range | (−250, 250) Hz |
| Chirping rate range | (−500, 500) Hz/s |
| Initial frequency search step Δd | 1/2T0=1/2NTs |
| Sampling time Ts | 0.001 s |
| Factor γ | 0.5 |
| Number of iterations It | 10 |
| Number of segmentations Lk | ⌈4N−1Tsω¯max/π⌉ |
| Monte Carlo simulations | 5000 |
| Max mean frequency ω¯max | 2πΔd+2πδdN−1/2Ts |
| False alarm probability Pfa | 2×10−10 |

### 5.1. Coarse Frequency Detection Performance Comparison Although the complex multiplications of the proposed method for coarse MF estimation are larger than those of BASIC in Figure 3, when the SNR of the postcorrelation signal is larger than −10 dB, the detection probability of the proposed method is almost 100%, which is higher than those of FRFT and BASIC in Figure 4. This is because FRFT has a search bin α and BASIC adopts a differential process, which may degrade the correlation peak and lead to a lower detection probability.Figure 3 Complex multiplications comparison for coarse MF estimation methods.Figure 4 Detection probabilities comparison when the integration time is 200 ms, the chirping rate is 200 Hz/s, and the initial frequency is 100 Hz. ### 5.2. Fine MF Accuracy and Complexity Comparison Based on the simulation above, the DCFT method, two-step method, and Schmidt method are adopted for the fine MF search. In Figure 5, the complex multiplications of Schmidt vary greatly with the change of MF search step and postcorrelation signal length.Figure 5 Complex multiplications comparison for fine MF estimation methods.In Figure 6, even though an MF search step of 1 rad/s is adopted for MF estimation, the two-step method based on MLEP gains higher precision than the MF search based on Schmidt.Figure 6 MF square error comparison. The SNR of the postcorrelation signal is 5 dB. The MF search step is 1 rad/s. The integration time is 200 ms in coarse MF estimation. ## 6. Conclusion To improve frequency accuracy in high-dynamic acquisition, we propose a two-step frequency estimation method. The proposed method combines a coarse frequency estimation method based on DCFT and MLEP for fine frequency estimation. In the 1st step, the search step of initial frequency and chirping rate is configured based on Taylor expansion, and coarse MF is obtained. In the 2nd step, due to low-frequency error, fine MF is estimated by MLEP.
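Reading (5) as ω¯ ≤ 2√(6(1−γ))/(NTs), the first-step search configuration of Table 3 can be computed directly. The sketch below uses N = 200 (a 200 ms integration at Ts = 1 ms) and γ = 0.5 as illustrative values; the ω¯max argument passed to the segmentation count is likewise hypothetical.

```python
import math

def coarse_steps(N, Ts, gamma):
    """First-step search configuration, reading (5) as 2*sqrt(6*(1-gamma))/(N*Ts)."""
    mf_step = 2 * math.sqrt(6 * (1 - gamma)) / (N * Ts)  # MF search step, rad/s
    f0_step = 1 / (2 * N * Ts)                           # initial-frequency step (Table 3), Hz
    return mf_step, f0_step

def n_segments(N, Ts, w_max):
    """Number of segmentations Lk between singular points of (10), per Table 3."""
    return math.ceil(4 * (N - 1) * Ts * w_max / math.pi)

N, Ts, gamma = 200, 0.001, 0.5   # Table 3 values
mf_step, f0_step = coarse_steps(N, Ts, gamma)
```

With γ = 0.5 the coarse MF step is 2√3/(NTs) ≈ 17.3 rad/s, and the residual MF left to the second step lies within half a bin of zero, which is the range assumed in Section 3.2.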
Although Figure 3 shows that DCFT costs more computation than BASIC, Figure 4 shows that it improves the detection probability. Moreover, the proposed MLEP obtains higher mean frequency accuracy and lower complex multiplications compared with the conventional Schmidt method. Furthermore, in practice, the proposed two-step method can provide a theoretical basis for open-loop frequency tracking. --- *Source: 1017206-2022-07-07.xml*
2022
# Nutritional and Functional Assessment of Hospitalized Elderly: Impact of Sociodemographic Variables

**Authors:** Emam M. M. Esmayel; Mohsen M. Eldarawy; Mohamed M. M. Hassan; Hassan Mahmoud Hassanin; Walid M. Reda Ashour; Wael Mahmoud

**Journal:** Journal of Aging Research (2013)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2013/101725

---

## Abstract

Background. This work was conducted in order to assess the nutritional and functional status in hospitalized elderly and to study the associations between them and sociodemographic variables. Methods. 200 elderly patients (>65 years old) admitted to Internal Medicine and Neurology Departments in nonemergency conditions were included. Comprehensive geriatric assessments, including nutritional and functional assessments, were done according to the nutritional checklist and Barthel index, respectively. Information was gathered from the patients, from the ward nurse responsible for the patient, and from family members, who were interviewed. Results. According to the nutritional checklist, 56% of participants were at high risk, 18% were at moderate risk of malnutrition, and 26% had good nutrition. There was a high nutritional risk in patients with low income and good nutrition in patients with moderate income. Also, there was a high nutritional risk in rural residents (61.9%) in comparison with urban residents (25%). Barthel index score was significantly lower in those at high risk of malnutrition compared to those at moderate risk and those with good nutrition. Conclusions. Hospitalized elderly are exposed to malnutrition, and malnourished hospitalized patients are candidates for functional impairment. Significant associations are noticed between both nutritional and functional status and specific sociodemographic variables.

---

## Body

## 1. Introduction

In recent years, there has been a sharp increase in the number of older persons worldwide [1].
It is estimated that almost half of the adults who are hospitalized are 65 years of age or older, although those older than 65 years represent only 12.5 percent of the population [2]. Aging is associated with various physiological changes and needs, which make elderly people vulnerable to malnutrition [3].Malnutrition is a major geriatric problem associated with poor health status and high mortality, and the impact of a patient’s nutritional condition on the clinical outcome has been widely recognized [4]. Application of nutritional support based on nutritional screening results significantly reduced the incidence of complications and the length of hospital stay [5]. The prevalence of malnutrition varies considerably depending on the population studied and the criteria used for the diagnosis [3].The nutritional screening checklist (NCL) is the most frequently used nutritional screening tool for community-dwelling older adults. It is intended to prevent impairment by identifying and treating nutritional problems before they become a detriment to the lives of older adults [6]. Nutrition and physical activity impact functional changes through changes in body composition [7].Tanimoto et al. recently reported that sarcopenia, defined by muscle mass, muscle strength, and physical performance, was associated with functional decline over a 2-year period in elderly Japanese [8]. Also, Andre et al. found that the percentage of elderly with dependence was significantly higher in the malnourished (87.6%) than in well-nourished elderly (50.9%) [3].Dependency interferes with the health and quality of life, not only for the elderly, but also for relatives and healthcare providers [9]. Assessment of functional capacity is a key element in geriatric health, as it can help in identifying what services or programs are needed [10]. 
The Barthel self-care index (BSI) is proposed as the standard index for clinical and research purposes [11], and it has proved to be a good predictor of in-hospital mortality [12]. Matzen et al. [13] recently concluded that it is a strong independent predictor of survival in older patients admitted to an acute geriatric unit, and they also suggested that it may have a potential role in decision making for the clinical management of frail geriatric inpatients. It is also found useful for evaluating the functional ability of elderly patients being treated in outpatient care [14].However, less attention is given to the underlying risk of functional decline and the vulnerability to hospital-associated complications [2]. Also, information about the nutritional and functional status and their correlation with sociodemographic variables among hospitalized elderly in Egypt is scarce; the sociodemographic data help to identify which group is at increased risk of nutritional and functional instability, and hence which group deserves more attention. This study was conducted in order to clarify the impact of sociodemographic factors on the nutritional and functional status of the elderly, in addition to the interrelationship between both. ## 2. Methods This study has been carried out on 200 elderly patients (>65 years old) admitted to Internal Medicine and Neurology Departments, Zagazig University Hospitals, which are educational hospitals serving Sharkia governorate, during the period between October 2009 and October 2010. The admission was due to gastrointestinal troubles in 88 patients, cardiac troubles in 28, chest troubles in 40, neurological in 28, or other causes in 16 patients.According to age, they were classified into young old (65–74.9 years), old old (75–84.9 years), and oldest old (≥85 years). Patients who required treatment in specialized units, such as the intensive care units, were excluded.
All participants gave informed consent, and the study was approved by the local ethics committee.Information was gathered on admission by the treating physician from the patients, from the ward nurse responsible for the patient, and from family members, who were interviewed. All the patients were subjected to full history taking and clinical examination.Nutritional assessment was done by physical examination, searching for clinical signs of nutritional deficiencies. We used the NCL for nutritional screening, which included 10 items with a total score of 21 points. A score from 0 to 2 was considered as good nutrition, 3 to 5 as moderate nutritional risk, and 6 or more as high nutritional risk [15]. The nutritional checklist includes illness and tooth or mouth problems affecting feeding, number of meals per day, and types of foods and drinks. The ability to eat alone, to cook, shop, or feed oneself, and weight gain or loss are also included. One study using the NCL found that, when compared to nutritional assessment criteria (anthropometric, dietary, and laboratory data), the NCL identified 83% of the population as being at high risk compared to 74% using the nutritional assessment criteria [16]. It has been suggested that the NCL is more appropriately used in observational epidemiological studies than for population screening [17].Functional assessment was done using the Barthel self-care index of activities of daily living [18]. The index included bowels, bladder, grooming, toilet use, feeding, transfer, mobility, dressing, stairs, and bathing. Total possible scores range from 0 to 20, with lower scores indicating increased disability. Sociodemographic information and the nutritional checklist were collected by direct interviews, while the Barthel index was collected using chart reading. ### 2.1. Statistical Analysis Data were checked, entered, and analyzed using SPSS version 15 for data processing and statistics.
Continuous variables were tested for normal distribution, expressed as mean ± standard deviation, and compared by Student's t test. Categorical variables were compared with the chi-square test. For all analyses, a P value <0.05 was considered statistically significant.

## 3. Results

Demographic data of the studied groups are presented in Table 1. A total of 200 elderly patients were included in the study. The large majority of patients were illiterate (84%) and none of them were on regular exercise. As for marital state, 56% were widowed and 40% married, with only 4% having no children. 56% had low income, with only 2% lacking means of transportation (Table 2).

Table 1 Demographic data of patients.

| Characteristic | Category | N | % |
| --- | --- | --- | --- |
| Age | 65<75 | 164 | 82 |
| | 75<85 | 28 | 14 |
| | ≥85 | 8 | 4 |
| Gender | Male | 112 | 56 |
| | Female | 88 | 44 |
| Residence | Rural | 168 | 84 |
| | Urban | 32 | 16 |

Table 2 Sociodemographic variables in studied patients.

| Variable | Category | N | % |
| --- | --- | --- | --- |
| Education | Illiterate | 168 | 84 |
| | Educated | 32 | 16 |
| Vocation | Retired | 64 | 32 |
| | Farmer | 52 | 26 |
| | Housewife | 72 | 36 |
| | Craftsman | 12 | 6 |
| Exercise | Yes | 0 | 0 |
| | No | 200 | 100 |
| Sleep | Satisfied | 84 | 42 |
| | Not satisfied | 116 | 58 |
| Sexual activity | No | 156 | 78 |
| | Satisfied | 8 | 4 |
| | Not satisfied | 36 | 16 |
| Substance abuse | Yes | 56 | 28 |
| | No | 144 | 72 |
| Marital status | Yes | 80 | 40 |
| | No | 4 | 2 |
| | Widowed | 112 | 56 |
| | Divorced | 4 | 2 |
| Children | Yes | 192 | 96 |
| | No | 8 | 4 |
| Presence of close friends | Yes | 136 | 68 |
| | No | 64 | 32 |
| Income | Low | 112 | 56 |
| | Moderate | 88 | 44 |
| | High | 0 | 0 |
| Transportation | Available | 196 | 98 |
| | Not available | 4 | 2 |

According to the NCL, 56% of the participants were at high risk and 26% had good nutrition (Table 3). Nutritional checklist score was worst in age group ≥85 years, while age has no significant effect on BSI (Table 4).
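As a minimal sketch of the chi-square comparisons described in Section 2.1, the test can be computed directly on a cross-tabulation; the residence-by-NCL counts reported in Table 5 are used here for illustration only (the statistic from these pooled counts need not match the value produced by the original SPSS tabulation).

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table (cf. Section 2.1)."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(len(table)):
        for j in range(len(table[0])):
            expected = row[i] * col[j] / n   # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# residence (rural/urban) by NCL category (good / moderate risk / high risk), Table 5 counts
obs = [[40, 24, 104],
       [12, 12, 8]]
stat = chi_square(obs)
# for a 2x3 table there are 2 degrees of freedom; the 5% critical value is 5.99,
# so a statistic above that indicates a significant residence-nutrition association
```

The same routine applies to any of the categorical cross-tabulations in Tables 4 to 8.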
There was a high nutritional risk in rural residents (61.9%) compared with urban residents (25%); the mean BSI was higher in patients from rural areas than in those from urban areas, with no significant difference (Table 5).

**Table 3.** Nutritional and functional assessment of the studied patients.

| Nutritional assessment (NCL) | N | % |
|---|---|---|
| Good nutrition | 52 | 26 |
| Moderate risk of malnutrition | 36 | 18 |
| High risk of malnutrition | 112 | 56 |

Functional assessment (BSI): range 8–20; mean ± SD 16 ± 4.1.

**Table 4.** Association of the nutritional checklist (NCL) and the Barthel self-care index (BSI) with age.

| Age | Good nutrition, N (%) | Moderate risk, N (%) | High risk, N (%) | BSI range | BSI mean ± SD |
|---|---|---|---|---|---|
| Young old (N=164) | 48 (29.3) | 36 (22.0) | 80 (48.8) | 8–20 | 16.5 ± 3.84 |
| Old old (N=28) | 4 (14.3) | 0 (0.0) | 24 (85.7) | 8–20 | 13.4 ± 5.2 |
| Oldest old (N=8) | 0 (0) | 0 (0) | 8 (100) | 15–18 | 16.5 ± 2.1 |

NCL: χ² = 10.4, P = 0.001; BSI: 3.65, P = 0.018.

**Table 5.** Association of NCL and BSI with residence.

| Residence | Good nutrition, N (%) | Moderate risk, N (%) | High risk, N (%) | BSI range | BSI mean ± SD |
|---|---|---|---|---|---|
| Rural (N=168) | 40 (23.8) | 24 (14.3) | 104 (61.9) | 8–20 | 15.6 ± 4.6 |
| Urban (N=32) | 12 (37.5) | 12 (37.5) | 8 (25) | 9–20 | 7.8 ± 3.67 |

NCL: χ² = 8.26, P = 0.016; BSI: 1.81, P = 0.07.

High nutritional risk was found in 52.4% of illiterate patients and 75% of educated patients, while there was no association between BSI and education (P=0.56) (Table 6). No significant association was found between BSI and gender (P=0.22) or between NCL and gender (P=0.16) (Table 7).

**Table 6.** Association of NCL and BSI with education.

| Education | Good nutrition, N (%) | Moderate risk, N (%) | High risk, N (%) | BSI range | BSI mean ± SD |
|---|---|---|---|---|---|
| Illiterate (N=168) | 52 (31) | 28 (16.7) | 88 (52.4) | 8–20 | 15.97 ± 3.89 |
| Educated (N=32) | 0 (0) | 8 (25) | 24 (75) | 8–20 | 16.6 ± 4.9 |

NCL: χ² = 6.7, P = 0.035; BSI: 0.85, P = 0.56.

**Table 7.** Association of NCL and BSI with gender.

| Gender | Good nutrition, N (%) | Moderate risk, N (%) | High risk, N (%) | BSI range | BSI mean ± SD |
|---|---|---|---|---|---|
| Male (N=112) | 24 (21.4) | 16 (14.3) | 72 (64.3) | 8–20 | 15.4 ± 4.4 |
| Female (N=88) | 28 (31.8) | 20 (22.7) | 40 (45.5) | 9–20 | 16.9 ± 3.4 |

NCL: χ² = 3.56, P = 0.16; BSI: 4.35, P = 0.22.

There was a significant association between NCL and income (P=0.017): patients with low income showed high nutritional risk, whereas patients with moderate income showed good nutrition. In contrast, there was no association between BSI and income (P=0.47) (Table 8). A significant association between NCL and BSI (P<0.001) indicates that nutrition and functional ability go together (Table 9).

**Table 8.** Association of NCL and BSI with financial security (income).

| Income | Good nutrition, N (%) | Moderate risk, N (%) | High risk, N (%) | BSI range | BSI mean ± SD |
|---|---|---|---|---|---|
| Low (N=112) | 20 (17.9) | 24 (21.4) | 68 (60.7) | 8–20 | 15.8 ± 3.9 |
| Moderate (N=88) | 36 (40.9) | 8 (9.1) | 44 (50) | 8–20 | 16.4 ± 4.2 |

NCL: χ² = 11.97, P = 0.017; BSI: 0.71, P = 0.47.

**Table 9.** Association of the Barthel self-care index (BSI) with the nutritional checklist (NCL).

| BSI | Good nutrition | Moderate risk | High risk | F | P |
|---|---|---|---|---|---|
| Mean ± SD | 18.4 ± 1.86 | 18 ± 2.4 | 14.5 ± 4.3 | 13.4 | <0.001 |
| Range | 15–20 | 13–20 | 8–20 | | |

## 4. Discussion

Older patients tend to have multiple organic, psychological, and social problems [19]. Their functional and physiological capacities are often diminished, and the adverse effects of drugs are more pronounced. The concept of Comprehensive Geriatric Assessment (CGA) has evolved because of the many problems of elderly subjects [20]. Our study was concerned with patients aged above 65 years admitted to the Internal Medicine department. The majority of patients were young old (82%), while the oldest old represented 4% of those included in the study.

As regards the assessment of nutrition, a considerable number of studies have examined the nutritional status of institutionalized elderly people and reported prevalence figures for malnutrition and nutritional problems [21].
According to the American Dietetic Association (ADA), the nutrient requirements of elderly people are not fully understood, although it is known that the physiological and functional changes that occur with aging can result in changes in nutrient needs [22]. The American Academy of Family Physicians and the National Council on Aging in the United States previously formed the Nutrition Screening Initiative to develop strategies for detecting nutritional risk among older people. One of these strategies was the development of a ten-question checklist, the NCL [23]. This checklist, which was designed to be self-administered, can also be administered by a healthcare professional.

According to the NCL, our results showed that about 56% of patients had high nutritional risk. A study conducted in Canada indicated that 37–62% of the elderly were at risk of poor nutrition [24], and the percentage was 50.3% in another study in India [25]. A recent study in China [4] found that the overall prevalence of undernutrition and nutritional risk was 17.8% and 41.5%, respectively, using the NCL. Another recent study in Makkah found that, among 102 recently hospitalized elderly assessed with the mini nutritional assessment (MNA) tool, 22.6% were classified as malnourished, 57.8% were at risk of malnutrition, and 19.6% were well nourished [1]. Also, another study found that 10.2% of elderly individuals were malnourished and 39.9% were at risk of malnutrition according to the MNA screening tool [5]. Differences in the prevalence rates of malnutrition among these studies may be due to differences in the selection criteria for the elderly, in the assessment tools, and in sociodemographic variables.

In this study, there was an association between NCL and age group. We found that 100% of the oldest old and 85.7% of the old old had high nutritional risk, whereas the percentage in the young old was 48.8%. Our results agree with those obtained by Fang et al.
[4], who found that the prevalence of nutritional risk was significantly higher in patients >70 years of age than in patients <70 years (64% versus 32%, P<0.001). Yap et al. found that high nutritional risk was more common in older patients but with no significant association with age [26].

Also in this study, we found that high nutritional risk was present in 52.4% of illiterate patients and about 75% of educated patients; however, we cannot rely on this finding because of the large difference in sample size between illiterate and educated patients (84% versus 16%, resp.). The previous study [26] found no significant correlation between NCL and education, with high nutritional risk in 26.2% and 33.6% of educated versus illiterate patients, respectively. Nutritional risk was increased in males (64.3% versus 45.5% in females, P=0.16), and a similar pattern was reported by MacLellan and van Til [24]. In contrast, Rambousková et al. [27] concluded that institutionalized women should be considered a nutritionally vulnerable population group; the reason for this difference may be the higher average age of women versus men (86.1 ± 6.15 versus 81.5 ± 7.97 years) in their study. Meanwhile, Fang et al. [4] found no gender difference in the prevalence of nutritional risk. We found a high nutritional risk in rural residents (61.9%) compared with urban residents (25%); this may be due to higher income and better nutrition knowledge in urban areas.

High nutritional risk was found in 60.7% of patients with low income, while the figure was about 50% in patients with moderate income. A study in Nigeria concluded that males had significantly higher income and higher socioeconomic status scores and were also less vulnerable to malnutrition than females (P<0.05) [28].
Poverty is a nonphysiological cause of undernutrition in older people [29].

As regards functional assessment, the Barthel self-care index of activities of daily living, first developed in 1965 [30] and later modified by Granger et al. [31], measures functional disability by quantifying patient performance in 10 activities of daily life. The Barthel index has been reported to have excellent reliability and validity and adequate responsiveness to change in measuring neurologic physical disability. Hobart et al. compared the psychometric properties of the Barthel index with those of newer and lengthier scales, the Functional Independence Measure (FIM) and the Functional Independence Measure + Functional Assessment Measure (FIM+FAM), in patients undergoing rehabilitation. This study suggested that the newer and more extensive FIM and FIM+FAM rating scales offered few advantages over the more practical and economical Barthel index [32]. Similar results were observed in studies of patients with multiple sclerosis and stroke [33, 34].

In this study, we found a significant correlation between disability and age, with functional activity decreasing with advancing age. Hairi et al. showed that advancing age was significantly associated with functional limitation [35]. We did not find a significant correlation between disability and income; however, a study done on a random sample of persons 55–85 years of age, drawn from the population registers of 11 municipalities, found that low socioeconomic status was associated with physical disability [36]. Unless low socioeconomic status is associated with manual work, such elderly people may be more susceptible to physical inactivity and lack of exercise.

Gender had no effect on functional activity in our study. Bergamini et al. found that older Italian men and women showed a similar prevalence of functional limitation, 31% and 28%, respectively [37]. However, Hairi et al.
found that female gender was significantly associated with functional limitation; they suggested that disadvantages resulting from limited education may contribute to the greater physical disability and functional limitation burden experienced by older women in their study [35].

An increased incidence of disability was associated with malnutrition in this study. These results are in agreement with Oliveira et al. [38], who assessed the relationship between nutritional status and indicators of functional capacity among recently hospitalized elderly in a general hospital and showed that these indicators were significantly more deteriorated among the malnourished individuals. Using the mini nutritional assessment (MNA) and Barthel index in 123 elderly residents, Cereda et al. showed that poorer functional status was associated with poorer nutrition [39]. Also, Chevalier et al. [40], in a study designed to estimate the prevalence of malnutrition in frail elders undergoing rehabilitation and the association between their nutritional status and physical function, showed that there was an interrelationship between nutritional and functional status. It has already been shown that malnutrition compromises the functional status of individuals [40].

## 5. Conclusions

The results of this study suggest that hospitalized elderly are exposed to malnutrition, which emphasizes the importance of early identification of malnutrition among them. High nutritional risk was more common with older age, rural residence, education, and low income, while gender had no significant effect. Functional abilities were better at younger ages but had no significant correlation with other sociodemographic variables. Malnourished hospitalized patients are candidates for functional impairment. Significant associations were noticed between both nutritional and functional status and specific sociodemographic variables. These interrelationships require further study to elucidate.
It is also necessary to pay special attention to functional capacity when planning nutritional care for this vulnerable group.

*Limitations of This Study*. The patients included in this study were a convenience sample of patients admitted to one hospital, so patients from distant areas were not included. The large percentage of illiterate patients (84%) could have an impact on the study. A larger number of patients should be included in future studies.

---

*Source: 101725-2013-10-10.xml*
**Title:** Nutritional and Functional Assessment of Hospitalized Elderly: Impact of Sociodemographic Variables
**Authors:** Emam M. M. Esmayel; Mohsen M. Eldarawy; Mohamed M. M. Hassan; Hassan Mahmoud Hassanin; Walid M. Reda Ashour; Wael Mahmoud
**Journal:** Journal of Aging Research (2013)
**Category:** Medical & Health Sciences
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101725
--- ## Abstract Background. This work was constructed in order to assess the nutritional and functional status in hospitalized elderly and to study the associations between them and sociodemographic variables. Methods. 200 elderly patients (>65 years old) admitted to Internal Medicine and Neurology Departments in nonemergency conditions were included. Comprehensive geriatric assessments, including nutritional and functional assessments, were done according to nutritional checklist and Barthel index, respectively. Information was gathered from the patients, from the ward nurse responsible for the patient, and from family members who were reviewed. Results. According to the nutritional checklist, 56% of participants were at high risk, 18% were at moderate risk of malnutrition, and 26% had good nutrition. There was a high nutritional risk in patients with low income and good nutrition in patients with moderate income. Also, there was a high nutritional risk in rural residents (61.9%) in comparison with urban residents (25%). Barthel index score was significantly lower in those at high risk of malnutrition compared to those at moderate risk and those with good nutrition. Conclusions. Hospitalized elderly are exposed to malnutrition, and malnourished hospitalized patients are candidates for functional impairment. Significant associations are noticed between both nutritional and functional status and specific sociodemographic variables. --- ## Body ## 1. Introduction In recent years, there has been a sharp increase in the number of older persons worldwide [1]. It is estimated that almost half of the adults who are hospitalized are 65 years of age or older, although those older than 65 years represent only 12.5 percent of the population [2]. 
Aging is associated with various physiological changes and needs, which make elderly people vulnerable to malnutrition [3].Malnutrition is a major geriatric problem associated with poor health status and high mortality, and the impact of a patient’s nutritional condition on the clinical outcome has been widely recognized [4]. Application of nutritional support based on nutritional screening results significantly reduced the incidence of complications and the length of hospital stay [5]. The prevalence of malnutrition varies considerably depending on the population studied and the criteria used for the diagnosis [3].The nutritional screening checklist (NCL) is the most frequently used nutritional screening tool for community-dwelling older adults. It is intended to prevent impairment by identifying and treating nutritional problems before they become a detriment to the lives of older adults [6]. Nutrition and physical activity impact functional changes through changes in body composition [7].Tanimoto et al. recently reported that sarcopenia, defined by muscle mass, muscle strength, and physical performance, was associated with functional decline over a 2-year period in elderly Japanese [8]. Also, Andre et al. found that the percentage of elderly with dependence was significantly higher in the malnourished (87.6%) than in well-nourished elderly (50.9%) [3].Dependency interferes with the health and quality of life, not only for the elderly, but also for relatives and healthcare providers [9]. Assessment of functional capacity is a key element in geriatric health, as it can help in identifying what services or programs are needed [10]. The Barthel self-care index (BSI) is proposed as the standard index for clinical and research purposes [11], and it has proved to be a good predictor of in-hospital mortality [12]. Matzen et al. 
[13] recently concluded that it is a strong independent predictor of survival in older patients admitted to an acute geriatric unit, and they also suggested that it may have a potential role in decision making for the clinical management of frail geriatric inpatients. It is also found useful for evaluating the functional ability of elderly patients being treated in outpatient care [14].However, less attention is given to the underlying risk of functional decline and the vulnerability to hospital-associated complications [2]. Also, information about the nutritional and functional status and its correlation with sociodemographic variables in Egypt among hospitalized elderly are scarce; the sociodemographic data will focus a bit on which group is at increased risk of nutritional and functional instability, and hence which group deserves more attention. This study was conducted in order to clarify the impact of sociodemographic factors on nutritional and functional status of the elderly in, addition to the interrelationship between both. ## 2. Methods This study has been carried out on 200 elderly patients (>65 years old) admitted to Internal Medicine and Neurology Departments, Zagazig University Hospitals, which are educational hospitals serving Sharkia governorate, during the period between October 2009 and October 2010. The admission was due to gastrointestinal troubles in 88 patients, cardiac troubles in 28, chest troubles in 40, neurological in 28, or other causes in 16 patients.According to age, they were classified into young old (65–74.9 years), old old (75–84.9 years), and oldest old (≥85 years). Patients who required treatment in specialized units such as the intensive care units, were excluded. All participants gave informed consents and the study had approval from local ethical committee.Information was gathered on admission by the treating physician from the patients, from the ward nurse responsible for the patient, and from family members who reviewed. 
All the patients were subjected to full history taking and clinical examination.Nutritional assessment was done by physical examination by searching for clinical signs of nutritional deficiencies. We used the NCL for nutritional screening which included 10 items with a total score of 21 points. A score from 0 to 2 was considered as good nutrition, 3 to 5 as moderate nutritional risk, and 6 or more as high nutritional risk [15]. The nutritional checklist includes illness and tooth or mouth problems affecting feeding, number of meals per day, and types of foods and drinks. Also ability to eating alone, to cook, shop or feed oneself, and weight gain or loss, are included. One study using the NCL was carried out and the researchers found that, when compared to nutritional assessment criteria (anthropometric, dietary, and laboratory data), the NCL identified 83% of the population as being at high risk compared to 74% using the nutritional assessment criteria [16]. It has been suggested that the NCL is more appropriately used in observational epidemiological studies than for population screening [17].Functional assessment was done using the Barthel self-care index of activities of daily living [18]. The index included bowels, bladder, grooming, toilet use, feeding, transfer, mobility, dressing, stairs, and bathing. Total possible scores range from 0 to 20, with lower scores indicating increased disability. Sociodemographic information and nutritional checklist were collected by direct interviews, while Barthel index was collected using chart reading. ### 2.1. Statistical Analysis Data were checked, entered, and analyzed using SPSS version 15 for data processing and statistics. Continuous variables were tested for normal distribution and expressed as mean ± standard deviation and compared by Student’st test. Categorical variables were compared with the chi-square test. For all analyses, P value <0.05 was considered statistically significant. ## 2.1. 
Statistical Analysis Data were checked, entered, and analyzed using SPSS version 15 for data processing and statistics. Continuous variables were tested for normal distribution and expressed as mean ± standard deviation and compared by Student’st test. Categorical variables were compared with the chi-square test. For all analyses, P value <0.05 was considered statistically significant. ## 3. Results Demographic data of the studied groups are presented in Table1. A total of 200 elderly patients were included in the study. The large majority of patients were illiterate (84%) and none of them were on regular exercise. As for marital state, 56% were widowed and 40% married with only 4% having no children. 56% had low income with only 2% having means of transportation (Table 2).Table 1 Demographic data of patients. N % Age 65<75 164 82 75<85 28 14 ≥85 8 4 Gender Male 112 56 Female 88 44 Residence Rural 168 84 Urban 32 16Table 2 Sociodemographic variables in studied patients. Education Illiterate 168 84 Educated 32 16 Vocation Retired 64 32 Farmer 52 26 Housewife 72 36 Crafts man 12 6 Habits Exercise Yes 0 0 No 200 100 Sleep Satisfied 84 42 Not satisfied 116 58 Sexual activity No 156 78 Satisfied 8 4 Not satisfied 36 16 Substance abuse Yes 56 28 No 144 72 Marital status Yes 80 40 No 4 2 Widowed 112 56 Divorced 4 2 Children Yes 192 96 No 8 4 Presence of close friends Yes 136 68 No 64 32 Income Low 112 56 Moderate 88 44 High 0 0 Transportation Available 196 98 Not available 4 2According to NCL, 56% of the participants were at high risk and 26% had good nutrition (Table3). Nutritional checklist score was worst in age group ≥85 years, while age has no significant effect on BSI (Table 4). There was a high nutritional risk in rural residents (61.9%) in comparison with urban residents (25%); the mean of BSI was higher in patients from rural than that of urban areas with no significant difference (Table 5).Table 3 Nutritional and functional assessment of studied patients. 
Nutritional assessment (NCL) Good nutrition 52 patients 26% Moderate risk of malnutrition 36 patients 18% High risk of malnutrition 112 patients 56% Functional assessment (BSI) Range 8–20 Mean ± SD 16 ± 4.1Table 4 Association between nutritional checklist (NCL) and Barthel self-care index (BSI) with age. NCL BSI Good nutrition Moderate risk High risk Range Mean ± SD No. % No. % No. % Age Young old (N=164) 48 29.3 36 22.0 80 48.8 8–20 16.5 ± 3.84 Old old (N=28) 4 14.3 0 0.0 24 85.7 8–20 13.4 ± 5.2 Oldest old (N=8) 0 0 0 0 8 100 15–18 16.5 ± 2.1 X 2 10.4 3.65 P 0.001 0.018Table 5 Association between nutritional checklist (NCL) and Barthel self-care index (BSI) with residence. NCL BSI Good nutrition Moderate risk High risk Range Mean ± SD N % N % N % Residence RuralN=168 40 23.8 24 14.3 104 61.9 8–20 15.6 ± 4.6 UrbanN=32 12 37.5 12 37.5 8 25 9–20 7.8 ± 3.67 X 2 8.26 1.81 P 0.016 0.07High nutritional risk was found in 52.4% of illiterate patients and 75% of educated patients, while there was no association between BSI and education (P=0.56) (Table 6). No significant association was found between BSI and gender (P=0.22) and also between NCL and gender (P=0.16) (Table 7).Table 6 Association between nutritional checklist (NCL) and Barthel self-care index (BSI) with education. NCL BSI Good nutrition Moderate risk High risk Range Mean ± SD N % N % N % Education IlliterateN=168 52 31 28 16.7 88 52.4 8–20 15.97 ± 3.89 EducatedN=32 0 0 8 25 24 75 8–20 16.6 ± 4.9 X 2 6.7 0.85 P 0.035 0.56Table 7 Association between nutritional checklist (NCL) and Barthel self-care index (BSI) with gender. NCL BSI Good nutrition Moderate risk High risk Range Mean ± SD N % N % N % Gender MaleN=112 24 21.4 16 14.3 72 64.3 8–20 15 . 
4 ± 4.4 FemaleN=88 28 31.8 20 22.7 40 45.5 9–20 16.9 ± 3.4 X 2 3.56 4.35 P 0.16 0.22There was a significant association between NCL and income (P=0.017), as there was a high nutritional risk in patients with low income and good nutrition in patients with moderate income, in contrast there was no association between BSI and income (P=0.47) (Table 8). A significant association between NCL and BSI (<0.001) indicates that both nutrition and functional ability go together (Table 9).Table 8 Association between nutritional checklist (NCL) and Barthel self-care index (BSI) with financial security (income). NCL BSI Good nutrition Moderate risk High risk Range Mean ± SD N % N % N % Financial security (income) LowN=112 20 17.9 24 21.4 68 60.7 8–20 15.8 ± 3.9 ModerateN=88 36 40.9 8 9.1 44 50 8–20 16.4 ± 4.2 X 2 11.97 0.71 P 0.017 0.47Table 9 Association between Barthel self-care index (BSI) with nutritional checklist (NCL). NCL F P Good nutrition Moderate risk High risk BSI X ± SD 18.4 ± 1.86 18 ± 2.4 14.5 ± 4.3 13.4 <0.001 Range 15–20 13–20 8–20 ## 4. Discussion Older patients tend to have multiple organic, psychological, and social problems [19]. Their functional and physiological capacities are often diminished and the adverse effects of drugs are more pronounced. The concept of Comprehensive Geriatric Assessment (CGA) has evolved because of the many problems of elderly subjects [20]. Our study was concerned with patients admitted to Internal Medicine department with age above 65 years. Majority of age group was young old and represented 88% while the oldest old represented 4% of patients included in the study.As regards assessment of nutrition, considerable number of studies have examined the nutritional status of institutionalized elderly people and reported prevalence figures for malnutrition and nutritional problems [21]. 
According to the American Dietetic Association (ADA), the nutrient requirements of elderly peopleare not fully understood, although it is known that the physiological and functional changes that occur with agingcan result in changes in nutrient needs [22].The American Academy of Family Physicians and the National Council of Aging in the United States previously formed the Nutrition Screening Initiative for the development of strategies to detect nutritional risks among older people. One of the strategies included the development of a ten-question checklist, the NCL [23]. This checklist, which was designed to be self-administered, could also be administered by a healthcare professional.According to NCL, our results showed that about 56% of patients had high nutritional risk. A study conducted in Canada indicated that 37–62% of elderly were at risk of the poor nutrition [24], and the percent was 50.3% in another study in India [25]. A recent study in China [4] found that the overall prevalence of under nutrition and nutritional risk was 17.8% and 41.5%, respectively, using the NCL. Another recent study in Makkah found that among 102 recently hospitalized elderly and according to the mini nutritional assessment (MNA) tool, 22.6% were classified as malnourished, 57.8% were at risk of malnutrition, and 19.6% were well nourished [1]. Also, another study found that 10.2% of elderly individuals were malnourished and 39.9% were at risk of malnutrition according to the MNA screening tool [5]. Differences in prevalence rates of malnutrition among the different studies may be due to difference in selection criteria of elderly, different assessment tools, and differences in sociodemographic variables.In this study, there was an association between NCL and age group. We found that 100% of oldest old had high nutritional risk and 85.7% of old old have high nutritional risk, but the percent in young old was 48.8%. Our results agree with that obtained by Fang et al. 
[4] who found that the prevalence of nutritional risk was significantly higher in patients >70 years of age than in patients <70 years (64% versus 32%, P<0.001). Yap et al. found that high nutritional risk was more in older patients but with no significant association with age [26].Also in this study, we found that high nutritional risk represented 52.4% in illiterate patients and about 75% in educated patients; however, we cannot depend on this finding due to the large difference in sample size between illiterate and educated patients (84% versus 16%, resp.). However, the previous study [26] found no significant correlation between NCL and education, and the percent of patients with high nutritional risk was 26.2% and 33.6% in educated versus illiterate patients, respectively. Increased nutritional risk in males was 64.3% versus 45.5% in females (P=0.16), and this was confirmed by MacLellan and van Til [24]. Contrarily, Rambousková et al. [27] concluded that institutionalized women should be considered a nutritionally vulnerable population group; the reason for this difference may be the higher average age of women versus men (86.1±6.15 versus 81.5±7.97 years) in their study. Meanwhile, Fang et al. [4] found no gender difference in the prevalence of nutritional risk. We found a high nutritional risk in rural residents (61.9%) in comparison with urban residents (25%); this may be due to a higher income and a better nutrition knowledge in urban areas.High nutritional risk in patients with low income was 60.7%, while it was about 50% in patients with moderate income. A study in Nigeria concluded that males had significantly higher income, higher socioeconomic status scores and were also less vulnerable to malnutrition than females (P<0.05) [28]. 
Poverty is a nonphysiological cause of under nutrition in older people [29].As regards functional assessment, the Barthel self care index of activities of daily living, first developed in 1965 [30] and later modified by Granger et al. [31], measures functional disability by quantifying patient performance in 10 activities of daily life. The Barthel Index has been reported to have excellent reliability and validity and adequate responsiveness to change, in measuring neurologic physical disability. Hobart et al. compared psychometric properties of the Barthel index with newer and lengthier scales, the Functional Independence Measure (FIM) and the Functional Independence Measure + Functional Assessment Measure (FIM+FAM), in patients undergoing rehabilitation. This study suggested that the newer and more extensive rating scales of FIM and FIM+FAM offered few advantages over the more practical and economical Barthel index [32]. Similar results were observed in studies of patients with multiple sclerosis and stroke [33, 34].In this study, we found a significant correlation between disability and age, where functional activity decreased with advancing age. Hairi et al. showed that advancing age was significantly associated with functional limitation [35]. We did not find significant correlation between disability and income, however, a study done ona random sample of persons, 55–85 years of age, drawn from the population registers of 11 municipalities found that low socioeconomic status had been associated with physical disability [36]. Unless low socioeconomic status is associated with handwork, the elderly will be more susceptible to physical inactivity and lack of exercise.Gender had no effect on functional activity in our study. Bergamini et al. found that older Italian men and women showed similar prevalence of functional limitation, 31% and 28%, respectively [37]. However, Hairi et al. 
found that female gender was significantly associated with functional limitation; they suggested that disadvantages resulting from limited education may contribute to the greater physical disability and functional limitation burden experienced by older women in their study [35].Increased incidence of disability was associated with malnutrition in this study. These results are in agreement with Oliveira et al. [38] who assessed the relationship between nutritional status and indicators of functional capacity among recently hospitalized elderly in a general hospital, and showed that these indicators were significantly more deteriorated among the malnourished individuals [38]. Using mini nutritional assessment (MNA) and Barthel index on 123 resident elderly, Cereda et al. showed that the poorer functional status was associated with low nutrition [39]. Also, Chevalier et al., [40] in a study designed to estimate the prevalence of malnutrition in frail elders undergoing rehabilitation and the association between their nutritional status and physical function, showed that there was an interrelationship between nutritional and functional status. It has already been shown that malnutrition compromises the functional status of the individuals [40]. ## 5. Conclusions Results in this study suggest that the hospitalized elderly are exposed to malnutrition, which emphasizes the importance of early identification of malnutrition among them. High nutritional risk was more with older age, rural areas, educated patients, and low income, while gender had no significant effect. Functional abilities were better with younger age but had no significant correlation with other sociodemographic variables. Malnourished hospitalized patients are candidates for functional impairment. Significant associations were noticed between both nutritional and functional status and specific sociodemographic variables. These interrelationships require further studies to elucidate. 
Also, it is necessary to pay special attention to functional capacity when planning nutritional care for this vulnerable group. Limitations of This Study. The patients included in this study were a convenience sample of patients admitted to one hospital, so patients from distant areas were not included. The large percentage of illiterate patients (84%) could have had an impact on the study. A larger number of patients should be included in future studies. --- *Source: 101725-2013-10-10.xml*
2013
# Fusion Surgery Required for Recurrent Pediatric Atlantoaxial Rotatory Fixation after Failure of Temporary Fixation with Instrumentation **Authors:** Yoshiyuki Matsuyama; Tetsuhiro Ishikawa; Ei Ozone; Masaaki Aramomi; Seiji Ohtori **Journal:** Case Reports in Orthopedics (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1017307 --- ## Abstract In cases of chronic irreducible and recurrent unstable atlantoaxial rotatory fixation (AARF), closed reduction and its maintenance are often unsuccessful, requiring surgical treatment. The purpose of the present report is to describe a rare case of pediatric AARF that required multiple treatments. A 6-year-old boy was diagnosed as having type 2 AARF. After conservative treatment, the patient was treated with temporary fixation surgery (C1-C2 Magerl) without a bone graft in consideration of motion preservation after screw removal. AARF recurred after the screw removal and required fusion surgery (Magerl–Brooks) with an iliac bone graft. Ultimately, bone union was achieved and the screws were removed 11 months after the surgery. We recommend surgeons be cautious when choosing temporary fixation surgery for AARF in small children. Further investigation is needed to determine the optimal time before screw removal. --- ## Body ## 1. Introduction In 1977, Fielding and Hawkins classified atlantoaxial rotatory fixation (AARF) into four types depending on the degree of anterior or posterior displacement of the atlas [1]. AARF causes torticollis and neck pain because of dislocation or subluxation of the atlantoaxial joint. It has been found mainly in children and may accompany mild trauma, upper respiratory infection, a damaged joint capsule and ligament, or hypermyotonia [2–4]. Anatomical features of the C1-C2 facet in children are less stable than those in adults because of a loose joint capsule, a wide joint range of motion, and a horizontal articular surface [1, 5, 6]. 
AARF sometimes results from dysfunction of the transverse ligament of the atlas caused by inflammation of the tonsils, the pharynx, or the upper airway [3, 7]. Most acute AARFs can be treated successfully by conservative therapy including medication, closed manipulation, or cervical traction followed by bracing [2, 4]. In chronic cases, closed reduction and its maintenance are often unsuccessful, and surgical treatment is required for patients with chronic irreducible and recurrent unstable AARF [2, 8, 9]. In this paper, we review the literature and report a case of AARF in a 6-year-old boy requiring fusion surgery after failed treatment with temporary fixation and motion preservation surgery. ## 2. Case Report A 6-year-old boy visited our department with torticollis and neck pain that had occurred after a small fight with his brother 8 days earlier. The patient had a history of a chronic sinus problem with nasal discharge. The white blood cell (WBC) count was 12,800, and the C-reactive protein (CRP) level was 0.5. Physical examination showed torticollis, with head tilting, neck rotation, and a characteristic “cock-robin” position (Figure 1(a)). We diagnosed AARF (Fielding type 2) by plain radiography (Figure 1(b)) and computed tomography (CT) (Figures 1(c) and 1(d)). 3D CT showed anterior subluxation of the right C1-C2 facet (Figures 1(c) and 1(d)), and no C2 facet deformity was apparent. His extremities appeared normal on neurological examination.Figure 1 Patient showing torticollis before treatment, with head tilting, neck rotation, and a characteristic “cock-robin” position (a). A simple cervical open-mouth-view radiograph showed lateral tilting of the cervical spine and a rotated atlas on the axis (b). Coronal (c), axial (d), and 3D (e) CT images showed lateral tilting of C1 and an anteriorly rotated atlas on the axis. Plain radiographs after closed reduction under general anesthesia (e, f). 
(a) (b) (c) (d) (e) (f)Three weeks after conservative treatment with a neck collar, AARF had not improved. We recommended in-hospital care and cervical traction, but the parents declined long-term admission because they had other small children and were busy with work. Therefore, closed reduction under general anesthesia was performed and AARF improved (Figures 1(e) and 1(f)). However, AARF recurred when the patient woke up and walked to the bathroom 3 hours after the closed reduction. We performed C1-C2 Magerl surgery without bone fusion as a temporary fixation (Figures 2(a) and 2(b)) because the patient was 6 years old and we expected preservation of C1-C2 motion after removal of the screws [10]. We chose the Magerl technique instead of C1 lateral mass screw-C2 pedicle screw and rod fixation because there was abnormal hypervascularity posterior to C1 on the preoperative 3D CT angiogram (Figure 2(e)). A plain radiograph and CT image after the surgery showed improvement of AARF, but the odontoid process appeared osteolytic (Figure 2(c)).Figure 2 Result of C1-C2 Magerl surgery without a bone graft (a, b). The odontoid process shows osteolysis after C1-C2 Magerl surgery (c). Type 2 AARF recurred 2 weeks after screw removal. The reconstructed CT shows osteolysis of the odontoid process (d). Preoperative 3D CT showed hypervascularity posterior to C1 (e). (a) (b) (c) (d) (e)Considering bone union between C1 and C2, we removed the screws 17 weeks after the surgery. However, the type 2 AARF recurred 2 weeks after the screw removal. Reconstructed CT images showed osteolysis of the odontoid process (Figure 2(d)). Just before the removal surgery, the WBC count was 10,800 and the CRP level was 0.0; inflammation of the transverse ligament and the influence of upper respiratory infection were suspected. 
Although we did not notice it initially, in retrospect osteolysis of the odontoid process was visible in the preoperative CT scan (Figure 1(c)). We performed posterior fixation surgery using the Magerl–Brooks method with an iliac bone graft 5 weeks after the recurrence (Figure 3) [10, 11]. Ultimately, bone union was achieved 11 months after the surgery, and the screws were removed because we were concerned that leaving them in place might cause a growth disorder of the cervical spine. On the most recent CT images, the odontoid process showed sclerosis and there were no signs of inflammation of the transverse ligament.Figure 3 Result of posterior fixation surgery using the Magerl–Brooks method with an iliac bone graft. (a) (b) ## 3. Discussion AARF causes torticollis and neck pain because of dislocation or subluxation of the atlantoaxial joint [2, 11, 12]. Fielding and Hawkins proposed four types of AARF. Type 1: unilateral facet subluxation with an intact transverse ligament; this is the most common type, and the dens acts as a pivot. Type 2: unilateral facet subluxation with an atlantodental interval (ADI) of 3–5 mm; this type is associated with transverse ligament injury, and the facet acts as a pivot. Type 3: bilateral anterior facet displacement of >5 mm; this type is rare, with a risk of neurologic deficit. Type 4: posterior displacement of the atlas, associated with dens deficiency [4]; this type is also rare, with a risk of neurologic deficit. Most acute AARFs can be treated successfully by conservative therapy including closed manipulation or cervical halter traction followed by a cervical orthosis [6, 7]. The pathophysiology of chronic and recurrent AARF remains unclear despite many previous studies of AARF [2–4, 8, 9]. Ishii et al. reported that nontraumatic AARF is associated with pharyngeal infection [7]. 
In the present case, we considered that the AARF resulted from trauma; however, the patient had a chronic sinus problem with nasal discharge, so laxity of the transverse ligament and odontoid destruction caused by chronic inflammation might have contributed to the delay and recurrence of AARF. With early diagnosis and reduction, most patients can be successfully treated and cured, but a delay in diagnosis correlates with recurrence and leads to chronic AARF [13]. When conservative treatment fails, open reduction and posterior fixation are necessary. Tauchi et al. successfully treated chronic AARF in children with posterior fusion, such as C1-C2 transarticular fixation and C1 lateral mass screw and C2 pedicle screw fixation [13]. Atlantoaxial arthrodesis is associated with several problems such as pseudarthrosis, long operation time, and loss of range of motion (ROM) at the atlantoaxial joint. Han et al. reported a case series of 13 patients with type 2 odontoid fractures treated with temporary pedicle screw fixation without bone fusion for motion preservation [14]. Ni et al. also reported posterior reduction and temporary fixation for odontoid fracture in 22 consecutive patients, with fracture healing obtained in 21 [15]. Furthermore, after removal of the instrumentation, the ROM of C1-C2 in rotation was restored, and the neck pain and stiffness were relieved [15]. With reference to these methods, to avoid loss of C1-C2 motion in a small child, we chose temporary fixation surgery and removed the screws 17 weeks after the surgery, before C1-C2 had fused. Our findings indicate that 17 weeks is too early to obtain stability. However, we could not find any reports regarding an appropriate duration of temporary fixation for small children. We recommend caution when choosing temporary fixation surgery. 
Further investigation is needed to clarify the time necessary to achieve stable reduction of AARF in small children. In summary, we experienced multiple recurrences of AARF in a small child. Temporary fixation with motion preservation did not achieve stability, but we were ultimately able to treat the condition with posterior fixation surgery using the Magerl–Brooks method. --- *Source: 1017307-2017-12-26.xml*
2017